Hi @<1655381985293504512:profile|LividGorilla28> , this is because your browser cannot show local files, since it runs in a sandboxed environment. Opening a new tab and navigating directly to the local file by pasting its path is the only way to view images that are saved locally.
The correct way is to store them on a storage solution such as S3/GCS/MinIO, which can serve the data to the browser
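As a minimal sketch of what that could look like (the bucket path and image file name are just placeholders for your own):
from clearml import Task, Logger
task = Task.init(project_name='<PROJECT_NAME>', task_name='<TASK_NAME>')
# Upload debug images to object storage instead of referencing local paths
logger = Logger.current_logger()
logger.set_default_upload_destination('s3://<BUCKET>/clearml')  # placeholder bucket
logger.report_image(title='sample', series='debug', iteration=0, local_path='my_image.png')  # placeholder file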
It looks like it is running, did the status of the experiment change?
Yep, although I'm quite sure you could build some logic on top of that to manage proper queueing
Can you provide a self-contained snippet that reproduces this behavior?
Try adding the following to your Task.init()
task = Task.init(project_name='<PROJECT_NAME>', task_name='<TASK_NAME>', output_uri=True)
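With output_uri=True the task's output models and artifacts are uploaded to the ClearML file server (or to whatever default_output_uri points to) rather than being registered as local file paths.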
Another option that would also solve the issue is editing your ~/clearml.conf and changing sdk.development.default_output_uri to point to your file server / hosting
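A rough sketch of that section in ~/clearml.conf (the URL is just a placeholder for your own file server or bucket):
sdk {
    development {
        # Placeholder - point this at your own file server or object storage
        default_output_uri: "http://<your-fileserver>:8081"
    }
}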
The communication is done via HTTPS so relevant ports should be open.
Did you try with a hotspot connection from your phone?
Hi @<1727497172041076736:profile|TightSheep99> , you can change it in the settings -> configuration section
So when you do torch.save() it doesn't save the model?
Looks like it's not running in docker mode 🙂
Otherwise you'd have the 'docker run' command at the start
TrickyRaccoon92 , Hi!
Yes, I believe this is the intended behavior: with automatic upload you can upload many artifacts during a single run, whereas with manual upload you create the artifact object yourself.
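For reference, a manual upload is a single explicit call, roughly like this (the artifact name and object are just examples):
from clearml import Task
task = Task.init(project_name='<PROJECT_NAME>', task_name='<TASK_NAME>')
# Manually register one artifact object under an explicit name
task.upload_artifact(name='results', artifact_object={'accuracy': 0.9})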
or /home/<USER_NAME>/clearml.conf
Hi @<1523701235335565312:profile|HugeArcticwolf77> , user roles & permissions are only available in the Scale & Enterprise versions
ResponsiveHedgehong88 , please look here:
https://clearml.slack.com/archives/CTK20V944/p1660142477652039
Is this what you're looking for?
Can you please elaborate further or add some information on how you've reached this situation?
Can you try deleting the cache folder? It should be somewhere around ~/.clearml
Which host configuration did you use in your last attempts?
@<1546303277010784256:profile|LivelyBadger26> , it is Nathan Belmore's thread just above yours in the community channel 🙂
TartSeagull57 , you said the problem was with automatic reporting. Can you give an example of how you solved the issue for yourself?
Can you also add a full log of the run that was showing the git pass in the startup print?
Hi @<1724960464275771392:profile|DepravedBee82> , you have the auto_connect_frameworks parameter in Task.init(); this way you can disable the automatic connection to PyTorch
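A minimal sketch of what that could look like, disabling only the PyTorch binding while keeping the other frameworks connected:
from clearml import Task
# Disable only the automatic PyTorch logging; other frameworks stay connected
task = Task.init(
    project_name='<PROJECT_NAME>',
    task_name='<TASK_NAME>',
    auto_connect_frameworks={'pytorch': False},
)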
I don't think there is an out-of-the-box method for this. You can extract everything using the API from one workspace and repopulate it in another workspace, also using the APIs.
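As a rough sketch of the export side (the project name is a placeholder, and you would still need to recreate the tasks in the other workspace yourself):
from clearml.backend_api.session.client import APIClient
client = APIClient()
# Fetch all tasks of a given project (placeholder name) together with their basic fields
projects = client.projects.get_all(name='<PROJECT_NAME>')
for project in projects:
    tasks = client.tasks.get_all(project=[project.id])
    for t in tasks:
        print(t.id, t.name, t.status)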
Hi @<1546303277010784256:profile|LivelyBadger26> , can you provide a snippet that reproduces this?
The project should have a system tag called 'hidden'. If you remove the tag via the API, that should solve the issue.
How was the project turned to hidden?
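A hedged sketch of removing the tag via the APIClient (the project ID is a placeholder, and you may want to double-check the current system_tags before overwriting them):
from clearml.backend_api.session.client import APIClient
client = APIClient()
project_id = '<PROJECT_ID>'  # placeholder
project = client.projects.get_all(id=[project_id])[0]
# Keep any other system tags, just drop 'hidden'
new_tags = [t for t in (project.system_tags or []) if t != 'hidden']
client.projects.update(project=project_id, system_tags=new_tags)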
Hi UpsetBlackbird87 ,
If you're in the pipelines UI, you can switch to the detailed view and see each step of the pipeline as a node 🙂
You can see an example here:
https://clear.ml/docs/latest/docs/pipelines/pipelines
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , I'm guessing it's a self deployed server. What version are you on? Did you ever see any errors/issues in mongodb/elastic?
Do you mean that ALL experiments are being deleted from all projects?
The cool thing is that you can also configure this from code as well 🙂
One example is:
https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
Task.init(..., output_uri="<URL_TO_BUCKET>")