FrothyShrimp23 , I think this is more of a product design decision - the idea of a published task is that it cannot be easily changed afterwards. What is your use case for wanting to often unpublish tasks? Why publish them to begin with? And why manually?
Hi @<1817731756720132096:profile|WickedWhale51> , are you using the Logger module from the SDK?
Hi @<1784754456546512896:profile|ConfusedSealion46> , in that case you can simply use add_external_files to add the files that are already in your storage. Or am I missing something?
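For example, a minimal sketch (the dataset project/name and the s3:// path are placeholders, not from your setup):
```python
from clearml import Dataset

# Create a new dataset version (project/name here are just examples)
dataset = Dataset.create(dataset_project="examples", dataset_name="my_dataset")

# Register files that already live in your storage without re-uploading them;
# the s3:// prefix below is a placeholder - point it at your own bucket/path
dataset.add_external_files(source_url="s3://my-bucket/my-data/")

dataset.upload()    # uploads only the dataset state, not the external files
dataset.finalize()
```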
Are you sure you pasted the credentials correctly? Does it give you feedback on which key/secret you used during the process? Which version of ClearML-Agent are you on?
Hi MagnificentWorm7 , what version of ClearML server are you running?
I'm happy you found a solution 🙂
Hi @<1793451774179282944:profile|TestyMouse38> , not sure I understand, can you please elaborate?
Any chance you could provide a shareable link if you're running on the community server?
Hi @<1658643479691005952:profile|TroubledLobster8> , regarding the agents in the web UI, they usually clear out of there within 10-15 minutes
Now try logging in
VexedCat68 , can you give a small example?
Hi @<1523701260895653888:profile|QuaintJellyfish58> , if you run in docker mode you can easily add environment variables.
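For example, a rough sketch (the image, variable name and value are placeholders, and the docker_image/docker_arguments parameters assume a reasonably recent SDK version):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker env vars")

# When an agent picks this task up in docker mode, the container is started
# with these extra docker arguments; MY_ENV_VAR / my_value are placeholders
task.set_base_docker(
    docker_image="python:3.10",
    docker_arguments="-e MY_ENV_VAR=my_value",
)
```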
Can you elaborate a bit on your use case? If it's python code, why not just put it in the original file or import from the repo?
Hi @<1679299603003871232:profile|DefeatedOstrich25> , you mean you're on the community server? Do you see any sample datasets in the Datasets section?
@<1797800418953138176:profile|ScrawnyCrocodile51> , you can edit the hyperparameters when a task is in draft mode
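Something along these lines (task ID, parameter name and queue name are placeholders):
```python
from clearml import Task

# Clone an existing task - the clone is created in draft mode
cloned = Task.clone(source_task="<source_task_id>", name="clone with new params")

# Edit hyperparameters while the task is still a draft
cloned.set_parameters({"General/learning_rate": 0.001})

# Queue it for execution on an agent
Task.enqueue(cloned, queue_name="default")
```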
BoredPigeon26 , it looks like the file isn't accessible through your browser. Are you sure the remote machine files are accessible?
It could also be something on your machine that is blocking this. Either way, I would start with them.
VividDucks43 , I think I might have misunderstood you a bit - for a single pipeline you would have the same requirements.txt, so why would you need many?
It's already implemented in the GCP autoscaler. You can use preemptible instances with GPUs
Hi EcstaticBaldeagle77 ,
I'm not sure I follow. Are you using the self hosted server - and you'd like to move data from one self hosted server to another?
UnevenDolphin73 , Hi!
I would avoid using cache_dir since it's only a cache. I think using S3 or the fileserver with Task.upload_artifact() is a nice solution
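For example, a minimal sketch (artifact name, object and task ID are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact upload")

# Upload an object as an artifact; with the SDK configured to use S3 or the
# fileserver, the artifact is stored there rather than in a local cache dir
task.upload_artifact(name="my_data", artifact_object={"rows": 42})

# Later, from another script, fetch it back (task ID is a placeholder)
prev = Task.get_task(task_id="<task_id>")
local_path = prev.artifacts["my_data"].get_local_copy()
```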
Also what do you mean by 'augment' arguments?
Try running with all of them commented out so it takes the defaults
Is it your own server installation or are you using the SaaS?
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , the reason for this is that each file is hashed, and this is how the feature compares between versions. If you're looking to keep track of specific links, then HyperDatasets might be what you're looking for
Hi FrothyShrimp23 , you can use Task.mark_completed() with force=True
https://clear.ml/docs/latest/docs/references/sdk/task#mark_completed
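For example (the task ID is a placeholder):
```python
from clearml import Task

# Fetch the task by ID and force-complete it,
# e.g. to move a stuck task out of its current state
task = Task.get_task(task_id="<task_id>")
task.mark_completed(force=True)
```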
Hi @<1638712150060961792:profile|SilkyCrocodile89> , it looks like a connectivity issue. Are you trying to upload data to the files server? Can you share the full log?
Oh I see. Technically speaking, the pipeline controller is itself a task of a special type, so you could take the task ID of the controller and clone that. You would need to make sure that the relevant system tags are also applied so it shows up properly as a pipeline in the web UI.
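Roughly something like this (the task ID, queue name and the "pipeline" system tag are assumptions on my side - double check the system tags on an existing pipeline run, and that your SDK version has get/set_system_tags):
```python
from clearml import Task

# The pipeline controller is itself a task, so it can be cloned like any task
controller = Task.get_task(task_id="<pipeline_controller_task_id>")
new_run = Task.clone(source_task=controller, name="pipeline rerun")

# Make sure the clone carries the pipeline system tag so the web UI
# shows it under Pipelines (tag name assumed here, keep any existing tags)
tags = set(new_run.get_system_tags() or [])
tags.add("pipeline")
new_run.set_system_tags(list(tags))

# Enqueue it on the queue your agent listens to (queue name is a placeholder)
Task.enqueue(new_run, queue_name="services")
```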
In addition to that, you can also trigger it using the API.
There is a CLI for working with datasets, but nothing specific for task artifacts as far as I know - only the SDK. What is your use case?
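For reference, fetching a task's artifacts through the SDK looks roughly like this (the task ID is a placeholder):
```python
from clearml import Task

# No dedicated CLI for task artifacts, but the SDK covers it
task = Task.get_task(task_id="<task_id>")

# List artifact names and download each one locally
for name, artifact in task.artifacts.items():
    print(name, artifact.url)
    local_copy = artifact.get_local_copy()
    print("downloaded to:", local_copy)
```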
Also, can you share which machine image you're using?
So how do you attach the pytorch requirement?
There are also onboarding videos there; I suggest you review all of them.