Hi SmugTurtle78 , this issue is handled in the upcoming update of ClearML PRO
GreasyLeopard35 , what versions of clearml & clearml-agent are you using? Also, what happens if you try with Python 3.9?
Hi UnevenDolphin73 ,
I think you need to launch multiple instances to use multiple credentials.
Hi @<1749965229388730368:profile|UnevenDeer21> , I think this is what you're looking for
None
Hi TartSeagull57 , are you running a local ClearML server? Did you upgrade it recently, or maybe change the clearml version?
Hi @<1549202366266347520:profile|GorgeousMonkey78> , at what point does it get stuck? What happens if you remove the Task.init line from the script?
Hi @<1560073997809356800:profile|RotundPigeon65> , I think this is what you're looking for 🙂
None
Maybe AnxiousSeal95 might have some input 🙂
If you shared an experiment with a colleague in a different workspace, can't they just clone it?
Hi GiganticMole91 , what version of ClearML server are you using?
Also, can you take a look inside the Elasticsearch container to see if there are any errors there?
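If it helps, a quick way to check, assuming the default docker-compose deployment where the container is named clearml-elastic (adjust the name if yours differs):

```bash
# Tail recent Elasticsearch logs and filter for errors
# (container name assumes the default ClearML server docker-compose)
docker logs --tail 200 clearml-elastic 2>&1 | grep -i error
```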
You mean the community server?
Hi @<1533159639040921600:profile|JoyousReindeer30> , the pipeline controller is currently pending. I am guessing it is enqueued into the services queue. You would need to run an agent on the services queue for the pipeline to start executing 🙂
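For reference, a minimal sketch of starting such an agent (queue name taken from the default setup):

```bash
# Run a clearml-agent that consumes tasks from the services queue
clearml-agent daemon --queue services --detached
```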
How are you trying to log the dataset? Can you provide the full error message please?
How did you create the dataset originally? Can you share a snippet that reproduces this?
There is a CLI for working with datasets but nothing specific for task artifacts I think, only the SDK. What is your use case?
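For reference, the dataset CLI flow looks roughly like this (project and dataset names here are placeholders):

```bash
# Create a new dataset, stage local files, then upload and finalize
clearml-data create --project "MyProject" --name "my-dataset"
clearml-data add --files ./data
clearml-data close
```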
I see. Can you provide a simple standalone code snippet that reproduces this behaviour for you?
@<1554638160548335616:profile|AverageSealion33> , what if you just run a very simple piece of code that includes Task.init()? Does the issue reproduce with one of the examples in the repository?
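Something as minimal as this should do (project/task names are just placeholders):

```python
from clearml import Task

# Bare-bones script to check whether Task.init() itself gets stuck
task = Task.init(project_name="Debug", task_name="minimal-repro")
print("Task initialized:", task.id)
```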
JitteryCoyote63 , if you go to a completed experiment, do you only see the package installation stage in the log?
What OS and clearml-agent version are you running?
Doesn't work for me either. I assume the team is already looking into it
Hi @<1768447000723853312:profile|RipeSeaanemone60> , I think the sparse flag isn't supported currently. I'd suggest opening a GitHub feature request for this 🙂
Hi JumpyPig73 ,
It appears that only the AWS autoscaler is in the open version; the other autoscalers are only in advanced tiers (Pro and onwards):
https://clear.ml/pricing/
Hi UnevenDolphin73 ,
I don't think ClearML exposes anything into env variables unless done so explicitly.
If I'm understanding correctly, you're hoping for a scenario where ClearML would expose the contents of sdk.aws.s3, for example, so you could use it later in boto3, correct?
If that's the case, why not use env vars to begin with?
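For instance, boto3 already reads the standard AWS environment variables on its own, so a sketch like this (placeholder values) works without ClearML in the middle:

```python
import os
import boto3

# boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
# environment automatically; the values below are placeholders
os.environ.setdefault("AWS_ACCESS_KEY_ID", "<key-id>")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "<secret>")

s3 = boto3.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```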
Hi @<1686184974295764992:profile|ClumsyKoala96> , you can set CLEARML_API_DEFAULT_REQ_METHOD to POST and that should work - None
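A minimal sketch; the variable has to be set before clearml is imported so the session picks it up (project/task names are placeholders):

```python
import os

# Force ClearML API calls to use POST; must happen before the import
os.environ["CLEARML_API_DEFAULT_REQ_METHOD"] = "POST"

from clearml import Task

task = Task.init(project_name="Demo", task_name="post-requests")
```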
Hi @<1797800418953138176:profile|ScrawnyCrocodile51> , you can use Task.add_requirements to add any packages. Additionally, you can install packages with the docker bash init script
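For example (package name and version are just illustrative), keeping in mind it has to be called before Task.init():

```python
from clearml import Task

# Register an extra requirement before the task is created so the
# agent installs it when reproducing the run
Task.add_requirements("pandas", "2.0.3")
task = Task.init(project_name="Demo", task_name="extra-requirements")
```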
Hi @<1566596960691949568:profile|UpsetWalrus59> , I think this basically means you have an existing model and it's using it as the starting point.
In that case you should check out pipelines from decorators, basically pushing functions to run on different machines - None
I suggest reading the full doc page on this 🙂
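A bare-bones sketch of the decorator style (names and queue are placeholders):

```python
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["data"], execution_queue="default")
def load_data():
    # Executed as its own task, potentially on a different machine
    return list(range(10))

@PipelineDecorator.component(execution_queue="default")
def train(data):
    print("training on", len(data), "samples")

@PipelineDecorator.pipeline(name="demo-pipeline", project="Demo", version="0.1")
def run_pipeline():
    train(load_data())

if __name__ == "__main__":
    run_pipeline()
```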