Hi @<1535069219354316800:profile|PerplexedRaccoon19> , not sure what you mean. Can you please share the full log, a screenshot of the two experiments and some snippet that re-creates this for you?
It's supported 🙂
Hi DilapidatedDucks58 , I think this might be a bug. Please open a GitHub issue to follow this 🙂
Hi @<1523703961872240640:profile|CrookedWalrus33> , this should be supported. How did you configure HPO?
Hi :)
I'm guessing you're running a self-hosted version? I think access rules are a feature of the enterprise version only.
Hi DeliciousSeal67 , you want the agent to run in a container and you added the container to the 'installed packages'?
What's the version of your ClearML-Agent?
Are all agents running on the same machine or is it spread out?
Did you try to edit clearml.conf on the agent side and add the extra index URL there? https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf#L78
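For reference, the relevant section in the agent's clearml.conf would look roughly like this (the index URL below is a placeholder):

```
agent {
    package_manager {
        # extra PyPI index URLs the agent passes to pip (placeholder URL)
        extra_index_url: ["https://my-private-pypi.example.com/simple"]
    }
}
```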
What happens if you look at the Elastic container logs directly? I think it's something along the lines of sudo docker logs clearml-elastic --follow. Don't catch me on the exact syntax/naming though 😛
Hi @<1523701717097517056:profile|ScantMoth28> , what version of ClearML are you using? Are you using a self hosted server or the community one?
Hi DilapidatedDucks58 , what is your server version?
SarcasticSquirrel56 , you're right. I think you can use the following setting in ~/clearml.conf: sdk.development.default_output_uri: <S3_BUCKET>. Tell me if that works
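As a config section that would look roughly like this (the bucket name is a placeholder):

```
sdk {
    development {
        # task artifacts/models will be uploaded here by default (placeholder bucket)
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```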
@<1739818374189289472:profile|SourSpider22> , this capability is available only in the HyperDatasets feature, which is part of the Scale/Enterprise license. I suggest taking a look at the HyperDatasets docs.
RotundHedgehog76 , what do you mean regarding language? If I'm not mistaken, ClearML should include the Optuna args as well.
Also, what do you mean by commit hash? ClearML logs the commit itself, but this can be changed by editing the Execution section of the task in the UI (while the task is in draft mode).
Hi @<1607184400250834944:profile|MortifiedChimpanzee9> , to use a specific requirements.txt you can use Task.add_requirements
I'm not entirely sure which steps you took. Elastic is complaining about permissions - maybe you missed one of the setup steps?
BoredPigeon26 , it looks like the file isn't accessible through your browser. Are you sure the remote machine files are accessible?
Looks like a permissions issue: nested: IOException[failed to test writes in data directory [/usr/share/elasticsearch/data/nodes/0/indices/mQ-x_DoZQ-iZ7OfIWGZ72g/_state] write permission is required]; nested
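If it really is a permissions issue on the host-mounted data folder, the usual fix is to chown it to the Elasticsearch container user (uid/gid 1000 by default; the path below assumes the default /opt/clearml mount and may differ in your deployment):

```shell
# Elasticsearch inside the container runs as uid/gid 1000 by default
sudo chown -R 1000:1000 /opt/clearml/data/elastic_7
sudo docker restart clearml-elastic
```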
You should remove anything confidential ofc 🙂
Hi @<1569496075083976704:profile|SweetShells3> , and how do you expect to control the contents of the file? Via the UI or to upload it and then run the pipeline?
Where/how are you getting that version?
Hi FierceHamster54 , I think it should install it correctly. Did you have a different experience?
UnevenDolphin73 , can you please provide a screenshot of the window and the message, with the URL visible?
I don't think datasets have visualization out of the box; you need to add these previews manually. Only the HyperDatasets feature from the Scale & Enterprise versions truly visualizes all the data.
According to your code snippet, there isn't any visualization added on top of the dataset
I see. Makes sense. Maybe open a GitHub issue for this to follow up on the request 🙂
Hi DeliciousKoala34 , is there also an exceptionally large amount of files in that Dataset? How do you create the dataset? What happens if you use something like S3, if you have it available?
Hi @<1523702439230836736:profile|HomelyShells16> , I'm afraid that's not really possible since the links themselves are saved on the backend
Hi SarcasticSquirrel56 , in Task.init() you have the parameter auto_resource_monitoring - https://clear.ml/docs/latest/docs/references/sdk/task#taskinit - you can specify there what to turn off.