Hi @<1702492411105644544:profile|YummyGrasshopper29> , I suggest doing it via the webUI with developer tools open so you can see what the webUI sends to the backend and then copy that.
wdyt?
Hi @<1749602841338580992:profile|ImpressionableSparrow64> , the S3 configuration (Credentials) is always done on the client side. You don't need to configure anything server side. Also good that you configured the agent.
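For reference, client-side S3 credentials typically live in `clearml.conf`; a minimal sketch with placeholder values:

```
# ~/clearml.conf (client side) -- all values below are placeholders
sdk {
    aws {
        s3 {
            key: "AWS_ACCESS_KEY_ID"
            secret: "AWS_SECRET_ACCESS_KEY"
            region: "us-east-1"
        }
    }
}
```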
How is the model being saved/logged into clearml?
--status Print the worker's schedule (uptime properties, server's runtime properties and listening queues)
ShakyJellyfish91 , Hi!
If I understand correctly, you wish for the agent to take the latest commit in the repo, while the task was run at a previous commit?
Hi @<1691258549901987840:profile|PoisedDove36> , not sure I understand. Can you please elaborate with screenshots maybe?
And what is the issue? You can't access the webUI?
Not sure what you're trying to do but why not simply use Task.init() and set everything there?
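If it helps, a minimal sketch of that pattern (project/task names are placeholders, and this assumes the clearml SDK is installed and configured):

```python
def start_tracked_run():
    # lazy import so the sketch stays self-contained
    from clearml import Task

    task = Task.init(
        project_name="examples",    # placeholder project name
        task_name="my experiment",  # placeholder task name
        output_uri=True,            # upload models/artifacts to the default storage
    )
    return task
```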
It's not a requirement but I guess it really depends on your setup. Do you see any errors in the docker containers? Specifically the API server
RotundHedgehog76 ,
What do you mean regarding language? If I'm not mistaken ClearML should include Optuna args as well.
Also, what do you mean by commit hash? ClearML logs the commit itself but this can be changed by editing
Hi @<1752139558343938048:profile|ScatteredLizard17> , the two instance types supported by the ClearML autoscaler are on-demand and spot instances; it has nothing to do with reserved ones
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , what is the version of your ClearML server?
Hi @<1752139552044093440:profile|UptightPenguin12> , for that you would need to use the API's mark_completed call with the force flag enabled
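For reference, a rough sketch using the SDK's APIClient (the task ID is a placeholder, and this assumes a configured clearml installation):

```python
def force_complete(task_id: str):
    # lazy import so the sketch stays self-contained
    from clearml.backend_api.session.client import APIClient

    client = APIClient()
    # mark the task as completed even if it is still in a running state
    client.tasks.completed(task=task_id, force=True)
```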
Hi @<1569496075083976704:profile|SweetShells3> , yes, it will remove all related documents both from ES & Mongo
Hi @<1593413673383104512:profile|MiniatureDragonfly17> , no. The assumption is that serving runs on a dedicated machine. Of course you can edit the docker compose to use different ports
I think you would need to contact the sales department for this 🙂
Hi @<1578193384537853952:profile|MoodyOx45> , those are actually pretty good questions 🙂
- I think so, yes, but your code & pipeline inputs would need to allow this. Your pipeline would need to be written with decorators and there would need to be some logic dependent on the parameters you give the pipeline when running
- I'm afraid that's currently not possible. I would suggest opening a GitHub feature request for this 🙂
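To illustrate the first point, a rough sketch of a decorator-based pipeline whose flow depends on a pipeline parameter (names and logic here are placeholders; this assumes the clearml SDK is installed):

```python
def build_conditional_pipeline():
    # lazy import so the sketch stays self-contained
    from clearml import PipelineDecorator

    @PipelineDecorator.component(return_values=["doubled"])
    def heavy_step(x):
        # placeholder for an expensive computation
        return x * 2

    @PipelineDecorator.pipeline(name="conditional demo", project="examples")
    def pipeline_logic(mode: str = "fast"):
        # branch on the parameter given when the pipeline is run
        if mode == "full":
            return heavy_step(21)
        return 21

    return pipeline_logic
```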
I think this might be what you're looking for:
https://clear.ml/docs/latest/docs/references/api/workers
https://clear.ml/docs/latest/docs/references/api/queues
You can access all reports through the REST API
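A minimal stdlib-only sketch of calling one of those endpoints (the server URL and API credentials are placeholders; the endpoint name follows the workers API reference linked above):

```python
import base64
import json
import urllib.request

def get_workers(api_server: str, access_key: str, secret_key: str):
    # the ClearML API accepts HTTP basic auth with your API credentials
    token = base64.b64encode(f"{access_key}:{secret_key}".encode()).decode()
    req = urllib.request.Request(
        f"{api_server}/workers.get_all",
        data=json.dumps({}).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["data"]["workers"]
```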
Sounds like an issue with your deployment. Did your Devops deploy this? How was it deployed?
I see. In this case you might need to re-register your datasets
Hi @<1673501379764686848:profile|VirtuousSeaturtle4> , what do you mean? Connect to a server someone else set up?
Hi @<1523701083040387072:profile|UnevenDolphin73> , looping in @<1523701435869433856:profile|SmugDolphin23> & @<1523701087100473344:profile|SuccessfulKoala55> for visibility 🙂
Hi ShakyOstrich31 ,
Can you verify that you did push the updated code into your repository?
Hi AbruptHedgehog21 ,
Access controls appear only in the enterprise version
It's handled by a separate process; my guess is that it will start downloading other chunks of the data or just wait for the original process.
Hi @<1524560082761682944:profile|MammothParrot39> , did you make sure to finalize the dataset you're trying to access?
MelancholyElk85 if you're using add_function_step() it has a 'docker' parameter. You can read more here:
https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#add_function_step
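A rough sketch of how that might look (pipeline/step names and the image are placeholders; this assumes the clearml SDK is installed):

```python
def step_fn(x: int) -> int:
    # placeholder step logic
    return x * 2

def build_pipeline():
    # lazy import so the sketch stays self-contained
    from clearml import PipelineController

    pipe = PipelineController(name="demo pipeline", project="examples")
    pipe.add_function_step(
        name="double",
        function=step_fn,
        function_kwargs={"x": 1},
        docker="python:3.10",  # container image this step runs in
    )
    return pipe
```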
BitterLeopard33 , ReassuredTiger98 , my bad. I just dug a bit in slack history, I think I got the issue mixed up with long file names 😞
Regarding the http/chunking issue/solution - I can't find anything either. Maybe open a GitHub issue / feature request (for chunking files)
Is there a firewall in between or something stopping the connection?