Hi @<1523703961872240640:profile|CrookedWalrus33> , this should be supported. How did you configure HPO?
What specific compatibility issues are you getting?
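For reference, a minimal sketch of how HPO is typically set up through the SDK (the base task id, metric names, and parameter name are all placeholders):

```python
from clearml.automation import DiscreteParameterRange, GridSearch, HyperParameterOptimizer

# hypothetical base task id and metric names - replace with your own
optimizer = HyperParameterOptimizer(
    base_task_id="<BASE_TASK_ID>",
    hyper_parameters=[
        DiscreteParameterRange("General/batch_size", values=[16, 32, 64]),
    ],
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    optimizer_class=GridSearch,
    execution_queue="default",
    max_number_of_concurrent_tasks=2,
)
optimizer.start()  # clones the base task per parameter combination and enqueues the clones
optimizer.wait()
optimizer.stop()
```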
Hmmmm, do you have a specific use case in mind? I think pipelines are created only through the SDK, but I might be wrong
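For reference, a minimal sketch of building a pipeline through the SDK (project/task names are placeholders):

```python
from clearml import PipelineController

# hypothetical names - replace with your own project/tasks
pipe = PipelineController(name="my-pipeline", project="examples", version="1.0")
pipe.add_step(
    name="preprocess",
    base_task_project="examples",
    base_task_name="preprocess-task",  # an existing task cloned as this step
)
pipe.add_step(
    name="train",
    parents=["preprocess"],
    base_task_project="examples",
    base_task_name="train-task",
)
pipe.start_locally(run_pipeline_steps_locally=True)  # or pipe.start() to run via agents
```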
Can you export them somehow?
Please follow the instructions.
I think you'd have to re-run them to get them logged
SubstantialElk6 , do you mean the dataset task version?
Everything in None
Hi MoodyCentipede68 , yes I think this is indeed what you're looking for
Hi @<1673501397007470592:profile|RelievedDuck3> , no 🙂
Hi @<1572032849320611840:profile|HurtRaccoon43> , I'd suggest trying this docker image: nvcr.io/nvidia/pytorch:23.03-py3
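If it helps, a minimal sketch of pinning that image on a task via the SDK (project/task names are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="gpu-training")  # hypothetical names
# when an agent running in docker mode picks this task up, it will run inside this container
task.set_base_docker(docker_image="nvcr.io/nvidia/pytorch:23.03-py3")
```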
Hi @<1523704674534821888:profile|SourLion48> , making sure I understand - you push a job into a queue that an autoscaler is listening to. A machine is spun up by the autoscaler, takes the job, and runs it. Afterwards, during the idle time, you push another job to the same queue; it is picked up by the machine that was spun up by the autoscaler, and that one fails?
Is that the entire log? Any errors in the webserver container?
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , I'm not sure I understand what you mean. Can you elaborate on the use case?
You have the open source repository of the documentation - None
I think you could generate a PDF from that with some code.
Do you mean like sub modules or actually just clone several independent repositories?
Hi @<1529271085315395584:profile|AmusedCat74> , the agent technically has two modes, `daemon` and `execute` (`clearml-agent daemon` / `clearml-agent execute`). In daemon mode the agent will, for example, start the docker container, install an agent inside it, and that inner agent will run in `execute` mode
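For reference, minimal examples of the two invocations (queue name and task id are placeholders):

```
# daemon mode: listen on a queue and run incoming tasks inside docker containers
clearml-agent daemon --queue default --docker

# execute mode: build the environment and run a single task by id
# (normally invoked by the daemon inside the container, not by hand)
clearml-agent execute --id <TASK_ID>
```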
How did you setup the ClearML server?
So you're using the community server? Response time really depends on the resources of the machine that is running the server and the amount of data to filter
If you're running in docker mode you can add those steps very easily to the bash startup script
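For example, a minimal sketch in `clearml.conf` on the agent machine (the commands here are just an illustration):

```
agent {
    # hypothetical commands - executed inside the container before the task starts
    extra_docker_shell_script: ["apt-get update", "apt-get install -y htop"]
}
```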
Hi @<1817731748759343104:profile|IrritableHippopotamus34> , I think it should be safe.
Please try setting it to True, that should fix it
Hi @<1749965229388730368:profile|UnevenDeer21> , can you add the log of the job that failed?
Also, note that you can set these arguments from the webUI on the task level itself as well, Execution tab and then container section
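The same can also be done from code; a minimal sketch (the argument values are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="my-task")  # hypothetical names
# equivalent to filling in the container image/arguments in the webUI
task.set_base_docker(
    docker_image="nvcr.io/nvidia/pytorch:23.03-py3",
    docker_arguments="--shm-size=8g",  # hypothetical extra docker argument
)
```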
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , you can do it if you run in docker mode
Hi TimelyCrab1 , directing all your outputs to S3 is actually pretty easy. You simply need to configure `api.files_server: <S3_BUCKET/SOME_DIR>` in `clearml.conf` on all machines working on it.
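For example, a minimal sketch of the relevant `clearml.conf` sections (bucket name and credentials are placeholders):

```
api {
    # hypothetical bucket/path - new artifacts and debug samples will be uploaded here
    files_server: "s3://my-bucket/clearml"
}
sdk {
    aws {
        s3 {
            key: "<ACCESS_KEY>"
            secret: "<SECRET_KEY>"
            region: "us-east-1"  # hypothetical region
        }
    }
}
```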
Migrating existing data is more difficult, since everything in the system is saved as links. I guess you could change the links in MongoDB, but I would advise against it.
Hi @<1631102016807768064:profile|ZanySealion18> , I think this is what you're looking for: None