Hi @<1546303293918023680:profile|MiniatureRobin9> , can you add the full console log?
It can run docker containers and it can run over K8s
It's a way to execute tasks remotely and even automate the entire process of data pre-processing -> training -> output model 🙂
You can read more here:
https://github.com/allegroai/clearml-agent
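For example, a minimal sketch of sending work to an agent (the project, task and queue names below are just placeholders, adjust to your setup):
```python
from clearml import Task

# Sketch: clone an existing task and push it to a queue so that a
# clearml-agent (running docker containers or over K8s) picks it up
# and executes it remotely.
template = Task.get_task(project_name="examples", task_name="my training task")
cloned = Task.clone(source_task=template, name="remote run")
Task.enqueue(cloned, queue_name="default")
```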
It depends on what you use K8s for
Hi JitteryCoyote63 ,
If you run in docker mode it will execute the shell script. I think it should be printed in the logs on startup.
Hi @<1702130048917573632:profile|BlushingHedgehong95> , I would suggest the following few tests:
- Run some mock task that uploads an artifact to the files server (see the sketch after this list). Once done, verify you can download the artifact via the web UI - there should be a link to it. Save that link. Then delete the task and mark it to also delete all artifacts. Test the link again to verify that the artifact is really gone
- Please repeat the same with a dataset
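For the first test, the mock task could look something like this (a minimal sketch; the project, task and artifact names are placeholders):
```python
from clearml import Task

# Minimal mock task that uploads a small artifact to the files server.
# Project, task and artifact names are placeholders.
task = Task.init(project_name="debug", task_name="artifact deletion test")
task.upload_artifact(name="test_artifact", artifact_object={"hello": "world"})
task.close()
```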
Hi AttractiveShrimp45 . Did you input the min value as 0, the max value as 1, and the step as 1?
It needs to be in the base task
Hi SucculentWoodpecker18 ,
The two are a bit different, which is why the versions differ. Functionality-wise they should be almost the same, and bugs shouldn't be present in either. Do you have a code snippet that reproduces this behavior?
Please try setting it to True; that should fix it
What does your requirements.txt look like?
Hi @<1595225628804648960:profile|TroubledLion34> , I'm afraid you can't upload via the API, since the uploading is done by the SDK/CLI. However, you can upload the files from your Java application and then register the dataset via the API
Makes sense?
Hi DepravedCoyote18 , can you please elaborate a bit on what the current state is now and how you would like it to be?
Hi @<1534706830800850944:profile|ZealousCoyote89> , can you please add the full log?
I think you can force the script diff to be empty with Task.set_script(diff="")
or maybe Task.set_script(diff=None)
https://clear.ml/docs/latest/docs/references/sdk/task#set_script
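Something along these lines (a sketch; the task ID is a placeholder):
```python
from clearml import Task

# Placeholder task ID - replace with the task you want to modify.
task = Task.get_task(task_id="<your_task_id>")

# Force the recorded uncommitted changes (script diff) to be empty.
task.set_script(diff="")
```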
Hi @<1523701295830011904:profile|CluelessFlamingo93> , I'm afraid there is no clear-cut way to migrate data from the community server to your own self hosted server since the databases aren't compatible.
One workaround would be to pull all experiment information via the API (the structure/logs/metrics) and then repopulate the new server using the API. It would be a bit cumbersome, but it can be achieved.
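A rough sketch of the pull side (assuming the APIClient is configured against the source server via clearml.conf; the page size and printed fields are just examples):
```python
from clearml.backend_api.session.client import APIClient

# Point your clearml.conf / credentials at the source server first.
client = APIClient()

# Pull basic task information page by page (500 is the usual page limit).
page = 0
while True:
    tasks = client.tasks.get_all(page=page, page_size=500)
    if not tasks:
        break
    for t in tasks:
        print(t.id, t.name, t.status)
    page += 1
```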
Did you spin up a serving engine?
Hi @<1580367711848894464:profile|ApprehensiveRaven81> , I'm not sure what you mean. Can you please elaborate?
It should look something like this
Hi CrookedWalrus33 , I think this is what you're looking for:
https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf#L78
Hi @<1523703961872240640:profile|CrookedWalrus33> , I think by "mutable" it means that the object itself is mutable when connecting.
I'm curious, what is your use case for changing the values in the code itself? The intended usage is to connect the config object and then control it via the webUI / API
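For reference, connecting a config object usually looks something like this (a minimal sketch; the names and values are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="config demo")

# Connect a mutable configuration dict; when the task is cloned and
# re-run, the values can be overridden from the web UI / API and the
# dict is updated accordingly at runtime.
config = {"learning_rate": 0.001, "batch_size": 32}
task.connect(config)

print(config["learning_rate"])  # reflects any UI/API override
```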
You can use scroll_id to scroll through the tasks. When you call tasks.get_all you will get a scroll_id back. Use that scroll_id in the following calls to go through the entire database. Considering you have only 2k tasks, you can cover this in 4 scrolls 🙂
Hi StraightParrot3 , page_size is indeed limited to 500 from my understanding. You need to scroll through the tasks. The first tasks.get_all response will return a scroll_id, which you need to use in your following call. Every call afterwards will return a different scroll_id, which you will always need to use in your next call to continue scrolling through the tasks. Makes sense?
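A rough sketch of that scrolling loop against the REST API (assumptions: the server URL, the key pair credentials, and getting a token via basic auth on auth.login; adjust to your setup):
```python
import requests

API = "https://api.clear.ml"              # your API server URL (placeholder)
CREDS = ("<access_key>", "<secret_key>")  # credentials from the web UI

# Get a token first (auth.login accepts basic auth with the key pair).
token = requests.post(f"{API}/auth.login", auth=CREDS).json()["data"]["token"]
headers = {"Authorization": f"Bearer {token}"}

scroll_id = None
while True:
    body = {"page_size": 500}
    if scroll_id:
        body["scroll_id"] = scroll_id
    data = requests.post(f"{API}/tasks.get_all", json=body, headers=headers).json()["data"]
    tasks = data.get("tasks", [])
    if not tasks:
        break
    for t in tasks:
        print(t["id"], t.get("name"))
    scroll_id = data.get("scroll_id")  # pass this back in the next call
```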
You have a small cog wheel on the right of the graphs. You can switch presentation to 'Wall Time' to see how much time it took 🙂