lol! Can you hit F12 and see what the server returns for the call projects.get_all_ex
Hi @<1664079296102141952:profile|DangerousStarfish38> , I think the issue is resolving the versions of torch. Are you using an older python version on the agent?
It's handled by a separate process; my guess is that it will start downloading other chunks of the data or just wait for the original process.
Hi @<1753589101044436992:profile|ThankfulSeaturtle1> , not sure I understand what you mean. Can you please elaborate?
I think you can simply reset and enqueue the task again for it to run. Question is, why did it fail?
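If it helps, a minimal sketch of doing that from the SDK (the task ID and queue name below are placeholders):
```python
from clearml import Task

task = Task.get_task(task_id="<failed_task_id>")  # fetch the failed task
task.reset()                                      # clear its previous run state
Task.enqueue(task, queue_name="default")          # push it back onto an execution queue
```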
Are you running the HPO example? What do you mean by adding more parameter combinations? If the optimizer task finished you either need a new one or to reset the previous and re-run it.
You can do various edits while in draft mode
The functionality is basically the same as the GCP/AWS ones, but since it is only available in the Scale/Enterprise versions, I don't think there is any external documentation.
A worker can execute only a single job at a time
One way to do that is: every new feature will be saved as a new file and I will specify the parent.
I think that would be the best way 🙂
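A rough sketch of that flow with ClearML datasets (project, dataset and file names here are just placeholders):
```python
from clearml import Dataset

parent = Dataset.get(dataset_project="my_project", dataset_name="features")
child = Dataset.create(
    dataset_project="my_project",
    dataset_name="features",
    parent_datasets=[parent.id],  # the new version builds on the previous one
)
child.add_files("new_feature.parquet")  # only the newly added file
child.upload()
child.finalize()
```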
Hi @<1535793988726951936:profile|YummyElephant76> , did you use Task.add_requirements ?
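For reference, Task.add_requirements needs to be called before Task.init — a minimal sketch (package and project names are just examples):
```python
from clearml import Task

Task.add_requirements("torch", "==1.13.1")  # pin the package the agent should install
task = Task.init(project_name="examples", task_name="requirements demo")
```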
This is the env variable you're looking for - CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
Then indeed it looks like a network/provider issue
Hi GloriousPenguin2 , have you tried the method you mentioned in the previous thread?
Like fetching the task of the scheduler and then changing the configuration json?
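Something along these lines should work (the task ID, configuration object name and key below are placeholder assumptions):
```python
import json
from clearml import Task

scheduler_task = Task.get_task(task_id="<scheduler_task_id>")
config_text = scheduler_task.get_configuration_object("General")  # name depends on how it was stored
config = json.loads(config_text)
config["some_key"] = "new_value"  # hypothetical field to change
scheduler_task.set_configuration_object("General", config_text=json.dumps(config))
```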
AbruptCow41 , can you please elaborate? You want to move around files to some common folder and then at the end just create the dataset using that folder?
Is there any specific reason you're not running in docker mode? Running in docker would simplify things
Hi DepravedCoyote18 , as long as you have everything backed up (configurations and data) under /opt/clearml/ (I think this is the default folder for storing ClearML-related data), the server migration should work (the data itself is a different issue).
However, ClearML holds links internally for datasets/debug samples/artifacts and a few other outputs maybe. Everything currently logged in the system to a certain minio server will still be pointing to that minio server.
Does that make sense?
Just to make sure, does Backblaze support the boto3 SDK?
Hi JitteryCoyote63 , I don't believe this is possible. Might want to open a GitHub feature request for this.
I'm curious, what is the use case? Why not use some default Python docker image as the default at the agent level, and then when you need a specific image, put it into the experiment configuration?
Hi, where did you get the instructions that specify 'trains'? Everything should be switched to 'clearml'
Hi JitteryCoyote63 ,
Regarding Edit 2: This seems like a nice idea.
Regarding adding option to only stop them - Please open a feature request on GitHub 🙂
@<1544853721739956224:profile|QuizzicalFox36> , are you running the steps from the machine whose config you checked?
I've also suspected as much. I've asked the guys to check out the credentials starting with TX4PW3O (what you provided). They managed to use the credentials successfully without errors.
Therefore, it is a configuration issue.
Are the cloned tasks running? Can you add logs from the HPO and one of the child tasks?
I think you can do this only through the API, if at all possible since it's a system tag. Which project do you want to edit?
I played around with it a bit and managed to get the value. OutrageousSheep60 , please tell me if this helps you 🙂
```python
>>> task.set_user_properties(x=5)
True
>>> y = task.get_user_properties()
>>> y
{'x': {'section': 'properties', 'name': 'x', 'value': '5'}}
>>> y["x"]["value"]
'5'
```
AttractiveShrimp45 , can you please open a GitHub issue so we can follow up on this?
What versions of ClearML/matplotlib are you using?
get_parameter returns the value of a parameter as documented:
https://clear.ml/docs/latest/docs/references/sdk/task#get_parameter
Maybe try https://clear.ml/docs/latest/docs/references/sdk/task#get_parameters
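For illustration, the difference looks roughly like this (the task ID and parameter name are just examples):
```python
from clearml import Task

task = Task.get_task(task_id="<task_id>")
single = task.get_parameter("Args/batch_size")  # one value, returned as a string
params = task.get_parameters()                  # dict of "Section/name" -> value
print(single, params.get("Args/batch_size"))
```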
Hi FierceHamster54 , you have docker_args in https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#pipelinedecoratorcomponent
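A hedged sketch of what that could look like (the image and arguments are placeholders):
```python
from clearml import PipelineDecorator

@PipelineDecorator.component(
    docker="python:3.9",
    docker_args="--shm-size=8g",  # extra arguments for the docker run command
)
def preprocess(data_path: str):
    # hypothetical step body
    return data_path
```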
Hi @<1523702932069945344:profile|CheerfulGorilla72> , I think you need to map out the relevant folders for the docker. You can add docker arguments to the task using Task.set_base_docker
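For example, something along these lines (the image and paths are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker mount demo")
task.set_base_docker(
    docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04",
    docker_arguments="-v /host/data:/data",  # mount the relevant host folder into the container
)
```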