MysteriousBee56 what do you mean by "save Scalars on the machine"? All metrics are sent to the trains server. You can later retrieve them from code, if you need to.
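Something along these lines should pull them back (a minimal sketch, assuming the task id is known and a recent trains/clearml version):
# sketch: retrieve reported scalars from code, '<your_task_id>' is a placeholder
from trains import Task
task = Task.get_task(task_id='<your_task_id>')
scalars = task.get_reported_scalars()  # dict: {title: {series: {'x': [...], 'y': [...]}}}
print(scalars)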
BTW: you can still get race/starvation cases... But at least no crash
That's the right place, but use it like you would a hydra override, which in your case I think should be "accelerator.gpu".
You can also change allow_omegaconf_edit
in the UI to True, and then you could just edit the OmegaConf in the UI (if you do not change
allow_omegaconf_edit, the edit in the UI is ignored)
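Just to illustrate the override syntax (a sketch; the script name is a placeholder and assumes your config actually has that key):
# standard Hydra command-line override syntax (sketch)
python train.py accelerator.gpu=0
The same "key.sub=value" form is what the UI override expects when allow_omegaconf_edit is left as False.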
I have to admit mounting it to a different drive is a good reason to bring this feature back, the reasoning was it means the agent needs to make sure it manages them (e.g. multiple agents running on the same machine)
hmm that is odd, let me check
Sorry for pinging you on this old thread.
...
And what was the learning strategy? Adam? RMSProp?
Sorry, missed it...
I would actually use the HPO to test various setups (it uses Optuna under the hood, so really SOTA HyperBand / Bayesian optimization on top of it)
https://github.com/allegroai/clearml/blob/master/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py
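Roughly something like this (a minimal sketch; the base task id, parameter names and ranges are placeholders, adjust to your task):
# sketch: HPO with the Optuna backend, placeholder ids/parameters
from clearml.automation import HyperParameterOptimizer, UniformParameterRange, DiscreteParameterRange
from clearml.automation.optuna import OptimizerOptuna

optimizer = HyperParameterOptimizer(
    base_task_id='<template_task_id>',          # the task to clone and optimize
    hyper_parameters=[
        UniformParameterRange('General/lr', min_value=1e-5, max_value=1e-1),
        DiscreteParameterRange('General/batch_size', values=[16, 32, 64]),
    ],
    objective_metric_title='validation',
    objective_metric_series='loss',
    objective_metric_sign='min',
    optimizer_class=OptimizerOptuna,
    max_number_of_concurrent_tasks=2,
    execution_queue='default',
)
optimizer.start()
optimizer.wait()
optimizer.stop()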
If I have access to the logs, python env and git commits, is there an API to log those to the experiments too?
Sure: task.update_task
see here:
https://clear.ml/docs/latest/docs/references/sdk/task#update_task
example:
task.update_task(task_data={'script': {'branch': 'new_branch', 'repository': 'new_repo'}})
The easiest way to get all the different sections (they should be relatively self explanatory) is calling task.export_task() which returns a dict with all the fields yo...
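For example, to see which fields exist and then update one of them (a rough sketch, '<your_task_id>' is a placeholder):
# sketch: inspect the task structure, then update a field
from clearml import Task
task = Task.get_task(task_id='<your_task_id>')
fields = task.export_task()      # full dict of all task sections/fields
print(fields['script'])          # e.g. the 'script' section with repo/branch/diff
task.update_task(task_data={'script': {'branch': 'new_branch'}})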
this
from fastai.callbacks.tensorboard import LearnerTensorboardWriter
doesn’t exist anymore in fastai2
Hmm we should definitely update the example to fastai2 API
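If it helps, I think the fastai v2 equivalent is the TensorBoard callback under fastai.callback (just a sketch, not the official example yet):
# fastai v2 (sketch) - `learn` is assumed to be your existing fastai Learner
from fastai.callback.tensorboard import TensorBoardCallback
learn.fit_one_cycle(1, cbs=[TensorBoardCallback()])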
maybe the fastai bindings in clearml package are outdated
Are you getting any scalars reported to clearml?
they also appear to be relying on the tensorboard callback, which seems not to work on distributed training
Yes that is correct, usually the way it works all nodes report back to "master...
So the "packages" are the packages you need in the steps themselves ?
Can the host server's service agent be used?
In theory yes, just make sure you expose the containers' network (check the docker compose)
Hmm, I really like this one:
https://chart-studio.plotly.com/~empet/14632/plotly-joyplotridgelines/#plot
What I'm thinking is a global setting basically telling the TB binding layer to always do ridgeline instead of 3d surface.
wdyt?
Hi ExcitedFish86
In Pytorch-Lightning I use DDP
I think a fix for pytorch multi-node / process distribution was committed to 1.0.4rc1, could you verify it solves the issue? (rc1 should fix this specific issue)
BTW: no problem working with clearml-server < 1
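If you want to give it a quick try (assuming the rc is published on PyPI):
pip install clearml==1.0.4rc1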
Hi EmbarrassedSpider34
clearml-init will try to create ~/clearml.conf
I'm assuming that when you execute under root it is resolved to /root/clearml.conf
That said, you might be able to override it with:
CLEARML_CONFIG_FILE=$HOME/clearml.conf sudo clearml-init
Hi ResponsiveCamel97
What's the clearml-server version? How do you spin the server on your k8s cluster, helm ?
And is the step actually "queued", or is it "queued" only in the pipeline state (i.e. the visualization did not update)?
Hi AverageBee39
Did you set up an agent to execute the actual Tasks?
OddAlligator72 I like this idea.
The single thing I'm not sure about is the "function entry point"
Why would one do that? Meaning why wouldn't you have a proper python entry-point.
The reason I'm reluctant is that you might have calls/functions/variables in the global scope of the file storing the function, and then users will not know why something broke, and it will be very cumbersome to debug.
A simple script entry point seems trivial to launch and debug locally.
What do you think ? What woul...
Try to set this line in your clearml.conf to true:
https://github.com/allegroai/clearml/blob/6e6271fb91f2aeb2aa7a13c6d07d4e635baaa670/docs/clearml.conf#L177
Should I use update_weights_package
Yes
BTW, config.pbtxt you should pass when "registering" the endpoint with the CLI
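For the model side, roughly (a sketch, assuming an OutputModel attached to the current task; the path is a placeholder):
# sketch: upload the whole model folder as a weights package
from clearml import OutputModel, Task
task = Task.current_task()
model = OutputModel(task=task)
model.update_weights_package(weights_path='/path/to/model_dir')
The config.pbtxt itself goes to the clearml-serving CLI when you register the endpoint, not to the model object.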
I want to keep the above setup, the remote branch that will track my local will be on fork, so it needs to pull from there. Currently it recognizes origin, so it doesn’t work because the agent then can’t find the commit.
So you do not want to push the change set ?
You can basically add the entire change set (uncommitted changes) from the last pushed commit.
In your clearml.conf, set store_code_diff_from_remote: true
https://github.com/allegroai...
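If I remember the section correctly it sits under sdk.development (double-check against the linked clearml.conf):
# clearml.conf (sketch - verify the exact section in the linked file)
sdk {
  development {
    store_code_diff_from_remote: true
  }
}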
DAG which get scheduled at given interval and
Yes exactly what will be part of the next iteration of the controller/service
an example achieving what i propose would be greatly helpful
Would this help?
from trains.automation import TrainsJob
job = TrainsJob(base_task_id='step1_task_id_here')
job.launch(queue_name='default')
job.wait()
job2 = TrainsJob(base_task_id='step2_task_id_here')
job2.launch(queue_name='default')
job2.wait()
Hi DizzyPelican17
I’d like to configure requirements file, docker image, docker command for my pipeline controller, but it seems I cannot set it up. Am I missing something?
The decorator itself accepts those as arguments:
https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#pipelinedecoratorcomponent
https://github.com/allegroai/clearml/blob/90f30e8d9a5ca9a1afa6b2e5ffccb96b0afe9c78/examples/pipeline/pipeline_from_decorator.py#L8
I’d like to setup up...
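Something like this (a sketch; the package list and docker images are placeholders, adjust to your environment):
# sketch: per-step and controller settings via the decorators, placeholder values
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    return_values=['processed'],
    packages=['pandas==1.5.3', 'scikit-learn'],          # per-step requirements
    docker='nvidia/cuda:11.8.0-runtime-ubuntu22.04',     # per-step docker image
    docker_args='--shm-size=8g',
)
def preprocess(raw_path):
    import pandas as pd
    return pd.read_csv(raw_path)

@PipelineDecorator.pipeline(
    name='my_pipeline', project='examples', version='1.0',
    docker='python:3.9',                                  # controller's docker image
)
def run(raw_path='data.csv'):
    processed = preprocess(raw_path)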
Hmm let me check, I think we changed the offline mode to use the latest API version (because by definition it cannot know the server's version).
Let me check if you can override it
Hi @<1539055479878062080:profile|FranticLobster21>
hey, how do I use local files as dependencies?
You mean like a repository ?
Can I specify in task what local files do I use that should be packaged?
In a git repo?
Basically the agent can do two things, either replicate a single script or clone a git repo + uncommitted changes
Hi @<1691620877822595072:profile|FlutteringMouse14>
Do I have to use Hydra
You can, and then the entire configuration is fully captured by ClearML (automatically) while you can still override values with the manual "key.sub=value" both in the UI and in the CLI
Otherwise you can connect a nested dict with task.connect (these will be flattened with / for sub keys).
Or you can connect configuration files ( task.connect_configuration
) and edit them as is in the UI (with override of...
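For example (a sketch; project/task names and the config file are placeholders):
# sketch: nested dict (flattened with '/') vs. a full config file
from clearml import Task
task = Task.init(project_name='examples', task_name='config demo')

params = {'model': {'lr': 0.001, 'layers': 4}, 'data': {'batch_size': 32}}
task.connect(params)   # shows up as model/lr, model/layers, data/batch_size

config_path = task.connect_configuration('config.yaml', name='training config')
# use config_path from here on - when executed by the agent it points to the UI-edited copy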
What is the difference to file_history_size
Number of unique files per title/series combination (i.e. how many images to store in the history when the iteration is constantly increasing)
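It is controlled from clearml.conf, something like (a sketch, check your conf for the exact section):
# clearml.conf (sketch)
sdk {
  metrics {
    file_history_size: 100   # unique files kept per title/series
  }
}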