With this scenario, your data should be updated when running the pipeline.
If you are using add_function_step, you can pass packages=["protobuf<=3.20.1"]
(there is an example in the SDK docs: https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#add_function_step-1 ).
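Something like this should work (just a sketch, the step function and names here are placeholders):
` from clearml import PipelineController

# placeholder step function, only here to illustrate the packages argument
def step_one(message):
    print(message)
    return message

pipe = PipelineController(name="example pipeline", project="Examples", version="1.0.0")
pipe.add_function_step(
    name="step_one",
    function=step_one,
    function_kwargs=dict(message="hello"),
    function_return=["message"],
    packages=["protobuf<=3.20.1"],  # explicit packages to install for this step
)
pipe.start_locally(run_pipeline_steps_locally=True) `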
Hi PompousParrot44 , you mean delete experiment?
DepressedChimpanzee34 how do you generate the task that's running remotely? Once the agent pulls the task, this is your running configuration (it will pull the same configuration from the server, as you see in the UI)
and under uncommitted changes you have the entire script?
Hi UnevenDolphin73 ,
Try to re-run it, a new instance will be created. Under this specific instance's Actions you have Monitoring and troubleshoot, where you can select Get system logs.
I want to verify your scaler doesn't have any failures in this log.
Can you share the local run log?
When connecting a nested dict, the keys will be in the form period/end and period/start, so those are the keys you need to change, in addition to the section name (General if no name is given).
For example, in your case this should work:
` cloned_task = Task.clone(source_task=template_task,
                           name=template_task.name + ' for params',
                           parent=template_task.id)
# put back into the new cloned task
cloned_task.set_parameters({"General/period/start": "...
I just tried it and everything works.
I ran this for the template task:
` from clearml import Task
task = Task.init(project_name="Examples", task_name="task with connected dict")
period = {"start": "2020-01-01 00:00", "end": "2020-12-31 23:00"}
task.connect(period, name="period") `
and this for the cloned one:
` from clearml import Task
template_task = Task.get_task(task_id="<Your template task id>")
cloned_task = Task.clone(source_task=template_task,
name=templat...
Hi BattyLizard6 ,
Do you have a toy example so I can check this issue on my side?
Hi LethalCentipede31
You can report plotly figures with task.get_logger().report_plotly , like in https://github.com/allegroai/clearml/blob/master/examples/reporting/plotly_reporting.py
For seaborn, once you call plt.show it will show up in the UI (example: https://github.com/allegroai/clearml/blob/master/examples/frameworks/matplotlib/matplotlib_example.py#L48 ).
You can also use the ClearML logger for media reporting, like in this example: https://github.com/allegroai/clearml/blob/master/examples/reporting/media_reporting.py
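A minimal sketch of the plotly case (the figure itself is just a placeholder):
` import plotly.express as px
from clearml import Task

task = Task.init(project_name="Examples", task_name="plotly reporting")

# any plotly figure works here, this scatter is just for the example
fig = px.scatter(x=[0, 1, 2, 3], y=[0, 1, 4, 9])
task.get_logger().report_plotly(title="scatter", series="demo", figure=fig, iteration=0) `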
Hi TeenyFly97 ,
With task.close() the task will do a full shutdown process. This includes repo detection, logs, metrics and artifacts flush, and more. The task will no longer be the running task and you can start a new task.
With task.mark_stopped(), the task logs will be flushed and the task will mark itself as stopped, but it will not perform the full shutdown process, so current_task will still be this task.
For example:
` from trains import Task
task = Task.in...
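A quick sketch of the difference (using the old trains package name like the snippet above; project/task names are placeholders):
` from trains import Task

task = Task.init(project_name="Examples", task_name="close vs mark_stopped")

# mark_stopped(): logs are flushed and the task marks itself as stopped,
# but it is still the current task
task.mark_stopped()
assert Task.current_task() is task

# close(): full shutdown process, the task is no longer the running task,
# so a new Task.init() can start a fresh task
task.close()
new_task = Task.init(project_name="Examples", task_name="next task") `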
You are definitely right! We will fix this issue, Thanks 🙂
Hi GiganticTurtle0 ,
All the packages you are using should be listed under the Installed Packages section of your task (in the UI). ClearML analyzes the imports, and the full report should be under this section.
You can add any package you like with Task.add_requirements('tensorflow', '2.4.0')
for tensorflow version 2.4.0 (or Task.add_requirements('tensorflow', '') for no version limit).
If you don't want the package analyzer, you can configure in your ~/clearml.conf file: ` sdk.development.detect_with_...
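A quick sketch (as far as I remember, add_requirements needs to be called before Task.init()):
` from clearml import Task

# call add_requirements() before Task.init()
Task.add_requirements('tensorflow', '2.4.0')   # pin tensorflow to version 2.4.0
# Task.add_requirements('tensorflow', '')      # or: no version limit

task = Task.init(project_name="Examples", task_name="manual requirements") `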
You can configure env vars in your docker compose, but what is your scenario? Maybe there are some other solutions
Hi SmarmySeaurchin8
You can configure the TRAINS_CONFIG_FILE env var with the conf file you want to run with. Can this do the trick?
somehow the uncommitted changes (the full script in this case) weren't detected
I only want to save it as a template so I can later call it in a pipeline
Running with task.execute_remotely() won't really run the task. It will start it and abort it, so you will have it in Aborted state, and this is your template task.
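A minimal sketch of that flow (project/task names are just placeholders):
` from clearml import Task

task = Task.init(project_name="Examples", task_name="pipeline step template")

# without a queue name the local run stops here and the task is left Aborted,
# ready to be used as a template (e.g. called later from a pipeline)
task.execute_remotely(queue_name=None, exit_process=True)

# the rest of the script only runs when an agent executes the task `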
Hi WickedBee96 ,
Are you running a standalone script or code that is part of a git repository?
Hi DilapidatedDucks58 ,
ClearML supports dynamic gpus allocation as part of the paid version - https://clear.ml/docs/latest/docs/clearml_agent#dynamic-gpu-allocation
can this help?
Hi GiganticTurtle0 ,
Uploading artifacts is done asynchronously; maybe this is the issue in your case. You can change it with wait_on_upload=True, can you try it?
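A quick sketch (the artifact name and object are just placeholders):
` from clearml import Task

task = Task.init(project_name="Examples", task_name="artifact upload")

results = {"accuracy": 0.92}  # any serializable object can be an artifact

# wait_on_upload=True blocks until the upload finishes instead of the default async upload
task.upload_artifact(name="results", artifact_object=results, wait_on_upload=True) `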
Hi NonchalantDeer14 ,
Can you share the env you are running with?
NonchalantDeer14 thanks for the logs, do you maybe have some toy example I can run to reproduce this issue on my side?
Which storage are you using? ClearML files server?
Hi MysteriousBee56 .
What trains-agent version are you running? Do you run it in docker mode (e.g. trains-agent daemon --queue <your queue name> --docker )?