Ohh I see, okay the next pipeline version (coming very very soon 🙂) will have the option of a function as a Task, would that be better for your use case?
(Also, in case of local execution, and I can totally see why this is important, how would you specify where the current code base is? Are you expecting it to be local?)
I'm all for trying to help with debugging pipelines, because this is really challenging.
BTW: you can run your code as if it were executed by an agent (including the param ove...
Hi WackyRabbit7
I believe this is fixed in clearml-server 1.1 (this is a plotly color issue), releasing later today or tomorrow 🙂
Hmm I think you are correct:
:param auto_create: Create new dataset if it does not exist yet
It should have created it, this seems like a bug, I'll make sure to pass it along 🙂
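For reference, a minimal sketch of the call in question (assuming the standard clearml Dataset.get API; project and dataset names are illustrative):
from clearml import Dataset

# auto_create is documented to create the dataset if it does not exist yet
ds = Dataset.get(
    dataset_project="examples",   # illustrative name
    dataset_name="my_dataset",    # illustrative name
    auto_create=True,
)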
BoredHedgehog47 I tried changing the order of imports on the sample code I shared before, it worked in both cases ...
LovelyHamster1 from the top, we have two steps:
1. We run the code "manually" (i.e. without the agent). This step creates the experiment (Task) and automatically fills in the "installed packages" (which are in the same format as a regular requirements.txt).
2. An agent runs a cloned copy of the experiment (Task). The agent creates a new venv on the agent's machine, then uses the "Installed packages" section as a replacement for the regular "requirements.txt" and installs everything fro...
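A minimal sketch of step 1, assuming the standard Task.init call (project/task names are illustrative):
from clearml import Task

# running "manually" (no agent): this call creates the experiment (Task)
# and automatically fills in the "installed packages" section
task = Task.init(project_name="examples", task_name="manual run")

# ... your training code ...

Step 2 is then just cloning this Task (in the UI or from code) and enqueuing it so an agent picks it up.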
Hmm reading this: None
How are you checking the health of the serving pod ?
TrickyRaccoon92 I'm not sure I follow, TB does show? And you want to add an additional plotly plot?
The only workaround I can think of is:
series = series + 'IoU>X'
It doesn't look that bad 🙂
WhimsicalLion91
What would you say the use case for running an experiment with iterations
That could be loss value per iteration, or accuracy per epoch (iteration is just a name for the x-axis in a sense, this is equivalent to a time series)
Make sense?
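As an illustration, reporting a value per iteration with the standard Logger API (names and values are illustrative):
from clearml import Task

task = Task.init(project_name="examples", task_name="scalar reporting")
logger = task.get_logger()

for epoch in range(10):
    loss = 1.0 / (epoch + 1)  # dummy value for illustration
    # "iteration" is simply the x-axis of the resulting time series
    logger.report_scalar(title="loss", series="train", value=loss, iteration=epoch)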
Try to set this line in your clearml.conf to true:
https://github.com/allegroai/clearml/blob/6e6271fb91f2aeb2aa7a13c6d07d4e635baaa670/docs/clearml.conf#L177
I want to keep the above setup, the remote branch that will track my local will be on fork so it needs to pull from there. Currently it recognizes origin so it doesn't work because the agent then can't find the commit.
So you do not want to push the change set?
You can basically add the entire change set (uncommitted changes from the last pushed commit).
In your clearml.conf, set store_code_diff_from_remote: true
https://github.com/allegroai...
DAG which gets scheduled at a given interval and
Yes, exactly what will be part of the next iteration of the controller/service
An example achieving what I propose would be greatly helpful
Would this help?
from trains.automation import TrainsJob

job = TrainsJob(base_task_id='step1_task_id_here')
job.launch(queue_name='default')
job.wait()

job2 = TrainsJob(base_task_id='step2_task_id_here')
job2.launch(queue_name='default')
job2.wait()
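Each wait() call blocks until the corresponding Task finishes, so launching job2 only after job.wait() returns is what chains the two steps into a simple sequential pipeline.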
Hi @<1691620877822595072:profile|FlutteringMouse14>
Do I have to use Hydra
You can, and then the entire configuration is fully captured by ClearML (automatically) while you can still override values with the manual "key.sub=value" both in the UI and in the CLI
Otherwise you can connect a nested dict with task.connect (these will be flattened with '/' for sub keys).
Or you can connect configuration files (task.connect_configuration) and edit them as is in the UI (with override of...
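A minimal sketch of both options (the file name and values are illustrative):
from clearml import Task

task = Task.init(project_name="examples", task_name="config connect")

# nested dict: sub keys are flattened with '/' (e.g. "model/layers")
params = {"model": {"layers": 4, "dropout": 0.1}, "lr": 1e-3}
task.connect(params)

# or attach a configuration file and edit it as-is in the UI
task.connect_configuration(configuration="config.yaml", name="my config")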
You can set torch to be installed last:
post_packages: ["horovod", "torch"]
Which will make sure the torch version (the one you specified in the "installed packages") will be installed last.
Notice that the pipeline step/Task, at execution time, is not aware of the pipeline context
Hi WickedGoat98
Will I need to wrap their execution in Python via system calls?
That would probably be the easiest solution 🙂
Then you can plug it into your pipeline as a preprocessing Task:
You can check this example:
https://github.com/allegroai/trains/tree/master/examples/pipeline
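For example, a minimal sketch of wrapping an external tool in a Task via a system call (the tool name and arguments are purely illustrative):
import subprocess

from clearml import Task

# this Task can then be cloned and plugged into a pipeline as a preprocessing step
task = Task.init(project_name="examples", task_name="preprocessing step")

# run the external tool via a system call
subprocess.run(
    ["my_preprocess_tool", "--input", "data/raw", "--output", "data/clean"],
    check=True,
)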
Just making sure, the machine you were running "trains-init" on can access the API server?
Are you running a Jupyter notebook inside VS Code?
After removing the task.connect lines, it encountered another error related to 'einops' that is not recognized. It does exist in my environment file but was not installed by the agent (according to what I see on 'Summary - installed python packages'). Should I add this manually?
Yes, I'm assuming this is a derivative package that is needed by one of your packages?
Task.add_requirements("einops")
task = Task.init(...)
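Note that Task.add_requirements() needs to be called before Task.init() (as in the snippet above) so the extra requirement is added to the Task's installed packages.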
Hi @<1724960468822396928:profile|CumbersomeSealion22>
It starts the pipeline, logs that the first step is started, and then...does nothing anymore.
How many agents do you have running? By default an agent will run one Task at a time (unless executed with --services-mode, which allows it to run an unlimited number of parallel tasks)
Hmm, not a bad idea 🙂
Could you please open a GitHub issue, so it will not get forgotten?
(BTW: I'm not sure how trivial it is to implement, nonetheless obviously possible 🙂)
It's stored on the Task, you can see it under the Execution tab in the UI