Hi ZippyAlligator65
You can configure it in the clearml.conf: see here:
https://github.com/allegroai/clearml-agent/blob/ebb955187dea384f574a52d059c02e16a49aeead/clearml_agent/backend_api/config/default/agent.conf#L202
GrievingTurkey78 I'm not sure I follow, are you asking how to add additional scalars?
Another option is that the download fails (e.g. missing credentials on the client side, i.e. in clearml.conf)
All 3 steps can be found here:
https://github.com/allegroai/trains/tree/master/examples/pipeline
I see, so in theory you could call add_step with a pipeline parameter (i.e. pipe.add_parameter etc.)
But currently the implementation is such that if you are starting the pipeline from the UI
(i.e. rerunning it with a different argument), the pipeline DAG is deserialized from the Pipeline Task (the idea being that one could control the entire DAG externally without changing the code).
I think a good idea would be to actually allow the pipeline class to have an argument saying "always create from code".
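For reference, a minimal sketch of wiring a pipeline parameter into a step via parameter_override (the project/task names here are just placeholders):

from clearml import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0")

# expose a pipeline-level parameter (editable from the UI when re-running)
pipe.add_parameter(name="dataset_url", default="s3://bucket/data.csv")

# pass the pipeline parameter into a step through parameter_override
pipe.add_step(
    name="preprocess",
    base_task_project="examples",
    base_task_name="preprocess template",
    parameter_override={"Args/dataset_url": "${pipeline.dataset_url}"},
)

pipe.start()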
I would like to use ClearML together with Hydra multirun sweeps, but I'm having some difficulties with the configuration of tasks.
Hi SoreHorse95
In theory that should work out of the box. Why do you need to manually create a Task (as opposed to just having a Task.init call inside the code)?
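As a minimal sketch, assuming a standard Hydra app (project/task names are placeholders), this is all that should be needed; ClearML picks up the Hydra configuration automatically:

import hydra
from omegaconf import DictConfig
from clearml import Task

@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig):
    # Task.init inside the code - the Hydra config is logged automatically
    task = Task.init(project_name="examples", task_name="hydra multirun step")
    print(cfg)

if __name__ == "__main__":
    main()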
So it should cache the venvs right?
Correct,
path: /clearml-cache/venvs-cache
Just making sure, this is the path to the host cache folder
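For reference, the matching section in clearml.conf looks something like this (the exact values here are just an example):

agent {
    venvs_cache: {
        # maximum number of cached venvs
        max_entries: 10
        # minimum free space (GB) to keep on the drive
        free_space_threshold_gb: 2.0
        # the host folder used for the cache
        path: /clearml-cache/venvs-cache
    }
}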
ClumsyElephant70 I think I lost track of the current issue 🙂 what's exactly not being cached (or working)?
Was going crazy for a short amount of time yelling to myself: I just installed clear-agent init!
oh noooooooooooooooooo
I can relate so much; it happens to me too often that copy-pasting into bash just uses the unicode character instead of the regular ascii one.
I'll let the front-end guys know, so we do not make ppl go crazy 🙂
I'm assuming you cannot directly access port 10022 (default ssh port on the remote machine) from your local machine, hence the connection issue. Could that be?
GiganticTurtle0 notice that when you spin an agent with --services-mode, you basically let it run many Tasks at once (this is in contrast to the default behavior, where you have one Task per agent).
It can be a different agent.
If inside a docker then:
clearml-agent execute --id <task_id here> --docker
If you need a venv, do:
clearml-agent execute --id <task_id here>
You can run that on any machine and it will respin and continue your Task
(obviously your code needs to be aware of that and be able to pull its own last model checkpoint from the Task artifacts / models)
Is this what you are after?
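As a sketch of what that could look like (assuming the checkpoints are registered as output models on the Task; project/task names are placeholders):

from clearml import Task

# when re-run by the agent, Task.init reattaches to the same Task
task = Task.init(project_name="examples", task_name="resumable training")

# pull the last checkpoint registered on the Task (if any) and resume from it
output_models = task.models["output"]
if output_models:
    checkpoint_path = output_models[-1].get_local_copy()
    # load the checkpoint here and continue training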
Yes, it will always create a new Task.
Hi JitteryCoyote63
could you check if the problem exists in the latest RC?
pip install clearml==1.0.4rc1
Okay, that kind of makes sense. Now my follow-up question is: how are you using the ASG? I mean the clearml autoscaler does not use it, so I just wonder what the big picture is, before we solve this little annoyance 🙂
Hmm, I'm not sure; there is no reason why it would get stuck.
Removing all the auto loggers can be done with:
Task.init(..., auto_connect_frameworks=False)
which would disconnect all the automatic loggers (Hydra etc.); of course this is for debugging purposes.
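For example (project/task names are placeholders; the dict form lets you switch off only specific frameworks):

from clearml import Task

# disable all automatic framework logging (debugging only)
task = Task.init(project_name="examples", task_name="debug run",
                 auto_connect_frameworks=False)

# or keep everything except the Hydra binding:
# task = Task.init(project_name="examples", task_name="debug run",
#                  auto_connect_frameworks={"hydra": False})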
@<1535793988726951936:profile|YummyElephant76>
Whenever I create any task the "uncommitted changes" are the contents of ipykernel_launcher.py, is there a way to make ClearML recognize that I'm running inside a venv?
This sounds like a bug, it should have the entire notebook there, no?
We are always looking for additional talented people 🙂 DM me...
@<1523701079223570432:profile|ReassuredOwl55> did you try adding it manually?
./path/to/package
You can also do that from code:
from clearml import Task

# notice: you need to call Task.add_requirements before Task.init
Task.add_requirements("./path/to/package")
task = Task.init(...)
Yep, everything (both conda and pip)
Now I need to figure out how to export that task id
You can always look it up 🙂
How come you do not have it?
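For example, a couple of ways to get it (project/task names are placeholders):

from clearml import Task

# from inside the running code
task_id = Task.current_task().id

# or look up an existing Task by project/name
task = Task.get_task(project_name="examples", task_name="my training task")
print(task.id)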
Notice the pipeline step/Task at execution is not aware of the pipeline context
Hello guys, I have 4 workers (2 in the default and 2 in the service queue on the same machine)
Hi @<1526734437587357696:profile|ShaggySquirrel23>
I think what happens is one agent is deleting its cfg file when it is done, but at least in theory each one should have its own cfg
One last request: can you try with the agent's latest RC version, 1.5.3rc2?
Nice guys! Notice that the clearml-task can auto add the Task.init call on the fly, so you can connect any arbitrary Task and control the argparser arguments (again as parameters to the clearml-task)
BTW: A fix for the --task-type issue will be pushed later today 🙂
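For reference, a typical invocation looks something like this (repo/script/args are just placeholders):

clearml-task --project examples --name remote-run \
    --repo https://github.com/user/repo.git --branch main \
    --script train.py --args batch_size=64 lr=0.001 \
    --queue default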
Hi SkinnyPanda43
This issue was fixed with clearml-agent 1.5.1, can you verify?
Hi AgitatedTurtle16 could you verify you can access the API server with curl?
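Something like this should do, assuming the default API server port:

curl http://localhost:8008/debug.ping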
TrickySheep9
Is there a way to see a roadmap on such things?
Hmm I think we have some internal one, I have to admit these things change priority all the time (so it is hard to put an actual date on them).
Generally speaking, pipelines with functions should be out in a week or so, TaskScheduler + Task Triggers should be out at about the same time.
UI for creating pipelines directly from the web app is in the works, but I do not have a specific ETA on that
you can also get it flattened with:
task.get_parameters()
The type in both cases is string.
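For example (the task lookup here is just an illustration):

from clearml import Task

task = Task.get_task(task_id="<task_id>")  # or Task.current_task() from inside the run

# nested view, grouped by section
params = task.get_parameters_as_dict()

# flattened view, e.g. {"Args/batch_size": "64"} - values are returned as strings
flat_params = task.get_parameters()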