Hi ShortElephant92
No, this is opt-in, so other than checking for updates once in a while, there is no traffic at all
I'm not sure I follow the example... Are you sure this experiment continued a previous run?
What was the last iteration on the previous run?
okay but still I want to take only a row of each artifact
What do you mean?
How do I get from the node to the task object?
pipeline_task = Task.get_task(task_id=Task.current_task().parent)
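A slightly fuller sketch of the same idea (purely illustrative): from inside a step, the current Task's parent id points back to the pipeline controller.
from clearml import Task
step_task = Task.current_task()  # the Task currently executing this step
pipeline_task = Task.get_task(task_id=step_task.parent)  # parent id is the pipeline controller Task
print(pipeline_task.name)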
Can you let me know if i can override the docker image using template.yaml?
No, you cannot.
But you can pass OS environment "CLEARML_DOCKER_IMAGE" to set a diff default one
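For example (a minimal sketch; the image name is a placeholder, and the variable has to be set before the Task/agent starts):
import os
# assumption: set before clearml is initialized, so it is picked up as the default docker image
os.environ["CLEARML_DOCKER_IMAGE"] = "nvidia/cuda:11.8.0-runtime-ubuntu22.04"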
@<1523720500038078464:profile|MotionlessSeagull22> you cannot have two graphs with the same title; the left side panel lists graph titles. That means you cannot have title=loss series=train and title=loss series=test on two different graphs, they will always be displayed on the same graph.
That said, when comparing experiments, each graph pair (i.e. title+series) will be displayed as a single graph, where the different series are the experiments.
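A minimal sketch of what that means in practice (project/task names are placeholders): both series share title=loss, so they show up as two curves on the same graph.
from clearml import Task
task = Task.init(project_name="examples", task_name="scalar demo")
logger = task.get_logger()
for i in range(10):
    # same title, different series -> one "loss" graph with two curves
    logger.report_scalar(title="loss", series="train", value=1.0 / (i + 1), iteration=i)
    logger.report_scalar(title="loss", series="test", value=1.2 / (i + 1), iteration=i)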
Nice ! 🙂
btw: clone=True means creating a copy of the running Task, but basically there is no need for that; with clone=False, it will stop the running process and launch it on the remote host, logging everything on the original Task.
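Roughly like this (queue name is a placeholder): with clone=False the local process stops and the same Task continues on the agent.
from clearml import Task
task = Task.init(project_name="examples", task_name="remote run")
# clone=False: stop the local run here and continue this very Task on a remote agent
task.execute_remotely(queue_name="default", clone=False, exit_process=True)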
@<1523707653782507520:profile|MelancholyElk85>
What's the clearml version you are using?
Just making sure... base_task_id has to point to a Task that is in "draft" mode, for the pipeline to use it
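Something along these lines (the draft task id and names are placeholders):
from clearml import PipelineController
pipe = PipelineController(name="pipeline demo", project="examples", version="1.0")
# base_task_id must point to a Task that is still in draft mode
pipe.add_step(name="stage_train", base_task_id="<draft_task_id>", execution_queue="default")
pipe.start()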
BTW: generally speaking the default source dir inside a docker will be: /root/.trains/venvs-builds/<python_version>/task_repository/<repository_name>/
for example: /root/.trains/venvs-builds/3.6/task_repository/trains.git/
Hi FunnyTurkey96
Any chance you can try to run with the latest from GitHub? (I just tested your code and it seemed to work on my machine.)
pip install git+
@<1523707653782507520:profile|MelancholyElk85> I just ran a single-step pipeline and it seemed to use the "base_task_id" without cloning it...
Any insight on how to reproduce ?
Yes, that's the part that is supposed to pull the GPU usage only for your process (and sub-processes) instead of globally for the entire system
Is there a way to filter experiments in a hyperparameter sweep based on a given range of a parameter/metric in the UI?
Are you referring to the HPO example? or the Task comparison ?
error in my-package setup command:
Okay this seems like an error in the setup.py you have in the "mypackage" folder
Does it say it runs something ?
(on the workers tab on the agents table it should say which Task it is running)
Check on which queue the HPO puts the Tasks, and if the agent is listening to these queues
Wait, is "SSH_AUTH_SOCK" defined on the host? it should auto mount the SSH folder as well?!
Added -v /home/uname/.ssh:/root/.ssh and it resolved the issue. I assume this is some sort of a bug then?
That is supposed to be automatically mounted. SSH_AUTH_SOCK being defined means the agent has to add the mount for the SSH_AUTH_SOCK socket so that the container can access it.
Try to run with SSH_AUTH_SOCK undefined and keep the force_git_ssh_protocol (no need to manually add the .ssh mount, it will do that for you)
But it does make me think: what if, instead of changing the optimizer, I launch a few workers that "pull" enqueued tasks and then report values for them in such a way that the optimizer is triggered to collect the results? Would that be possible?
But this is exactly how the optimizer works.
Regardless of the optimizer (OptimizerOptuna or OptimizerBOHB), both set the next step based on the scalars reported by the tasks executed by agents (on remote machines), then decide on the next set of parameters.
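As a rough sketch (base task id, metric names and queue are placeholders): the optimizer enqueues tasks, agents execute them and report scalars, and the optimizer reads those scalars to decide the next set of parameters.
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
from clearml.automation.optuna import OptimizerOptuna
optimizer = HyperParameterOptimizer(
    base_task_id="<template_task_id>",  # the Task to clone for each trial
    hyper_parameters=[UniformParameterRange("General/lr", min_value=1e-4, max_value=1e-1)],
    objective_metric_title="loss",
    objective_metric_series="test",
    objective_metric_sign="min",
    optimizer_class=OptimizerOptuna,
    execution_queue="default",
    max_number_of_concurrent_tasks=2,
)
optimizer.start()   # trials are enqueued; agents run them and report scalars
optimizer.wait()    # the optimizer collects the scalars and picks the next parameters
optimizer.stop()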
EnviousStarfish54 good news, this is fully reproducible
(BTW: for some reason this call will pop the logger handler clearml installs, hence the lost console output)
@<1523716917813055488:profile|CloudyParrot43> yes, the server upgrade deleted it 😞 we are redeploying a copy, it should take a few minutes
If possible, I would like to altogether avoid the fileserver and write everything to S3 (without needing every user to change their config)
There is no current way to "globally" change the default files server (I think this is part of the enterprise version, alongside vault etc.).
What you can do is use an OS environment variable to override the conf file: CLEARML_FILES_HOST=" "
PricklyRaven28 wdyt?
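For example (the bucket URL is a placeholder; this has to be set before the SDK is initialized):
import os
os.environ["CLEARML_FILES_HOST"] = "s3://my-bucket/clearml"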
I'm trying to achieve a workflow similar to the one
You mean running everything on a single machine (manually)?
So the "packages" are the packages you need in the steps themselves ?
Yes, docker was not installed on the machine
Okay, makes sense; we should definitely check that docker is installed before starting the daemon 😉
Ok, it would be nice to have a --user-folder-mounted option that does the linking automatically
It might be misleading if you are running on a k8s cluster, where one cannot just -v mount a volume...
What do you think?
How would one do this? Do I just share a link to the experiment, like
See "Share" in the right click menu on the experiment
Clearml 1.13.1
Could you try the latest (1.16.2)? I remember there was a fix specific to Datasets