SubstantialElk6 what's the command line you are using?
Something like that would be a really good starting place.
This is actually JS (TypeScript) ... not Python, not sure how to continue from there
Would it also be possible to query based on multiple user properties?
Multiple key/value user properties are currently not that easy to query,
but multiple tags are quite easy to do:
tags=["__$all", "tag1", "tag2"]
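For reference, a minimal sketch of that query from the Python SDK (assuming clearml's Task.get_tasks accepts a tags filter; the project name and tags here are hypothetical):

```python
# "__$all" asks the server to return only tasks that carry ALL of the listed tags
TAG_FILTER = ["__$all", "tag1", "tag2"]

def find_tasks_with_all_tags(project_name):
    """Query tasks matching every tag in TAG_FILTER (needs a configured clearml server)."""
    from clearml import Task  # deferred import: only needed when actually querying
    return Task.get_tasks(project_name=project_name, tags=TAG_FILTER)
```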
@<1527459125401751552:profile|CloudyArcticwolf80> what are you seeing in the Args section?
What exactly is not working?
I believe AnxiousSeal95 is.
ElatedFish50 any specific reason for the question?
Interesting... TrickyRaccoon92 could it be the validation phase was creating a new TensorBoard file?
Hi JitteryCoyote63, is there a callback for that?
instead of terminating them once they are inactive, so that they could be available immediately when they are needed.
JitteryCoyote63 I think you can increase the idle timeout on the autoscaler and achieve the same behavior, no?
Hi SmallDeer34
On the SaaS you can right-click on an experiment and publish it
This will make the link available for everyone, would that help?
ReassuredTiger98 if this user passes the following to the task as docker args, it might work:
'-e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1'
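A hedged sketch of attaching that argument from code (assuming clearml's Task.set_base_docker accepts a docker_arguments parameter; the helper name here is made up):

```python
# docker argument that tells the agent to skip installing a new python environment
SKIP_ENV_INSTALL = "-e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1"

def use_container_python_env(task):
    """Attach the docker argument to an existing clearml Task (sketch)."""
    # the agent passes docker_arguments straight to `docker run`
    task.set_base_docker(docker_arguments=SKIP_ENV_INSTALL)
```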
That being said, it returns None for me when I reload a task, but it's probably something on my side.
MistakenDragonfly51 just making sure, you did call Task.init, correct ?
What does from clearml import Task; task = Task.current_task()
return?
Notice that you need to create the Task before actually calling Logger.current_logger()
or Task.current_task()
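A small sketch of that ordering (assumption: the current_* accessors return None until Task.init has run in the process; project/task names are hypothetical):

```python
def init_then_log(project="examples", name="demo"):
    """Correct order: Task.init first, then the current_* accessors (needs a server)."""
    from clearml import Task, Logger  # deferred import: only needed when connected
    task = Task.init(project_name=project, task_name=name)
    logger = Logger.current_logger()  # now returns the live logger instead of None
    return task, logger

def task_is_available(task):
    """current_task() returns None when no Task was created in this process."""
    return task is not None
```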
Hmm yes, @<1570220858075516928:profile|SlipperySheep79> I think you are right, in your case it makes sense to add this option.
Could you open a GitHub issue with the feature request? It should be fairly easy to add, and we use GitHub to make sure we track those requests.
wdyt?
BTW: Docker Hub is free and relatively cheap to upgrade
(GitHub also offers a docker registry)
JitteryCoyote63 I found it
Are you working in docker mode or venv mode ?
You can change the CWD folder: if you put . as the working dir it will be the root of the git repo, but you can use any subfolder; obviously you need to change the script path to match the folder, e.g. ./folder/script.py etc.
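To illustrate the relation between the working dir and the script path, a small hypothetical helper (not part of clearml) that computes the entry point relative to a chosen working directory:

```python
import posixpath

def entry_point_for(working_dir, script_path_in_repo):
    """Return the script path to use, relative to the chosen working directory.

    With working_dir "." the entry point is the repo-relative path itself;
    with a subfolder as the working dir, that prefix is stripped.
    """
    if working_dir in (".", ""):
        return script_path_in_repo
    return posixpath.relpath(script_path_in_repo, working_dir)
```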
Hmm apparently it is not passed, but it could be.
Would the object itself be enough to get the values? Wouldn't it make sense to get them from outside somehow? (I'm assuming there is one set of args used at any given moment?)
Hi ThoughtfulBadger56
Just add --stop to the clearml-agent command
(the exact same command as you used to spin it up, just add --stop at the end and it will stop it; or just run clearml-agent daemon --stop and it will iteratively close them)
I think we should open a GitHub issue and get some more feedback, maybe we should just add support on the backend side?
ZanyPig66 this should have worked, any chance you can send the full execution log (in the UI "results -> console" download full log) and attach it here? (you can also DM it so it is not public)
OddAlligator72 just so I'm sure I understand your suggestion:
pickle the entire locals() on the current machine,
then on the remote machine create a mock Python entry point, restore the locals() and execute the function?
BTW:
Making this actually work regardless of the machine is some major magic in motion ...
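As a toy, in-process sketch of the suggestion (real cross-machine execution also needs the same code importable on the remote side, which is where the magic lives; all names here are made up):

```python
import pickle

def pack_call(func, local_vars):
    """'Pickle the locals' plus the target function on the current machine.

    Note: pickle stores module-level functions by reference, so the remote
    side must be able to import the same module to restore them.
    """
    payload = dict(local_vars)
    payload["__target__"] = func
    return pickle.dumps(payload)

def remote_entry_point(blob):
    """Mock entry point on the 'remote' machine: restore locals and execute."""
    state = pickle.loads(blob)
    func = state.pop("__target__")
    return func(**state)

def scale(x, factor):
    """Example target function."""
    return x * factor
```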
Is there a quicker way to abort all running experiments in a project? I have over a thousand running anonymous data tasks in a specific project and I want to abort them before debugging them.
We are adding "select all" in the next UI version to do that as quickly as possible
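In the meantime, something along these lines could work programmatically (a sketch assuming Task.get_tasks accepts a task_filter dict and tasks expose mark_stopped(); the status value and project name are assumptions):

```python
def fetch_running_tasks(project_name):
    """Fetch in-progress tasks in a project (needs a configured clearml server)."""
    from clearml import Task  # deferred import: only needed when connected
    return Task.get_tasks(
        project_name=project_name,
        task_filter={"status": ["in_progress"]},
    )

def stop_all(tasks):
    """Abort every task in the list; returns how many were stopped."""
    count = 0
    for t in tasks:
        t.mark_stopped()  # marks the run as stopped server-side
        count += 1
    return count
```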
RobustRat47 what's the Triton container you are using?
BTW, the Triton error is: model_repository_manager.cc:1152] failed to load 'test_model_pytorch' version 1: Internal: unable to create stream: the provided PTX was compiled with an unsupported toolchain.
https://github.com/triton-inference-server/server/issues/3877
Alternatively I understand I can also run the agent using...
No, you should not. If you are running the agent inside a container, it cannot work in docker mode and spin up its own containers.
Bottom line: use clearml-agent daemon