
Do you have a git repo link in the execution section of the experiment?
WackyRabbit7 If you have an idea for an interface to shut it down, please feel free to suggest.
Plan is to have it out in the next couple of weeks.
Together with a major update in v0.16
So I assume trains assumes I have nvidia-docker installed on the agent machine?
docker + nvidia-docker-runtime are assumed to be installed
nvidia/cuda docker image is pulled when requested (like any other container image)
Moreover, since I'm going to use
Task.execute_remotely
(and not through the UI), is there a way in code to specify the docker image to be used?
Sure, task.set_base_docker(docker_cmd='nvidia/cuda -v /mnt:/tmp')
Notice that you can not only pass the dock...
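If it helps, here is a minimal sketch of the code-only flow (the queue name and docker string are illustrative):
` from clearml import Task

task = Task.init(project_name='examples', task_name='remote docker run')
# request the container (and extra docker arguments) the agent should use
task.set_base_docker(docker_cmd='nvidia/cuda -v /mnt:/tmp')
# stop executing locally and enqueue the task for an agent running in docker mode
task.execute_remotely(queue_name='default') `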
MysteriousBee56 what do you mean by "local repository"?
Like no git server, or local commit before pushing it ?
Yes, but I'm not sure that they need to have a separate task
Hmm okay I need to check if this can be easily done
(BTW, the downside of that is that you can only cache a component, not a sub-component)
Hmm, you will have to set up the trains-server on a machine somewhere; it can be any machine, Windows / Mac / Linux
How are you getting:
beautifulsoup4 @ file:///croot/beautifulsoup4-split_1681493039619/work
is this what you had on the original manual execution? (i.e. not the one executed by the agent) - you can also look under the "org_pip" dropdown in the "installed packages" of the failed Task
Can you also make sure you did not check "Disable local machine git detection" in the clearml PyCharm plugin?
Hi WackyRabbit7 ,
Running in Docker mode gives you greater flexibility in terms of environment control, from switching CUDA versions to pre-compiled packages that are needed (think apt-get), etc. Specifically for DL, if you are using multiple TensorFlow versions, they are notorious for compiling against a specific CUDA version, and the only easy way to switch between them is different dockers. If you are a PyTorch user, then you are in luck, they have all the pytorch ver...
There is some overhead, but it should be negligible.
HappyLion37 did you check the https://github.com/allegroai/trains/tree/master/examples/services/hyper-parameter-optimization ?
You can very quickly get it distributed as well
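For reference, a minimal sketch of such an optimizer controller (the base task id, metric names and queue are placeholders; in trains the import would be trains.automation):
` from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange

task = Task.init(project_name='examples', task_name='HPO controller',
                 task_type=Task.TaskTypes.optimizer)
optimizer = HyperParameterOptimizer(
    base_task_id='<base-task-id>',  # the experiment to clone and mutate
    hyper_parameters=[UniformParameterRange('Args/lr', min_value=1e-4, max_value=1e-1)],
    objective_metric_title='validation',  # placeholder metric title
    objective_metric_series='accuracy',   # placeholder metric series
    objective_metric_sign='max',
    execution_queue='default',  # agents pull the cloned experiments from here
)
optimizer.start()
optimizer.wait()
optimizer.stop() `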
Yes, in tandem with the experiments (because they constantly log to the server).
That said, with 0.16 we added offline mode, so you can run in offline mode, then import the experiment into the system.
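Roughly, the offline flow looks like this (a sketch, assuming the set_offline / import_offline_session API):
` from clearml import Task

# before Task.init: nothing is sent to the server, everything is stored locally
Task.set_offline(offline_mode=True)
task = Task.init(project_name='examples', task_name='offline run')
# ... training / logging code ...
task.close()  # the local session folder/zip path is reported here

# later, from a machine with server access, import the stored session
Task.import_offline_session('/path/to/offline_session.zip') `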
When you clone the Task, it might be before it is done syncing git / packages.
Also, since you are using 0.16, you have to have a section name (Args, General, etc.)
How will task b use the parameters? (argparse / connect dict?)
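For the connect-dict route, something like this (names are illustrative; with 0.16 a plain connected dict lands under the "General" section):
` from clearml import Task

task = Task.init(project_name='examples', task_name='task b')
params = {'learning_rate': 0.001, 'batch_size': 32}
# when an agent runs a clone, values edited in the UI override these defaults
params = task.connect(params)
print(params['learning_rate']) `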
seems to run properly now
Are you saying the problem disappeared ?
Ohh, the controller task itself holds the artifacts?
I see. If you are creating the task externally (i.e. from the controller), you should probably call task.close() - it will return when everything is in order (including artifacts uploaded, and other async stuff).
Will that work?
JitteryCoyote63 to filter out archived tasks (i.e. exclude them):
Task.get_tasks(project_name="my-project", task_name="my-task", task_filter=dict(system_tags=["-archived"]))
Getting the last checkpoint can be done via:
Task.get_task(task_id='aabbcc').models['output'][-1]
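And to actually fetch that checkpoint file locally, something along these lines (the task id is a placeholder):
` from clearml import Task

task = Task.get_task(task_id='aabbcc')  # placeholder id
last_checkpoint = task.models['output'][-1]
# download (and cache) the model file; returns a local path
local_path = last_checkpoint.get_local_copy()
print(local_path) `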
NastyFox63
is there a limit to the search depth for this?
Yes, the Task.init auto package listing covers only the first depth (i.e. directly imported packages);
the reason is that the derivative packages should be resolved by pip when the agent remotely executes that Task.
Then, when the agent installs the Task, the entire python environment is stored, so that it is always fully reproducible.
Make sense?
Hi JealousParrot68
spinning the clearml-agent with docker support (i.e. each experiment is running inside its own container):
https://clear.ml/docs/latest/docs/clearml_agent#docker-mode
Basically you can specify a default docker to use (per agent) and a specific docker container to use per Task (configured in the UI under execution at the bottom)
pip install clearml==1.0.6rc2
Did not work?!
JitteryCoyote63 sure, this is how it was designed to work 🙂
After it finishes the 1st Optimization task, what's the next job that will be pulled?
The one in the highest queue (if you have multiple queues)
If you use fairness it will pull in round robin from all queues (obviously, inside every queue it is based on the order of jobs).
fyi, you can reorder the jobs inside the queue from the UI 🙂
DeliciousBluewhale87 wdyt?
Is it possible to make a connection to a S3 bucket via this authentication method with the open source version on EKS?
Hi BoredBluewhale23
In your setup, are we talking about agents running inside the Kubernetes cluster, or clients connecting from their own machine ?
ReassuredTiger98 I'm trying to debug what's going on, because it should have worked.
Regarding prints ...
` from clearml import Task
from time import sleep


def main():
    task = Task.init(project_name="test", task_name="test")
    d = {"a": "1"}
    print('uploading artifact')
    task.upload_artifact("myArtifact", d)
    print('done uploading artifact')
    # not sure if this helps but it won't hurt to debug
    sleep(3.0)


if __name__ == "__main__":
    main() `
I should mention this is run within a TF v1 session context
This should not be connected.
everything gets stored as intended (to the ClearML dashboard)
So in Jupyter it works? But from the command line it does not? What's the difference?
hmm I assume the reason is the cookie / storage changed?