Can you let me know if I can override the docker image using template.yaml?
No, you cannot.
But you can pass the OS environment variable "CLEARML_DOCKER_IMAGE" to set a different default one
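For instance, a minimal sketch of setting it from Python before the task is created (the image name is just an example, and this assumes the variable is read when the Task is created):
```python
import os

# Example image name; set before Task.init so the task records it as the
# default docker image an agent will use when running it in docker mode
os.environ["CLEARML_DOCKER_IMAGE"] = "nvidia/cuda:11.8.0-runtime-ubuntu22.04"

from clearml import Task

task = Task.init(project_name="examples", task_name="docker default demo")
```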
Are you running the agent in docker mode, or venv mode?
Can you manually ssh on port 10022 to the remote agent's machine?
```
ssh -p 10022 root@agent_ip_here
```
hmmm, somehow I have a bad feeling about it... Could you check the log, it should say something like "Collecting torch==1.6.0.dev20200421+cu101 from https://"
It should be right at the top of the installation. What do you have there?
We are always looking for additional talented people 😉 DM me...
replace it with: git+
No need for the repository name; this will ensure you always reinstall it (again, a pip feature)
Oh task_id is the Task ID of step 2.
Basically the idea is: you run your code once (let's call it debugging / programming), that run creates a task in the system, and the task stores the environment definition and the arguments used. Then you can clone that Task and launch it on another machine using the Agent (which will basically set up the environment based on the Task definition and run your code with the new arguments). The Pipeline is basically doing that for you (i.e. cloning a task chan...
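A minimal sketch of the clone-and-enqueue flow (the task ID, parameter name, and queue name here are hypothetical):
```python
from clearml import Task

# "abc123" stands in for the ID of the Task created by the original run
template = Task.get_task(task_id="abc123")

# Clone the template, override an argument, and send it to an agent queue
cloned = Task.clone(source_task=template, name="cloned run")
cloned.set_parameter("Args/learning_rate", 0.01)  # hypothetical parameter
Task.enqueue(cloned, queue_name="default")
```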
WickedGoat98 you mean the server is on your home network and the agents are in a VPS?
If this is the case, then regular "clearml-server" port forwarding is the only thing you need.
TCP ports 8008/8080/8081
Notice that on the agents you will have to specify the address of your home IP.
I would recommend using a host name and not an IP, since the artifact/debug sample links will contain direct links into the file server, and it is always safer to have a host name rather than an IP that can change.
E...
WickedGoat98 if this is the case, you can check this example. Same idea only "manual":
https://github.com/allegroai/trains/blob/master/examples/automation/task_piping_example.py
Hi DrabCockroach54
... and no logs for python script.
what do you mean by "no logs"? Is it clearml logs, or k8s pod logs?
I'm thinking of a few plots in my current in-house tooling which are slightly different than the standard charts we look at. For example a custom parallel coordinate chart that can use aggregations, categorical variables, etc.
This can be done by comparing experiments, then checking the Hyper-Parameters tab and selecting "graph" from the drop-down at the top
So my question in general is pertaining to if I would need to get better at Javascript if I were to make those changes. My guess is ...
Ohh, sure then editing git config will solve it.
btw: why would you need to do that? The agent knows how to do this conversion on the fly
Guys I think I lost context here 🙂 what are we talking about? Can I help in any way?
Are you running the agent in docker mode?
Is there a mount to the host machine ?
Okay, that looks good. Now in the UI start here and then go to the Artifacts tab.
Is it there?
The agent cannot use another user (it literally has no way of getting credentials). I suspect this is all a byproduct of the actual mount point.
Does this mean the model weights are stored on the clearml-server file system?
By default they are just logged (i.e. the local path is stored, but the file is not uploaded). If you want to automatically store the model, pass output_uri=True to Task.init, or point it at any object store / shared folder (e.g. output_uri='s3://bucket/folder'). ClearML will automatically create a subfolder for the Task, and upload all models/artifacts to it.
` task = Task.init(project_name='ex...
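A complete version might look like this (project and task names are hypothetical):
```python
from clearml import Task

# output_uri=True uploads models/artifacts to the default file server;
# an object-store URI such as 's3://bucket/folder' works as well
task = Task.init(
    project_name='examples',
    task_name='training',
    output_uri=True,
)
```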
GiganticTurtle0
What do you mean by "reuse_last_task_id"? Each component is always a new Task generated (unless it is cached, in which case it will reuse the previously executed one)
What am I missing here?
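For reference, a minimal sketch of a cached pipeline component (names and logic are illustrative):
```python
from clearml.automation.controller import PipelineDecorator

# With cache=True, re-running the pipeline with identical code and arguments
# reuses the previously executed component Task instead of creating a new one
@PipelineDecorator.component(return_values=["result"], cache=True)
def preprocess(data_path: str):
    result = data_path.upper()  # placeholder work
    return result
```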
Hmm, are you running the clearml-agent on this machine? (This is the orchestration module; it will spin up the Tasks and the dockers on the GPUs)
Also, the IDs as an entry in the Configuration will not be clickable in the web interface, right?
No, but on the other hand, it will be editable if you clone the Task.
Which brings me to a different scenario,
In the original one, the Main Task created the Dataset, i.e. an Output Dataset (and stored it both ways).
I could think of a situation where the Task is using the Dataset as input (say preprocessing or training); then we might want to enable users to clone and change the Input dataset. wdyt?
Yep it should :)
I assume you add the previous iteration somewhere else, and this is the cause for the issue?
Hi @<1523702000586330112:profile|FierceHamster54>
Nope 🙂 nothing to worry about.
That said, do notice the open-source file server is not secure. This does not mean it will spill data on the server, but it does mean that you should probably put it behind a VPN, or use S3/GCP/Azure if this is open to the public internet
SkinnyPanda43 issue verified, this seems to be related to python 3.9 and subprocesses.
Let me check what we can do
Hi GloriousPenguin2 , Sorry this is a bit confusing. Let me expand:
When converting into a plotly object (the default), you cannot really control the dimensions of the plot in the UI programmatically; you can, however, drag the separator and expand width / height. If you pass the argument report_image=True to report_matplotlib_figure, it will create a static image from the matplotlib figure (as rendered locally) and use that as the figure. This way you get exactly WYSIWYG, but the...
Are you also adding those metrics to the experiment table as extra columns?
Thank you JuicyOtter4 ! 😍
. Is there a way to programmatically set that in the code?
Something like?
```python
task = Task.init(...)
task.set_comment("best thing ever")
```
at the end of the manual execution
(probably we should change that to "description"?!)
Which means there will be at least multiple published model entries of the same model over time?
Only the specific one will be published (not all the Models the Task created)
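If it helps, a hedged sketch of publishing just one output model of a task (the task ID is hypothetical, and this assumes the task's Model objects expose publish()):
```python
from clearml import Task

task = Task.get_task(task_id="abc123")  # hypothetical task ID

# Publish only the latest output model of this task,
# leaving any other models the task created untouched
task.models["output"][-1].publish()
```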