not sure how for debug samples and scalars ...
But theoretically, with the above, one should be able to fully reproduce a run
very hard to diagnose with this tiny bit of log ...
what about the log around when it tries to actually clone your repo?
on the same or a different machine!
because when I was running both agents on my local machine everything was working perfectly fine
This is probably because you (or someone) had set up an SSH public key with your git repo sometime in the past
even if it's just a local image? You need a docker repository even if it will only be on the local PC?
normally, you should have an agent running against a "services" queue, as part of your docker-compose. You just need to make sure that you populate the appropriate configuration on the server (aka set the right environment variables for the docker services)
That agent will run as long as your self-hosted server is running
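A minimal sketch of what that usually looks like in the server's docker-compose.yml, assuming the standard self-hosted layout (the service name and exact variable set may differ between versions):

```yaml
# hedged sketch of the services-agent entry in the ClearML server docker-compose.yml
agent-services:
  image: allegroai/clearml-agent-services:latest
  restart: unless-stopped
  environment:
    CLEARML_HOST_IP: ${CLEARML_HOST_IP}                 # address of your self-hosted server
    CLEARML_API_ACCESS_KEY: ${CLEARML_API_ACCESS_KEY}   # credentials the agent uses
    CLEARML_API_SECRET_KEY: ${CLEARML_API_SECRET_KEY}
    CLEARML_AGENT_DEFAULT_BASE_DOCKER: "ubuntu:22.04"   # default container for tasks
```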
you can upload the df as an artifact.
Or put the statistics in a DataFrame and upload that as an artifact?
if you want plots, you can simply generate them with matplotlib and clearml will upload them to the Plots or Debug Samples section
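A minimal sketch of both ideas, assuming a pandas DataFrame `df` (the project, task, and column names are illustrative):

```python
import matplotlib.pyplot as plt
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="stats-report")  # illustrative names

df = pd.DataFrame({"loss": [0.9, 0.5, 0.3]})  # stand-in for your statistics

# upload the DataFrame as an artifact attached to the task
task.upload_artifact(name="statistics", artifact_object=df)

# any matplotlib figure shown with plt.show() is auto-captured under Plots;
# you can also report a figure explicitly:
fig = plt.figure()
df["loss"].plot(ax=fig.gca())
task.get_logger().report_matplotlib_figure(
    title="loss curve", series="loss", figure=fig, iteration=0
)
```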
I really like how you made all this decoupled!! 🎉
is this MongoDB-style filtering?
Nice ! That is handy !!
thanks !
but afaik this only works locally and not if you run your task on a clearml-agent!
Isn't the agent using the same clearml.conf?
We have our agent running tasks and uploading everything to the cloud. As I said, we don't even have a file server running
what does your clearml.conf look like?
if you have 2 agents serving the same queue and you send 2 tasks to that queue, each agent should take one task
But if you queue sequentially, one task, waiting for it to finish before queuing the next, then it is random which agent takes each task. It can be the same one as for the previous task
Are you saying that you have 1 agent running a task and 1 agent sitting idle, while there is a task waiting in the queue that no one is processing??
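For reference, "sending 2 tasks to that queue" is just enqueueing both; a sketch with placeholder task IDs and queue name:

```python
from clearml import Task

# hypothetical task IDs; with two agents serving "default",
# each idle agent should pull one of these
for task_id in ["<task-id-1>", "<task-id-2>"]:
    Task.enqueue(task=task_id, queue_name="default")
```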
so in your case, the clearml-agent conf contains multiple credentials, each for a different cloud storage that you potentially use?
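Something like this clearml.conf fragment, assuming S3-compatible storage (bucket name, host, and keys are placeholders):

```
# hedged sketch: one credential set per bucket / S3-compatible host
sdk {
  aws {
    s3 {
      credentials: [
        {
          bucket: "bucket-on-aws"      # placeholder bucket name
          key: "<aws-access-key>"
          secret: "<aws-secret-key>"
        },
        {
          host: "minio.local:9000"     # placeholder self-hosted S3-compatible store
          key: "<minio-access-key>"
          secret: "<minio-secret-key>"
          secure: false                # plain http for the local endpoint
        }
      ]
    }
  }
}
```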
Try to set CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=true
then start clearml-agent in the terminal
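i.e. something along these lines (the queue name is a placeholder):

```bash
# skip creating/installing the python virtualenv; reuse the existing environment
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
# start the agent in the same terminal so it inherits the variable
clearml-agent daemon --queue default --foreground
```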
See None
I also use this: None
Which can give more control
and just came across this: None
That sounds like what you may be looking for
all good. Just wanted to know in case I missed it
so I guess it needs to be set inside the container
I use an SSH public key to access our repo ... I never tried to provide credentials to clearml itself (via clearml.conf), so I cannot help much here ...
Please refer to here None
The doc needs to be a bit clearer: it requires a path, not just true/false
what about having 2 agents, one on each GPU, on the same machine, serving the same queue? So that whenever you enqueue a task, whichever agent (and thus GPU) is available will take it
the configs that I mention above are the clearml.conf files for each agent
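A sketch of starting those two agents (the queue name is a placeholder; --gpus pins each agent to one device):

```bash
# one agent per GPU, both serving the same queue;
# whichever is idle picks up the next enqueued task
clearml-agent daemon --detached --queue default --gpus 0
clearml-agent daemon --detached --queue default --gpus 1
```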
Oh, I was assuming you were passing the entire DB backups to the cloud.
Yes, that is what I want to do.
So I need to migrate both the MongoDB database and the Elasticsearch database from my local docker instance to the equivalent in the cloud?
how did you deploy your clearml server ?
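If it is the standard docker-compose deployment, the data usually lives under /opt/clearml/data; a hedged sketch of the backup step (stop the server first; the exact sub-folders depend on your server version):

```bash
# stop the server so the mongodb / elasticsearch files are in a consistent state
docker-compose -f /opt/clearml/docker-compose.yml down
# archive the data folder (contains the mongo and elastic sub-folders)
sudo tar czf ~/clearml_backup_data.tgz -C /opt/clearml/data .
```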
like for dataset_dir
I would expect a single path, not an array containing the same path twice
Without clearml-session, how could one set this up?? I cannot find any documentation/guide on how to do this ... The official doc seems to say: you start a code server that then connects to vscode.dev. Then from your laptop, you go to vscode.dev in order to access your code server. Is there any way to do this without going through vscode.dev???