I think it basically needs the ability to raise/terminate instances
I'm afraid there is no such capability at the moment. However, I'd suggest opening a GitHub feature request for this 🙂
Hi @<1533257411639382016:profile|RobustRat47> , what would you define as most metrics?
Are you self-hosted or using the community server?
@<1719524641879363584:profile|ThankfulClams64> , there is a difference between models & tasks/experiments. Everything during training is automatically reported to the task/experiment, not the model. If you want to add anything to the models themselves you have to add it manually. (Keep in mind that tasks/experiments are separate entities from models, although there is a connection between the two)
Once you manually add either metadata or metrics you will be able to add custom columns. This is not...
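For example, adding metadata to a task's output model could look roughly like this (just a sketch, method names assumed - check your SDK version):
```python
from clearml import Task

# Rough sketch: attach metadata to a task's output model so it can later
# be shown as a custom column in the Models table.
task = Task.get_task(task_id="<your-task-id>")
model = task.models["output"][-1]             # last output model of the task
model.set_metadata("dataset_version", "2.1")  # arbitrary example key/value
print(model.get_all_metadata())
```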
RobustRat47 , do you mean the weights file?
The following command should give you something:
docker logs --follow clearml-elastic
Hi WorriedRabbit94 , what do you see in the execution section of the experiment when you run it locally?
Hi EnviousPanda91 , are you running in docker mode? It looks like you're trying to use a CUDA image on a machine without a GPU
Hi TeenyHamster79 ,
I think the API you're looking for is tasks.get_by_id
and the fields you're looking for are:
data.tasks.0.execution.queue.name
data.tasks.0.execution.queue.id
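If it's more convenient from Python, roughly the same information should be reachable through the SDK (a sketch; the exact field layout may differ between server versions):
```python
from clearml import Task

# Fetch the task and read which queue it was enqueued to.
task = Task.get_task(task_id="<your-task-id>")
print(task.data.execution.queue)
```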
Tell me if it helps 🙂
Hi @<1652120623545061376:profile|FrightenedSealion82> , do you see any errors in the apiserver or the webserver containers?
Hi @<1753589101044436992:profile|ThankfulSeaturtle1> , not sure I understand what you mean. Can you please elaborate?
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , the agent will try to use the relevant python version according to what the experiment ran on originally. In general, it's best to run inside dockers with a docker image specified per experiment 🙂
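For example, something along these lines should pin the image for a specific experiment (a sketch; argument names vary a bit between SDK versions, older ones take a single docker_cmd string):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker per experiment")

# Ask the agent to run this experiment inside a specific docker image.
task.set_base_docker(docker_image="python:3.10-slim")
```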
WittyOwl57 , when creating credentials, the credentials are associated with your user. So even if you give those credentials to others, the experiments in the system will show up under the user whose credentials were used when running the experiment 🙂
Hope this helps
You can edit MongoDB manually to change the user on experiments (I strongly suggest against it). Besides that, I'm afraid not. Each user would have to create separate credentials for themselves under their own user in the system.
A suggestion I might have is using the 'Description' field to write down the relevant user manually and adding that as a column in your view. The small cogwheel near the top right (next to the refresh button) will give you the option to add that column.
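If you'd rather set it from code than through the UI, something like this should do it (assuming the UI 'Description' field maps to the task comment):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="shared credentials run")

# Record who actually launched this run in the task's Description/comment field.
task.set_comment("Launched by: alice")
```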
Hope this helps...
WittyOwl57 , it indicates the user who created the object. What sign-in method are you and your team using?
Hi @<1529995795791613952:profile|NervousRabbit2> , if you're running in docker mode you can easily pass it via the docker_args parameter, for example setting environment variables with the -e docker argument.
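Per experiment it could look something like this (a sketch; argument names assumed, same caveat as above about SDK versions):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="env vars via docker args")

# Extra docker arguments for the agent's container, e.g. environment variables.
task.set_base_docker(
    docker_image="python:3.10-slim",
    docker_arguments="-e MY_TOKEN=abc123 -e DEPLOY_ENV=staging",
)
```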
Hi @<1569496075083976704:profile|SweetShells3> , do you mean to run the CLI command via python code?
Hi @<1571308079511769088:profile|GentleParrot65> , ideally you shouldn't be terminating instances manually. However, do you mean that the autoscaler spins down a machine but still recognizes it as running and refuses to spin up a new one?
Hi @<1523703572984762368:profile|SlimyDove85> , conceptually I think it's possible. However, what would the use case be? In the end it would all be abstracted into a single pipeline
Alright. What OS are you on? Also, what is the status of this deployment: is it a clean install, a version upgrade, or did it just stop working after a restart? 🙂
I would suggest directly using the API for this. Then simply look at what the web UI sends as a reference 🙂
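For example, a minimal sketch using the SDK's APIClient (the endpoint and filters here are just placeholders; check the browser's network tab for the exact call the UI makes):
```python
from clearml.backend_api.session.client import APIClient

# Uses the credentials from clearml.conf / environment variables.
client = APIClient()

# Placeholder call - swap in whatever endpoint the web UI uses.
tasks = client.tasks.get_all(status=["completed"])
for t in tasks:
    print(t.id, t.name)
```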
Is there a vital reason why you want to keep the two accounts separate when they run on the same machine?
Also, what if you try aligning all the cache folders for both configuration files to use the same folders?
Hi @<1655744373268156416:profile|StickyShrimp60> , happy to hear you're enjoying ClearML 🙂
To address your points:
Is there any way to lock the settings of scalar plots? In particular, I have scalars that are easiest to compare on a log scale, but that setting reverts to the default linear scale with any update of the comparison (e.g. adding/removing experiments to the comparison).
I would suggest opening a GitHub feature request for this
Are there plans of implementing a simple feature t...
I see. Sounds like a good idea! Please open a GitHub feature request 🙂
Hi, I think it's stated in the Slack integration docs:
Can you paste the output up until the stuck point? Sounds very strange. Does it work when it's not enqueued? Also, what versions of clearml-agent & server are you on?
What OS are you on?
Regarding your question - I can't recall for sure. I think it still creates a virtualenv
Interesting idea. From the looks of it, even when searching for the task ID manually, archived experiments aren't fetched. Maybe open a GitHub issue for this, really cool feature idea 🙂