Hi @<1576381444509405184:profile|ManiacalLizard2>, I would suggest playing with the Task object in Python. You can do dir(<TASK_OBJECT>) in Python to see all of its parameters/attributes.
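For example (a minimal sketch; the project/task names are placeholders):
```python
from clearml import Task

# a current (or newly created) task works just as well as a fetched one
task = Task.init(project_name="examples", task_name="inspect-task")

# list every attribute/method the Task object exposes
print(dir(task))
```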
Elasticsearch, MongoDB & Redis. They are all inside containers on the machine running the server.
Could be. If it's not picking up what you expect, then something is misconfigured.
Hi @<1840924589736071168:profile|EmbarrassedCrab49>, the AI Gateway is part of the Enterprise version only; it's not available in the open-source version.
Just making sure we cover all bases - did you update the optimizer to use a base task with _allow_omegaconf_edit_: True?
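Something along these lines (a sketch, assuming the base task is still in draft and uses the standard Hydra section name):
```python
from clearml import Task

# fetch the draft base task and allow OmegaConf edits
# ("Hydra/_allow_omegaconf_edit_" is the usual Hydra section parameter)
base_task = Task.get_task(task_id="<BASE_TASK_ID>")
base_task.set_parameter("Hydra/_allow_omegaconf_edit_", True)
```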
Hi @<1544128915683938304:profile|DepravedBee6> , the task that created the model would also get published.
Regarding your second question, I think this is what you are looking for - None
Hi @<1792727007507779584:profile|HollowKangaroo53> , if you only want to delete old checkpoints you would need to write some automation for that. I'm guessing that the cleanup example can be a good baseline. Then just fetch the artifacts list from tasks and use StorageManager to delete the relevant files from the files server.
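A rough sketch of that automation (untested; it assumes your SDK version has Task.delete_artifacts and that checkpoints are stored as artifacts named checkpoint*):
```python
from clearml import Task

# iterate over tasks in a project and drop all but the newest checkpoint
for task in Task.get_tasks(project_name="my_project"):
    checkpoints = sorted(
        name for name in task.artifacts if name.startswith("checkpoint")
    )
    # keep the latest checkpoint, delete the rest (and their stored files)
    if len(checkpoints) > 1:
        task.delete_artifacts(checkpoints[:-1])
```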
However, when downloading the log manually, it appears all the data is there?
Hi @<1570220858075516928:profile|SlipperySheep79> , you can set various cache limitations in clearml.conf
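For example (a sketch; double-check the exact keys against the reference clearml.conf for your version):
```
sdk {
    storage {
        cache {
            # where downloaded artifacts/datasets are cached
            default_base_dir: "~/.clearml/cache"
            # max number of entries the cache manager keeps (assumed key)
            default_cache_manager_size: 100
        }
    }
}
```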
The issue you encountered is specifically regarding Datasets? If that's the case, I think this is the section you're looking for - None
Let me know if it changes anything. Of course, rerun the agent afterwards.
Hi @<1585078752969232384:profile|FantasticDuck7>, I think you can pass this in the bash setup script that runs when the docker container spins up.
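For instance, via the agent section of clearml.conf (a sketch; the commands are placeholders):
```
agent {
    # shell lines executed inside the docker before the task starts
    extra_docker_shell_script: [
        "apt-get update",
        "apt-get install -y <your-package>",
    ]
}
```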
I think this might be what you're looking for:
https://clear.ml/docs/latest/docs/references/api/workers
https://clear.ml/docs/latest/docs/references/api/queues
You can access all reports through the REST API
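For example, with the Python APIClient wrapper around the REST API (a small sketch):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# list registered workers and queues through the REST API
workers = client.workers.get_all()
queues = client.queues.get_all()
print([w.id for w in workers])
print([q.name for q in queues])
```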
The question is how ClearML knows to create the env and what files it copies to the task.
Either by automatically detecting the packages from requirements.txt, OR by using the packages listed on the task itself.
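For example, to override the auto-detection with an explicit requirements file (a sketch; call it before Task.init, and the path is a placeholder):
```python
from clearml import Task

# pin the environment to a specific requirements file instead of the
# automatically detected imports
Task.force_requirements_env_freeze(requirements_file="requirements.txt")
task = Task.init(project_name="examples", task_name="pinned-env")
```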
Hi @<1524560082761682944:profile|MammothParrot39> , how do you usually fetch metadata from a dataset?
ProudElephant77, I think you might need to finalize the dataset for it to appear.
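Something like this (a sketch; names/paths are placeholders):
```python
from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="my_dataset")
ds.add_files("data/")
ds.upload()
# without finalize() the dataset stays in an uploading state and may not show up
ds.finalize()
```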
Hi @<1523701260895653888:profile|QuaintJellyfish58> , if you run in docker mode you can easily add environment variables.
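For example, in the agent section of clearml.conf (a sketch; the variable name/value are placeholders):
```
agent {
    # extra arguments passed straight to `docker run`
    extra_docker_arguments: ["-e", "MY_VAR=my_value"]
}
```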
Can you elaborate a bit on your use case? If it's python code, why not just put it in the original file or import from the repo?
RattyLouse61, SuccessfulKoala55, I think your solution is better 🙂
WackyRabbit7, how did you report the table? Can you please provide an example of the table's data structure?
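For reference, this is how I'd expect the table to be reported (a sketch using a pandas DataFrame):
```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="table-report")

# report a DataFrame as a table plot
df = pd.DataFrame({"metric": ["loss", "acc"], "value": [0.12, 0.93]})
task.get_logger().report_table(
    title="results", series="summary", iteration=0, table_plot=df
)
```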
REMOTE MACHINE:
- git ssh key is located at ~/.ssh/id_rsa
Is this also mounted into the docker container itself?
Hi RobustRat47 , what if you run them as sub-processes?
The one sitting in the repository
Hi @<1571308079511769088:profile|GentleParrot65>, ideally you shouldn't be terminating instances manually. However, do you mean that the autoscaler spins down a machine, still recognizes it as running, and refuses to spin up a new machine?
Hi @<1547028074090991616:profile|ShaggySwan64>, you can try this. However, Elastic takes up space according to the volume of metrics you're saving, so clearing out some older experiments would free up space. What do you think?
Can you check the machine status? Is the storage running low?
Hi AverageRabbit65 , can you elaborate on what you're trying to do?
ClearML-Agent will automatically create a venv and install everything
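For example (the queue name is a placeholder):
```
# run an agent against the "default" queue; it builds the venv and
# installs the task's packages before executing it
clearml-agent daemon --queue default
```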
VexedCat68 hi!
Hi @<1578555761724755968:profile|GrievingKoala83>, please provide a standalone code snippet - a single, self-contained piece of code that I can run without modifications, imports included.
GiganticTurtle0 Hi 🙂
How about Task.get_task()?
https://clear.ml/docs/latest/docs/references/sdk/task#taskget_task
You'd need to provide it the project name and task name
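A minimal sketch (names are placeholders):
```python
from clearml import Task

task = Task.get_task(project_name="examples", task_name="my_experiment")
print(task.id, task.get_status())
```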
If you run in docker mode you can specify a startup shell script.
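For instance, in clearml.conf (a sketch; the key name follows the agent's reference config, and the commands are placeholders):
```
agent {
    # bash script executed when the docker container starts,
    # before the task environment is set up
    docker_init_bash_script: [
        "#!/bin/bash",
        "echo 'container starting'",
    ]
}
```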