Hi!
Can you say what's the size of your clearml folders?
Hi @<1529633468214939648:profile|CostlyElephant1> , it looks like that's the environment setup. Can you share the full log?
Hi @<1523702496097210368:profile|ScantChimpanzee51> , that information is saved on the task object. You can fetch it either with the API or the SDK
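As a minimal sketch (the task ID and field choices are placeholders, and this needs a configured ClearML server), fetching a task's stored info with the SDK might look like:

```python
def fetch_task_info(task_id: str) -> dict:
    # Lazy import so the sketch reads without clearml installed;
    # Task.get_task fetches the full task object from the server.
    from clearml import Task
    task = Task.get_task(task_id=task_id)
    return {
        "name": task.name,
        "status": str(task.get_status()),
        "parameters": task.get_parameters(),  # flattened hyperparameters
    }
```

The same data is also available from the REST API (tasks.get_by_id) if you prefer raw calls.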
I think as long as they have different hashes you will have two different files
I think you can simply save the creating task ID as a configuration parameter (I'm just thinking out loud)
Hi @<1695969549783928832:profile|ObedientTurkey46> , this is supported in the Scale/Enterprise licenses of ClearML (external IdP support). API access is always done using credentials.
In that case then yes, install the agent on top of the machine with the A100 with 8 GPUs
Can you check in the INFO section of an individual step which queue it was enqueued into? Can you see them in the Queues page?
Hi @<1523702307240284160:profile|TeenyBeetle18> , what do you mean by dev containers?
Hi @<1696331935023894528:profile|BoredBee87> , role based access controls are available only in the Scale & Enterprise licenses. These licenses do support full on premise deployments. Besides role based access controls there are many other features available. I'd suggest contacting ClearML directly to hear about other options 🙂
Are you running it inside a docker yourself or is it run via the agent?
MuddySquid7 , Yes! Reproduced like a charm. We're looking into it 🙂
Do you have conda installed on the machine running the agent? Also, which versions are involved (conda/agent)?
Hi @<1635813046947418112:profile|FriendlyHedgehong10> , the pipeline basically creates tasks and pushes them into execution. You can click on each step and view the full details. In the info section you can see into which queue each step was pushed. I'm assuming there are no agents listening to the queue
I think all of them
@<1526734383564722176:profile|BoredBat47> , what happens if you configure it like @<1523701087100473344:profile|SuccessfulKoala55> is suggesting?
What do you mean read params file?
Hi @<1724235687256920064:profile|LonelyFly9> , what data/information are you looking to get using the user id?
For that you have the autoscaler
You can set up multiple instances of the autoscaler each spinning machines on different accounts
Hi @<1742355077231808512:profile|DisturbedLizard6> , you can open a GitHub feature request for this to be added 🙂
Hi @<1539417873305309184:profile|DangerousMole43> , in that case I think you can save the file path as a configuration parameter in the first step, then read it back from that step in the next one. Does that make sense?
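A minimal sketch of that pattern (the parameter name and path are placeholders; assumes both steps run as ClearML tasks):

```python
def save_output_path():
    # In the first step: store the produced file path as a parameter
    from clearml import Task
    task = Task.current_task()
    task.set_parameter("General/output_file", "/data/processed/train.csv")

def read_output_path(prev_step_task_id: str) -> str:
    # In the next step: read the path back from the previous step's task
    from clearml import Task
    prev = Task.get_task(task_id=prev_step_task_id)
    return prev.get_parameter("General/output_file")
```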
TartSeal39 , Hi 🙂
Do I understand correctly that you want to push parameters for Task.create() from a .yml file?
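If that's the goal, a hedged sketch (the file name and keys are illustrative; assumes PyYAML and clearml are installed):

```python
def load_create_kwargs(path: str) -> dict:
    import yaml  # lazy import: PyYAML
    with open(path) as f:
        # e.g. the .yml could hold: project_name, task_name, repo, script
        return yaml.safe_load(f) or {}

def create_task_from_yaml(path: str):
    from clearml import Task
    return Task.create(**load_create_kwargs(path))
```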
Hi @<1614069770586427392:profile|FlutteringFrog26> , if I'm not mistaken ClearML doesn't support running from different repos. You can only clone one code repository per task. Is there a specific reason these repos are separate?
Hi @<1523703746951909376:profile|SoggyBeetle95> , if you already ran v1.10 then the DB was migrated, and 1.9 cannot run on it. ClearML does not support downgrades.
If you need to downgrade, the best I can suggest is restoring your DB from a backup, if you created one before the upgrade.
Hi @<1649946171692552192:profile|EnchantingDolphin84> , it's not a must but it would be the suggested approach 🙂
RoughTiger69 Hi!
Regarding your questions:
You can call Task.force_requirements_env_freeze(requirements_file='repo/some_folder/requirements.txt')
before your task = Task.init(...)
You can set sdk.development.detect_with_pip_freeze=true in your ~/clearml.conf file for full env detection from the environment you're running from
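For reference, a minimal sketch of the relevant clearml.conf snippet (HOCON-style; key path follows the sdk.development.detect_with_pip_freeze setting mentioned above):

```
# ~/clearml.conf - enable full pip-freeze based environment detection
sdk {
    development {
        detect_with_pip_freeze: true
    }
}
```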
Regarding 1 & 2 - I suggest always keeping the API docs handy - https://clear.ml/docs/latest/docs/references/api/definitions
I love using the API since it's so convenient. So to get to business -
To select all experiments from a certain project you can use tasks.get_all with filtering according to the API docs (I suggest you also use the web UI as a reference - if you hit F12 you can see all the API calls and their responses. This can really help you get an understanding of its capabilities ...
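As an illustration (the project ID, status values, and fields below are placeholders; the filter keys follow the tasks.get_all API docs), a sketch using the SDK's APIClient:

```python
def build_get_all_filter(project_id: str) -> dict:
    # Filter/ordering keys as documented for tasks.get_all
    return {
        "project": [project_id],          # list of project IDs
        "status": ["completed"],          # optional status filter
        "order_by": ["-last_update"],     # newest first
        "only_fields": ["id", "name", "status"],
    }

def fetch_tasks(project_id: str):
    # APIClient wraps the REST API; needs credentials in clearml.conf
    from clearml.backend_api.session.client import APIClient
    client = APIClient()
    return client.tasks.get_all(**build_get_all_filter(project_id))
```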
The UI also uses the API, so any data you see in the UI you can extract directly from the API. That's why I personally love using it for tasks like these