Hi @<1557174909573009408:profile|LargeOctopus32> , I suggest you go through the introduction videos on ClearML's YouTube channel
Hi @<1523707653782507520:profile|MelancholyElk85> , right under the default S3 credentials in clearml.conf there is a section where you can specify credentials per bucket 🙂
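For reference, a minimal sketch of that part of clearml.conf (bucket names and keys are placeholders):
sdk {
    aws {
        s3 {
            # Default credentials
            key: "DEFAULT_ACCESS_KEY"
            secret: "DEFAULT_SECRET"
            credentials: [
                # Per-bucket credentials override the defaults
                {
                    bucket: "my-special-bucket"
                    key: "BUCKET_ACCESS_KEY"
                    secret: "BUCKET_SECRET"
                }
            ]
        }
    }
}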
Hi EnviousPanda91, I'm not quite sure what you want to extract, but you can extract everything shown in the UI using the API. The docs can be found here: https://clear.ml/docs/latest/docs/references/api/events
And for the best reference - you can open the developer tools in the UI and see how the requests are handled there 🙂
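If it helps, a minimal sketch using the Python APIClient (the task ID is a placeholder):
from clearml.backend_api.session.client import APIClient

client = APIClient()
# Fetch scalar metric histograms reported by a given task
res = client.events.scalar_metrics_iter_histogram(task="TASK_ID")
print(res)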
connected_config = task.connect({})
Looks like you're connecting an empty config.
You can see my answer in the channel
Hi GrittyHawk31, can you elaborate on what you mean by metadata? Regarding models, you can achieve this by setting the output URI in Task.init, i.e. Task.init(output_uri="<S3_BUCKET>")
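For example, a minimal sketch (project/bucket names are placeholders):
from clearml import Task

# Output models (and artifacts) created during the run are uploaded to the bucket
task = Task.init(
    project_name="examples",
    task_name="s3-output",
    output_uri="s3://my-bucket/models",
)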
@<1590514584836378624:profile|AmiableSeaturtle81> , please see the section regarding MinIO in the documentation
What's the version of your ClearML-Agent?
Are all the agents running on the same machine, or are they spread out?
Hi @<1546303293918023680:profile|MiniatureRobin9> , do you have some standalone script that reproduces this behaviour for you? Are you both running the same pipeline? How are you starting the pipeline?
Hi @<1529633468214939648:profile|CostlyElephant1> , I think this is what you're looking for:
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
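A sketch of how these are typically used when launching the agent (paths and queue names are placeholders):
# Reuse an existing interpreter instead of creating a venv
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/usr/bin/python3.10
# Or skip Python environment handling entirely
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
clearml-agent daemon --queue default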
Hi @<1660817806016385024:profile|FantasticMole87> , you'll either have to re-run it or change something in the DB. I suggest the first option.
If I'm not mistaken, models reflect the file names, so if you recycle the file names you recycle the models. For example, if you save torch.save("top1.pt"), later torch.save("top2.pt"), and even later torch.save("top1.pt") again, you will end up with two OutputModels, not three. This way you can keep recycling the best models 🙂
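A minimal sketch of the idea (project/task names are placeholders):
import torch
import torch.nn as nn
from clearml import Task

task = Task.init(project_name="examples", task_name="recycle-models")
model = nn.Linear(4, 2)

torch.save(model.state_dict(), "top1.pt")  # registers OutputModel "top1"
torch.save(model.state_dict(), "top2.pt")  # registers OutputModel "top2"
torch.save(model.state_dict(), "top1.pt")  # updates the existing "top1" model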
@<1681111528419364864:profile|SmoothGoldfish52> , it will be saved to a cache folder. Take a look at what @<1576381444509405184:profile|ManiacalLizard2> wrote. I think tar files might work already. Give it a test
Hi CloudySwallow27, regarding the "Process terminated by user" message - are you running hyperparameter optimization?
Regarding CUDA - yes, you need CUDA installed (or run it from a docker with CUDA) - ClearML doesn't handle the CUDA installation since this is on a driver level.
I think you need to make this package available somehow. One option is to have it already preinstalled/cached on the target machine
@<1715538373919117312:profile|FoolishToad2> , I think you're missing something. The ClearML backend only holds references (links) to artifacts. Actual interaction with storage is done directly via the SDK, i.e. on the machine running the code
I'm not sure it's possible in the open version. I think this is because the users are loaded into the server when it reads all the config files, which usually happens on server startup.
However, even if you simply restart the apiserver, any running experiments should continue running and resume communication with the backend once the apiserver is back up.
There is also the option of manually creating the documents directly in MongoDB - but that is inadvisable.
Hi @<1546303254386708480:profile|DisgustedBear75> , how are you adding this file?
Hey 🙂
What data do you have for the dataframe? Couldn't reproduce it with:
df = pd.DataFrame(
    {
        "num_legs": [2, 4, 8, 0],
        "num_wings": [2, 0, 0, 0],
        "num_specimen_seen": [10, 2, 1, 8],
    },
    index=["falcon", "dog", "spider", "fish"],
)
Hi @<1523701553372860416:profile|DrabOwl94> , is this a self hosted server? Do you see any console errors in developer tools?
The agent prints its configuration before the execution step, and I don't see agent.git_pass set anywhere in the log. Are you sure you set it up on the correct machine? This needs to be set on the machine running the agent.
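For reference, a sketch of the relevant section in the agent machine's clearml.conf (values are placeholders):
agent {
    git_user: "my-git-user"
    git_pass: "my-personal-access-token"
}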
ExcitedSeaurchin87, I think you can differentiate them by using different worker names. Try setting the following environment variable when running the command: CLEARML_WORKER_NAME
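A sketch of two workers sharing GPU 0, each with its own name (queue and worker names are placeholders):
CLEARML_WORKER_NAME=worker-gpu0-a clearml-agent daemon --queue default --gpus 0 --detached
CLEARML_WORKER_NAME=worker-gpu0-b clearml-agent daemon --queue default --gpus 0 --detached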
I wonder, why do you want to run multiple workers on the same GPU?
Hi DrabCockroach54 , in the open source version there are no roles. You can set up users & passwords using this:
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_config/#web-login-authentication
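A sketch of the fixed-users section described in the linked docs (usernames and passwords are placeholders):
auth {
    fixed_users {
        enabled: true
        users: [
            {
                username: "jane"
                password: "12345678"
                name: "Jane Doe"
            }
        ]
    }
}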
I am not sure there is a simple way to delete users - I think you would need to edit MongoDB manually, which I would not recommend.
Hi OddShrimp85, you mean a bash script? I don't think there is anything built in to run a script afterwards, but I'm sure you could incorporate it into your Python script.
I'm curious, what is the use case?
This looks more appropriate if the username itself is "ubuntu"
Hi DepressedChimpanzee34!
Is the part you want to speed up the code snippet you provided? Also, I'll check regarding the verbosity 🙂
Then you can disable the pre-population in the docker-compose
However, this will only take effect on a new server at startup. Otherwise I think you would need to delete them from MongoDB + Elasticsearch, which I would advise against on a running server
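If it helps, a sketch of a docker-compose override for this, assuming the standard CLEARML__<section>__<option> environment-variable convention (verify the exact key against your server version):
services:
  apiserver:
    environment:
      CLEARML__apiserver__pre_populate__enabled: "false"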
Hi @<1582904448076746752:profile|TightGorilla98> , you would need to update these addresses in MongoDB/Elasticsearch to point to the new address. A migration script would do the job