SubstantialElk6 , you can find some neat examples here:
https://github.com/allegroai/clearml/tree/master/examples/pipeline
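If it helps, here is a rough minimal sketch in the spirit of those examples (project, pipeline and step names here are just placeholders):
```python
from clearml import PipelineController

# Each function becomes its own pipeline step (executed as a separate task)
def generate_data():
    return [1, 2, 3, 4]

def process_data(data):
    return [x * 2 for x in data]

# "examples"/"toy-pipeline" are placeholder project/pipeline names
pipe = PipelineController(name="toy-pipeline", project="examples", version="1.0.0")

pipe.add_function_step(
    name="generate",
    function=generate_data,
    function_return=["data"],
)
pipe.add_function_step(
    name="process",
    function=process_data,
    function_kwargs=dict(data="${generate.data}"),  # feed the previous step's output
    function_return=["processed"],
)

# Run everything locally for a quick test; pipe.start() would enqueue it for an agent
pipe.start_locally(run_pipeline_steps_locally=True)
```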
Hi @<1615881718445641728:profile|EnchantingSeaturtle2> , what version of clearml are you using? Are you running the server yourself or using the community server?
Can you try running it native on windows?
Hi @<1797438038670839808:profile|PanickyDolphin50> , what is this uv caching you're referring to?
Hi @<1523704674534821888:profile|SourLion48> , I'd suggest connecting your batch size as a configuration parameter of the experiment, for example using argparse. Then, regardless of committed or uncommitted code, you will be able to control this value through the configuration section.
What do you think?
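For example, a minimal sketch of what I mean (project/task names are placeholders):
```python
import argparse
from clearml import Task

# "examples"/"batch size demo" are placeholder project/task names
task = Task.init(project_name="examples", task_name="batch size demo")

# Task.init hooks argparse, so --batch_size is logged under the task's configuration
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=32)
args = parser.parse_args()

print("training with batch size:", args.batch_size)
```
When you clone the task and edit the value in the UI, the agent injects the new value at runtime, whether or not the code change was committed.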
Please open developer tools (F12) and see what is returned in the network tab for tasks.get_by_id_ex
Also please see if there are any errors in the console
In that case I suggest you write some basic code that will aggregate and compute those values for you for comparison
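Something along these lines as a starting point (a rough sketch; the project name and the metric/series names are placeholders for whatever you're tracking):
```python
from clearml import Task

# Placeholder project name - replace with your own
tasks = Task.get_tasks(project_name="examples")

for task in tasks:
    # Returns {title: {series: {"last": ..., "min": ..., "max": ...}}}
    metrics = task.get_last_scalar_metrics()
    # "accuracy"/"validation" are placeholder metric/series names
    value = metrics.get("accuracy", {}).get("validation", {}).get("last")
    print(task.name, value)
```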
@<1597762318140182528:profile|EnchantingPenguin77> , are you sure you added the correct log? I don't see any errors related to CUDA
Hi @<1688721775728267264:profile|VastGiraffe70> , the self-hosted version is completely unrestricted in terms of usage
Here is an example of auto cleaning. Did you delete ALL experiments?
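For reference, a rough sketch of what such a cleanup script can look like (the filter and the 30-day threshold are placeholders, and keep in mind delete() is irreversible):
```python
from datetime import datetime, timedelta
from clearml import Task

# Placeholder threshold: look at archived tasks untouched for 30+ days
threshold = datetime.utcnow() - timedelta(days=30)

tasks = Task.get_tasks(
    task_filter={
        "system_tags": ["archived"],
        "status_changed": ["<{}".format(threshold.strftime("%Y-%m-%d %H:%M:%S"))],
    }
)

for task in tasks:
    print("deleting", task.id, task.name)
    task.delete()  # removes the task from the server - irreversible
```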
Hi @<1523701491863392256:profile|VastShells9> , I assume you're using the autoscaler?
Hi @<1856144866401062912:profile|VirtuousHorse94> , there was a hotfix released, try pip install -U clearml-agent and run again 🙂
Hi @<1833676820357058560:profile|MiniatureGrasshopper70> , I suggest checking out the channel to see if there is anything you can add or fix 🙂
Reproduces for me as well. Taking a look at what can be done 🙂
SmugTurtle78 , I think so. Can you verify on your end?
Ok, that's good to know. So with the autoscaler, can you also define what types of machines you need, for example GPU/No GPU, storage size, memory, etc?
Yes! And you can even run with preemptible instances 🙂
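To give a rough idea, the AWS autoscaler example works with resource definitions along these lines (all names and values below are illustrative placeholders; check the autoscaler example/docs for the exact schema in your version):
```python
# Illustrative only - resource names, instance types and queue mapping are placeholders
resource_configurations = {
    "gpu_machine": {
        "instance_type": "g4dn.xlarge",   # GPU instance
        "is_spot": True,                  # preemptible / spot instance
        "availability_zone": "us-east-1b",
        "ebs_volume_size": 100,           # storage size in GB
    },
    "cpu_machine": {
        "instance_type": "m5.large",      # no GPU
        "is_spot": False,
        "availability_zone": "us-east-1b",
        "ebs_volume_size": 50,
    },
}

# Each queue maps to one of the resource definitions above,
# with a cap on how many instances can be spun up for it
queues = {
    "gpu_queue": [("gpu_machine", 2)],
    "cpu_queue": [("cpu_machine", 4)],
}
```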
Hi ExcitedSeaurchin87 ,
How are you trying to run the agents? Also, are you trying to run multiple agents on the same GPU?
Hi ThoughtfulBadger56 ,
I think these are the environment variables you're looking for:
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
https://clear.ml/docs/latest/docs/clearml_agent/clearml_agent_env_var
DepressedChimpanzee34 , how are you trying to get the remote config values? Also, which configurations are we talking about specifically?
DistressedKoala73 , can you send me a code snippet to try and reproduce the issue please?
What is this http://unicorn address? Did you deploy using docker compose?
Hi @<1533257411639382016:profile|RobustRat47> , what would you define as most metrics?
I suggest watching the following videos to get a better understanding:
Agent - None
Autoscaler - None
Also please review agent docs - None
When a task is enqueued, when does the autoscaler kick in?
You're looking for the polling interval parameter as mentioned in the documentation - [None](https://clear.ml/docs/latest/docs/webapp/appl...
No need, you can set multiple compute resources per single autoscaler instance
Hi AbruptHedgehog21 , what are you trying to do when you're getting this message? Are you running a self-hosted server?
Hi LethalCentipede31 , I don't think there is an out-of-the-box solution for this, but saving them as debug samples sounds like a good idea. You can simply report them as debug samples and that should work 🙂
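A minimal sketch of what that could look like (project, title/series names and file paths are placeholders):
```python
from clearml import Task, Logger

# Placeholder project/task names
task = Task.init(project_name="examples", task_name="debug samples demo")

# Report a local image file as a debug sample; it shows up under Debug Samples in the UI
Logger.current_logger().report_image(
    title="predictions",
    series="sample_0",
    iteration=0,
    local_path="/tmp/prediction_0.png",  # placeholder path
)

# report_media works the same way for non-image files (audio, html, etc.)
Logger.current_logger().report_media(
    title="reports",
    series="confusion_matrix",
    iteration=0,
    local_path="/tmp/confusion_matrix.html",  # placeholder path
)
```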
Hi @<1556812486840160256:profile|SuccessfulRaven86> , can you please add an example configuration that reproduces this?
If it works on two computers and one computer is having problems, then I'd suspect an issue with that computer itself, maybe permissions or network issues.
VictoriousPenguin97 , Hi 🙂
Can you provide a snippet of how you tried to download the file? Also, what version of clearml are you using, and can you give an example of the file name you have on S3?
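For reference, downloading with the SDK usually looks something like this (the bucket and key below are placeholders):
```python
from clearml import StorageManager

# Placeholder s3:// URL - replace with the actual location of your file
local_copy = StorageManager.get_local_copy(remote_url="s3://my-bucket/path/to/file.csv")
print(local_copy)  # local path of the downloaded (and cached) file
```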