DeterminedOwl36 , what version of ClearML are you using? Also, does it happen if you run the script standalone and not through jupyter notebook?
Hi @<1523702307240284160:profile|TeenyBeetle18> , if they are already on GS then you can use add_external_files to log them.
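A minimal sketch of what that could look like, assuming the `clearml` package is installed and your GS credentials are configured in clearml.conf; the bucket, project, and dataset names are illustrative:

```python
def register_gs_files(source_url="gs://my-bucket/training-data/"):
    # Lazy import so this sketch only needs clearml when actually run
    from clearml import Dataset

    ds = Dataset.create(dataset_name="gs_dataset", dataset_project="examples")
    # Registers links to the remote files instead of uploading copies
    ds.add_external_files(source_url=source_url)
    ds.upload()    # uploads dataset metadata only; the files stay on GS
    ds.finalize()
    return ds
```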
What do you think?
clearml-session is part of the clearml package 🙂
Hi MoodySheep3 ,
Can you please provide screenshots from the experiment showing what the configuration looks like?
You can add basically whatever you want using clearml-serving metrics add ...
@<1664079296102141952:profile|DangerousStarfish38> , I would suggest creating a configuration file on each machine and not only specifying a token 🙂
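For reference, a clearml.conf on each machine could look roughly like this (server URLs and credentials are placeholders for your own values):

```
api {
    web_server: "https://app.your-domain.com"
    api_server: "https://api.your-domain.com"
    files_server: "https://files.your-domain.com"
    credentials {
        access_key: "<WORKER_ACCESS_KEY>"
        secret_key: "<WORKER_SECRET_KEY>"
    }
}
```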
Besides that, it all sounds good, and of course all of this is fully supported in the open-source self-hosted server
Maybe ExasperatedCrab78 might have an idea
Hi MammothParrot39 , what command do you run the agent with?
Hi @<1523703961872240640:profile|CrookedWalrus33> , I think by "mutable" it means that the object itself is mutable when connecting.
I'm curious, what is your use case that you want to change the values in the code itself? The intended usage is to connect the config object and then control it via the webUI / API
Hi NarrowLion8 , you can simply change the files_server setting to an s3 bucket, like files_server: s3://my_test_bucket
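For example, in clearml.conf (the bucket name and credentials are placeholders):

```
api {
    # default location for uploaded artifacts and debug samples
    files_server: "s3://my_test_bucket"
}
sdk {
    aws {
        s3 {
            key: "<ACCESS_KEY>"
            secret: "<SECRET_KEY>"
        }
    }
}
```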
Hi @<1554275802437128192:profile|CumbersomeBee33> , can you elaborate on what you're trying to do?
Hi @<1719524663014461440:profile|CornyOwl46> , that sounds like a good plan. Take into account that all of the metrics/console logs are stored in Elastic, so you'd have to replicate that as well
DeliciousSeal67 , you need to update the docker image in the container section - like here:
You'll need to assign an agent to run on the queue, something like this: 'clearml-agent daemon --foreground --queue services'
Hi JitteryCoyote63 , I think you can click one of the debug samples to enlarge it. Then you will have a scroll bar to get to the iteration you need. Does that help?
JitteryCoyote63 , I'm afraid not at the moment - that's only available in docker mode.
What do you need it for if I may ask?
Hi @<1587615463670550528:profile|DepravedDolphin12> , can you please provide a link to the doc you read?
Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be host: "our-host.com:<PORT>"
And NOT host: "s3.our-host.com"
Maybe you don't need a port, I don't know your setup, but as I said, you need to remove the s3. prefix from the host setting, as that is reserved for AWS S3 only.
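As a sketch, a non-AWS (MinIO-style) credentials entry in clearml.conf would look roughly like this - 9000 is MinIO's default port, the keys are placeholders, and multipart/secure are typical MinIO values you may need to adjust:

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # endpoint host + port, without the s3. prefix
                    host: "our-host.com:9000"
                    key: "<ACCESS_KEY>"
                    secret: "<SECRET_KEY>"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}
```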
Hi @<1652845271123496960:profile|AdorableClams1> , you set up fixed users in your docker compose, I would check there
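For reference, the fixed users block lives in the apiserver configuration that the docker compose uses; it looks roughly like this (username, password, and name are placeholders):

```
auth {
    fixed_users {
        enabled: true
        users: [
            {
                username: "jane"
                password: "12345678"
                name: "Jane Doe"
            }
        ]
    }
}
```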
Also, I would suggest trying pipelines from decorators; I think it would be much smoother for you
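A minimal sketch of the decorator syntax, assuming the `clearml` package is installed; all names and values are illustrative:

```python
def build_and_run_pipeline():
    # Lazy import so this sketch only needs clearml when actually run
    from clearml.automation.controller import PipelineDecorator

    @PipelineDecorator.component(return_values=["doubled"])
    def double(x):
        # Each component runs as its own pipeline step
        return x * 2

    @PipelineDecorator.pipeline(name="demo pipeline", project="examples", version="1.0")
    def pipeline_logic(start=1):
        print(double(start))

    PipelineDecorator.run_locally()  # debug the whole pipeline on this machine
    pipeline_logic()
```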
Is the agent running on the same machine as the original code that didn't get any errors?
MortifiedDove27 , in the docker ps output you shared, everything seems to be running fine
I think you can just send an empty payload ({}) to users.get_all
and it will return all the users in your database 🙂
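For example, with requests - the server URL is a placeholder, and this assumes your access key/secret are accepted as basic-auth credentials (otherwise fetch a token via auth.login first):

```python
import json

def get_all_users(api_server, access_key, secret_key):
    # Lazy import so the sketch doesn't require requests at definition time
    import requests

    resp = requests.post(
        f"{api_server}/users.get_all",
        auth=(access_key, secret_key),
        json={},  # empty payload -> return all users
    )
    resp.raise_for_status()
    return resp.json()["data"]["users"]

# the payload really is just an empty JSON object
payload = json.dumps({})
```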
Hi @<1529271098653282304:profile|WorriedRabbit94> , do you maybe have autoscalers that ran for a very long time? The easiest fix is simply deleting all projects and applications and waiting a few hours
Hi @<1797800418953138176:profile|ScrawnyCrocodile51> , you can use Task.add_requirements to add any packages. You can also install packages with the docker bash init script
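A small sketch, assuming the `clearml` package is installed; the package names and versions are just examples. Note that add_requirements has to be called before Task.init to take effect:

```python
def init_task_with_requirements():
    # Lazy import so this sketch only needs clearml when actually run
    from clearml import Task

    # Must be called BEFORE Task.init for the packages to be picked up
    Task.add_requirements("scikit-learn")       # latest available version
    Task.add_requirements("pandas", "2.1.0")    # pinned version
    return Task.init(project_name="examples", task_name="requirements demo")
```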
Hi NuttyCamel41 , what kind of additional information are you looking to report? What is your use case?
Hi @<1523701504827985920:profile|SubstantialElk6> , I think as long as the ports are open and the pods can communicate between themselves, it should work