ProudElephant77, I think you might need to finalize the dataset for it to appear.
In the compare view you need to switch to 'Last Values' to see these scalars. Please see the screenshot.
What is the best way to achieve that please?
I think you would need to edit the webserver code to change iterations to epochs in the naming of the x-axis.
Hi @<1554638166823014400:profile|ExuberantBat24> , you mean dynamic GPU allocation on the same machine?
Oh LOL 😛
Hi @<1576381444509405184:profile|ManiacalLizard2> , can you please elaborate on your specific use case? And yes, currently ClearML supports working only with a specific user. What do you have in mind to expand this?
Hi @<1709740168430227456:profile|HomelyBluewhale47> , you can use CLEARML_FILES_HOST env variable to point to it - None
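For illustration, setting it is just a matter of exporting the variable before running your script (the endpoint below is an assumption, substitute your own files server or bucket URI):

```shell
# Hypothetical endpoint -- replace with your actual files server / storage URI
export CLEARML_FILES_HOST="s3://my-bucket/clearml"
```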
Do you need to pull it later somewhere? Is there a specific use case?
Because I think you can get the params dict back via code as the same dict
Hi @<1529271085315395584:profile|AmusedCat74> , what are you trying to do in code? What version of clearml are you using?
Hi @<1523704207914307584:profile|ObedientToad56> , the virtual env is constructed using the detected packages when run locally. You can certainly override that. For example use Task.add_requirements - None
There are also a few additional configurations in the agent section of clearml.conf that I would suggest going over.
Hi SlimyElephant79 , can you share a screenshot of the 'Execution' section in the UI?
Hi @<1523701083040387072:profile|UnevenDolphin73> , can you list/show which buttons it affects and how?
Hi @<1792726992181792768:profile|CloudyWalrus66> , from a short read of the docs it seems to be simply a way to spin up many machines with many different configurations with very few actions.
The autoscaler spins up and down regular EC2 instances and spot instances automatically from predetermined templates, basically making the 'fleet' feature redundant.
Or am I missing something?
@<1664079296102141952:profile|DangerousStarfish38> , can you provide logs please?
You can configure it in ~/clearml.conf at api.files_server
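For example, the relevant fragment would look something like this (the URL is an assumption, point it at your own server):

```
api {
    # Hypothetical host -- substitute your own files server URL
    files_server: http://my-clearml-server:8081
}
```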
Hi @<1709740168430227456:profile|HomelyBluewhale47> , dynamic env variables are supported. Please see here - None
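As a sketch, any clearml.conf value can be overridden through a `CLEARML_AGENT__<SECTION>__<OPTION>` environment variable, with double underscores separating the config path (the specific option below is just an illustration):

```shell
# Hypothetical override: equivalent to setting agent.docker_force_pull in clearml.conf
export CLEARML_AGENT__AGENT__DOCKER_FORCE_PULL=true
```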
And what is the issue? You can't access the webUI?
BitterLeopard33, ReassuredTiger98, my bad. I just dug a bit into the Slack history; I think I got the issue mixed up with long file names 😞
Regarding the http/chunking issue/solution - I can't find anything either. Maybe open a GitHub issue / feature request (for chunking files).
Regarding the UI - you can either build your own frontend for it or use Streamlit / Gradio applications (which are supported in the enterprise license).
About using a model outside of ClearML - You can simply register the model to the model artifactory - None
Hi @<1539780258050347008:profile|CheerfulKoala77> , it seems that you're trying to use the same 'Workers Prefix' setup for two different autoscalers; the workers prefix must be unique between autoscalers.
@<1526734383564722176:profile|BoredBat47> , that could indeed be an issue. If the server is still running, things could be written to the databases, creating conflicts.
Strange. Can you add your clearml.conf from the agent machine? Please make sure to obscure all secrets 🙂
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the port to the credentials when you input them in the webUI
Hi @<1652845271123496960:profile|AdorableClams1> , you set up fixed users in your docker compose, I would check there
I think you need to provide the app password for GitHub/Bitbucket instead of your personal password.
Hi @<1533620191232004096:profile|NuttyLobster9> , are you self hosting ClearML?
ResponsiveHedgehong88, please look here:
https://clearml.slack.com/archives/CTK20V944/p1660142477652039
Is this what you're looking for?