Hi @<1536881167746207744:profile|EnormousGoose35> , support for the PRO version is provided here, on the community Slack channel.
Or should I set `agent.google.storage {}`?
Did you follow the instructions in the docs?
Hi @<1523701624541810688:profile|RotundHedgehog76> , how are you running the agent? What is the command you're using?
JumpyPig73 Hi!
I think that AnxiousSeal95 can help you 🙂
My bad, please use `debug.ping` instead of `debug/ping`
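For reference, a quick way to check that endpoint from Python (a minimal sketch — the host and port are assumptions based on the default apiserver setup):
```python
import requests

# Ping the apiserver health-check endpoint
# (http://localhost:8008 is the default apiserver address; adjust for your deployment)
resp = requests.get('http://localhost:8008/debug.ping')
print(resp.status_code, resp.text)
```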
Hi WickedCat12 , are you running your own server or are you using the SaaS?
Hi @<1576381444509405184:profile|ManiacalLizard2> , can you please elaborate on your specific use case? And yes, currently ClearML supports working only with a specific user. What do you have in mind to expand this?
Regarding the UI - you can either build your own frontend for it or use streamlit / gradio applications (which are supported in the enterprise license).
About using a model outside of ClearML - you can simply register the model in the model artifactory - None
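For example, a minimal sketch of registering an externally trained model (the project, task and file names here are illustrative):
```python
from clearml import Task, OutputModel

# Register an externally trained weights file in the model artifactory
# (project/task/file names are illustrative)
task = Task.init(project_name='examples', task_name='register external model')
output_model = OutputModel(task=task, name='my external model')
output_model.update_weights(weights_filename='my_model.pt')
```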
Also, how are you saving your models?
Hi @<1612982606469533696:profile|ZealousFlamingo93> , how exactly are you running the autoscaler?
Is this what you're running?
None
Hi @<1638712150060961792:profile|SilkyCrocodile89> , how did you upload them and as what?
Hi MoodyCentipede68 ,
What versions of ClearML & Agent are you using?
Hi MelancholyChicken65 , do you mean license-wise?
@<1541954607595393024:profile|BattyCrocodile47> , shouldn't be an issue - the ClearML SDK is resilient to connectivity issues, so if the server goes down the SDK will keep running and simply store all the data locally. Once the server is back up, it will send everything that was waiting.
Makes sense?
the experiments themselves 🙂
Imagine you have very large diffs or very large (several MB) configuration files logged into the system - all of that sits in a database somewhere in the backend.
Hi @<1702130048917573632:profile|BlushingHedgehong95> , I would suggest the following few tests:
- Run some mock task that uploads an artifact to the files server (see the sketch after this list). Once done, verify you can download the artifact via the web UI - there should be a link to it. Save that link. Then delete the task and mark it to delete all artifacts. Test the link again to verify it no longer works
- Please repeat the same with a dataset
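A minimal sketch of the mock task from the first bullet (project, task and file names are illustrative):
```python
from clearml import Task

# Upload a dummy artifact to the files server (names are illustrative)
task = Task.init(project_name='cleanup-test', task_name='artifact upload test')
with open('dummy.txt', 'w') as f:
    f.write('test payload')
task.upload_artifact(name='dummy', artifact_object='dummy.txt', wait_on_upload=True)
print(task.artifacts['dummy'].url)  # save this link, then test it after deleting the task
task.close()
```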
Is it a self-deployed server or the Community?
Can you give an example of your `test_config`?
Discussion moved to internal channels
SmugTurtle78 , I think so. Can you verify on your end?
This is strange. Can you take a look at the apiserver & webserver logs to see if there are any errors?
VexedCat68 , it looks like it is being saved locally. Are you running all from the same machine?
I wasn't able to reproduce it on my side. Can you try the following?
In clearml/examples/reporting/model_config.py
Under line 45: `OutputModel().update_weights('my_best_model.bin')`
Add the following:
`output_model = task.models['output'][-1]`
`output_model.tags = ['deployed']`
And check in the UI if you get a tag on the model
Hi @<1556450111259676672:profile|PlainSeaurchin97> , the API can only retrieve the URI from the backend. The SDK itself will manage the downloads for you.
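For example, a minimal sketch of letting the SDK handle the download (the model ID here is a placeholder):
```python
from clearml import InputModel

# Fetch a registered model by ID; the ID is a placeholder
model = InputModel(model_id='<your_model_id>')
print(model.url)                     # the URI stored in the backend
local_path = model.get_local_copy()  # the SDK downloads (and caches) the file for you
print(local_path)
```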
IcyJellyfish61 , I don't think so, no
UnevenDolphin73 , can you please provide a screenshot of the window and the message, with the URL visible?
Hi CloudySwallow27 ,
I think currently the way to do this is by disabling the framework detection and reporting the debug images manually.
You can do this with `Task.init(auto_connect_frameworks=False)`
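A minimal sketch of what that could look like (project, task and file names are illustrative):
```python
from clearml import Task

# Disable automatic framework logging and report debug images manually
# (project/task/file names are illustrative)
task = Task.init(
    project_name='examples',
    task_name='manual debug images',
    auto_connect_frameworks=False,
)
task.get_logger().report_image(
    title='debug', series='sample', iteration=0, local_path='sample.png'
)
```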