@<1719524641879363584:profile|ThankfulClams64> , if you set auto_connect_streams to False, nothing will be reported from your frameworks. Which frameworks are you working with, TensorBoard?
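For reference, a minimal sketch of where that flag goes (project and task names are placeholders):

```python
from clearml import Task

# With auto_connect_streams=False, automatic capture of stdout/stderr
# is disabled, so console-based reports will not be sent to the server
task = Task.init(
    project_name="examples",
    task_name="stream capture check",
    auto_connect_streams=False,
)
```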
Well, it sounds like that makes some sense. Try the following on the machine running the agent - in ~/clearml.conf , edit the following section:
agent.vcs_cache.enabled=False
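A sketch of how that section might look in ~/clearml.conf (HOCON syntax, other keys omitted):

```
agent {
    vcs_cache {
        enabled: false
    }
}
```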
I don't think it's currently up to date, but the team is working on it. Hopefully it will be updated very soon 🙂
When looking at the base task, do you have that metric there?
Hi SourLion48 , what happens if you try inserting the credentials individually? Are you using a self-hosted server? Is it behind a proxy, by chance?
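If it helps, one way to set the credentials individually from code rather than through clearml.conf (a sketch; all values are placeholders):

```python
from clearml import Task

# Hypothetical server URLs and keys - replace with your own
Task.set_credentials(
    api_host="https://api.clear.ml",
    web_host="https://app.clear.ml",
    files_host="https://files.clear.ml",
    key="YOUR_ACCESS_KEY",
    secret="YOUR_SECRET_KEY",
)
task = Task.init(project_name="examples", task_name="credentials check")
```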
Hi @<1698868530394435584:profile|QuizzicalFlamingo74> , Try compression=False
Also please note that your path is wrong
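In case it's unclear where that goes, a sketch assuming the suggestion refers to the compression argument of Dataset.upload() (project, dataset name, and path are placeholders):

```python
from clearml import Dataset

dataset = Dataset.create(dataset_project="examples", dataset_name="my dataset")
dataset.add_files(path="data/")  # hypothetical local path
dataset.upload(compression=False)  # the suggested compression=False goes here
dataset.finalize()
```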
Can you please elaborate on what is happening in your code when this occurs? Can you also add the full log?
Hi @<1727859576625172480:profile|DeliciousArcticwolf54> , I'd suggest debugging using the developer tools in the webUI. Also, are you seeing any errors in the API server or webserver containers? I'd suggest first testing with Elasticsearch to make sure the deployment went through OK and this isn't related to something else.
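A quick way to check those container logs, assuming the default container names from the standard ClearML docker-compose deployment:

```bash
docker logs --tail 100 clearml-apiserver
docker logs --tail 100 clearml-webserver
docker logs --tail 100 clearml-elastic
```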
Out of curiosity, why do you want to use OpenSearch instead of Elasticsearch?
Hi @<1706116294329241600:profile|MinuteMouse44> , is there any worker listening to the queue?
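One way to check, besides the Workers & Queues page in the webUI, is the APIClient (a sketch):

```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# List registered workers and the queues each one listens to
# (assumes each queue entry exposes a 'name' field)
for worker in client.workers.get_all():
    print(worker.id, [q.name for q in (worker.queues or [])])
```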
Hi @<1724960468822396928:profile|CumbersomeSealion22> , what was the structure that worked previously for you and what is the new structure?
Hi OddShrimp85 , you sure can! You can use the API. This one is useful for getting data about specific tasks. I think you'll have to sift through the response to find what you need 🙂
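For example, a minimal sketch using the APIClient (the task ID is a placeholder):

```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
task = client.tasks.get_by_id(task="YOUR_TASK_ID")  # hypothetical ID
print(task.data)  # sift through this response for the fields you need
```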
Hi @<1523701842515595264:profile|PleasantOwl46> , what do you mean by more details about the state? Usually the INFO section of the task holds the full history of actions
Can you provide the log, though? The one where you got the error?
Hi, can you provide the full log?
Try setting the following environment variables:
%env CLEARML_WEB_HOST=
%env CLEARML_API_HOST=
%env CLEARML_FILES_HOST=
%env CLEARML_API_ACCESS_KEY=...
%env CLEARML_API_SECRET_KEY=...
and try removing the clearml.conf file 🙂
@<1564060263047499776:profile|ThoughtfulCentipede62> , you can run the agent twice on different queues - one with docker, one without
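A sketch of what that could look like (queue names are hypothetical):

```bash
# Agent 1: consumes 'cpu_queue' and runs tasks directly on the host
clearml-agent daemon --queue cpu_queue --detached

# Agent 2: consumes 'docker_queue' and runs tasks inside docker containers
clearml-agent daemon --queue docker_queue --docker --detached
```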
Hi @<1523701062857396224:profile|AttractiveShrimp45> , I'm afraid not. But you can always export these tables and plots into a report and add your custom data into the ClearML report as well
Hi @<1724235687256920064:profile|LonelyFly9> , what is the reason you're getting a 503 from the service?
Hi @<1864479785686667264:profile|GrittyAnt2> , for that you would need to specify --output-uri in the create command - None
This will point all previews to the storage of your choice as well. Note, however, that since a NAS is considered part of your local disks, and the browser cannot access local disk, previews will not work.
For local storage solutions I suggest using something like MinIO
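For example, a hypothetical MinIO-backed create call (project, dataset name, host, port, and bucket are all placeholders):

```bash
clearml-data create --project my_project --name my_dataset \
    --output-uri "s3://my-minio-host:9000/datasets-bucket"
```

For MinIO you would also need a matching credentials entry under the sdk.aws.s3 section of your clearml.conf.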
Hi @<1578555761724755968:profile|GrievingKoala83> , I'm afraid that's an Enterprise/Scale only feature
Huh, what an interesting issue! I think you should open a GitHub issue so it can be followed up on.
If you remove the tags, does the page resize back?
Hi SmugTurtle78 , this issue is handled in the coming update of ClearML PRO
Hi @<1607909176359522304:profile|UnevenCow76> , I suggest you review the following video on serving - None
This also explains how to visualize different metrics in Grafana
RoughTiger69 Hi!
Regarding your questions:
You can use the following:
Task.force_requirements_env_freeze(requirements_file='repo/some_folder/requirements.txt')
before your task = Task.init(...)
Alternatively, you can configure sdk.development.detect_with_pip_freeze=true in your ~/clearml.conf file for full env detection from the environment you're running in
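Putting the first option together (project and task names are placeholders):

```python
from clearml import Task

# Freeze requirements from a specific file instead of auto-detecting them;
# must be called before Task.init()
Task.force_requirements_env_freeze(
    requirements_file='repo/some_folder/requirements.txt'
)

task = Task.init(project_name='examples', task_name='pinned requirements run')
```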
Hi GrittyCormorant73 ,
Did you define a single queue or multiple queues?
Hi @<1820993248525553664:profile|DisturbedReindeer69> , I think you're looking for the --output-uri parameter of clearml-data create - None
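A minimal sketch (project, name, and URI are placeholders; an s3://, gs://, azure://, or files server URL should all work there):

```bash
clearml-data create --project my_project --name my_dataset \
    --output-uri "s3://my-bucket/datasets"
```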
Hi SucculentWoodpecker18 ,
The two are a bit different, which is why the versions are different. Functionality-wise they should be almost the same, and bugs shouldn't be present in either. Do you have a code snippet that reproduces this behavior?
SarcasticSparrow10 , please note that during the upgrade you do NOT copy /opt/clearml/data/mongo into /opt/clearml/data/mongo_4 ; you create the folder as in the instructions: sudo mkdir /opt/clearml/data/mongo_4
This is the reason it is giving out errors - you've got old MongoDB data in your mongo_4 folder...
Please follow the instructions to the letter - this should work 🙂
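In other words (paths taken from the instructions above):

```bash
# Correct: create an empty directory for the new MongoDB data
sudo mkdir /opt/clearml/data/mongo_4

# Incorrect: do NOT seed it with the old data
# sudo cp -r /opt/clearml/data/mongo/* /opt/clearml/data/mongo_4/
```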
SarcasticSparrow10 , it seems you are right. At which point in the instructions are you getting errors? From which step to which?