Hi @<1556450111259676672:profile|PlainSeaurchin97> , I think what you're looking for is the output_uri parameter in Task.init()
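A minimal sketch of what that could look like (the project/task names and bucket URI are just placeholders):
```python
from clearml import Task

# route all task outputs (models, artifacts) to your storage of choice
task = Task.init(
    project_name="examples",                       # placeholder
    task_name="training",                          # placeholder
    output_uri="s3://my-bucket/clearml-outputs",   # placeholder bucket
)
```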
@<1624579015031394304:profile|JitterySeal56> , is it possible there are connectivity issues between the client and the server? Do you see anything in the logs of the apiserver?
Hi SubstantialElk6,
If I'm not mistaken, the precedence order is as follows:
1. output_uri (both code and CLI)
2. Configuration vault
3. default_output_uri in clearml.conf
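For reference, the last fallback would look roughly like this in clearml.conf (the bucket path is a placeholder):
```
sdk {
    development {
        default_output_uri: "s3://my-bucket/clearml-outputs"
    }
}
```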
Happy to help 🙂
Note that you will get almost all information about the task using tasks.get_by_id ; you would then need a few more calls to extract the console logs / scalars / plots / debug samples
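Something along these lines with the APIClient (the task id is a placeholder, and it's worth double checking the exact endpoints against the REST API reference):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# basic task info in a single call
task = client.tasks.get_by_id(task="<task-id>")
print(task.data.name, task.data.status)

# the console output requires an additional call
log = client.events.get_task_log(task="<task-id>")
```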
Hi, how do you connect your configs currently?
You mean you'd like to be able to connect/create configuration objects via UI?
Hi ExuberantParrot61 , that's a good question. This is a bit hacky but what if you try to catch the task with Task.current_task() from inside the step and try to change the output_uri attribute there?
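Roughly like this inside the step (whether the runtime override is actually picked up for the step's outputs is exactly the part to verify):
```python
from clearml import Task

def my_step():
    # grab the task that is currently executing this step
    task = Task.current_task()
    # hacky: override where outputs are uploaded, at runtime (placeholder URI)
    task.output_uri = "s3://my-bucket/step-outputs"
```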
@<1539417873305309184:profile|DangerousMole43> , I think you're trying to use the agent for something it wasn't intended for. As @<1523701087100473344:profile|SuccessfulKoala55> mentioned, the agent does not support running custom entry points. The idea is to clone tasks in the system and enqueue them; the agent then clones the repo, creates the required environment and runs the code
I think the pipeline runs from start to end, starting when the first step starts
I would suggest structuring everything around the Task object. After you clone and enqueue the agent can handle all the required packages / environment. You can even set environment variables so it won't try to create a new env but use the existing one in the docker container.
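A minimal sketch of the clone/enqueue flow (project, task and queue names are placeholders):
```python
from clearml import Task

# clone an existing "template" task and send it to a queue the agent listens on
template = Task.get_task(project_name="examples", task_name="training")
cloned = Task.clone(source_task=template, name="training (clone)")
Task.enqueue(cloned, queue_name="default")
```
On the agent side, environment variables such as CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 (or CLEARML_AGENT_SKIP_PIP_VENV_INSTALL pointed at the container's python) should make it reuse the existing environment inside the docker container instead of building a new one.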
The functionality is basically the same as the GCP/AWS ones, but since it is only available in the Scale/Enterprise tiers I don't think there is any external documentation
DrabCockroach54 , you can set it all up. I suggest you open developer tools (F12) and see how it is done in the UI. You can then implement this in code.
For example, filtering for tasks that started 10 minutes ago is something you can do via the UI and then replicate in code
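Something like this with the APIClient - the exact 'started' filter syntax below is an assumption, so it's best to confirm it against the request the UI sends:
```python
from datetime import datetime, timedelta
from clearml.backend_api.session.client import APIClient

client = APIClient()
since = (datetime.utcnow() - timedelta(minutes=10)).strftime("%Y-%m-%dT%H:%M:%S")

# assumed filter syntax: datetime fields accept ">=<timestamp>" style values
tasks = client.tasks.get_all(started=[">={}".format(since)], order_by=["-started"])
```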
Hi OutrageousSheep60 , can you elaborate on how/when this happens?
Whenever previewing the dataset (which is in a parquet tabular format), the browser automatically downloads a copy of the preview file as a text file
If Elastic isn't crashing then it should be good. Once I get a confirmation I'll update you 🙂
BitterLeopard33 , ReassuredTiger98 , my bad. I just dug a bit in slack history, I think I got the issue mixed up with long file names 😞
Regarding the http/chunking issue/solution - I can't find anything either. Maybe open a GitHub issue / feature request (for chunking files)
Hi @<1524560082761682944:profile|MammothParrot39> , did you make sure to finalize the dataset you're trying to access?
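For reference, the usual flow looks something like this (names and paths are placeholders):
```python
from clearml import Dataset

# create, fill and finalize the dataset
ds = Dataset.create(dataset_name="my-dataset", dataset_project="datasets")
ds.add_files("local_data/")
ds.upload()
ds.finalize()  # without finalizing, consumers may not be able to use the version

# later, from the consuming code
ds = Dataset.get(dataset_name="my-dataset", dataset_project="datasets")
local_copy = ds.get_local_copy()
```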
What do you mean by public to private mongo? @<1734020208089108480:profile|WickedHare16>
Please add it as a code snippet.
Hi @<1529271085315395584:profile|AmusedCat74> , thanks for reporting this, I'll ask the ClearML team to look into this
Hi @<1714088832913117184:profile|MammothSnake38> , not sure I understand. Can you add a screenshot of how you currently play audio files?
In that case you are correct. If you want to have a 'central' source of data then Datasets would be the suggested approach. Regarding your question on adding data, you would always have to create a new child version and append new data to the child.
Also maybe squashing the dataset might be relevant to you - None
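A rough sketch of both options (all names are placeholders):
```python
from clearml import Dataset

# option 1: create a child version on top of the existing dataset and append new data
parent = Dataset.get(dataset_name="my-dataset", dataset_project="datasets")
child = Dataset.create(
    dataset_name="my-dataset",
    dataset_project="datasets",
    parent_datasets=[parent.id],
)
child.add_files("new_data/")
child.upload()
child.finalize()

# option 2: squash a chain of versions into a single standalone dataset
squashed = Dataset.squash(
    dataset_name="my-dataset-squashed",
    dataset_ids=[parent.id, child.id],
)
```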
Of course :)
You can select tasks in different projects in table view or you can add experiments to an existing compare
Hi @<1523701842515595264:profile|PleasantOwl46> , I think that's what's happening. If the server is down, the code continues running as if nothing happened and ClearML will simply cache all results and flush them once the server is back up
Maybe @<1523701087100473344:profile|SuccessfulKoala55> has more insight into this 🙂
Hi @<1573119955400921088:profile|CloudyPelican46> , you can certainly do this. You can find all the related api calls here - None
I suggest opening developer tools (F12) and seeing what is sent in the UI to fetch the various metrics you're looking for
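Alternatively, the SDK can pull reported scalars directly - a small sketch (the task id is a placeholder, and the exact structure of the returned dict is worth checking):
```python
from clearml import Task

task = Task.get_task(task_id="<task-id>")
scalars = task.get_reported_scalars()  # roughly {title: {series: {"x": [...], "y": [...]}}}
```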
Hi @<1523702496097210368:profile|ScantChimpanzee51> , from what version to what version did you try upgrading? Did you perform a backup?
Hi @<1523721697604145152:profile|YummyWhale40> , what if you specify the output_uri through the code in Task.init() ?
I was suspecting connectivity issues. Glad to hear it's working
UnevenDolphin73 , interesting idea! Could you open a github issue to track this?
I wasn't able to reproduce it on my side. Can you try the following?
In clearml/examples/reporting/model_config.py
Under line 45:
```python
OutputModel().update_weights('my_best_model.bin')
```
Add the following:
```python
output_model = task.models['output'][-1]
output_model.tags = ['deployed']
```
And check in the UI if you get a tag on the model