I think this is referring to your configuration file ~/clearml.conf. Follow the instructions in the message to remove it, or you can just ignore it.
Aren't you already getting logs from the Docker container via ClearML? I think you can build that capability fairly easily with ClearML, maybe open a PR?
ScaryBluewhale66 ,
If you want to re-run, you need the agent. It's still a Task object, so you can just use Task.close(). I'm not sure if something like that exists at the moment, but you could write it fairly easily in code.
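A minimal sketch of that, assuming you already have the ID of the task in question (the ID below is a placeholder):
```python
from clearml import Task

# Assumption: you have a handle to the task you want to close.
task = Task.get_task(task_id="<your_task_id>")

# It's still a regular Task object, so close() works on it as usual.
task.close()
```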
Hi FierceHamster54 , I'm afraid currently this is not possible. Maybe open a GitHub issue to track this 🙂
I think something like that exists; it appears to be part of the paid version, called Hyper-Datasets. The documentation is open for all, apparently 🙂
As far as I know, Hyper-Datasets also supports csv/tabular data quite well 🙂
Hi DrabOwl94 , how did you create/save/finalize the dataset?
Hi PerplexedElk26 , It seems you are correct. This capability will be added in the next version of the server.
Hi SparklingElephant70,
Can you please provide a screenshot of the error?
Hi @<1635813046947418112:profile|FriendlyHedgehong10> , I think for this you need to create a child version and only add the new files to the child.
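A rough sketch of how that could look (dataset name, project, parent ID and path below are placeholders, not from your setup):
```python
from clearml import Dataset

# Create a child version that inherits everything from the parent version.
child = Dataset.create(
    dataset_name="my_dataset",          # placeholder name
    dataset_project="my_project",       # placeholder project
    parent_datasets=["<parent_dataset_id>"],
)

# Only add the new/changed files; the rest is inherited from the parent.
child.add_files(path="/path/to/new_files")

child.upload()
child.finalize()
```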
Hi @<1681836303299121152:profile|RoundElk14> , I suggest watching this - None
Hi StraightParrot3 ,
I'm not sure if thumbnails are supported inside tables. AgitatedDove14 , what do you think?
Hi @<1709015393701466112:profile|ScatteredPeacock14> , you are correct, this feature is available only in the Scale/Enterprise plans.
I'm afraid there isn't anything besides unregistering/re-registering
VictoriousPenguin97 , I managed to reproduce the issue with 1.1.3 as well. It should be fixed in the next version 🙂
Meanwhile, as a workaround, please try using a shorter file name. The file name you provided is almost 200 characters long.
Keeping it under 150 characters will still work (I made sure to test it).
Hi @<1717350332247314432:profile|WittySeal70> , I think that task.get_reported_plots() is indeed what you're looking for. You might have to do some filtering there
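Something along these lines (the task ID is a placeholder, and the 'metric' key for the plot title is my assumption, so check the returned dicts):
```python
from clearml import Task

task = Task.get_task(task_id="<your_task_id>")

# Returns a list of dicts describing the plots reported to this task.
plots = task.get_reported_plots()

# You may need to filter by the fields you care about,
# e.g. the plot title (assuming it's stored under the 'metric' key).
my_plots = [p for p in plots if p.get("metric") == "<plot_title>"]
```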
Can you please add here what you're sending + what is received?
Hi @<1774245220934750208:profile|GleamingTiger28> , you basically need to build it in your code and expose it as parameters, please see the examples for reference - None
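As a rough illustration of exposing parameters from your code (the parameter names here are just examples):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="configurable task")

# Expose your settings as task parameters so they can be edited in the UI
# and overridden when the task is cloned/enqueued.
params = {
    "batch_size": 32,        # example parameter
    "learning_rate": 0.001,  # example parameter
}
params = task.connect(params)

print(params["batch_size"], params["learning_rate"])
```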
Can you please elaborate?
Hi BroadSeaturtle49 , can you please elaborate on what the issue is?
Hi BoredBluewhale23 ,
How did you configure the apiserver when you raised the EKS K8S cluster?
The agent prints its configuration before the execution step, and I don't see agent.git_pass set anywhere in the log. Are you sure you set it up on the correct machine? It needs to be configured on the machine running the agent.
Hi UnevenDolphin73 , I think this is analyzed in the code
Hi NastySeahorse61 ,
It looks like deleting smaller tasks didn't make much of a dent. Do you have any tasks that ran for very long or were very intensive on reporting to the server?
Oh I see. Technically speaking, the pipeline controller is itself a task of a special type. So you could provide the task ID of the controller and clone that. You would need to make sure the relevant system tags are also applied so it shows up properly as a pipeline in the web UI.
In addition to that, you can also trigger it using the API.
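A minimal sketch of that idea via the SDK (controller ID and queue name are placeholders; the same flow should also be possible through the REST API, if I recall the tasks.clone / tasks.enqueue endpoints correctly):
```python
from clearml import Task

# Assumption: this is the task ID of an existing pipeline controller.
controller_id = "<pipeline_controller_task_id>"

# Clone the controller task (it is a regular Task under the hood).
new_run = Task.clone(source_task=controller_id, name="pipeline re-run")

# NOTE: verify the clone carries the controller's system tags (e.g. "pipeline"),
# otherwise it won't be listed under Pipelines in the web UI.

# Enqueue it so an agent (e.g. on the services queue) executes it.
Task.enqueue(new_run, queue_name="services")
```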
I think it is one of the parameters of the task. Fetch a Task and see what properties the artifact has 🙂
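For example, something like this to poke around (task ID and artifact name are placeholders):
```python
from clearml import Task

task = Task.get_task(task_id="<your_task_id>")

# task.artifacts is a dict-like mapping of artifact name -> artifact object.
artifact = task.artifacts["<artifact_name>"]

# Inspect what the artifact object exposes (url, size, hash, etc.).
print(artifact.url)
print(dir(artifact))

# get_local_copy() downloads the artifact and returns a local path.
local_path = artifact.get_local_copy()
```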
Hi, I know that this is a known issue and is supposed to have a hotfix coming really soon.
Regarding your question, this is what I found - None
HelplessCrocodile8 , we managed to reproduce the issue. It looks like it occurs in this specific case only on Python 3.9 + Windows. We've logged it and it should be fixed. I'll let you know when it is 🙂
HelplessCrocodile8 what version of clearml are you using?
Hi @<1607184400250834944:profile|MortifiedChimpanzee9> , to use a specific requirements.txt you can use Task.add_requirements
None
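For example (it must be called before Task.init; the path is a placeholder):
```python
from clearml import Task

# Must be called *before* Task.init for it to take effect.
# Passing a path to a requirements.txt uses that file instead of the
# automatically detected packages.
Task.add_requirements("/path/to/requirements.txt")

task = Task.init(project_name="examples", task_name="custom requirements")
```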