DisturbedParrot38
Moderator
15 Questions, 58 Answers
Active since 25 April 2024
Last activity 7 months ago

Reputation: 0
Badges: 1 (56 × Eureka!)

0 Votes · 5 Answers · 596 Views
I have an environment error when running HPO: ``RuntimeError: Dataset '/home/rini-debian/git-stash/lvgl-ui-detector/datasets/ui_randoms.yaml' error❌❌ '/home/...
7 months ago
0 Votes · 1 Answer · 680 Views
I hope I'm not too much of a bugger today.. but here's another issue I'm facing currently. Traceback (most recent call last): File "/root/.clearml/venvs-buil...
7 months ago
0 Votes · 24 Answers · 593 Views
I get these warnings whenever I run pipelines and I have no idea what they mean or where they come from: :1: SyntaxWarning: invalid decimal literal :1: SyntaxW...
8 months ago
0 Votes · 8 Answers · 615 Views
I noticed that Task.get_tasks(project_name=project) is quite slow in my project of over 300 experiments. Is there a faster way of receiving task objects?
7 months ago
0 Votes · 14 Answers · 624 Views
7 months ago
0 Votes · 2 Answers · 739 Views
Is it possible to launch the HPO application with the metrics/graphics after running the HPO from code? I am more interested in the graphical representation ...
7 months ago
0 Votes · 7 Answers · 644 Views
I keep running into this issue: error: Could not fetch origin when the docker worker fetches the repository with submodules. It has credentials via ssh-auth-...
8 months ago
0 Votes · 7 Answers · 601 Views
When running an agent inside google colab, I always get this error after dependency installation: 2024-04-27 16:54:12 ERROR: Invalid requirement: 'google goo...
7 months ago
0 Votes · 1 Answer · 590 Views
7 months ago
0 Votes · 1 Answer · 693 Views
How can I enforce that the clearml-agent starts the task container WITHOUT the dataset cache enabled?
7 months ago
0 Votes · 0 Answers · 611 Views
finally my HPO runs... what a trip
7 months ago
0 Votes · 2 Answers · 620 Views
7 months ago
0 Votes · 4 Answers · 689 Views
How can I access the commit and uncommitted changes information displayed on the WebApp on the execution tab of a task? I don't see corresponding functions i...
7 months ago
0 Votes · 2 Answers · 518 Views
When I created a dataset by specifying reuse_task_id, how can I then access the plots and scalars of the task corresponding to the dataset? I don't see a funct...
7 months ago
0 Votes · 6 Answers · 603 Views
Is there a way to export all data/artifacts from multiple experiments from the WebApp?
7 months ago
0 Hi! I Have A Question About The Integration Of ClearML With YOLOv8 (Or Otherwise Known As Ultralytics). I Have Written A Generic Task To Run The Ultralytics Tuner Function. However, I Think That There Isn't A Good Integration For That Specific Task Bet

A minimal illustration of the problem:

If I run model.tune(...) from ultralytics, it will automatically track each iteration in ClearML, and each iteration will be its own task (as it should be, given that the parameters change).

But the actual tune result will not be stored in a ClearML task, since I believe there is no integration on the ultralytics side to do so.
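
Roughly, the first case looks like this (the weights and dataset yaml here are just placeholders, not my actual setup):

from ultralytics import YOLO

# Placeholder weights/dataset config, not my actual project files.
model = YOLO("yolov8n.pt")
# With the ClearML integration active, each tuning iteration is tracked as
# its own task, but the aggregated tune result itself ends up nowhere.
model.tune(data="coco8.yaml", epochs=10, iterations=5)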

If I create a task myself which then performs model.tune(...) it will get immediately overridden by the parameters fro...

7 months ago
0 I Have An Environment Error When Running HPO:

How can I adjust the parameter overrides for tasks spawned by the hyperparameter optimizer?

My template task has some environment-dependent parameters that I would like to clear for the newly spawned tasks, as the function that is run for each task handles the environment already.
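
For reference, my setup looks roughly like this (task ID, parameter names and metric names are placeholders); I assume pinning a parameter to a single value would also override whatever the template task has stored:

from clearml.automation import (
    HyperParameterOptimizer,
    UniformIntegerParameterRange,
    DiscreteParameterRange,
)

# Placeholder task ID, parameter names and metric names.
optimizer = HyperParameterOptimizer(
    base_task_id="<template-task-id>",
    hyper_parameters=[
        UniformIntegerParameterRange("General/epochs", min_value=5, max_value=50, step_size=5),
        # Pinning a single value should force it onto every spawned task,
        # e.g. clearing an environment-specific path from the template.
        DiscreteParameterRange("General/dataset_path", values=[""]),
    ],
    objective_metric_title="metrics",
    objective_metric_series="mAP",
    objective_metric_sign="max",
    execution_queue="default",
)
optimizer.start_locally()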

7 months ago
0 I Believe I Discovered A Bug Or At Least Weird Behavior In The ClearML Scalar Reporting Mechanism. In My Data Processing Task, I Have A Metric, Which In Theory As Well As In The Implementation Can Only Ever Increase In Value. I Report The Scalar In Each

Here is the code doing the reporting:

def capture_design(design_folder: str):
    import subprocess, os, shutil
    from clearml import Task
    print(f"Capturing designs from {design_folder}...")
    # All reporting goes through the currently running task's logger
    task = Task.current_task()
    logger = task.get_logger()
    # Collect every file sitting directly in the design folder
    design_files = [f for f in os.listdir(design_folder) if os.path.isfile(os.path.join(design_folder, f))]
    if len(design_files) == 0:
        print(f"No design files found in {design_folder}")
        return
    widgets = {}
  ...
7 months ago
0 I Have An Environment Error When Running HPO:

Hey, I should have closed this...

The thing I was looking for is called set_parameter on the task.
The HPO uses a task I created previously, and I had trouble with that since it contained a path which wasn't available on the colab instance.
I fixed my code so it always updates this parameter depending on the environment.

It was less of an HPO issue and more of a programming failure in the function, which didn't properly update the parameter even though I thought it should.
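
In the end it was a one-liner in the template function, something along these lines (the parameter name and path are just examples):

from clearml import Task

# Example only: "General/dataset_path" and the path are placeholders.
task = Task.current_task()
# Overwrite the stored value so cloned/spawned tasks don't inherit a stale path.
task.set_parameter("General/dataset_path", "/path/valid/in/this/environment")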

7 months ago
0 Is There A Way To Export All Data/Artifacts From Multiple Experiments From The WebApp?

If there are source URLs in the plots of the task, how can I authenticate against ClearML to properly download them?

Or is there some SDK way to download them?
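
What I was hoping for is something along these lines, assuming StorageManager picks up the fileserver credentials from clearml.conf (I haven't verified this covers the plot URLs):

from clearml import Task, StorageManager

task = Task.get_task(task_id="<task-id>")  # placeholder ID

# Artifacts can be pulled directly through the SDK.
for name, artifact in task.artifacts.items():
    print(name, "->", artifact.get_local_copy())

# For a raw URL referenced inside a plot, StorageManager should handle the
# authentication against the ClearML fileserver via the clearml.conf credentials.
local_copy = StorageManager.get_local_copy("<url-from-plot>")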

7 months ago
0 Is There A Way To Export All Data/Artifacts From Multiple Experiments From The WebApp?

For anyone else interested in this, I wrote a little script which pulls all the data from a given project; it seems to work well enough.
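
Stripped down, the gist of it is roughly this (the project name is a placeholder, and the actual script does a bit more than this sketch):

from clearml import Task

PROJECT = "lvgl-ui-detector"  # placeholder project name

# Walk every task in the project and dump its scalars and artifacts.
for task in Task.get_tasks(project_name=PROJECT):
    scalars = task.get_reported_scalars()
    print(task.id, task.name, list(scalars.keys()))
    for name, artifact in task.artifacts.items():
        print("  artifact:", name, artifact.get_local_copy())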

7 months ago
0 Is There A Way To Export All Data/Artifacts From Multiple Experiments From The WebApp?

Here is an updated and improved version.

If anyone can tell me how to improve the cookie situation, I'd be grateful.

7 months ago
0 How Can I Tell ClearML To Ignore Certain Submodules Existing In The Project? My Project Consists Of Multiple Git Submodules And It Is Rather Annoying That The Task Always Tries To Fetch All Submodules, When They Are Not Even Necessary. I Don't Know How I

Yeah, but even though it's cached it takes quite a long time, because my project has a lot of submodules, due to the submodules having their own submodules as well.

I don't really understand why fetching the submodules is the default.

7 months ago
0 How Can I Tell ClearML To Ignore Certain Submodules Existing In The Project? My Project Consists Of Multiple Git Submodules And It Is Rather Annoying That The Task Always Tries To Fetch All Submodules, When They Are Not Even Necessary. I Don't Know How I

Just to give an idea of the scale of the problem on my side.

These are the submodules...

Fetching submodule lvgl_ui_generator
2024-04-27 20:45:34
Fetching submodule lvgl_ui_generator/lv_drivers
Fetching submodule lvgl_ui_generator/lvgl
Fetching submodule lvgl_ui_generator_v2
Fetching submodule lvgl_ui_generator_v2/lv_micropython
Fetching submodule lvgl_ui_generator_v2/lv_micropython/lib/asf4
2024-04-27 20:45:40
Fetching submodule lvgl_ui_generator_v2/lv_micropython/lib/axtls
Fetch...
7 months ago
0 How Can I Tell ClearML To Ignore Certain Submodules Existing In The Project? My Project Consists Of Multiple Git Submodules And It Is Rather Annoying That The Task Always Tries To Fetch All Submodules, When They Are Not Even Necessary. I Don't Know How I

None of these submodules are required for the tasks; they are there for a different part of the project dealing with data generation.

So even having them fetched (even when cached) adds quite a delay to the actual task.

7 months ago
0 I noticed that `Task.get_tasks(project_name=project)` is quite slow in my project of over 300 experiments. Is there a faster way of receiving task objects?

My experiments all use YOLOv8 and contain the data that is gathered there automatically.
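
One idea is to only query the task IDs first and fetch the full objects lazily, assuming Task.query_tasks is available in the installed clearml version:

from clearml import Task

# Returns lightweight results (task IDs) instead of fully populated Task objects.
task_ids = Task.query_tasks(project_name="lvgl-ui-detector")  # placeholder project name

# Pull the heavy Task object only for the entries that are actually needed.
for task_id in task_ids[:10]:
    task = Task.get_task(task_id=task_id)
    print(task.name)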

7 months ago
0 Hi Guys! Anyone Else Has Trouble Caching Virtual Environments In The Agent? I Manually Installed A Single Agent On A Virtual Machine, And I'm Using This Conf For Venv Caching:

I noticed poetry can be a problem in my run: not specifically due to the cache, but due to the installation of many more packages than the runtime might need.

When using regular pip, the agent will use the requirements list determined by ClearML to install the necessary packages, which usually already excludes all dev tools and similar.

I am not sure if poetry uses the cache properly, but I can't verify that at the moment either.

7 months ago
0 I Have An Environment Error When Running HPO:

Back when I wrote this, I thought HPO did something magical to overwrite the general args of the task when cloning.
It turns out it was just my code that was missing a more explicit set_parameter for this environment path.

7 months ago
0 When I Created A Dataset By Specifying

Never mind, all I need is to use Task.get_task() with the ID of the dataset, since the ID was re-used.

I'd still be interested in knowing how to retrieve the task_id of a dataset if reuse_task_id was set to false.
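
In other words, something like this works for me (the project and dataset names are placeholders):

from clearml import Dataset, Task

# Placeholders for my dataset; only the ID matters here.
dataset = Dataset.get(dataset_project="lvgl-ui-detector", dataset_name="ui_randoms")
# Since the task ID was re-used, the dataset ID is also the backing task's ID.
task = Task.get_task(task_id=dataset.id)
print(task.get_reported_scalars())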

7 months ago
0 I Hope I'm Not Too Much Of A Bugger Today.. But Here's Another Issue I'm Facing Currently.

According to None I am supposed to install libgl1.

I changed my clearml.conf to include that installation for the task container started by the agent.

Will see if it helps in a minute
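
If the conf route doesn't pan out, I suppose the same thing could be done per task from code, assuming the installed clearml version supports the setup bash script argument (names are placeholders):

from clearml import Task

task = Task.init(project_name="lvgl-ui-detector", task_name="train")  # placeholders
# Ask the agent to run these install steps inside the task container before execution.
task.set_base_docker(
    "ultralytics/ultralytics:latest",  # placeholder container image
    docker_setup_bash_script=["apt-get update", "apt-get install -y libgl1"],
)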

7 months ago
0 I Keep Running Into This Issue:

On another attempt with a cleaned repository (no dirty commits) I get the same result, even though it states that it got a new commit ID, so I'm at a loss as to what is actually going wrong here:

Using cached repository in "/root/.clearml/vcs-cache/lvgl-ui-detector.git.7c8ae2688810ceed26c1ebcc1e911cf2/lvgl-ui-detector.git"
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 8 (delta 4), reused 7 ...
8 months ago
0 I Keep Running Into This Issue:

I already cleared the vcs cache manually; it results in the same behaviour illustrated above
(although the logs show that it used the cache, I had another run without the cache, but I don't have the logs from that)

8 months ago
0 I Keep Running Into This Issue:

Hi @SuccessfulKoala55

I am using 1.8.0 for the clearml-agent.

Attached is the logfile.

8 months ago
0 I Keep Running Into This Issue:

Any help would be greatly appreciated

8 months ago
0 I Get These Warnings Whenever I Run Pipelines And I Have No Idea What They Mean Or Where They Come From:

I am getting the same when starting regular tasks.
I think it has something to do with my parameters, which contain an environment variable that contains a list of datasets.

7 months ago