When developing I use the poetry environment, but in the queues I let ClearML handle the installation via pip.
There is no need to use poetry if the task is a one-time thing
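For completeness, the agent side of my clearml.conf just tells it to use pip (excerpt, a minimal sketch):
agent {
    package_manager {
        # let the agent install the task requirements with pip
        type: pip
    }
}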
If it were possible to override the checkout behaviour I would ignore all submodules anyway, but neither the documentation for clearml.conf nor the pipeline decorator mentions anything that would allow me to do that.
I already cleared the VCS cache manually; it results in the same behaviour illustrated above.
(although the logs show that it used the cache, I had another run without the cache as well, but I don't have the logs from that one)
If there's any mechanism that would let me constrain what the task sees, it would really help me a lot.
You mean a separate branch to work in without the submodules linked?
Not really sure how I'd go about doing that.
I'd be happier with an option to say 'pull_submodules=False'
I am getting the same when starting regular tasks.
I think it has something to do with my parameters, which contain an environment variable holding a list of datasets.
Maybe it has something to do with my general environment? I am running on WSL2 with Debian.
Yeah, I get that. But it's really hard to tell what's causing it because of the "<unknown>".
For anyone else interested in this, I wrote a little script which pulls all the data from a given project, seems to work well enough
Here is an updated and improved version.
If anyone can tell me how to improve the cookie situation, I'd be grateful.
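The core of it is roughly this (a minimal sketch; the project name is just my own, and what you do with the results is up to you):
from clearml import Task

def pull_project_data(project_name: str) -> dict:
    # query all tasks of the project from the server
    tasks = Task.get_tasks(project_name=project_name)
    results = {}
    for t in tasks:
        results[t.id] = {
            "name": t.name,
            "status": str(t.status),
            "scalars": t.get_reported_scalars(),  # all scalar series reported by the task
            "plots": t.get_reported_plots(),      # plot objects as dicts, may contain source URLs
        }
    return results

data = pull_project_data("LVGL UI Detector")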
If there are source URLs in the plots of a task, how can I authenticate against ClearML to properly download them?
Or is there some SDK way to download them?
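Something like this is what I'd hope works (a minimal sketch, assuming StorageManager picks up the credentials from clearml.conf; the URL is a placeholder):
from clearml import StorageManager

# download a plot source URL into the local cache using the configured credentials
local_path = StorageManager.get_local_copy(remote_url="https://files.clear.ml/<path to plot source>")
print(local_path)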
Here are the code files for my pipelines.
They don't work yet; I'm struggling with the pipeline stuff quite a bit.
But both pipelines always give these warnings.
Here is the latest version with all issues ironed out.
I don’t know what would cause slowness
Hi @<1523701087100473344:profile|SuccessfulKoala55>
I am using 1.8.0 for the clearml-agent.
Attached is the logfile.
A minimal illustration of the problem:
If I run model.tune(...)
from ultralytics, then it will automatically track each iteration in ClearML, and each iteration will be its own task (as it should be, given that the parameters change).
But the actual tune result will not be stored in a ClearML task, since I believe there is no integration on the ultralytics side to do so.
If I create a task myself which then performs model.tune(...)
it will get immediately overridden by the parameters fro...
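Concretely, what I'm doing looks roughly like this (a minimal sketch; the project/task names, model weights, dataset YAML and iteration count are placeholders):
from clearml import Task
from ultralytics import YOLO

# parent task that I would like to own the overall tune result
task = Task.init(project_name="LVGL UI Detector", task_name="yolo-tune")

model = YOLO("yolov8n.pt")
# the ultralytics ClearML integration creates one task per iteration;
# the parent task's parameters get overridden as soon as tuning starts
results = model.tune(data="ui_dataset.yaml", iterations=10)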
Nevermind, all I need is to use Task.get_task() with the id of the dataset, since the ID was re-used.
I'd still be interested in knowing how to retrieve the task_id of a dataset if reuse_task_id was set to false.
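For reference, this is roughly what that looks like (a minimal sketch; <dataset id> is a placeholder, and _task is an internal attribute of Dataset, so treat that part as an assumption):
from clearml import Task, Dataset

dataset = Dataset.get(dataset_id="<dataset id>")

# in my case the dataset id was reused as the backing task id
task = Task.get_task(task_id=dataset.id)

# if the ids differ (reuse_task_id=False), the backing task seems to only be
# reachable through the private attribute:
# backing_task_id = dataset._task.id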
Sure can do
Here is the code doing the reporting:
def capture_design(design_folder: str):
    import subprocess, os, shutil
    from clearml import Task
    print(f"Capturing designs from {design_folder}...")
    task = Task.current_task()
    logger = task.get_logger()
    design_files = [f for f in os.listdir(design_folder) if os.path.isfile(os.path.join(design_folder, f))]
    if len(design_files) == 0:
        print(f"No design files found in {design_folder}")
        return
    widgets = {}
    ...
Alright, good to know.
This here... I know how to get the source code info, but it doesn't include the commit ID, and I also can't access the uncommitted changes.
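In case it helps anyone later: the commit and the uncommitted diff appear to be stored on the task's script section (a minimal sketch; I'm assuming these fields are populated for the task in question):
from clearml import Task

t = Task.get_task(task_id="<task id>")
script = t.data.script        # source-control info recorded at task creation
print(script.repository)      # repository URL
print(script.branch)          # branch name
print(script.version_num)     # commit id
print(script.diff)            # uncommitted changes as a patch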
This function shows the same behaviour once the task gets initialized:
# Training helper functions
def prepare_training(env: dict, model_variant: str, dataset_id: str, args: dict, project: str = "LVGL UI Detector"):
    from clearml import Task, Dataset
    import os
    print(f"Training {model_variant} on dataset: {dataset_id}")
    # Fetch dataset YAML
    env['FILES'][dataset_id] = Dataset.get(dataset_id).list_files("*.yaml")
    # Download & modify dataset
    env['DIRS']['target'] ...
According to None, I am supposed to install libgl1.
I changed my clearml.conf to include that installation for the task container started by the agent.
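The change in clearml.conf looks roughly like this (excerpt; I'm assuming extra_docker_shell_script is the right place for it):
agent {
    # shell commands executed inside the task container before the environment is set up
    extra_docker_shell_script: [
        "apt-get update",
        "apt-get install -y libgl1",
    ]
}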
Will see if it helps in a minute
This is the full log of the task.
I am trying to run HPO.
How can I adjust the parameter overrides for the tasks spawned by the hyperparameter optimizer?
My template task has some environment-dependent parameters that I would like to clear for the newly spawned tasks, as the function that runs for each task handles the environment already.
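What I'm experimenting with is forcing the override through the parameter space itself, with a single-value range for the parameter I want cleared (a minimal sketch; the parameter names, metric names and queue are placeholders for my actual ones):
from clearml.automation import HyperParameterOptimizer, DiscreteParameterRange, UniformIntegerParameterRange

optimizer = HyperParameterOptimizer(
    base_task_id="<template task id>",
    hyper_parameters=[
        # the actual search space
        UniformIntegerParameterRange("General/epochs", min_value=10, max_value=50, step_size=10),
        # single-value "range" that overrides (clears) the environment-dependent parameter
        DiscreteParameterRange("General/env", values=[""]),
    ],
    objective_metric_title="val",
    objective_metric_series="mAP50",
    objective_metric_sign="max",
    max_number_of_concurrent_tasks=2,
    execution_queue="default",
)
optimizer.start()
optimizer.wait()
optimizer.stop()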
None of these submodules are required for the tasks, they are there for a different part of the project dealing with data generation.
So even having them fetched (even when cached) adds quite a delay to the actual task.