
Is there some verbose mode I could run it with?
I have a strong suspicion it is related to the parameters of my function, or more generally the hyperparameters gathered by the task automatically.
I am getting the same when starting regular tasks.
I think it has something to do with my parameters, which contain an environment variable that holds a list of datasets.
Here is the latest version with all issues ironed out.
Maybe it has something to do with my general environment? I am running on WSL2 in Debian.
Not that I would know of..
I attached the possible problematic argument.
The strings are just regular strings, nothing fancy there.
args: {'epochs': 3, 'imgsz': 480, 'data': '/home/rini-debian/git-stash/lvgl-ui-detector/datasets/ui_randoms.yaml'}
model_variant: yolov8n
dataset_id: 50e10f640d7548458d9c38ab9aac571b
Sure can do
Yup.
I really don't know what it's about.
It doesn't affect the process. Everything seems to run fine.
If the warnings provided a bit more info I could maybe pinpoint it better, but that's really all I've got.
I don’t know what would cause slowness
Hmm, yeah, might be the case.
My experiments all use YOLOv8, and they contain the data that is gathered there automatically.
I'm using the app.clearml server.
This is the full log of the task.
I am trying to run HPO.
Yea, I get that.. But it's really hard to tell what's causing it due to the "<unknown>"
It happens on all of my pipeline run attempts and there's nothing more that gives insight.
As an example:
python src/train.py
ClearML Task: created new task id=102a4f25c5ac4972abd41f1d0b6b9708
ClearML results page:
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal...
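One way to get more detail out of these (a sketch, not something confirmed to work here): escalate SyntaxWarning to an error so it comes with a full traceback instead of the "<unknown>:1" one-liner, either by running the script with python -W error::SyntaxWarning or by installing a warnings filter early in the script:

import warnings

# Turn SyntaxWarning into an exception so the offending call shows
# a full traceback instead of an "<unknown>:1" one-liner.
warnings.filterwarnings("error", category=SyntaxWarning)

# ... rest of the training script (Task.init, model setup, etc.) ...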
How can I adjust the parameter overrides from tasks spawned by the hyperparameter optimizer?
My template task has some environment-dependent parameters that I would like to clear for the newly spawned tasks, since the function that is run for each task handles the environment already.
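For reference, this is roughly the shape of the setup (a sketch assuming the ClearML HyperParameterOptimizer; the parameter section/names and metric names are placeholders, not my actual ones). One way to force a fixed value onto every spawned task is a single-value parameter "range":

from clearml.automation import (
    DiscreteParameterRange,
    HyperParameterOptimizer,
    UniformIntegerParameterRange,
)

# Sketch: base_task_id points at the template task; names below are placeholders.
optimizer = HyperParameterOptimizer(
    base_task_id="<template-task-id>",
    hyper_parameters=[
        UniformIntegerParameterRange("General/epochs", min_value=1, max_value=10),
        # A single-value range overrides the template's value in every
        # spawned task, which can be used to neutralize an
        # environment-dependent parameter.
        DiscreteParameterRange("General/data", values=[""]),
    ],
    objective_metric_title="metrics",
    objective_metric_series="mAP50",
    objective_metric_sign="max",
)
optimizer.start()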
A minimal illustration of the problem:
If I run model.tune(...) from ultralytics, then it will automatically track each iteration in ClearML, and each iteration will be its own task (as it should be, given that the parameters change).
But the actual tune result will not be stored in a ClearML task, since I believe there is no integration on the ultralytics side to do so.
If I create a task myself which then performs model.tune(...), it will get immediately overridden by the parameters fro...
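To make the pattern concrete (a sketch assuming the ultralytics YOLO API; the project name, dataset path, and iteration count are placeholders):

from clearml import Task
from ultralytics import YOLO

# Sketch of the pattern described above: a manually created task that
# wraps an ultralytics tuning run. Each tune iteration is tracked by the
# ultralytics ClearML integration as its own task.
task = Task.init(project_name="LVGL UI Detector", task_name="yolov8n tune")

model = YOLO("yolov8n.pt")
model.tune(data="path/to/dataset.yaml", epochs=3, iterations=10)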
Hey. I should have closed this..
The thing that I was looking for is called set_parameter on the task.
The HPO uses a task I created previously, and I had trouble with that since it contained a path which wasn't available on the Colab instance.
I fixed my code, so it always updates this parameter depending on the environment.
It was less of an HPO issue and more of a programming failure in the function, which didn't properly update the parameter even though I thought it should.
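Roughly what the fix looks like (a sketch; the parameter name and path are placeholders, not my actual ones):

from clearml import Task

# Sketch: overwrite an environment-dependent path parameter on the task
# so it always matches the machine the code is currently running on.
task = Task.init(project_name="LVGL UI Detector", task_name="yolov8n training")

local_yaml = "/path/that/exists/here/ui_randoms.yaml"  # placeholder
task.set_parameter("General/data", local_yaml)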
This here.. I know how to get the source code info, but it doesn't include the commit ID. And I also cannot access the uncommitted changes.
It comes from PipelineDecorator.pipeline, I assume, or from PipelineDecorator.component.
Never mind, all I need is to use Task.get_task() with the ID of the dataset, since the ID was re-used.
I'd still be interested in knowing how to retrieve the task_id of a dataset if reuse_task_id was set to false.
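For the re-used case, a sketch of the approach (the ID below is just the example from above):

from clearml import Dataset, Task

dataset_id = "50e10f640d7548458d9c38ab9aac571b"  # example ID from above

# When the task ID is re-used, the dataset ID is also the ID of its
# backing task, so the task can be fetched directly.
dataset_task = Task.get_task(task_id=dataset_id)
dataset = Dataset.get(dataset_id=dataset_id)
print(dataset_task.name, dataset.name)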
This function shows the same behaviour once the task gets initialized:
# Training helper functions
def prepare_training(env: dict, model_variant: str, dataset_id: str, args: dict, project: str = "LVGL UI Detector"):
    from clearml import Task, Dataset
    import os
    print(f"Training {model_variant} on dataset: {dataset_id}")
    # Fetch dataset YAML
    env['FILES'][dataset_id] = Dataset.get(dataset_id).list_files("*.yaml")
    # Download & modify dataset
    env['DIRS']['target'] ...
I cleared the VCS cache manually already; it results in the same behaviour illustrated above
(although the logs show that it used the cache; I had another run without the cache, but don't have the logs from that).
Yea, but even though it's cached it takes quite a long time, because my project has a lot of submodules, and those submodules have their own submodules as well.
I don't really understand why fetching the submodules is the default.
If there's any mechanism that would allow me to constrain what the task sees, it would really help me a lot.