ExasperatedCrab78
Moderator
2 Questions, 221 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 2 × Eureka!
0 Votes 5 Answers 993 Views
We're working on ClearML serving right now and are very interested in what you all are searching for in a serving engine, so we can make the best serving eng...
2 years ago
0 Votes 0 Answers 902 Views
A little something else: Using ClearML, an OAK-1 AI camera and a raspberry pi to create a pushup counter that locks my PC every hour and only unlocks again w...
2 years ago
0 Hey, If I Uploading A New Dataset With Parent, But The Child Dataset==Parent Dataset It Will Work? Thanks

The scheduler just downloads a dataset using the ID, right? So if you don't upload a new dataset, the scheduler just downloads the dataset from the last known ID. I don't really see how that could lead to a new dataset with its own ID as the parent. Would you mind explaining your setup in a little more detail? 🙂
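
A minimal sketch of what I mean (the dataset ID here is a placeholder): resolving a dataset purely by ID always returns that exact version, it never creates a child dataset with its own ID.

from clearml import Dataset

# The scheduler-style lookup: same ID in, same dataset version out.
ds = Dataset.get(dataset_id="<existing_dataset_id>")
local_path = ds.get_local_copy()  # read-only cached copy of that version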

2 years ago
0 Hi Guys! I Am New To Clearml And I Was Trying Out This Simple Code And It Took 4Min To Run. Is This Normal?

Hey @<1541592213111181312:profile|PleasantCoral12> thanks for doing the profiling! This looks pretty normal to me, although 37 seconds for a dataset.get is definitely too much. I just checked and for me it takes 3.7 seconds. Mind you, the .get() method doesn't actually download the data, so the dataset size is irrelevant here.

But the slowdowns do seem to only occur when doing api requests. Possible next steps could be:

  • Send me your username and email address (maybe dm if you don't wa...
one year ago
0 Hey, If I Uploading A New Dataset With Parent, But The Child Dataset==Parent Dataset It Will Work? Thanks

Hello!
What is the use case here, why would you want to do that? If they're the same dataset, you don't really need lineage, no?

2 years ago
0 Hi Guys! I Am New To Clearml And I Was Trying Out This Simple Code And It Took 4Min To Run. Is This Normal?

It shouldn't quite take 4 minutes. We did just have a server update so maybe there are some delays from there?

That said, it will take a little time, even if the code itself is very simple. This is a fixed-cost overhead of ClearML analysing your runtime environment, packages, etc.
This runs async though, so when the code itself takes more time you won't notice this fixed cost as much; of course, with just printing text it will be the majority of the runtime 🙂
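
To make that fixed cost concrete, here is a minimal sketch (project and task names are made up): even a script that only prints something pays the one-time environment-analysis overhead of Task.init, which normally runs in the background.

from clearml import Task

# Fixed-cost setup: repo scan, package detection, server handshake.
task = Task.init(project_name="demo", task_name="hello-world")
print("hello world")  # the actual "work" is negligible here
task.close()  # flush everything and mark the task completed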

one year ago
0 Hi Guys! I Am New To Clearml And I Was Trying Out This Simple Code And It Took 4Min To Run. Is This Normal?

Hi @<1541592213111181312:profile|PleasantCoral12> thanks for sending me the details. Out of curiosity, could it be that your codebase / environment (apart from the clearml code, e.g. the whole git repo) is quite large? ClearML does a scan of your repo and packages every time a task is initialized, maybe that could be it. In the meantime I'm asking our devs if they can see any weird lag with your account on our end 🙂

one year ago
0 Hi Guys! I Am New To Clearml And I Was Trying Out This Simple Code And It Took 4Min To Run. Is This Normal?

How large are the datasets? To learn more you can always try running something like line_profiler/kernprof to see exactly how long a specific Python line takes. How fast/stable is your internet?
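
For illustration, a hedged sketch of line-level profiling with line_profiler (the dataset project/name are placeholders); running the script via kernprof -l -v script.py prints per-line timings for the decorated function.

from clearml import Dataset

@profile  # injected as a builtin by kernprof when run with -l
def load_data():
    ds = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
    return ds.get_local_copy()  # downloads and extracts the dataset files

if __name__ == "__main__":
    load_data()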

one year ago
0 Hi Community, How Can I Prevent Clearml Creating A New Experiment, Each Time I Interrupt And Restart Training On The Same Task? I'M Training Yolov8 And Clearml Docker Usage Is Up To 30Gb. I Can'T See A Yaml Config Parameter For This.

Hey @<1539780305588588544:profile|ConvolutedLeopard95> , unfortunately this is not built into the YOLOv8 tracker. Would you mind opening an issue on the YOLOv8 github page and tagging me? (I'm thepycoder on github)

I can then follow up on its progress, because it makes sense to expose this parameter through the YAML.

That said, to help you right now, please change [this line](https://github.com/ultralytics/ultralytics/blob/fe61018975182f4d7645681b4ecc09266939dbfb/ultralytics/yolo/uti...

one year ago
0 Hey Everyone, Is It Possible To Use The

Yes you can! The filter syntax can be quite confusing, but for me it helps to print task.__dict__ on an existing task object to see what options are available. You can reference values in a nested dict by joining the keys into a single string with a . between them.

Example code:

from clearml import Task

task = Task.get_task(task_id="17cbcce8976c467d995ab65a6f852c7e")
print(task.__dict__)

list_of_tasks = Task.query_tasks(task_filter={
    "all": dict(fields=['hyperparams.General.epochs.value'], p...

one year ago
0 Hi, Is There A Way To Log Sklearn Metrices (Like Accuracy/Precision) In A Tabular Way Rather Than Plot ?

I agree, I came across the same issue too. But your post helps make it clear, so hopefully it can be pushed! 🙂

2 years ago
0 Tasks Can Be Put In Draft State - If We Will Execute:

It depends on how complex your configuration is, but if config elements are all that will change between versions (i.e. not the code itself) then you could consider using parameter overrides.

A ClearML Task can have a number of "hyperparameters" attached to it. But once that task is cloned and in draft mode, one can EDIT these parameters and change them. If the task is then queued, the new parameters will be injected into the code itself.

A pipeline is no different, it can have pipeline par...
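
A minimal sketch of the parameter-override idea (the config keys and values are hypothetical):

from clearml import Task

task = Task.init(project_name="demo", task_name="configurable-run")
config = {"learning_rate": 0.001, "batch_size": 32}
config = task.connect(config)  # shows up as editable hyperparameters in the UI

# When a cloned draft of this task is edited and enqueued, the agent injects the
# edited values back into this dict at runtime; the code itself never changes.
print(config["learning_rate"])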

one year ago
0 Hey Everyone, I Have Another Question: Is It Possible To Change Agent Config For Each Task? E.G.

Hi ReassuredTiger98 !

I'm not sure the above will work. Maybe I can help in another way though: when you want to set agent.package_manager.system_site_packages = true, does that mean you have a docker container with some of the correct packages installed? In case you use a docker container, there is no real need to create a virtualenv anyway and you might use the env var CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1 to just install all packages in the root environment.

Because ev...

one year ago
0 Hello Everyone ! As You Can Observe In Attached Snipped, In My Code I Freeze The Env, And The Agent Install Every Cached Dependency With The Same Version. Is There Any Way That The Agent On My Side (My Computer) Will Straightly Use The Virtual Environment

Hi ExasperatedCrocodile76 ,

You can try running the agent with these environment variables set to 1:

CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1

There are more env vars here: https://clear.ml/docs/latest/docs/clearml_agent/clearml_agent_env_var

Does that work for you?

one year ago
0 When Using Dataset.Get_Local_Copy(), Once I Get The Location, Can I Add Another Folder Inside Location Add Some Files In It, Create A New Dataset Object, And Then Do Dataset.Upload(Location)? Should This Work? Or Since Its Get_Local_Copy, I Won'T Be Able

Hi Fawad!
You should be able to get a local mutable copy using Dataset.get_mutable_local_copy and then create a new dataset.
But personally I prefer this workflow:

dataset = Dataset.get(dataset_project=CLEARML_PROJECT, dataset_name=CLEARML_DATASET_NAME, auto_create=True, writable_copy=True)
dataset.add_files(path=save_path, dataset_path=save_path)
dataset.finalize(auto_upload=True)
The writable_copy argument gets a dataset and creates a child of it (a new dataset with your ...

2 years ago
0 When Using Dataset.Get_Local_Copy(), Once I Get The Location, Can I Add Another Folder Inside Location Add Some Files In It, Create A New Dataset Object, And Then Do Dataset.Upload(Location)? Should This Work? Or Since Its Get_Local_Copy, I Won'T Be Able

It's part of the design, I think. It makes sense that if we want to keep track of changes, we always build on top of what we already have 🙂 I think of it like a commit: I'm adding files in a NEW commit, not in the old one.

2 years ago
0 When Using Dataset.Get_Local_Copy(), Once I Get The Location, Can I Add Another Folder Inside Location Add Some Files In It, Create A New Dataset Object, And Then Do Dataset.Upload(Location)? Should This Work? Or Since Its Get_Local_Copy, I Won'T Be Able

That makes sense! Maybe something like dataset querying as is used in the clearml hyperdatasets might be useful here? Basically you'd query your dataset to only include the samples you want, and have the query itself be a hyperparameter in your experiment?

2 years ago
0 When Using Dataset.Get_Local_Copy(), Once I Get The Location, Can I Add Another Folder Inside Location Add Some Files In It, Create A New Dataset Object, And Then Do Dataset.Upload(Location)? Should This Work? Or Since Its Get_Local_Copy, I Won'T Be Able

Wait is it possible to do what i'm doing but with just one big Dataset object or something?

Don't know if that's possible yet, but maybe something like the proposed querying could help here?

2 years ago
0 When Using Dataset.Get_Local_Copy(), Once I Get The Location, Can I Add Another Folder Inside Location Add Some Files In It, Create A New Dataset Object, And Then Do Dataset.Upload(Location)? Should This Work? Or Since Its Get_Local_Copy, I Won'T Be Able

Cool! 😄 Yeah, that makes sense.

So (just brainstorming here) imagine you have your dataset with all samples inside. Every time N new samples arrive they're just added to the larger dataset in an incremental way (with the 3 lines I sent earlier).
So imagine if we could query/filter that large dataset to only include a certain datetime range. That range filter is then stored as hyperparameter too, so in that case, you could easily rerun the same training task multiple times, on differe...
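
Purely as a sketch of that brainstorm (all names and the date format are made up, and plain ClearML datasets have no built-in querying, so the filtering here would be client-side):

from clearml import Task, Dataset

task = Task.init(project_name="demo", task_name="train-on-range")
# The "query" itself, stored as editable hyperparameters on the task.
params = task.connect({"date_from": "2023-01-01", "date_to": "2023-02-01"})

dataset = Dataset.get(dataset_project="demo", dataset_name="all_samples")
local_path = dataset.get_local_copy()
# ...keep only the files in local_path that fall inside the connected date range, then train...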

2 years ago
0 Hi, I Am Trying The Triggerscheduler To Catch When A User Add Specific Tag To A Task. I Used The Below Code But The Schedule_Function Is Not Called When Adding Tags To Task (It Seems The Task.Last_Update Is Not Modified After Adding Tag)

Can you elaborate a bit more? I don't quite understand yet. So it works when you update an existing task by adding a tag to it, but it doesn't work when adding a tag for the first time?

2 years ago
0 Can We Use The Simple Docker-Compose.Yml File For Clearml Serving On A Huggingface Model (Not Processed To Tensorrt)?

As I understand it, vertical scaling means giving each container more resources to work with. This should always be possible in a k8s context, because you decide which types of machines go in your pool and you define the requirements for each container yourself 🙂 So if you want to set the container to use 10,000 CPUs, feel free! Unless you mean something else with this, in which case please counter!

one year ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’T Work):

Alright, a bit of searching later and I've found 2 things:

  • You were right about the task! I've staged a fix here. It basically detects whether a task is already running (e.g. from the pipelinedecorator component) and if so, uses that task instead. We should probably do this for all of our integrations.
  • But then I found another bug. Basically the pipeline decorator task wou...
one year ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’T Work):

@<1523701949617147904:profile|PricklyRaven28> Please use this patch instead of the one previously shared. It excludes the dict hack :)

one year ago
0 Another Question On The Topic Of How A Remote Execution Of A Pipeline Kills The Calling Process (Previously Discussed

Also, the answer to blocking on the pipeline might be in the .wait() function: https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#wait-1

TimelyPenguin76 I can't seem to make it work though; on which object should I run the .wait() method?

2 years ago
0 Hey, Is There An Easy Way To Retrieve The Code Used To Run An Experiment? Without Recreating The Whole Environment Etc. The Problem: I Have Ran A

If you didn't use git, then ClearML saves your .py script completely in the uncommitted changes section like you say. You should be able to just copy-paste it to get the code. In what format are your uncommitted changes logged? Can you paste a screenshot or the contents of uncommitted changes?
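
If copy-pasting from the UI gets tedious, a hedged sketch of pulling the same content programmatically (the exact field layout of the exported task dict is an assumption on my part):

from clearml import Task

task = Task.get_task(task_id="<task_id>")  # placeholder ID
exported = task.export_task()  # plain dict representation of the task
print(exported.get("script", {}).get("diff", ""))  # uncommitted changes / stored script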

2 years ago
0 I Am Looking For The Dataset Used In Sarcasm Detection Demo

Great to hear! Then it comes down to waiting for the next Hugging Face release!

one year ago
0 Hi Everybody, I’M Getting Errors With Automatic Model Logging On Pytorch (Running On A Dockered Agent).

Did you by any chance save the checkpoint without any file extension? Or with a weird name containing slashes or dots? The error seems to suggest the content type was not properly parsed.
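
For clarity, a tiny sketch of what "with a file extension" means here (the model and filename are made up):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # hypothetical model
# An explicit extension and a clean stem (no slashes, no extra dots) keep the
# content type easy to parse when the checkpoint is uploaded.
torch.save(model.state_dict(), "checkpoint_epoch_10.pt")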

2 years ago