VexedCat68
Moderator
60 Questions, 381 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0

Badges: 371 × Eureka!
0 I'll just ask this question again to get some fresh attention to this. Is there any way to run a pipeline step conditionally? E.g., under a certain condition, execute the step, otherwise don't?

I'm not using decorators. I have a bunch of function_steps followed by a normal task step, where I've passed a base_task_id.

I want to check the return value of one of the function steps, and if it holds true, execute the task step; otherwise I want the pipeline to end there, since the task step is the last one.
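For reference, the closest pattern I've found is gating the final step with a pre_execute_callback, which skips the node when it returns False. A minimal sketch, assuming the flag can be wired in through parameter_override (step names, the "General/flag" parameter, and the base task ID are all placeholders):

```python
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0")

# Hypothetical function step whose return value acts as the condition.
def check_condition():
    return True

pipe.add_function_step(
    name="check_condition",
    function=check_condition,
    function_return=["flag"],
)

def gate(pipeline, node, param_override):
    # Returning False from a pre_execute_callback skips this node, so the
    # pipeline effectively ends here when the flag is falsy (assumes the
    # ${...} reference has been resolved by the time the callback runs).
    return str(param_override.get("General/flag")).lower() == "true"

pipe.add_step(
    name="final_step",
    base_task_id="<base-task-id>",
    parents=["check_condition"],
    parameter_override={"General/flag": "${check_condition.flag}"},
    pre_execute_callback=gate,
)
pipe.start()
```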

3 years ago
0 I have a function that runs normally. Its job is to monitor a specific folder, and when I execute the script locally it works fine. When I make a TaskScheduler and pass that function to the scheduler, then run remotely, I get an error: ClearML resul…

The setup is on a single machine. I have a NAS mounted where I'm watching a folder; if there are sufficient images, it should publish the data. But since I was using start_remotely, the code was running somewhere else and couldn't access the folder.

3 years ago
0 Would appreciate some help. Getting this error. ValueError: Node train_model, parameter '${split_dataset.split_dataset_id}', input type 'split_dataset_id' is invalid

AgitatedDove14 Can you help me with this? Maybe something like storing the returned values in a variable outside the pipeline?
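In case it helps anyone landing on this later: my understanding is that a '${step.value}' reference only resolves when the producing step names that value in function_return. A sketch of the wiring, with stand-in function bodies:

```python
from clearml import PipelineController

pipe = PipelineController(name="train-pipeline", project="examples", version="1.0")

def split_dataset():
    # Stand-in: create/split the dataset here and return its ID.
    return "<dataset-id>"

def train_model(split_dataset_id):
    # Stand-in for the real training function.
    print("training on", split_dataset_id)

pipe.add_function_step(
    name="split_dataset",
    function=split_dataset,
    function_return=["split_dataset_id"],  # must match the name in the ${...} reference
)
pipe.add_function_step(
    name="train_model",
    function=train_model,
    function_kwargs={"split_dataset_id": "${split_dataset.split_dataset_id}"},
)
pipe.start()
```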

3 years ago
0 Hey guys, sorry for the rapid-fire questions in the past few days. I have another issue though. I initially ran a task directly from a repo. It successfully installed the requirements from the requirements file in the repo and ran the task without any iss…

My use case is that the code, which uses PyTorch, saves additional info like the state dict when saving the model. I'd like to save that information as an artifact as well so that I can load it later.
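Something like this sketch is what I have in mind (model, file names, and the task ID are placeholders):

```python
import torch
from clearml import Task

task = Task.init(project_name="examples", task_name="train")

model = torch.nn.Linear(10, 2)  # stand-in for the real model

# Bundle the state dict with the extra info and upload it as an artifact,
# so it can be fetched later from another script.
checkpoint = {"state_dict": model.state_dict(), "epoch": 10}
torch.save(checkpoint, "checkpoint.pt")
task.upload_artifact(name="checkpoint", artifact_object="checkpoint.pt")

# Later / elsewhere:
# prev = Task.get_task(task_id="<task-id>")
# path = prev.artifacts["checkpoint"].get_local_copy()
# checkpoint = torch.load(path)
```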

3 years ago
0 So I decided to re-create my ClearML server, I…

It's probably a cookie issue, I agree.

4 years ago
0 I'll just ask this question again to get some fresh attention to this. Is there any way to run a pipeline step conditionally? E.g., under a certain condition, execute the step, otherwise don't?

AnxiousSeal95 Basically it's a function step return. If I do artifacts.keys(), there are no keys, even though the step prior to it does return the output.
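For context, what I'd expect (and may be getting wrong) is that a function step's return value is stored as an artifact on that step's own task, named after the function_return entry. Roughly:

```python
from clearml import Task

# "<step-task-id>" and the artifact name "output" are placeholders; the
# artifact should be named after the entry in function_return.
step_task = Task.get_task(task_id="<step-task-id>")
print(list(step_task.artifacts.keys()))      # expected to list the returned value(s)
value = step_task.artifacts["output"].get()  # deserialize the returned object
```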

3 years ago
0 I keep facing this issue. I'm trying to set up my own ClearML server using this tutorial.

I think maybe it does this because of the cache or something. Maybe it keeps a record of an older login, and when you restart the server it keeps trying to use the older details.

4 years ago
0 I keep facing this issue. I'm trying to set up my own ClearML server using this tutorial.

Also, is ClearML open source and accepting contributions, or is it just a limited team working on it? Sorry for an off-topic question.

4 years ago
0 When I create a clearml-dataset from the CLI, I get an ID. The same doesn't happen when I use the Python API. Is there any way to get the ID in Python?

This works, thanks. Do you have a link where I can also see the parameters of the Dataset class, or was it just on git?
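For reference, a minimal sketch of getting the ID from Python (project and dataset names are placeholders):

```python
from clearml import Dataset

# Python equivalent of `clearml-data create`; the .id property holds the
# same ID the CLI prints.
ds = Dataset.create(dataset_name="my-dataset", dataset_project="examples")
ds.add_files(path="data/")
ds.upload()
ds.finalize()
print(ds.id)
```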

4 years ago
0 I'm looking at how triggers work in ClearML. Is there an example, maybe with ClearML Data and a dataset being uploaded, or some other example?

This shows my situation. You can see the code on the left and the tasks called 'Cassava Training' on the right. They keep getting enqueued even though I only sent a trigger once. By that I mean I only published a dataset once.
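The trigger setup I mean looks roughly like this sketch (names, queues, project, and the polling interval are placeholders):

```python
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=3)

# Each newly published dataset version in the project should fire this once,
# cloning and enqueuing the given training task.
trigger.add_dataset_trigger(
    name="retrain-on-new-data",
    schedule_task_id="<training-task-id>",
    schedule_queue="default",
    trigger_project="examples/datasets",
)
trigger.start_remotely(queue="services")
```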

4 years ago
0 I keep facing this issue. I'm trying to set up my own ClearML server using this tutorial.

When I try to access the server with the IP I set as CLEARML_HOST_IP, it looks like this. I set that IP to the one assigned to me by the network.

4 years ago
0 When using Dataset.get_local_copy(), once I get the location, can I add another folder inside the location, add some files in it, create a new Dataset object, and then do dataset.upload(location)? Should this work? Or since it's get_local_copy, I won't be able…

Well, I'm still researching how it'll work. I'm expecting it not to be very good, and that it will make the model's learning very stochastic in nature.

I plan, at the training stage, instead of just getting this one dataset, to use Dataset.squash to merge the previous M datasets together.

This should introduce stability in the dataset.

Also, this way our model is trained on a batch of data multiple times, but only a few times before that batch is discarded. We keep the training data fresh for co...
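Roughly what I have in mind with squash (dataset name and IDs are placeholders):

```python
from clearml import Dataset

# Merge the previous M dataset versions into one squashed dataset to
# stabilize the training distribution.
previous_ids = ["<id-1>", "<id-2>", "<id-3>"]
merged = Dataset.squash(dataset_name="training-data-merged", dataset_ids=previous_ids)
local_path = merged.get_local_copy()  # train on the merged copy
```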

3 years ago
0 Um, is there a way to delete an artifact from a task that is running?

I ran training code from a GitHub repo. It saves checkpoints every 2000 iterations. The only problem is I'm training it for 3200 epochs and there are more than 37000 iterations in each epoch, so the checkpoints just added up. I've stopped the training for now. I need to delete all of those checkpoints before I start training again.
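A sketch of the cleanup I mean, assuming Task.delete_artifacts is available in the SDK version in use (the task ID and the checkpoint naming are placeholders):

```python
from clearml import Task

task = Task.get_task(task_id="<task-id>")

# Collect artifact names that look like checkpoints (naming is assumed),
# then remove them, including the stored files.
stale = [name for name in task.artifacts if name.startswith("checkpoint")]
task.delete_artifacts(artifact_names=stale, delete_from_storage=True)
```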

3 years ago
0 Another question: I have written code that includes a task scheduler that calls a function. That function watches a folder, and if there are sufficient images, it creates and publishes the dataset, after which it clears the folder. Problem: for some rea…

Can you spot something here? Because to me it still looks like it should only create a new Dataset object if the batch size requirement is fulfilled, after which it creates and publishes the dataset and empties the directory.

Once the data is published, a dataset trigger is activated in the checkbox_.... file, which creates a clearml-task for training the model.
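For context, the intended logic is roughly this (threshold, paths, and names are simplified placeholders, and ds.publish() is assumed to be available in the SDK version in use):

```python
import os
from clearml import Dataset

BATCH_SIZE = 32                  # hypothetical threshold
WATCH_DIR = "/mnt/nas/incoming"  # placeholder path

def publish_if_batch_ready():
    files = [f for f in os.listdir(WATCH_DIR) if f.lower().endswith((".jpg", ".png"))]
    if len(files) < BATCH_SIZE:
        return  # not enough images yet; do nothing
    # Enough images: create, upload, and publish the dataset...
    ds = Dataset.create(dataset_name="incoming-batch", dataset_project="examples")
    ds.add_files(path=WATCH_DIR)
    ds.upload()
    ds.finalize()
    ds.publish()  # publishing is what the dataset trigger listens for
    # ...then empty the directory.
    for f in files:
        os.remove(os.path.join(WATCH_DIR, f))
```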

3 years ago
0 When it comes to continuous training, I wanted to know how you train, or would train, if you have annotated data incoming. Do you train completely online, where you train as soon as you have a training example available? Do you instead train when you have a…

I get what you're saying. I was considering training on just the new data to see how it works. To me it felt like that was the fastest way to deal with data drift. I understand that it may introduce instability, however. I was curious how other developers who have successfully managed to set up continuous training deal with it: 100% new data, or a ratio between new and old data? And if it is the latter, which should be the majority, old data or new data?
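To illustrate the ratio idea (plain Python, nothing ClearML-specific; the 70/30 split and sizes are just example numbers):

```python
import random

def build_training_set(new_samples, old_samples, new_ratio=0.7, total=1000):
    # Majority fresh samples to track drift, plus a replayed slice of older
    # data for stability.
    n_new = min(int(total * new_ratio), len(new_samples))
    n_old = min(total - n_new, len(old_samples))
    return random.sample(new_samples, n_new) + random.sample(old_samples, n_old)
```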

3 years ago