VexedCat68
Moderator
60 Questions, 381 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges: 1 (371 × Eureka!)
0 Votes 2 Answers 2K Views
When saving the model, there's a label tab but it's empty.
3 years ago
0 Votes 13 Answers 2K Views
3 years ago
0 Votes 15 Answers 2K Views
I'm publishing the model artifact after the tensorflow model is saved. But in the UI, the download button is grayed out. And get_local_copy() doesn't seem to...
3 years ago
0 Votes 17 Answers 2K Views
Should Dataset Triggers also be activated if there is no trigger condition except dataset_project and a new task starts in that project? Is this expected beh...
4 years ago
0 Votes 2 Answers 2K Views
Given this pipeline step, is there any way to get the return value outside of the pipeline? Like put split_dataset_id in a variable in the main pipeline file.
3 years ago
0 Votes 16 Answers 2K Views
3 years ago
0 Votes 1 Answer 2K Views
I have a question. I'm struggling with setting up my own ClearML server. I think I've got it up and running but not sure how to send clearml-task to my own s...
4 years ago
0 Votes 5 Answers 2K Views
3 years ago
0 Votes 5 Answers 2K Views
I keep getting this error when trying to upload a dataset. Does anyone have any idea what might be causing it?
3 years ago
0 Votes 25 Answers 2K Views
I'll just ask this question again to get some fresh attention to this. Is there any way to run a pipeline step conditionally? E.g., under a certain condition, e...
3 years ago
0 Votes 8 Answers 2K Views
4 years ago
0 Votes 10 Answers 2K Views
4 years ago
0 Votes 7 Answers 2K Views
Hi guys, needed a bit of clarification. Every 15 minutes, the scheduler prints, "Syncing scheduler, sleeping for 15 minutes until next sync". Can you guide m...
3 years ago
0 Votes 1 Answer 2K Views
In the configuration section of an experiment, I can see the args I passed via argparse. Is there any way to create other sections other than args and tf_define
3 years ago
0 Votes 25 Answers 2K Views
Um, is there a way to delete an artifact from a task that is running?
3 years ago
0 Votes 15 Answers 2K Views
How do I get args like epochs to show up in the UI configuration panel under hyperparameters? I want to be able to change number of epochs and learning rate ...
4 years ago
0 Votes 12 Answers 2K Views
I'm on the machine with ClearML Server hosted. Is there any way to see datasets uploaded to ClearML Data without downloading them using ClearML Data?
4 years ago
0 Votes 13 Answers 2K Views
4 years ago
0 Votes 3 Answers 2K Views
In the case where I'm passing a schedule_fn to add_task in a TaskScheduler, how do I pass the function arguments?
3 years ago
0 Votes 3 Answers 2K Views
When I upload and publish data to clearml-data, it says successful. Now when I try to get it using the ID, I get the following error. Error: __get_tasks() got mu...
4 years ago
0 Votes 4 Answers 2K Views
Is it possible to write a text file and see it in the results of the experiment? I want to use it to version data, as in keeping track of what images have been tr...
4 years ago
0 Votes 12 Answers 3K Views
I'm training a tensorflow model and saving it in the end. I looked at the OutputModel class. How do I connect the model I'm saving to the OutputModel?
3 years ago
0 Votes 9 Answers 2K Views
Pipeline_Controller.py doesn't exist in the repo, or at least the link to it in the docs is dead in the simple pipeline example. Any help would be appreciated.
4 years ago
0 Votes 2 Answers 2K Views
How can I register a JSON file I'm creating as an artifact?
3 years ago
0 Votes 23 Answers 2K Views
I'm looking at how triggers work in ClearML. Is there an example, maybe with clearml data and a dataset being uploaded or some other example?
4 years ago
0 Votes 11 Answers 2K Views
3 years ago
0 Votes 8 Answers 2K Views
3 years ago
0 Votes 3 Answers 2K Views
In your docs for Dataset at https://clear.ml/docs/latest/docs/references/sdk/dataset#class-dataset , I think you might have duplicate explanations for list_m...
3 years ago
0 Votes 27 Answers 2K Views
I wanted to ask, how to run pipeline steps conditionally? E.g. if a step returns a specific value, exit the pipeline or run another step instead of the sequenti...
3 years ago
0 Votes 11 Answers 2K Views
4 years ago
0 Hi everyone, is it possible to not create a copy of a dataset when adding to ClearML? My data is already in a directory on the ClearML server machine and I do not want to copy it, just add it to ClearML as a dataset.

I normally just upload the data to the ClearML server and then remove it locally from my machine, but I understand that isn't what you want. A quick hack was the only thing I could come up with at the moment. Anyway, you're welcome. Hope you find a solution.

3 years ago
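For later readers: newer versions of the SDK expose Dataset.add_external_files(), which registers files by link instead of uploading a copy, which seems to fit this use case. A minimal sketch, assuming the dataset/project names and the source path are placeholders and that the path is one the server and workers can resolve:

```python
from clearml import Dataset

# Create a new dataset version; names are placeholders.
ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")

# Register the files by link instead of uploading a copy. The exact form the
# source_url should take for a server-local directory is an assumption here.
ds.add_external_files(source_url="/data/already_on_server/")

ds.finalize()
```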
0 I'll just ask this question again to get some fresh attention to this. Is there any way to run a pipeline step conditionally? E.g., under a certain condition, execute the step, otherwise don't?

I'm not using decorators. I have a bunch of function_steps followed by a normal task step, where I've passed a base_task_id.

I want to check the value of one of the function steps, and if it holds true, I want to execute the task step; otherwise I want the pipeline to end there, since the task step is the last one.

3 years ago
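One documented mechanism that seems to fit this case: add_step() accepts a pre_execute_callback, and returning False from the callback skips the node. A sketch under that assumption; the step name, base_task_id placeholder, and the decision logic are illustrative, not from the original thread:

```python
from clearml import PipelineController

def maybe_skip_final_step(pipeline, node, param_override):
    # Returning False from a pre_execute_callback makes the controller skip
    # this node. The decision here is a placeholder; in practice you would
    # inspect the earlier function step's output (e.g. via its task artifacts).
    should_run = True  # placeholder condition
    return should_run

pipe = PipelineController(name="conditional-pipe", project="examples", version="1.0")
pipe.add_step(
    name="final_task_step",
    base_task_id="<base_task_id>",             # the normal task step from the thread
    pre_execute_callback=maybe_skip_final_step,
)
pipe.start()
```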
0 I have a function that runs normally. Its job is to monitor a specific folder, and when I execute the script locally it works fine. When I make a TaskScheduler and pass that function to the scheduler, then run remotely, I get an error: ClearML Resul

The setup is on a single machine. I have a NAS mounted where I'm watching a folder, and if there are sufficient images, it should publish the data. But since I was using start_remotely, the code was running somewhere else and couldn't access the folder.

3 years ago
0 Would appreciate some help. Getting this error. ValueError: Node train_model, parameter '${split_dataset.split_dataset_id}', input type 'split_dataset_id' is invalid

AgitatedDove14 Can you help me with this? Maybe something like storing the returned values in a variable outside the pipeline?

3 years ago
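Since function-step return values are registered as artifacts on the step's task, one way to get at them after the run is through the Task API. A sketch, assuming the project/step names below and the exact artifact naming (they follow the question, but verify against your pipeline):

```python
from clearml import Task

# Locate the step's task by name under the pipeline's project, then read the
# artifact holding the step's return value.
step_task = Task.get_task(project_name="my_pipeline_project", task_name="split_dataset")
split_dataset_id = step_task.artifacts["split_dataset_id"].get()
print(split_dataset_id)
```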
0 Hey guys, sorry for the rapid-fire questions in the past few days. I have another issue though. I initially ran a task directly from a repo. It successfully installed the requirements from the requirements file in the repo and ran the task without any iss

My use case is that the code using PyTorch saves additional info like the state dict when saving the model. I'd like to save that information as an artifact as well so that I can load it later.

3 years ago
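A minimal sketch of that idea: bundle the state dicts into a checkpoint file with torch.save() and attach it to the task with upload_artifact(). The model, file name, and artifact name are placeholders:

```python
import torch
from torch import nn, optim
from clearml import Task

task = Task.init(project_name="examples", task_name="train-with-checkpoint")
model = nn.Linear(4, 2)                            # stand-in model
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Bundle the extra info (state dicts, epoch, ...) into one checkpoint file.
checkpoint = {
    "epoch": 1,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}
torch.save(checkpoint, "checkpoint.pt")

# Attach the file to the task; it can later be fetched with
# task.artifacts["checkpoint"].get_local_copy()
task.upload_artifact(name="checkpoint", artifact_object="checkpoint.pt")
```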
0 So I decided to re-create my ClearML server, I

It's probably a cookie issue, I agree.

4 years ago
0 I'll just ask this question again to get some fresh attention to this. Is there any way to run a pipeline step conditionally? E.g., under a certain condition, execute the step, otherwise don't?

AnxiousSeal95 Basically it's a function step's return value. If I do artifacts.keys(), there are no keys, even though the step prior to it does return the output.

3 years ago
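One thing worth checking when artifacts.keys() comes back empty: whether the step declares its return values via function_return on add_function_step, since each declared name is what gets stored as an artifact on the step's task. A sketch with placeholder names:

```python
from clearml import PipelineController

def split_dataset():
    # Stand-in step body; the real one would create a dataset and return its id.
    return "dataset-id-placeholder"

pipe = PipelineController(name="pipe", project="examples", version="1.0")
pipe.add_function_step(
    name="split_dataset",
    function=split_dataset,
    function_return=["split_dataset_id"],  # stored as an artifact of this name
)
pipe.start_locally(run_pipeline_steps_locally=True)
```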
0 I keep facing this issue. I'm trying to set up my own ClearML server using this tutorial.

I think maybe it does this because of a cache or something. Maybe it keeps a record of an older login, and when you restart the server, it keeps trying to use the older details.

4 years ago
0 I keep facing this issue. I'm trying to set up my own ClearML server using this tutorial.

Also, is ClearML open source and accepting contributions, or is it just a limited team working on it? Sorry for the off-topic question.

4 years ago
0 When I create a clearml-dataset from the CLI, I get an ID. The same doesn't happen when I use the Python API. Is there any way to get the ID in Python?

This works, thanks. Do you have a link where I can also see the parameters of the Dataset class, or was it just on git?

4 years ago
0 I'm looking at how triggers work in ClearML. Is there an example, maybe with ClearML Data and a dataset being uploaded or some other example?

This screenshot shows my situation. You can see the code on the left and the tasks called 'Cassava Training' on the right. They keep getting enqueued even though I only sent a trigger once. By that I mean I only published a dataset once.

4 years ago
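For context on the repeated-enqueue symptom, here is a sketch of a dataset trigger using TriggerScheduler from clearml.automation, where trigger_on_publish=True restricts firing to publish events rather than every new task in the project. The task id, queue, and project names are placeholders:

```python
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_dataset_trigger(
    name="retrain-on-publish",
    schedule_task_id="<training_task_id>",   # task to clone and enqueue when fired
    schedule_queue="default",
    trigger_project="cassava_dataset_project",
    trigger_on_publish=True,                 # fire only on dataset publish events
)
trigger.start()
```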
0 I keep facing this issue. I'm trying to set up my own ClearML server using this tutorial.

When I try to access the server with the IP I set as CLEARML_HOST_IP, it looks like this. I set that IP to the one assigned to me by the network.

4 years ago
0 When using Dataset.get_local_copy(), once I get the location, can I add another folder inside the location, add some files in it, create a new Dataset object, and then do dataset.upload(location)? Should this work? Or since it's get_local_copy, I won't be able

Well, I'm still researching how it'll work. I'm expecting it to not be very good and to make the model's learning very stochastic in nature.

I plan instead, at the training stage, rather than just getting this model, to use Dataset.squash to get the previous M datasets merged together.

This should introduce stability in the dataset.

Also this way, our model is trained on a batch of data multiple times, but only a few times before that batch is discarded. We keep the training data fresh for co...

3 years ago
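A sketch of the squash idea described above: merge the previous M dataset versions into one training dataset with Dataset.squash(), then materialize it for training. The dataset name and ids are placeholders:

```python
from clearml import Dataset

# Merge the previous M dataset versions into a single training dataset.
merged = Dataset.squash(
    dataset_name="training_window",
    dataset_ids=["<id_1>", "<id_2>", "<id_3>"],  # the last M dataset versions
)
local_path = merged.get_local_copy()  # materialize the merged data for training
print(local_path)
```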
0 Um, is there a way to delete an artifact from a task that is running?

I ran a training code from a github repo. It saves checkpoints every 2000 iterations. Only problem is I'm training it for 3200 epochs and there's more than 37000 iterations in each epoch. So the checkpoints just added up. I've stopped the training for now. I need to delete all of those checkpoints before I start training again.

3 years ago
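For later readers: I believe more recent clearml SDK versions expose Task.delete_artifacts(); treat the exact signature as an assumption and check it against your SDK version. A sketch of clearing piled-up checkpoints, with the task id and the "checkpoint" name prefix as placeholders:

```python
from clearml import Task

# NOTE: delete_artifacts() availability/signature is an assumption; verify
# against your installed clearml version before relying on it.
task = Task.get_task(task_id="<task_id>")
names = [name for name in task.artifacts if name.startswith("checkpoint")]
task.delete_artifacts(artifact_names=names, delete_from_storage=True)
```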
0 Another question. I have written code that includes a task scheduler that calls a function. That function watches a folder, and if there are sufficient images, it creates and publishes the dataset, after which it clears the folder. Problem: for some rea

Can you spot something here? Because to me it still looks like it should only create a new Dataset object if the batch size requirement is fulfilled, after which it creates and publishes the dataset and empties the directory.

Once the data is published, a dataset trigger is activated in the checkbox_.... file, which creates a clearml-task for training the model.

3 years ago
0 When it comes to continuous training, I wanted to know how you train or would train if you have annotated data incoming? Do you train completely online, where you train as soon as you have a training example available? Do you instead train when you have a

I get what you're saying. I was considering training on just the new data to see how it works. To me it felt like that was the fastest way to deal with data drift. I understand that it may introduce instability, however. I was curious how other developers who have successfully managed to set up continuous training deal with it: 100% new data, or a ratio between new and old data? And if it is the latter, what should the ratio be: should the majority be old data or new data?

3 years ago