ExasperatedCrab78
Moderator
2 Questions, 221 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0
Badges: 2 × Eureka!
0 Votes, 0 Answers, 2K Views
A little something else: Using ClearML, an OAK-1 AI camera and a raspberry pi to create a pushup counter that locks my PC every hour and only unlocks again w...
3 years ago
0 Votes, 5 Answers, 2K Views
We're working on ClearML serving right now and are very interested in what you all are searching for in a serving engine, so we can make the best serving eng...
3 years ago
0 I Am Looking For The Dataset Used In Sarcasm Detection Demo

Great to hear! Then it comes down to waiting for the next Hugging Face transformers release!

2 years ago
0 Hello, I Am Using Datasets In Community Server. I Have Been Trying To Create A Child Dataset The Following Way: Dataset_Name = "Training2501" Dataset_Project = "Datasets" Dataset_Path = Dataset.Create( Dataset_Name=Dataset_Name, Dataset_Project=Da

I think that would defeat the purpose of lineage, no? The point is to keep track of where the data came from in the real world. Rewriting that record would reduce it to just... metadata?
As for the (*) line, could it be that "0385db..." does not have any parents itself? So "0385db..." is the base dataset, without parents, and it has 1 child, which has "0385db..." as its parent.

3 years ago
0 Hello, Does Clearml_Apiserver Needed To Listen To 8008 Only? Can I Change To Other Ports Likes 9008?

Hi VictoriousPenguin97! I think you should be able to change it in the docker-compose file here: https://github.com/allegroai/clearml-server/blob/master/docker/docker-compose.yml

You can map the internal 8008 port to another port on your local machine. But be sure to give that new port number to any client that tries to connect (when running clearml-init).

3 years ago
0 Can We Use The Simple Docker-Compose.Yml File For Clearml Serving On A Huggingface Model (Not Processed To Tensorrt)?

Usually those models are PyTorch, right? So yes, you should be able to; feel free to follow the PyTorch example if you want to see how 🙂

2 years ago
0 [Pipeline] Am I Right In Saying A Pipeline Controller Can’T Include A Data-Dependent For-Loop? The Issue Is Not Spinning Up The Tasks, It’S Collecting The Results At The End. I Was Trying To Append The Outputs Of Each Iteration Of The For-Loop And Pass Th

I'm not exactly sure what is going wrong without an exact error or a reproducible example.

However, passing around the dataset object is not ideal, because passing info from one step to another in a pipeline requires ClearML to pickle said object, and I'm not exactly sure a Dataset object is picklable.

On top of that, running get_local_copy() in the first step does not guarantee that you can access that data from the other step. Both might be executed in different Docker containers or even on different...
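
As a rough sketch of the workaround hinted at above (pass the dataset ID, a plain string, between steps and let each step fetch its own local copy), something along these lines should work. The project, dataset, and folder names here are made up for illustration:

```python
from clearml.automation.controller import PipelineDecorator


@PipelineDecorator.component(return_values=["dataset_id"])
def create_dataset():
    # Imports live inside the component because each step runs standalone
    from clearml import Dataset

    # Hypothetical project/dataset names and data folder
    ds = Dataset.create(dataset_name="my_dataset", dataset_project="examples")
    ds.add_files("data/")
    ds.upload()
    ds.finalize()
    return ds.id  # return the ID (a plain string), not the Dataset object


@PipelineDecorator.component(return_values=["num_files"])
def train(dataset_id: str):
    import os
    from clearml import Dataset

    # Each step fetches its own local copy; never reuse a path from another step
    local_path = Dataset.get(dataset_id=dataset_id).get_local_copy()
    return len(os.listdir(local_path))


@PipelineDecorator.pipeline(name="example pipeline", project="examples")
def run_pipeline():
    dataset_id = create_dataset()
    return train(dataset_id)


if __name__ == "__main__":
    PipelineDecorator.run_locally()
    run_pipeline()
```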

2 years ago
0 When Using Dataset.Get_Local_Copy(), Once I Get The Location, Can I Add Another Folder Inside Location Add Some Files In It, Create A New Dataset Object, And Then Do Dataset.Upload(Location)? Should This Work? Or Since Its Get_Local_Copy, I Won'T Be Able

Cool! 😄 Yeah, that makes sense.

So (just brainstorming here) imagine you have your dataset with all samples inside. Every time N new samples arrive, they're just added to the larger dataset in an incremental way (with the 3 lines I sent earlier).
So imagine if we could query/filter that large dataset to only include a certain datetime range. That range filter is then stored as a hyperparameter too, so in that case, you could easily rerun the same training task multiple times, on differe...
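
For reference, the incremental pattern described above would look roughly like this (the names and the parent dataset ID are placeholders):

```python
from clearml import Dataset

# "previous_version_id" is the ID of the dataset version you are extending
child = Dataset.create(
    dataset_name="my_dataset",
    dataset_project="examples",
    parent_datasets=["previous_version_id"],
)
child.add_files("new_samples/")  # only the N newly arrived samples
child.upload()
child.finalize()
```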

3 years ago
0 I Am Looking For The Dataset Used In Sarcasm Detection Demo

Ah I see 😄 I have submitted a ClearML patch to Hugging Face transformers: None

It is merged, but not in a release yet. Would you mind checking if it works when you install transformers from GitHub (i.e. the latest master version)?

2 years ago
0 Hi Is There Any Option To Get Preview For The Images On Dataset In Case Upload With

AstonishingRabbit13 If I'm not mistaken, you can add images to the preview tab by reporting them as debug samples.

So you'd run: dataset.get_logger().report_image() or report_media()
This is not scalable though, so don't expect the server to handle millions of images well; for that you'd need Hyperdatasets 🙂
But it works well (as the name suggests) for some previews of the images!

Relevant docs:
https://clear.ml/docs/latest/docs/references/sdk/dataset/#get_logger
https://...
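
A minimal sketch of what that could look like (the dataset, project, and file names are made up; report_media() takes a local_path in the same way):

```python
from clearml import Dataset

# Hypothetical dataset/project names and file paths
ds = Dataset.create(dataset_name="my_dataset", dataset_project="examples")
ds.add_files("images/")

# Attach a handful of previews as debug samples; don't do this for millions of images
ds.get_logger().report_image(
    title="preview",
    series="sample_0",
    local_path="images/sample_0.jpg",
    iteration=0,
)

ds.upload()
ds.finalize()
```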

2 years ago
0 Hi, I Am Trying The Triggerscheduler To Catch When A User Add Specific Tag To A Task. I Used The Below Code But The Schedule_Function Is Not Called When Adding Tags To Task (It Seems The Task.Last_Update Is Not Modified After Adding Tag)

Can you elaborate a bit more? I don't quite understand yet. So it works when you update an existing task by adding a tag to it, but it doesn't work when adding a tag for the first time?

3 years ago
0 When Using Dataset.Get_Local_Copy(), Once I Get The Location, Can I Add Another Folder Inside Location Add Some Files In It, Create A New Dataset Object, And Then Do Dataset.Upload(Location)? Should This Work? Or Since Its Get_Local_Copy, I Won'T Be Able

It's part of the design, I think. It makes sense that if we want to keep track of changes, we always build on top of what we already have 🙂 I think of it like a commit: I'm adding files in a NEW commit, not in the old one.

3 years ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’T Work):

It should, but please check first. This is some code I quickly made for myself. I did write tests for it, but it would be nice to hear from someone else that it works (as evidenced by the error above 😅).

2 years ago
0 Hi. I Am Experimenting With

Hi PanickyMoth78,

I've just recreated your example and it works for me on clearml==1.6.2, but indeed not on clearml==1.6.3rc1, which means we have some work to do before the full release 🙂 Can you try clearml==1.6.2 to check that it does work there?

3 years ago
0 Hi Guys! I Am New To Clearml And I Was Trying Out This Simple Code And It Took 4Min To Run. Is This Normal?

How large are the datasets? To learn more, you can always try running something like line_profiler/kernprof to see exactly how long a specific Python line takes. How fast/stable is your internet connection?
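
If it helps, here is a quick way to point line_profiler at the suspect call (the dataset and project names below are placeholders):

```python
from line_profiler import LineProfiler

from clearml import Dataset


def fetch_dataset():
    # Placeholder names; point this at the dataset from your snippet
    ds = Dataset.get(dataset_name="my_dataset", dataset_project="examples")
    return ds.get_local_copy()


lp = LineProfiler()
profiled_fetch = lp(fetch_dataset)  # wrap the function you suspect is slow
profiled_fetch()
lp.print_stats()  # per-line timings show exactly where the 4 minutes go
```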

2 years ago
0 Hello Again, I Would Like To Ask You If Something Like This Is Possible In Clearml (See Screenshot)? For Each Experiment (

Thank you so much! In the meantime, I checked once more and the closest I could get was using report_single_value(). It forces you to report each and every row though, but the comparison looks a little better this way. No color coding yet, but maybe it can already help you a little 🙂
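
In case it helps, this is roughly how those single values get reported (the project, task, and metric names here are just examples):

```python
from clearml import Task

# Example project/task names
task = Task.init(project_name="examples", task_name="single value report")
logger = task.get_logger()

# Each value shows up once per experiment and can be compared across experiments
logger.report_single_value(name="precision", value=0.91)
logger.report_single_value(name="recall", value=0.87)
```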

3 years ago
0 Hello Again, I Would Like To Ask You If Something Like This Is Possible In Clearml (See Screenshot)? For Each Experiment (

Hi! Have you tried adding custom metrics to the experiment table itself? You can add any scalar as a column in the experiment list. It does not have color formatting, but it might be closer to what you want than the compare functionality 🙂

3 years ago
0 Hi Guys! I Am New To Clearml And I Was Trying Out This Simple Code And It Took 4Min To Run. Is This Normal?

Hey @<1541592213111181312:profile|PleasantCoral12> thanks for doing the profiling! This looks pretty normal to me, although 37 seconds for a dataset.get is definitely too much. I just checked and for me it takes 3.7 seconds. Mind you, the .get() method doesn't actually download the data, so the dataset size is irrelevant here.

But the slowdowns do seem to only occur when doing API requests. Possible next steps could be:

  • Send me your username and email address (maybe DM if you don't wa...
2 years ago
0 Hello Channel, I Have A Question Regarding Clearml Serving In Production. I Have Different Environments, And Different Models Each Of Them Linked To A Use Case. I Would Like To Spin Up One Kubernetes Cluster (From Triton Gpu Docker Compose) Taking Into

To be honest, I'm not completely sure, as I've never tried hundreds of endpoints myself. In theory, yes, it should be possible: Triton, FastAPI, and Intel OneAPI (ClearML Serving's building blocks) all claim they can handle that kind of load, but again, I've not tested it myself.

To answer the second question, yes! You can basically use the "type" of model to decide where it should be run. You always have the custom model option if you want to run it yourself too 🙂

2 years ago
0 Hi, I'M Using Hyperparameteroptimizer Alongside Optimizeroptuna And I Am Unsure How To Implement Pruning On Tasks That Are Not Producing Good Results. Is There A Way To Implement This On These Modules?

Yeah, I do the same thing all the time. You can limit the number of tasks that are kept in HPO with the save_top_k_tasks_only parameter, and you can create subprojects by simply using a slash in the name 🙂 https://clear.ml/docs/latest/docs/fundamentals/projects#creating-subprojects
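
Roughly, the two things together could look like this (the base task ID, metric names, and parameter range are placeholders, and the Optuna optimizer is assumed; adjust to whatever you are using):

```python
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
from clearml.automation.optuna import OptimizerOptuna

# "HPO/optuna-run" creates the "optuna-run" subproject under "HPO"
task = Task.init(project_name="HPO/optuna-run", task_name="optimizer controller")

optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",  # placeholder: the task you want to optimize
    hyper_parameters=[UniformParameterRange("General/lr", 1e-5, 1e-1)],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=OptimizerOptuna,
    save_top_k_tasks_only=5,  # archive everything except the 5 best tasks
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```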

3 years ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Hey! Thanks for all the work you're putting in and the awesome feedback 😄

So, it's weird that you get the shm error; this is most likely our fault for not configuring the containers correctly 😞 The containers are brought up using the docker-compose file, so you'll have to add it there. The service you want is called clearml-serving-triton; you can find it [here](https://github.com/allegroai/clearml-serving/blob/2d3ac1fe63637db1978df2b3f5ea4903ef59788a/docker/docker-...

2 years ago
0 Hi, We Have A Workflow Which Goes Over List Of Directories And Processes All Movies From Them. "Process" - Means Run Certain Detection Algorithms On Each Movie Frame. We Built Clearml Task From This Workflow, And Created Hpo Application Based On This Task

Unfortunately, ClearML HPO does not "know" what is inside the task it is optimizing. It is like that by design, so that you can run HPO with no code changes inside the experiment. That said, this also means it cannot "smartly" optimize.

However, is there a way you could use caching within your code itself, such as functools' LRU cache? It is built into Python and will cache a function's return value whenever the function is called again with the same input arguments.

There also see...
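
A minimal, self-contained example of the LRU cache idea mentioned above (the detection function here is just a stand-in for your real per-movie processing):

```python
from functools import lru_cache


@lru_cache(maxsize=None)  # cache every distinct call; use a number to bound memory
def detect(movie_path: str, frame_step: int) -> int:
    # Stand-in for the expensive detection work; arguments must be hashable
    print(f"running detection on {movie_path} (frame_step={frame_step})")
    return hash((movie_path, frame_step)) % 1000


detect("movies/a.mp4", 5)  # computed
detect("movies/a.mp4", 5)  # served from the cache, detection is skipped
```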

2 years ago
0 Hi Guys! I Am New To Clearml And I Was Trying Out This Simple Code And It Took 4Min To Run. Is This Normal?

Hi @<1541592213111181312:profile|PleasantCoral12> thanks for sending me the details. Out of curiosity, could it be that your codebase / environment (apart from the ClearML code, e.g. the whole git repo) is quite large? ClearML does a scan of your repo and packages every time a task is initialized, so maybe that could be it. In the meantime I'm asking our devs if they can see any weird lag with your account on our end 🙂

2 years ago
0 Hpo App Question: My Config Includes 11 Parameter Values (0 - 1, Step 0.1). I'Ll Expect To See 11 Experiments, But I Fact It Was "52 Iterations". What I'M Missing (Last Time I Asked Similar Question, But This Time There Is No Issue With Hpo-App Integratio

Ok, so I think I recreated your issue. The problem is, HPO was designed to handle more possible combinations of items than is reasonable to test. In this case though, there are only 11 possible parameter "combinations". But by default, ClearML sets the maximum number of jobs much higher than that (check the advanced settings in the wizard).

It seems like HPO doesn't check for duplicate experiments though, so that means it will keep spawning experiments (even though it might have executed the exact s...

2 years ago
0 Hpo App Question: My Config Includes 11 Parameter Values (0 - 1, Step 0.1). I'Ll Expect To See 11 Experiments, But I Fact It Was "52 Iterations". What I'M Missing (Last Time I Asked Similar Question, But This Time There Is No Issue With Hpo-App Integratio

In the meantime, it might help to limit the number of jobs using the advanced settings. If you know the exact number and want to run every combination for sure, just set it that way 🙂
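
If you ever drive the same optimization from the SDK instead of the app, the equivalent knob would be something like total_max_jobs; a rough sketch, assuming a plain grid search and with the base task ID and parameter name as placeholders:

```python
from clearml.automation import (
    DiscreteParameterRange,
    GridSearch,
    HyperParameterOptimizer,
)

# 11 discrete values -> 11 combinations, so cap the optimizer at exactly 11 jobs
optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",  # placeholder
    hyper_parameters=[
        DiscreteParameterRange(
            "General/threshold", values=[round(0.1 * i, 1) for i in range(11)]
        ),
    ],
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    optimizer_class=GridSearch,
    total_max_jobs=11,  # hard cap on spawned experiments
)
# then start it as usual with optimizer.start()
```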

2 years ago