AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 6 months ago

Reputation

0

Badges 1

25 × Eureka!
0 I'M Running Hyperparameter Tuning With OptunaOptimization. When Using Optuna It Is Possible To Save Studies As You Go And Pick Them Up Again In Case Of Crashes Etc. Is There Any Way Of Accessing The Optuna.Study Class So When We Run The OptunaOptimization W

yes, that makes sense to me.
What is your specific use case, meaning when/how do you stop / launch the hpo?
Would it make sense to continue from a previous execution and just provide the Task ID? Wdyt?

2 years ago
0 I Saw Some Talk Of Clearml + Kedro On Reddit. Is That A Good Approach?

TrickySheep9

Is there a way to see a roadmap on such things? (edited)

Hmm I think we have some internal one, I have to admit these things change priority all the time (so it is hard to put an actual date on them).
Generally speaking, pipelines with functions should be out in a week or so, TaskScheduler + Task Triggers should be out at about the same time.
UI for creating pipelines directly from the web app is in the works, but I do not have a specific ETA on that

3 years ago
0 Hello! I Have An Issue Reproducing My Runs. The Task.Create Completes Successfully. When I Clone And Enqueue A Completed Task The Clone Fails. It Fails During The Python Requirements Installation. Why Is This? Do You Know How I Can Debug? Thank You In Adv

Hi RattyBluewhale45
What's the clearml agent version? And could you verify with the latest RC?
Lastly, how are you running the agent, docker mode? What's the base container?

one month ago
0 I Have An On-Prem/Free Clearml-Server Setup With Custom S3 Back-End Storage. I'M Trying Out The Clearml-Serving Capability And Not Sure What'S Failing. When I Start The Serving Containers It Can'T Retrieve The Model:

To auto upload the model you have to tell clearml to upload it somewhere, usually by passing output_uri to Task.init or setting the default_output_uri in the clearml.conf
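As a minimal sketch of that (the bucket and the project/task names below are placeholders, not from the original question):

from clearml import Task

# Point auto-logged models/artifacts at the custom S3 back-end (placeholder bucket)
task = Task.init(
    project_name="serving-demo",
    task_name="train-model",
    output_uri="s3://my-bucket/clearml",
)

The same default can also be set once in clearml.conf via sdk.development.default_output_uri instead of per task.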

2 years ago
0 Different Question About Warnings: I'M Getting (Infrequently) This Warning, Followed By My Script Hanging

Okay, progress.
What are you getting when running the following from the git repo folder:
git ls-remote --get-url origin

3 years ago
0 Hi Everyone! I Try To Run Pytorch Lightning Code On Slurm With Srun Script Like This (

Yes they are supposed to be routed there by pytorch dist
(and the TB logs are on the master only anyhow)

one year ago
0 How Does Clearml Associate Projects/Experiments With Git Repos? Can I Think Of It As Clearml Project = Git Repo And Clearml Experiment = Git Commit? What About Git Branches - Is There Any Way To Organize Things Such That Separate Branches Are Easy To Trac

Interesting!
I would also add that the Task name is not unique, so you can use it to describe the "process / goal etc.", which makes it pretty obvious to search / review from the UI.
Regarding models and branches, I would use the Task tags (you can have as many as you like) to tag the specific model type (or dev branch if the algorithm is different); this means you can also easily filter based on the Tags in the UI.
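A minimal sketch of that tagging idea (project, task, and tag names here are placeholders):

from clearml import Task

# Tag the experiment at creation time
task = Task.init(
    project_name="my-project",
    task_name="train resnet - feature-branch",
    tags=["resnet", "feature-branch"],
)

# Later, filter programmatically by the same tags
tasks = Task.get_tasks(project_name="my-project", tags=["feature-branch"])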

can you use the Web UI to compare the artifacts from two separate subprojects?

Yes comp...

one year ago
0 Hey, Just Trying Out Clearml-Serving And Getting The Following Error

Hi RobustRat47

My guess is it's something from the converting PyTorch code to TorchScript. I'm getting this error when trying the

I think you are correct see here:
https://github.com/allegroai/clearml-serving/blob/d15bfcade54c7bdd8f3765408adc480d5ceb4b45/examples/pytorch/train_pytorch_mnist.py#L136
you have to convert the model to TorchScript for Triton to serve it
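For reference, a minimal sketch of that conversion step (the model variable and the MNIST-shaped example input are placeholders):

import torch

model.eval()  # 'model' is the trained torch.nn.Module (placeholder)
example_input = torch.randn(1, 1, 28, 28)  # placeholder example input
traced = torch.jit.trace(model, example_input)
traced.save("model.pt")  # TorchScript file that Triton can serve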

2 years ago
0 Hi. I Have A Few Questions About The Snippet Attached

The second run prints out the same (non) "random" numbers as the first run

ClearML sets the initial random seed for you, basically trying to help with reproducibility. That said inside the function you can always do:
import random
import time
random.seed(time.time())
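If numpy or torch randomness also needs to be restored, the same pattern applies (a generic extension of the snippet above, not anything ClearML-specific):

import time
import random
import numpy as np
import torch

seed = int(time.time())
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)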

2 years ago
0 Can Anyone Recommend Some Good Ai Deployment Frameworks For Kubernetes? (Better If They Have/Can Be Integrated With Clearml)

Hi DisgustedDove53
When you say "deployment" there are a lot of ways to interpret that 🙂 what exactly are you looking for ?

3 years ago
0 Hey, Is There A Way To Disable Going To The Demo Server

Hi SharpDove45

what 

 suggested about how it fails on bad/missing credentials

Yes, this is correct; since you specifically set the hosts, worst case you will end up with wrong credentials 🙂
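As a hedged illustration, explicitly pinning the hosts in clearml.conf usually looks something like this (the URLs are placeholders for a self-hosted server; the ports shown are the defaults):

api {
    web_server: http://my-clearml-server:8080
    api_server: http://my-clearml-server:8008
    files_server: http://my-clearml-server:8081
    credentials {
        "access_key" = "<YOUR_ACCESS_KEY>"
        "secret_key" = "<YOUR_SECRET_KEY>"
    }
}

With explicit hosts like these, nothing should fall back to the demo server.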

3 years ago
0 Anyone Seeing These Errors?

This seems more complicated than I thought... I think you are correct, and it fails to load the entire module, let me check what I can do

2 years ago
0 Hi All! Is There Any Simple Way To Use

Yes this is exactly the solution!
Nice 🎊 !

one year ago
0 Hello! There Is Great Alternative For Argparse Developed By Facebook For Ml Named

GrievingTurkey78 yes, you are correct on both.

Will the sweep functionality work?

Yes it should, that said, it will not use the trains-agent so you are limited to the machine running the sweep.
If you want to do HPO on multi-node, checkout this example 🙂
https://github.com/allegroai/trains/blob/master/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py
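Roughly, that example boils down to the sketch below (the imports use the current clearml names, while the linked example uses the older trains namespace; the base task ID, queue name, and hyperparameter names are placeholders):

from clearml import Task
from clearml.automation import (
    HyperParameterOptimizer,
    UniformParameterRange,
    DiscreteParameterRange,
)

# Controller task that drives the sweep
task = Task.init(project_name="hpo", task_name="optimizer controller")

optimizer = HyperParameterOptimizer(
    base_task_id="<TEMPLATE_TASK_ID>",  # placeholder: the experiment to clone per trial
    hyper_parameters=[
        UniformParameterRange("General/lr", min_value=1e-4, max_value=1e-1),
        DiscreteParameterRange("General/batch_size", values=[32, 64, 128]),
    ],
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    max_number_of_concurrent_tasks=4,
    execution_queue="default",  # placeholder queue, served by one or more agents
)
optimizer.start()
optimizer.wait()
optimizer.stop()

Because each trial is enqueued as its own Task, agents on multiple machines can pull from the same queue, which is what makes the multi-node part work.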

3 years ago
0 Could You Please Explain A Bit More How Trains Adapt The Torch Version Depending On The Installed Cuda Version? Here Is My Setup:

You can set torch to be installed last:
post_packages: ["horovod", "torch"]
This will make sure the trains-agent installs the torch version (the one you specified in the "installed packages" section) last.
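For context, this setting lives under the agent's package_manager section in the configuration file (clearml.conf today, trains.conf at the time); a minimal sketch:

agent {
    package_manager {
        # packages listed here are installed after everything else, in order
        post_packages: ["horovod", "torch"]
    }
}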

3 years ago
0 Getting A Super Weird Error. Everything Works Fine On Local, When Trying To Run On Remote, Getting This Error Failing To Apply The Git Diff

WackyRabbit7 hmmm, seems like a non-standard character inside the diff.
Let me check something

3 years ago
0 Hello Everyone, I'M Currently Trying Clearml-Serving To Serve A Model Via An Endpoint. I Followed The Tutorial In The Documentation, But When I Try A Request, I Get An Error. Here It Is: Curl -X Post "

Interesting question, should work and looks like an interesting combination, I'm curious what you come up with.
btw: grafana itself can already provide a lot of alerts for drift etc, this is basically their histogram delta feature

7 months ago
0 Hey, Would It Possible To Add An Option To Make

Ohh, the controller task itself holds the artifacts ?

4 years ago
0 Hello Everyone. I'Ve Just Started Playing With Clearml. In The 2Nd 'Getting Started' Tutorial, I Launched The Agent From Google Colab. But Whenever A Task Is Picked, It Fails For The Following Error. Any Clues? Thank You!

Hi ContemplativeArcticwolf43

In the 2nd 'Getting Started' tutorial,

Could you send a link to the specific notebook?

. But whenever a task is picked, it fails for the following

You mean after the Task.init call?

6 months ago
0 Hi. Inside A Notebook When I Cerate A New Clearml Task And Then Run Sklearn Gridsearchcv , Clearml Uploads A Lot Of Model. Is There A Way To Force Clearml Not To Upload These Models? Related Question Is What Are These Models Anyway? Their Name Only Contai

Is that normal or a possible bug?

This sounds like the xgboost internal format; it makes sense to me for it to be joblib (which is like pickle, only faster and safer)
Let me see if we can also add the model object to the callback...

one year ago