AgitatedDove14
Moderator
48 Questions, 8048 Answers
  Active since 10 January 2023
  Last activity 5 months ago

0 Hey, I Hope This Is The Right Place To Ask. We're A Small Data Science Team That Wants To Log Everything About Our ML Models. Looking Around On The Internet, Mostly MLflow Is Being Recommended, But Occasionally The Name Trains Pops Up. According To You,

JitteryCoyote63

I agree that its name is not search-engine friendly,

LOL πŸ˜„
It was an internal joke, the guys decided to call it "trains" 'cause, you know, it trains...
It was unstoppable, we should probably do a line of merch with AI πŸš† πŸ˜‰
Anyhow, this one definitely backfired...

4 years ago
0 Is There A Way To Set Precedence On Package Managers? If We Set An Agent To Use

UnevenDolphin73 sounds great, any chance you can open a GitHub issue on the clearml-agent repo for this feature request?

2 years ago
0 Hello Everyone, I'm Currently Trying clearml-serving To Serve A Model Via An Endpoint. I Followed The Tutorial In The Documentation, But When I Try A Request, I Get An Error. Here It Is: curl -X POST "

Is it not possible to serve a model with preprocessing pipeline from scikit-learn using clearml-serving?

Of course it is, did you first try the example here: None
If you need to run your own LogisticRegression call you can use this example:
None
Notice this is where the custom endpoint actually calls the prediction: [None](https...
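
To illustrate just the scikit-learn side (the clearml-serving custom-endpoint wiring itself follows the linked example), here is a minimal sketch of a pipeline that bundles the preprocessing with the classifier; the toy data and file name are placeholders:

import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for the real training set
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Preprocessing and model travel together, so serving only needs pipeline.predict()
pipeline = Pipeline([("scaler", StandardScaler()), ("clf", LogisticRegression())])
pipeline.fit(X, y)

joblib.dump(pipeline, "sklearn_pipeline.pkl")  # register/upload this file as the served model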

6 months ago
0 Hi, In My Setup I Run Multiple Experiments In Parallel From The Same Script. I Understand That There Can Only Be One Execution

Well that depends on how you think about the automation. If you are running your experiments manually (i.e. you specifically call/execute them), then at the beginning of each experiment (or function) call Task.init, and when you are done call Task.close. This can be done in parallel if you are running them from separate processes.
If you want to automate the process, you can start using the trains-agent which could help you spin those experiments on as many machines as you l...
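
As a minimal sketch of the manual approach (using the current clearml package; project/task names and the training body are placeholders):

from multiprocessing import Process
from clearml import Task

def run_experiment(name, lr):
    # Each process gets its own Task, so the experiments do not clash
    task = Task.init(project_name="examples", task_name=name)
    task.connect({"lr": lr})  # log the hyperparameters
    # ... training code goes here ...
    task.close()  # mark the experiment as completed

if __name__ == "__main__":
    procs = [Process(target=run_experiment, args=(f"exp_{i}", 0.1 * (i + 1))) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()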

3 years ago
0 In Pipelinev2, Is It Possible To Register Artifacts To The Pipeline Task? I See There Is A Private Variable

okay but still I want to take only a row of each artifact

What do you mean?

How do I get from the node to the task object?

pipeline_task = Task.get_task(task_id=Task.current_task().parent)
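
For example, a minimal sketch of pulling a single row out of an artifact registered on the pipeline task ("stats" is a hypothetical artifact name, assumed to hold a pandas DataFrame):

from clearml import Task

# From inside a pipeline step, the parent of the current task is the pipeline task
pipeline_task = Task.get_task(task_id=Task.current_task().parent)

# .get() deserializes the stored object (e.g. back into a pandas DataFrame)
df = pipeline_task.artifacts["stats"].get()
first_row = df.iloc[0]  # take only a single row of the artifact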

2 years ago
0 Question: Has Anyone Done Anything With Ray Or Rllib, And Clearml? Would Clearml Be Able To Integrate With Those Out Of The Box?

SmallDeer34 in theory no reason it will not work with it.
If you are doing a single node (from Ray's perspective), this should just work. The challenge might be multi-node Ray + ClearML, as you will have to use ClearML to set up the environment and Ray as the messaging layer (think OpenMPI etc.).
What did you have in mind?

3 years ago
0 Question: Has Anyone Done Anything With Ray Or Rllib, And Clearml? Would Clearml Be Able To Integrate With Those Out Of The Box?

save off the "best" model instead of the last

It should be relatively easy to update the main Task with the best-performing model, no?

3 years ago
0 Hi, I Faced A Silly Error When I Run The Python Script With task = Task.init(project_name='My Project', task_name='My Task'). The Task Goes To The Trains Server, But In The Trains Server, In Installed Packages Part One Of The Line

I think it fails because it tries to install trains twice. Could you remove the trains package and test? I'm also curious how you have both installed?!

4 years ago
0 Hi, I Faced A Silly Error When I Run The Python Script With task = Task.init(project_name='My Project', task_name='My Task'). The Task Goes To The Trains Server, But In The Trains Server, In Installed Packages Part One Of The Line

Yes, I mean trains-agent. Actually I am using 0.15.2rc0. But, I am using local files, I mean I clone trains and trains-agent repos and install them. Their versions are 0.15.2rc0

I see, that's why we get the git ref, not the package version.

4 years ago
0 Hi

SarcasticSparrow10 sure, see "execute_remotely", it does exactly that:
https://allegro.ai/docs/task.html#trains.task.Task.execute_remotely
It will stop the current process (after syncing everything) and launch itself remotely (i.e. enqueue itself)
When the same code is run by the "trains-agent", the execute_remotely call becomes a no-operation and is basically skipped
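
A minimal sketch of the pattern (project, task, and queue names are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="remote run")

# Locally: syncs everything, enqueues this task on the "default" queue and stops the process.
# Under the agent: this call is a no-op and execution simply continues.
task.execute_remotely(queue_name="default")

# ... training code that will actually run on the agent ...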

3 years ago
0 Hi Folks, We Are Trying To Find A Tool To Help With Workflow Orchestration. This Is Our Stack So Far (Label Studio/Clearml/Seldon). Does Anyone Have Any Experience With Using Any Workflow Which Is Most Compatible Esp Wrt To Clearml.

TenseOstrich47 / PleasantGiraffe85
The next version (I think releasing today) will already contain scheduling, and the next one (probably an RC right after) will include triggering. That said, currently the UI wizard for both (i.e. creating the triggers) is only available in the community hosted service. That said, I think that creating them from code (triggers/schedule) actually makes a lot of sense,

pipeline presented in a clear UI,

This is actually actively worked on, I think Anxious...

3 years ago
0 Hey, I Would Like My Experiment To Call At Some Point A Cli Program Installed As A Dependency Of The Experiment. Here Is What I Do:

So I'm guessing the CLI will be in the folder of the Python executable:
import sys
from pathlib2 import Path
(Path(sys.executable).parent / 'cli-util-here').as_posix()
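
For instance, a sketch of using the resolved path to invoke the tool ('cli-util-here' stands in for the real executable name):

import subprocess
import sys
from pathlib import Path

# A CLI installed as a dependency of the experiment lives next to the interpreter
cli_path = (Path(sys.executable).parent / "cli-util-here").as_posix()
subprocess.run([cli_path, "--help"], check=True)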

3 years ago
0 Hey, I Would Like My Experiment To Call At Some Point A Cli Program Installed As A Dependency Of The Experiment. Here Is What I Do:

Hi JitteryCoyote63
Just making sure, the package itself is installed as part of the "Installed packages", and it also installs a command-line utility?

3 years ago
0 Hello! I Get The Following Error In Results->Console After A Task Is Sent For Remote Execution (Using SDK):

I have an idea, can you try with:
task = Task.init(..., reuse_last_task_id=False)
I have a suspicion it starts the Tasks in parallel, and the "reuse_last_task_id" causes them to "reuse the same task locally", which makes them overwrite the configuration of one another.
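
For context, a minimal sketch of the suggested call (project and task names are placeholders):

from clearml import Task

# Force a brand-new task for each parallel run instead of reusing the previous one
task = Task.init(
    project_name="examples",
    task_name="parallel run",
    reuse_last_task_id=False,
)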

2 years ago
0 Hello, I Would Like To Use Spot Instances Together With The AWS Autoscaler To Train Models With PyTorch/Ignite And I Am Wondering How To Support Interruptions During The Training (In Case The Instance Is Terminated By AWS). Is There Anything Already Built

JitteryCoyote63

somehow the previous iterations, not sure yet if it’s coming from my code, ignite or clearml

ClearML will automatically continue reporting from the previous iteration (i.e. if before continuing the Task the last iteration was 100, then the next report with iteration=0 will actually be 101)

task.set_initial_iteration(engine.state.iteration)

Basically it is called automatically by ClearML (obviously only when you continue an aborted Task)
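
A minimal sketch of the manual variant, assuming the iteration count is restored from a training checkpoint (project/task names and the value are placeholders):

from clearml import Task

# Re-attach to the aborted task instead of starting a new one
task = Task.init(
    project_name="examples",
    task_name="spot training",
    continue_last_task=True,
)

# Iteration restored from the checkpoint (placeholder value here)
restored_iteration = 100

# Offset reporting so new reports continue after the last logged iteration;
# ClearML also does this automatically when continuing an aborted Task
task.set_initial_iteration(restored_iteration)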

3 years ago
0 Hello

But how do you specify the data hyperparameter input and output models to use when the agent runs the experiment

They are autodetected if you are using Argparse / Hydra / python-fire / etc.
The first time you are running the code (either locally or with an agent), it will add the hyperparameter section for you.
That said you can also provide it as part of the clearml-task command with --args
(btw: clearml-task --help will list all the options, https://clear.ml/docs/...
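
A minimal sketch of the autodetection with argparse (argument names and defaults are placeholders):

import argparse
from clearml import Task

# Task.init hooks argparse, so these arguments show up under the task's hyperparameters
task = Task.init(project_name="examples", task_name="train")

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.001)
parser.add_argument("--epochs", type=int, default=10)
args = parser.parse_args()

When launched via clearml-task, the same values can typically be overridden with --args (e.g. --args lr=0.01 epochs=5).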

2 years ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

Thanks @<1569496075083976704:profile|SweetShells3> for bumping it!
Let me check where it stands, I think I remember a fix...

one year ago
0 Is There A Way To Set Precedence On Package Managers? If We Set An Agent To Use

So you have a repo with poetry that some users update and some do not?
All working on the same branch?

2 years ago
0 Hey, Just Trying Out Clearml-Serving And Getting The Following Error

Hi RobustRat47

My guess is it's something from converting the PyTorch code to TorchScript. I'm getting this error when trying the

I think you are correct see here:
https://github.com/allegroai/clearml-serving/blob/d15bfcade54c7bdd8f3765408adc480d5ceb4b45/examples/pytorch/train_pytorch_mnist.py#L136
you have to convert the model to TorchScript for Triton to serve it
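
A generic sketch of that conversion step (the model and input shape are placeholders; the linked example shows the exact variant used there):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder model
model.eval()

# Trace the model with a representative input to produce a TorchScript module
example_input = torch.randn(1, 1, 28, 28)
scripted = torch.jit.trace(model, example_input)
scripted.save("serving_model.pt")  # this TorchScript file is what Triton serves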

2 years ago
0 Hey, Just Trying Out Clearml-Serving And Getting The Following Error

Notice that we are using the same version:
https://github.com/allegroai/clearml-serving/blob/d15bfcade54c7bdd8f3765408adc480d5ceb4b45/clearml_serving/engines/triton/Dockerfile#L2
The reason was that the previous version did not support TorchScript (similar to the error you reported)
My question is, why don't you use the "allegroai/clearml-serving-triton:latest" container ?

2 years ago
0 Hey, Just Trying Out Clearml-Serving And Getting The Following Error

RobustRat47 what's the Triton container you are using ?
BTW, the Triton error is:
model_repository_manager.cc:1152] failed to load 'test_model_pytorch' version 1: Internal: unable to create stream: the provided PTX was compiled with an unsupported toolchain.
https://github.com/triton-inference-server/server/issues/3877

2 years ago
0 I'm Using CatBoost For Training, But Sadly It Does Not Have A Native Integration With ClearML (XGBoost And LightGBM Do Have Integrations). But CatBoost Writes Down Training Logs In TensorBoard Format (Into A

it certainly does not use tensorboard python lib

Hmm, yes I assume this is why the automagic is not working 😞

Does it have a pythonic interface for the metrics?
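
If it does, a minimal sketch of reporting such metrics manually through the ClearML logger (the toy data and eval setup are placeholders, assuming CatBoost's get_evals_result()):

from catboost import CatBoostClassifier
from clearml import Task

task = Task.init(project_name="examples", task_name="catboost training")
logger = task.get_logger()

model = CatBoostClassifier(iterations=50, verbose=False)
model.fit(
    [[0, 1], [1, 0], [1, 1], [0, 0]], [0, 1, 1, 0],
    eval_set=([[0, 1], [1, 0]], [0, 1]),
)

# get_evals_result() returns {eval-set-name: {metric-name: [value per iteration]}}
for eval_name, metrics in model.get_evals_result().items():
    for metric_name, values in metrics.items():
        for iteration, value in enumerate(values):
            logger.report_scalar(title=metric_name, series=eval_name, value=value, iteration=iteration)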

3 years ago