AgitatedDove14
Moderator
49 Questions, 8094 Answers
  Active since 10 January 2023
  Last activity 10 months ago

Reputation: 0
Badges: 25 × Eureka!
0 Votes 1 Answers 1K Views
This is usually due to enterprise-level issued HTTPS certificates that are not part of the local installation (basically any Python-generated SSL request will fail)
5 years ago
0 Votes 3 Answers 1K Views
Hi, v0.15 is out, 🎉 🚀 Your feedback had a major influence on the features we added 🙂 thank you! A selected list of features: Column resizing / ordering /...
4 years ago
0 Votes 3 Answers 1K Views
This will close it: Task.current_task().close(). I think we should rename completed() because it just marks the Task as completed on the backend but does not ac...
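For context, a minimal sketch of closing the current task from code, assuming an already-initialized task (the project and task names below are made up for illustration):

from clearml import Task

# start (or attach to) a task - names are illustrative only
task = Task.init(project_name="examples", task_name="close demo")

# ... training / logging happens here ...

# close the current task; per the answer above, marking completed only flips the
# backend status, while close() also shuts down the local logging machinery
Task.current_task().close()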
3 years ago
0 Votes 0 Answers 1K Views
Is it a one-time thing? Or recurring?
5 years ago
0 Votes 0 Answers 1K Views
docs are up
5 years ago
0 Votes 0 Answers 1K Views
Is your server using https?!
5 years ago
0 Votes 3 Answers 632 Views
@<1523703325881536512:profile|ConvolutedSealion94> these are xgboost internal metrics that are automatically picked up by ClearML
2 years ago
0 Votes 0 Answers 1K Views
New RC for trains-agent is out: pip install trains-agent==0.13.2rc1
5 years ago
0 Votes 0 Answers 2K Views
I would guess connectivity issues; the TLS error is probably Python's inaccurate response (I mean, in a way it is also a TLS error, but I would imagine this has more...
5 years ago
0 Votes 0 Answers 1K Views
New releases: pip install trains==0.13.3 https://github.com/allegroai/trains/releases/tag/0.13.3 and pip install trains-agent==0.13.2 https://github.com/allegroai/...
4 years ago
0 Votes 1 Answers 724 Views
LSTMeow is back! Bots/Gals/Guys feel free to 👍 None
4 years ago
0 Votes 10 Answers 746 Views
Happy Friday everyone! We have a new repo release we would love to get your feedback on 🚀 🎉 Finally easy FRACTIONAL GPU on any NVIDIA GPU 🎊 Run our nvidi...
11 months ago
0 Votes 0 Answers 1K Views
YummyWhale40 awesome thanks!
4 years ago
0 Votes 2 Answers 1K Views
Hi! trains 0.16.2 is finally out with the new pipelines interface! Check out the new example https://github.com/allegroai/trains/blob/master/examples/pipeli...
4 years ago
0 Votes 0 Answers 1K Views
Hi Guys/Gals, if you want to check out the latest RC we have 0.15.0rc0 out: pip install trains==0.15.0rc0 and pip install trains-agent==0.15.0rc0. Many of the impr...
4 years ago
0 Votes 2 Answers 646 Views
OMG Look who just joined the PyTorch Ecosystem None Yes! it is TRAINS 🚆 🎉 🎈
4 years ago
0 Votes 0 Answers 1K Views
Finally
5 years ago
0 Votes 0 Answers 1K Views
https://allegro.ai/docs
5 years ago
0 Votes 0 Answers 1K Views
New video is out 🙂 Cloud Autoscalers are awesome https://www.youtube.com/watch?v=j4XVMAaUt3E
2 years ago
0 Votes 0 Answers 1K Views
YEY!!!! Download as CSV 🤯
2 years ago
0 Votes 0 Answers 2K Views
Hello Everyone!
5 years ago
0 Votes 6 Answers 1K Views
Hi! ClearML Server + SDK v1.9.0 is out! 🎉 🚀 🎊 Happy Holidays and Happy New Year! ❇️ 🎇 🎄
2 years ago
0 Votes 2 Answers 1K Views
Hi! ClearML v0.17.1 and ClearML-Agent v0.17.0 are now the official packages & repositories 🎉 🎊 👋 🛤️ This new name brings on many changes, mainly replace a...
4 years ago
0 Votes 0 Answers 1K Views
We are at AAAI NY, come look us up :)
5 years ago
0 Votes 0 Answers 1K Views
apparently everyone can ...
5 years ago
0 Votes 1 Answers 1K Views
Quick note: v1.3.1 caused PipelineDecorator Tasks to disable the automagic frameworks connection by default; this bug is solved in the latest RC: pip install ...
2 years ago
0 Gm Folks, Really Liking Clearml So Far As My Top Choice (After Looking At Dvc, Mlflow), And Thank You For Your Help Here! I Had Another Q: Is There A Recommended Workflow To Be Able To “Drop Into” The

gm folks, really liking ClearML so far as my top choice (after looking at dvc, mlflow), and thank you for your help here!

Thanks HurtWoodpecker30!

Is there a recommended workflow to be able to “drop into” the exact env (code, venv, data) of a previous experiment (which may have been several commits ago), to reproduce that experiment?

You can use clearml-agent on your local machine to build the env of any Task:
clearml-agent build --id <ta...

2 years ago
0 Hello Channel, Two Other Related Questions:

@<1556812486840160256:profile|SuccessfulRaven86> is the issue with Flask reproducible? If so, could you open a GitHub issue so we do not forget to look into it?

one year ago
0 Hey There! I'm Encountering An Odd Issue - I'm Running My Agents As Python Processes On Windows PC Endpoints. I Recently Had A Bug That Forced Me To Delete All Cache And All (Non-Core) Venv-Builds. My Firstly Booted Agent Uses The 'First' Venv-Build

@<1710827340621156352:profile|HungryFrog27> the venv-build folder is supposed to be deleted after each task is done. How did you end up with leftovers? Could it be Windows was failing to delete it for some reason? That actually connects with your initial issue, no?

5 months ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn't It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'm Using ClearML v1.1.3rc0 And ClearML-Agent 1.1.0

Okay so my thinking is, on the PipelineController / decorator we will have:
abort_all_running_steps_on_failure=False (if True, on a step failing it will abort all running steps and leave)
Then per step / component decorator we will have
continue_pipeline_on_failure=False (if True, on a step failing, the rest of the pipeline DAG will continue)
GiganticTurtle0 wdyt?

3 years ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn't It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'm Using ClearML v1.1.3rc0 And ClearML-Agent 1.1.0

However, are you thinking of including this callbacks feature in the new pipelines as well?

Can you see a good use case? (I mean the infrastructure supports it, but sometimes too many arguments is just confusing, no?!)

3 years ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn't It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'm Using ClearML v1.1.3rc0 And ClearML-Agent 1.1.0

GiganticTurtle0 My apologies, I made a mistake, this will not work 😞
In the example above "step_two" is executed "instantaneously", meaning it is just launching the remote task, it is not actually waiting for it.
This means an exception will not be raised in the "correct" context (actually it will be raised in a background thread).
That means that I think we have to have a callback function, otherwise there is no actual way to catch the failed pipeline task.
Maybe the only re...

3 years ago
0 Hi There! Can Anybody Help Me With Specifying The 'Platform' For A Model In Clearml-Serving. I Am Using The K8S Clearml-Serving Setup (Version 1.3.1). I Already Tried A Bunch Of Variants Like

I think the real issue is that I am not able to specify a platform for the model,

None
there is no need to specify it; remove it from the config.pbtxt - clearml-serving will automatically add the backend

8 months ago
0 Hello, I Tried The Clearml-Session Cli To Start A Jupyter Instance On An Agent, But An Error With The Password, Here Is The Full Cli Log:

Yes docker was not installed in the machine

Okay, makes sense, we should definitely check that you have docker before starting the daemon 😉

Ok, it would be nice to have a --user-folder-mounted that does the linking automatically

It might be misleading if you are running on a k8s cluster, where one cannot just -v mount volume...
What do you think?

3 years ago
0 Question About The Usage Of Trains Agents. In Our Company We Have 3 Hpc Servers, Two Of Them Have Multiple Gpus, One Is Cpu Only. I Saw In The Docs The Multiple Agents Can Be Run Separately Assigning Gpus In Whatever Manner You Want. My Questions Are 1

WackyRabbit7 my apologies for the lack of background in my answer 🙂
Let me start from the top: one of the goals of the trains-agent is to reproduce the "original" execution environment. Once that is done, it will launch the code and monitor it. In order to reproduce the original execution environment, trains-agent will install all the needed python packages, pull the code, and apply the uncommitted changes.
If your entire environment is Python-based, then virtual-environment mode is proba...

4 years ago
0 I'm Running Hyperparameter Tuning With OptunaOptimization. When Using Optuna It Is Possible To Save Studies As You Go And Pick Them Up Again In Case Of Crashes Etc. Is There Any Way Of Accessing The Optuna.Study Class So When We Run The OptunaOptimization W

If you could provide the specific task ID then it could fetch the training data and study from the previous task and continue with the specified number of trainings.

Yes exactly, and also all the definitions for the HPO process (variable space, study, etc.)

The reason that being able to continue from a past study would be useful is that the study provides a base for pruning and optimization of the task. The task would be stopped by aborting when the gpu-rig that it is using is neede...

2 years ago
0 Hi All! Currently I Am Trying To Create A Tool That Can Perform Certain Operations On Dataset Ids, This Is A Skeleton Of What I Have In Mind (Based On The Examples):

Hi GrievingTurkey78
First, I would look at the CLI clearml-data as a baseline for implementing such a tool:
Docs:
https://github.com/allegroai/clearml/blob/master/docs/datasets.md
Implementation :
https://github.com/allegroai/clearml/blob/master/clearml/cli/data/main.py
Regarding your questions:
(1) No, a new dataset version will only store the diff from the parent (if files are removed, it stores the metadata that says the file was removed)
(2) Yes any get operation will downl...
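A minimal sketch of the same idea with the Python Dataset API; the dataset/project names, parent ID, and local path below are placeholders, not values from the original thread:

from clearml import Dataset

# create a new version that only stores the diff from its parent
child = Dataset.create(
    dataset_name="my-dataset", dataset_project="examples",
    parent_datasets=["<parent-dataset-id>"],  # placeholder parent id
)
child.add_files("/path/to/changed/files")  # placeholder path
child.upload()
child.finalize()

# any "get" operation downloads (and caches) the merged content of the version chain
local_copy = Dataset.get(dataset_project="examples", dataset_name="my-dataset").get_local_copy()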

4 years ago
0 Hi, I Recently Started Evaluating Trains. Given That Tensorboard Is Much More Mature, And Our Team Is Used To It, I Think It Is Likely We Won’t Want To Stop Using Tensorboard Completely And Just Switch To Trains. But I Am Thinking It Could Be Pretty Use

Hi LivelyLion31
Yes, the reason we designed Trains with an automagic integration is exactly that: users do not need to learn another package, and with almost no effort you get most of the benefits.
Regarding the TB files, from experience most users will use the TB files shortly after they executed the experiment, usually for debugging and in-depth capabilities (like network debugger profile etc); metric view is something that is much easier to do on a centralized server (like on...

4 years ago
0 Hi, First Time Installing Trains-Server. But I’m Getting The Following Error When Loading The Web Page

Hi CooperativeSealion8
Seems like your NoScript addon is blocking the site :)

4 years ago
0 Hi, I Am Trying To Clone An Experiment. Using The Server Gui, I Select 'Clone' And Then 'Enqueue'. In The Console Window, I See That Clearml Makes Sure The Environment Is Installed, And Then It Goes Into A 'Completed' Status Although The Experiment Did N

I did nothing to generate a command-line. Just cloned the experiment and enqueued it. Used the server GUI.

Who/What created the initial experiment?

I noticed that if I run the initial experiment by "python -m folder_name.script_name"

"-m module" as script entry is used to launch entry points like python modules (which is translated to "python -m script")
Why isn't the entry point just the python script?
The command line arguments are passed as arguments on the Args section of t...
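A small sketch of how CLI arguments end up in the Args section when launching as a module; the argument name and project/task names are invented for the example:

# folder_name/script_name.py - run with: python -m folder_name.script_name --epochs 20
import argparse
from clearml import Task

task = Task.init(project_name="examples", task_name="module entry point")  # illustrative names
parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=10)  # hypothetical argument
args = parser.parse_args()
# Task.init auto-connects argparse, so --epochs shows up under the task's Args
# section and can be edited in the UI before cloning/enqueuing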

2 years ago
0 Hello Everyone, I’m A Newcomer To Clearml. I Have A Question Related To

Hi MortifiedCrow63
I have to admit this is very strange, I think the fact it works for the artifacts and not for the model is kind of a fluke ...
If you use "wait_on_upload" argument in the upload_artifact you end up with the same behavior. Even if uploaded in the background, the issue is still there, for me it was revealed the minute I limited the upload bandwidth to under 300kbps.It seems the internal GS timeout assumes every chunk should be uploaded in under 60 seconds.
The default chunk...
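For reference, a minimal sketch of the wait_on_upload usage mentioned above; the artifact name, object, and task names are placeholders:

from clearml import Task

task = Task.init(project_name="examples", task_name="artifact upload demo")  # illustrative names
payload = {"rows": list(range(1000))}  # placeholder artifact object
# block until the artifact is fully uploaded instead of uploading in the background
task.upload_artifact(name="my_artifact", artifact_object=payload, wait_on_upload=True)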

3 years ago
0 Hi Team! Is There A Way To Make ClearML’s Aws Autoscaler And Queues Resource-Aware Please? I.E. If We Can Say, As We Enqueue Our Job, How Much Ram Or Gpu-Ram Or Even Gpus It Needs, Have The Scheduler/Autoscaler Dispatch The Job To Instances That Are Of Th

because a pipeline is composed of multiple tasks, different tasks in the pipeline could run on different machines.

Yes!

. Or more specifically, they could run on different queues, and as you said, in your other response, we could have a Q for smaller CPU-based instances, and another queue larger GPU-based instances.

Exactly!

I like the idea of having a queue dedicated to CPU-based instances that has multiple agents running on it simultaneously. Like maybe four agents.

Th...

one year ago
0 Hi, Is There A Way To Create A Draft Experiment Manually? That Is - Give It Some File To Run, Or, Better Yet, A Function To Run Which Will Be The Start Of The Experiment? In W&B, For Example It Is Possible To Simply Write (Their

OddAlligator72 I like this idea.
The single thing I'm not sure about is the "function entry point"
Why would one do that? Meaning, why wouldn't you have a proper Python entry point?
The reason I'm reluctant is that you might have calls/functions/variables in the global scope of the file storing the function, and then users will not know why something broke, and it will be very cumbersome to debug.
A simple script entry point seems trivial to launch and debug locally.
What do you think? What woul...

4 years ago
0 Hi, There's Something I Don't Find Too Logical When Using Clearml And Its Agents. I Will Need To Run My Code Once On My Client Computer, This Is Without Gpus. And Then I Will Need To Run It Via The Ui On Clearml Server That Has Gpus. Why Can't I Configure

Hi SubstantialElk6 I'll start at the end: you can run your code directly on the remote GPU machine 🙂
See the clearml-task documentation on how to create a task from existing code and launch it
https://github.com/allegroai/clearml/blob/master/docs/clearml-task.md

That said, the idea is that you add the Task.init call when you are writing/coding the code itself, then later when you want to run it remotely you already have everything defined in the UI.
Make sense?
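A rough sketch of that flow using the SDK's execute_remotely() helper; the project/task names and the "default" queue are illustrative assumptions, not from the original thread:

from clearml import Task

task = Task.init(project_name="examples", task_name="remote run demo")  # illustrative names
# everything up to this point runs locally and registers the task in the UI;
# execute_remotely() stops the local run and enqueues the task for an agent
task.execute_remotely(queue_name="default")

# ...the actual training code below runs on the remote GPU machine...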

3 years ago
0 Hi! I Am Trying To Build And Run A Pipeline. I Pass My Dataset As Parameter Of Pipeline:

I pass my dataset as parameter of pipeline:

@<1523704757024198656:profile|MysteriousWalrus11> I think you were expecting the dataset_df dataframe to be automatically serialized and passed, is that correct?
If you are using add_step, all arguments are simple types (i.e. str, int, etc.)
If you want to pass complex types, your code should be able to upload it as an artifact, and then you can pass the artifact URL (or name) to the next step.

Another option is to use pipeline from dec...
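A sketch of the add_step flavour, passing a simple string (for example a dataset ID) between steps instead of the dataframe itself; the task/project names, parameter path, and queue below are placeholders:

from clearml import PipelineController

pipe = PipelineController(name="demo pipeline", project="examples", version="1.0")
pipe.add_parameter(name="dataset_id", default="<dataset-id>")  # placeholder id
pipe.add_step(
    name="process",
    base_task_project="examples",      # placeholder base task to clone
    base_task_name="process data",
    # only simple types cross step boundaries - here a string, not the dataframe itself
    parameter_override={"General/dataset_id": "${pipeline.dataset_id}"},
)
pipe.start(queue="default")  # placeholder queue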

one year ago
0 Hi, I Have A Future Roadmap Question On Clearml-Datasets. The Current Implementation Works Well For Small Datasets But It's Rather Ineffective For Very Large Datasets. For Example, Let's Say I Have 10 Million Images Just For The Training Dataset, And My T

SubstantialElk6 I just realized 3 weeks passed, wow!
So the good news is we have some new examples:
https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py
https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_functions.py
The bad news is the documentation was postponed a bit, as we are still massaging the interface (the community is constantly pushing for great ideas and use cases, and they are just too good to miss out 🙂 )...
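A condensed sketch along the lines of the decorator example linked above; the step logic and pipeline/project names are just for illustration:

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["data"])
def load_data():
    # each component runs as its own task when the pipeline is executed
    return [1, 2, 3]

@PipelineDecorator.component(return_values=["total"])
def reduce_data(data):
    return sum(data)

@PipelineDecorator.pipeline(name="demo pipeline", project="examples", version="0.1")
def run_pipeline():
    total = reduce_data(load_data())
    print(total)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # debug the whole DAG in the local process
    run_pipeline()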

3 years ago
0 What Is The Method For Package Exploration When Using Conda? Agent Is Set To 'Conda' Mode. We Upload A Task From A Local Conda Env That (Obviously) Has Some Pip Packages As Well. When We Enqueue The Task To Run Remotely, Not All Conda Packages Are Instal

CrookedWalrus33 from the log it seems the code is trying to use "kwcoco" but it is not listed under any "Installed packages", nor do you see any attempt to install it. Can you confirm?

3 years ago