AgitatedDove14
Moderator
49 Questions, 8060 Answers
  Active since 10 January 2023
  Last activity 9 months ago

Reputation: 0
Badges: 25 × Eureka!

0 Hi, Expanding On

DeliciousBluewhale87 You can have multiple queues for the k8s glue, in priority order:
python k8s_glue_example.py --queue glue_q_high glue_q_low
Then if someone is doing 100 experiments (say HPO), they push into "glue_q_low", which means the glue will first pop Tasks from the high-priority queue and, if it is empty, pop from the low-priority queue.
Does that make sense ?
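For completeness, a minimal sketch of pushing a cloned experiment into the low-priority queue from code; the project and task names here are purely illustrative:

from clearml import Task

# clone an existing experiment and send the clone to the low-priority queue
template = Task.get_task(project_name="examples", task_name="hpo_base")  # hypothetical template task
trial = Task.clone(source_task=template, name="hpo trial")
Task.enqueue(task=trial, queue_name="glue_q_low")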

3 years ago
0 Ok, I Faced Quite Funny Issue. Sorry For Spamming In This Chat, But I Am Just Ramping Up With Clearml And Its A Bit Turbulent.. Issue (As I Understand It) Is Following: My Package That I Use For Model Trainings Has The Same Name As Some Package In Pip (I

Worker just installs by name from pip, and it installs the wrong package, not mine!

Oh dear ...
Did you configure additional pip repositories in the Agent's clearml.conf? https://github.com/allegroai/clearml-agent/blob/178af0dee84e22becb9eec8f81f343b9f2022630/docs/clearml.conf#L77
It might be that (1) is not enough, as pip will first search for the package in the public pip repository and only then in the private one. To avoid that, in your code you can point directly to an https URL of your package ...

2 years ago
0 Hi! I'M Currently Saving A Dataframe With Predictions Inside The Task. To Do So, I Save A Dataframe As Pickle File In

MuddySquid7
are you saying that for some reason the Models section picks up the artifacts? Is that reproducible? (they are two different things)
Can you see the df.pkl in the Models section of the Task (in the UI)?
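For reference, a minimal sketch of what registering the dataframe as an artifact, rather than a model, looks like; the names here are only illustrative:

from clearml import Task
import pandas as pd

task = Task.init(project_name="examples", task_name="predictions")
df = pd.DataFrame({"prediction": [0.1, 0.9]})  # hypothetical predictions dataframe
# this ends up under the task's Artifacts tab, not under Models
task.upload_artifact(name="predictions_df", artifact_object=df)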

3 years ago
0 What Could Be The Reason For My Package To Not Be Loading Under The "Installed Packages"? I Have A

What exactly do you get automatically on the "Installed Packages" (meaning the "my_package" line)?

3 years ago
0 Hi! I Have Local Minio Setup, Via Minio Browser I Can Upload 50-100 Mb Per Second As Its Local. But When I Try To Use Task.Upload_Artifact It Uploads 500 Kb Per Second. Does Anyone Have An Idea About This?

Do StorageManager.upload and upload_artifact use the same methods?

Yes they both use StorageManager.upload

Is the only difference the task being async?

Two differences:
the upload being async, and registering the artifact on the experiment. StorageManager will only upload, whereas upload_artifact will make sure the file is registered as an artifact on the experiment, together with all of the artifact's properties.
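A minimal sketch of the difference, assuming a local file and an S3-style destination URL (both paths are only illustrative):

from clearml import Task, StorageManager

task = Task.init(project_name="examples", task_name="artifact upload")

# StorageManager only copies the file to the destination; nothing is registered on the Task
StorageManager.upload_file(local_file="predictions.pkl",
                           remote_url="s3://my-bucket/predictions.pkl")

# upload_artifact uploads (asynchronously by default) and also registers the file
# as an artifact on the experiment, together with its properties
task.upload_artifact(name="predictions", artifact_object="predictions.pkl")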

4 years ago
0 Hi, Can You Help Me Pls, I Got: Environment Setup Completed Successfully Starting Task Execution: Traceback (Most Recent Call Last): File "Agro_Api.Py", Line 13, In From Help_Models.Consts Import Urls Importerror: No Module Named 'Help_Models'

PlainSquid19 No worries 🙂
btw: if you could check whether the mangling of the working dir / script path happens with the latest trains, that would be appreciated, because if you were running the script from "stages/" in the first place, then trains should have caught it ...

4 years ago
0 Hey,

WickedElephant66 it should work, how exactly are you calling StorageManager?

2 years ago
0 Hello Clearml Community, Does Anyone Have An Idea How I Could Integrate/Manager Carla (

I see, something like:
from clearml import Task
from mystandalone import my_func_that_also_calls_task_init

def task_factory():
    task = Task.create(project_name="my_project", task_name="my_experiment",
                       script="main_script.py", add_task_init_call=False)
    return task
if the pipeline and the my_func_that_also_calls_task_init are in the same repo, this should actually work.
You can quickly test this pipeline with
pipe = PipelineController()
pipe.add_step(preprocess, ...)
pipe.add_step(base_task_facto...

3 years ago
0 I Found The Following Config Parameter (Related To Clearml-Data I Guess?):

This is done in the background while accessing the cache, so it should not have any slowdown effect

3 years ago
0 Base_Template_Keras_Simply.Py

DeliciousBluewhale87 could you send the full log of the Task?

3 years ago
0 Hi, I'M Trying To Set Storage Manager To Use Our Internal Miniio Installation But I Ran Into This Issue With This Testing Code:

at that point we define a queue and the agents will take care of training 

This is my preferred way as well :)
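A minimal sketch of that flow, assuming a queue named "default" with an agent listening on it:

from clearml import Task

task = Task.init(project_name="examples", task_name="train")
# stop local execution here and re-launch this script on whichever agent serves the queue
task.execute_remotely(queue_name="default", exit_process=True)
# ...everything below this line runs on the agent...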

4 years ago
0 Hi, I Am Trying To Execeute My Code On Nvidia/Cuda Docker, But It Keeps Running, It Is Not Failed Or Not Aborted. The Last Log Message Is

I suspect it's the localhost - and the trains-agent is trying too hard to access the port, but for some reason does not report an error ...

4 years ago
0 Hey, Could You Help Me? I’Ve Tried Update Clearml-Server In K8S Old And New Clearml In The Different Namespaces, But After Migrate I Got The Error Error 101 : Inconsistent Data Encountered In Document: Document=Output, Field=Model How It Fix?

Error 101 : Inconsistent data encountered in document: document=Output, field=model

Okay, this points to a migration issue from 0.17 to 1.0.
First try to upgrade to 1.0, then to 1.0.2.
(I would also upgrade a single apiserver instance first; once it is done, you can spin up the rest.)
Make sense?

3 years ago
0 Hello. Am New To Clearml. I Wish To Know If There Are Clearml Support For Nvidia Tao (Formerly Known As Transfer Learning Toolkit) ? Thank You

My current experience is that there is only a printout in the console but no training graph

Yes, Nvidia TLT needs to actually use TensorBoard for ClearML to catch the metrics and display them.
I think they added that in the latest version. TimelyPenguin76 might know more
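A minimal sketch of the pattern, assuming PyTorch's SummaryWriter; any scalars written to TensorBoard are picked up automatically once Task.init is called:

from clearml import Task
from torch.utils.tensorboard import SummaryWriter

task = Task.init(project_name="examples", task_name="tlt_training")  # hypothetical names
writer = SummaryWriter(log_dir="./runs")
for step in range(100):
    # scalars logged to TensorBoard show up as training graphs in the ClearML UI
    writer.add_scalar("train/loss", 1.0 / (step + 1), step)
writer.close()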

2 years ago
0 Hey All, I Want To Purchase The Pro Version Of Clearml But Would Like To Have A Better Understanding Of The Metric Events And Api Calls That Are Performed When Using Clearml-Serving. For Example: I Have No Understanding Which Docker Container Calls The Ap

I reached over 1M API calls in about one week using clearml-serving

Oh that makes sense now 🙂
If I remember correctly, adding an additional model to a single clearml-serving instance should not actually change the number of API calls; they are mostly affected by the number of clearml-serving instances / containers, not by the number of models.

one year ago
0 When We Train The Models, We Often Choose Checkpoint Based On The Validation Accuracy, But Test Set Accuracy (Or Specific Class Validation Accuracy) Is Not Necessarily The Best For This Checkpoint. Right Now There Are Options To Add Columns With Max And L

Hi DilapidatedDucks58

eg, we want max validation accuracy and all other metric values for the corresponding epoch

Is this the equivalent of a nested sort?
Wouldn't you get the requested behavior if you added all metric columns but sorted based on the "accuracy" column?

3 years ago
0 Has Anyone Got Any Experience With C++ Extensions In Python When Using Clearml? In Our Setup.Py We Have:

The point is, "leap" is properly installed, this is the main issue. And although installed, it is missing the ".so"? What am I missing? What are you doing manually that does not show in the log?
In other words, how did you install it "manually" inside the docker when you mentioned it worked for you when running without the agent?

2 years ago
0 Hi, I’M Trying Out Clearml Pipelines From Decorators, And I’M Encountering A Few Problems I Don’T Know How To Solve.

I’d definitely prefer the ability to set a docker image/docker args/requirements config for the pipeline controller too

That makes sense; any chance you can open a GitHub issue with the feature request so that we do not forget?
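For the per-component side, a minimal sketch assuming the docker / packages arguments of PipelineDecorator.component (the controller-level docker config is the feature request above, so this only covers components; the image and package names are illustrative):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    return_values=["data"],
    cache=True,
    docker="python:3.9",                       # hypothetical image
    packages=["pandas>=1.3", "scikit-learn"],  # hypothetical requirements
)
def preprocess(raw_path: str):
    import pandas as pd
    return pd.read_csv(raw_path)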

The current implementation will upload the result of the first component, and then the first thing the next component will do is download it.

If they are on the same machine, it should be cached when accessed the 2nd time

Wouldn’t it be more performant f...

2 years ago