AgitatedDove14
Moderator
49 Questions, 8124 Answers
Active since 10 January 2023
Last activity one year ago
Reputation: 0
Badges: 25 × Eureka!
0 What Sort Of Integration Is Possible With Clearml And Sagemaker? On The Page

Hi @LittleReindeer37
Yes, you are correct, it should capture the entire Jupyter notebook in SageMaker Studio.
Just verifying this is the use case, correct?

2 years ago
0 Hi Guys, I Am Trying To Upload And Serve A Pre-Existing 3Rd-Party Pytorch Model Inside My Clearml Cluster. However, After Proceeding With The Suggested Sequence Of Operations By Official Docs And Later Even Gpt O3, I Am Having Errors Which I Cannot Solve.

My model files are also there, just placed in some usual non-shared linux directory.

So this is the issue: how would the container get to these models? You either need to mount the folder into the container,
or push them to the ClearML model repo with the OutputModel class. Does that make sense?
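A minimal sketch of the second option, registering a pre-existing local weights file in the ClearML model repository with OutputModel (the project name, model name, file path and upload target below are hypothetical):

from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="register pretrained model")

# Wrap the task with an OutputModel and point it at the local weights file;
# upload_uri controls where the weights are copied (files server, S3, GCS, ...).
output_model = OutputModel(task=task, name="my_pretrained_model", framework="PyTorch")
output_model.update_weights(
    weights_filename="/models/third_party/model.pt",  # hypothetical local path
    upload_uri="s3://my-bucket/models",                # hypothetical storage target
)

Once the weights are uploaded this way, the serving container can pull them from the model repository instead of needing the folder mounted.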

6 months ago
0 When Trying To Run The Server From The Docker Image ( `docker-compose -f /opt/clearml/docker-compose.yml up -d` As Instructed In

@ShaggyElk85 nice!
I think that in theory you can run the DBs' arm64 images, no?

2 years ago
0 Latex In Plot Labels?

TrickyRaccoon92 I didn't know that 🙂
Where did you try to add it? Did you report a plotly figure, or is it with report_???
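In case it helps, here is a rough sketch of reporting a plotly figure whose labels contain LaTeX (plotly uses $...$ delimiters); whether the ClearML plot widget actually renders the LaTeX depends on MathJax support, so treat that part as an assumption to verify:

import plotly.graph_objects as go
from clearml import Task

task = Task.init(project_name="examples", task_name="latex plot labels")

fig = go.Figure(data=go.Scatter(x=[0, 1, 2], y=[0.0, 0.5, 0.9]))
fig.update_layout(
    title=r"$\mathcal{L}(\theta)$",
    xaxis_title=r"$\epsilon$",
    yaxis_title=r"$\mathcal{L}$",
)

# report_plotly attaches the figure to the task's Plots section
task.get_logger().report_plotly(title="loss", series="train", iteration=0, figure=fig)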

4 years ago
0 I Have A Questions About Queue Priorities With Clearml-Agent. I Have Two Queues,

To summarize: the scheduler should assign tasks to the agent first, which gives a queue the highest priority.

The issue here is that you assume both are idle and you need global priority based on resource preference. I understand your scenario now, but it will only hold if the enqueuing order is "optimal". For example, if machine Y is running a Task B that is about to be completed (e.g. in a minute), then machine X will still pick the new Task B, and again we end up in the scenario where Task A i...

4 years ago
0 Hello Clearml Community, Does Anyone Have An Idea How I Could Integrate/Manage Carla (

Hi ReassuredTiger98
I think you should have something like:

@PipelineDecorator.component(task_type=TaskTypes.application, docker='clara_docker_container_if_we_need')
def step_one(param):
    print('step_one')
    import os
    os.system('run me clara')
    # I'm assuming we should wait?
    return

@PipelineDecorator.component(task_type=TaskTypes.training)
def step_two(param):
    print('step_two')
    import something
    something.to_do()
    return

@PipelineDecorator.pipeline(name='c...

3 years ago
0 When Use Gcp Bucket As Files_Server + Yolov5 Train For Now Its Upload The Model In The End To

so other process can use it

This is why there is a model repository: so you can query the last model created, or query by name or tag, or query the Task that created it and then, via the Task, get the model and its location.
This is a stable way to make sure your application code (the one using the model) will get to use stable models regardless of the training processes.
I would add a Tag to the model and then search based on the project and the tag, wdyt?
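A possible sketch of that "tag the model, then query it" approach; the project and tag names are hypothetical, and the newest-first ordering of the results is an assumption to verify:

from clearml import Model

models = Model.query_models(
    project_name="my_project",   # hypothetical project
    tags=["production"],         # hypothetical tag added to the trained model
)
if models:
    latest = models[0]                       # assuming results come back newest-first
    local_weights = latest.get_local_copy()  # downloads the weights file locally
    print(latest.name, local_weights)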

2 years ago
0 Hello! Since Today I Get

Sorry, I meant the env file for conda, the one you are using to install

4 years ago
0 I Have A Questions About Queue Priorities With Clearml-Agent. I Have Two Queues,

a task of queue B if the next task is of type A it will have to wait,

It seems you imply there are two types of Tasks and they need to be executed one after the other?

4 years ago
0 Is There Any Way To, Like, Load-Balance Automatically? Like, On The User End Can I Just Specify An Amount Of Gb I Think I Will Need, And It Goes And Picks A Queue For Me Based On That? Like, Let'S Say I Want "A 15Gb Gpu Or Better" And There'S 4 Queues, Tw

Like, let's say I want "a 15GB GPU or better" and there's 4 queues, two of which fit the description... is there any way to set it so that ClearML will just queue it up on whichever one's available?

How do you know that? Also, if you know that, what do you know about the queues?
Generally speaking this type of granularity is k8s, but it has lots of caveats, specifically that you need to know what you need in terms of resources, that you can specify resources that do not exist, and that...

4 years ago
0 Hi, A Question About Dataset Storage Suppose I Create A Dataset Like This

Why would that require refactoring? The Dataset class should take care of it internally, no?
The reason my_name is a subproject is so that every version can be a "Task" inside that project, which is just easier to manage (or at least that was the idea)

2 years ago
0 What Happens To File That Are Downloaded To A Remote_Execution Via Storagemanager? Are They Removed At The End Of The Run, Or Does It Continuously Increases Disk Space?

Hmm, so what I'm thinking is "extending" the capabilities of the "configuration" section (as it seems this is the right context): allowing you to upload a bunch of files (with the same mechanism as artifacts) as zip files, and in the configuration "editable" section have the URL storing the zip, together with the target folder. wdyt?

3 years ago
0 Hi Everyone! Does Clearml Logs Everything That Tensorboard Generates? Tensorboard Creates A Graph Of The Neural Network And Would Be Nice To Have It On The Experiment Logs Aswell

Maybe I can plot it using other lib.

I remember a while back there was integration with network visualization, but it was hard to support and failed too many times...
If you have a library that converts the network into HTML or an image, you can report it as a debug sample?
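A rough sketch of that suggestion: if some library can dump the network graph as an image or HTML file, it can be attached to the task (the file names below are hypothetical):

from clearml import Task

task = Task.init(project_name="examples", task_name="report network graph")
logger = task.get_logger()

# An image file shows up under Debug Samples
logger.report_image(title="network", series="graph", iteration=0, local_path="network.png")

# An HTML export (e.g. an interactive graph) can be reported as media
logger.report_media(title="network", series="graph_html", iteration=0, local_path="network.html")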

2 years ago
0 Hi, There! After Running A Task In Clearml, How Can I Use The Api To Get The Best Performing Output Model For That Task? I Came Across This

Hi @FlutteringFrog26
So since you have the Task id, you do:

task = Task.get_task("task id here")

Then to get the models:

models = task.models["output"]

the models is both a list and a dict; if you want the last one you do last_model = models[-1], and if you know the best model name you do model = models["best model"] (notice the model name is the exact one you see in the UI). Once you have the model object you can get a copy with `model.get_lo...
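A consolidated sketch of that flow; the task id and model name are placeholders, and get_local_copy is assumed to be the call truncated above:

from clearml import Task

task = Task.get_task("aabbccdd11223344")  # placeholder task id
models = task.models["output"]            # output models of the task

last_model = models[-1]              # the most recently registered output model
best_model = models["best model"]    # by the exact model name shown in the UI

weights_path = best_model.get_local_copy()  # downloads the weights file locally
print(best_model.name, weights_path)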

2 years ago
0 Hi, How Can I Remove A Tag From A Task Via Code In A Non-Barbaric Way?

In theory task.tags.remove(tag) might also work, but I'm not sure if it will automatically be updated on the backend
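One way that should make the change stick on the backend (hedged: verify that get_tags/set_tags behave this way in your SDK version; the tag name and task id below are placeholders):

from clearml import Task

task = Task.get_task("aabbccdd11223344")  # placeholder task id

tags = task.get_tags()
if "my_tag" in tags:
    tags.remove("my_tag")
    task.set_tags(tags)  # writes the updated tag list back to the server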

4 years ago
0 Hi All! I Was Checking The Configuration Logged Under "Hyperparameters" In The Web Ui And The Values Of Some Parameters Are Not Displayed. At First I Thought The Problem Was Coming From My Code, But Later I Realized The Values Disappear When You Scroll Do

GiganticTurtle0 I know that the UI optimizes the display so it does not push all the parameters, but does so based on the scroll. Are you saying there is a bug in the logic? If so, how do I reproduce it?

3 years ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn'T It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'M Using Clearml V1.1.3Rc0 And Clearml-Agent 1.1.0

GiganticTurtle0 My apologies, I made a mistake, this will not work 😞
In the example above "step_two" is executed "instantaneously", meaning it is just launching the remote task, it is not actually waiting for it.
This means an exception will not be raised in the "correct" context (actually it will be raised in a background thread).
That means that I think we have to have a callback function, otherwise there is no actual way to catch the failed pipeline task.
Maybe the only re...

3 years ago
0 On A Similar Note, The Million Autogenerated Experiments When Doing Tuning Swamp Out Everything Else In The Experiments And Models Tabs. Is There A Current Solution To Hide Autogenerated Runs, Give Them Specific Tags, Etc, Or Is This Not Yet Possible? Sor

LudicrousParrot69 we are working on adding nested projects, which should help with the humongous mass of experiments the HPO can create. This is a more generic solution to the nesting issue (since nesting inside a table is probably not the best UX solution 🙂 )

4 years ago
0 I Am Running Trains=0.16.4 Python==3.7.5 , And Notice That The "Log" Page Sometimes Didn'T Capture The Console Log From My Program. Is This A Known Issue, Anyone Have Experienced Similar Behavior?

Will the new fix avoid this issue, and does it still require the incremental flag?

It will avoid the issue, meaning even when incremental is not specified, it will work.
That said, the issue is that any other logger will be cleared as well, so, just good practice ...

From the logging documentation ...

Hmmm, so I guess Kedro should not use dictConfig?! I'm not sure about the exact use case, but just clearing all loggers seems like a harsh approach

4 years ago
0 I Have Set

there is almost zero overhead if your docker container already has everything (including the agent) preinstalled and you set CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1;
it then should basically just run the code.

one year ago
0 Is There A Reason Why All Clearml.Task Methods Regarding Requirements (E.G. Pip Requirements) Are Class Methods? Are Requirements Not Stored In A Task?

ClearML seems to store stuff that's relevant to script execution outside of clearml.Task

Outside of the clearml.Task?

4 years ago
0 Clearml-Session Question: I’M Using The Tool With An On-Prem Machine. Normal Tasks Are Being Executed Normally - But When Using

Hmm, any suggestion on making it more visible, or on the interface? (I mean deleting the cache file is always a solution, but it sounded quite painful to debug, hence the question)

2 years ago
0 Hey, I Was Wondering How Can I Do Hparams Tuning With Trains? Couldn'T Find Anything On The Documentation

Go to the Workers & Queues page, right side panel, 3rd icon from the top

4 years ago
0 Hi, Plotting A Debug Sample With A

I'll make sure we look into it

4 years ago