AgitatedDove14
Moderator
49 Questions, 8126 Answers
  Active since 10 January 2023
  Last activity one year ago

0 Hello! Getting Credential Errors When Attempting To Pip Install Transformers From Git Repo, On A Gpu Queue.

Wait IrritableOwl63, this looks like it worked, am I right? huggingface was correctly installed

4 years ago
0 Hey, Our Elastic Search Just Randomly Crashed On Our Self-Hosted K8S Deployment. On Debugging, It Looks Like Indices Are Corrupt. Any Suggestions Of How We Might Solve This?

Hi @<1535069219354316800:profile|PerplexedRaccoon19>

On debugging, it looks like indices are corrupt.

ishhhhh, any chance you have a backup?

one year ago
0 Hey Everyone, I'M Having An Issue Due To Conflicting Git Credentials On The Clearml-Agent (Running Inside The Docker). I'M Using Ssh Settings (

If this is what the repo links look like, do not set anything in the clearml.conf
It "should" use SSH for the ssh links, and HTTP for the http links.

3 years ago
0 For Clearml Serving, If I Am Trying To Deploy 100 Models On A Gpu That Can Handle 5 Concurrently, But Each One Will Be Sporadically Used (Fine Tuned Models Trained For Different Customers), Can Clearml-Serving Automatically Load And Unload Models Based Up

It appears that "they sell that" as Triton Management Service, part of

. It is possible to do through their API, but would need to be explicit.

We support that, but this is not dynamic loading; it is just removing and adding models, and it does not unload them from the GPU RAM.
That's the main issue: when we unload the model, it is fully unloaded. To make it dynamic, they need to be able to keep it in RAM and unload it only from GPU RAM, and that's the feature that is missing on all Triton deployme...

2 years ago
0 Hi. Inside A Notebook When I Cerate A New Clearml Task And Then Run Sklearn Gridsearchcv , Clearml Uploads A Lot Of Model. Is There A Way To Force Clearml Not To Upload These Models? Related Question Is What Are These Models Anyway? Their Name Only Contai

We do upload the final model manually.

If this is the case, just name it based on the parameters, no? Am I missing something?
https://github.com/allegroai/clearml/blob/cf7361e134554f4effd939ca67e8ecb2345bebff/clearml/model.py#L1229

I was just wondering if i can make the autologging usable.

It kind of assumes these are different "checkpoints" on the same experiment, and then stores them based on the file name
You can however change the model names later:
Task.current_task().mo...
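
A minimal sketch (an assumption, not part of the original answer): if the goal is simply to stop the automatic model uploads from the scikit-learn/joblib integration, it can be switched off at Task.init while keeping the rest of the auto-logging.

from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="gridsearch-no-model-upload",
    # skip framework model snapshots while keeping the rest of the auto-logging
    auto_connect_frameworks={"scikit": False, "joblib": False},
)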

3 years ago
0 I Want To Run

So this can be translated to

CLEARML__SDK__AZURE__STORAGE__CONTAINERS__0__ACCOUNT_NAME=abcd
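
A minimal sketch, assuming the override is in place before the clearml configuration is loaded (exporting it in the shell before launching the process achieves the same thing):

import os

# same value the clearml.conf entry sdk.azure.storage.containers.0.account_name would hold
os.environ["CLEARML__SDK__AZURE__STORAGE__CONTAINERS__0__ACCOUNT_NAME"] = "abcd"

from clearml import Task  # imported after the override so the configuration picks it up

task = Task.init(project_name="examples", task_name="azure-env-override")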
3 months ago
0 Executed From Within A Pipelinecontroller Task, What Possible Reason Does

WackyRabbit7 just making sure I understand:
MedianPredictionCollector.process_results is called after the pipeline has completed.
Then inside the function, Task.current_task() returns None.
Is this correct?

3 years ago
0 Hi, V1 Of Agent Seems To Have Removed Agent.Package_Manager.Force_Repo_Requirements_Txt. Is This Still Available In Other Forms?

I'm wondering why this is the case, as docker best practices do indicate we should use a non-root user on production images.

The docker image for the service-agent is not root?

4 years ago
0 Hello, Does Anybody Here Have Much Experience In Creating Sub-Tasks Or Sub-Pipelines? I'M Not Sure The Concept Is Particularly Well Established But The Docs Mention:

using caching where specified but the pipeline page doesn't show anything at all.

What do you mean by "the pipeline page doesn't show anything at all."? Are you running the pipeline? How?
Notice PipelineDecorator.component needs to be top level, not nested inside the pipeline logic, like in the original example:

@PipelineDecorator.component(
    cache=True,
    name=f'append_string_{x}',
)
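
A minimal sketch of that top-level layout (names are illustrative, not from the thread):

from clearml.automation.controller import PipelineDecorator


@PipelineDecorator.component(cache=True, name="append_string")
def append_string(text: str) -> str:
    # defined at module level, so it runs as its own pipeline step
    return text + "_done"


@PipelineDecorator.pipeline(name="append pipeline", project="examples", version="0.0.1")
def run_pipeline(inputs):
    # calling the top-level component here is fine; defining it here (nested) is what breaks
    return [append_string(x) for x in inputs]


if __name__ == "__main__":
    PipelineDecorator.run_locally()  # execute the steps in-process for debugging
    run_pipeline(["a", "b"])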
2 years ago
0 I'M Using

Hi WittyOwl57

I'm guessing clearml is trying to unify the histograms for each iteration, but the result is in this case not useful.

I think you are correct, the TB histograms are actually 3D histograms (i.e. 2D histograms over time, which would be the default for kernel/bias etc.)

is there a way to ungroup the result by iteration, and, is it possible to group it by something else (e.g. the tags of the two plots displayed below side by side).

Can you provide a toy example...

4 years ago
0 Hi All! When I Set A List As A Task Parameter And Later Try To Retrieve It, What I Get Is A String. Is This The Expected Behavior? I Have Prepared The Following Snippet So That You Can Reproduce It.

Task.debug_simulate_remote_task simulates the Task being executed by the agent (basically the same behaviour, only local). The argument it gets is the Task ID (string).
The way to see how it works is to run the code once (no debug_simulate call), get the Task ID we created, then rerun with debug_simulate_remote_task passing the previous Task ID.
Make sense?
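
A minimal sketch of that two-step flow (the Task ID is a placeholder):

from clearml import Task

# Second run: uncomment and paste the Task ID printed by the first run, so the local
# process behaves as if an agent were executing that Task.
# Task.debug_simulate_remote_task(task_id="<task-id-from-first-run>")

task = Task.init(project_name="examples", task_name="remote-simulation-demo")
print("Task ID:", task.id)  # first run: note this ID, then rerun with the call above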

3 years ago
0 Hi, Which Database Services Are Used To Store The Logged Data Such As Scalar, Text, Matrix, Etc? How Can I Query These For A Downstream Process Programmatically Instead Of Just Within The Web Ui? If Scalar Data Is Stored In Mongodb, Can I Use Pymongo To R

Hi SarcasticSparrow10

which database services are used to...

Mongo & Elastic
You can query everything using ClearML interface, or talk directly with the databases.
Full RestAPI is here:
https://clear.ml/docs/latest/docs/references/api/endpoints
You can use the APIClient for easier pythonic interface:
See example here
https://github.com/allegroai/clearml/blob/master/examples/services/cleanup/cleanup_service.py

What is the exact use case you have in mind?
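
A minimal sketch of both routes (project/task IDs are placeholders):

from clearml import Task
from clearml.backend_api.session.client import APIClient

# RestAPI via the pythonic APIClient
client = APIClient()
tasks = client.tasks.get_all(project=["<project-id>"], order_by=["-last_update"], page_size=10)
for t in tasks:
    print(t.id, t.name, t.status)

# SDK route: pull the reported scalars of a task without touching Mongo/Elastic directly
task = Task.get_task(task_id="<task-id>")
scalars = task.get_reported_scalars()  # {title: {series: {"x": [...], "y": [...]}}}
print(list(scalars.keys()))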

4 years ago
0 I'M Trying To Understand How Clearml Serving Works And Trying To Set It Up. I Have An Agent Listening To The Serving Queue And I'M Trying To Set Up Clearml Serving To Launch On The Serving Queue. Do I Need To Have Clearml-Serving Installed On The Machine

can you tell me what the serving example is in terms of the explanation above and what the triton serving engine is,

Great idea!

This line actually creates the control Task (2)
clearml-serving triton --project "serving" --name "serving example"
This line configures the control Task (the idea is that you can do that even when the control Task is already running, but in this case it is still in draft mode).
Notice the actual model serving configuration is already stored on the crea...

3 years ago
0 Hi , I Am New To Clearml , I Am Trying To Create A Pipeline That Have Multiple Steps That Run Inside A Gitlab Repo , So I Pass It Add_Function_Step With A Token Like : Repo=F"

Hi @<1798887590519115776:profile|BlushingElephant78>

have multiple steps that run inside a gitlab repo

one thing to make sure is not missed: "inside a gitlab repo". Notice the actual pipeline is running on agents/workers/k8s; GitLab will be the trigger and monitor it, but the actual execution should probably happen somewhere with proper compute

this works fine when i run the pipeline from the script directly but when i run it from the web interface

try to configure your...

9 months ago
0 Hi Guys, Is A Task Updating Its Status To 'Complete' Before Finishing To Upload Its Artifacts/Metrics In The Background?

The Task status change to "completed" is set after all artifact uploads are completed.
JitteryCoyote63 that seems like the correct behavior for your scenario
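
A minimal sketch (an assumption, not from the thread) of how to block until uploads finish if you need a hard guarantee before the process exits:

import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact-upload-wait")
df = pd.DataFrame({"a": [1, 2, 3]})

# block until this specific artifact finishes uploading
task.upload_artifact(name="table", artifact_object=df, wait_on_upload=True)

# or flush everything still in flight
task.flush(wait_for_uploads=True)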

5 years ago
0 Help Please, After Creating My Data Drift Monitoring Dashboard Using Clearml Serving And Grafana, How Can I Configure My Alerts To Be Notified When The Distribution Of My Metrics (Variables) Changes On My Heatmaps?

Try to break it into parts and understand what produces the error. For example:
increase(test12_model_custom:Glucose_bucket[1m])
increase(test12_model_custom:Glucose_sum[1m])
increase(test12_model_custom:Glucose_bucket[1m])/increase(test12_model_custom:Glucose_sum[1m])
and so on

one year ago
0 Hi Community! I Have A Question Regarding Using Docker Containers With Conda. We Have Created A Docker Image Where All The Required Python Modules Are Installed Using Conda. The Conda Environment Is Activated Automatically In The Entrypoint Of The Docker

Hi @<1601023807399661568:profile|PompousSpider11>
Yes "activating" a conda/python environment in a docker is more complicated then it should be ...
To debug, what are you getting when you do:

docker run -it <docker name here> bash -c "set"
2 years ago
0 Hello All. I'M Experimenting With Clearml And I'Ve Run Into A Strange Issue. I Used

that using a “local” package is not supported

I see, I think the issue is actually pulling the git repo of the second local package (assuming you add the requirement manually, with Task.add_requirements), is that correct?
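
A minimal sketch of that manual route (the package name is a placeholder; the exact pip spec for a git-hosted package depends on how it is published):

from clearml import Task

# must be called before Task.init so it ends up in the recorded requirements
Task.add_requirements("my_second_package")

task = Task.init(project_name="examples", task_name="local-package-requirement")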

2 years ago
0 In Pipelines I'Ve Found That Empty Lists Don'T Work As I Would Expect Them To Work. For Example, This Will Work Fine:

Hi SmugSnake6
I think it was just fixed, let me check if the latest RC includes the fix

3 years ago
0 Hi All. I Am Wondering How People Tend To Use Clearml With Cross-Validation. Do You Tend To Create Separate Experiments For Each Fold? And If So, Would You Then Create Another Experiment For The Aggregated Results?

Hi RattyBat71

Do you tend to create separate experiments for each fold?

If you really want to parallelize the workload, then splitting it into multiple executions (i.e. passing the index of the CV fold as an argument) makes sense; then you can compare/sort the results based on a specific metric. That said, if speed is not important, just having a single script with multiple CV folds might be easier to implement?!
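
A minimal sketch of the one-execution-per-fold option (all names are illustrative):

import argparse
from clearml import Task

parser = argparse.ArgumentParser()
parser.add_argument("--fold", type=int, default=0, help="index of the CV fold to train on")
args = parser.parse_args()

task = Task.init(project_name="examples/cv", task_name=f"cv-fold-{args.fold}")

# ... train and evaluate on the selected fold here ...
val_accuracy = 0.9  # placeholder result

# report a scalar so the folds can be compared/sorted in the UI
task.get_logger().report_scalar(title="validation", series="accuracy", value=val_accuracy, iteration=0)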

2 years ago
0 Hi, I Am Using Clearml By Building It As My Own Server. After The Message Below Was Displayed, The Operation Stopped Without Progress. In Clearml Server, It Is In “Running” State. “Clearml.Task - Info - No Repository Found, Storing Script Code Instead”

Hi @<1625303806923247616:profile|ItchyCow80>
Could you add some prints? Is it working without the Task.init call? The code looks okay, and the "No repository found" message basically says it is logged as a standalone script (which makes sense)

2 years ago
0 Is There Any Way To Clear The Installed Packages Of A Task Programmatically? (I.E. Using The Python Sdk And Not The Ui)

GiddyTurkey39

A flag would be really cool, just in case there's any problem with the package analysis.

Trying to think if this should be a system-wide flag (i.e. trains.conf) or a flag in Task.init.
What do you think?

4 years ago
0 Hi Guys, I Am Trying To Upload And Serve A Pre-Existing 3-Rdparty Pytorch Model Inside My Clearml Cluster. However, After Proceeding With The Suggested Sequence Of Operations By Official Docs And Later Even Gpt O3, I Am Having Errors Which I Cannot Solve.

Hmm I just noticed:

'--rm', '', 'bash'

This is odd, there is an extra argument passed as "empty text". How did that end up there? Could it be you did not provide any docker image or default docker container?
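
A minimal sketch (an assumption, not from the thread) of pinning an explicit docker image on the task so the agent does not fall back to an empty container argument:

from clearml import Task

task = Task.init(project_name="examples", task_name="serving-model-upload")
# image name is illustrative; agent.default_docker in clearml.conf is the other place to set this
task.set_base_docker("nvcr.io/nvidia/pytorch:24.05-py3")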

9 months ago
0 Hi, In My Setup I Run Multiple Experiments In Parallel From The Same Script. I Understand That There Can Only Be One Execution

This is an example of how one can clone an experiment and change it from code:
https://github.com/allegroai/trains/blob/master/examples/automation/task_piping_example.py

A full HPO optimization process (basically the same idea only with optimization algorithms deciding on the next set of parameters) is also available:
https://github.com/allegroai/trains/blob/master/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py
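
A minimal sketch of the clone-and-modify flow from the linked example (project, names, queue, and parameter key are placeholders):

from clearml import Task

template = Task.get_task(project_name="examples", task_name="base-experiment")

cloned = Task.clone(source_task=template, name="base-experiment (lr sweep)")
cloned.set_parameter("Args/learning_rate", 0.001)

Task.enqueue(cloned, queue_name="default")
print("enqueued:", cloned.id)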

5 years ago
0 Hi! Regarding The

Hi GrievingTurkey78
the artifacts are downloaded to the cache folder (and by default the last 100 accessed artifacts are maintained there).

node executes the task all the info will be erased or does this have to be done explicitly?

Are you referring to the trains-agent running a docker?
By default the cache is persistent between executions (i.e. saving time on multiple downloads between experiments)
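
A minimal sketch (task and artifact names are placeholders): pulling an artifact goes through that cache, so repeated runs reuse the already-downloaded copy.

from clearml import Task

producer = Task.get_task(task_id="<producer-task-id>")
local_path = producer.artifacts["training_data"].get_local_copy()
print(local_path)  # resolved under the cache folder; persists across executions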

5 years ago
0 I Get These Warnings Whenever I Run Pipelines And I Have No Idea What It Means Or Where It Comes From:

Hmm yeah I think that makes sense. Can you post the arguments here?
I'm assuming you have something like '1.23a' in the arguments?

one year ago