CostlyOstrich36
Moderator
0 Questions, 4213 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0
Hi Folks, I Have A Question On The

Hi ObedientToad56 🙂

My question is on how the deployment would be once we have verified the endpoints are working in a local container.

Isn't the deployment just running the inference container? You just open up the endpoints toward wherever you want to serve, no?

3 years ago
Hello Community, I Am Trying To Run A Pipeline Using Pipeline Decorator. My Components I.E. Step_One, Step_Two.... I Want To Run These Components On A Queue Default And The Pipeline On Another Queue Services. The Issue Is The Step_One Seems To Abort Automa

@<1533619725983027200:profile|BattyHedgehong22> , it appears from the log that it is failing to clone the repository. You need to provide credentials in clearml.conf
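For reference, a minimal sketch of the relevant section in clearml.conf on the agent machine (the values below are placeholders, not from this thread):

```
agent {
    # Git credentials used by the agent to clone the repository.
    # Prefer a personal access token over a raw password where possible.
    git_user: "your-git-username"
    git_pass: "your-git-password-or-token"
}
```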

2 years ago
Hello Everybody, I'm Seeking For A Clarification On The 'Metrics' Quota For The SaaS Platform. My Workspace Only Uses Self S3 Storage As The File Server To Store Artifacts, Datasets And Models. For Some Reason, My 'Metrics' Quota Is Blown Up By > 30GB Of

the experiments themselves 🙂

Imagine you have very large diffs or very large (several MB) configuration files logged into the system - these sit in a database somewhere in the backend

2 years ago
Hello, I Am Using Clearml In Docker Mode. I Have A Simple Script That Runs Locally, Runs On The Target Machine Running The Same Tensorflow Container, But Doesn't Run When I Deploy It Using Clearml. Here's The Log Of The Error:

Can you compare the installed packages between the original experiment to the cloned one? Do you see anything special or different between the two?

2 years ago
Proper Way To Upload Artifacts

Hi GentleSwallow91 ,

  1. When using Jupyter notebooks it's best to call task.close() - it will have the same effect you're interested in
  2. If you would like to upload to the server, you need to add the output_uri parameter to your Task.init() call. You can read more here - https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
    You can either set it to True or provide a path to a bucket. The simplest usage would be Task.init(..., output_uri...
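A minimal sketch of that usage (project, task, and bucket names below are hypothetical, not from the thread):

```python
# Sketch: choosing a value for Task.init(output_uri=...).
# True -> upload to the ClearML file server; a string -> your own bucket.

def pick_output_uri(use_file_server: bool, bucket: str = ""):
    """Return a value suitable for Task.init(output_uri=...)."""
    return True if use_file_server else bucket

# Usage (requires a configured ClearML environment; names are placeholders):
#   from clearml import Task
#   task = Task.init(
#       project_name="examples",
#       task_name="artifact-upload",
#       output_uri=pick_output_uri(False, "s3://my-bucket/models"),
#   )
#   task.upload_artifact("stats", artifact_object={"acc": 0.9})
#   task.close()  # explicit close is especially useful in Jupyter notebooks
```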
3 years ago
Hi Everyone! I Created A Pipeline From One Block, Passed The Initial Parameters. Please Tell Me, Is It Possible To Make A New Launch Of The Pipeline, But With Different Parameters, Just Like In The Draft Mode Of Usual Experiments. Globally, I Want To Init

Hi @<1524560082761682944:profile|MammothParrot39> , I think you need to run the pipeline at least once (at least the first step should start) for it to "catch" the configs. I suggest you run once with pipe.start_locally(run_pipeline_steps_locally=True)
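As a sketch (step, queue, and parameter names below are illustrative assumptions), a first local run followed by relaunches with different parameters might look like:

```python
# Sketch: run a decorator-based pipeline locally once so the server
# registers its parameters; later launches can override them.

def override_params(defaults: dict, overrides: dict) -> dict:
    """Merge new launch parameters over the pipeline's defaults."""
    return {**defaults, **overrides}

# Usage (requires clearml; names are placeholders):
#   from clearml import PipelineDecorator
#
#   @PipelineDecorator.component(execution_queue="default")
#   def step_one(x: int) -> int:
#       return x + 1
#
#   @PipelineDecorator.pipeline(name="demo-pipeline", project="examples")
#   def pipeline_logic(seed: int = 1):
#       return step_one(seed)
#
#   PipelineDecorator.run_locally()  # first run: everything in this process
#   pipeline_logic(**override_params({"seed": 1}, {"seed": 7}))
```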

2 years ago
Hi, Part Of The Ml Pipeline I'm Working On Temporarily Stores Intermediate Features Using

OK, there appears to be a github issue relating to this:
https://github.com/allegroai/clearml/issues/388
I was right that this has come up before. People have asked for this, and it appears to be a priority to add as a feature.

You can circumvent auto-logging with the following:
task = Task.init(..., auto_connect_frameworks={'pytorch': False})
However, you will need to log other models manually now. More information is in the github issue 🙂
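A sketch of that pattern (project, task, and checkpoint file names are assumptions, not from the thread):

```python
# Sketch: disable PyTorch auto-logging, then register a checkpoint manually.

def auto_connect_config(disabled=("pytorch",)) -> dict:
    """Build the auto_connect_frameworks mapping for Task.init."""
    return {name: False for name in disabled}

# Usage (requires clearml; names are placeholders):
#   from clearml import Task, OutputModel
#   task = Task.init(
#       project_name="examples",
#       task_name="manual-model-logging",
#       auto_connect_frameworks=auto_connect_config(),
#   )
#   OutputModel(task=task).update_weights(weights_filename="model.pt")
```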

3 years ago
I Guess One Experiment Is Running Backwards In Time

JitteryCoyote63, are you on a self-hosted server? It seems the issue was solved for the 3.8 release, and I think it should be included in the next self-hosted release

3 years ago
Hi, I'm Trying To Access/Use Experiment's Model+Data+Params. My Model And Data Are Stored In S3 And I'm Not Sure What Is The Practice Of Getting Them Ready To Use. When I Use

You can clone it via the UI, enqueue it to a queue that has a worker running against that queue. You should get a perfect 1:1 reproduction
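The same clone-and-enqueue flow can also be sketched with the SDK (the task ID and queue name below are placeholders):

```python
# Sketch: clone an existing task and enqueue the clone for a worker.

def clone_name(base: str) -> str:
    """Illustrative naming scheme for the cloned task."""
    return f"{base} (clone)"

# Usage (requires clearml; "abc123" and "default" are placeholders):
#   from clearml import Task
#   original = Task.get_task(task_id="abc123")
#   cloned = Task.clone(source_task=original, name=clone_name(original.name))
#   Task.enqueue(cloned, queue_name="default")
```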

3 years ago
I Got A Clearml-Serving Instance Running With Inferences Served Correctly, However The Model-Endpoints Dashboard Screen Is Remaining Empty With The Splash Screen. Why Don't The Model Endpoints Appear There Too?

Hi @<1774245260931633152:profile|GloriousGoldfish63> , this feature is awaiting enablement on the clearml-serving side and will be supported in the next release

7 months ago
Hi Team! Is There A Way To Make ClearML's AWS Autoscaler And Queues Resource-Aware Please? I.E. If We Can Say, As We Enqueue Our Job, How Much Ram Or Gpu-Ram Or Even Gpus It Needs, Have The Scheduler/Autoscaler Dispatch The Job To Instances That Are Of Th

Hi @<1546665634195050496:profile|SolidGoose91> , when configuring a new autoscaler you can click on '+ Add item' under compute resources and this will allow you to have another resource that is listening to another queue.

You need to set up all the resources to listen to the appropriate queues to enable this allocation of jobs according to resources.

Also in general - I wouldn't suggest having multiple autoscalers/resources listen to the same queue. 1 resource per queue. A good way to mana...

2 years ago
Hello! I've Been Trying To Use Clearml For The First Time, But I Cannot Seem To Run The First Serving Model. First, I Run The Following In Powershell: >>> Clearml-Serving Create --Name "June Test" Log: Clearml-Serving - Cli For Launching Clearml Serving

So, I went to the link in order to use it like Postman - testing the API without using Python. It was ChatGPT that directed me there, and it is kind of a nice way to validate the API

I would ignore anything that ChatGPT says about ClearML (and most other things too)

5 months ago
Hey I Hope Everyone Is Having A Good Day, Two Quick Questions About Datasets:

This description in the add_tags() doc intrigues me tho, I would like to remove a tag from a dataset and add it to another version (eg: a used_in_last_training tag) and this method seems to only add new tags.

I see. Then I think you would need to do this via the API:
https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksupdate
or
https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksupdate_batch
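A sketch of calling tasks.update for this (the server URL, task ID, and credentials below are placeholders):

```python
# Sketch: build the body for a POST to the tasks.update endpoint,
# replacing a task's tags list.

def build_update_tags_payload(task_id: str, tags: list) -> dict:
    """Request body for tasks.update setting a task's tags."""
    return {"task": task_id, "tags": tags}

# Usage (placeholder server and credentials):
#   import requests
#   resp = requests.post(
#       "https://api.clear.ml/tasks.update",
#       json=build_update_tags_payload("abc123", ["used_in_last_training"]),
#       auth=("ACCESS_KEY", "SECRET_KEY"),
#   )
#   resp.raise_for_status()
```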

3 years ago
Hello, Where Are Projects And Logging File Saved Default? Are They Saved In Fileserver? Then What Is The Exact Path?

Hi @<1524922424720625664:profile|TartLeopard58> , projects and many other internals like tasks are all saved in internal databases of the ClearML server, specifically in mongo & elastic

2 years ago
Hello Everyone! Could You Help Me With The Authorization Question? Is It Possible To Add A New User Through The Api To Access Clearml Webserver? I Found Three Methods In The Clearml Rest Api: Auth.Credentials_Key, Auth.Credentials, Auth.Role. Is There A D

Hi @<1578555761724755968:profile|GrievingKoala83> , there is no such capability in the open source. To add new users you need to edit the users file.

In the Scale/Enterprise licenses you have full user management, including role-based access controls
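For the open source server, editing the users file usually means the fixed-users section of the server configuration (typically /opt/clearml/config/apiserver.conf); a sketch with placeholder values:

```
auth {
    fixed_users {
        enabled: true
        users: [
            {
                username: "jane"
                password: "change-me"
                name: "Jane Doe"
            }
        ]
    }
}
```

The server needs a restart for changes here to take effect.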

one year ago
Does Clearml Support Running The Experiments On Any "Serverless" Environments (I.E. Vertexai, Sagemaker, Etc.), Such That Gpu Resources Are Allocated On Demand? Alternatively, Is There A Story For Auto-Scaling Gpu Machines Based On Experiments Waiting In

Does ClearML support running the experiments on any "serverless" environments

Can you please elaborate on what you mean by "serverless"?

such that GPU resources are allocated on demand?

You can define various queues for resources according to whatever structure you want. Does that make sense?

Alternatively, is there a story for auto-scaling GPU machines based on experiments waiting in the queue and some policy?

Do you mean an autoscaler for AWS for example?

3 years ago
I Just Saw The New Release Of The Agent 1.8.1 :

Hi @<1576381444509405184:profile|ManiacalLizard2> , it looks like the default setting is still false

one year ago
Is It Possible To Change The

I think it sits somewhere in the UI code

3 years ago