CostlyOstrich36
Moderator
0 Questions, 4175 Answers
  Active since 10 January 2023
  Last activity 2 years ago

0 Hi, I’m Getting This Error When I Try To Run Task On A Remote Agent With Docker Mode Web UI:

REMOTE MACHINE:

  1. git ssh key is located at ~/.ssh/id_rsa

Is this also mounted into the docker itself?

3 years ago
0 I'm Trying To Run This Example:

I'm not quite sure I understand. Is this from the clearml-agent?

3 years ago
0 I'm Trying To Run This Example:

SparklingElephant70 , Hi 🙂
Please create a queue in the system called 'services' and run an agent against that queue

3 years ago
0 I'm Trying To Run This Example:

Can you please provide a shareable link?

3 years ago
0 I'm Trying To Run This Example:

I'm not sure; check which queue the steps are enqueued in

3 years ago
0 I'm Trying To Run This Example:

Pending means it is enqueued. Check which queue it belongs to by looking at the INFO tab after clicking on the task :)

3 years ago
0 I'm Trying To Run This Example:

You'll need to assign an agent to run on the queue, something like this: 'clearml-agent daemon --foreground --queue services'

3 years ago
0 I'm Trying To Run This Example:

But you said that pipeline demo is stuck. Which task is the agent running?

3 years ago
0 Is There A Robust Way (Using The Sdk And Not The Ui) To Add Tags To Task Regardless Of Where It Is Executed?

I think you can get the task from outside and then add tags to that object

3 years ago
0 Hi, I'm Setting A

Yeah, I see what you're saying. It doesn't keep its type. This might be a bug.

3 years ago
0 Hi, I've Multiple Tasks Set Up In A Complex Pipeline. How Can I:

Hi SubstantialElk6 ,

Define prior to running the pipeline, which tasks to be running on which remote queue using which images?

What type of pipeline steps are you running? From task, decorator or function?

Make certain tasks in the pipeline run in the same container session, instead of spawning new container sessions? (To improve efficiency)

If they're all running on the same container, why not make them the same task and do things in parallel?

3 years ago
0 Hello Everyone, I'M Currently Working On

Hi @<1610445887681597440:profile|WittyBadger59> , how are you reporting the plots?

I would suggest taking a look here and running all the different examples to see the reporting capabilities:
None

2 years ago
0 Hello, When Sending New Tasks Using Python Script, I'm Getting This Error No Matter How Many Times I Retry The Task, Even Though The Commit Is Pushed And The Agent Has Correct Credentials. I Figured Out The Problem Is The Cached Repository Because The Er

Hi @<1523708920831414272:profile|SuperficialDolphin93> , once you deleted the cache folder did it work?

Also, did you try pulling the specific commit using the same credentials that are defined on the agent machine?

3 months ago
0 Hi All. After Rebooting The Server After "No Space On Disk" Cannot See Plots Of One Of The Experiments.

Hi @<1523701553372860416:profile|DrabOwl94> , do you see any errors in Elasticsearch?

6 months ago
0 Does Any One Know This Error While Running A Pipeline:

Is it possible the machines are running out of memory? Do you get this error on the pipeline controller itself? Does this constantly reproduce?

2 years ago
0 Hi. I Am Trying To Install Clearml-Agent On The Remote Server On Aws. I Successfully Installed It To The Home Directory: Successfully Installed Attrs-20.3.0 Clearml-Agent-1.2.3 Distlib-0.3.4 Filelock-3.4.1 Furl-2.1.3 Idna-2.10 Orderedmultidict-1.0.1 Path

This is part of the log - I'll need the entire thing 🙂
` ERROR: Could not find a version that satisfies the requirement ipython==7.33.0 (from -r /tmp/cached-reqssiv6gjvc.txt (line 4)) (from versions: 0.10, 0.10.1, 0.10.2, 0.11, 0.12, 0.12.1, 0.13, 0.13.1, 0.13.2, 1.0.0, 1.1.0, 1.2.0, 1.2.1, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 3.0.0, 3.1.0, 3.2.0, 3.2.1, 3.2.2, 3.2.3, 4.0.0b1, 4.0.0, 4.0.1, 4.0.2, 4.0.3, 4.1.0rc1, 4.1.0rc2, 4.1.0, 4.1.1, 4.1.2, 4.2.0, 4.2.1, 5.0.0b1, 5.0.0b2, 5...

3 years ago
0 Is It Possible To Add Just A String Or Some Other Object As An Artifact? If Yes, Then How?

Yes & Yes.
task.upload_artifact('test_artifact', artifact_object='foobar')
You can save a string; note, however, that in the end it will be saved as a file and not a Python object. If you want to keep your object, you can pickle it 🙂
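The "saved as a file" point, and the pickle suggestion, can be seen with plain Python; this is a standalone sketch of the round-trip (the object and file name are illustrative), not ClearML's internal storage code:

```python
import pickle
import tempfile
from pathlib import Path

# Artifacts are ultimately stored as files; pickling lets a Python
# object survive the file round-trip intact.
obj = {"flavor": "red velvet", "layers": 3}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "my_object.pkl"
    path.write_bytes(pickle.dumps(obj))          # what gets uploaded
    restored = pickle.loads(path.read_bytes())   # what you get back

assert restored == obj  # round-trip preserves the object's structure
print(restored["flavor"])  # → red velvet
```

You could upload the resulting `.pkl` file itself as the artifact to get the object back later with `pickle.loads`.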

3 years ago
0 Hi All, I Would Like To Use Clearml-Serving To Serve Model Binaries (For Use In On-Device Deployment). Can Clearml-Serving Be Used To Serve That?

Hi @<1523701601770934272:profile|GiganticMole91> , I think for binaries and not just the model files themselves you would need to do a bit of tweaking

5 months ago
0 Does

Hi ElegantCoyote26 ,

What happens if you delete ~/.clearml (This is the default cache for ClearML) and rerun?

3 years ago
0 Hi, I Wanted To Ask Whether Using Clearml

Hi @<1750327614469312512:profile|CrabbyParrot75> , why use the StorageManager module and not the Datasets to manage your data?

11 months ago
0 Does Clearml Support Running The Experiments On Any "Serverless" Environments (I.E. Vertexai, Sagemaker, Etc.), Such That Gpu Resources Are Allocated On Demand? Alternatively, Is There A Story For Auto-Scaling Gpu Machines Based On Experiments Waiting In

Does ClearML support running the experiments on any "serverless" environments

Can you please elaborate by what you mean "serverless"?

such that GPU resources are allocated on demand?

You can define various queues for resources according to whatever structure you want. Does that make sense?

Alternatively, is there a story for auto-scaling GPU machines based on experiments waiting in the queue and some policy?

Do you mean an autoscaler for AWS for example?

3 years ago