EnthusiasticShrimp49
Moderator
0 Questions, 96 Answers
  Active since 18 February 2023
  Last activity 2 years ago

Reputation

0
0 Hello guys, I have a question about local cache. Right now I'm trying to store in cache a pretty large dataset (over 20 mil files and 3 TB of data). I use a

Thanks for pointing this out; we will need to update our documentation. Still, if you manually inspect the ~/clearml.conf file, you will see the available configurations.

2 years ago
0 Hello everyone, I would like to know what your projects are in terms of the usage of ClearML pipelines? What are your most elaborate pipelines? So far, I am using "only" a pipeline that looks like this:

Sounds interesting. But my main concern with this kind of approach is that if the surface of (hparam1, hparam2, objective_fn_score) is non-convex, your method may not reach the best set of hyperparameters. Maybe try using smarter search algorithms, like BOHB or TPE, if you have a large search space; otherwise, you can try a few rounds of manual random search, reducing the search space around the region of the most likely best hyperparameters after every round.
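
If you want to go the smarter-search route, a rough sketch with ClearML's HPO tooling could look like this (project, task, and metric names are placeholders, and the base task ID has to be filled in):

```python
from clearml import Task
from clearml.automation import (
    HyperParameterOptimizer, UniformParameterRange, DiscreteParameterRange,
)
from clearml.automation.optuna import OptimizerOptuna  # TPE-based sampler by default

# Controller task that drives the optimization (names are placeholders)
task = Task.init(project_name="examples", task_name="hpo-controller",
                 task_type=Task.TaskTypes.optimization)

optimizer = HyperParameterOptimizer(
    base_task_id="<training-task-id>",  # the experiment cloned for every trial
    hyper_parameters=[
        UniformParameterRange("General/learning_rate", min_value=1e-4, max_value=1e-1),
        DiscreteParameterRange("General/batch_size", values=[16, 32, 64]),
    ],
    objective_metric_title="validation",  # assumed metric title/series
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=OptimizerOptuna,
    max_number_of_concurrent_tasks=2,
    total_max_jobs=20,
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```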

As for why struct...

2 years ago
0 Hey guys

It may indeed be, thanks for letting us know, we’ll try to replicate it

2 years ago
0 Hi! I am trying to build and run a pipeline. I pass my dataset as a parameter of the pipeline:

Hey @<1523704757024198656:profile|MysteriousWalrus11> , given your use case, did you consider passing the path to the dataset? Like an address to an S3 bucket
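
A rough sketch of what passing just the address could look like (the bucket path and names below are made up):

```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["row_count"])
def load_data(dataset_uri: str):
    # The step receives only a string with the data's location and loads it itself
    import pandas as pd  # reading directly from S3 requires s3fs/pyarrow
    df = pd.read_parquet(dataset_uri)
    return len(df)

@PipelineDecorator.pipeline(name="example-pipeline", project="examples", version="0.1")
def run(dataset_uri: str):
    load_data(dataset_uri=dataset_uri)

if __name__ == "__main__":
    PipelineDecorator.run_locally()
    run(dataset_uri="s3://my-bucket/path/data.parquet")
```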

2 years ago
0 How can I send a composed chunk of code for remote execution

I'm afraid serializing an entire class won't be possible, but create_function_task will send the entire environment for remote execution, so you can still access your code.
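
A rough sketch of how that could look (project, queue, and function names are illustrative):

```python
from clearml import Task

def train(a, b):
    # Standalone function that will become its own task
    return a + b

task = Task.init(project_name="examples", task_name="parent")
# Wraps `train` in a new draft task that carries over the current environment;
# extra keyword arguments become the function's arguments
func_task = task.create_function_task(train, func_name="train", task_name="remote train", a=1, b=2)
# Send it to a queue that a clearml-agent is listening on
Task.enqueue(func_task, queue_name="default")
```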

one year ago
0 I know at least one other person has posted about this previously, but when I interact with

It happens due to an internal use of Dataset.get; the larger the dataset, the more verbose it will be. We'll fix this in the upcoming releases.

2 years ago
0 Hi, I would like to bring awareness

I think you can set the CUDA version in the clearml.conf. Alternatively, you can have the agent use a docker image with your required version of CUDA instead of setting the environment directly on the machine.
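
For the docker-image route, a rough sketch could look like this (the image and queue names are only examples):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="gpu-training")
# Ask a docker-mode agent to run this task inside an image that already
# ships the CUDA version you need (image name is just an example)
task.set_base_docker(docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04")
# Hand the task off to the agent's queue instead of finishing it locally
task.execute_remotely(queue_name="gpu-queue", exit_process=True)
```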

2 years ago
0 Hey there! I kindly ask for your swarm knowledge on ClearML pipelines. I'm trying to set up a simple pipeline with a controller running on the service queue and three tasks that are added with

Hey @<1547390438648844288:profile|ScaryJellyfish75>, can you provide the whole code for the pipeline, and also mention which ClearML version you are using?

2 years ago
0 1) Hi team, can I clone an experiment shared by someone via a link?

Hey Sana, yes you can. When you open the link, check the Task's menu bar on the upper-right side, and you will notice that you can clone the shared task.

2 years ago
0 Hello, I am trying to modify my clearml-agent running on an AWS autoscaler (from ClearML Applications). I want to be able to clone my repo (working), and install my Poetry dependencies from

Do you know whether the agent VM/image has Python 3.9 installed? Also, you emphasised that this happens when setting the package manager to Poetry; does that mean the issue doesn't happen when leaving the package manager settings at their default values?

2 years ago
0 Does anyone know this error while running a pipeline:

Can you paste here the code of the pipeline that you're trying to run?

2 years ago
0 My project pipeline is running well and good, but instead of completed it is coming up as aborted after complete execution

Is this a Jupyter notebook or something? Can you download it properly as either a .ipynb or .py file?

2 years ago
0 Hey guys

Can you please check with the latest 1.10.2 SDK version whether the checkpointing issue still happens? As for the example code which couldn't be reproduced, we're already working on it and should have a fix in the next minor SDK version.

2 years ago
0 How to version models while training in production

It sounds like you don't have clearml installed in the Ubuntu container. Either that, or the clearml.conf in the container is not pointing to the server, and as a result all information is missing.

I'd rather suggest you change the approach: run a clearml-agent set up with docker, and when you want to run YOLOv5 training, execute it remotely on the queue that the agent is listening to.
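
Roughly, that flow could look like this (queue and project names are placeholders):

```python
from clearml import Task

# On the machine side, an agent would run in docker mode, e.g.:
#   clearml-agent daemon --queue default --docker
task = Task.init(project_name="yolov5", task_name="train")
# Stop local execution here and re-launch the task on the agent's queue
task.execute_remotely(queue_name="default", exit_process=True)

# ... the actual YOLOv5 training code below runs only on the agent ...
```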

one year ago
0 Hello, I saw that ClearML Data was integrated into YOLOv5

To link a dataset to a task you need to pass the alias= parameter to Dataset.get. See here: https://clear.ml/docs/latest/docs/clearml_data/clearml_data_sdk#accessing-datasets
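
For example (project, dataset, and alias names are placeholders):

```python
from clearml import Dataset

# Passing alias= records the dataset ID under the current task's configuration,
# which is what links the dataset to the task
dataset = Dataset.get(
    dataset_project="examples",
    dataset_name="my_dataset",
    alias="training_data",
)
local_path = dataset.get_local_copy()
```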

2 years ago
0 Hi there, does anyone have suggestions for best practice for deploying a pipeline so that it can run remotely on a ClearML server using a docker image? I am finding the ClearML docs and videos insufficient to get the pipeline to actually run to completion

Hey @<1654294828365647872:profile|GorgeousShrimp11>, can you abort all pending experiments that are waiting to be fetched from this queue and try again? Off the top of my head, it could be that the clearml-agent can't pull the custom docker image. In general you should treat the docker images not as step definitions but only as the environment, hence setting the entrypoint is not necessary.
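
As a small illustration of using the image purely as the step's environment (the image and names are just examples):

```python
from clearml.automation.controller import PipelineDecorator

# The docker image only provides the environment; ClearML injects its own
# entrypoint to run the step, so the image should not override it
@PipelineDecorator.component(docker="python:3.10-slim", return_values=["doubled"])
def step_one(x: int):
    return x * 2
```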

one year ago
0 Hi, we would like to incorporate some approval process in ClearML. One of the needs is to attach some PDFs and Word docs to a published experiment, preferably through the web UI. The attachments could be in the form of the actual files, or links to the fil

This sounds like a use case for the enterprise version of ClearML. In it you can set read/write permissions. Publishing is considered a "write", so you can limit who can do it. Another thing that might be useful in your scenario is to try using "Reports", and connect the "approved" experiments' info to a report and then publish it. Here's a short video introducing Reports.

By the way, please note that if the experiment/report/whatever is publis...

2 years ago
0 Hello. I want to update an artifact in a task (a pandas DataFrame). I do this with

You can try to add the force_download=True flag to .get() to ignore the locally cached content. Let me know if it helps.
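
For example (the task ID and artifact name are placeholders):

```python
from clearml import Task

task = Task.get_task(task_id="<task-id>")
# force_download=True skips the local cache and re-fetches the artifact
df = task.artifacts["data_frame"].get(force_download=True)
```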

2 years ago
0 Hi there, does anyone have suggestions for best practice for deploying a pipeline so that it can run remotely on a ClearML server using a docker image? I am finding the ClearML docs and videos insufficient to get the pipeline to actually run to completion

Which gives me an idea. Could you please remove the entrypoint from the docker image altogether and try again?

Overriding the entrypoint in the image can lead to docker run/docker exec failing to work properly, because instead of a shell it will use your entrypoint to run everything.

one year ago
0 Hello. I want to update an artifact in a task (a pandas DataFrame). I do this with

Also, make sure you use Task.init instead of task.init
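
For example (project and artifact names are placeholders):

```python
import pandas as pd
from clearml import Task

# Task.init is a class method on Task (capital T), not a method on an instance
task = Task.init(project_name="examples", task_name="update-artifact")
df = pd.DataFrame({"a": [1, 2, 3]})
task.upload_artifact("data_frame", artifact_object=df)
```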

2 years ago