AgitatedDove14
Moderator
49 Questions, 8094 Answers
  Active since 10 January 2023
  Last activity 10 months ago

Reputation: 0
Badges: 25 × Eureka!
0 Hello, I Tried The Clearml-Session Cli To Start A Jupyter Instance On An Agent, But Got An Error With The Password, Here Is The Full Cli Log:

That didn't give useful info; the issue was that docker was not installed on the agent machine x)

JitteryCoyote63 you mean "docker" was not installed and it did not throw an error ?

3 years ago
0 With The Helm Charts, What Is The Recommended Way To Automate Getting An Api Secret Pair For The K8 Glue Agent So You Don't Have To Go Into The Ui And Generate One In Between The Server And Agent Helm Releases?

So essentially, the server helm chart creates a randomly generated secret pair and deploys it as a shared k8 secret that pods can access.

This is the tricky part: for the helm chart to be able to create it, it means it can log in to the server, which means there is a secret embedded in the helm chart that lets you access the default server. You see my point ?

2 years ago
0 Hey, Do Hyperdatasets Offer The Same Features With Tabular Data? Almost All Examples On The Docs Are On Image Datasets

basically @<1554638166823014400:profile|ExuberantBat24> you can think of hyper-datasets as a "feature-store for unstructured data"

one year ago
0 Hello! I Think I've Found A Bug, But Couldn't Fix It Completely To Make A Pull Request. I Want To Optimize Hyperparameters With Trains.Automation But:

I want to optimize hyperparameters with trains.automation but: ...

Yes, you are correct. In the case of the example code it should be "General/..."; if you have ArgParser, it should be "Args/...". Yes, it looks like the metric is wrong, it should be "epoch_accuracy" & "epoch_accuracy" (the objective metric title and series).
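For context, a minimal sketch of what that setup can look like with the current clearml package (the original question used trains.automation, where the class names are the same); the base task id, queue name and parameter names below are placeholders:

from clearml.automation import (
    HyperParameterOptimizer,
    UniformIntegerParameterRange,
    RandomSearch,
)

optimizer = HyperParameterOptimizer(
    base_task_id="BASE_TASK_ID_HERE",            # task to clone and optimize (placeholder)
    hyper_parameters=[
        # "General/..." for parameters connected via Task.connect(), "Args/..." for argparse arguments
        UniformIntegerParameterRange("General/batch_size", min_value=16, max_value=128, step_size=16),
    ],
    objective_metric_title="epoch_accuracy",     # metric title
    objective_metric_series="epoch_accuracy",    # metric series (variant)
    objective_metric_sign="max",
    optimizer_class=RandomSearch,
    execution_queue="default",                   # placeholder queue name
)
optimizer.start()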

4 years ago
0 Hi, Is There Any Option To Run Clearml Agent In Docker?

Hi @<1645597514990096384:profile|GrievingFish90>
You mean the agent itself inside a docker then the agent spins sibling dockers for the Tasks ?

11 months ago
0 I've Been Trying To Use The

Hi @<1610808279263350784:profile|FriendlyShrimp96>

Is there a way to get a list of variants given a metric, or even just a full list of metrics and variants for a given task id?

Try this

from clearml.backend_api.session.client import APIClient

# client uses the credentials from the local clearml.conf
c = APIClient()
# list the metrics reported for the given task (here for debug-image events; replace the task id)
metrics = c.events.get_task_metrics(tasks=["TASK_ID_HERE"], event_type="training_debug_image")
print(metrics)

I think API ...
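For scalar metrics specifically, a possible alternative (a sketch, not from this thread; the task id is a placeholder) is Task.get_reported_scalars(), which returns the reported metric titles and their variants:

from clearml import Task

task = Task.get_task(task_id="TASK_ID_HERE")
# roughly {metric_title: {variant: {"x": [...], "y": [...]}}}
scalars = task.get_reported_scalars()
for metric, variants in scalars.items():
    print(metric, "->", list(variants.keys()))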

one year ago
0 Assuming I Have A

WackyRabbit7 I guess we are discussing this one on a diff thread 🙂 but yes, should totally work, that's the idea

4 years ago
0 Hi Guys, Thanks For The Previous Discussion On Ml-Ops With Clearml Agent. I'm Still Not Sure How To Monitor A Training Job On K8S (That Wasn't Scheduled By Clearml). My Clearml Server Is Deployed And Functional For Tracking Non-K8S Jobs. But For A K8S Job

(That wasn't scheduled by ClearML).

This means that from the ClearML perspective they are "manual", i.e. the job itself (by calling Task.init) creates the experiment in the system and fills in all the fields.

But for a k8s job, I'm still unsuccessful.

HelpfulDeer76 When you say "unsuccessful" what exactly do you mean ?
Could it be they are reported to the clearml demo server (the default server if no configuration is found) ?
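For reference, a minimal sketch of the Task.init call mentioned above (project and task names are placeholders); running it inside the k8s job is what creates the experiment, and the local clearml.conf decides which server receives it:

from clearml import Task

# placeholder names; without a clearml.conf the SDK can fall back to the default/demo server
task = Task.init(project_name="examples", task_name="k8s training job")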

3 years ago
0 Hi Guys, I Have Been Running The Clearml-Serving For A While Now And I Realize That From Time To Time After A Couple Of Hours The Serving Task (Control Plane) That Is Configured Through The Cli Goes Into Status Abort. This Happens Even Though All The Pods

Hi @<1569858449813016576:profile|JumpyRaven4> could you test the fix? just pull & run

allegroai/clearml-serving-triton:1.3.1
allegroai/clearml-serving-inference:1.3.1
11 months ago
0 Hi! I Am Running A Code From Repository, Which Is Cloned By The Following Command:

EnviousPanda91 this seems like a specific issue with the clearml-task cli, could that be ?
Can you send a full clearml-task command-line to test ?

2 years ago
0 So From What I Can Tell Using

ShinyPuppy47 the code that is being launched, does it call Task.init?

2 years ago
0 So From What I Can Tell Using

Are you sure you passed add_task_init_call=True to Task.create?
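For reference, a minimal sketch of Task.create with that flag (the project, task, repo URL and script path are placeholders):

from clearml import Task

task = Task.create(
    project_name="examples",                            # placeholder
    task_name="remote run",                             # placeholder
    repo="https://github.com/your-org/your-repo.git",   # placeholder
    script="train.py",                                  # placeholder
    add_task_init_call=True,   # inject a Task.init call so the script reports back when executed
)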

2 years ago
0 Hi Anyone

Hi AstonishingWorm64
Is this the same ?
https://github.com/allegroai/clearml-serving/issues/1
(I think it was fixed on the later branch, we are releasing 0.3.2 later today with a fix)
Can you try:
pip install git+

3 years ago
0 With The Helm Charts, What Is The Recommended Way To Automate Getting An Api Secret Pair For The K8 Glue Agent So You Don't Have To Go Into The Ui And Generate One In Between The Server And Agent Helm Releases?

I have to admit, I'm not sure...
Let me talk to the backend guys; in theory you are correct, the "initial secret" can be injected via the helm env var, but I'm not sure how that would work in this specific case

2 years ago
0 Re Dataset Object: Is It Possible To Use Sync_Folder And Upload Several Times Along The Code And Then Finalize The Dataset?

EmbarrassedSpider34

sync_folder and upload several times along the code and then finalize the dataset

Do notice they overwrite one another...
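A minimal sketch of that flow (the dataset project/name and local folder are placeholders); each sync_folder/upload pair stages the current folder state, and, as noted above, later calls overwrite what earlier ones staged:

from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="my_dataset")  # placeholders
ds.sync_folder(local_path="./data")   # stage the current folder contents
ds.upload()                           # can be called several times along the code
# ... produce or modify more files under ./data ...
ds.sync_folder(local_path="./data")
ds.upload()
ds.finalize()                         # close the dataset version when done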

2 years ago
0 Hello, When Running A Task With A Remote Interpreter I Get

Hi DeliciousKoala34
This means the pycharm plugin was not able to run git on your local machine.
What's your OS ?
could it be that if you open cmd / shell "git" is not in the path ?

2 years ago
0 When I Do

You could change infrastructure or hosting, and now your data is associated with the wrong URL

Yeah that makes sense, so have it on a specific dns name? (this is usually the case with k8s deployments)

2 years ago
0 Tracking From Experiments To Datasets

Yeah that makes sense 🙂

2 years ago
0 Hello, If I Try To Create A Dataset From Code, As Shown In This Example, I Have Two Questions:

Can you test with the latest RC:
pip install clearml==1.0.3rc0

3 years ago
0 Hi, I Am New Here, Can I Ask Questions On Trains-Server Also?

Hi CooperativeFox72 ,
From the backend guys, long story short: upgrade your machine => more cpu cores, more processes, it is that easy 🙂

4 years ago
0 It Is Possible To Attach To An

Hi GiganticTurtle0
Sure, OutputModel can be manually connected:
from clearml import OutputModel, Task

model = OutputModel(task=Task.current_task())
model.update_weights(weights_filename='localfile.pkl')

3 years ago
0 Hi Friends! I'm Trying To Upgrade The

Also, I just wanted to say thanks for the tool! I'm managing a small data science practice and it's going to be really nice to have a view of all of the experiments we've got and know our GPU utilization, all without having to give every data scientist access to each box where the workflows are run. Incredibly stoked.

♥ ❤ ♥

3 years ago
0 Has Anyone Successfully Deployed Clearml On A Kube Cluster Utilizing Istio? I Don't See Any Mention Of Istio In The Docs.

i’m working on creating a custom config with istio

That is awesome! let me know if we could help 🙂
Also please consider PRing it, I'm sure other users will appreciate the option

3 years ago
0 Assuming I Have A

That is correct.
Obviously once it is in the system, you can just clone/edit/enqueue it.
Running it once is a means to populate the trains-server.
Make sense ?

4 years ago