WackyRabbit7
Moderator
73 Questions, 550 Answers
  Active since 10 January 2023
  Last activity 8 months ago

Reputation: 0

Badges: 533 × Eureka!
0 I'm Looking To Utilize The Trains AWS Autoscaler Functionality, But After Going Through Its Docs A Few Times I Still Don't Get It. Ultimately, My Setup Is That I Have Multiple Data Scientists Working On Static Instances, And They Have Queues Available To

So once I enqueue it, it's up? The docs say I can configure the queues that the autoscaler listens to in order to spin up instances, inside the autoscale task - I wanted to make sure that this config has nothing to do with where the autoscale task itself was enqueued
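For reference, that mapping lives inside the autoscaler task's configuration. A rough sketch of the relevant section, modeled on the example config shipped with the AWS autoscaler - the queue and resource names here are made up, and exact field names may vary by version:

configurations:
  queues:
    # queue the autoscaler listens on; enqueueing a task here triggers a spin-up
    aws_gpu_queue:
      - [gpu_machine, 2]   # up to 2 instances of this resource type
  resource_configurations:
    gpu_machine:
      instance_type: g4dn.xlarge
      is_spot: false

The queue the autoscale task itself runs in (e.g. the services queue) is separate from this list, which matches the reading above that the two are unrelated.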

4 years ago
0 Unrelated Problem (Or Is It?) The ClearML Built-In Cleanup Service Fails

to fix it, I excluded this var entirely from the docker-compose

2 years ago
0 Question, When Using

Good, so if I'm templating something using clearml-task (without a queue, so the task is in draft mode) it will use this task? Even though it never executed?
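For reference, a minimal sketch of creating such a draft template from the CLI - the project, name, and script here are made-up placeholders; omitting --queue leaves the task in draft rather than executing it:

clearml-task --project MyProject --name my-template --script train.py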

3 years ago
0 I Have Set Up An Agent On A GPU Machine, And Spun Up The Daemon In Docker Mode, And Specifically Specified A GPU That It Will Work With. The Image Is Okay And I Verified That By Running

By the way, just from inspecting, the CUDA version in the output of nvidia-smi matches the driver installed on the host, and not the container - look at the image below
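(This is expected: nvidia-smi reports the CUDA version supported by the host driver, not the toolkit inside the container. A quick way to compare the two, assuming the container image ships nvcc:)

# driver-level CUDA version; identical on the host and in the container
nvidia-smi | grep "CUDA Version"
# CUDA toolkit actually installed in the image
nvcc --version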

4 years ago
0 Using

Saving part from task A:

# take the pipeline stored in this trial's result
pipeline = trials.trials[index]['result']['pipeline']
# the first iteration gets the 'best_iter_' prefix, the rest get 'iter_'
output_prefix = 'best_iter_' if i == 0 else 'iter_'
# store the pipeline object as an artifact on the current task
task.upload_artifact(name=output_prefix + str(index), artifact_object=pipeline)
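The loading side isn't shown in the snippet; a minimal sketch of how task B could pull the artifact back, assuming the artifact name from above - the task ID is a placeholder:

from clearml import Task

# fetch task A by its ID and download the stored pipeline object
task_a = Task.get_task(task_id='<task_a_id>')
pipeline = task_a.artifacts['best_iter_0'].get()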

3 years ago
0 Using

Couldn't find any logic as to which tasks fail and why... all the lines are exactly the same, only with different parameters

3 years ago
0 Using

a third one?

3 years ago
0 Using

Thanks

3 years ago
0 Using

I will

3 years ago
0 Unrelated Problem (Or Is It?) The ClearML Built-In Cleanup Service Fails

Let's take a step back. Let's remove the clearml-services from the docker compose for a second, and run it manually (then you can control everything). Once you have it running manually, let's try to replicate the setup back to the docker compose, make sense?

I'd prefer not to docker-compose down, as researchers are actively working on it. What do you say I manually kill the services agent and launch one myself?
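A rough sketch of that manual swap, assuming the stock clearml-agent CLI and the default services queue name - the container name is a placeholder, check docker ps for yours:

# stop only the services agent container, leaving the rest of the stack up
docker stop <services-agent-container>
# launch a services-mode agent by hand on the same machine
clearml-agent daemon --services-mode --queue services --docker --detached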

3 years ago
0 I Have A Production Inference Pipeline Which I Want To Continuously Test On My GitHub To Make Sure It Doesn't Break As We Move Forward. The Ideal Scenario For Me Is To Use

Gotcha, didn't think of an external server, as Service Containers are part of GitHub's offering. I'll consider that

3 years ago
0 Unrelated Problem (Or Is It?) The ClearML Built-In Cleanup Service Fails

I'm saying that because in the task under "INSTALLED PACKAGES" this is what appears

2 years ago
0 Using

task here is a ClearML task object
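For context, a common way to get hold of such an object inside a running job (a sketch; not necessarily how it was obtained here):

from clearml import Task

# the task the current script is running under, if started via ClearML
task = Task.current_task()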

3 years ago
0 Using

Any news on this? This is kind of creepy; it's something so basic that I can't trust my prediction pipeline, because sometimes it fails randomly for no apparent reason

3 years ago
0 In Pipeline V2, Is It Possible To Register Artifacts To The Pipeline Task? I See There Is A Private Variable

AgitatedDove14

So nope, this doesn't solve my case, I'll explain the full use case from the beginning.

I have a pipeline controller task, which launches 30 tasks. Semantically there are 10 applications, and I run 3 tasks for each (those 3 are sequential, so in the UI it looks like 10 lines of 3 tasks).

In one of those 3 tasks that run for every app, I save a dataframe under the name "my_dataframe".
What I want to achieve is once all tasks are over, to collect all those "my_dataframe" arti...
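A minimal sketch of one way to do that collection once the controller sees all children finish - the project name is a placeholder, the artifact name follows the description above, and the query-by-project approach is an assumption, not the poster's code:

from clearml import Task

# fetch the child tasks of this pipeline run (placeholder project name)
children = Task.get_tasks(project_name='<pipeline_project>')

dataframes = []
for t in children:
    # only one of the three tasks per app registers this artifact
    artifact = t.artifacts.get('my_dataframe')
    if artifact is not None:
        dataframes.append(artifact.get())  # downloads and deserializes it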

3 years ago