TrickySheep9
Moderator
71 Questions, 428 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0

Badges: 1
383 × Eureka!
4 years ago
0 Anyone Doing Sagemaker With Clearml - Something Like The K8S Glue But The Tasks Are Pulled Into Sagemaker Training Jobs

As in, if there are jobs, the first level is new pods, and the second level is new nodes in the cluster.

4 years ago
0 Anyone Doing Sagemaker With Clearml - Something Like The K8S Glue But The Tasks Are Pulled Into Sagemaker Training Jobs

AgitatedDove14 - any pointers on how to run GPU tasks with the k8s glue? How do I control the queue and differentiate tasks that need CPU vs GPU in this context?

4 years ago
0 Anyone Doing Sagemaker With Clearml - Something Like The K8S Glue But The Tasks Are Pulled Into Sagemaker Training Jobs

Running multiple k8s_daemon instances, right? k8s_daemon("1xGPU") and k8s_daemon('cpu'), right?
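(For reference, a rough sketch of running one glue daemon per queue, assuming the K8sIntegration class and its k8s_daemon(queue) entry point from clearml-agent's k8s_glue_example.py; the queue names and pod template file names below are placeholders, not from this thread:)

```python
# Hypothetical sketch only: one k8s glue daemon per queue, each with its own
# pod template (GPU resources requested in gpu_pod.yaml, none in cpu_pod.yaml).
# Verify K8sIntegration's arguments against the clearml-agent version you use.
from multiprocessing import Process

from clearml_agent.glue.k8s import K8sIntegration


def run_daemon(queue, template_yaml):
    # Each daemon polls its own ClearML queue and launches pods from its template.
    k8s = K8sIntegration(ports_mode=False, template_yaml=template_yaml)
    k8s.k8s_daemon(queue)


if __name__ == "__main__":
    Process(target=run_daemon, args=("1xGPU", "gpu_pod.yaml")).start()
    Process(target=run_daemon, args=("cpu", "cpu_pod.yaml")).start()
```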

4 years ago
0 Anyone Doing Sagemaker With Clearml - Something Like The K8S Glue But The Tasks Are Pulled Into Sagemaker Training Jobs

AgitatedDove14 the AWS autoscaler is not k8s native, right? That's sort of the point I am getting at.

4 years ago
0 Anyone Doing Sagemaker With Clearml - Something Like The K8S Glue But The Tasks Are Pulled Into Sagemaker Training Jobs

For different workloads, I need to have different cluster scaler rules and account for different GPU needs

4 years ago
0 Anyone Doing Sagemaker With Clearml - Something Like The K8S Glue But The Tasks Are Pulled Into Sagemaker Training Jobs

Got it. I've never run a GPU workload on EKS before. Do you have any experience or things to watch out for?

4 years ago
0 I Am Not Familiar With Pytorch, But Is It Expected That So Many “Models” Are Created? These Are Being Repeated As Well For A Single Task (This Is Training A T5_Model With Transformers):

` if project_name is None and Task.current_task() is not None:
    project_name = Task.current_task().get_project_name()

if project_name is None and not Task.running_locally():
    task = Task.init()
    project_name = task.get_project_name() `

4 years ago
0 Can Someone Help Me With Deploying This Example Model (From Triton Inference Server) Deployed In Clearml-Serving? Too Many Random Errors For Me To Figure It Out

That makes sense - one part I am confused about is - the Triton engine container hosts all the models, right? Do we launch multiple groups of these in different projects?

4 years ago
0 Why Would Every Submitted Task Be Aborted Directly?

I was having this confusion as well. Did the behavior of execute_remote change, so that what used to be Draft is now Aborted?

4 years ago
0 So I Bumped Onto This Comparison Shared By Dagshub. It Kinda Placed Clearml In A Rather Bad Position Compared To Everything Else In The Industry.

CynicalBee90 - on the platform-agnostic aspect - dvc does it with the CLI, right? Is that what made you give it a green checkmark?

4 years ago
0 Hi Guys, Until Today I Always Requested Data Scientists To Use Cli To Create Tasks. After That I Usually Reconfigure Them So They Can Be Pointed On Git Repo And So On. Unfortunately This Is Becoming A Big Task Since Now We Have Pipelines With Many Tasks A

Example:

name: ml-project
template: nbdev
pipelines_runner: gitlab
pipelines:
  pipeline-1:
    steps:
      - name: "publish-datasets"
        task_script: "mlproject/publish_datasets.py"
      - name: "training"
        task_script: "mlproject/training.py"
        parents: ["publish-datasets"]
      - name: "test"
        task_script: "mlproject/test.py"
        parents: ["training"]

Have a CLI which goes through each of the tasks and creates them.
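(A rough sketch of what such a CLI could do, assuming the YAML layout above and ClearML's Task.create(); the project name, repo URL, and branch below are placeholders, not values from this thread:)

```python
# Hypothetical helper: read the pipeline YAML above and register one ClearML
# task per step. Project, repo, and branch are placeholder values.
import yaml
from clearml import Task


def create_tasks(config_path, project, repo, branch="main"):
    with open(config_path) as f:
        cfg = yaml.safe_load(f)

    for pipeline_name, pipeline in cfg.get("pipelines", {}).items():
        for step in pipeline.get("steps", []):
            # Register a draft task pointing at the step's script in the repo.
            task = Task.create(
                project_name=project,
                task_name=f"{pipeline_name}/{step['name']}",
                repo=repo,
                branch=branch,
                script=step["task_script"],
            )
            print(f"created {task.id} for step {step['name']}, "
                  f"parents: {step.get('parents', [])}")


if __name__ == "__main__":
    create_tasks("ml-project.yaml", project="ml-project",
                 repo="https://gitlab.com/org/ml-project.git")
```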

4 years ago
0 Getting This Error At

That’s great, will try it out soon (it’s 2.30am here, about to crash 🙂 )

4 years ago
0 Is It Possible To Do Additional Setup (The

Ah, is it? I didn't know about that. Let me check it out.

4 years ago
0 Does K8S Glue Support Running Service Agent? Slightly Confused Here

I guess the question is - I want to use services queue for running services, and I want to do it on k8s

4 years ago
0 Is It Possible To Set An Environment Variable For A Task?

Yeah, I was trying it locally and it worked as expected. But locally I was creating a Task first and then seeing if it's able to get the project name from it.
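(Roughly what that local check looks like, assuming the standard clearml Task API; the project and task names are placeholders:)

```python
# Minimal sketch of the local experiment described above: create a Task first,
# then read the project name back from the current task. Names are placeholders.
from clearml import Task

task = Task.init(project_name="my-project", task_name="env-var-check")
print(Task.current_task().get_project_name())  # -> "my-project"
```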

4 years ago
0 Is There A Way To

+1. The k8s helper needs to be more k8s native.

4 years ago
0 When Using Something Like Pdf2Image Which Requires Poppler (Which Can Be Installed With Conda), How Can I Ensure That The Task Can Run On An Agent Correctly? As Of Now It Doesn't Know About Poppler

Basic question - I am running the clearml agent on an Ubuntu EC2 machine. Does it use docker by default? I thought it uses docker only if I add the --docker flag?

4 years ago