WackyRabbit7
Moderator
73 Questions, 550 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges (1): 533 × Eureka!
0 Votes · 5 Answers · 2K Views
When using PipelineController, is there a way to execute locally? Or must I use a queue?
4 years ago

0 Votes · 4 Answers · 1K Views
Got something very wrong with my pipeline, the pipeline plot shows it has an experiment in_progress, but when going to the experiment itself it is already com...
3 years ago

0 Votes · 8 Answers · 1K Views
I have a data scientist constantly having the same problem. If she did not push her latest changes to git, she gets the following error Using cached reposito...
4 years ago

0 Votes · 5 Answers · 2K Views
Why would the Mongo4 migration scripts (for clearml-server 1.2) try to chown 1000:1000?
3 years ago

0 Votes · 31 Answers · 103K Views
In PipelineV2, is it possible to register artifacts to the pipeline task? I see there is a private variable ._task but not sure it's the right way to go as it...
3 years ago

0 Votes · 9 Answers · 2K Views
How do I restart trains-agents? How do I stop them?
5 years ago

0 Votes · 4 Answers · 2K Views
5 years ago

0 Votes · 32 Answers · 109K Views
I have set up an agent on a GPU machine, spun up the daemon in docker mode, and specifically specified a GPU that it will work with. The image is oka...
5 years ago

0 Votes · 2 Answers · 2K Views
I have a production inference pipeline which I want to continuously test on my GitHub to make sure it doesn't break as we move forward. The ideal scenario fo...
3 years ago

0 Votes · 33 Answers · 118K Views
Question about the auto scaling service: under extra_trains_conf, when I supply a configuration file path, should it be a path on the trains server running the...
4 years ago

0 Votes · 31 Answers · 128K Views
Unrelated problem (or is it?): ClearML's built-in Cleanup Service fails with clearml.utilities.pyhocon.exceptions.ConfigMissingException: 'No configuration set...
3 years ago

0 Votes · 32 Answers · 114K Views
Very weird error, trying to run an experiment through an agent in docker mode, and I get this error: docker: Error response from daemon: create /home/elior/De...
4 years ago

0 Votes · 4 Answers · 2K Views
Is there an option to separate the storage from the server? e.g. deploying my trains server on some light machine, and configuring the storage to be AWS S3 or...
5 years ago
0 I'm running

to see if it contains the .git folder

3 years ago
0 Question about the usage of trains agents. In our company we have 3 HPC servers, two of them have multiple GPUs, one is CPU only. I saw in the docs that multiple agents can be run separately, assigning GPUs in whatever manner you want. My questions are 1

Makes sense

So I assume trains assumes I have nvidia-docker installed on the agent machine?

Moreover, since I'm going to use Task.execute_remotely (and not go through the UI), is there a way in code to specify the docker image to be used?
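
In case it helps later readers, a minimal sketch of setting the docker image from code with the clearml (formerly trains) SDK; the project, image and queue names below are placeholders, not values from this thread:

```python
# Minimal sketch, assuming the clearml (trains) Python SDK.
# Project, image and queue names are placeholders.
from clearml import Task

task = Task.init(project_name="examples", task_name="remote run with custom docker")

# Ask the agent (running in docker mode) to execute this task inside a specific image,
# equivalent to setting the base docker image in the UI.
task.set_base_docker("nvidia/cuda:11.8.0-runtime-ubuntu22.04")

# Stop executing locally and enqueue the task for an agent to pick up.
task.execute_remotely(queue_name="default", exit_process=True)
```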

5 years ago
0 I'm running

name: XXXXXXXXXX

on:
  workflow_dispatch:

jobs:
  test-monthly-predictions:
    runs-on: self-hosted
    env:
      DATA_DIR: ${{ secrets.RUNNER_DATA_DIR }}
      GOOGLE_APPLICATION_CREDENTIALS: ${{ secrets.RUNNER_CREDS }}
    steps:
      # Checkout
      - name: Check out repository code
        uses: actions/checkout@v2

      # Set up python environment
      - name: Set up python environment using Poetry
        run: |
          /home/elior/.poetry/bin/poetry env use python3.9
      ...
3 years ago
0 Guess we're back to basics: how do I report a single scalar with no iteration dimension - something I can put as one of the columns in the experiments table?

AgitatedDove14 all I did was to create this metric as "last", then turned on the "max" and "min", and then turned them off

I can't reproduce it now, but: I restarted the services and it didn't help. I deleted the columns and created them again after a while, and that helped.
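
For reference, a minimal sketch of one way to report an iteration-less scalar with a recent clearml SDK (the metric name and value are placeholders; report_single_value may not exist in older releases):

```python
# Minimal sketch, assuming a recent clearml SDK.
from clearml import Task

task = Task.init(project_name="examples", task_name="single scalar demo")

# Report a single value with no iteration axis; it can then be added
# as a column in the experiments table.
task.get_logger().report_single_value(name="accuracy", value=0.87)

task.close()
```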

4 years ago
0 I'm running

so when I'm on the machine through SSH

3 years ago
0 How come

We try to break everything up into independent tasks and group them using a pipeline. The dependency on an agent caused unnecessary overhead, since we just want to execute locally. It became a burden once new data scientists joined the project: instead of just telling them "yeah, just execute this script", you now have to teach them about clearml, the role of agents, how to launch them, how they behave, how to remove them and so on... things you want to avoid with data scientists
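
For what it's worth, a minimal sketch of running a pipeline entirely in the local process with clearml's PipelineController, so no agent or queue is involved (all names and the step body are placeholders):

```python
# Minimal sketch, assuming clearml's PipelineController (PipelineV2).
from clearml import PipelineController


def preprocess():
    # Placeholder step body.
    return 42


pipe = PipelineController(name="local demo pipeline", project="examples", version="1.0")
pipe.add_function_step(name="preprocess", function=preprocess)

# Run the controller and every step in the current process - no agent needed.
pipe.start_locally(run_pipeline_steps_locally=True)
```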

3 years ago
0 I'm looking to utilize the trains AWS autoscaler functionality, but after going through its docs a few times I still don't get it. Ultimately, my setup is that I have multiple data scientists working on static instances, and they have queues available to

So once I enqueue it, it is up? The docs say I can configure the queues that the auto scaler listens to (in order to spin up instances) inside the auto scale task - I wanted to make sure that this config has nothing to do with where the auto scale task itself was enqueued

4 years ago
0 I'm looking to utilize the trains AWS autoscaler functionality, but after going through its docs a few times I still don't get it. Ultimately, my setup is that I have multiple data scientists working on static instances, and they have queues available to

Okay, so let me get this straight

The autoscaling is basically an ever-running task (let's say on the services queue). Now, the actual auto scaling and which queues exist have nothing to do with that, and are configured in the auto scale task?

4 years ago
0 I'm looking to utilize the trains AWS autoscaler functionality, but after going through its docs a few times I still don't get it. Ultimately, my setup is that I have multiple data scientists working on static instances, and they have queues available to

Oh... from the docs I understood that I don't have to run the script, that I can either configure it in the UI or with the script (wizard), so I ignored it up until now

4 years ago
0 I'm looking to utilize the trains AWS autoscaler functionality, but after going through its docs a few times I still don't get it. Ultimately, my setup is that I have multiple data scientists working on static instances, and they have queues available to

The Trains docs at no point mention what I should do on the AWS side... so I'm not sure at what point I should encounter this wizard

I'm going to play with it a bit and see if I can figure out how to make it work

4 years ago
0 I'm looking to utilize the trains AWS autoscaler functionality, but after going through its docs a few times I still don't get it. Ultimately, my setup is that I have multiple data scientists working on static instances, and they have queues available to

What about permissions for the machines that are being spun up? For example, if I want the instances to have specific permissions to read/write to S3, how do I manage those?

4 years ago
0 Hi everyone, I am trying to use docker mode for trains-agent, but it seems that it has a problem with the use of multiple GPUs. This is my trains-agent command: trains-agent daemon --gpus 0,1 --queue dual_gpu --docker --foreground and it gets the error: doc

You should try trains-agent daemon --gpus device=0,1 --queue dual_gpu --docker --foreground and if it doesn't work try quoting trains-agent daemon --gpus '"device=0,1"' --queue dual_gpu --docker --foreground

4 years ago
0 Question about the configuration format - I'd like to parse it within my Python code so I'll be able to access things like

Another Q on that - does pyhocon allow me to edit the file while keeping the comments in place?
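
For anyone landing here later, a minimal sketch of reading the config with pyhocon (the file path and keys are placeholders); as far as I can tell, writing the tree back out does not keep the original comments:

```python
# Minimal sketch, assuming the pyhocon package. Path and keys are placeholders.
from pyhocon import ConfigFactory, HOCONConverter

# Parse the HOCON file into a nested, dict-like ConfigTree.
config = ConfigFactory.parse_file("/path/to/clearml.conf")

# Values are accessible via dotted keys.
api_server = config.get("api.api_server", default=None)
print(api_server)

# Note: re-serializing (as far as I know) drops the original comments,
# so round-tripping is lossy in that respect.
print(HOCONConverter.to_hocon(config)[:200])
```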

4 years ago
0 How do I get access to

Cool - what kind of objects are returned by .artifacts.__getitem__? I want to check their docs
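
For reference, a minimal sketch of inspecting those objects with the clearml SDK (the task ID and artifact name are placeholders):

```python
# Minimal sketch, assuming the clearml SDK. Task ID and artifact name are placeholders.
from clearml import Task

task = Task.get_task(task_id="<task-id>")

# task.artifacts behaves like a dict; indexing returns a clearml Artifact object.
artifact = task.artifacts["my_artifact"]

local_path = artifact.get_local_copy()  # download and return a local file path
obj = artifact.get()                    # deserialize back into a Python object
print(type(artifact), local_path, type(obj))
```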

4 years ago
0 How should I edit the

I only found Project ID, and I'm not sure what that refers to - I have the project name
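
If it helps, a minimal sketch of resolving a project ID from a project name with the clearml SDK (the project name is a placeholder, and Task.get_project_id may not be available in older SDK versions):

```python
# Minimal sketch, assuming a recent clearml SDK. The project name is a placeholder.
from clearml import Task

# Look up the ID that corresponds to a known project name.
project_id = Task.get_project_id(project_name="My Project")
print(project_id)
```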

3 years ago
0 In PipelineV2, is it possible to register artifacts to the pipeline task? I see there is a private variable

and then how would I register the final artifact to the pipeline? AgitatedDove14 ⬆
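
For later readers, a minimal sketch of the approach hinted at in the question, using the private ._task attribute (not a public or stable API; all names are placeholders):

```python
# Minimal sketch only: relies on the private PipelineController._task attribute
# mentioned in the question, which may change between clearml versions.
from clearml import PipelineController

pipe = PipelineController(name="demo pipeline", project="examples", version="1.0")
# ... add pipeline steps here ...

# Grab the Task backing the pipeline controller and attach an artifact to it.
pipeline_task = pipe._task
pipeline_task.upload_artifact(name="final_result", artifact_object={"answer": 42})
```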

3 years ago