SteadySeagull18
Moderator
6 Questions, 15 Answers
  Active since 10 January 2023
  Last activity 8 months ago

Reputation: 0

Badges: 1 (14 × Eureka!)
0 Votes
1 Answer
905 Views
Is there any place I can find the metric storage usage of a given experiment, or all experiments underneath a project?
one year ago
0 Votes
4 Answers
1K Views
Hello all! Is there any way to attach to a task/worker in a terminal? I have jobs which fall into pdb upon an error, and it would be very helpful to be able ...
one year ago
0 Votes
1 Answer
994 Views
I'm wondering if I've run into a bug, or am not understanding something correctly. In a pre_execute_callback in a pipeline step, I am calling model.get_local...
one year ago
0 Votes
6 Answers
1K Views
Hello! I was hoping I could get some debug help. I've set up a ClearML pipeline using the PipelineController, and when running through pipeline.start_locally...
2 years ago
0 Votes
8 Answers
1K Views
Hello everyone! I'm currently trying to set up a Pipeline, and am a bit confused at a few things. Some questions I have: What does the intended workflow for ...
2 years ago
0 Votes
10 Answers
681 Views
8 months ago
0 Hi all, I am really stuck in getting a ClearML pipeline to work. I am using the open source version. I am trying to reproduce the example in the documentation, using pipelines in task mode. Here is my setup

Reviving this: do you recall what fixed this, or has anyone else run into this issue? I'm constantly getting this in my pipelines. If I run the exact same pipeline code/configuration multiple times, it will eventually run without a "User aborted: stopping task (3)" error, but it's unclear what is happening on the runs that fail.

one year ago
0 Hi all! I am a bit confused as to how the Python environment is set. I can submit jobs that build the environment and run perfectly fine. But, if I abort the job -> requeue it from the GUI, then a different environment is installed (which has some package...

I guess what I'm confused about is that the final resolved environment is different between the first manual execution and the reproduced one -- the first runs perfectly fine, the second crashes and fails to make the environment.

8 months ago
0 Hello all! Is there any way to attach to a task/worker in a terminal? I have jobs which fall into pdb upon an error, and it would be very helpful to be able to connect to them and interact with the debugger.

I am wondering if there is a way for me to connect to a currently running worker process and interact with anything in the script that expects user input. For example, if I submitted a task that had this as its script:

# ... other stuff

import code
code.interact()  # blocks, reading from stdin and writing to stdout

would there be any way for me to connect and actually use the interactive Python session it drops into?
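As an aside on the snippet above: the stdlib code module can also be driven programmatically, which is roughly what any remote-attach mechanism would have to do under the hood. A minimal, self-contained sketch (purely illustrative, not a ClearML feature):

```python
import code
import contextlib
import io

# Drive an embedded interpreter programmatically -- roughly what a
# remote-attach shim would do instead of wiring stdin/stdout to a TTY.
interp = code.InteractiveInterpreter()
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    interp.runsource("x = 21 * 2")  # a complete statement: executed immediately
    interp.runsource("print(x)")    # interpreter state persists between calls
result = buf.getvalue().strip()
print(result)  # -> 42
```

Actually attaching to a live worker would additionally need the task itself to expose such a console over a socket or similar channel; the thread does not show a built-in way to do that.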

one year ago
0 Hello! I was hoping I could get some debug help. I've set up a ClearML pipeline using the PipelineController, and when running through...

Yup, there was an agent listening to the services queue; it picked up the pipeline job and started to execute it. It just seems frozen at the place where it should be spinning up the tasks within the pipeline.

2 years ago
0 Hello! I was hoping I could get some debug help. I've set up a ClearML pipeline using the PipelineController, and when running through...

Yup! I have two queues: services, with one worker spun up in --services-mode, and another queue (say foo) with a bunch of GPU workers on it. When I start the pipeline locally, jobs get sent off to foo and executed exactly how I'd expect. If I keep everything exactly the same and just change pipeline.start_locally() -> pipeline.start(), the pipeline task itself is picked up by the worker in the services queue, sets up the venv correctly, prints "St...
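For readers following along, the two launch modes being contrasted can be sketched roughly like this (a minimal illustration of the clearml SDK as I understand it; the project, queue, and task names are invented, and nothing here actually runs since it needs a configured ClearML server and agents):

```python
# Hypothetical sketch of the two launch modes discussed above.
# Requires `pip install clearml` plus a configured server/agents to really run.
def build_pipeline():
    from clearml import PipelineController  # deferred: clearml may not be installed

    pipe = PipelineController(name="demo-pipeline", project="demo", version="1.0.0")
    # Steps are cloned from an existing "template" task (names are illustrative).
    pipe.add_step(
        name="step_one",
        base_task_project="demo",
        base_task_name="template task",
        execution_queue="foo",  # the GPU queue from the description above
    )
    return pipe

# Mode 1: controller logic runs in this process; steps still go to their queue.
#   build_pipeline().start_locally()
# Mode 2: the controller itself is enqueued (by default to the "services"
# queue), so a services-mode agent must pick it up before any step launches.
#   build_pipeline().start(queue="services")
```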

2 years ago
0 Hello everyone! I'm currently trying to set up a pipeline, and am a bit confused at a few things. Some questions I have:

Hi AgitatedDove14 , thanks for the response!

I'm a bit confused about the distinction / how to use these appropriately -- Task.init does not have repo/branch args to set what code the task should be running. Ideally, when I run the pipeline, I run the current branch of whoever is launching the pipeline, which I can do with Task.create. It also seems like Task.init will still make new tasks if artifacts are recorded?

My ideal is that I do exactly what Task.c...
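For context, the Task.init / Task.create distinction being discussed might be sketched like this (illustrative clearml usage; the repo URL, names, and script path are all invented, and the calls are kept inside a function because they need a configured ClearML server to execute):

```python
# Hypothetical sketch contrasting Task.init and Task.create (clearml SDK).
# Nothing runs at import time: these calls need a configured ClearML server.
def make_template_task():
    from clearml import Task  # deferred import: clearml may not be installed

    # Task.init instruments the *currently running* script; repo/branch are
    # auto-detected from the local git checkout rather than passed as args:
    #   task = Task.init(project_name="demo", task_name="current run")

    # Task.create registers a task from an explicit repo/branch *without*
    # running it -- which is what a pipeline "template" workflow needs.
    return Task.create(
        project_name="demo",
        task_name="template task",
        repo="https://github.com/example/repo.git",  # invented URL
        branch="main",
        script="train.py",  # invented script path
    )
```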

2 years ago
0 Hello everyone! I'm currently trying to set up a pipeline, and am a bit confused at a few things. Some questions I have:

But it is a bit confusing that the docs suggest accessing node.job.task even though node.job is being set to None

2 years ago
0 Hello everyone! I'm currently trying to set up a pipeline, and am a bit confused at a few things. Some questions I have:

I guess I'm just a bit confused by what the correct mental model is here. If I'm interpreting this correctly, I need to have essentially "template tasks" in my Experiments section whose sole purpose is to be copied for use in the Pipeline? When I'm setting up my Pipeline, I can't go "here are some brand new tasks, please run them", I have to go "please run existing task A with these modifications, then task B with these modifications, then task C with these modifications?" And when the pipeli...

2 years ago
0 Hello everyone! I'm currently trying to set up a pipeline, and am a bit confused at a few things. Some questions I have:

Oooo, I didn't notice the base_task_factory argument before; that seems exactly like what I'd want. I will try that now! Thank you.

I think the docstring is just a bit confusing since it seems to directly recommend accessing node.job.task to access/modify things. I believe I have found a workaround for my specific case though by using pipeline.get_processed_nodes() to grab some relevant info from the previously completed step.

2 years ago