AgitatedDove14
Moderator
49 Questions, 8124 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Hi Everyone! Is There A Way To Specify The Working Directory In A Pipeline Component? I'm Using Pipelines From Decorators, I Can Set The Repo Url Just Fine, But I'm Running Everything From A Subfolder, And The Working Dir Is Set To

Okay, this is a bit hacky but it will work:

@PipelineDecorator.component(...)
def step(...):
    import sys
    import os

    # add the subfolder to the import path so that modules living there
    # can be imported when the component runs from the repository root
    sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__)), "projects", "main"))

    from file import something
one year ago
0 Hi All! I Have Methods Inside Notebooks That I Made Available To CLIs Using nbdev
  • In a notebook, create a method and decorate it with fastai.script's @call_parse. Any chance you have a very simple code/notebook to reference (this will really help in fixing the issue)?
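For illustration, a minimal sketch of such a notebook cell, assuming the decorator comes from fastcore's script module (call_parse and Param are real fastcore.script names; the function itself is just a placeholder):

# minimal sketch, assuming fastcore is installed; call_parse/Param live in fastcore.script
from fastcore.script import call_parse, Param

@call_parse
def greet(name: Param("name to greet", str) = "world"):
    "Hypothetical notebook method that nbdev can export and expose as a CLI."
    print(f"Hello {name}!")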
2 years ago
0 Hello Everyone! I Have A Problem With Clearml. Could You Please Help Me? I Have 2 Little Projects With Total 31 Experiments. And It's 837MB Metric Stored. Where Can I Find A Detail Information About This Memory Quota Spending? I Really Don't Understand, Wh

Oh I see, yes the "metrics" include scalars / plots & console outputs.
I also think they are updated only once a day (or maybe twice a day?), so even if you delete them it will take a while for the quota to update.
(Archiving does not delete; you then need to go to the archived view and delete it from there.)

one year ago
0 Continuing On

Docker cmd is basically the docker image name, but you can add parameters as well.
For example "nvidia/cuda" or "nvidia/cuda -v /mnt/data:/mnt/data"
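For reference, the same value can also be set from code; a hedged sketch using Task.set_base_docker (the exact keyword arguments vary a bit between clearml versions, and the project/task names below are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="docker-cmd")  # placeholder names

# older single-string form: image name plus extra docker arguments
# (newer clearml versions also accept docker_image= / docker_arguments= keywords)
task.set_base_docker("nvidia/cuda -v /mnt/data:/mnt/data")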

4 years ago
0 Another Quick Question About Fileservers And Clearml-Agent: Clearml-Agent Seems To Ignore The Output Destination Set In The Task Config

@<1523701868901961728:profile|ReassuredTiger98>
Manually set both:
None
None
To where you want your files to be uploaded
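If it is easier to do from code rather than the config file, a hedged sketch using the output_uri argument of Task.init (the project/task names and bucket path below are placeholders):

from clearml import Task

# upload artifacts/models to an explicit destination instead of the default fileserver
task = Task.init(
    project_name="examples",              # placeholder
    task_name="upload-destination",       # placeholder
    output_uri="s3://my-bucket/clearml",  # placeholder destination
)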

2 years ago
0 Hi, I Have A Small Issue About Gpu Monitoring. I Run My Training Inside A Singularity Container And I Set The CUDA_VISIBLE_DEVICES Variable. However, I Get The Following Message:

Maybe permissions?!
you can test it manually by installing pynvml
and running:
from pynvml.smi import nvidia_smi

nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')

5 years ago
0 Hi, V1 Of Agent Seems To Have Removed agent.package_manager.force_repo_requirements_txt. Is This Still Available In Other Forms?

Hmm, I think the issue is here (the docker command mount)
'-v', '/tmp/.clearml_agent.de0n48pm.cfg:/root/clearml.conf'

4 years ago
0 Hello! Since Today I Get

@<1523701868901961728:profile|ReassuredTiger98> what are you getting with:

nvidia-smi

And here:

ls -la /usr/local/
4 years ago
0 Hi I Saw This On The Clearml-Agent Docs But Other Than The Docker Image, I'M Not Sure How To Integrate This With Clearml Py And Clearml-Server. Please Advise.

Hi SubstantialElk6
No need for that, you can use the helm chart (or spin them once with kubectl), then they take care of scheduling by themselves.
You can also use the k8s glue (basically spinning kubernetes pods automatically for you, based on the Tasks that you push into the ClearML queue)
https://github.com/allegroai/clearml-agent/blob/master/examples/k8s_glue_example.py

In short, two possible deployments
Static k8s pod running the agent (then the agent runs all the experiments inside t...
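A rough sketch of the k8s glue approach, based on the linked k8s_glue_example.py (the import path, constructor arguments and queue name here are assumptions; treat the example in the repo as authoritative):

# rough sketch only; see the linked k8s_glue_example.py for the authoritative version
from clearml_agent.glue.k8s import K8sIntegration  # assumed import path

k8s = K8sIntegration(ports_mode=False)  # assumed constructor arguments
# poll the ClearML queue and spin a pod per enqueued Task
k8s.k8s_daemon("default")  # "default" queue is a placeholder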

4 years ago
0 multiprocessing.pool.RemoteTraceback: """ Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, i

GreasyPenguin14 what's the clearml version you are using, and which OS & Python?
Notice this happens on the "connect_configuration" that seems to be called after the Task was closed, could that be the case?
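For illustration, a minimal sketch of the ordering in question (project/task names and the config dict are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="config-order")  # placeholders

# fine: the task is still open here
config = task.connect_configuration({"batch_size": 32})

task.close()
# calling task.connect_configuration(...) after close() is what seems to trigger the error above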

4 years ago
0 Hi Guys, We Are Running Clearml-Serving On A Kube Cluster On Aws And We Have Noticed That We Are Getting Some 502 Errors Once In A While That We Can't Seem To Trace Back.

Meaning if I create a sleep endpoint that is async

Hmm, are you calling "sleep" or "asyncio.sleep"?
Also, are you running the serving service with Gunicorn or Uvicorn?
see here:
None
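For context, a small illustration of the difference being asked about (the endpoint names are hypothetical):

import asyncio
import time

async def blocking_endpoint():
    time.sleep(5)           # blocks the whole event loop / worker for 5 seconds

async def non_blocking_endpoint():
    await asyncio.sleep(5)  # yields control, other requests keep being served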

one year ago
0 <no title>

Hi @<1523704207914307584:profile|ObedientToad56>

What would be the right way to extend this with, let's say, a custom engine that is currently not supported?

as you said 'custom' πŸ™‚
None
This is actually a custom engine (see (3) in the readme, and the preprocessing.py implementing it). I think we should actually add a specific example to custom so this is more visible. Any thoughts on what would...
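Roughly, the custom engine boils down to a Preprocess class in preprocessing.py; the sketch below is an assumption of that interface based on the custom example in the repo, so check the actual preprocessing.py for the exact method names and signatures:

# sketch of the assumed interface; verify against the repo's custom example
class Preprocess(object):
    def load(self, local_file_name):
        # load your model / artifacts from the downloaded file path
        self.model = None  # placeholder

    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # turn the request body into model input
        return body

    def process(self, data, state, collect_custom_statistics_fn=None):
        # run the custom engine itself (this is where the "custom" logic lives)
        return data

    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        # turn model output into the response payload
        return data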

3 years ago
0 Hello Again, How Can I Use The

Sure thing πŸ™‚

4 years ago
0 If I Set

sure

4 years ago
0 I Found An Interesting Error. If I Run The Following:

Hi @<1545216070686609408:profile|EnthusiasticCow4>
hmm this seems odd, and definitely looks like a bug, please report on GH πŸ™

2 years ago
0 Hope Everyone's Having A Nice Holiday Period. I've Been Debating Between Cron And The Clearml Taskscheduler. Cron Is The Solution I'm Currently Using, But I Wanted To Understand The Advantages To Using The Taskscheduler. Right Now I'm Using The Classic Cro

So if I pass a function that pulls the most recent version of a Task, it'll grab the most recent version every time it's scheduled?

Basically your function will be called, that's it.
What I'm assuming is that you would want that function to find the latest Task (i.e. query & filter based on project/name/tag etc.), clone the selected Task and enqueue it,
is that correct?
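A hedged sketch of such a callback, using standard clearml SDK calls (project/task/queue names are placeholders, and the result ordering is an assumption; adjust the filter to your setup):

from clearml import Task

def launch_latest():
    # find candidate Tasks by project/name (tags etc. can be added to the filter)
    task_ids = Task.query_tasks(project_name="my-project", task_name="train")  # placeholders
    if not task_ids:
        return
    # assume the first result is the most recent one; verify the ordering for your server
    latest = Task.get_task(task_id=task_ids[0])
    cloned = Task.clone(source_task=latest)
    Task.enqueue(cloned, queue_name="default")  # placeholder queue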

one year ago
0 Hi Guys, Just Wondering If Anyone Encountered This Error When Using The Pipeline Controller Object. I Simply Added A Step With The Step-Name And base_task_id As Flags.

Hi AverageBee39
What's the clearml-server and clearml package you are using?
(It looks like some capability is missing from the server, i.e. it needs an upgrade?!)

4 years ago
0 Heyo, After Building Some Custom Pipelining Functionality On Mlflow, I Started Looking For Better Software That Can Beat What I Created - With A Similar Amount Of Effort. Problem Has Been That Up Till Now, All I Found Could Make Things Way Better But Al

Hi ContemplativePuppy11
This is a really interesting point.
Maybe you can provide a pseudo-class abstract of your current pipeline design; this will help in understanding what you are trying to achieve and how to make it easier to get there.

2 years ago
0 So From What I Can Tell Using

Are you sure you passed add_task_init_call=True to task create?

2 years ago