AgitatedDove14
Moderator
48 Questions, 8051 Answers
Active since 10 January 2023
Last activity 7 months ago

Hi there, I've encountered a problematic behavior in Python. When defining an argument a default value of …

Hmm, I still wonder what the "correct" answer is for most people. Is an empty-string default in argparse redundant anyhow? Will anyone ever use it?
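For context, a minimal sketch (the argument names are hypothetical) of how an explicit empty-string default differs from argparse's implicit None:

import argparse

# Hypothetical arguments: one with an explicit empty-string default,
# one left to argparse's implicit None.
parser = argparse.ArgumentParser()
parser.add_argument("--name-empty", default="")
parser.add_argument("--name-none")

args = parser.parse_args([])
print(repr(args.name_empty))  # ''
print(repr(args.name_none))   # None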

4 years ago
I am trying to plot values that are either 0 or 1 (with tensorboardX.add_scalar). However, it doesn't show correctly. Any idea why? (Smoothing is 0)

Hmm, it might be sub-sampling on large scalar plots (so that we do not "kill" the UI), but I remember that it only happens above 50k samples. (When you zoom in, do you still get the 0.5 values?)
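A minimal repro sketch, assuming the tensorboardX SummaryWriter API (the tag and log directory are placeholders); going past the 50k-sample mark mentioned above should make any sub-sampling visible:

from tensorboardX import SummaryWriter

# Log an alternating 0/1 square wave; with smoothing at 0, apparent
# 0.5 values when zoomed out would point at sub-sampling in the UI.
writer = SummaryWriter(logdir="./runs/binary_demo")  # placeholder log dir
for step in range(60_000):
    writer.add_scalar("binary/value", step % 2, step)
writer.close()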

3 years ago
Hi, the following does not seem to work …

SmarmySeaurchin8, regarding (2):
I'm not sure the current visualization supports it. I mean we can put "{}", but that would imply you can edit it, which we would then have to support. Possible, but weird, and this is why:
task.connect({'a': {}, 'b': {'nested': 'value'}}) will become
'a' = '{}'
'b/nested' = 'value'
But then if you edit it to:
'a' = "{'nested': 'value'}"
'b/nested' = 'value'
you have two different ways of presenting the same type of structure...
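For reference, a runnable version of that snippet (project/task names are placeholders); ClearML flattens the nested dict into 'b/nested'-style keys in the UI:

from clearml import Task

task = Task.init(project_name="demo", task_name="connect-nested")  # placeholder names
config = {'a': {}, 'b': {'nested': 'value'}}
task.connect(config)  # UI shows 'a' = '{}' and 'b/nested' = 'value'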

3 years ago
Folks, could you please clarify/help? Do I understand correctly that if --docker is enabled, every new experiment will be executed in a dedicated agent worker container? Also I see for …

Hi UnevenOstrich23

if --docker is enabled, does that mean every new experiment will be executed in a dedicated agent worker container?

Correct

I think the missing part is how to specify the docker image for the experiment?
If this is the case, in the web UI, clone your experiment (which will create a draft copy that you can edit), then in the Execution tab, scroll down to "Base Docker Image" and specify the docker image to use.
Notice that you can also add flags after the docker im...
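The same can also be set from code; a sketch assuming the Task.set_base_docker API (the image and flags below are examples only, not required values):

from clearml import Task

task = Task.init(project_name="demo", task_name="docker-example")  # placeholder names
# Container image and extra docker flags are illustrative values.
task.set_base_docker(
    docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04",
    docker_arguments="--ipc=host",
)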

3 years ago
I have set …

What if the preexisting venv is just the system Python? My base image is python:3.10.10 and I just pip install all requirements in that image. Does that still not avoid the venv?

It will basically create a new venv inside the container, forking the existing preinstalled stuff (i.e. the new venv already has everything the system Python has preinstalled).
Then it will call "pip install" on all the "installed packages" of the Task,
which should just check everything is there and install nothing...

6 months ago
Hi, is there a concept of an agent taking more than one job?

Actually we just added venv support as well. The reasoning is/was: inside a docker it is easier to separate the running processes, whereas with venvs we had to support multiple venvs running at the same time and reuse of those venvs (just a bit more logic). Anyhow, this is now supported :)

3 years ago
I have set up a …

Q. Would someone mind outlining the steps for configuring the default storage locations, such that any artefacts or data pushed to the server are stored by default on the Azure Blob Store?

Hi VivaciousPenguin66
See my reply here on configuring the default output URI on the agent: https://clearml.slack.com/archives/CTK20V944/p1621603564139700?thread_ts=1621600028.135500&cid=CTK20V944
Regarding permission setup:
You need to make sure you have the Azure blob credenti...
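Per task, the default output location can also be set in code; a sketch with a placeholder Azure URI (credentials still come from clearml.conf):

from clearml import Task

# Artifacts and models for this task default to the Azure blob store.
task = Task.init(
    project_name="demo",
    task_name="azure-output",
    output_uri="azure://mystorageaccount.blob.core.windows.net/mycontainer",  # placeholder
)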

3 years ago
I have set up a …

I suppose the same would need to be done for any client PC running clearml, such that you are submitting dataset upload jobs?

Correct

That is, the dataset is perhaps local to my laptop, or on a development VM that is not in the clearml system, but from there I want to submit a copy of a dataset; then I would need to configure the storage section in the same way as well?

Correct

3 years ago
With …

I am just about to move house, which is stressful enough without a global pandemic(!), so until that's completed I won't commit to anything.

Sure man 🙂 no rush, I appreciate the gesture regardless of the outcome
Many thanks!

3 years ago
With …

Sounds good.
BTW, when the clearml-agent is set to use "conda" as the package manager, it will automatically install the correct cudatoolkit in any new venv it creates. The cudatoolkit version is picked up directly when "developing" the code, assuming you have conda installed as the development environment (basically you can transparently do end-to-end conda and not worry about CUDA at all).

3 years ago
Hi all! I'm trying to set up remote launching of training scripts on the ClearML Autoscaler, and I can't figure out one thing: how to make the remote ClearML agent do …

Hi @<1716987933514272768:profile|SuccessfulPuppy43>

How to make remote ClearML agent do

pip install -e .

In theory there is no need to do that: clearml-agent adds the repo root folder to the python path.
If you insist on actually installing it, try adding a requirements.txt-compatible line to your "installed packages" section:

-e .
5 months ago
I have set …

@<1689446563463565312:profile|SmallTurkey79> could you attach the full log of the Task?
Also, I would recommend export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 (not "true"); binary env vars are usually 0/1.
(I can see that the docs here: None
never mention it; I'll ask them to add that.)

6 months ago
I have set …

Of what task? I'm running lots of them and benchmarking.

If you are skipping every installation it should be the same, because if you set CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 it will not install anything at all.
This is why it's odd to me...
wdyt?

6 months ago
With …

I find it quite difficult to explain these ideas succinctly; did I make any sense to you?

Yep, I think we are totally on the same wavelength 🙂

However, it also seems to be not too prescriptive,

One last question, what do you mean by that?

3 years ago
With …

DM me 🙂

3 years ago
I've tried setting up a ClearML application on OpenShift using the Helm chart, but the pods cannot come up because they are trying to write to files and directories that aren't open to non-root users during their setup. This is a problem on OpenShift because …

(Also, I'm a bit newer to this world; what's wrong with OpenShift?)

It's the most difficult Kubernetes flavor to work with 🙂

We've already tried that but it didn't really change ...

Can you provide the full log, as well as how you created the pods?

2 years ago
Hi all! Question around resource management using …

Oh, that makes sense. This depends on how you set up the ClearML k8s glue (because the resource allocation is done by k8s). A good hack to limit the number of containers per GPU is to set a RAM limitation per pod; then k8s will know to limit the number of pods on the same GPU machine.
wdyt?

2 years ago
Good morning folks, I am setting up ClearML on a (self-hosted) K8s cluster using the …

SarcasticSquirrel56

if I configure manually the pods for the different nodes, how do I make clearml server aware that those agents exist?

Basically the agents register themselves on your clearml-server, and they register which queue(s) they listen to. In other words, the interface for choosing between different types of machines/GPUs is enqueuing the Task to different queues.
For example: Queue(1): "CUDA11_GPUx1", Queue(2): "CUDA10_GPUx1"
Make sense?

EDIT:

I guess to achieve what I w...
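A sketch of what enqueuing to a named queue looks like in code, assuming the queue names from the example above (the task id is a placeholder):

from clearml import Task

# Clone a template task and send it to the CUDA11 single-GPU queue.
template = Task.get_task(task_id="<template-task-id>")  # placeholder id
cloned = Task.clone(source_task=template)
Task.enqueue(task=cloned, queue_name="CUDA11_GPUx1")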

2 years ago
Hi all! Question around resource management using …

We actually plan to create different queues for different types of workloads; we are observing what the actual usage is, to define which types of workloads make sense for us.

That sounds like a great path to take. It will make it very clear for users what they will be getting and why they should use a specific queue.

As for the memory, yes, the reasoning is clear. The main thing we'll have to see is how to define the limits, because we have nodes with quite different resources availab...

2 years ago
Is there a way to access a dataframe logged using report_table from a task instance instantiated using Task.get_task(id='.....')? I have: t = Task.get_task(id='....') and I am looking for something along the lines of: df = t.get_table('table name')

Hi ThickDove42,
Yes, but by the time you are able to access it, it will be in display form (plotly), not very convenient.
If this is something you need to re-use, I would argue that it is an artifact and should be stored as an artifact (then accessing it is transparent). Obviously you can both report it as a table and upload it as an artifact, no harm in that.
What do you think?
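A sketch of that artifact route (project/task names are placeholders): report the table for display and also upload the dataframe as an artifact, then fetch it back later:

import pandas as pd
from clearml import Task

# Producing task: report for display AND store as a reusable artifact.
task = Task.init(project_name="demo", task_name="table-demo")  # placeholder names
df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
task.get_logger().report_table(title="table name", series="v1", iteration=0, table_plot=df)
task.upload_artifact(name="table name", artifact_object=df)

# Later, from any process:
t = Task.get_task(task_id="....")  # id elided, as in the question
df_back = t.artifacts["table name"].get()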

4 years ago