AgitatedDove14
Moderator
49 Questions, 8122 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Hi Guys, When Reporting Debug Images, Is There Any Way To Use A String Instead Of An Int In

The only workaround I can think of is:
series = series + 'IoU>X'
It doesn't look that bad 🙂
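For context, a minimal sketch of that workaround, assuming the image is reported through Logger.report_image (project/task names and the IoU value below are placeholders):

import numpy as np
from clearml import Task, Logger

task = Task.init(project_name='examples', task_name='debug image series')  # hypothetical names
iou_threshold = 0.5  # hypothetical value
series = 'validation' + ' IoU>' + str(iou_threshold)  # any string works as the series label

Logger.current_logger().report_image(
    title='debug samples',
    series=series,  # string series instead of an int
    iteration=0,
    image=np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8),
)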

5 years ago
0 Hey Folks, When I Run

According to you the VPN shouldn't be a problem right?

Correct, as long as all parties are on the same VPN it should work; all the connections are plain HTTP, so the communication is basically trivial.

4 years ago
0 Similar Question But When Running A Pipeline, Can I Control The Tags That The Tasks A Pipeline Creates?

pipeline, can I control the tags that the tasks a pipeline creates?

add_pipeline_tags

adds tags from the pipeline to the tasks, I suppose? But I also need to clear existing tags in those created tasks

add_pipeline_tags will add the unique ID of the pipeline execution. If you want to add specific tags you can use task_overrides and provide:
pipe.add_step(..., task_overrides={'tags': ['my', 'tags']})
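A slightly fuller sketch of that call, assuming a PipelineController built around an existing base task (project, task and tag names below are placeholders):

from clearml import PipelineController

pipe = PipelineController(
    name='my pipeline',  # hypothetical
    project='examples',  # hypothetical
    version='1.0.0',
)
pipe.add_step(
    name='train',
    base_task_project='examples',         # hypothetical base task location
    base_task_name='training base task',  # hypothetical
    task_overrides={'tags': ['my', 'tags']},  # sets the tags field on the created task
)
pipe.start()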

4 years ago
0 Hi, I’M Getting This Error When I Try To Run Task On A Remote Agent With Docker Mode Web Ui:

Hi BurlyRaccoon64
Yes we did, the latest clearml-agent solves the issue, please try:
'pip3 install -U --pre clearml-agent'

3 years ago
0 Hello, Is There A Way To Disable Dataset Caching So That When

FreshParrot56 we could add this capability, but the main caveat is that if your version depends on multiple parent versions, you still need to download and extract all the parent versions, which means that when you clear them you might hurt later performance. Does that make sense? What is the use-case / scenario for you?

2 years ago
0 Hi Everyone. I Have An Issue With The Simple Pipeline - It Runs Two Similar Nn Training Steps (Tf2.3, Windows10, Python 3.7) With Only Difference Is A Batch Size. I'M Running First Separately Each Step To Have Them In Clearml Project Page. Then I Run Pipe

That makes no sense to me?!
Are you absolutely sure nntrain is executed on the same queue? (Basically, could it be that nntrain is executed on a different queue in these two cases?)

4 years ago
0 Is There A Way To Set The Name/Path Of The

I think this is the temp requirements file it creates, not your requirements file. If you attach a log here with the "installed packages" section, maybe we could help debug it.

one year ago
0 I .

@MelancholyElk85 I just ran a single-step pipeline and it seemed to use the "base_task_id" without cloning it...
Any insight on how to reproduce?
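For reference, a minimal sketch of that single-step repro (queue, project name and task ID below are placeholders); add_step accepts a base_task_id pointing at an existing task:

from clearml import PipelineController

pipe = PipelineController(name='single step pipeline', project='examples', version='1.0.0')  # hypothetical
pipe.add_step(
    name='step1',
    base_task_id='aabbccddeeff00112233445566778899',  # hypothetical existing task ID
)
pipe.start(queue='services')  # hypothetical queue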

3 years ago
0 Hello! I Get The Following Error In Results->Console After A Task Is Sent For Remote Execution (Using Sdk):

I want each remote task to execute one instance of the hydra multirun, but I suspect the remote will try to run the full multirun by itself

if config.clearml.remote and task.running_locally():
    task.execute_remotely(
        queue_name=config.clearml.queue_name,
        clone=True,
        exit_process=False,
    )
    return

I think this ensures the local execution actually triggers the remote one, so it should be as you expect, no?
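For context, a sketch of the surrounding entry point, assuming a Hydra app whose config carries a clearml section (the config fields and project/task names below are placeholders taken from the snippet above):

import hydra
from omegaconf import DictConfig
from clearml import Task

@hydra.main(config_path='conf', config_name='config')
def main(config: DictConfig) -> None:
    task = Task.init(project_name='examples', task_name='hydra multirun step')  # hypothetical names
    if config.clearml.remote and task.running_locally():
        # Clone the task and enqueue the clone; keep the local process alive so the
        # multirun loop can continue and launch the next configuration.
        task.execute_remotely(
            queue_name=config.clearml.queue_name,
            clone=True,
            exit_process=False,
        )
        return
    # ... actual training code runs here on the remote agent ...

if __name__ == '__main__':
    main()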

2 years ago
0 I’M Trying To Get A Copy Of A Model Through Clearml Which Is Stored In S3:

I still think the issue is getting boto3 credentials

It might be the case
Are you using clearml-agent or are you running it manually?

3 years ago
0 Hey, Everybody! I Am A New User Of The Clearml Service, And I Would Like To Ask You About Your Experience With Clearml Working With An Aws Virtual Machine. My Problem Is That When The Aws Virtual Machine Is Killed, My Pipelines And Scheduling Stop Working

Ohh, then use the AWS autoscaler, it is basically what you want: it spins up an EC2 instance and sets up an agent there, and if the EC2 instance goes down (for example, if it is a spot instance), it will spin it up again automatically with the running Task on it.
wdyt?

one year ago
0 Hey All, Is There A Way To Setup Scalar Plotting So That Series On The Same Scalar Plot Will Have Different Colors?

HighOtter69 inside the legend, click on the color rectangle next to the series name; you can change the color of the series on the graph. This preference is stored, so it will always remember your color choices (yes, even when logging from another machine 🙂).

4 years ago
0 Clearml_Agent_Git_User Is This My Github Username? Or I Need To Setup A Custom Git Server?

CLEARML_AGENT_GIT_USER is your git user (on whatever git host/server you are using: GitHub, GitLab, Bitbucket, etc.)

4 years ago
0 Hi All - I Have A Question To Ask (And Not Sure If There Is A Channel For Faqs So Sorry For Putting It Here) ... I Am Using Trains In Combination With Pycharm'S Remote Debugging. I Have The Pycharm Plugin Installed. When The Experiment Ends, I Get

Yes, that's the reason. Basically there is a background thread analyzing the code; at the end of the execution, if it is still running (hence the question regarding execution time), we give it an extra 10 seconds to come up with answers, otherwise we terminate it so the code won't get stuck. Makes sense to you?

5 years ago
0 Hi All! Question Around Resource Management Using

Containers (and Pods) do not share GPUs. There's no overcommitting of GPUs.

Actually I am as well; this is Kubernetes doing the resource scheduling, and Kubernetes decided it is okay to run two pods on the same GPU, which is cool, but I was not aware Nvidia had already added this feature (I know it was in beta for a long time):
https://developer.nvidia.com/blog/improving-gpu-utilization-in-kubernetes/
I also see they added dynamic slicing and Memory Protection:
Notice you can control ...

2 years ago
0 Hi All, Is There A Way To Schedule The Tasks From The Queue Onto The Gpu Instances Based On Factors Such As Gpu Utilisation, Number Of Cpu Cores Present, Free Memory Or Custom Parameters Such As Priority Of The Task, Estimated Time Etc?

I am trying to see if the user can submit a list of resource requirements (e.g. 4 GPUs, 12 cores, 100 GB disk space) for the task when queuing it, and have the agents pick up these tasks if they have the requested resources. With this, the user need not think about which queue to send the task to. The users just state what they need and the agents do the scheduling for them.

Can I assume we are talking Kubernetes under the hood for the resource allocation?

4 years ago
0 Hi, I Have A Task Which Uses Hydra For Configuration. I Want To Add This Taks To A Pipeline, And Pass The Full Hydra Config Objects To The Task. Is There A Way To Do It? I Get “Parameters Should Be In The Form Of “`Section-Name`/Parameter”, Example: “Args

Okay, this is a bit tricky (and come to think about it, we should allow a more direct interface):

pipe.add_step(
    name='train',
    parents=['data_pipeline', ],
    base_task_project='xxx',
    base_task_name='yyy',
    task_overrides={
        'configuration.OmegaConf': dict(
            value=yaml.dump(MY_NEW_CONFIG),
            name='OmegaConf',
            type='OmegaConf YAML',
        )
    },
)

Notice that if you had any other configuration on the base task, you should add them as well (basically it overwrites the configurati...

3 years ago
0 Hello, We Are Currently Working On A Hyperparameter Tuning Job For Object Detection Following This Tutorial

DeterminedToad86
Yes, I think this is the issue: on SageMaker a specific compiled version of torchvision was installed (probably as part of the image).
Edit the Task (before enqueuing) and change the torchvision URL to:
torchvision==0.7.0

Let me know if it worked.

4 years ago
0 Hey, I’M Getting A Lot Of These

CourageousKoala93 when you call Task.close() it will mark the task as completed; there is no need to do that manually. The idea with mark_completed is that you can forcefully change the state if needed, or externally stop the task and mark it completed. Make sense?
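For illustration, a minimal sketch of the two flows (project/task names below are placeholders), assuming mark_completed() is used on a task fetched from the server:

from clearml import Task

# Normal flow: closing the task marks it completed automatically.
task = Task.init(project_name='examples', task_name='close example')  # hypothetical names
# ... work ...
task.close()

# Forceful flow: fetch some other task externally and mark it completed.
other = Task.get_task(project_name='examples', task_name='some other task')  # hypothetical
other.mark_completed()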

3 years ago
0 Is There Any Way To Clear The Installed Packages Of A Task Programmatically? (I.E. Using The Python Sdk And Not The Ui)

GiddyTurkey39
BTW: you can always add the missing package via code:
Task.add_requirements('torch', optional_version)
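As an illustration, a minimal sketch of that call (the version pin and names below are just placeholders); Task.add_requirements is typically called before Task.init so the extra requirement is recorded with the task:

from clearml import Task

Task.add_requirements('torch')              # no version: let the agent resolve it
# Task.add_requirements('torch', '1.13.1')  # or pin a specific version (illustrative)

task = Task.init(project_name='examples', task_name='requirements example')  # hypothetical names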

4 years ago
0 Hi, Expanding On

Regarding the limit interface, let me check; I think this is being worked on (i.e. a nice interface that should be pushed in the next few days). Let me get back to you on this one.

How will imposing an instance limit prevent or allow the --order-fairness feature, for example, which exists when running the clearml-agent version compared to the k8s_glue_example version?

A bit of background on how the glue works:
It pulls jobs from the clearml queue, then it prepares a k8s job, and launches the k8s jobs...

4 years ago
0 One More Thing, I'M Trying To Take Full Advantage Of The Controller, But I Run Into A Problem In My Use Case. The Controller Is Super Useful For Creating A Dag Of Tasks Which Is A Behaviour Of Interest. But Issues Rise When The Tasks Are Changing. Not On

That is exactly it: the trains-agent is replicating the code from the git repo and trying to apply the git diff (see the uncommitted changes section). Obviously it failed 🙂

4 years ago