AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 5 months ago

0 Two Annoying Visual Bugs In Clearml Server Ui After Latest Update:

DilapidatedDucks58 I'm assuming clearml-server 1.7 ?
I think both are fixed in 1.8 (due to be released either next week, or the one after)

one year ago
0 When We Run Our Code And It Communicate With Clearml Server, Is There Some Way We Can Log That Api Request? Like What Endpoint Is It And What Payload It Sends To That Endpoint? Thanks

Do you have a link on how to set up a task scheduler to run in service mode in k8s?

Basically, spin up the agent pod and add an argument to the agent itself (this is the --service-mode):
https://clear.ml/docs/latest/docs/clearml_agent#services-mode

2 years ago
0 I Am Not Using Tensorflow, However The Experiment Shows Some (Useless) Data, Is The Only Way To Get Rid Of It To Specify

I'm assuming some package imports absl (the TF "define" package), and that's the reason you see the TF defines. Does that make sense?

3 years ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn't It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'm Using Clearml V1.1.3Rc0 And Clearml-Agent 1.1.0

Okay so my thinking is, on the pipelinecontroller / decorator we will have:
abort_all_running_steps_on_failure=False (if True, on step failing it will abort all running steps and leave)
Then per step / component decorator we will have
continue_pipeline_on_failure=False (if True, on step failing, the rest of the pipeline dag will continue)
GiganticTurtle0 wdyt?
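A pure-Python sketch of the two proposed flags (the flag names come from the proposal above, but they are a proposal, not an existing ClearML API; the sequential loop is a simplification of a real pipeline DAG):

```python
def run_pipeline(steps, abort_all_running_steps_on_failure=False):
    """Sequential simplification of the proposed semantics.

    steps: list of (callable, continue_pipeline_on_failure) pairs.
    Returns (completed step names, aborted step names).
    """
    completed, aborted = [], []
    for i, (step, continue_on_failure) in enumerate(steps):
        try:
            step()
            completed.append(step.__name__)
        except Exception:
            if continue_on_failure:
                continue  # this step may fail; the rest of the DAG keeps running
            if abort_all_running_steps_on_failure:
                # abort everything still pending and leave
                aborted = [s.__name__ for s, _ in steps[i + 1:]]
            break
    return completed, aborted


def a(): pass
def b(): raise RuntimeError('step failed')
def c(): pass

# b fails but is marked continue_pipeline_on_failure=True, so c still runs
done, dropped = run_pipeline([(a, False), (b, True), (c, False)])
```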

2 years ago
0 Hi Everyone, Just Setup Trains.. Was Very Easy To Setup. Was Able To Run An Experiment With It. Question: Is It Possible To Turn Off The Code Tracking (Anything Related To Git) ?

Hmmm, I'm not sure that you can disable it. But I think you are correct it should be possible. We will add it as another argument to Task.init. That said, FriendlyKoala70 what's the use case for disabling the code detection? You don't have to use it later, but it is always nice to know :)

4 years ago
0 Hi, I Noted That Clearml-Serving Does Not Support Spacy Models Out Of The Box And That Clearml-Serving Only Supports Following;

2, 3) The question is whether the serving changes from one tenant to another, does it?

2 years ago
0 I Hit An Issue That I Cannot See My Matplotlib Plot, But It Was Shown In The Panel. Any Idea?

EnviousStarfish54

it seems that if I don't use plt.show() it won't show up in Allegro, is this a must?

Yes, at plt.show / plt.savefig Trains will capture the plot and send it to the backend.
BTW: when you hover over the empty plot area, do you see the plotly objects, or is it all blank ?

4 years ago
0 Does Clearml Have A Good Story For Offline/Batch Inference In Production? I Worked In The Airflow World For 2 Years And These Are The General Features We Used To Accomplish This. Are These Possible With Clearml?

Hi @<1541954607595393024:profile|BattyCrocodile47>

Does clearML have a good story for offline/batch inference in production?

Not sure I follow, you mean like a case study ?

Triggering:

We'd want to be able to trigger a batch inference:

  • (rarely) on a schedule
  • (often) via a trigger in an event-based system, like maybe from an AWS Lambda function

(2) Yes, there is a great API for that, check out the GitHub Actions, it is essentially the same idea (REST API also available) ...
one year ago
0 Hi, I'M Following The Instructions For

This really makes little sense to me...

Can you send the full clearml-session --verbose console output ?

Something is not working as it should obviously, console output will be a good starting point

2 years ago
0 Hi Guys, With The New Venv Caching Available In Clearml, I Have The Following Problem: I Force My Pip Requirements To Be:

JitteryCoyote63 could you test the latest RC 😉
pip install clearml-agent==0.17.2rc4

3 years ago
0 Hi I Came Across Some Inconsistency In The Iteration Reporting In The Clearml With Pytorch-Lightning When Calling Trainer.Fit Multiple Times, Before I Dive In I Wondered If There Is A Known Issue Related To This?

when you are running the n+1 epoch you get the 2*n+1 reported
RipeGoose2 like twice the gap, i.e. internally it adds an offset of the last iteration... is this easily reproducible?
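A pure-Python sketch of the suspected mechanism (my guess at the symptom being described, not actual ClearML or Lightning internals): if an offset of the last reported iteration is added on top of a step counter that already continues across trainer.fit calls, the n+1 step comes out as 2*n+1.

```python
class Reporter:
    """Toy iteration reporter that mimics the double-counting symptom."""

    def __init__(self):
        self.offset = 0          # offset applied to every reported step
        self.last_reported = 0   # last iteration number actually reported

    def start_new_fit(self):
        # the suspected bug: a fresh offset is added even though the
        # framework's global step already continues from the previous fit
        self.offset = self.last_reported

    def report(self, global_step):
        self.last_reported = self.offset + global_step
        return self.last_reported


rep = Reporter()
# first fit: 3 steps, global step ends at n = 3
for step in range(1, 4):
    rep.report(step)             # reports 1, 2, 3

# second fit: the global step keeps counting from n,
# but an offset of n is added on top of it as well
rep.start_new_fit()
reported = rep.report(4)         # step n+1 = 4 is reported as 3 + 4 = 7, i.e. 2*n+1
```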

3 years ago
0 Hi

I'd prefer to use config_dict, I think it's cleaner

I'm definitely with you

Good news:

new best_model is saved, add a tag best

Already supported (you just can't see the tag, but it is there :))

My question is, what do you think would be the easiest interface to tell (post/pre) store, tag/mark this model as best so far (btw, obviously if we know it's not good, why do we bother to store it in the first place...)
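One possible shape for such an interface (purely a sketch of the idea being discussed here, not an existing API; all names are illustrative): a post-store callback that decides, right after a model is stored, whether to tag it as best so far.

```python
class ModelStore:
    """Toy stand-in for the model registry being discussed."""

    def __init__(self, post_store_callback=None):
        self.models = []            # [name, score, tags] per stored model
        self.best_score = None
        self.post_store_callback = post_store_callback

    def store(self, name, score):
        tags = []
        self.models.append([name, score, tags])
        if self.post_store_callback:
            # let user code tag/mark the model right after it is stored
            self.post_store_callback(self, name, score, tags)


def tag_best(store, name, score, tags):
    # mark the model as best-so-far immediately after storing it
    if store.best_score is None or score > store.best_score:
        store.best_score = score
        tags.append('best')


store = ModelStore(post_store_callback=tag_best)
store.store('epoch-1', 0.71)   # first model -> tagged 'best'
store.store('epoch-2', 0.69)   # worse -> no tag
store.store('epoch-3', 0.83)   # new best -> tagged 'best'
```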

4 years ago
0 Hi, I Am Trying To Upload A Plot To An Existing Task Using The

Weird that this code is also uploading to the 'Plots'. I replicated the same thing as my main script, but main script is still uploading to Debug Samples.

SmarmyDolphin68 are you saying the same code behaves differently ?

3 years ago
0 Bug?

Hi PanickyMoth78

dataset name is ignored if use_current_task=True

Kind of, it stores the Dataset on the Task itself (then dataset.name becomes the Task name), actually we should probably deprecate this feature, I think this is too confusing?!
What was the use case for using it ?

one year ago
0 Hi! I Have Local Minio Setup, Via Minio Browser I Can Upload 50-100 Mb Per Second As Its Local. But When I Try To Use Task.Upload_Artifact It Uploads 500 Kb Per Second. Does Anyone Have An Idea About This?

What if I register the artifact manually?

task.upload_artifact('local folder', artifact_object=' ')

This one should be quite quick, it's updating the experiment

4 years ago
0 What Is The Suggested Way Of Running Trains-Agent With Slurm? I Was Able To Do A Very Naive Setup: Trains-Agent Runs A Slurm Job. It Has The Disadvantage That This Slurm Job Is Blocking A Gpu Even If The Worker Is Not Running Any Task. Is There An Easy Wa

HealthyStarfish45 could you take a look at the code, see if it makes sense to you?
What I'm getting to, is maybe we build a template, then you could fill in the gaps ?

3 years ago
0 Hi, We Are Having An Interesting Issue Here. We Serve Many Users And Each User Has Their Own Credentials In Accessing The Private Git Repo. We Can't Seem To Find A Way For The End User To Pass In Their Git Credentials When They Run Their Codes In Both Age

Hmm that sounds like the agent needs to access a vault with credentials per user, unfortunately this is not covered in the open-source 😞 I "think" this is supported in the enterprise version as part of the permission management

3 years ago
0 Hello All! Quick Question, Do Any Of You Know Of A Clean Way To Access The Clearml Logger Inside Of A

I ended up using

task = Task.init(continue_last_task=task_id)

to reload a specific task and it seems to work well so far.

Exactly, this will initialize and auto-log the current process into the existing task (task_id). Without the continue_last_task argument it will just create a new Task and auto-log everything to it 🙂

one year ago
0 I Want To Execute A Script Via Trains-Agent, But I Want To Be Able To Provide The Location Of A Config File By Specifying The Path Before Trains-Agent Executes The Script (Like A Flag Or Command Line Argument). How Can I Accomplish This?

Can I change the parameters before executing the draft task

Yes you can, after you clone the experiment everything becomes editable, so you can edit the config in the UI.
For example, let's assume I have config.yml, and in my code I do:
my_file = task.connect_configuration('config.yml')
with open(my_file, 'rt') as f:
    ...

Then after I clone it in the UI and edit the configuration, when it is executed remotely,
my_file will contain the content of the configuration as s...
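A self-contained sketch of the pattern (the stub below stands in for task.connect_configuration, which locally hands back the original path; when executed remotely it would instead return a file holding the UI-edited content; the config content here is made up for illustration):

```python
import os
import tempfile


def connect_configuration(path):
    # Stand-in for task.connect_configuration(): this stub only models the
    # local behavior (return the original path unchanged). Remotely, ClearML
    # would return a temp file containing the possibly UI-edited content.
    return path


# create a config file as in the example above
cfg_path = os.path.join(tempfile.mkdtemp(), 'config.yml')
with open(cfg_path, 'wt') as f:
    f.write('learning_rate: 0.01\n')

# user code stays identical locally and remotely: always read my_file
my_file = connect_configuration(cfg_path)
with open(my_file, 'rt') as f:
    content = f.read()
```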

3 years ago
0 Hi, Is It Possible To Resume An Experiment That Stopped Unexpectedly, By Using A Checkpoint Of The Model?

I would clone the first experiment, then in the cloned experiment, I would change the initial weights (assuming there is a parameter storing that) to point to the latest checkpoint, i.e. provide the full path/link. Then enqueue it for execution. The downside is that the iteration counter will start from 0 and not the previous run.

4 years ago
0 Hello Everyone! I'M Trying To Add Functionality Where I Need To Rotate Artifacts. Psedocode:

Hi GrotesqueDog77

and after some time I want to delete artifact with

You can simply upload with the same local file name and same artifact name, it will override the target storage. wdyt?
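A pure-Python sketch of the overwrite-based rotation idea (the dict stands in for the artifact storage; the rotation scheme and all names are illustrative, only the overwrite-on-same-name behavior is what the answer above describes):

```python
# Fake artifact store: uploading under an existing artifact name replaces
# the stored object, mirroring how re-uploading with the same artifact
# name overrides the target storage.
store = {}


def upload_artifact(name, artifact_object):
    store[name] = artifact_object   # same name -> overwrite target storage


def rotate(name, new_obj, history=3):
    # shift name-1 -> name-2 -> ... before storing the newest,
    # so only the last `history` objects ever exist in storage
    for i in range(history - 1, 0, -1):
        if f'{name}-{i}' in store:
            store[f'{name}-{i + 1}'] = store[f'{name}-{i}']
    upload_artifact(f'{name}-1', new_obj)


for epoch in range(5):
    rotate('checkpoint', f'weights-epoch-{epoch}')

print(sorted(store))   # ['checkpoint-1', 'checkpoint-2', 'checkpoint-3']
```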

one year ago