AgitatedDove14
Moderator
49 questions, 8122 answers
Active since 10 January 2023
Last activity one year ago
Reputation: 0
Badges: 25 × Eureka!
Hello, I'm using a virtual environment inside my JupyterHub server along with ClearML. Whenever I create any task the "uncommitted changes" are the contents of...

@<1535793988726951936:profile|YummyElephant76>

Whenever I create any task the "uncommitted changes" are the contents of ipykernel_launcher.py, is there a way to make ClearML recognize that I'm running inside a venv?

This sounds like a bug, it should have the entire notebook there, no?

2 years ago
Hi, I faced a silly error when I run the python script with task = trains.init(project_name='my project', task_name='my task'). The task goes to the Trains server, but in the Trains server, in the installed packages part, one of the lines...

Yes, I mean trains-agent. Actually I am using 0.15.2rc0. But I am using local files, I mean I cloned the trains and trains-agent repos and installed them. Their versions are 0.15.2rc0.

I see, that's why we get the git ref, not the package version.

5 years ago
I am not using TensorFlow, however the experiment shows some (useless) data, is the only way to get rid of it to specify...

I'm assuming some package imports absl (the TF defines package), and that's the reason you see the TF defines. Does that make sense?
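
If the goal is simply to drop those defines from the experiment, a minimal sketch (hypothetical project/task names) is to disable the TensorFlow auto-logging when initializing the task:

from clearml import Task

# a minimal sketch: disable the TensorFlow auto-logging so the absl/TF
# defines are not connected to the task
task = Task.init(
    project_name="examples",       # hypothetical project name
    task_name="no tf defines",     # hypothetical task name
    auto_connect_frameworks={"tensorflow": False},
)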

4 years ago
Hello everyone, I'm a newcomer to ClearML. I have a question related to...

Hi MortifiedCrow63
I have to admit this is very strange, I think the fact it works for the artifacts and not for the model is kind of a fluke ...
If you use "wait_on_upload" argument in the upload_artifact you end up with the same behavior. Even if uploaded in the background, the issue is still there, for me it was revealed the minute I limited the upload bandwidth to under 300kbps.It seems the internal GS timeout assumes every chunk should be uploaded in under 60 seconds.
The default chunk...
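
For reference, a minimal sketch of the wait_on_upload usage mentioned above; the project/task names and the artifact object are hypothetical:

from clearml import Task

task = Task.init(project_name="examples", task_name="artifact upload")  # hypothetical names

# block until the artifact upload completes instead of letting it run
# in the background
task.upload_artifact(
    name="stats",
    artifact_object={"accuracy": 0.9},
    wait_on_upload=True,
)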

4 years ago
Hi all, playing around with HP optimisation, and I notice in the HyperParameterOptimizer class itself, the...

Hi LudicrousParrot69
I guess you are right, this is not a trivial distinction:
min: means we are looking for the minimum value of a specific scalar. Meaning for 1.0, 0.5, 1.3 -> the optimizer will get these direct values and will optimize based on that.
global min: means the optimizer is getting the running minimum of the specific scalar. With the same example, 1.0, 0.5, 1.3 -> the HPO optimizer gets 1.0, 0.5, 0.5.
The same holds for max/global_max, makes sense?
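
To make the distinction concrete, here is a tiny plain-Python illustration using the numbers above (this only illustrates the bookkeeping, it is not the optimizer's actual code):

# scalar values reported by a task, as in the example above
reported = [1.0, 0.5, 1.3]

# "min": the optimizer receives the raw reported values
direct_values = list(reported)          # -> [1.0, 0.5, 1.3]

# "global min": the optimizer receives the running minimum seen so far
running_min = []
best = float("inf")
for value in reported:
    best = min(best, value)
    running_min.append(best)            # -> [1.0, 0.5, 0.5]

print(direct_values, running_min)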

4 years ago
Hey, how can I add a private key in order to let the ClearML agent clone from a private git repository?

Well, it's only when adding a name to the template. Nonetheless it should not break it 🙂

4 years ago
Different question about warnings: I'm getting (infrequently) this warning, followed by my script hanging

Okay, progress.
What are you getting when running the following from the git repo folder:
git ls-remote --get-url origin

3 years ago
Hey guys, is there a ready script that can delete all models from S3 (or other storage) that are related to deleted or archived experiments?

what if for some old tasks I get WARNING:root:Could not delete Task ID=a0908784a2a942c3812f947ec1f32c9f, 'Task' object has no attribute 'delete'? What's the best way of cleaning them?

This seems like an old SDK, no?
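
With a recent clearml SDK (older versions did not expose Task.delete), a cleanup sketch could look like the following; the project name and the archived-tasks filter are assumptions, so check them against your installed SDK version:

from clearml import Task

# fetch the archived tasks of a project (the system_tags filter is an
# assumption - adjust it to however you select the tasks to clean)
tasks = Task.get_tasks(
    project_name="my project",                     # hypothetical project
    task_filter={"system_tags": ["archived"]},
)

for t in tasks:
    # delete the task together with its artifacts and models
    t.delete(delete_artifacts_and_models=True)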

4 years ago
Sorry for always posting such cryptic problems. I managed to create a docker-compose file that runs ClearML

@<1541954607595393024:profile|BattyCrocodile47> Not restarting the docker, but restarting the Docker service (on Mac it's an app; I think there is an option in the Docker app to do that).

one year ago
Hello! Is there a way to override the configuration vault parameters of a pipeline step with the add_function_step method? I see in the docs that the add_step method has the option to override the vault with the configuration_overrides argument, but not add_f...

Hi @<1688721797135994880:profile|ThoughtfulPeacock83>

the configuration vault parameters of a pipeline step with the add_function_step method?

The configuration vault is set per user/project/company and applied at execution time.
What would be the value you need to override, and what is the use case?

one year ago
Hello! Is there a way to override the configuration vault parameters of a pipeline step with the add_function_step method? I see in the docs that the add_step method has the option to override the vault with the configuration_overrides argument, but not add_f...

OH I see. I think you should use the environment variable to override it:
None
so add to the docker args something like

-e CLEARML_AGENT__AGENT__PACKAGE_MANAGER__POETRY_INSTALL_EXTRA_ARGS=
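
For example, a sketch of wiring that variable into a step created with add_function_step through its docker_args parameter; the pipeline, step, and function names are hypothetical, and the variable's value is left empty on purpose:

from clearml import PipelineController

def train_step():  # hypothetical step body
    print("training...")

pipe = PipelineController(name="my pipeline", project="my project", version="1.0.0")

pipe.add_function_step(
    name="train",
    function=train_step,
    docker="python:3.10",
    # pass the override through the container environment
    docker_args="-e CLEARML_AGENT__AGENT__PACKAGE_MANAGER__POETRY_INSTALL_EXTRA_ARGS=",
)

# pipe.start(queue="default")  # hypothetical queue name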
one year ago
Hello everyone, I have a quick question. I am using ClearML for an ML experiment tracking project. As is, ClearML is saving a version of my model after each epoch. Is there a way for ClearML to simply save the model once training is done and to ignore the...

Hi @<1547028031053238272:profile|MassiveGoldfish6>

Is there a way for ClearML to simply save the model once training is done and to ignore the model checkpoints?

Yes, you can simply disable the auto logging of the model and manually save the checkpoint:

task = Task.init(..., auto_connect_frameworks={'pytorch': False})
...
task.update_output_model("/my/model.pt", ...)

Or for example, just "white-label" the final model

task = Task.init(..., auto_connect_frameworks={'pyt...
one year ago