UnevenDolphin73
Moderator
106 Questions, 749 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges (1): 662 × Eureka!
0 What Is Being Stored Exactly In

Maybe they shouldn't be placed under /tmp if they're mission critical, but rather in the ClearML cache folder? πŸ€”

3 years ago
0 Hi

Woot! What about clearml-agent 1.2.0? πŸ™‚

3 years ago
0 What Would Be The Best Way To Approach This Flow?

Right, and then for text (a file path) use a regex or similar for extraction, and for a dictionary simply parse the values?
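Something along these lines, roughly (a purely hypothetical sketch; the path pattern and keys are made up):

    import re

    # hypothetical: extract an id embedded in a file path
    path = "/data/datasets/ds_1234/train.csv"
    match = re.search(r"ds_(\d+)", path)
    dataset_id = match.group(1) if match else None

    # hypothetical: a dictionary artifact is parsed directly
    params = {"lr": 0.01, "epochs": 10}
    lr, epochs = params["lr"], params["epochs"]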

3 years ago
0 How Would I Go About Adding Multiple Credentials In The Autoscaler? (I.E. Specify Multiple

I would expect the service to actually inject it implicitly into new instances prior to applying the user's extra configuration πŸ€”

3 years ago
0 When Using

Indeed, with ~ the .root call ends up as an empty string, so it has a bit of a different flow.
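For illustration, assuming this refers to pathlib (a minimal sketch):

    from pathlib import Path

    Path("~/data").root               # ''  -> ~ is not expanded, so the path is relative
    Path("/data").root                # '/'
    Path("~/data").expanduser().root  # '/' on POSIX, once ~ is expanded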

4 years ago
0 Clearml Pipelines Can Be Built From Tasks, Functions, And Decorated Functions, According To The Examples In

I think -

  • Creating a pipeline from tasks is useful when you already ran some of these tasks in a given format and you want to replicate the exact behaviour (ignoring any new code changes, for example), while potentially changing some parameters.
  • From decorators - when the pipeline logic is very straightforward and you'd like to mostly leverage pipelines for parallel execution of computation graphs.
  • From functions - as I described earlier :) (see the sketch below)
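A minimal sketch of the three entry points (project, task, and queue names are placeholders; treat this as an illustration of the idea rather than a drop-in snippet):

    from clearml.automation.controller import PipelineController, PipelineDecorator

    # 1) From existing tasks: clone tasks that already ran once, optionally overriding parameters
    pipe = PipelineController(name="demo-pipeline", project="examples", version="1.0.0")
    pipe.add_step(
        name="train",
        base_task_project="examples",
        base_task_name="train task",
        parameter_override={"General/learning_rate": 0.01},  # placeholder parameter
    )

    # 2) From functions: wrap plain Python functions as pipeline steps
    def preprocess(n):
        return n * 2

    pipe.add_function_step(
        name="preprocess",
        function=preprocess,
        function_kwargs={"n": 21},
        function_return=["result"],
    )
    # pipe.start(queue="default")  # or pipe.start_locally() to run the controller locally

    # 3) From decorators: the whole pipeline logic stays plain Python
    @PipelineDecorator.component(return_values=["result"])
    def double(n):
        return n * 2

    @PipelineDecorator.pipeline(name="demo-decorated", project="examples", version="1.0.0")
    def pipeline_logic():
        return double(21)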
2 years ago
0 Clearml Pipelines Can Be Built From Tasks, Functions, And Decorated Functions, According To The Examples In

So caching results for steps with the same arguments is trivial. Ultimately I would say you can combine the task-based pipeline with a function-based pipeline to achieve the kind of dynamic control you specified in the first two scenarios.
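For example (a sketch, assuming the cache_executed_step flag on pipeline steps; names are placeholders):

    from clearml.automation.controller import PipelineController

    pipe = PipelineController(name="cached-pipeline", project="examples", version="1.0.0")

    def heavy_step(n):
        return n ** 2

    # if the step code and its arguments are unchanged, the previous execution can be reused
    pipe.add_function_step(
        name="heavy_step",
        function=heavy_step,
        function_kwargs={"n": 42},
        function_return=["result"],
        cache_executed_step=True,
    )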

As for the third scenario, I'm not sure. If the configuration has changed, shouldn't the relevant steps (the ones where the configuration changed, and their dependent steps) be rerun?

In any case, I think if you stay away from the decorators, at the cost of a bi...

2 years ago
0 Our Mac Users Are Having Some Issues. They Have Their Respective ~/Clearml.Conf, And Yet They Get: Clearml 1.1.5

We have a mini default config (if you remember from a previous discussion we had) that actually uses the second form you suggested.
I wrote a small "fixup" script that combines this default with the one generated by clearml-init, and it simply does:

    from pyhocon import ConfigFactory, ConfigTree

    def_config = ConfigFactory.parse_file(DEF_CLEARML_CONF, resolve=False)
    new_config = ConfigFactory.parse_file(new_config_file, resolve=False)
    # merge the default tree with the freshly generated one
    updated_new_config = ConfigTree.merge_configs(new_config, def_config)
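Writing the merged tree back out would then look roughly like this (a sketch, assuming pyhocon's HOCONConverter; not necessarily what the actual script does):

    from pyhocon.converter import HOCONConverter

    # overwrite the generated config with the merged result
    with open(new_config_file, "w") as f:
        f.write(HOCONConverter.to_hocon(updated_new_config))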

3 years ago
0 Clearml Pipelines Can Be Built From Tasks, Functions, And Decorated Functions, According To The Examples In
  • in the second scenario, I might not have changed the results of the step, but my refactoring changed the speed considerably, and this is something I measure.
  • in the third scenario, I might not have changed the results of the step and my refactoring just cleaned the code; besides that, nothing substantial was changed. Thus I do not want a rerun.
Well, I would say then that in the second scenario it's just rerunning the pipeline, and in the third it's not running it at all πŸ˜„
    (I ...
2 years ago
0 What Would Be The Best Way To Approach This Flow?

I don't think there's a PR or issue for that yet; at least I haven't created one.

I could have a look at this and maybe make a PR.
Not sure what the recommended flow would look like though πŸ€”

3 years ago
0 Since V1.4.0, Our

You mean that the host is considered the bucket, which is what I suggested in my earlier message as the root cause?

3 years ago
0 For Remote Execution Where The Queue Has

It's pulled from the remote repository; my best guess is that the uncommitted changes are applied only after the environment is set up?

2 years ago
0 How Can I Send A Composed Chunk Of Code For Remote Execution

@<1537605940121964544:profile|EnthusiasticShrimp49> It'll still take me some time to find the MVC that generated this, but I do have the ClearML experiment page for it. I was running the thing from ipython, and was trying to create a task from a function:
[attached image: ClearML experiment page screenshot]
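The kind of call I mean is roughly this (a sketch, assuming clearml's Task.create_function_task; names are placeholders):

    from clearml import Task

    def my_step(x):  # placeholder function
        return x * 2

    task = Task.init(project_name="examples", task_name="driver")  # placeholder names
    # packages my_step as a new draft task that can be enqueued for remote execution
    fn_task = task.create_function_task(my_step, func_name="my_step", task_name="my_step task", x=3)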

one year ago
0 Is It Possible To Avoid The Clearml-Agent For Local Installations, And Have The File Server Automatically Use An S3 Bucket? I'Ve Found

I will TIAS, but it's maybe worthwhile to also mention whether it has to be an absolute path or if a relative path is fine too!

3 years ago
0 Is There A Way To Interface With Clearml Agent (Cli?) To Handle Model Repositories And Data Versioning (But So, Not Experimentation, Tight Integration, Pipelining, Etc)?

If everything is managed with a git repo, does this also mean PRs will have a messy metadata file attached to them?

4 years ago
0 Clearml Pipelines Can Be Built From Tasks, Functions, And Decorated Functions, According To The Examples In

Heh, my bad, the term "user" is very much ingrained in our internal way of working. You can think of it as basically any technically-inclined person in your team or company.

Indeed the options in the WebUI are too limited for our use case, so we've developed "apps" that take a YAML configuration file and build a matching pipeline.
With that, our users do not need to code directly, and we can offer much finer control over the pipeline.
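Schematically, such an "app" boils down to something like this (a simplified, hypothetical sketch; the YAML schema and names are made up):

    import yaml
    from clearml.automation.controller import PipelineController

    # hypothetical config layout:
    # pipeline: {name: my-pipeline, project: examples}
    # steps:
    #   - {name: train, base_task_project: examples, base_task_name: train task}
    with open("pipeline.yaml") as f:
        cfg = yaml.safe_load(f)

    pipe = PipelineController(
        name=cfg["pipeline"]["name"],
        project=cfg["pipeline"]["project"],
        version="1.0.0",
    )
    for step in cfg["steps"]:
        pipe.add_step(
            name=step["name"],
            base_task_project=step["base_task_project"],
            base_task_name=step["base_task_name"],
            parameter_override=step.get("parameters", {}),
        )
    pipe.start(queue="default")  # placeholder queue name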

As for the imports, what I meant is that I encounter...

2 years ago
0 We Have Configured The Aws Credentials In The Remote Worker'S

Thanks SuccessfulKoala55, I made https://github.com/allegroai/clearml-agent/issues/126 as a suggestion.

Do you have any thoughts on how to expose these... manually?
It already does so for environment variables that are prefixed with CLEARML_, so it would be nice to have some control over that.
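To illustrate the pass-through idea (just an illustration of the concept, not the agent's actual implementation):

    import os

    # variables with the CLEARML_ prefix are the ones already forwarded;
    # the suggestion is to have similar control for arbitrary variables
    forwarded = {k: v for k, v in os.environ.items() if k.startswith("CLEARML_")}
    print(forwarded)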

2 years ago
0 Clearml Server V1.2.0 Has Just Been Released!

It was really easy with the attached code, really πŸ‘

I would only suggest adding to the documentation that if one uses the default recommended install location, the script can be run without any command-line arguments.
I had to momentarily look at the code to see that the default paths match my own (though I could've also looked at the --help default values πŸ˜›).

3 years ago
0 Is There An Easy Way To Add A Link To One Of The Tasks Panels? (As An Artifact, Configuration, Info, Etc)? Edit: And Follow Up Regarding The Dataset. As Discussed Somewhere Previously, The Datasets Are Now Automatically Moved To A Hidden "Sub-Project" Pr

Well, -ish. Ideally what we're after is one of the following:
  • Couple a task with a dataset, and keep it visible in its destined location.
  • Create a dataset separately from the task, and have control over its visibility and location. If it's hidden, it should not affect normal UI interaction (most annoying is having to click twice on the same project name when there are hidden datasets, which do not appear in the project view).

3 years ago