AgitatedDove14
Moderator
49 Questions, 8126 Answers
  Active since 10 January 2023
  Last activity one year ago

Hello, I'm a bit lost in the docs for the MLOps, I have a script which already integrates ClearML logging, should I use clearml-task to launch it on an agent? (I already have a clearml-server and a clearml-agent running).

Could it be the code is not in a git repository ?
ClearML supports either a single script or a git repository, but not a collection of standalone files. wdyt?

4 years ago
Hello all. I'm experimenting with ClearML and I've run into a strange issue. I used

@NastyFox63

is there a limit to the search depth for this?

Yes, the Task.init auto package listing covers only the first depth (i.e. directly imported packages);
the reason is that derivative packages should be resolved by pip when the agent remotely executes that Task.
Then, when the agent installs the Task, the entire Python environment is stored back, so that it is always fully reproducible.
Make sense ?
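To illustrate (project and package names below are just examples), a minimal sketch of what gets picked up:

from clearml import Task
import pandas  # imported directly -> listed under "installed packages"
# pandas pulls in numpy, python-dateutil, etc. internally; those are NOT listed,
# pip resolves them when the agent installs the Task remotely

task = Task.init(project_name="examples", task_name="package detection demo")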

2 years ago
Hey guys, I'm trying to run an experiment using trains-agent. I have a custom docker image with nightly versions of PyTorch and our own library installed from a private repo. I was assuming that these packages will be automatically available to Trains dur

Hi DilapidatedDucks58
trains-agent tries to resolve the torch package based on the specific cuda version inside the docker (or on the host machine if used in virtual-env mode). It seems to fail finding the specific version "torch==1.6.0.dev20200421+cu101"
I assume this version was automatically detected by trains when running manually. If this version came from a private artifactory you can add it to the trains.conf https://github.com/allegroai/trains-agent/blob/master/docs/trains.conf#L...
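As a rough sketch (the index URL is a hypothetical placeholder), the private index would go under the agent's package manager settings in trains.conf:

agent {
    package_manager {
        # extra pip index hosting the nightly torch build; replace with your artifactory URL
        extra_index_url: ["https://my.artifactory.example/api/pypi/simple"]
    }
}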

5 years ago
I am working up with the autoscaler, after setting up the autoscaler instance I am getting the following error when I launch the autoscaler: googleapiclient.errors.HttpError: <HttpError 404 when requesting

Thanks!
Hmm from here : None
Could it be you do not have privileges to the resource, or that you did not provide credentials ?
Did that autoscaler work before ?

2 years ago
[ClearML Serving] Hi everyone! I am trying to automatically generate an online endpoint for inference when manually adding tag

Thanks for the logs @AdorableDeer85
Notice that the log you attached means the preprocessing is executed and the GPU backend is returning an error.
Could you provide the log of the docker compose? Specifically, the interesting part is the Triton container; I want to verify it loads the model properly
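Assuming the default service name from the clearml-serving docker compose (clearml-serving-triton), something like this should grab the relevant part:

docker compose logs --tail=200 clearml-serving-triton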

2 years ago
Running this code from inside a docker container locally:

AttributeError: 'NoneType' object has no attribute 'base_url'

can you print the model object ?
(I think the error is a bit cryptic, but generally it might be that the model is missing an actual URL link?)
print(model.id, model.name, model.url)

3 years ago
Hello. Is there any doc where I could find what contributes as API usage? And is it possible to view the usage breakdown by source/type? I want to estimate API usage costs before signing up for the Pro plan on the SaaS.

TroubledHedgehog16 generally speaking you can expect about 10 API calls per minute if you have many reports, and about 3 per minute with low reporting. We just optimized the SDK so that lots of consecutive reports are batched better; I would recommend the latest RC

3 years ago
Hi! I am researching different MLOps libraries / platforms. I don't want to use platform-as-a-service solutions. Could you suggest what the main differences are between ClearML and MLflow? What are the advantages of using ClearML?

Hi RoundMosquito25
This is a bit old but probably a good start:
https://clear.ml/blog/stacking-up-against-the-competition/
tl;dr
ClearML advantages (at least a few I can think of):
- Scales way better
- Enables out-of-the-box experiment orchestration (i.e. remote execution etc.)
- Data management
- Nicer UI
- Full REST API
- Full MLOps platform
- Model serving
- Query-able model repository
Probably more 🙂

3 years ago
When I pass invalid key to

it fails during the add_step stage for the very first step, because task_overrides contains invalid keys

I see, yes I guess it makes sense to mark the pipeline as Failed 🙂
Could you add a GitHub issue on this behavior, so we do not miss it ?
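For context, a minimal sketch of where this surfaces (project/task names and the override key are hypothetical); an invalid task_overrides key currently raises inside add_step itself:

from clearml import PipelineController

pipe = PipelineController(name="demo pipeline", project="examples", version="1.0.0")
pipe.add_step(
    name="stage_train",
    base_task_project="examples",
    base_task_name="train model",
    # a mistyped key here (e.g. "script.brnch") fails at add_step time,
    # before the pipeline ever starts running
    task_overrides={"script.branch": "main"},
)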

4 years ago
Hi everyone, does anybody know if the latest release 1.15 is still vulnerable to

Hi Martin, of course not,

Smart!

I was just wondering if it has been patched yet and if not what is the expected timeline for patching it

Yes, I believe the target is a patch version 1.15.1 to be released in a couple of weeks. This is not a major issue but it's always better to have it fixed. (btw: the enterprise version never had this issue to begin with, because it is of course authenticated, and it has an additional RBAC layer on top.)

one year ago
Hi, for some reason many packages are not detected in the installed packages section. My experiment clone crashes as it fails to import a package that wasn't included in the installed packages although it is installed in the default environment. I'm using

SmarmySeaurchin8
When running in "dev" mode (i.e. writing the code) only packages imported directly are registered under "installed packages", then when the agent is executing the experiment, it will update back the entire environment (including derivative packages etc.)
That said, you can set detect_with_pip_freeze to true (in trains.conf) and it will basically store the entire pip freeze output.
https://github.com/allegroai/trains/blob/f8ba0495fb3af1f99732fdffbbccd2fa992934a4/docs/trains.c...
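For reference, a sketch of the same flag in block form (it lives under the sdk.development section of trains.conf / clearml.conf):

sdk {
    development {
        # store the full `pip freeze` output instead of only directly imported packages
        detect_with_pip_freeze: true
    }
}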

5 years ago
Hi guys, suppose I have the following script:

GiganticTurtle0 is it in the same repository ?
If it is, it should have detected that it needs to analyze the entire repository (not just the standalone script), and then discover tensorflow

4 years ago
Is there a reason why all clearml.Task methods regarding requirements (e.g. pip requirements) are class methods? Are requirements not stored in a Task?

Long story short, the Task requirements are async, so if one sets them after creating the object (at least in theory), it might be too late.
Make sense ?
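A minimal sketch of the implied ordering (package name and version are just examples):

from clearml import Task

# class-level call, before the Task object exists; this is why it is a classmethod
Task.add_requirements("scikit-learn", "1.3.0")

task = Task.init(project_name="examples", task_name="requirements demo")
# calling Task.add_requirements() down here, after init, may already be too late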

4 years ago
When we run some agents and then kill them, they remain in the UI for quite a long time (even if they don't exist) - it is like 5 min. Is there some way to make the UI more responsive? I mean to have a shorter timeout after which the worker is invisible?

RoundMosquito25 are you using clearml-agent daemon --stop or are you killing them ?

Killing them basically means you lose them in the UI when they time out; the backend does not see them for 10 min so it assumes they died. When you call clearml-agent daemon --stop they will unregister themselves and disappear immediately
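For example (the queue name is just a placeholder):

# spin up a worker and register it with the server
clearml-agent daemon --queue default --detached
# unregister it cleanly; the worker disappears from the UI right away
clearml-agent daemon --stop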

3 years ago
I have a question regarding running the code on the remote machine, each time I run the code I see the console in the ClearML server start downloading all the libraries I used in the code and when I run another code the same thing happens, so why it has to

how to put or handle this configuration and where?

In your clearml.conf on the machine with the agent, just add at the bottom of the file: agent.venvs_cache.path=~/.clearml/venvs-cache
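The same setting in block form, for reference (the path shown is the default suggested above):

agent {
    venvs_cache {
        # cache built virtual environments so the agent can reuse them between runs
        path: ~/.clearml/venvs-cache
    }
}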

3 years ago
When using

Hi SteadyFox10 the way it works is that Trains limits the debug image history by reusing the same file names, so the UI will only present the iterations for which the debug images are relevant. With your sample code it looks like it exposes a bug: the generated link should contain the iteration number, but it does not, and so it overwrites the debug images every iteration. Here is the image link: https://demofiles.trains.allegro.ai/Test/test_images.6ed32a2b5a094f2da47e6967bba1ebd0/metrics/Test/te...

5 years ago
Hi all! I am a bit confused as to how the Python environment is set. I can submit jobs that build the environment and run perfectly fine. But, if I abort the job -> requeue it from the GUI, then a different environment is installed (which has some package

One suggestion is to make sure all agents have the same configuration. Another is to add pip into the "installed packages" section.
(Notice that in the next release we will specifically include it there, to avoid this kind of scenario)

one year ago
Hi! Is there something happening with the

GrievingTurkey78 are you able to reproduce it?

4 years ago