AgitatedDove14
Moderator
49 Questions, 8126 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0

Badges: 25 × Eureka!
0 Hi, I Run 'Manually' On My Local Machine With No Errors. Then, I Clone The Completed Task And Enqueue It. I Get To Stage When 'Environment Setup Completed Successfully'. But Right After I Get An Error Related To 'Connect' Method - Task.Connect(Config.Mode

After removing the task.connect lines, I encountered another error: 'einops' is not recognized. It does exist in my environment file but was not installed by the agent (according to what I see in 'Summary - installed python packages'). Should I add this manually?

Yes, I'm assuming this is a derivative package that is needed by one of your packages?

Task.add_requirements("einops")
task = Task.init(...)
2 years ago
0 Hey, Somehow

DeliciousSeal67 the agent will use the "installed packages" section in order to install packages for the code. If you clear the entire section (you can do that in the UI or programmatically) then it will revert to requirements.txt.
Makes sense?
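For reference, a minimal sketch of clearing the section from code. This assumes Task.set_packages accepts an empty list to clear the installed-packages section, so worth double-checking against your clearml version:

from clearml import Task

# fetch the cloned task and clear its "installed packages" section,
# so the agent falls back to the repository's requirements.txt
task = Task.get_task(task_id="abc123")  # placeholder task ID
task.set_packages([])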

3 years ago
2 years ago
0 Hey, Is There A Shortcut On The Dataset Sdk To Directly Get The Latest Version Of A Dataset ?

Hi FierceHamster54
Sure, just do:
dataset = Dataset.get(dataset_project="project", dataset_name="name")
This will fetch the latest version by default.
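If you then need the files locally, a common follow-up (a short sketch):

from clearml import Dataset

dataset = Dataset.get(dataset_project="project", dataset_name="name")
local_path = dataset.get_local_copy()  # read-only, cached local copy
print(local_path)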

3 years ago
0 Hello, I Have The Following Scenario:

See the log:

Collecting keras-contrib==2.0.8
  File was already downloaded c:\users\mateus.ca\.clearml\pip-download-cache\cu0\keras_contrib-2.0.8-py3-none-any.whl

So it did download it, but it failed to pass it correctly?!
Can you try with clearml-agent==1.5.3rc2?

2 years ago
0 Hi, I'M Attempting To Use

Also, on the ClearML dashboard, I can see the clearml-agent log:

Is the clearml-agent running in docker mode?

See https://github.com/allegroai/clearml-session/issues/3

4 years ago
2 years ago
0 Hello Friends! I Am Trying To Play Around With The Configs For

Hi @<1547028116780617728:profile|TimelyRabbit96>
You are absolutely correct, we need to allow overriding the configuration.
The code you want to change is in clearml-serving; you can try:

channel = self._ext_grpc.aio.insecure_channel(triton_server_address, options=dict([('grpc.max_send_message_length', 512 * 1024 * 1024),  ('grpc.max_receive_message_len...
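For reference, a minimal standalone sketch of setting gRPC message-size options on an aio channel (the address and size limit are placeholders; note that grpc expects the options as a sequence of key/value tuples):

import grpc

MAX_MSG = 512 * 1024 * 1024  # assumed 512 MB limit

channel = grpc.aio.insecure_channel(
    "triton-server:8001",  # hypothetical Triton gRPC address
    options=[
        ("grpc.max_send_message_length", MAX_MSG),
        ("grpc.max_receive_message_length", MAX_MSG),
    ],
)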
2 years ago
0 Is Clearml-Serving Using Either System Or Cuda Shared Memory? Or Planning To? In Our Experiments Using Perf_Analyzer The Shared Memory Experiments Showed A Huge Improvement And If We Wanted To Look Into This, Do You Have Any Pointers Of Where We Can Do T

Sorry @<1657918706052763648:profile|SillyRobin38> I missed this reply

Is ClearML-Serving using either System or CUDA shared memory? O

This needs to be set in the docker-compose file, and I think this line actually includes ipc: host, which means there is no need to set shm_size. But you can play around with it and let me know if you see a difference:
https://github.com/allegroai/clearml-serving/blob/7ba356efc97a6ae2159283d198d981b3c1ab85e6/docker/docker-compose-triton-gpu.yml#L1...

one year ago
0 Some Random Weird Feature Suggestions For The Future 1) It Would Be Great If You Could Export Key Experiment Data As Html Or Pdf Report 2) It Would Also Be Quite Nice To Have An Opportunity To Discuss Experiments In Trains Without Leaving The Web App 3)

Thanks DilapidatedDucks58! We ❤ suggestions for improvements 🙂

Did you try to print a page using the browser? (I think they can all store it as PDF these days.)
Yes, I agree, it would 🙂 We have some thoughts on creating plugins for the system; I think this could be a good use-case. Wait a week or two ;)

5 years ago
0 Is Clearml-Serving Using Either System Or Cuda Shared Memory? Or Planning To? In Our Experiments Using Perf_Analyzer The Shared Memory Experiments Showed A Huge Improvement And If We Wanted To Look Into This, Do You Have Any Pointers Of Where We Can Do T

Hi @<1547028116780617728:profile|TimelyRabbit96>
Notice that if you are running with docker compose you can pass an argument to the clearml triton container and use shared memory. You can do the same with the helm chart.

one year ago
0 Hello! Question About

Hi @<1547028116780617728:profile|TimelyRabbit96>

Trying to do model inference on a video, so the first step in the Preprocess class is to extract frames.

Basically this depends on the REST API; usually you will be sending a link to the data to be processed and returned synchronously.
What you should have is a custom endpoint doing the extraction, which sends the raw data into another endpoint doing the model inference; basically think "pipeline" of endpoints:
https://github.com/allegro...
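As a rough illustration of the idea, a sketch loosely following the conventions of clearml-serving's preprocess examples (the class layout and method signatures here are assumptions and may differ between versions):

class Preprocess(object):
    """Hypothetical frame-extraction endpoint, chained to a model-inference endpoint."""

    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # 'video_url' is an assumed request field pointing at the video
        video_url = body["video_url"]
        return self._extract_frames(video_url)

    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        # forward the model output back to the caller
        return {"predictions": data}

    @staticmethod
    def _extract_frames(url):
        # placeholder: real code would decode the video (e.g. with OpenCV)
        return []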

2 years ago
0 Hello! Question About

can we use a currently setup virtualenv by any chance?

You mean, does the clearml-agent need to set up a new venv each time? Are you running in docker mode?
(By default it caches the venv, so the second time it uses a pre-cached full venv, installing nothing.)

2 years ago
0 Hello! Question About

So actually while we’re at it, we also need to return back a string from the model, which would be where the results are uploaded to (S3).

Is this being returned from your Triton model, or the pre/post-processing code?

2 years ago
0 Hello! Question About

One issue that I see is that the Dockerfile inside the agent container

Not sure I follow; these are settings for the default container to be used when the agent spins up a Task for you.
How are you running the agent itself?

2 years ago
0 Hello! Question About

notice that even inside docker the venv is cached on the host machine 🙂

2 years ago
0 Is There Any Api Reference? Somewhere In The Docs I Can See The Signature Of Methods/Classes And See What Arguments They Accept And Description? Before I'M Rushing To Ask Questions Here Myself, I'D Prefer To Do As Much Learning As I Can Through The Docs

Hi WackyRabbit7
First, always check the functions on the Task object; they are the most straightforward access to the system.
Then, if you need general-purpose API calls, currently they are only documented in the doc-strings of the API schema (that said, it should be quite well documented).
You can check all the endpoints here: https://github.com/allegroai/trains/tree/master/trains/backend_api/services/v2_8
And finally, if you want to easily use the RestAPI:
from trains.backend_api.session.client impo...
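For completeness, a small sketch of what that typically looks like (the filter arguments are assumptions to check against the API schema):

from trains.backend_api.session.client import APIClient

client = APIClient()
# e.g. list the 10 most recently updated tasks
tasks = client.tasks.get_all(order_by=["-last_update"], page_size=10)
for t in tasks:
    print(t.id, t.name)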

5 years ago
0 Hi. When Using Sklearn'S

DistressedGoat23

We are running a hyperparameter tuning (using some cv) which might take a long time and might be even aborted unexpectedly due to machine resources.
We therefore want to see the progress

On the HPO Task itself (not the individual experiments, but the one controlling it all) there is the global progress of the optimization metric. Is this what you are looking for? Am I missing something?

3 years ago
0 Hi, Love What You Guys Did With The New Datasets! I Need Some Help Though. I Assume There Will Be A No-Code Way To Do This, Maybe Not Now But In The Future. But Anyway, I Have Three Different Datasets, And I Want To Create A Merged Version Of All Three Of

but can it NOT use /tmp for this? I'm merging about 100GB

You mean to configure your temp folder for when squashing?
You can hack the following:
import tempfile

tempfile.tempdir = "/my/new/temp"

# ... Dataset squash here ...

tempfile.tempdir = None
But regardless, I think this is worth a GitHub issue with a feature request, to set the temp folder.
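Putting it together, a minimal sketch (the names and the Dataset.squash argument names are assumptions to verify against your SDK version):

import tempfile

from clearml import Dataset

tempfile.tempdir = "/my/new/temp"  # hypothetical large scratch folder
merged = Dataset.squash(
    dataset_name="squashed-dataset",
    dataset_ids=["id-1", "id-2"],  # placeholder dataset IDs
)
tempfile.tempdir = None  # restore the default temp folder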

3 years ago
0 Hi, With The Upcoming Version Of Hydra It Seems The Binding Breaks. Specifically In The

Thanks GrievingTurkey78
Sure, just PR (should work with any Python/Hydra version):
kwargs['config'] = config
kwargs['task_function'] = partial(PatchHydra._patched_task_function, task_function)
result = PatchHydra._original_run_job(*args, **kwargs)

4 years ago
0 Hi Guys! Is There A Way To Tell An Agent To Run A Task In An Existing Venv (Without Creating A New One)?

ExcitedFish86 this is a general "dummy agent" that takes Tasks and executes them (no env created, no code cloned, as you suggested)

how does this work with HPO?

The HPO clones Tasks, changes their arguments, pushes them into a queue, and monitors the metrics in real time. The missing part (from my understanding) was that the execution of the Tasks themselves required setup, and that you wanted multiple-machine support. To overcome this, I posted a dummy agent that just runs the Tasks.
(Notice...

3 years ago
0 Is There Any Reason Why Doing The Following Is Not Possible? Am I Doing It Right? I Want To Run A Pipeline With Different Parameters But I Get The Following Error?

Hey GiganticTurtle0 ,
So basically the issue is that the pipeline function (prediction_service) is getting a dict as input, while it expects basic types. If you were to do the following, it would have worked as expected:
prediction_service(**default_config)
I will make sure we flatten any dictionary, so that we end up with config/start instead of a serialized version of the dict.
wdyt?

3 years ago
0 Hi, I Try To Write An Article On Medium About Clearml And Face Some A Problem With Plotly Figures. When Displaying The Figure Locally In A Browser Works Fine, But On The Cleaml Server (I Use The Free Tier Service) The Plot Is Empty And Has The Title 'Unkn

Hi WickedGoat98

I try to write an article on medium about ClearML and face a problem with plotly figures.

This is awesome!

I ran the plotly_reporting.py example locally and the uploaded plot was ok.

So are you saying the same example code from the repository worked okay on your server but showed nothing on the hosted server ?

4 years ago
0 Hi, Love What You Guys Did With The New Datasets! I Need Some Help Though. I Assume There Will Be A No-Code Way To Do This, Maybe Not Now But In The Future. But Anyway, I Have Three Different Datasets, And I Want To Create A Merged Version Of All Three Of

Yeah, the hack would work, but I'm trying to use it from the command line to put it in Airflow. I'll post on GH.

Oh, then set the TMP/TMPDIR environment variable; it should have the same effect.

3 years ago
0 Hi, Love What You Guys Did With The New Datasets! I Need Some Help Though. I Assume There Will Be A No-Code Way To Do This, Maybe Not Now But In The Future. But Anyway, I Have Three Different Datasets, And I Want To Create A Merged Version Of All Three Of

GrittyStarfish67

I do not wish for data duplication. Any Idea how to do this with clearml-data CLI/GUI/python?

At least in theory, creating a new Dataset version with parents from multiple Datasets should just work out of the box.
wdyt?
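If it helps, a minimal sketch of that idea (project/dataset names and IDs are placeholders):

from clearml import Dataset

# create a new dataset version whose parents are the three source datasets
merged = Dataset.create(
    dataset_project="my-project",
    dataset_name="merged-dataset",
    parent_datasets=["id-1", "id-2", "id-3"],
)
merged.upload()
merged.finalize()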

3 years ago
0 Hi, Some Properties Of The Task Object Are Not Listed In The Documentation (Such As Task.Parent, Which Is Not Clear Whether It Is The Parent Task Object Itself Or The Id Of The Parent Task).

JitteryCoyote63 I meant to store the parent ID as another "hyper-parameter" (under its own section name), not the data itself.
Makes sense?
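For illustration, one way this could look (the section and key names are made up):

from clearml import Task

task = Task.init(project_name="examples", task_name="child")
# store the parent task ID under its own hyper-parameter section
task.connect({"parent_task_id": "abc123"}, name="lineage")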

5 years ago
0 Hi, Another Question. I Tried To Not

PompousBeetle71 so basically exclude parameters that are considered "local" only, so that other people will not accidentally use them?

5 years ago
0 What’S The Point Of Tracking Artifacts Dynamically?

Makes sense. BTW: you can manually add a data visualization to a Dataset with dataset.get_logger().report_table(...)
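A small sketch of what that might look like (the table contents are made up):

import pandas as pd

from clearml import Dataset

dataset = Dataset.get(dataset_project="my-project", dataset_name="my-dataset")
df = pd.DataFrame({"file": ["a.jpg", "b.jpg"], "label": [0, 1]})
# attach a preview table to this dataset version
dataset.get_logger().report_table(
    title="preview", series="labels", iteration=0, table_plot=df
)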

4 years ago