AgitatedDove14
Moderator
49 Questions, 8124 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges (1): 25 × Eureka!
0 Hi I Saw This On The Clearml-Agent Docs But Other Than The Docker Image, I'm Not Sure How To Integrate This With Clearml Py And Clearml-Server. Please Advise.

TypeError: __init__() got an unexpected keyword argument 'base_pod_num'

Could you post the entire log?

4 years ago
0 Has Anyone Used Dynaconf With Clearml? Trying To Decide Whether To Migrate To Hydra Or Stick With Dynaconf. Would Love To Take Advantage Of Automatic Logging Of The Hyperparameters

Hmm, that makes sense to me. Any chance you can open a GitHub issue so we do not forget? (I do not think it should be very complicated to fix.)

2 years ago
0 Hi Folks, I Did A Deployment Of Clearml Using The K8S Helm Chart, And I Set The Agent Using K8S Glue. I Run A Task Locally, And I Went To The Ui Cloned The Experiment And Scheduled It In The Default Queue. After Doing This, I See That The Experiment Is Q

I think it's because the proxy env vars are not passed to the container ...

Yes, this seems correct. The errors point to a network issue, i.e. the container does not seem to be able to connect to the clearml-server.
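
A quick way to verify that diagnosis, as a minimal sketch you could drop into the task's code (it just prints which proxy variables actually reached the container):

import os

# if these come back empty, the proxy settings never reached the pod/container
for key in ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY", "http_proxy", "https_proxy"):
    print(key, "=", os.environ.get(key))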

3 years ago
0 I'm Having A Problem Reusing The Last Task Id On Jupyter Notebooks. Despite Having Reuse_Last_Task_Id=True On Task.Init, It Always Creates A New Task Id. Anyone Ever Had This Issue?

Hi GrotesqueOctopus42

Despite having reuse_last_task_id=True on Task.init, it always creates a new task id. Anyone ever had this issue?

So the way reuse_last_task_id=True works is: if there are no artifacts on the Task, it will be reused; but when running inside Jupyter the Task always has artifacts (the notebook itself), so it starts a new Task.
You can however pass a specific Task ID and it will reuse it: reuse_last_task_id="aabb11". Would that help?
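
For illustration, a minimal sketch of that workaround ("aabb11" is a placeholder for a real existing task ID):

from clearml import Task

# reuse an explicit, pre-existing task instead of relying on the
# no-artifacts heuristic, which never triggers inside Jupyter
task = Task.init(
    project_name="examples",
    task_name="notebook run",
    reuse_last_task_id="aabb11",  # placeholder task ID
)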

2 years ago
0 Why Would Every Submitted Task Be Aborted Directly?

Yep that makes sense to me too. Done

4 years ago
0 Is There A Way To Access Dataframe Logged Using Report_Table From A Task Instance Instantiated Using Task.Get_Task(Id='.....')? I Have: T = Task.Get_Task(Id='....') And I Am Looking For Something Along The Lines Of: Df = T.Get_Table('Table Name')

Hi ThickDove42,
Yes, but by the time you can access it, it will be in display form (plotly), which is not very convenient.
If this is something you need to re-use, I would argue that it is an artifact and should be stored as an artifact (then accessing it is transparent). Obviously you can both report it as a table and upload it as an artifact, no harm in that.
What do you think?
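
A sketch of that both-ways approach (project, task, and artifact names are placeholders):

from clearml import Task
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# in the producing task: report for display AND store as an artifact
task = Task.init(project_name="examples", task_name="table demo")
task.get_logger().report_table(title="Table Name", series="data", iteration=0, table_plot=df)
task.upload_artifact(name="table_name", artifact_object=df)

# later, from anywhere: fetch the artifact back as a DataFrame
t = Task.get_task(task_id=task.id)
df_back = t.artifacts["table_name"].get()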

4 years ago
0 Getting An Odd Error When Trying To Open My Plots (See Picture Attached) Also, Not Able To Save Any Plots To Trains

Hit Ctrl-F5 (force reload the page). Do you still get the same error? Is it limited to a specific experiment?

5 years ago
0 Hi, I'm Trying To Install A New Server, This Is A Fresh Ubuntu 18.04 Install. When I Try To Run The Docker Composer Up Command I Get Error Messages Like This One:

CourageousLizard33 VM?! I thought we were talking about a fresh install on Ubuntu 18.04?!
Is the Ubuntu in a VM? If so, I'm pretty sure 8GB will do, maybe less, but I haven't checked.
How much did you end up giving it?

5 years ago
0 Hi, I Run The Trains Server In A Docker Container And Started Making Use Of Tasks ... My Tests Are Shown On The Projects Dashboard Which Is Really Cool. What I Haven't Found So Far Is A Way To Clean Up The System From The Tests I Did. I'm Able To Archive

Another point I see is that in the workers & queues view the GPU usage is not being reported

It should be reported. If it is not, maybe you are running the trains-agent in CPU mode? (Try adding --gpus.)

4 years ago
0 What Is Being Stored Exactly In

Ohh... I would not delete them then ... 😞
Maybe use some kind of heuristic (e.g. files created more than a week ago can be deleted?!)

3 years ago
0 Hello, When Running A Task With A Remote Interpreter I Get

In your code, can you print the following:

import os
print(os.environ.keys())

There should be a few keys the PyCharm plugin is sending from the local machine, pointing to the git repo.

2 years ago
0 Autoscaler Parallelization Issue: I Have An Aws Autoscaler Set Up With A Resource That Has A Max Of 3 Instances Assigned To The

I located the issue, I'm assuming the fix will be in the next RC 🙂
(probably tomorrow or before the weekend)

3 years ago
0 Is It Possible To Add A Callback For A Pipeline From A Step?

Is task.parent something that could help?

Exactly 🙂 something like:

# my step is running here
the_pipeline_task = Task.get_task(task_id=task.parent)
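
A slightly fuller sketch, assuming this code runs inside a pipeline step (so the step's parent is the pipeline controller task):

from clearml import Task

# inside the running step
task = Task.current_task()
# the step's parent is the controller task; fetch it to read its params,
# report back to it, etc.
the_pipeline_task = Task.get_task(task_id=task.parent)
print(the_pipeline_task.name)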

4 years ago
0 Hi, Is There A Way To List All Agents Running In A Host, I Do Not Find Relevant One In Clearml-Agent -H.

And the agent continues running.

Oh, just kill all the processes with clearml-agent in the cmd line:

pkill -9 -f clearml-agent
2 years ago
0 Hi, I Noted That Clearml-Serving Does Not Support Spacy Models Out Of The Box And That Clearml-Serving Only Supports Following;

Besides that, what are your impressions on these serving engines? Are they much better than just creating my own API + ONNX or even my own API + normal Pytorch inference?

I would separate ML frameworks from DL frameworks.
With ML frameworks, the main advantage is multi-model serving on a single container, which is more cost-effective when serving multiple models, as well as the ability to quickly update models from the clearml model repository (just tag + publish and the end...

3 years ago
0 Hi, I'm Facing Some Issues When Trying To Run A Pipeline, How Can I Import A Local Library Using Pipelines From Functions? Always Getting "Modulenotfounderror: No Module Named"

you can also specify additional packages on the decorator:

@PipelineDecorator.component(..., packages=["tqdm>=2.1", "scikit-learn"])
def step_one(...):
    # code here
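
A runnable sketch of the same idea (the function body and names are illustrative):

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["n_rows"], packages=["tqdm>=2.1", "scikit-learn"])
def step_one(random_state=42):
    # imports live inside the step; the agent installs `packages` before running it
    from sklearn.datasets import make_regression
    X, y = make_regression(n_samples=100, random_state=random_state)
    return X.shape[0]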

3 years ago
0 Another Question, I Have Written A Code That Includes A Task Scheduler That Calls A Function. That Function Watches A Folder And If There Are Sufficient Images, It Creates And Publishes The Dataset, After Which It Clears The Folder. Problem, For Some Rea

why are there indefinitely growing anonymous tasks, even after I've closed the main schedulers?

The anonymous Tasks are the Dataset versions you are creating (a Dataset version is also a Task of a certain type, with artifacts; the idea is that Datasets are usually created from code, hence the need to combine the two).
Make sense?
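
For illustration, a sketch of the kind of code that creates one such backing task per dataset version (names and paths are placeholders):

from clearml import Dataset

# each finalized dataset version is backed by a Task,
# which is what shows up as an "anonymous" task
ds = Dataset.create(dataset_project="examples", dataset_name="watched-folder-images")
ds.add_files(path="/path/to/images")
ds.upload()
ds.finalize()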

3 years ago
0 Hi Everyone! Is It Possible To Read Data Directly From Server W/O Using Get_Local_Copy()?

Hi NonchalantSeaanemone34

Is it possible to read data directly from server w/o using get_local_copy()?

Do you mean an artifact? What does "directly" mean here?

one year ago
0 Hi Everyone, I Was Looking Into Clearml Integration With Nvidia For Transfer Learning. Does Clearml Have Plans To Integrate With The New Tao? Looks Like Nvidia Is Focusing Tao As A Low Code Transfer Learning Tool With Everything Done In Command Line, Whic

The latest TAO doesn't use python for fine tuning, rather it uses the CLI entirely

It's a good question, but I think the CLI actually just runs Python code (the CLI is their interface). Generally speaking, I'm pretty sure it will not be complicated to convert the TLT integration to support TAO (Nvidia helps with that, and I think we had a similar process with Nvidia Clara/MONAI).
BTW: how are you using Nvidia TAO?

3 years ago
0 When Using Something Like Pdf2Image Which Requires Poppler (Which Can Be Installed With Conda), How Can I Ensure That The Task Can Run On An Agent Correctly? As Of Now It Doesn't Know About Poppler

Hi JealousParrot68
Spin up the clearml-agent with docker support (i.e. each experiment runs inside its own container):
https://clear.ml/docs/latest/docs/clearml_agent#docker-mode
Basically you can specify a default docker image to use (per agent) and a specific docker container to use per Task (configured in the UI under Execution, at the bottom).
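
The per-task container can also be set from code; a sketch, assuming a recent clearml version (the image name and setup commands are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="pdf2image job")
# ask a docker-mode agent to run this task in a container and
# install poppler when the container starts
task.set_base_docker(
    docker_image="python:3.9-slim",
    docker_setup_bash_script=["apt-get update", "apt-get install -y poppler-utils"],
)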

4 years ago
0 Hi, I'M Trying To Make Use Of New Capabilities Of Dag Creation In Clearml. Seems That Api Has Changed Pretty Much Since A Few Versions Back. There Seems To Be No Need In

... In short, I was not able to do it with Task.clone and Task.create; the behavior differs from what is described in docs and docstrings (this is another story - I can submit an issue on github later)

The easiest is to use task_overrides.
Then pass:

task_overrides = dict(script=dict(diff='', branch='main'))
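
A hedged sketch of where that argument goes when building the DAG with PipelineController (project and task names are placeholders; recent docs also show dot-separated override paths such as 'script.branch'):

from clearml.automation import PipelineController

pipe = PipelineController(name="dag-example", project="examples", version="1.0.0")
pipe.add_step(
    name="step_one",
    base_task_project="examples",
    base_task_name="base task",
    # run the cloned task from 'main' with no uncommitted diff
    task_overrides={"script.diff": "", "script.branch": "main"},
)
pipe.start()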

3 years ago
0 Does Clearml-Serving Support Mms(Multi-Model-Serving) Like Seldon Deploy? Mms: Serving Multiple Model In The Same Container

Hi TartLeopard58
Yes, this is the default; it is designed to serve multiple models and scale horizontally.

2 years ago