AgitatedDove14
Moderator
49 Questions, 8126 Answers
Active since 10 January 2023
Last activity one year ago
0 Hi

ElegantKangaroo44 what do you think?

5 years ago
0 When We Run A Task On Gpu, We Can Access Gpu Monitoring. But Can We Access It From Code? Usecase Is: When We See That There Is Enough Resources For Some Task, We Schedule It

Hi RoundMosquito25
Sure you can 🙂

from clearml import Task

task = Task.get_task("task_id_here")        # fetch an existing task by its ID
metrics = task.get_last_scalar_metrics()    # latest value of every reported scalar
print(metrics[":monitor:gpu"])              # resource monitoring scalars live under ":monitor:gpu"

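A minimal sketch of the scheduling use case, assuming the ":monitor:gpu" scalars expose a series such as gpu_0_utilization (the exact series names depend on your agent/driver and are an assumption here) and using Task.enqueue to schedule the pending job:

from clearml import Task

# read the latest resource-monitoring scalars of the task that reports GPU usage
monitor_task = Task.get_task("task_id_here")
gpu_metrics = monitor_task.get_last_scalar_metrics().get(":monitor:gpu", {})

# "gpu_0_utilization" is an assumed series name; adjust to what your setup reports
gpu_util = gpu_metrics.get("gpu_0_utilization", {}).get("last", 0)
if gpu_util < 50:  # arbitrary threshold: the GPU looks idle enough
    pending = Task.get_task("pending_task_id")  # hypothetical task to schedule
    Task.enqueue(pending, queue_name="default")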

2 years ago
0 Hi, I Started A Trains-Agent (0.15) In Services Mode (Full Command:

shows that the trains-agent is stuck running the first experiment, not

The trains_agent execute --full-monitoring --id a445e40b53c5417da1a6489aad616fee process is the second trains-agent instance running inside the docker; if the task was aborted, this process should have quit...

Any suggestions on how I can reproduce it?

5 years ago
0 Hi, Can You Pls Help Me? I Am Using V 0.14 (Will Update It Soon) And I Got The Following Error: /Usr/Bin/Python3.6: No Module Named Virtualenv Trains_Agent: Error: Command '['Python3.6', '-M', 'Virtualenv', '/Home/Ubuntu/.Trains/Venvs-Builds.2/3.6']' Ret

PlainSquid19 Trains will analyze the entire repository if the code is part of a git repo, and a single script file if no repository is found.

It will not analyze an entire folder that is not a git repository, because it would not be able to recreate that folder anyway. Does that make sense?

5 years ago
0 Hello Everyone, Can You Please Tell Me Where Can I Set Display Options For Clearml Debug Samples? I Only Have The Last 3 Iterations Displayed?

Hi MammothParrot39
By default you have the last 100 iterations there (not sure why you are only seeing the last 3), but this is configurable.

2 years ago
0 Hi, Is There Any Way To Get Experiment Debug Images Programmatically?

Hi HandsomeCrow5.
Remember that the debug images are events with links to the actual images, so you first have to get the events, and then you can download the images with the StorageManager ( https://allegro.ai/docs/examples/examples_storagehelper/#storagemanager ), which by definition has the credentials, because it was able to upload them 🙂
To get the events:
from trains.backend_api.session.client import APIClient

client = APIClient()
# fetch the debug-image events reported by the task
client.events.debug_images(task='aabbcc')
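Following on from that, a hedged sketch of downloading the actual images with the StorageManager; the import path for the trains package and the response field names (metrics, events, url) are assumptions about the events.debug_images reply and may differ between versions:

from trains.backend_api.session.client import APIClient
from trains.storage import StorageManager

client = APIClient()
response = client.events.debug_images(task='aabbcc')
for metric in response.metrics:        # one entry per reported metric (assumed field name)
    for event in metric.events:        # individual debug-image events (assumed field name)
        # each event is assumed to carry the URL of the uploaded image
        local_path = StorageManager.get_local_copy(remote_url=event['url'])
        print(local_path)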

5 years ago
0 Hey, Thanks For The Great Logging Tool

CloudyHamster42 FYI the warning will not be shown in the next Trains version; the issue is now fixed, thank you 🙂
Regarding the double axes, see if adding plt.clf() helps. It seems the axes are leftovers from the previous figure that somehow are still there...
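For reference, a minimal sketch of the suggested fix, clearing the figure between plots so the previous axes are not carried over into the next one:

import matplotlib.pyplot as plt

plt.plot(range(10))
plt.show()   # the figure is captured and reported here
plt.clf()    # clear the current figure so its axes do not leak into the next plot

plt.plot(range(5))
plt.show()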

5 years ago
0 Hi Everyone, I Have Questions Related To Clearml-Serving.

Hmm, what does your preprocessing code look like?

3 years ago
0 More Of Pushing ClearML To Its Data Engineering Limits

Whoa, are you saying there's an autoscaler that doesn't use EC2 instances?...

Just to be clear, the ClearML Autoscaler (AWS) will spin instances up/down based on the jobs in the queue it is listening to (the EC2 instance types and configuration are fully configurable).

2 years ago
0 Hi

(Also, could you make sure all posts regarding the same question are put in the thread of the first post to the channel?)

2 years ago
0 Question About

So basically a list of Path objects?

4 years ago
0 Hi Team, Me Again! I'm Curious If Someone Can Explain To Me Better How Task And Optimisers Integrate With Each Other. In The Example Hyperparameter Optimisation, There Is Both A Task Initialised With

Hi LudicrousParrot69
A bit of background:
A Task is a job executed in the system (sometimes a training experiment, sometimes a controller like the pipeline). Basically, every process can be a Task.
Specifically, the pipeline controller itself (i.e. the process running the Bayesian optimization) is a Task in the system (i.e. a running job). What it does (using the HyperParameterOptimizer) is clone previously executed Tasks (e.g. training experiments), change their parameters, and moni...
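The answer above is truncated; as a rough illustration of that flow, here is a hedged sketch of a controller Task using HyperParameterOptimizer to clone a template training Task and mutate its parameters. Task IDs, parameter names, and values are placeholders, and the imports assume current clearml.automation names:

from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformIntegerParameterRange
from clearml.automation.optuna import OptimizerOptuna

# the controller itself is a Task (a job running in the system)
task = Task.init(project_name='examples', task_name='HP optimizer',
                 task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id='<template_training_task_id>',   # previously executed Task to clone
    hyper_parameters=[
        UniformIntegerParameterRange('General/batch_size', min_value=16, max_value=128, step_size=16),
    ],
    objective_metric_title='validation',
    objective_metric_series='accuracy',
    objective_metric_sign='max',
    optimizer_class=OptimizerOptuna,
    execution_queue='default',                    # queue where the cloned Tasks are enqueued
)
optimizer.start()
optimizer.wait()
optimizer.stop()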

4 years ago
0 Hi Everyone, Is It Possible To Show The Upload Progress Of Artifacts? E.G. I Use

An upload of 11GB took around 20 hours, which cannot be right.

That is very, very slow; 11GB over 20 hours is roughly 152 KB/s...

4 years ago
0 Hi. When Using The Logger's

DistressedGoat23 you are correct: since in the end this becomes a plotly object, extra_layout is meant for general-purpose layout, while this specific entry sits next to the data. Bottom line, can you open a GitHub issue so we do not forget to fix it? In the meantime you can use the general plotly reporting, as SweetBadger76 suggested.
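A minimal sketch of that general plotly reporting, using Logger.report_plotly so the full layout (including this kind of entry) stays under your control; the titles, series names, and data are placeholders:

import plotly.graph_objects as go
from clearml import Logger

fig = go.Figure(data=go.Bar(x=['a', 'b', 'c'], y=[1, 3, 2]))
fig.update_layout(title='My custom plot')   # full control over the plotly layout

Logger.current_logger().report_plotly(
    title='custom plots', series='bar', iteration=0, figure=fig
)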

3 years ago
0 Is There A Way To Copy The Parameters From The Tasks In A Pipeline?

StraightDog31 can you elaborate? Where are the parameters stored? Who is trying to access them, and for what purpose?

4 years ago
0 Is It Possible To Report A Static Html To A Task And Have It Shown In The Ui? I Used The Following:

Done, HandsomeCrow5, +1 added 🙂
BTW: if you can share what your reports look like (a screenshot is great), it will greatly help in supporting this feature, thanks.
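A hedged sketch of how a static HTML report can be attached in current ClearML versions, using Logger.report_media; this is not necessarily the exact feature referenced above, and the file name is a placeholder:

from clearml import Task, Logger

task = Task.init(project_name='examples', task_name='html report')

# upload a local HTML file; it appears under the task's debug samples in the UI
Logger.current_logger().report_media(
    title='report', series='static html', iteration=0, local_path='report.html'
)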

5 years ago
0 Does Anyone Get These Junk Logs From Matplotlib While Using Clearml? Is There A Way To Disable It?

StraightDog31 how did you get these?
It seems like they are coming from matplotlib, no?

4 years ago
0 Is Anyone Also Experiencing Network Error During Every Clearml Dataset Download? It's Been A While And Almost Every Download Fails...

Hmm BitterStarfish58, what's the error you are getting?
Any chance you are over the free tier quota?

3 years ago
0 Hello All, I'm Trying To Adapt Clearml With My Workflow. I Installed A Server At My Server, With Workers Attached To It. I'm Trying To Execute A Task From My Local Within One Of My Workers. Trying To Use Docker Mode And A Custom Image. I Also Have A Local

The driver script (the one that initializes models and starts a training sequence) was not in a git repo; besides that one, everything is.

Yes, there is an issue when you have both a git repo and a totally uncommitted file: since ClearML can store either a standalone script or a git repository, the mix of the two is not actually supported. Does that make sense?

3 years ago
0 Hello! I'm Wondering If There Is An Option To Run A Termination Hook Script

Ohh I see, so basically the ASG should check whether the agent is idle, rather than whether the Task is running?

3 years ago
0 Is There A Way To Get The Most Updated

yes 🙂
But I think that when you get internal_task_representation.execution.script you are basically already getting the API object (obviously with the correct version), so you can edit it in place and pass it as well.

5 years ago
0 Hi Everyone, I'm Using The

So as you say, it seems hydra kills these

Hmm let me check in the code, maybe we can somehow hook into it

3 years ago
0 Clearml-Session Fails Ssh Tunneling. It Does Not Use Key Auth, Instead Sets Up Some Weird Password And Then Fails To Auth:

Btw it seems the docker runs in network=host

Yes, this is so that if you have multiple agents running on the same machine, they can find a new open port 🙂

I can telnet the port from my mac:

Okay this seems like it is working

3 years ago
0 Hey All, Is There Any Reason The Python Sdk

It only happens in the clearml environment; it works fine locally.

Hi BoredHedgehog47
What do you mean by "in the clearml environment"?

3 years ago
0 Hi Everyone, I'm Using Clearml-Serving With Triton And Have A Couple Of Questions Regarding Model Management:

Hi NarrowWoodpecker99

Once a model is loaded into GPU memory for the first time, does it stay loaded across subsequent requests,

Yes, it does.

Are there configuration options available that allow us to control this behavior?

I'm assuming you're thinking of dynamically loading/unloading models from memory based on requests?
I wish Triton added that 🙂 this is not trivial, and in reality, to be fast enough, the model has to live in RAM and then be moved to the GPU (...

one year ago