AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 6 months ago

Reputation: 0
Badges: 1
25 × Eureka!
0 Hi Anyone

The latest image seems to require driver version 460+ on the host.
Try this one:
https://docs.nvidia.com/deeplearning/triton-inference-server/release-notes/rel_20-12.html#rel_20-12

3 years ago
0 How Can I Ensure Tasks In A Pipeline Have The Same Environment As The Pipeline Itself? It Seems A Bit Counter-Intuitive That The Pipeline (Executed Remotely) Captures The Local Environment, But The Tasks (Executed Remotely) Do Not Use That Same Environmen

How or why is this the issue?

The main issue is a missing requirement on the Task component, and this is why it is failing.
You can, however, manually specify the package (and I'm assuming this will solve the issue), but it should have been auto-detected, no?
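For reference, a minimal sketch of manually specifying the requirement on a single pipeline step, assuming the pipeline is built with PipelineDecorator (the step name and package pin below are placeholders, not the poster's actual code):

from clearml.automation.controller import PipelineDecorator

# Hypothetical component: packages=[...] manually specifies what the agent
# should install for this step, instead of relying on auto-detection.
@PipelineDecorator.component(return_values=["n_rows"], packages=["pandas>=1.5"])
def load_data(csv_path):
    import pandas as pd  # imported inside the step so the remote run resolves it
    return len(pd.read_csv(csv_path))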

one year ago
0 Hii Everyone! I'M Having An Issue Using An Agent Without A Gpu. I'M Using It On Docker Mode (To Allow Ssh), I Changed The Default Docker Image On The Config File To Python 3.9.6 But It Seems It Is Still Trying To Use The Nvidia Image. The Error Message G

Hi GrotesqueOctopus42 ,

BTW: is it better to post the long error message on a reply to avoid polluting the channel?

Yes, that is appreciated πŸ™‚
Basically, post the logs in the thread of the initial message.

To fix this I had to spin up the agent using the --cpu-only flag (--docker --cpu-only)

Yes, if you do not specify --cpu-only it will default to trying to access GPUs.
Nice!

one year ago
0 Hello! I'M Trying To Make A Simple Eval.Py Script That Will Go Pull The Best Model Of A Given Experiment, Load It Locally And Evaluate It On Whatever Data I Give. Question 1: Is There A Standard Way Documented Somewhere To Do This? Question 2: I'M Loadin

Wait, that makes no sense to me. The API from Python and the API from the UI are getting the same data from the backend...
What do you get with:
from clearml import Task
task = Task.get_task(task_id=<put task id here>)
print(task.models)
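To go one step further and load the model locally, a rough sketch (assuming the experiment registered at least one output model; the task ID is a placeholder and picking the last model is an assumption):

from clearml import Task

task = Task.get_task(task_id="<put task id here>")
output_models = task.models["output"]      # Model objects the task registered as outputs
best_model = output_models[-1]             # assumption: the last registered model is the one wanted
local_path = best_model.get_local_copy()   # downloads the weights file and returns its local path
print(local_path)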

one year ago
0 Hi Guys, Any Plan To Integrate The

Hi JitteryCoyote63
Wait a few hours, there is a new fix, I'll make sure we upload it later today (scheduled to be there anyhow, I'll push it forward)

4 years ago
0 Hi, I’M Having Troubles Initializing Connection To Clearml (“Error: Could Not Verify Credentials:“). Who Can Help? Thanks

IrateBee40 I think I have an idea what's wrong. Could it be there is some firewall in the middle intercepting the network, and without installing an SSL certificate the HTTPS connection is failing?

2 years ago
0 Hello Folks! We Have Started Using Clearml In Kubernetes. The Trainings Are Run In K8S With Help Of K8Sintegration And Some Custom Coding. Now For The Clearml-Session Tasks, A Port-Forward Should Be Done Each Time If I Need To Access The Jupyter Notebook

. I’m using the default operation mode which uses kubectl run. Should I use templates and specify a service in there to be able to connect to the pods?

Ohh the default "kubectl run" does not support the "ports-mode" 😞

There’s a static number of pods which services are created for…

You got it! πŸ™‚

3 years ago
0 I Found Here

Do you mean it recently became part of the enterprise version?

I do not think so, but it seems the support for the open-source version is more like a PoC
https://github.com/allegroai/clearml-agent/blob/master/examples/k8s_glue_example.py

one year ago
0 Hi, Currently We Can Add "Tags" On Experiments. When Filtering The Tags In The Dashboard, It Seems To Default To Filter As A "Or" Condition, Is It Possible To Search With "And" Condition, Such As Search With "Dataset_Version1 + Nn_Model"

Hi EnviousStarfish54
Verified with the frontend/backend guys: the backend allows searching for "all" tags, and the frontend will add a toggle button to the UI to select or/all for the selected tags.
It should be part of the next release.
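For reference, fetching experiments by tag from the SDK looks roughly like this (project and tag names are placeholders; whether a list of tags is matched as "or" or "all" is exactly the behavior discussed above and may depend on the server/SDK version):

from clearml import Task

# Hypothetical project and tag names.
tasks = Task.get_tasks(
    project_name="My Project",
    tags=["dataset_version1", "nn_model"],
)
for t in tasks:
    print(t.id, t.name)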

3 years ago
0 Hi

Hi SarcasticSparrow10, so yes it does; this is more efficient when using PyTorch loaders and in some other situations.
To disable it, add to your clearml.conf:
sdk.development.report_use_subprocess = false
2. Interesting error, maybe we can revert to "thread mode" if running under a daemon. (I have to admit, I'm not sure why Python has this limitation, let me check it...)

3 years ago
0 Hi Folks! I'M Using  

ExcitedFish86 0.17.5rc3 should fix this issue.
This is what I'm getting:

3 years ago
0 Hi, I Encountered A Few Problems:

FierceFly22 wow, that is a cool hack! Trains will capture any torch.save call, so I think the actual driver here is the 'model.summary'. You can also upload any artifact with task.upload_artifact('name', 'modelsummary.txt')
Touching a file will not trigger Trains, as it does not monitor the files themselves. Make sense?
BTW, how will you get the file when running with the agent? If you are using connect_configuration it will be downloaded from the trains-server for you. Otherwise you can alw...
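As a rough sketch of both options mentioned above, using the current clearml package name (this thread predates the rename from trains; project, task, and file names are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="model summary")  # placeholder names

# Option 1: upload the summary file as an artifact attached to the task.
task.upload_artifact("model_summary", artifact_object="modelsummary.txt")

# Option 2: connect it as a configuration file; when an agent runs the task
# remotely, connect_configuration() returns a local copy downloaded from the server.
config_path = task.connect_configuration("modelsummary.txt", name="model summary")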

4 years ago
0 Hi, I'M Getting A Lot Of The Following Logs

PompousBeetle71 kudos on the solution!
What were the loggers you ended up setting?
I'd like to make sure we fix this issue

4 years ago
0 Does Clearml Creates Separate Virtual Environments For Each Pipeline Steps When Running Remotely?

This means all the components of the pipeline use the exact same packages, and then it will just reuse the venv. Make sense?

one year ago
0 Hi Everyone! I Have A Short Question That You Can For Sure Help Me With. Is There A Way To Avoid Each Task To Create A New Environment? I'D Like To Specify Which Env To Use. I Tried With
ERROR: Could not install packages due to an EnvironmentError: 
[Errno 28] No space left on device

BTW: @<1523703080200179712:profile|NastySeahorse61> this sounds like Docker running out of space on the main disk (/var/) where it stores all the images and temp file systems.
This will cause your code to fail, as any runtime change to the container file system will raise this out-of-disk-space error.

2 years ago
0 Hi, I Need Your Help Setting Up An Trains Agent Running In Docker. I Have An Python Script Calling Wget As System Command Which Runs Fine On My Dev Engine. When Cloning The Experiment And Scheduling It Into The Services Queue I Get An Error That The Call

WickedGoat98
Put the agent.docker_preprocess_bash_script in the root of the file (i.e. you can just add the entire thing at the top of the trains.conf)

Might it be possible that I can place a trains.conf in the mapped local folder containing the filesystem and mongodb data etc e.g.

I'm assuming you are referring to the trains-agent services; if this is the case, sure you can.
Edit your docker-compose.yml, under line https://github.com/allegroai/trains-server/blob/b93591ec3226...

3 years ago