ManiacalLizard2
Moderator
40 Questions, 298 Answers
  Active since 05 June 2023
  Last activity 2 months ago

Reputation: 0
Badges: 1 (113 × Eureka!)
0 Hi everyone, I've set up a ClearML server on AWS EC2 and configured output_uri to log everything to S3. However, I just noticed that the input model captured by ClearML is being stored on the EC2 instance instead of S3.

Is it because your training code downloads the pretrained model from PyTorch (or wherever) to local disk in /tmp/xxx and then trains from there? In that case ClearML will just reference the local path.

I think you need to manually download the pre-trained model, then wrap it with a ClearML InputModel (e.g. here),
and then use that InputModel as the pre-trained model.
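
A rough sketch of that idea (the URL, file names and project/task names below are placeholders, and the exact InputModel arguments may differ in your ClearML version):

from clearml import Task, InputModel

task = Task.init(project_name='examples', task_name='train-from-pretrained')

# register the pre-trained weights as a ClearML InputModel
# (weights_url is a placeholder; it can point to s3://, https://, or a copy you control)
pretrained = InputModel.import_model(
    name='resnet50-pretrained',
    weights_url='s3://my-bucket/pretrained/resnet50.pth',
)
task.connect(pretrained)

# get a local copy of the weights and load them with your framework as usual
local_weights = pretrained.get_weights()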

Maybe the ClearML staff have a better approach? @<152370107039036...

7 months ago
0 Hi everyone, I'm testing ClearML and encountered an issue when launching the agent in docker mode: it seems to ignore additional Docker arguments. For example, when I run:

That --docker_args option seems to be for clearml-task, as described here, while you are using clearml-agent, which is a different thing.
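
If the goal is to pass extra Docker arguments to tasks run by clearml-agent, one option (as far as I know; check the agent docs for your version) is the agent section of clearml.conf, e.g.:

agent {
    # extra arguments appended to the docker run command for every task
    extra_docker_arguments: ["--ipc=host", "--shm-size=8g"]
}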

8 months ago
0 I have set

@<1523701205467926528:profile|AgitatedDove14> About why we stay on 1.12.2 : None

one year ago
0 Regarding the open source, self-hosted version of the

The agent inside the docker compose is just a handy one to serve the services queue, where you can queue all your "clean up" tasks that are not deep-learning related and use only a bit of CPU.
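
For example, a lightweight maintenance script can be sent to that queue roughly like this (the 'services' queue name is the docker-compose default; project and task names are placeholders):

from clearml import Task

# a small, CPU-only housekeeping task
task = Task.init(project_name='maintenance', task_name='cleanup-old-artifacts')

# stop the local run and re-enqueue it on the built-in services queue,
# where the docker-compose agent will pick it up
task.execute_remotely(queue_name='services', exit_process=True)

# ... cleanup logic goes here, executed by the services agent ...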

2 years ago
0 Hello, all. I've recently started experiencing a weird issue with arg parsing where any string values are being repeated as lists of strings when the values are sent to the ClearML server (see attached screenshot). I believe this issue started around the

Found the issue: my bad practice for imports 😛
You need to import clearml before creating the argument parser. Bad way:

# bad way: clearml is NOT imported before the parser is created,
# so it cannot hook argparse and capture the arguments correctly
import argparse

def handleArgs():
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--config-file', type=str, default='train_config.yaml',
                        help='train config file')
    parser.add_argument('--device', type=int, default=0,
                        help='cuda device index to run the training')

    args = parser.parse_args()
    return args
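
The working version, as far as I understand the fix, simply moves the clearml import (and ideally the Task.init call) above the parser setup so ClearML can hook argparse before the arguments are parsed:

# good way: import clearml / call Task.init before building the parser
from clearml import Task
import argparse

task = Task.init(project_name='examples', task_name='train')  # placeholder names

def handleArgs():
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--config-file', type=str, default='train_config.yaml',
                        help='train config file')
    parser.add_argument('--device', type=int, default=0,
                        help='cuda device index to run the training')
    return parser.parse_args()

args = handleArgs()
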
2 years ago
0 Hello, I am struggling understanding the docs and hope I can get a quick answer here: is it possible to utilise multiple GPUs in parallel for hyperparameter optimization for the same base experiment without the Pro plan? I started an agent with clearml-ag

What about having 2 agents, one on each GPU, on the same machine, serving the same queue? So that when you enqueue, whichever agent (and thus GPU) is available will take the new task.
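
A minimal sketch of that setup (queue name is a placeholder; check clearml-agent daemon --help for the exact flags in your agent version):

# one agent pinned to each GPU, both listening on the same queue
clearml-agent daemon --queue default --gpus 0 --detached
clearml-agent daemon --queue default --gpus 1 --detached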

2 years ago
0 Another quick question about fileservers and clearml-agent: clearml-agent seems to ignore the output destination set in the task config

If you are using multiple storage locations, I don't see any other choice than putting multiple credentials in the conf file, whether it's the free or the paid ClearML server ...
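
Roughly, the per-bucket credentials section of clearml.conf looks like this (bucket names, keys and secrets are placeholders):

sdk {
    aws {
        s3 {
            credentials: [
                { bucket: "team-a-bucket", key: "AKIA...", secret: "..." },
                { bucket: "team-b-bucket", key: "AKIA...", secret: "..." },
            ]
        }
    }
}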

2 years ago
0 Hi ClearML team, is there a way to overwrite working_dir when creating a task from the Task.init() workflow? The underlying function I am triggering relies on the assumption of running from a certain directory.

the underlying code has this assumption when writing it

That means you want to make things work in a non-standard Python way ... in which case you need to do "non-standard" things to make it work.
You can, for example, do this at the beginning of your run.py:

import sys
import os

# make the parent directory importable regardless of the current working directory
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

In this way, you are not relying on a non-standard feature being implemented by your tool like PyCharm or `cle...

8 months ago
0 Hi everyone! I discovered that uploading model artifacts at each checkpoint to the ClearML server significantly slows down training. So I set

You should know where your latest model is located, so you can just call task.upload_artifact on that file?
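
Something along these lines (the artifact name and checkpoint path are placeholders):

from clearml import Task

task = Task.current_task()

# upload only the latest checkpoint instead of every intermediate one
task.upload_artifact(name='latest-checkpoint', artifact_object='checkpoints/last.pt')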

2 years ago
0 Hi, we have an agent running inside an NVIDIA official container. The agent seems to see the GPU driver but the GPU count is 0. When I join that container,

The weird thing is that GPU 0 seems to be in use, as reported by nvtop on the host, but it is 50% slower than when running directly instead of through the clearml-agent ...

one year ago
0 Hi ClearML team, is there a way to overwrite working_dir when creating a task from the Task.init() workflow? The underlying function I am triggering relies on the assumption of running from a certain directory.

What exactly are you trying to achieve?

Let's assume that you have Task.init() in run.py
And run.py is inside /foo/bar/

If you run :

cd /foo
python bar/run.py

Then the task will have the working folder /foo

If you run:

cd /foo/bar
python run.py

Then your task will have the working folder /foo/bar

8 months ago
0 Another question related to

it is actually in the repo root folder.

2 years ago
0 Hi! I have noticed that the clearml-elastic container consumes 32.82GiB of memory. This seems

I think ES uses a greedy strategy where it allocates memory first and then uses it from there ...
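
If you want to cap it, one common approach (assuming the standard ClearML docker-compose.yml, where the Elasticsearch service reads ES_JAVA_OPTS) is to limit the JVM heap, for example:

# elasticsearch service in docker-compose.yml -- heap size values are just an example
environment:
  ES_JAVA_OPTS: "-Xms2g -Xmx2g"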

8 months ago
0 I am trying to run one agent on my local machine and one agent on a VM

because when I was running both agents on my local machine everything was working perfectly fine

This is probably because you (or someone) had set up an SSH public key with your git repo sometime in the past.

2 years ago
0 I have set

clearml==1.12.2
clearml_agent v1.8.1rc2

one year ago
0 How to tell the ClearML server to use cloud storage (Azure)? I have a ClearML server deployed with docker-compose. As per instruction

@<1523701087100473344:profile|SuccessfulKoala55> Is it even possible to have the server store files to a given blob storage?

2 years ago
0 Hi! I'm a DevOps engineer. My company is self-hosting ClearML on Kubernetes. I'm a ClearML newbie, so pardon my ignorance. I'm a little confused by what ClearML artifacts (see screenshot below) and custom models are. Are they one and the same? Where are

An artifact can be anything that you upload to storage via the ClearML SDK. Which storage is used is defined by your clearml.conf (with its credentials); the ClearML web and API servers do not store those files.

A model is a special kind of artifact: None
For example, there is the lineage feature: if you train model B using model A as a starting point (aka pre-trained), and model C from model B, ... the lineage will track that model C was built on...
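
A rough illustration of the difference (file paths and names are placeholders):

from clearml import Task, OutputModel

task = Task.init(project_name='examples', task_name='artifact-vs-model')

# a generic artifact: any file or object you want stored alongside the task
task.upload_artifact(name='confusion-matrix', artifact_object='reports/confusion_matrix.csv')

# a model: a special artifact with framework metadata, lineage and a registry entry
output_model = OutputModel(task=task, framework='PyTorch')
output_model.update_weights(weights_filename='checkpoints/model_b.pt')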

one year ago
0 I have set

1.12.2 because of a bug that makes fastai lag 2x
1.8.1rc2 because it fixes an annoying git clone bug

one year ago
0 Hi everyone, I'm new to ClearML and server administration. We are considering tools to manage a DGX H100 server. Ideally, the tool could provide "sandboxes" that are already equipped with all the necessary tools and frameworks. This way, each team member

If you want to replace MLflow with ClearML: do it!! It's like asking "should I use sandals or running shoes for my next marathon ..."
Let your users try ClearML, and I am pretty sure all of them will want to swap over!!!

one year ago