AgitatedDove14 — Moderator
49 Questions, 8126 Answers
Active since 10 January 2023 · Last activity one year ago
0 Good Morning, I'm Wondering If Someone Has Any Advice/Experience Configuring Clearml-Agent To Include Private Packages From AWS CodeArtifact? So Far I Know I Have To Edit The

Correct:
extra_docker_shell_script: ["apt-get install -y awscli", "aws codeartifact login --tool pip --repository my-repo --domain my-domain --domain-owner 111122223333"]
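For reference, a minimal sketch of where this could sit inside the agent section of clearml.conf (the surrounding layout is assumed from the standard agent config template; only the two shell commands above come from the answer):

agent {
    # assumption: these commands run inside the task container before the task starts
    extra_docker_shell_script: [
        "apt-get install -y awscli",
        "aws codeartifact login --tool pip --repository my-repo --domain my-domain --domain-owner 111122223333"
    ]
}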

4 years ago
0 For Remote Execution Where The Queue Has

I think poetry should somehow return an error if the toml is "empty", then we can detect it...

2 years ago
0 Has Anyone Had Success Using Clearml With Huggingface Models? I Create My Hf

I solved the issue by implementing my own ClearML logger

This is awesome! any chance you want to PR it to transformers ?
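(A minimal sketch of what such a custom logger could look like, assuming a transformers TrainerCallback that forwards metrics to the current ClearML task; the class name and series label are illustrative, not the poster's actual implementation.)

from clearml import Task
from transformers import TrainerCallback

class ClearMLLoggerCallback(TrainerCallback):
    # hypothetical minimal logger: report trainer metrics to the active ClearML task
    def on_log(self, args, state, control, logs=None, **kwargs):
        task = Task.current_task()
        if task is None or not logs:
            return
        for name, value in logs.items():
            if isinstance(value, (int, float)):
                task.get_logger().report_scalar(
                    title=name, series="train", value=value, iteration=state.global_step
                )

It would then be passed to the trainer via its callbacks argument, e.g. Trainer(..., callbacks=[ClearMLLoggerCallback()]).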

2 years ago
0 Hello Everybody, I Would Like To Start Off By Saying That I Absolutely Love Clearml. I Am Getting Familiar With Clearml Datasets And I Have A Quick Question. Is It Possible To Download Individual Files From A Dataset Without Downloading The Entire Datase

I think that by default the zipped package files are 0.5GB
(you can control it, None, look for --chunk-size)
I think the missing part of the API is understanding which chunk your specific file is stored in.
You can do something like:

from clearml import Dataset

ds = Dataset.get(...)  # arguments elided, as in the original
the_artifact_chunk_I_need = ds.file_entries_dict["my/file/here"].artifact_name

wdyt?
maybe worth adding an interface?
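(A possible continuation of that idea, assuming the dataset's chunks are stored as artifacts on its backing task and that the dataset id doubles as that task's id; both are assumptions, and the id/path values are placeholders.)

from clearml import Dataset, Task

ds = Dataset.get(dataset_id="<dataset_id>")
entry = ds.file_entries_dict["my/file/here"]
# assumption: the chunk holding the file is an artifact on the dataset's backing task
backing_task = Task.get_task(task_id=ds.id)
chunk_zip = backing_task.artifacts[entry.artifact_name].get_local_copy()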

2 years ago
0 I Would Like To Use Clearml Together With Hydra Multirun Sweeps, But I’m Having Some Difficulties With The Configuration Of Tasks.

Hmm @<1523701279472226304:profile|SoreHorse95> this is a good point, I think you are correct we need to fix that,

  • Could you open a GitHub issue so this is not forgotten ?
  • As a workaround I would use clone=True, then after the call I would call task.close() on the original task, wdyt?
2 years ago
0 Our Mac Users Are Having Some Issues. They Have Their Respective ~/Clearml.Conf, And Yet They Get: Clearml 1.1.5

The thing I don't understand is how come this DOES work on our linux setups

I do not think it actually works... I could not find any code that will convert the ENV in the config string ...

I'll be happy to test it out if there's any commit available?

Please do, and feel free to PR it 😍
https://github.com/allegroai/clearml/blob/d3e986393ac8d1a1ea48302224962570ab8e6f9e/clearml/backend_api/session/session.py#L576
https://github.com/allegroai/clearml/blob/d3e98639...

3 years ago
0 How Can I Tell Clearml-Agent Not To Run Pip Install Unless My Requirements.Txt File Was Changed. It Seems To Run Pip Install Every Time I Run A Task Although Nothing Has Changed...

@<1577468638728818688:profile|DelightfulArcticwolf22>

How can I tell clearml-agent not to run pip install unless my requirements.txt file was changed.

The agent has a built-in cache; it will reuse the previous venv if nothing changed (the cache is local on the agent's machine).
Make sure this line is not commented:
None

2 years ago
0 Hello Everyone, I'M Working On Building A Training Pipeline Using Clearml And I'M Encountering Some Challenges In Assembling The Pipeline.

for example, one notebook will be dedicated to explore columns, spot outliers and create transformations for specific column values.

This actually implies each notebook is a standalone "process", which makes a ton of sense. But this is where notebooks and proper SW design break: in traditional SW the notebooks would be python files, and then of course you can import one from another; unfortunately this does not work in notebooks...

If you are really keen on using notebooks I wou...

2 years ago
0 Hi. I'm Using

task = Task.current_task()
Will get me the task object. (right?)

PanickyMoth78 yes, always, from anywhere, this is a singleton object 🙂
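(A tiny illustration of that; the project/task names are made up.)

from clearml import Task

task = Task.init(project_name="examples", task_name="singleton-demo")
# ... later, anywhere in the same process ...
assert Task.current_task() is task  # same singleton object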

3 years ago
0 Is It Possible To Avoid The Clearml-Agent For Local Installations, And Have The File Server Automatically Use An S3 Bucket? I've Found

${PWD} works!

This will be resolved on every call to Task.init (so I would recommend against it); how about "$HOME/" ?

3 years ago
0 Hi, Anyone Seen This Issue?

On the machine running the docker-compose (i.e. the clearml-server)

4 years ago
0 If I Do 

Hi ElegantCoyote26
Try:
task = Task.create(....)
task.output_uri = " ..."

3 years ago
0 Hi All, Clearml Is My Goto-Tool That Watches All Experiments Behind The Scenes! I Came Here After Trying/Testing Sacred, Dvc, Mlflow, Keepsake, Testtube.... Question -- The "Description" Column In The Experiment Dashboard Is Useful. Is There A Way To Pr

Thank you JuicyOtter4 ! 😍

. Is there a way to programmatically set that in the code?

Something like?
task = Task.init(...)
task.set_comment("best thing ever")
(probably we should change that to "description" ?!)

2 years ago
0 Hi All, I Was Trying To Use Clearml-Task To Run A Custom Docker(With Poetry To Install All The Python Dependencies And Activated The Environment) Using Clearml Gpu, But It Seems Like Clearml Always Create A Virtual Environment And Run The Python Script Fr

well I do not think you set your pytorch lightning to use cuda:

GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/code/.venv/lib/python3.9/site-packages/lightning/pytorch/trainer/setup.py:176: PossibleUserWarning: GPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='gpu', devices=1)`.
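Following the warning quoted above, the fix on the lightning side would be along these lines (a sketch; the rest of the Trainer arguments are omitted):

from lightning.pytorch import Trainer

# tell lightning to actually use the available GPU, as the warning suggests
trainer = Trainer(accelerator="gpu", devices=1)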
2 years ago
0 I'm Trying To Set Up Clearml Server On A New VM But The Elasticsearch Container Is Erroring With The Following:

WittyOwl57 could it be the EC2 instance is too small (i.e. not enough storage / memory) ?

2 years ago
0 I’m Getting These Errors When Using Agent In Docker Mode

clearml-agent daemon --detached --queue manual_jobs automated_jobs --docker --gpus 0

If the user running this command can run "docker run", then you should be fine

4 years ago
0 Hi, I Am Running Clearml Agent Using Sdk. When I Run A Remote Job On This Clearml Agent, The Venv Setup Is Totally Based On My Requirements.Txt Instead Of Adding On To What The Image Has Before. Why?

Hi @<1523701304709353472:profile|OddShrimp85>

the venv setup is totally based on my requirements.txt instead of adding on to what the image has before. Why?

Are you using the agent in docker mode? If this is the case, it creates a venv inside the docker, inheriting from the preinstalled docker system packages.

2 years ago
0 Hi, I Am Trying To Setup An Auto Scaler, But I Am Getting The Following Dependency Error:

Hi SkinnyPanda43
Can you attach the full log?
The clearml-agent is installed before your requirements.txt, so at least in theory it should not collide

2 years ago
0 Hey! Is There Way To Get Latest/Best Checkpoint From Another Task (I Know Task Id)? I Know How To Get Data From Artifacts:

Hi FlatOctopus65
You are almost there
from clearml import Task

prev_task: Task = Task.get_task(task_id="<prev_task_id_here>")
model = prev_task.models["output"][-1]
my_check_point = model.get_local_copy()

3 years ago
0 Hi, I Try To Execute Pipeline With Pipelinecontroller And Define It Like This: Pipe = Pipelinecontroller(

yes thanks , but if I do this, the packages will be installed for each step again, is it possible to use a single venv?

Notice that the venv is cached on the clearml-agent host machine (if this is the k8s glue, make sure to set up the cache as a PV to achieve the same).
This means there is no need to worry about that, and this is stable.
That said, if you have an existing venv inside the container, just add docker_args="-e CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/path/to/bin/python"
Se...
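A sketch of how that could look when defining a pipeline step; the pipeline/step/function names and the docker image are made up, only the docker_args value comes from the answer above:

from clearml import PipelineController

def train_step():
    # placeholder step body
    print("training...")

pipe = PipelineController(name="demo-pipeline", project="examples", version="1.0.0")
pipe.add_function_step(
    name="train",
    function=train_step,
    docker="my-registry/train-image:latest",
    docker_args="-e CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/path/to/bin/python",  # reuse the venv baked into the image
)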

9 months ago
0 Good Morning Folks, I Am Setting Up Clearml On A (Self-Hosted) K8S Cluster Using The

SarcasticSquirrel56

if I configure manually the pods for the different nodes, how do I make clearml server aware that those agents exist?

Basically the agents register themselves on your clearml-server, and they register which Queue(s) they listen to. In other words, the interface for choosing between the different types of machines/gpus is enqueuing the Task to different queues.
For example: Queue(1): "CUDA11_GPUx1" , Queue(2): "CUDA10_GPUx1"
Make sense ?
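To make this concrete, a minimal sketch of pushing an existing task to one of those queues (the task id is a placeholder; the queue name is taken from the example above):

from clearml import Task

task = Task.get_task(task_id="<task_id>")
Task.enqueue(task, queue_name="CUDA11_GPUx1")  # only agents listening on this queue will pick it up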

EDIT:

I guess to achieve what I w...

3 years ago
0 Is It Possible To Disable Vcs-Cache? I Tried To Change Value From True To False In The Trains.Conf, But It Does Not Affect Anything. I Want To Disable It, Because It Gives Error When I Run A Project Firstly On Docker Then On Venv.

Hi MysteriousBee56 ,
Yes, this is a permissions issue: the docker creates all folders as root (as it is the root user running inside the docker). Then when you execute in venv mode, you are running from your own user, which obviously cannot change root-created folders.

5 years ago
0 Hello, Is There A Way To Update A Task Diff Programmatically? Eg, I'm Creating A Task Using

store_code_diff_from_remote

 don't seem to change anything in regards of this issue

Correct, it is always from remote

i'll be using the update_task, that worked just fine, thanks 

 (edite

Sure thing.

ShakyJellyfish91 , I took a quick look at the diff between the versions; can you hack a non-working version (preferably the latest) and verify the issue for me?

4 years ago
0 In Ui Under Execution Tab, I See That The Trains Has

PompousParrot44 I assume the folder structure is something like:
repo_root:
--> test
-----> scripts
If this is the case, make sure the "working directory" is "." which means repository root

5 years ago
0 I Am Trying Pytorch Nightly Again With Python 3.10. Works Fine Locally, But Fails On Clearml-Agent In Docker Mode.

So this is very odd, it looks like a pip bug:
The agent is trying to install torch==2.1.0.* because by default it ignores the 4th+ parts (they are unstable and torch has a tendency to remove them), and for some reason pip will not match 2.1.0.* with, for example, "2.1.0.dev20230306+cu118";
but based on the docs it should work:
see here: None

As a workaround you can always edit and change to the final url for example: so ...

2 years ago