CostlyOstrich36
Moderator
0 Questions, 4175 Answers
Active since 10 January 2023
Last activity 2 years ago

Reputation: 0
0 Hi! For

Hmmmm, fair point.

3 years ago
0 I Get An

I just want to make sure that the file itself is valid first 🙂

3 years ago
0 I Get An

Can you try with full path and not relative?

3 years ago
0 Hi All, I Am Using

Hi @<1600661428556009472:profile|HighCoyote66>, ClearML Agent will try to find the Python version dynamically and then fall back to the most basic one it can find. My suggestion is to run everything in docker mode (--docker) so the Python version can be set by the docker image.
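
A minimal sketch of what that could look like from the experiment side, assuming the agent was launched in docker mode and that "python:3.10-slim" is only an illustrative image name:

from clearml import Task

# Assumes the agent was started in docker mode, e.g.:
#   clearml-agent daemon --queue default --docker
task = Task.init(project_name="examples", task_name="pin python via docker image")

# Ask the agent to run this task inside a container whose Python version
# is the one we want ("python:3.10-slim" is only an illustrative image).
task.set_base_docker("python:3.10-slim")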

one year ago
0 Hi, Anyone Know How To Report Scalars With Tensorflow? Thanks

Do you mean reporting scalars with tensorflow OR having the reported tensorflow scalars show up on ClearML?
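
If the intent is to report scalars to ClearML explicitly (rather than relying on the automatic TensorBoard capture), a minimal sketch could look like the following; the project, task, title and series names are only illustrative:

from clearml import Task

task = Task.init(project_name="examples", task_name="manual scalar reporting")
logger = task.get_logger()

# Report one scalar point per iteration (names and values are illustrative).
for iteration in range(10):
    logger.report_scalar(title="loss", series="train", value=1.0 / (iteration + 1), iteration=iteration)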

3 years ago
0 Hi There, Maybe This Was Already Asked But I Don't Remember: Would It Be Possible To Have The Clearml-Agent Switch Between Docker Mode And Virtualenv Mode At Runtime, Depending On The Experiment

Hi JitteryCoyote63, I don't believe this is possible. You might want to open a GitHub feature request for this.

I'm curious, what is the use case? Why not set a default Python docker image at the agent level, and then, when you need a specific image, set it in the experiment configuration?

2 years ago
0 Hey Is There A Way For One To Extend Clearml Somehow? We Have Some Custom Evaluations We Want To Do And Our Ideal Scenario Would Be To Do Them Within Clearml Itself.

Hi @<1535069219354316800:profile|PerplexedRaccoon19>, I'm not sure I follow - can you please elaborate on what you mean by doing the evaluations within ClearML?

one year ago
0 Hi Clearml, Does Clearml Orchestration Have The Ability To Break Gpu Devices Into Virtual Ones?

Hi, do you mean out-of-the-box virtualization of your GPU, or using virtual GPUs on the machine?

3 years ago
0 Ui Suggestion:

DepressedChimpanzee34, which section are you referring to? Can you provide a screenshot of what you mean?

3 years ago
0 Is There Any Way (Or Are There Any Plans) To Include Some View For Datasets In The Webui? One That Is Detached From The Generating Task?

I think something like that exists; it appears to be part of the paid version, called Hyper-Datasets. The documentation is open to all, apparently 🙂

https://clear.ml/docs/latest/docs/hyperdatasets/overview

3 years ago
0 I'm Having Difficulty Understanding How To Handle Modified Files On S3

Hi @<1590514584836378624:profile|AmiableSeaturtle81>, the reason for this is that each file is hashed, and this is how the feature compares files between versions. If you're looking to keep track of specific links, then HyperDatasets might be what you're looking for - None
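
To illustrate the comparison mechanism described above, here is a generic sketch (not ClearML's actual implementation) of detecting a modified file by comparing content hashes; the file paths are hypothetical:

import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Two dataset versions are treated as containing the same file only if the
# content hash matches; a modified file produces a different hash.
old_hash = file_hash(Path("data/v1/train.csv"))
new_hash = file_hash(Path("data/v2/train.csv"))
print("modified" if old_hash != new_hash else "unchanged")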

one year ago
0 Hi, Anyone Also Stuck With The Exception Encountered Uploading Pytorch Model File? The Dataset Upload Works Fine, Though.

Can you verify that your ~/.clearml.conf has the proper configuration? If you run the following, does it work?

from clearml import Task
t = Task.init()

3 years ago
0 I Have A Docker Container That Has Clearml-Agent Running Inside In Normal Mode. The Agent Takes On A Task And Executes It Fine. I Just Want To Somehow Log The Docker Image Version That The Agent Is Running Inside. I Start My Container With Something Like:

Hi @<1576381444509405184:profile|ManiacalLizard2>, is there a specific reason you're running the agent inside a docker container instead of running the agent in docker mode, which would make it spin up a container itself?

one year ago
0 Is There A Way To Serve A Model Via The Sdk Or Rest Api? I Want To Programmatically Serve The Model After Finishing Training It Via The Pipeline. Or Is It A Bad Practice To Do So, Hence Why It's Not Really Exposed?

Hi @<1567321739677929472:profile|StoutGorilla30>, this is a good question. I would assume that the CLI tool uses API calls under the hood. I think you can either look at the code and see what is being sent, or simply run the CLI commands from your code
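
As a hedged sketch of the second option, the clearml-serving CLI could be invoked from Python once training finishes; the sub-command, flags and model ID below are assumptions to be checked against your clearml-serving version:

import subprocess

# Hypothetical: register a trained model with an existing clearml-serving service.
# The exact sub-commands/flags are assumptions - check `clearml-serving --help`.
model_id = "abc123"  # hypothetical ClearML model ID produced by the pipeline

subprocess.run(
    [
        "clearml-serving", "model", "add",
        "--engine", "sklearn",
        "--endpoint", "my_model",
        "--model-id", model_id,
    ],
    check=True,
)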

2 years ago