AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 6 months ago

Reputation

0

Badges 1

25 × Eureka!
0 Hey Everyone, When Uploading With

How can I make it show progress less often/rewrite?

I'm not sure this is configurable ... you mean like reports on the uploads right? (i.e. report every 5mb I think is the default)
while we are at it, maybe we should use tqdm if it is installed
wdyt?

2 years ago
0 Tracking From Experiments To Datasets

Yeah, that makes sense 🙂

one year ago
0 Hey - I'M Trying To Compare Voxel Versus Clear Ml In Image Data Exploration.

Yeah I think using voxel for forensics makes sense. What's your use case ?

one year ago
0 Hey - I'M Trying To Compare Voxel Versus Clear Ml In Image Data Exploration.

I'm hoping i can find an end to end solution that also includes experiment management

Well of course biased here, but ClearML with the hyperdatasets is probably the most complete one.
Specifically for model performance analysis I would add the voxel open-source tool to dissect specific results, but the combination of the abstraction and query capabilities of hyperdatasets, orchestration, and experiment management is really unmatched.
(and again of course I'm biased, but really there is n...

one year ago
0 Clearml-Session Question: I’M Using The Tool With An On-Prem Machine. Normal Tasks Are Being Executed Normally - But When Using

Sometimes it is working fine, but sometimes I get this error message

@<1523704461418041344:profile|EnormousCormorant39> can I assume there is a gateway at --remote-gateway <internal-ip> ?
Could it be that this gateway has some network firewall blocking some of the traffic ?
If this is all local network, why do you need to pass --remote-gateway ?

one year ago
0 I Cannot Get The Configuration From A Task: I Run

Hi @<1523704157695905792:profile|VivaciousBadger56>
You should replace

task.mark_completed()

with:

task.close()

To your point

parameters = task.connect(parameters)

Will be retrieved with:

task.get_parameters()

fyi:
connect_configuration -> get_configuration_objects
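
For reference, a minimal sketch of that pattern (project/task names and parameter values here are just placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='params demo')

parameters = {'lr': 0.001, 'batch_size': 32}
parameters = task.connect(parameters)  # logs the dict to the Task

# later, possibly from another process or machine:
fetched = Task.get_task(task_id=task.id)
print(fetched.get_parameters())              # the connected parameters
print(fetched.get_configuration_objects())   # objects stored via connect_configuration

task.close()  # instead of task.mark_completed()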

one year ago
0 I Cannot Get The Configuration From A Task: I Run

In the documentation it warns about

.close()

"Only call Task.close if you are certain the Task is not needed."

Maybe this is not clear enough: it means you do not need to automatically Add/Log/Track things into the Task in the current process.
This does Not mean you cannot access the Task or its artifacts

Mark closed means to externally (i.e. not from the process that created the Task, maybe even from a different machine) close and mark the task as completed (this...

one year ago
0 Is There A Way To Copy The Parameters From The Tasks In A Pipeline?

StraightDog31 can you elaborate? where are the parameters stored? who is trying to access them, and maybe for what purpose ?

3 years ago
0 I Want To Run My Clearml Task On An Agent In K8S Together With A Memory Profiler (Maybe

FiercePenguin76 in the Tasks execution tab, under "script path", change to "-m filprofiler run catboost_train.py".
It should work (assuming the "catboost_train.py" is in the working directory).

3 years ago
0 I Want To Run My Clearml Task On An Agent In K8S Together With A Memory Profiler (Maybe

and I have no way to save those as clearml artifacts

You could do (at the end of the code):

task.upload_artifact('profiler', Path('./fil-result/'))

wdyt?
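
As a self-contained sketch (assuming the profiler writes its output into ./fil-result/):

from pathlib import Path
from clearml import Task

task = Task.current_task()  # the Task created by Task.init() in this process
# upload the whole fil-profiler output folder as a single artifact
task.upload_artifact('profiler', artifact_object=Path('./fil-result/'))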

3 years ago
0 I Want To Run My Clearml Task On An Agent In K8S Together With A Memory Profiler (Maybe

but this will be invoked before fil-profiler starts generating them

I thought it would flush in the background 😞
You can however configure the profiler to a specific folder, then mount the folder to the host machine:
In the "base docker args" section add -v /host/folder/for/profiler:/inside/container/profile

3 years ago
0 Hi, I Am Trying To Run A Task In An Agent From A Repository With An

Hi SkinnyPanda43
Do you mean the clearml-agent or the clearml python (a.k.a. the auto package detection)?

3 years ago
0 Hi! I Am Trying To Build And Run A Pipeline. I Pass My Dataset As Parameter Of Pipeline:

I pass my dataset as parameter of pipeline:

@<1523704757024198656:profile|MysteriousWalrus11> I think you were expecting the dataset_df dataframe to be automatically serialized and passed, is that correct ?
If you are using add_step, all arguments are simple types (i.e. str, int etc.)
If you want to pass complex types, your code should be able to upload it as an artifact and then you can pass the artifact url (or name) for the next step.
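
A rough sketch of that artifact-passing approach (the function and artifact names are placeholders):

from clearml import Task

def step_a():
    df = build_dataframe()  # placeholder for however dataset_df is created
    Task.current_task().upload_artifact('dataset_df', artifact_object=df)

def step_b(producer_task_id):
    producer = Task.get_task(task_id=producer_task_id)
    df = producer.artifacts['dataset_df'].get()  # downloads and deserializes the artifact
    return df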

Another option is to use pipeline from dec...

one year ago
0 Hi, I Failed To Update The "Started At" And The "Completed At" Attributes In The "Info" Tab. I Tried To Do So By The Following Steps:

I failed to update the "STARTED AT" and the "COMPLETED AT" attributes in the "INFO" tab.

I'm not sure this can actually be overridden...

3 years ago
0 Hi Everyone, I Was Looking Into Clearml Integration With Nvidia For Transfer Learning. Does Clearml Have Plans To Integrate With The New Tao? Looks Like Nvidia Is Focusing Tao As A Low Code Transfer Learning Tool With Everything Done In Command Line, Whic

The latest TAO doesn't use python for fine tuning, rather it uses the CLI entirely

It's a good question, but I think the CLI actually just runs python code (the CLI is their interface). Generally speaking I'm pretty sure it will not be complicated to convert the TLT integration to support TAO (Nvidia helps with that, and I think we had a similar process with Nvidia Clara/MONAI)
BTW: how are you using Nvidia TAO ?

2 years ago
0 Hi, Is There A Way To Pull Clearml Datasets To A Mounted Pv Instead Of The Pod'S Local Directory.

Hi @<1523701304709353472:profile|OddShrimp85>
Do you mean Dataset.get_local_copy() ?
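
If so, a minimal sketch pulling a dataset straight into a mounted PV path (the dataset name and mount path are placeholders):

from clearml import Dataset

ds = Dataset.get(dataset_project='my_project', dataset_name='my_dataset')
# copy the dataset content into the mounted PV instead of the default local cache
local_path = ds.get_mutable_local_copy(target_folder='/mnt/my-pv/dataset')
print(local_path)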

one year ago
0 Hi. I Have A

Are you saying this component should pull a specific git repo?
PipelineDecorator.component( ..., )
seems like there is no reference to a specific repo (the repo and repo_branch arguments, etc., are missing), is that correct?
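
For comparison, a sketch of a component that does pull a specific repo (assuming a clearml version where the decorator exposes repo / repo_branch; the URL, module, and function body are placeholders):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    return_values=['result'],
    repo='https://github.com/your-org/your-repo.git',
    repo_branch='main',
)
def my_step():
    # code here runs inside a clone of the referenced repo
    from my_module import run_something
    return run_something()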

2 years ago
0 Has Anyone Used

Hmm, it seems to fit the code, 1x784 with float32, no?

2 years ago
0 Hi, I Was Uploading An Image Artifact Using The Following But In The Preview I Only Get An Array Instead Of An Image. Am I Doing Something Wrong? ``` Im=Cv2.Imread('Pic.Jpg') Task.Upload_Artifact('Myimage',I'M) ```

Hi SubstantialElk6
You are uploading an artifact, a good use case for numpy artifact would be a feature table.
If you want to upload an image use either report_media or report_image or upload PIL image as artifact.
What do you think?
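
For example, a quick sketch using the Logger (the file name is a placeholder):

import cv2
from clearml import Task

task = Task.init(project_name='examples', task_name='image report')
im = cv2.imread('pic.jpg')  # note: cv2 loads BGR; convert with cv2.cvtColor if colors matter
# shows up as an actual image preview instead of a raw array
task.get_logger().report_image(title='my image', series='pic', iteration=0, image=im)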

3 years ago
0 Maybe This Is More A Git Question Than A Clearml Question, But How Do I Get The Clearml_Agent_Git_User And Clearml_Agent_Git_Pass For Step 11 In

@<1523710674990010368:profile|GreasyPenguin14> make sure it uses https, not ssh:
edit ~/clearml.conf

force_git_ssh_protocol: false

and that you have both git_user & git_pass set in your clearml.conf
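
i.e. the agent section of ~/clearml.conf should look roughly like this (credentials are placeholders):

agent {
    # use https with the credentials below instead of SSH keys
    force_git_ssh_protocol: false
    git_user: "my-git-username"
    git_pass: "my-personal-access-token"
}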

3 years ago
0 Hey, I Hope This Is The Right Place To Ask. We'Re A Small Data Science Team That Wants To Log Everything About Our Ml Models. Looking Around On The Internet, Mostly Mlflow Is Being Recommended, But Occasionally The Name Trains Pop-Ups. According To You,

JitteryCoyote63

I agree that its name is not search-engine friendly,

LOL 😄
It was an internal joke; the guys decided to call it "trains" 'cause, you know, it trains...
It was unstoppable, we should probably do a line of merchandise with AI 🚆 😉
Anyhow, this one definitely backfired...

4 years ago
0 Is It Possible To Link Independent Training Experiments.. For Example.. I Have An Ensemble Of 2 Models (A & B) Each Models Are Trained Under Their Own Training Task In Trains Now I Will Run Another Script Which Will Use These Models To Create An Ensemble

Hmm I see what you mean. It is on the roadmap (ETA the next version 0.17, 0.16 is due in a week or so) to add multiple models per Task so it is easier to see the connections in the UI. I'm assuming this will solve the problem?

4 years ago
0 Hi, I Faced With A Silly Error, When I Run The Python Script With Task = Trains.Init(Project_Name='My Project', Task_Name='My Task'). The Task Goes To The Trains Server, But In The Trains Server, In Installed Packages Part One Of The Line

I think it fails because it tries to install trains twice. Could you remove the trains package, and test? I'm also curious how do you have both installed?!

4 years ago
0 Hi People, I Looked On This Line When Trains Try To Save Image.

Hi CharmingShrimp37
Go to GitHub, to your newly forked repo; you should have a green button suggesting to take your branch and make it a PR. It is that simple 🙂

4 years ago
0 Hi, Can You Pls Help Me? I Am Using V 0.14 (Will Update It Soon) And I Got The Following Error: /Usr/Bin/Python3.6: No Module Named Virtualenv Trains_Agent: Error: Command '['Python3.6', '-M', 'Virtualenv', '/Home/Ubuntu/.Trains/Venvs-Builds.2/3.6']' Ret

Yes, actually that might be it. Here is how it works:
It launches a thread in the background to do all the analysis of the repository, extracting all the packages.
If the process ends (for any reason), it will give the background thread 10 seconds to finish and then it will give up. If the repository is big, the analysis can take longer, and it will quit.

4 years ago