AgitatedDove14
Moderator
Confirming About The Documentation For

Hi UnevenDolphin73
You mean this part?
https://github.com/allegroai/clearml-agent/blob/5afb604e3d53d3f09dd6de81fe0a494dacb2e94d/docs/clearml.conf#L212

(In other words, the

the Task's Environment section

is a bit unclear)

Yes, we should expand the docs, but generally you are correct, it should work as you described 🙂

2 years ago
Hello, I Am Trying To Run The

Hi ShinyRabbit94

system_site_packages: true

This is set automatically when running in "docker mode", no need to worry 🙂
What exactly is the error you are getting?
Could it be that the container itself has the python packages installed in a venv and not as "system packages"?
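For reference, a minimal sketch of where this key sits in the agent's clearml.conf (values here are illustrative; check the example config bundled with your agent):

agent {
    package_manager {
        # let the task's venv see the container's system-wide python packages
        system_site_packages: true,
    }
}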

2 years ago
Folks, Could You Please Clarify/Help? I Correct Understand, If --Docker Is Enable That Will Means Every New Experiments Will Be Executed Into Dedicated Agent Worker Containers? Also I See For

Hi UnevenOstrich23

if --docker is enabled, does that mean every new experiment will be executed in a dedicated agent worker container?

Correct

I think the missing part is how to specify the docker for the experiment?
If this is the case, in the web UI, clone your experiment (which will create a draft copy, that you can edit), then in the Execution tab, scroll down to the "base docker image" and specify the docker image to use.
Notice that you can also add flags after the docker im...
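If you prefer setting it from code rather than the web UI, a minimal sketch (the image name and flag are placeholders; set_base_docker takes the image, optionally followed by docker flags):

from clearml import Task

task = Task.init(project_name="examples", task_name="docker demo")
# placeholder image and flags; the agent uses these when running in docker mode
task.set_base_docker("nvidia/cuda:11.8.0-runtime-ubuntu22.04 --ipc=host")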

3 years ago
Hello, I'M Getting This Weird Error From Time To Time When Running A Pipeline, It Add My Tasks As Drafts But Never Launch Them, When I Checked The Logs, I See The Following ;

BulkyTiger31 could it be there is some issue with the Elastic container?
Can you see any experiment's metrics?

2 years ago
Any Info On The Lifecycle Of Datasets Downloaded To $Home/.Clearml/Cache/Storage_Manager/Datasets Via Get_Local_Copy I Have A Task Running And I Was Watching The Above Path And Datasets Were Being Downloaded And Then They Are All Removed And For A Partic

And I think the default is 100 entries, so it should not get cleaned.

and then they are all removed and for a particular task it even happens before my task is done

Is this reproducible? Who is cleaning it, and when?
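For context, a minimal sketch of the call under discussion (the dataset id is a placeholder):

from clearml import Dataset

# fetch a dataset and materialize a local copy in the cache folder
ds = Dataset.get(dataset_id="0123456789abcdef")  # placeholder id
local_path = ds.get_local_copy()  # lands under ~/.clearml/cache/storage_manager/datasets
print(local_path)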

3 years ago
Hi! I Have Local Minio Setup, Via Minio Browser I Can Upload 50-100 Mb Per Second As Its Local. But When I Try To Use Task.Upload_Artifact It Uploads 500 Kb Per Second. Does Anyone Have An Idea About This?

upload_artifact will actually do two things:
upload the file to the trains-server
register it as an artifact on the experiment
What did you mean by "register the artifact manually"? You still need to upload the file to the trains-server (so it is later accessible)
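As a minimal sketch of the call (project/task names are placeholders):

import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact demo")
df = pd.DataFrame({"a": [1, 2, 3]})
# serializes the DataFrame, uploads the file, and registers it on the experiment
task.upload_artifact(name="my_table", artifact_object=df)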

4 years ago
Hi

if I want to run the experiment the first time without creating the template?

You mean without manually executing it once?

3 years ago
Hi. Question About Dataset Upload Errors: When Uploading A

Unfortunately that is correct. It continues as if nothing happened!

Oh dear, let me make sure this is taken care of.
And thank you for the reproduction code!!!

one year ago
Hi Guys! What Is The Best Way To Access Artifacts From Other Step Of The Pipeline? I Have Step One Returning Dataframe And Step Two Takes It As An Input But When First Step Is Cached I Only Get An Artifact Url. So How Should I Read It From Artifacts Stora

I think your "files_server" is misconfigured somewhere; I cannot explain how you ended up with this broken link...
Check the clearml.conf on the machines, or the env vars?
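As a sketch, the files server can also be overridden per process via an environment variable set before Task.init (the URL is a placeholder):

import os

os.environ["CLEARML_FILES_HOST"] = "https://files.example.com"  # placeholder URL

from clearml import Task

task = Task.init(project_name="examples", task_name="files server check")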

one year ago
Hi Guys! What Is The Best Way To Access Artifacts From Other Step Of The Pipeline? I Have Step One Returning Dataframe And Step Two Takes It As An Input But When First Step Is Cached I Only Get An Artifact Url. So How Should I Read It From Artifacts Stora

However, it's very interesting why the ability to cache the step impacts artifact behavior

From your log:

videos_df = StorageManager.download_file(videos_df)

Seems like "videos_df" is the DataFrame; why are you trying to download the DataFrame? I would expect you to download the pandas file, not a DataFrame object.

one year ago
On A Similar Note, The Million Autogenerated Experiments When Doing Tuning Swamp Out Everything Else In The Experiments And Models Tabs. Is There A Current Solution To Hide Autogenerated Runs, Give Them Specific Tags, Etc, Or Is This Not Yet Possible? Sor

LudicrousParrot69 we are working on adding nested projects, which should help with the humongous mass the HPO can create. This is a more generic solution to the nesting issue (since nesting inside a table is probably not the best UX solution 🙂).

3 years ago
Clearml Team Is No Longer To Develop Clearml-Session..? I Wrote An Issue But Nobody Answer

Looks like a great idea, I'll make sure to pass it along and that someone replies 🙂

one year ago
Regarding The “Classic” Datasets (Not Hyper Datasets): Is There An Option To Do Something Equivalent To Dvc’S “

Hi RoughTiger69
I'm actually not sure about the DVC support as well; see these links: syncing and registering create a link, not an immutable copy.
And the sync between the local and remote copies seems to download the remote one and compare it to the local one.
Basically, adding a remote source does not mean DVC will create an immutable copy of the content; it's just a pointer to a bucket (feel free to correct me if I misunderstood their capability)
https://dvc.org/doc/command-reference/...

2 years ago
Hey, I'M Probably Being Thick Here But I Would Like To Pull Some Data From A Database And Write It To A Particular Bucket In S3 Within A Task I'M Doing. I'M Using Task.Upload_Artifact But Can'T Understand Where I Write The Bucket Path.

SmallBluewhale13 the final path is automatically generated, you only need to specify the bucket itself. By default it will be your "files_server"
https://github.com/allegroai/clearml/blob/c58e8a4c6a1294f8acec6ed9cba81c3b91aa2abd/docs/clearml.conf#L10
You can either change the configuration (which will make sure all uploaded artifacts, including debug images etc., always go there).
You can specify where you want the artifacts and debug images to be uploaded by setting:
https://allegro....
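A minimal sketch of setting the destination from code (the bucket is a placeholder):

from clearml import Task

# everything this task uploads (artifacts, models) goes to the given bucket
task = Task.init(
    project_name="examples",
    task_name="upload demo",
    output_uri="s3://my-bucket/artifacts",  # placeholder bucket
)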

3 years ago
Hey Guys, I Am Setting Up A New Machine With Two Rtx 3070 Gpus Where I Created Two Agents (One For Each Gpu). On Both Agents, My Experiments Fail With Error:

trains was not able to pick the right wheel when I updated the torch req from 1.3.1 to 1.7.0: It downloaded wheel for cuda version 101.

Could you send a log? It should have worked 😞

3 years ago
Hi, Kudos For The 0.15 Guys! I Am Having An Issue Related To Git Auth: I Have An Issue With Trains-Agent (0.15): It Does Not Use Git Creds While Trying To Clone A Private Repo:

Hi JitteryCoyote63, could you provide a bit more details? Is this a repo for a python module (i.e. in the installed packages) or a submodule?

4 years ago
Is It Possible To Give The Agent Access To Install Private Pip Packages (Needs To Be Installed From The Repo)?

Try to manually edit the "Installed Packages" (right click the Task, select "reset", now you can edit the section)
and change it to:
-e git+ssh://git@github.com/user/private_package.git@57f382f51d124299788544b3e7afa11c4cba2d1f#egg=private_package
(assuming "pip install -e git+ssh://git@github.com/user/private_package.git@...#egg=private_package" works locally, this should solve the issue)

3 years ago
Hi. When Using The Logger'S

Hmm let me check, because I think it should have worked

2 years ago
I Want To Run My Clearml Task On An Agent In K8S Together With A Memory Profiler (Maybe

FiercePenguin76 in the Task's Execution tab, under "script path", change it to "-m filprofiler run catboost_train.py".
It should work (assuming "catboost_train.py" is in the working directory).

3 years ago
Hi, Is It Possible To Re-Use Task-Id, But Keep The Old Execution Tab ? (Git Diff Specifically).

Then it initiates a run on AWS, and I want it to use the same task-id.

BoredPigeon26 Clone the Task; it basically creates a new copy (of the setup/configuration etc.).
Then you can launch it on an AWS instance (I'm assuming with clearml-agent)
wdyt?

But it writes over the execution tab in the GUI

It does, you are correct; it will however not overwrite the reports (logged scalars etc.)
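As a sketch of that clone-and-launch flow in code (task id and queue name are placeholders):

from clearml import Task

# create an editable copy of an existing task, then queue it for an agent to pick up
original = Task.get_task(task_id="aabbccdd11223344")  # placeholder id
cloned = Task.clone(source_task=original, name="cloned run")
Task.enqueue(cloned, queue_name="aws_queue")  # placeholder queue name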

2 years ago
Can We Use Dynamodb With Clearml Helm Charts Instead Of Mongodb? We'D Like To Move All Stateful Storage To Aws As A Separate Service And That Would Be A Nice Alternative

I think the main risk is that ClearML upgrades to MongoDB vX.Y, Mongo changes the API (which they did because of Amazon), and then the API calls (i.e. the Mongo driver) stop working.
Long story short, I would not recommend it 🙂

one year ago