TimelyPenguin76
Administrator Moderator
0 Questions, 711 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
0 When uploading an artifact, can I list it in some grouping (like with parameters, having e.g.

Not sure about the other, but maybe adding some metadata to the artifact can do the trick?

You can get all the artifacts with task.artifacts, iterate over them, and filter by the metadata. What do you think?
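A minimal sketch of that filtering idea. The artifact names and the "group" metadata key are illustrative, not from the thread; a plain dict stands in for task.artifacts (which in ClearML maps artifact name to an artifact object carrying the metadata dict passed to upload_artifact), so the filtering logic is runnable on its own:

```python
def filter_by_group(artifacts, group):
    """Return names of artifacts whose metadata 'group' matches."""
    return [name for name, meta in artifacts.items()
            if meta.get("group") == group]

# Stand-in for task.artifacts, with per-artifact metadata:
artifacts = {
    "train_stats": {"group": "training"},
    "eval_stats": {"group": "evaluation"},
    "confusion_matrix": {"group": "evaluation"},
}

print(filter_by_group(artifacts, "evaluation"))
```

With real ClearML artifacts the same loop works over task.artifacts.items(), reading each artifact's .metadata instead of a plain dict.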

2 years ago
0 Hey, how can I point Trains to look for its trains.conf file in a different path than ~/trains.conf?

Hi SmarmySeaurchin8

You can set the TRAINS_CONFIG_FILE env var to the conf file you want to run with. Can this do the trick?
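For example (the path here is a placeholder, not from the thread):

```shell
# Point Trains at a custom config file for this shell session
export TRAINS_CONFIG_FILE=/path/to/custom/trains.conf
```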

4 years ago
0 I have to say I'm totally confused by the pipeline. I want to execute the pipeline on my local computer. I followed

Are those all? You can copy the whole section from the UI and hide the internal details.

3 years ago
0 Hi guys, I had several times now the following errors popping in agents while executing a task:

👍

So the diff header isn't related to line 13, but the error is. Can you try adding a space to this line, or even deleting it if you don't need it? (just for checking)

3 years ago
0 Hi guys, I had several times now the following errors popping in agents while executing a task:

Can you share the configs/2.2.2_from_scratch.yaml file with me? The error points to line 13; is there anything special on that line?

3 years ago
0 Is there a way to do an S3 -> S3 copy while creating a dataset? I don't want to download it locally from S3 and then upload it as a dataset to S3

Hi TrickySheep9, I didn't get the idea. Are you using clearml-data? Do you just want to upload a local folder to S3?

3 years ago
0 How can I register a JSON file I'm creating as an artifact

Yes:

task.upload_artifact('local json file', artifact_object="/path/to/json/file.json")

2 years ago
0 Hi my friend

Hi CooperativeSealion8 ,

Sure, we will check that and reactivate soon

4 years ago
0 Hi, we recently upgraded ClearML to 1.1.1-135 . 1.1.1 . 2.14. The task init is

Hi SubstantialElk6 ,

Can you add a screenshot of it? What do you have as the MODEL URL?

3 years ago
0 I'm looking to utilize the Trains AWS autoscaler functionality, but after going through its docs a few times I still don't get it. Ultimately, my setup is that I have multiple data scientists working on static instances, and they have queues available to

The AWS autoscaler isn't related to other tasks; you can think of it as a service running in your Trains system.

and are configured in the auto scale task?

Didn’t get that 😕

4 years ago
0 Hi, I started my agent using clearml-agent daemon --gpus 0 --queue gpu --docker --foreground, with the following parameters in clearml.conf.

ok, I think I missed something on the way then.

You need to have some diffs, because:

Applying uncommitted changes Executing: ('git', 'apply', '--unidiff-zero'): b"<stdin>:11: trailing whitespace.\n task = Task.init(project_name='MNIST', \n<stdin>:12: trailing whitespace.\n task_name='Pytorch Standard', \nwarning: 2 lines add whitespace errors.\n"
Can you re-run this task from your local machine again? You shouldn't have anything under UNCOMMITTED CHANGES this time (as we ...

3 years ago
0 Is there a way to run a pipeline (

Hi WackyRabbit7 ,

Did you try setting sdk.development.detect_with_pip_freeze to true in your ~/clearml.conf file? It should capture the same environment as the one you are running from.
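The relevant fragment of ~/clearml.conf would look something like this (a sketch of the setting mentioned above, in the config file's HOCON-style syntax):

```
# ~/clearml.conf (fragment)
sdk {
  development {
    # capture the running environment with `pip freeze`
    # instead of analyzing imported packages
    detect_with_pip_freeze: true
  }
}
```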

3 years ago
0 When an environment variable is tracked via

Hi ReassuredTiger98 ,

Can you try TRAINS_LOG_ENVIRONMENT=MYENVVAR instead of TRAINS_LOG_ENVIRONMENT="MYENVVAR" ?
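That is, setting the value without quotes, e.g.:

```shell
# Track this environment variable by name (no quotes around the value)
export TRAINS_LOG_ENVIRONMENT=MYENVVAR
```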

3 years ago
0 I'm looking to utilize the Trains AWS autoscaler functionality, but after going through its docs a few times I still don't get it. Ultimately, my setup is that I have multiple data scientists working on static instances, and they have queues available to

So once I enqueue it, is it up?

If the trains-agent is listening to this queue (services mode), yes.

Docs say I can configure the queues that the autoscaler listens to in order to spin up instances, inside the autoscale task - I wanted to make sure that this config has nothing to do with where the autoscale task was enqueued

You are right, the autoscaler's queue configuration has nothing to do with where the autoscaler task itself is enqueued.

4 years ago
0 clearml-data - incremental changes and hashing on a per-file basis?

Hi EagerOtter28 ,

The integration with cloud backing worked out of the box so that was a smooth experience so far 

Great to read 🙂

When I create a dataset with 10 files and have it uploaded to e.g. S3 and then create a new dataset with the same files in a different folder structure, all files are reuploaded 

 For a few .csv files, it does not matter, but we have datasets in the 100GB-2TB range.

Any specific reason for uploading the same dataset twice? ` clearml-da...

3 years ago
0 I have to say I'm totally confused by the pipeline. I want to execute the pipeline on my local computer. I followed

btw my site packages is false - should it be true? You pasted that but I'm not sure what it should be; in the paste it's false but you are asking about true

It's false by default; when you change it to true it should use the system packages. Do you have this package installed in the system? What do you have under installed packages for this task?
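The setting being discussed lives under the agent section of ~/clearml.conf; a sketch of the fragment (assuming the standard agent.package_manager.system_site_packages option):

```
# ~/clearml.conf (fragment)
agent {
  package_manager {
    # when true, the virtualenv the agent creates
    # can see packages installed in the system Python
    system_site_packages: true
  }
}
```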

3 years ago
0 I've played around with ClearML Data and spotted something weird. Basically, I've created 3 datasets

Hi GrittyKangaroo27

Did you also close the dataset?
Can you attach the commands, in the order you ran them, for all the datasets?

3 years ago
0 Hi, another issue is faced when using mmdetection/mmcv with ClearML. The automatic uploading of a checkpoint meets the following error:

NonchalantDeer14 thanks for the logs. Do you maybe have some toy example I can run to reproduce this issue on my side?

3 years ago
0 Hi, I'm trying to set up

Hi NarrowLobster19, is this an S3 bucket?

3 years ago
0 Hi, I'm trying to create a

Hi CleanPigeon16 ,

Currently, only argparse arguments are supported (a list of arg=val pairs).
How do you use the args in your script?
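For reference, argparse arguments of this shape are what gets picked up; the argument names below are illustrative, and the explicit list passed to parse_args just shows the arg=val override form:

```python
import argparse

# Illustrative script arguments; ClearML can override
# these as a list of arg=val pairs.
parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=10)
parser.add_argument("--batch-size", type=int, default=32)

# Parsing an explicit list here to show the override shape:
args = parser.parse_args(["--epochs", "5"])
print(args.epochs, args.batch_size)  # 5 32
```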

3 years ago
0 Hi. I'm running this little pipeline:

Hi PanickyMoth78 ,

Note that if I change the component to return a regular meaningless string -

"mock_path"

, the pipeline completes rather quickly and the dataset is not uploaded.

I think it will use the cache on the second run; it should be much, much quicker (nothing to download).

The files server is the default for saving all the artifacts; you can change this default with an env var (CLEARML_DEFAULT_OUTPUT_URI) or the config file (sdk.development...
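The env-var route would look something like this (the bucket name is a placeholder):

```shell
# Save artifacts/models to S3 instead of the default files server
export CLEARML_DEFAULT_OUTPUT_URI=s3://my-bucket/clearml
```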

2 years ago
0 Is there any way to get just one dataset folder of a dataset? E.g. only "train" or only "dev"?

Hi SmallDeer34 👋

The dataset task will download the whole dataset when using a clearml-data task. Do you have both in the same one?

3 years ago
0 Hi guys! What can be wrong here and how to fix it?

Hi ItchyHippopotamus18 ,

It seems the request does not reach the Trains File Server (port 8081, on the same machine running the Trains Server). Can you reach it?

4 years ago
0 Hi, I have a question regarding the AWS autoscaler: am I understanding correctly that:

Correct 🙂
polling_interval_time_min is the autoscaler's interval for checking for tasks in the queue
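In the autoscaler's configuration that setting would appear roughly like this (the value shown is just an example):

```
# AWS autoscaler configuration (fragment)
polling_interval_time_min: 5  # check the queues every 5 minutes
```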

3 years ago