AgitatedDove14
Moderator
49 Questions, 8124 Answers
  Active since 10 January 2023
  Last activity one year ago

3 years ago
0 Hi! Is There A Way To Export The Credentials Of The Aws Account Only During The Creation Of The Docker? I Don’t Want Every User In My Team To Know The Credentials To Access S3 Buckets. I Just Want Them To Be Able To Write In The Bucket Without The Credent

…every user in the server has the same credentials, and they don’t need to know them... makes sense?

Makes sense, single credentials for everyone, without the need to distribute them.
Is that correct?
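For reference, shared server-side credentials of this kind typically live in the `sdk.aws.s3` section of `clearml.conf` on the server/agent machine, so users never see them; a minimal sketch (the values are placeholders):

```
sdk {
    aws {
        s3 {
            # single set of credentials shared by everyone using this server
            key: "AWS_ACCESS_KEY_ID_HERE"
            secret: "AWS_SECRET_ACCESS_KEY_HERE"
        }
    }
}
```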

3 years ago
0 Hi Guys, I’m Trying To Install It My Lab Server, But When I Try To Create Credentials, It Says Error And Gives More Info: Error 301 : Invalid User Id: Id=F46262Bde88B4928997351A657901D8B, Company=D1Bd92A3B039400Cbafc60A7A5B1E52B

and I found our lab seems to only have a shared user file, because I installed trains on one node but it doesn’t appear on the others

Do you mean there is no shared filesystem among the different machines ?

4 years ago
0 Hi, Does Anyone Know Where Trains Stores Tensorboard Data? Because I Am Used To Using Tensorboard To Record Experimental Data And Store Data, I Hope I Can Access The Folder Where Tensorboard Stores Data When I Use Command Like

Hi FierceFly22

Hi, does anyone know where trains stores tensorboard data

Tensorboard data is stored wherever you point your file-writer to 🙂
What trains does: while tensorboard writes its own data to disk, trains takes the data (in-flight) and sends it to the trains-server. The trains-server puts everything in the DB, so later everything is viewable & searchable.
Basically you don't need to store your TB files after your experiment is done, you have all the data in the trains-s...
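The "in-flight" idea above can be pictured with a toy tee-writer. This is only an illustration of the concept (the class name and shape are made up, not trains' actual implementation):

```python
class TeeWriter:
    """Toy illustration: every event goes to its normal destination AND is
    forwarded to a second sink (standing in for the trains-server)."""

    def __init__(self, disk_log, forward):
        self.disk_log = disk_log  # tensorboard's own on-disk copy
        self.forward = forward    # callable that ships the event onward

    def add_event(self, event):
        self.disk_log.append(event)  # local copy, as tensorboard would write
        self.forward(event)          # in-flight copy sent to the server

# usage: both sinks end up holding the same event
disk, server = [], []
writer = TeeWriter(disk, server.append)
writer.add_event({"tag": "loss", "value": 0.5})
```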

5 years ago
0 Hi There, I Have A Package Called

Here you go:
```
@PipelineDecorator.pipeline(name='training', project='kgraph', version='1.2')
def pipeline(...):
    return

if __name__ == '__main__':
    Task.force_requirements_env_freeze(requirements_file="./requirements.txt")
    pipeline(...)
```

If you need anything for the pipeline component you can do:

```
@PipelineDecorator.component(packages="./requirements.txt")
def step(data):
    # some stuff
    ...
```

3 years ago
0 I'm Trying To Spin Up A Task On An Agent And Inside The Task I Have Two Packages That I've Created Custom Versions Of And Specified A Git Repo For In The Requirements.Txt. Example With Hydra-Core And Omegaconf:

@<1545216070686609408:profile|EnthusiasticCow4>
git+ssh:// will be converted automatically to git+https if you have user/pass configured in your clearml.conf on the agent machine.
Moreover, git packages are always installed after all other packages are installed (because pip cannot resolve the requirements inside the git repo in time)

one year ago
0 How Do I Think About Tasks/Task_Name-S? Do I See Right If I Run The Same Task With The Same Name, It Overwrites The Previous Run? Is It Possible To Fail If The Task Already Exists And Need

ahh, because task_id is the "real" id of a task

Yes the ID is a global system wide unique ID (regardless of the project etc.)

Maybe we will call tasks as

slug_yyyymmdd

Notice that you can just copy-paste the link in the address bar, it will bring you to the exact same view, meaning it's easily shared among users 🙂 You can, but I would actually use the Task ID. This also means that programmatically you can do task = Task.get_task(task_id_here) and interact and query a...

2 years ago
0 Hello All, How Can I Access The Restful Api. Any Docs Available?

Hi JuicyDog96
The easiest way at the moment (apologies for the still-lacking RestAPI documentation, it is coming :)
Is actually the code (full docstring doc)
https://github.com/allegroai/trains/tree/master/trains/backend_api/services/v2_8
You can access it all with an easy Pythonic interface, for example:
```
from trains.backend_api.session.client import APIClient

client = APIClient()
tasks = client.tasks.get_all()
```

5 years ago
0 Hello Everyone. I'm Getting Started With Clearml. I'm Trying Hpo Atm And Have Successfully Run The Base Task. When Running The Clone Of The Base Task In One Of The Agents, I'm Getting Following Error. Any Suggestions? Tia

I mean you can run it with kubeflow, but it kind of ruins the auto detection there
You can however clone and manually edit it back to your code, that would work

2 years ago
0 It Seems Like Clearml Agent Does Not Support Argparse Subparsers, Right?

With remote_execution it is command="[...]", but on local it is command='train' like it is supposed to be.

I'm not sure I follow, could you expand ?

4 years ago
0 Does Dataset.Add_Files Support Uploading From S3 Uri? I Have No Problem Uploading To S3 But Can't Use Data That Is Already In S3? Or Am I Doing Something Wrong? I Read In Documentation That Add_External_Files Supports This Feature, But I Want To Be Able To

Yes, but does add_external_files make chunked zips as add_files does?

No it references them, (i.e. meta-data not actually doing something with the files themselves)

I need the zipping, chunking to manage millions of files

That makes sense. If that's the case you will have to download those files anyway, and then add them with add_files.
You can use the StorageManager to download them, and then add them from the local copy (this will zip/chunk them)
[None](https://clear.ml/docs/la...
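As a rough picture of what "zip/chunk" means here, files get packed into a series of zip archives rather than referenced individually. The sketch below is illustrative only (the chunk-size logic and naming are assumptions, not ClearML's actual scheme):

```python
import zipfile
from pathlib import Path

def chunk_into_zips(files, out_dir, max_chunk_bytes=5 * 1024 * 1024):
    """Pack `files` into numbered zip archives, starting a new archive once
    the uncompressed payload of the current one would exceed the limit."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    chunks, current, current_size = [], None, 0
    for f in map(Path, files):
        size = f.stat().st_size
        if current is None or current_size + size > max_chunk_bytes:
            if current is not None:
                current.close()
            chunk_path = out_dir / f"chunk_{len(chunks):04d}.zip"
            current = zipfile.ZipFile(chunk_path, "w", zipfile.ZIP_DEFLATED)
            chunks.append(chunk_path)
            current_size = 0
        current.write(f, arcname=f.name)  # add file to the current chunk
        current_size += size
    if current is not None:
        current.close()
    return chunks
```

Managing millions of files becomes a matter of shipping a bounded number of archives, which is the point of the zipping/chunking discussed above.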

one year ago
0 I Trained A Model, Saved It. Now I Am Trying To Access It From Another Machine, But The Model Url Is A Local Path. How Can I Download Models From Clearml?

Hi @<1523702786867335168:profile|AdventurousButterfly15>
Make sure you pass output_uri=True in Task.init
It will automatically upload your model to the file server. You can also configure it in the clearml.conf, look for default_output_uri
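The clearml.conf key in question sits under `sdk.development`; a minimal sketch (the URI is a placeholder for your own file server):

```
sdk {
    development {
        # upload model snapshots and artifacts here by default
        default_output_uri: "http://your-files-server:8081"
    }
}
```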

2 years ago
0 Hey Folks, My Team Is Currently Utilizing Weights And Biases For Experiment Metric Tracking Etc. Is There Some Resource Material That Can Help Me/My Team To Transition From Wandb To ClearML's Experiment Tracking? As You Can Imagine, Wandb's Tracking Code

Hi SlimyElephant79

As you can imagine, wandb's tracking code would be present across the code modules and I was hoping for a structured approach that would help me transition to ClearML's experiment tracking.

Do you guys have a layer in between that does the reporting, or is the codebase riddled with direct reporting calls? If the latter, then I guess search and replace? Or maybe a module that "converts" wandb calls to clearml calls? wdyt?

2 years ago
0 Hi, I Would Like To Follow-Up In This

it is shown in the recording above

It was so odd, I had to ask 🙂 okay let me see if we can reproduce

I don’t have any error message in the browser console - Just an empty array returned on events.get_task_logs. This bug didn’t exist on version 1.1.0 and is quite annoying…

meaning the RestAPI returns nothing, is that correct ?

3 years ago
0 Hi There! Can Anybody Help Me With Specifying The 'Platform' For A Model In Clearml-Serving. I Am Using The K8S Clearml-Serving Setup (Version 1.3.1). I Already Tried A Bunch Of Variants Like

I think the real issue is that I am not able to specify a platform for the model,

there is no need to specify it - remove it from the config.pbtxt, clearml-serving will automatically add it in the background

one year ago
0 Hi Everyone, I'M Using Clearml-Serving With Triton And Have A Couple Of Questions Regarding Model Management:

That speed depends on model sizes, right?

in general yes

Hope that makes sense. This would not work under heavy loads, but e.g. we have models used only once a week. They would just stay unloaded until use - and could be offloaded afterwards.

but then you still might encounter timeout the first time you access them, no?

one year ago
0 Hi Everyone! I'Ve Had A Problem. But When I Was Describing It Here It Was Solved. Maybe It Will Help Someone. I Use Pytorch And Training Accidentally Freezes After Weights Uploading By Trains. Don'T Know Exactly What'S Wrong, But It Was Somehow Connected

Hi PungentLouse55 ,
I think I can see how these magic lines solved it, and I think you are onto something.
Any chance what happened is multiple workers were trying to simultaneously save/load the same Model ?

5 years ago
0 Hi, I Am Trying To Clone An Experiment. Using The Server Gui, I Select 'Clone' And Then 'Enqueue'. In The Console Window, I See That Clearml Makes Sure The Environment Is Installed, And Then It Goes Into A 'Completed' Status Although The Experiment Did N

Any chance your code needs more than the main script, but it is not in a git repo? Because the agent supports either a single script file, or a git repo with multiple files

2 years ago
0 Hey There, I Would Like To Increase The

BTW: for future reference, if you set the ulimit in the bash, all processes created after that should have the new ulimit
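The inheritance point above is easy to verify: a ulimit set in a shell applies to every process that shell subsequently spawns. A quick check (512 is an arbitrary example value):

```shell
# Lower the soft open-files limit in a shell, then show that a child
# process spawned afterwards inherits the new value.
bash -c '
  ulimit -S -n 512           # new soft limit for this shell...
  bash -c "ulimit -S -n"     # ...inherited by the child: prints 512
'
```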

4 years ago
0 Hi. When Using Sklearn's

DistressedGoat23 check this example:
https://github.com/allegroai/clearml/blob/master/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py
aSearchStrategy = RandomSearch
It will collect everything on the main Task

This is a crucial point for using clearml HPO, since comparing dozens of experiments in the UI and searching for the best is just not manageable.

You can of course do that (notice you can actually order them by scalars they report, and even do ...

2 years ago
0 Hey Everyone! Is It Possible To Trigger A Pipeline Run Via Api? We Have A Repo That Builds An Image For Serving To Clearml Server But We've Wrapped It Inside A Fastapi Application So It Can Be Called From Another Web Service.

Hi @<1692345677285167104:profile|ThoughtfulKitten41>

Is it possible to trigger a pipeline run via API?

Yes! a pipeline is at the end a Task, you can take the pipeline ID and clone and enqueue it

```
pipeline_task = Task.clone("pipeline_id_here")
Task.enqueue(pipeline_task, queue_name="services")
```

You can also monitor the pipeline with the same Task interface.
wdyt?

one year ago
0 Hi, Clearml Stores Models In The Following Format:

Is it possible to change this format ?

Not really, the path itself is set to be unique.
That said you can upload the model manually with StorageManager.upload_file then register it with Model.import_model
wdyt?

2 years ago
0 Hi, I Want To Pass Environment Variables From The Host To The Docker Containers Running My Task. I Managed To Use

but this would be still part of the clearml.conf right?

You can pass it per Task; you can also configure the agent to always add this env var:
https://github.com/allegroai/clearml-agent/blob/5a080798cb4292e198948fbe16cba70136cb6bdf/docs/clearml.conf#L137
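One agent-side way to do this in clearml.conf is the extra_docker_arguments list, which is appended to every container the agent launches; a sketch (MY_ENV_VAR and its value are placeholders):

```
agent {
    # forward the same environment variable into every task container
    extra_docker_arguments: ["-e", "MY_ENV_VAR=value"]
}
```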

4 years ago