AgitatedDove14
Moderator
48 Questions, 8048 Answers
  Active since 10 January 2023
  Last activity 5 months ago

Reputation: 0
Badges: 25 × Eureka!
0 Hi! How Can I Force Clearml To Find My Repo? My Current Repo Structure Is Like This:

so when inside the docker, I don’t see the git repo and that’s why ClearML doesn’t see it

Correct ...

I could map the root folder of the repo into the container, but that would mean everything ends up in there

This is the easiest: you can set it via an ENV variable.

one year ago
0 How Can I Do The Following? (Basically, Filtering By Task Type)

yes, so you can have a few options 🙂

4 years ago
0 Hi, I Am Trying To Upload A Plot To An Existing Task Using The

I generate some more graphs with a file called graphs.py and want to attach/upload them to this training task

Makes total sense to use Task.get_task, I just want to make sure that you are aware of all the options, so you pick the correct one for you :)
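
For reference, a minimal sketch of what that could look like from a separate script such as graphs.py (the task ID below is a placeholder):

import matplotlib.pyplot as plt
from clearml import Task

# attach to the existing training task (placeholder ID)
task = Task.get_task(task_id="<training_task_id>")

fig = plt.figure()
plt.plot([0, 1, 2], [0, 1, 4])

# report the figure onto the existing task
task.get_logger().report_matplotlib_figure(
    title="extra graphs", series="loss curve", figure=fig, iteration=0
)
task.get_logger().flush()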

3 years ago
0 Hi, Does Anyone Have Some Issues With Cloning Git Repos Within Allegro? I Always Got Some Error Message: Fatal: Unable To Access '

so the Docker didn't use the DNS of the host?

I'm assuming it is not configured on your DNS, otherwise it would have been resolved...

3 years ago
0 Gm Folks, Really Liking Clearml So Far As My Top Choice (After Looking At Dvc, Mlflow), And Thank You For Your Help Here! I Had Another Q: Is There A Recommended Workflow To Be Able To “Drop Into” The

gm folks, really liking ClearML so far as my top choice (after looking at dvc, mlflow), and thank you for your help here!

Thanks HurtWoodpecker30 !

Is there a recommended workflow to be able to “drop into” the exact env (code, venv, data) of a previous experiment (which may have been several commits ago), to reproduce that experiment?

You can use clearml-agent on your local machine to build the env of any Task:
clearml-agent build --id <ta...

2 years ago
0 For The Frameworks Which Are Supported In Built, Trains Stores The Trained Model As Output Model E.G. For Xgboost Here

so what should the value of "upload_uri" be set to, the fileserver_url, e.g. ?

yes, that would work.
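
For reference, a minimal sketch, assuming "upload_uri" here maps to the output_uri argument of Task.init (the files-server URL and names below are placeholders):

from clearml import Task

# output models (e.g. the xgboost checkpoints) will be uploaded to this URI
task = Task.init(
    project_name="examples",                   # placeholder
    task_name="xgboost training",              # placeholder
    output_uri="http://<files-server>:8081",   # placeholder files-server URL
)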

4 years ago
0 Hi, Can You Help Me Pls, I Got: Environment Setup Completed Successfully Starting Task Execution: Traceback (Most Recent Call Last): File "Agro_Api.Py", Line 13, In From Help_Models.Consts Import Urls Importerror: No Module Named 'Help_Models'

That sounds like an issue with "working dir" , check the "Execution" "Working Directory" field.
'.' means the root of the git repository
'subfolder' means run the script from that subfolder, etc. Also make sure that the script path is adjusted accordingly.

btw: Trains should have filled in all the correct paths... If you have time, get the latest trains (0.14.3) and run again to see if the problem persists; we should probably fix that bug 🙂

4 years ago
0 Hello Everyone. I'M Getting Started With Clearml. I'M Trying Hpo Atm And Have Successfully Run The Base Task. When Running The Clone Of The Base Task In One Of The Agents, I'M Getting Following Error. Any Suggestions? Tia

The base task is self-contained, i.e. it downloads the training/eval data directly and has direct access to it

I think this is the main issue; how come it does not catch it? Are you using argparse?

one year ago
0 Has Anyone Done This Exact Use Case - Updates To Datasets Triggering Pipelines?

Good news: a dedicated class for exactly that will be out in a few days 🙂
Basically a task scheduler and a task trigger scheduler, running as a service, cloning/launching tasks either based on time (cron-like) or based on a trigger.
wdyt?
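
A minimal sketch of how that could look with the clearml.automation TriggerScheduler class (the argument names here are assumed from the current SDK; the task ID, queue and project names are placeholders):

from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=3)

# whenever a new dataset task appears in the given project,
# clone and enqueue the template task
trigger.add_dataset_trigger(
    schedule_task_id="<template_task_id>",
    schedule_queue="default",
    trigger_project="my datasets",
)

# run the scheduler itself as a long-lived service
trigger.start_remotely(queue="services")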

3 years ago
0 So, I Have Just Started Using Clearml For Local Data And Experiment Tracking And Its Been Super Helpful. Now That I Am Moving Towards Deploying And Serving The Models Using Clearml-Serving And Triton. I Have Done Some Basic Experimenting With The Provided

What would be the best way to get all the models trained using a certain Task? I know we can use query_models to filter models based on Project and Task, but is it the best way?

On the Task object itself you have all the models.
Task.get_task(task_id='aabb').models['output']
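
For example, a minimal sketch of listing a task's output models (the task ID is a placeholder):

from clearml import Task

task = Task.get_task(task_id="aabb")

for model in task.models["output"]:
    # each entry is a Model object with a name, an ID and the stored weights URL
    print(model.name, model.id, model.url)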

3 years ago
0 Hi, I Am Trying To Pull Api Data From /Tasks.Get_All Endpoint

query tasks that are both Running --> You mean status=["in_progress"]

Yes!

How do I figure out what other possible parameters I can use with the status parameter?

https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksget_all
https://clear.ml/docs/latest/docs/references/api/definitions#taskstask

Filter only tasks that started, say, 10 min ago. Is there any parameter for that as well?

last_update or created then use...
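
A minimal sketch from the SDK side, assuming Task.get_tasks with a task_filter that is passed through to tasks.get_all (field names per the API reference; project name is a placeholder):

from clearml import Task

# filter by status; the task_filter dict uses the same field names as tasks.get_all
running = Task.get_tasks(
    project_name="my_project",
    task_filter={"status": ["in_progress"], "order_by": ["-last_update"]},
)

for t in running:
    print(t.id, t.name, t.status)

# for time-based filtering, the last_update / created fields from the API
# reference apply; check tasks.get_all for the exact range syntax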

one year ago
0 Hello! There Is Great Alternative For Argparse Developed By Facebook For Ml Named

GrievingTurkey78 yes, you are correct on both.

Will the sweep functionality work?

Yes it should, that said, it will not use the trains-agent so you are limited to the machine running the sweep.
If you want to do HPO on multi-node, checkout this example 🙂
https://github.com/allegroai/trains/blob/master/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py
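
A minimal sketch of what that multi-node HPO example does, written against the current clearml.automation names (the linked example uses the older trains package; the base task ID, parameter names and metric names below are placeholders):

from clearml.automation import (
    DiscreteParameterRange,
    HyperParameterOptimizer,
    RandomSearch,
    UniformIntegerParameterRange,
)

optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",
    hyper_parameters=[
        UniformIntegerParameterRange("General/batch_size", min_value=16, max_value=128, step_size=16),
        DiscreteParameterRange("General/lr", values=[1e-4, 1e-3, 1e-2]),
    ],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=RandomSearch,
    execution_queue="default",              # trials are enqueued for any available agent
    max_number_of_concurrent_tasks=4,
)

optimizer.start()
optimizer.wait()    # block until the optimization finishes
optimizer.stop()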

3 years ago
0 How Can I Run A New Version Of A Pipeline, Wait For It To Finish And Then Check Its Completion/Failure Status? I Want To Kick Off The Pipeline And Then Check Completion

Hmm I see; if this is the case, would it make sense to run the pipeline logic locally? (Note that the pipeline compute, i.e. the components, will still be running on remote machines with the agents.)
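
If the pipeline is launched as its own Task instead, a minimal sketch of the "kick off and then check completion" part (the pipeline controller's task ID is a placeholder):

from clearml import Task

pipeline_task = Task.get_task(task_id="<pipeline_task_id>")

# block until the pipeline controller task reaches a final state
pipeline_task.wait_for_status(
    status=(Task.TaskStatusEnum.completed, Task.TaskStatusEnum.stopped),
    raise_on_status=(Task.TaskStatusEnum.failed,),  # raise if the pipeline failed
    check_interval_sec=30,
)
pipeline_task.reload()
print("pipeline finished with status:", pipeline_task.status)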

one year ago
0 Hi, I'Ve Got A Quick Question About

Where is the clearml-server running? GCP as well?

2 years ago
0 Hi Everybody, I’M Getting Errors With Automatic Model Logging On Pytorch (Running On A Dockered Agent).

CrookedWalrus33 can you post the clearml.conf you have on the agent machine?

2 years ago
0 Hey Guys. We Have Been Using Clearml For A While Now And It Has Solved Quite Some Headaches Around Our Operations. We Are Self Hosting It Using Docker Swarm And Were Wondering If This Is Something That The Community Would Be Interested In. This Would Be

We would "donate" back to the community a docker stack template that can be used to set up the community edition.

Perfect, feel free to PR to the clearml-server repository, we can take it from there
🙏 🙏 😍

one year ago
0 Just A Quick Question: How Can I Pull Off The Scaler Data Json From Server Without Downloading Them One By One?

Is there a way that I can pull all scalars at once?

I guess you mean from multiple Tasks ? (if so then the answer is no, this is on a per Task basis)

Or, can I get the experiments list and pull the data?

Yes, you can use Task.get_tasks to get a list of task objects, then iterate over them. Would that work for you?
https://clear.ml/docs/latest/docs/references/sdk/task/#taskget_tasks
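
For example, a minimal sketch (the project name is a placeholder):

from clearml import Task

tasks = Task.get_tasks(project_name="my_project")

for t in tasks:
    # get_reported_scalars() returns the scalar plots as a nested dict:
    # {graph_title: {series: {"x": [...], "y": [...]}}}
    scalars = t.get_reported_scalars()
    print(t.name, list(scalars.keys()))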

3 years ago
0 Hi All! I Have A Couple Of Things That Are Not Completely Clear To Me, Hope You Can Help Me To Sort Them Out.

Cloud Access section is in the Profile page.

Any storage credentials (S3 for example) are only stored on the client side (never on the trains-server); this is the reason we need to configure them in trains.conf. When the browser needs to access those URLs (e.g. when downloading an artifact) it also needs the secret/key, so it automatically displays a popup requesting them and will store them in this section. Notice they are stored in the browser session (as a cookie).

3 years ago
0 Hi Everyone And Thanks Again For The Help, I Still Have No Success In Running Clearml Agent, It Just Gets Stuck Without Any Output, On Debug Mode For

Okay, found the issue. To disable SSL verification globally, add the following env variable:
CLEARML_API_HOST_VERIFY_CERT=0
(I will make sure we fix the actual issue with the config file)

2 years ago
0 Hi, When Using

ResponsiveHedgehong88 so I would suggest using execute_remotely in your code: basically you start locally and make sure everything is passed as intended, then from within the code you call task.execute_remotely(...), which will stop the current process and enqueue the Task on the selected queue for the agent to execute.
https://github.com/allegroai/clearml/blob/0397f2b41e41325db2a191070e01b218251bc8b2/examples/advanced/execute_remotely_example.py#L127
This way you can both easily test...
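
A minimal sketch of that flow (project, task and queue names are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="remote execution")

# ...set up arguments / configuration locally and verify they are logged as intended...

# stop the local process and enqueue this Task for an agent to execute
task.execute_remotely(queue_name="default", exit_process=True)

# anything below this point only runs when the agent executes the Task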

2 years ago
0 Hi All! I Might Have Found An Issue With The Migration Guide.

is it possible to change an existing model's URL?

Edit the DBs ... That's basically the only way 😞

one year ago
0 Hello, I Am New To Clearml, I Would Like To Learn More About How Clearml Works On A Hpc Cluster Where The Only Way To Get Computational Resources Is Via Slurm:

That should work 🙂
BTW, you might play around with "clearml-agent execute --id <task_id_here>"
This will basically clone the code, create a venv with the python packages, apply uncommitted changes and run the actual code. This could be a replacement for your bash script. (Notice it means that you need to clone the Task in the UI, then you can change parameters, then run the agent manually in SLURM and it will take the params from the UI.)

3 years ago
0 Hi, I Am Trying To Upload A Plot To An Existing Task Using The

SmarmyDolphin68
BTW: there is no automatic reporting when you have task = Task.get_task(task_id='your_task_id')
It's only active when you have one "main" task.
You can also check the continue_last_task argument in Task.init , it might be a good fit for your scenario
https://allegro.ai/docs/task.html#trains.task.Task.init
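
A minimal sketch of that option (project and task names are placeholders):

from clearml import Task

# re-open the last task with this name instead of creating a new one,
# so new reports (e.g. from graphs.py) are appended to it
task = Task.init(
    project_name="examples",
    task_name="training task",
    continue_last_task=True,
)

task.get_logger().report_scalar(title="extra", series="metric", value=0.5, iteration=0)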

3 years ago
0 Hi, Together With

JitteryCoyote63 fix pushed to master, let me know if it passes...

4 years ago