CostlyOstrich36
Moderator
0 Questions, 4213 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Hi guys, I have a question regarding clearml-serving. I have deployed my model to an API, now I want to add a front end interface for the URL, how should I go about doing it?

Regarding the UI - you can either build your own frontend for it or use Streamlit / Gradio applications (which are supported in the enterprise license).

About using a model outside of ClearML - You can simply register the model to the model artifactory - None
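Registering a model manually can be sketched as below. This is a hedged example, not the thread's own code: the project/task names and weights path are placeholders, and it assumes the ClearML SDK's `OutputModel.update_weights` API with a configured `clearml.conf`.

```python
# Sketch: registering an externally trained model with ClearML.
# Project/task/file names below are placeholders.

def framework_from_path(weights_path: str) -> str:
    # Small helper: guess a framework tag from the weights file extension.
    ext = weights_path.rsplit(".", 1)[-1].lower()
    return {"pkl": "ScikitLearn", "pt": "PyTorch", "h5": "Keras"}.get(ext, "Custom")

def register_model(weights_path: str, project: str = "serving demo"):
    # Requires a configured ClearML client; imported lazily so the
    # helper above stays importable without the SDK installed.
    from clearml import Task, OutputModel
    task = Task.init(project_name=project, task_name="register model")
    model = OutputModel(task=task, framework=framework_from_path(weights_path))
    model.update_weights(weights_path)  # uploads the file and registers it
    return model
```

Once registered, the model shows up in the project's model list and can be fetched from anywhere with access to the server.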

2 years ago
Hi everyone, I'm trying to use the AWS autoscaler service. Provided the PAC but it is not able to clone the repo. It is not using the PAC (using GitLab)

Hi Juan, can you please elaborate? What is PAC? What exactly is failing to clone the repo? Can you provide an error message?

3 years ago
Hi there, I would like to know how we can pass an environment variable for the running script when the script is run remotely by an agent?

Hi @<1529995795791613952:profile|NervousRabbit2> , if you're running in docker mode you can easily pass it via the docker_args parameter - for example, you can set environment variables with the -e docker argument.
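The docker-args approach can be sketched as follows. This is an illustrative example, assuming the SDK's `Task.set_base_docker` accepts `docker_image` / `docker_arguments` (as in recent versions); the image name and variables are placeholders.

```python
# Sketch: setting env variables for a task executed by an agent in docker mode.

def docker_env_args(env: dict) -> str:
    # Build the "-e KEY=VALUE" arguments the agent appends to `docker run`.
    return " ".join(f"-e {k}={v}" for k, v in env.items())

def configure_task_docker(task, image: str, env: dict):
    # Attach the image and env args to the task; the agent picks them up
    # when the task is enqueued and run in docker mode.
    task.set_base_docker(docker_image=image,
                         docker_arguments=docker_env_args(env))
```

For example, `configure_task_docker(task, "python:3.10", {"MY_VAR": "1"})` would have the agent run the container with `-e MY_VAR=1`.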

2 years ago
Hi, I'm not sure when exactly but at some point I've got no access to my debug samples. We're using self-hosted ClearML, which runs within a Docker container. Any ideas? Thanks!

Hi @<1717350332247314432:profile|WittySeal70> , where are the debug samples stored? Have you recently moved the server?

one year ago
I am trying to run a task on an agent for the first time but I am running into some things I do not understand, I hope someone can help me out with this. I got an agent running on Google Colab, but when I clone a task and enqueue it from the web UI, I ge

Regarding the packages issue:
What Python version did you run on originally? Because it looks like 1.22.3 is only supported by Python 3.8. You can circumvent this entire issue by running in docker mode with a docker image that has 3.7 pre-installed.

Regarding the data file loading issue - How do you specify the path? Is it relative?

3 years ago
Two questions today. First, is there some way to calculate the number of GPU-hours used for a project? Could I select all experiments and count up the number of GPU-hours/GPU-weeks? I realize I could do this manually by looking at the GPU utilization grap

SmallDeer34 Hi 🙂
I don't think there is a way out of the box to see GPU hours per project, but it could be a pretty cool feature! Maybe open a GitHub feature request for this.

Regarding how to calculate this, I think an easier solution for you would be to sum up the runtimes of all experiments in a certain project rather than looking at GPU utilization graphs.
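Summing runtimes could look roughly like this. It is a sketch, not confirmed code from the thread: it assumes `Task.get_tasks` and the `started`/`completed` fields on the task data model, and the project name is a placeholder.

```python
from datetime import timedelta

def total_runtime_hours(intervals) -> float:
    # Sum (started, completed) datetime pairs into hours.
    total = sum((end - start for start, end in intervals), timedelta())
    return total.total_seconds() / 3600.0

def project_runtime_hours(project_name: str) -> float:
    # Requires a configured ClearML client; skip tasks that never
    # started or never completed.
    from clearml import Task
    tasks = Task.get_tasks(project_name=project_name)
    return total_runtime_hours(
        (t.data.started, t.data.completed)
        for t in tasks
        if t.data.started and t.data.completed
    )
```

Note this counts wall-clock runtime, not actual GPU utilization, so it is an upper bound on GPU-hours.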

3 years ago
What is

MelancholyElk85 if you're using add_function_step() it has a 'docker' parameter. You can read more here:
https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#add_function_step
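A minimal sketch of the `docker` parameter mentioned above, based on the linked reference. The pipeline/project names and image are illustrative, not from the thread.

```python
# Sketch: selecting a per-step docker image with add_function_step.

def preprocess(n: int) -> int:
    # Step body that the pipeline will run inside the chosen docker image.
    return n * 2

def build_pipeline():
    from clearml.automation.controller import PipelineController
    pipe = PipelineController(name="demo-pipeline", project="examples",
                              version="1.0.0")
    pipe.add_function_step(
        name="preprocess",
        function=preprocess,
        function_kwargs={"n": 21},
        function_return=["doubled"],
        docker="python:3.10",  # the 'docker' parameter sets this step's image
    )
    return pipe
```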

4 years ago
0 Hi All

Is the services agent part of the docker compose?

2 years ago
I also want a low latency unthrottled system. It seems like the free version of ClearML is somewhat throttled. I tried uploading an artifact to the free version of ClearML and it's very slow to download the artifact to my laptop using the get_local_copy() fun

AgitatedDove41 , there isn't any throttling in ClearML, and it uses the native packages when communicating with AWS (boto3, for example).

Where were you uploading to/from?

4 years ago
Help please. I have my ClearML server running in a Docker container. Now, I am training my ML models in another Docker container. I want to track these models with my ClearML server located in the first container. What configuration do I need to do?

Hi @<1673501397007470592:profile|RelievedDuck3> , you simply need to integrate clearml into your code.

from clearml import Task

# placeholder project and task names
task = Task.init(project_name="my project", task_name="my experiment")

More info here:
None
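For the container-to-container setup specifically, the training container also needs to know where the server is. A minimal sketch, assuming the standard `CLEARML_*` environment variables and the server's default ports (8008 API, 8080 web, 8081 files); hosts and keys are placeholders:

```python
# Sketch: environment variables the ClearML SDK reads inside the training
# container to reach the self-hosted server (same values normally live
# in clearml.conf).

def clearml_server_env(api_host, web_host, files_host, key, secret):
    return {
        "CLEARML_API_HOST": api_host,
        "CLEARML_WEB_HOST": web_host,
        "CLEARML_FILES_HOST": files_host,
        "CLEARML_API_ACCESS_KEY": key,
        "CLEARML_API_SECRET_KEY": secret,
    }

# Example: pass these to `docker run` with -e flags, e.g. for a server
# reachable from the container as host.docker.internal:
# clearml_server_env("http://host.docker.internal:8008",
#                    "http://host.docker.internal:8080",
#                    "http://host.docker.internal:8081",
#                    "<ACCESS_KEY>", "<SECRET_KEY>")
```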

one year ago
Hi everyone, how can I check programmatically whether a task is running remotely and how can I get the hostname? Additionally, retrieving the user name that is shown in the server UI would be nice.

Hi @<1523701868901961728:profile|ReassuredTiger98> , you can fetch the task object; one of its attributes is the task's worker. This way you can see on what machine it is running 🙂
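A sketch of that lookup, with the caveat that `last_worker` comes from the task data model and attribute names can vary across SDK versions (hence the `getattr` guard):

```python
# Sketch: checking whether a task runs remotely and on which worker.

def describe_execution(task) -> dict:
    return {
        "remote": not task.running_locally(),
        "worker": getattr(task.data, "last_worker", None),
    }

def current_execution() -> dict:
    # Requires running inside a ClearML task context.
    from clearml import Task
    return describe_execution(Task.current_task())
```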

2 years ago
Hey, I use

Does this reproduce if you run it as a single task and not a pipeline?

6 months ago
Hi. I'd like to try the GCP autoscaler.

"I noticed that the base docker image does not appear in the autoscaler task's configuration_object"

It should appear in the General section

3 years ago
Hi everyone, we use ClearML Pro. Is there a way to find out which tasks are using significant storage for metrics and artifacts? We're around 25GB above the free quota for metrics and even though we've been deleting a lot of stuff, the amount stored hasn'

Hi @<1898906633770110976:profile|MinuteFlamingo30> , from my understanding this is actually on the roadmap. Currently there is no easy way to check it. Basically, any experiment with a lot of scalars or console logs (for example, experiments that ran for a very long time) will account for most of the storage.

6 days ago
Problem: excessive scalar storage from Tensorboard integration causing out-of-memory on ClearML server. Hi team, we've run into a problem with ClearML ingesting extremely large numbers of scalars from Tensorboard (auto_connect_frameworks) (~800K samples p

Hi @<1853245764742942720:profile|DepravedKoala88> , I don't think there is any downsampling when ingesting from Tensorboard. You can always turn off the autologging, log only what you want, and downsample accordingly. Keep in mind that on one hand you should avoid bloat on the server, and on the other keep high enough granularity in your scalars.
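The manual route could be sketched like this. It assumes the documented `auto_connect_frameworks` option on `Task.init` and the logger's `report_scalar` call; titles/series are placeholders and the keep-every-Nth policy is just one possible downsampling choice.

```python
# Sketch: disabling Tensorboard autologging and reporting downsampled
# scalars manually.

def downsample(values, keep_every: int = 100):
    # Keep every Nth sample (starting with the first) to cap stored points.
    return values[::keep_every]

def report_downsampled(task, title, series, values, keep_every=100):
    logger = task.get_logger()
    for i, v in zip(range(0, len(values), keep_every),
                    downsample(values, keep_every)):
        logger.report_scalar(title=title, series=series, value=v, iteration=i)

# Autologging can be limited at init time, e.g.:
# Task.init(project_name="p", task_name="t",
#           auto_connect_frameworks={"tensorboard": False})
```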

What do you think?

4 months ago