SillyRobin38
Moderator
7 Questions, 19 Answers
Active since 16 January 2024
Last activity 6 months ago

Reputation: 0
Badges: 1 (17 × Eureka!)
0 Votes · 8 Answers · 692 Views
10 months ago
0 Votes · 3 Answers · 665 Views
Hello everyone, is there any way to remove a serving instance?
7 months ago
0 Votes · 2 Answers · 561 Views
Hello, everyone. I have a model, and in preprocess.py, I have included some print statements. I'm curious to know if it's possible to view these print outpu...
9 months ago
0 Votes · 3 Answers · 444 Views
Hello, everyone, just wanted to ask how we can fix the following issue: Retrying (Retry(total=229, connect=240, read=240, redirect=240, status=240)) after c...
6 months ago
0 Votes · 8 Answers · 707 Views
11 months ago
0 Votes · 8 Answers · 630 Views
Hello all! Is it possible to utilize shared memory in ClearML for tasks like model inference, where instead of transferring images over the network (e.g., HT...
8 months ago
0 Votes · 3 Answers · 970 Views
Hello everyone, I'm curious to know if it's possible to prevent uploading a duplicate endpoint. For instance, if an endpoint has already been uploaded using ...
9 months ago
0 Hello Everyone, I'm Curious To Know If It's Possible To Prevent Uploading A Duplicate Endpoint. For Instance, If An Endpoint Has Already Been Uploaded Using The

so basically check the hash and say, no need to upload?

Thanks for answering. Yes, this is exactly what I wanted.
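
For what it's worth, the hash check discussed above could look roughly like this. A minimal sketch only: clearml-serving does not expose such a pre-upload check, and file_sha256 / should_upload / already_uploaded_hashes are hypothetical names.

import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    # Hash the model file in chunks so large files don't exhaust memory.
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_upload(model_file: Path, already_uploaded_hashes: set) -> bool:
    # Skip the upload when an identical artifact was already registered.
    return file_sha256(model_file) not in already_uploaded_hashes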

9 months ago
0 Is Clearml-Serving Using Either System Or Cuda Shared Memory? Or Planning To? In Our Experiments Using Perf_Analyzer The Shared Memory Experiments Showed A Huge Improvement And If We Wanted To Look Into This, Do You Have Any Pointers Of Where We Can Do T

@<1523701205467926528:profile|AgitatedDove14> Actually, what we meant is something like the following example from the Triton client examples:

None

Does ClearML have any example of using shared memory, or is it out of scope for ClearML?
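
Based on the public tritonclient API, the system shared-memory flow from those examples looks roughly like this (the region names, the input name INPUT__0, the tensor shape, and the model name my_model are placeholders, not anything ClearML-specific):

import numpy as np
import tritonclient.http as httpclient
import tritonclient.utils.shared_memory as shm

client = httpclient.InferenceServerClient("localhost:8000")

# Put the input tensor into a system shared-memory region instead of the HTTP body.
image = np.zeros((1, 3, 640, 640), dtype=np.float32)  # placeholder input
byte_size = image.nbytes
handle = shm.create_shared_memory_region("input_data", "/input_data", byte_size)
shm.set_shared_memory_region(handle, [image])
client.register_system_shared_memory("input_data", "/input_data", byte_size)

# The request itself carries only the region name and size; Triton reads the
# bytes locally, so the image is never serialized into the HTTP payload.
infer_input = httpclient.InferInput("INPUT__0", list(image.shape), "FP32")
infer_input.set_shared_memory("input_data", byte_size)
result = client.infer("my_model", inputs=[infer_input])

# Clean up the region when done.
client.unregister_system_shared_memory("input_data")
shm.destroy_shared_memory_region(handle)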

8 months ago
0 Hello All! Is It Possible To Utilize Shared Memory In Clearml For Tasks Like Model Inference, Where Instead Of Transferring Images Over The Network (E.G., Http, Rpc), We Use A Shared Memory Extension? Please Refer To The Link Below:

Thanks for sharing that, but if I'm not mistaken, I didn't convey my exact issue clearly. Shared memory also uses the same communication channel as HTTP/RPC; however, instead of transferring the entire image, for example, to the Triton server, it binds the image's address to some shared memory and then sends the address over HTTP to the Triton server. By doing this, we can save the cost of transferring the data. Please correct me if I'm wrong about this. I want to know if ClearML can support su...

8 months ago
0 Hello Everyone, Is There Any Way To Remove A Serving Instance?

@<1523701205467926528:profile|AgitatedDove14> Thanks. Suppose I have a few serving instances, which I can see listed with clearml-serving list. One of them has been named incorrectly, and it is not running anywhere, so is there any way to remove it?
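
One workaround, under the assumption that each instance shown by clearml-serving list is backed by a ClearML controller Task (the task ID placeholder below is whatever that listing prints):

from clearml import Task

# Assumption: deleting (or archiving) the controller Task removes the
# misnamed instance from `clearml-serving list`.
task = Task.get_task(task_id="<service-task-id>")  # ID from clearml-serving list
task.mark_stopped()  # make sure it is not left in a running state
task.delete()        # alternatively, archive it from the web UI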

7 months ago
0 Hello, Everyone. I Have A Model, And In

Thanks for the prompt response

9 months ago
0 Hello, Everyone, Just Wanted To Ask, How We Can Fix The Following Issue:

Hi @<1523701205467926528:profile|AgitatedDove14>, thanks for the answer. I just wanted to understand the broader picture. I'm more of an ML engineer, so for a self-hosted server I wanted to know what the best way to create and register the SSL keys is. This might be out of context or a noob question, so I apologize for it.

6 months ago
0 Hello All! Is It Possible To Utilize Shared Memory In Clearml For Tasks Like Model Inference, Where Instead Of Transferring Images Over The Network (E.G., Http, Rpc), We Use A Shared Memory Extension? Please Refer To The Link Below:

Sorry, just a quick question: we do not need to do much on our end, right? I mean, ClearML will handle sharing memory between preprocess.py and the Triton server?

8 months ago
0 Hi Everyone, I Wanted To Inquire If It's Possible To Have Some Type Of Model Unloading. I Know There Was A Discussion Here About It, But After Reviewing It, I Didn't Find An Answer. So, I Am Curious: Is It Possible To Explicitly Unload A Model (By Calling

Thanks, @<1523701205467926528:profile|AgitatedDove14>, for your feedback. Actually, I've been working with TRT-LLM since day zero of its launch. It is very good for LLMs; however, I haven't had the chance to check the trtllm-backend yet, as I'm waiting for some features there. I'm planning to use it and examine it, and I will try to provide any feedback I have on that. But before doing so, I need to become more familiar with the internals of ClearML, I guess.

By the way, thanks for the fe...

11 months ago
0 Hello, Community. I Hope You Are All Doing Well. I'm Seeking Information Regarding A Specific Problem, Especially In The Field Of Computer Vision. Typically, An App In The Field Of Computer Vision Will Have Multiple Models, Each With Its Own Preprocessing,

@<1523701205467926528:profile|AgitatedDove14> No, actually I can upload a directory for the model thanks to ClearML, but what I really want to achieve is to share this code:

├── common
│   ├── common.py

Between these two preprocess.py files:

└── yolo8
    └── preprocess.py
└── yolo7
    └── preprocess.py
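
One way to wire that up, sketched under the assumption that common/ is deployed next to the model folders at serve time (letterbox is a hypothetical shared helper, not anything from the original thread):

# hypothetical yolo8/preprocess.py (yolo7's would be identical)
import sys
from pathlib import Path

# Put the repository root on sys.path so common/ is importable from both models.
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))

from common.common import letterbox  # hypothetical shared helper

def preprocess(image):
    # Both yolo7 and yolo8 now call the exact same shared routine.
    return letterbox(image, new_shape=(640, 640))
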
9 months ago
0 Hi Everyone, I Wanted To Inquire If It's Possible To Have Some Type Of Model Unloading. I Know There Was A Discussion Here About It, But After Reviewing It, I Didn't Find An Answer. So, I Am Curious: Is It Possible To Explicitly Unload A Model (By Calling

Hi @<1523701205467926528:profile|AgitatedDove14>, thanks for answering, but it's not what I meant. Suppose that I have three models and these models can't be loaded simultaneously into GPU memory (since there is not enough GPU RAM for all of them at the same time). What I have in mind is this: is there an automatic way to unload a model (for example, if a model hasn't been run in the last 10 minutes, or something similar)? Or, if we don't have such an automatic method, can we manually unload the ...
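
For illustration only, the idle-timeout idea can be sketched on top of Triton's explicit model-control API. This is not something clearml-serving exposes; it assumes Triton runs with --model-control-mode=explicit, and the model names are placeholders.

import time
import tritonclient.http as httpclient

IDLE_LIMIT = 10 * 60  # unload after 10 idle minutes, per the example above
last_used = {}        # model name -> timestamp of the last inference

client = httpclient.InferenceServerClient("localhost:8000")

def infer(model_name, inputs):
    # Lazily load the model, run inference, and record the access time.
    if not client.is_model_ready(model_name):
        client.load_model(model_name)
    last_used[model_name] = time.time()
    return client.infer(model_name, inputs=inputs)

def evict_idle_models():
    # Call periodically to free GPU memory held by models nobody is using.
    now = time.time()
    for name, stamp in list(last_used.items()):
        if now - stamp > IDLE_LIMIT:
            client.unload_model(name)
            del last_used[name]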

11 months ago
0 Hello, Community. I Hope You Are All Doing Well. I'm Seeking Information Regarding A Specific Problem, Especially In The Field Of Computer Vision. Typically, An App In The Field Of Computer Vision Will Have Multiple Models, Each With Its Own Preprocessing,

@<1523701205467926528:profile|AgitatedDove14> Thanks for the response. Yeah, each endpoint will have its own modules/files; I just wanted to know if there is a way to share such common code between different endpoints, so that the common code gets synced like the preprocessing code does.

I do have one question: suppose that we have 1000 VM instances running, and suppose that I create a package from the common code and install it alongside the containe...
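
A minimal sketch of that packaging idea (the package name is hypothetical): a setup.py at the repository root lets every container pip-install the shared code once instead of copying it per endpoint.

# hypothetical setup.py placed at the repository root, next to common/
from setuptools import setup, find_packages

setup(
    name="serving-common",  # hypothetical package name
    version="0.1.0",
    packages=find_packages(include=["common", "common.*"]),
)

Each container could then install it from the repository (or an internal index), and every preprocess.py simply does import common.common.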

9 months ago
0 Hello, Community. I Hope You Are All Doing Well. I'm Seeking Information Regarding A Specific Problem, Especially In The Field Of Computer Vision. Typically, An App In The Field Of Computer Vision Will Have Multiple Models, Each With Its Own Preprocessing,

@<1523701205467926528:profile|AgitatedDove14> About the proposed ways of fixing this issue: I've got my hands a little dirty with the code, and I think adding another option to include some other files in the clearml-serving model add command would be beneficial here. Suppose that I have the current directory for now:

├── common
│   ├── common.py
└── yolo8
    ├── 1
    │   ├── model_NVIDIA_GeForce_RTX_3080.plan
    │   └── model_Tesla_T4.plan
    ├── config.pbtxt
  ...
9 months ago
0 Hi Everyone, I Wanted To Inquire If It's Possible To Have Some Type Of Model Unloading. I Know There Was A Discussion Here About It, But After Reviewing It, I Didn't Find An Answer. So, I Am Curious: Is It Possible To Explicitly Unload A Model (By Calling

@<1523701205467926528:profile|AgitatedDove14> That is awesome. Could you please point me to the branch you are working on, or a specific commit, so I can see how you are implementing it? Honestly, I want to get familiar with it and, if possible, contribute to the project.

11 months ago
0 Hi Everyone, I Wanted To Inquire If It's Possible To Have Some Type Of Model Unloading. I Know There Was A Discussion Here About It, But After Reviewing It, I Didn't Find An Answer. So, I Am Curious: Is It Possible To Explicitly Unload A Model (By Calling

@<1523701205467926528:profile|AgitatedDove14> No, I didn't do that, but if I'm not mistaken, about a month ago I saw some users on Reddit comparing them. They observed that TRT-LLM outperforms all the leading backends, including vLLM. I will try to find it and paste it here.

11 months ago