TimelyRabbit96
Moderator
9 Questions, 26 Answers
  Active since 16 March 2023
  Last activity 9 days ago

Reputation: 0
Badges (1): 24 × Eureka!
0 Votes 2 Answers 422 Views
Hey ClearML community. A while back I was asking how one can perform inference on a video with clearml-serving, which includes an ensemble, preprocessing, an...
7 months ago
0 Votes 14 Answers 542 Views
Hi there, another Triton-related question: Are we able to deploy Python_backend models? Like TritonPythonModel, something like this, within clearml-serving? Tr...
11 months ago
0 Votes 3 Answers 407 Views
Hi ClearML community, trying to set up a load balancer following this official guide, but can’t get it to work (Server Unavailable Error when opening the da...
8 months ago
0 Votes 4 Answers 498 Views
Hello friends! I am trying to play around with the configs for gRPC for the Triton server for clearml-serving. I’m using the docker-compose setup, so not su...
12 months ago
0 Votes 1 Answer 616 Views
Hi everyone, I’m new to ClearML, and our team has started investigating ClearML vs MLflow. We’d like to try out the K8s setup using the Helm charts, but afte...
one year ago
0 Votes 2 Answers 50 Views
10 days ago
0 Votes 23 Answers 469 Views
Hello! Question about clearml-serving: Trying to do model inference on a video, so the first step in the Preprocess class is to extract frames. However, once this i...
11 months ago
0 Votes 2 Answers 494 Views
Hello! Is there any way to access the Triton Server metrics from clearml-serving? As in the localhost:8002 that is running inside the Triton server None
11 months ago
0 Votes 10 Answers 624 Views
Hi there, I’ve been trying to play around with the model inference pipeline following this guide. I am able to do the steps (register the models), but when ...
11 months ago
0 Hello Friends! I Am Trying To Play Around With The Configs For

Yes exactly, that’d be great! I’m not sure how flexible this can be (or should be), but perhaps following a pattern like CLEARML_GRPC_CONFIG_NAME would be possible?
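
To make the idea concrete, a rough sketch of the proposed pattern (both environment variable names here are hypothetical, just following the naming suggested above; the option name is a standard gRPC channel option):

import os

# Hypothetical env vars following the CLEARML_GRPC_CONFIG_* pattern proposed
# above; neither exists in ClearML today.
opt_name = os.environ.get("CLEARML_GRPC_CONFIG_NAME", "grpc.max_receive_message_length")
opt_value = int(os.environ.get("CLEARML_GRPC_CONFIG_VALUE", 4 * 1024 * 1024))

# Channel arguments that could then be forwarded to the Triton gRPC client.
channel_args = [(opt_name, opt_value)]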

12 months ago
0 Hi, I Wanted To Try Model Versioning, Suppose That I've A Model And Want To Have Multiple Versions Of The Same Model And To Be Able To Have Inference On These Models (For Example

If you’re wondering about the case where no optional config.pbtxt is provided, I guess the logic would be pretty much the same as above:

model_name = f"{model_name}_{version}"

But then after looking at create_config_pbtxt(), it seems like this is not being constructed at all, making me realize that this may have been optional - [confirming name is an optional property](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#name...
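
Roughly, that fallback could look like this sketch (an illustrative helper, not ClearML’s actual code):

# Hypothetical helper: derive a unique Triton endpoint name when no
# config.pbtxt (and hence no explicit name) is provided.
def versioned_model_name(model_name, version=None):
    # name is optional in config.pbtxt, so suffix the version to keep
    # each deployed endpoint unique.
    return f"{model_name}_{version}" if version is not None else model_name

print(versioned_model_name("mmdet", 1))  # -> mmdet_1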

29 days ago
0 Hi, I Wanted To Try Model Versioning, Suppose That I've A Model And Want To Have Multiple Versions Of The Same Model And To Be Able To Have Inference On These Models (For Example

Thanks @<1523701205467926528:profile|AgitatedDove14> , this seems to solve the issue. I guess the main issue is that the delimiter is a _ instead of a /. This did work; however, as you can see from the model endpoint deployment snippet, we also provide a custom aux-config file, so we also had to update the name inside config.pbtxt to keep Triton happy:

From:

name: "mmdet"

To:

name: "mmdet_VERSION" -> "mmdet_1"
one month ago
0 Hi There, I’Ve Been Trying To Play Around With The Model Inference Pipeline Following
  • Haven’t changed it, although I did change the host port to 8081 instead of 8080. Everything else seems to work fine though.
  • Sorry, what do you mean? I basically just followed the tutorial
11 months ago
0 Hello! Question About

So actually, while we’re at it, we also need to return a string from the model, which would be where the results are uploaded to (S3).

I was able to send back a URL with Triton directly, but the input/output shape mapping doesn’t seem to support strings in ClearML. I have opened an issue for it: None

Am i missing something?

11 months ago
0 Hello! Question About

Hi @<1523701205467926528:profile|AgitatedDove14> , thanks for the always-fast response! 🙂

Yep, so I am sending a link to an S3 bucket, and set up a Triton ensemble within clearml-serving.

This is the gist of what i’m doing:

None

So essentially I am sending raw data, but I can only send the first 8 frames (L45) since I can’t really send the data in a list or something?

11 months ago
0 Hi There, I’Ve Been Trying To Play Around With The Model Inference Pipeline Following

Hi @<1523701205467926528:profile|AgitatedDove14> , I already did the scikit-learn examples; they work.

Also, both endpoint_a and endpoint_b work when hitting them directly within the pipeline example, but not the pipeline itself.

11 months ago
0 Hi There, I’Ve Been Trying To Play Around With The Model Inference Pipeline Following

I’m running it through docker-compose. I tried both with and without Triton.

Hmm, still facing the same issue actually…

print("This runs!")
predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)

print("Doesn't get here", predict_a, predict_b)

And still, hitting the endpoints independently using curl works. Are you able to replicate this?
11 months ago
0 Hello! Is There Any Way To Access The

Yep, so Triton sets it up, but I think in the current configuration port 8002, which is where the metrics are served, is not exposed.

11 months ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

I can see pipelines, but I’m not sure if that applies to Triton directly; it’s more of a DAG approach?

None

11 months ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Thanks for your response! I see, yep from an initial view it could work. Will certainly give it a try 🙂

However, to give you more context, in order to set up an ensemble within Triton, you also need to add an ensemble_scheduling block to the config.pbtxt file, which would be something like this:

None

I’m guessing this’ll be diffic...

11 months ago
0 Hello! Question About

I see, very interesting. I know this is pseudo-code, but are you suggesting sending the requests to Triton frame-by-frame?

Or perhaps np_frame = np.array(frame) itself could be a slice of the total_frames?

Like:

Dataset: [700, x, y, 3]
Batch: [8, x, y, 3]

I think that makes sense, and in the end deploy this endpoint like the pipeline example.
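
Roughly, a sketch of that batching (the array below is just a stand-in for the decoded video; x = y = 224 is an assumption):

import numpy as np

total_frames = np.zeros((700, 224, 224, 3), dtype=np.uint8)  # Dataset: [700, x, y, 3]
batch_size = 8

# Slice the dataset into batches [8, x, y, 3]; the last batch may be shorter.
# Each slice would be one request to the serving endpoint.
for start in range(0, total_frames.shape[0], batch_size):
    batch = total_frames[start:start + batch_size]
    # e.g. self.send_request(endpoint="...", version=None, data=batch)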

11 months ago
0 Hi Everyone, I’M New To Clearml, And Our Team Has Started Investigating Clearml Vs Mlflow. We’D Like To Try Out The K8S Setup Using The Helm Charts, But After Following

Actually, this is the error I’m getting:

0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
one year ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Thank you for all the answers! Yep, that worked, though is it usually safe to add this option instead of --shm-size?

Also, now I managed to send an image through curl using a local image (@img.png in curl). Seems to work this way! I’m getting the same gRPC size limit error, but it seems like there’s a new commit that addressed it! 🎉

11 months ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

@<1523701118159294464:profile|ExasperatedCrab78> So this is the kind of thing I mean. If you think it’d be okay, I can properly implement this:

None

11 months ago
0 Hi Clearml Community, Trying To Setup A Load Balancer And Follow This

I see. So what would be the reason for using a load balancer in this case? 🙂

8 months ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

I see, yep, aux-config seems useful for sure. Would it be possible to perhaps pass a file to replace config.pbtxt completely? Formatting all the input/output shapes, and now the ensemble stuff, is starting to get quite complicated 🙂

11 months ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

@<1523701118159294464:profile|ExasperatedCrab78> , would you have any idea about the above? Triton itself supports ensembling; I was wondering if we can somehow support this as well?

11 months ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Okay, sorry for spamming here, but I feel like other people would find this useful. I was able to deploy the ensemble model, and I guess to complete this I would need to add all the other “endpoints” individually, right?

As in, to reach something like below within Triton:
[image]

11 months ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Oh, actually it seems like this is already possible from the code!

11 months ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Hi @<1523701118159294464:profile|ExasperatedCrab78> , so I’ve started looking into setting up the TritonBackends now, as we first discussed.

I was able to structure the folders correctly and deploy the endpoints. However, when I spin up the containers, I get the following error:

clearml-serving-triton        | | detection_preprocess | 1       | UNAVAILABLE: Internal: Unable to initialize shared memory key 'triton_python_backend_shm_region_1' to requested size (67108864 bytes). If yo...
11 months ago
0 Hello! Question About

Perfect, thank you so much!! 🙏

@<1560074028276781056:profile|HealthyDove84> This is how we’d tackle the video-to-frame ratio issue

11 months ago