TimelyRabbit96
Moderator
10 Questions, 33 Answers
  Active since 16 March 2023
  Last activity 7 months ago

Reputation: 0

Badges (1): 26 × Eureka!
0 Votes 1 Answers 1K Views
Hi everyone, I’m new to ClearML, and our team has started investigating ClearML vs MLflow. We’d like to try out the K8s setup using the helm charts, but afte...
one year ago
0 Votes 10 Answers 1K Views
Hi there, I’ve been trying to play around with the model inference pipeline following this guide. I am able to do some of the steps (register the models), but when ...
one year ago
0 Votes 6 Answers 669 Views
9 months ago
0 Votes 3 Answers 899 Views
Hi ClearML community, trying to set up a load balancer and following this official guide, but can’t get it to work (Server Unavailable Error when opening the da...
one year ago
0 Votes 4 Answers 975 Views
Hello friends! I am trying to play around with the configs for gRPC for the Triton server for clearml-serving. I’m using the docker-compose setup, so not su...
one year ago
0 Votes 2 Answers 932 Views
Hey ClearML community. A while back I was asking how one can perform inference on a video with clearml-serving, which includes an ensemble, preprocessing, an...
one year ago
0 Votes 2 Answers 977 Views
Hello! Is there any way to access the Triton Server metrics from clearml-serving? As in the localhost:8002 that is running inside the Triton server None
one year ago
0 Votes 14 Answers 1K Views
Hi there, another triton-related question: Are we able to deploy Python_backend models? Like TritonPythonModel, or something like this, within clearml-serving? Tr...
one year ago
0 Votes 2 Answers 628 Views
Quick question about concurrency and the serving pipeline: if I have request A sent and it’s being processed, and then send request B while A is processing, w...
8 months ago
0 Votes 23 Answers 965 Views
Hello! Question about clearml-serving: Trying to do model inference on a video, so the first step in the Preprocess class is to extract frames. However, once this i...
one year ago
0 Hello Everyone! I'M Encountering An Issue When Trying To Deploy An Endpoint For A Large-Sized Model Or Get Inference On A Large Dataset (Both Exceeding ~100Mb). It Seems That They Can Only Be Downloaded Up To About 100Mb. Is There A Way To Increase A Time

Seems like this still doesn’t solve the problem. How can we verify this setting has been applied correctly? Other than checking the clearml.conf file on the container, that is.

8 months ago
0 Hello! Question About

Hi @<1523701205467926528:profile|AgitatedDove14>, thanks for the always-fast response! 🙂

Yep, so I am sending a link to an S3 bucket, and set up a Triton ensemble within clearml-serving.

This is the gist of what I’m doing:

None

So essentially I am sending raw data, but I can only send the first 8 frames (L45), since I can’t really send the data in a list or something?

one year ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

@<1523701118159294464:profile|ExasperatedCrab78>, would you have any idea about the above? Triton itself supports ensembling; I was wondering if we can somehow support this as well?

one year ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Oh, actually it seems like this is already possible from the code!

one year ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

I can see pipelines, but I’m not sure if that applies to Triton directly; it seems more of a DAG approach?

None

one year ago
0 Hello! Question About

I see, very interesting. I know this is pseudo-code, but are you suggesting sending the requests to Triton frame-by-frame?

Or perhaps np_frame = np.array(frame) itself could be a slice of the total_frames?

Like:

Dataset: [700, x, y, 3]
Batch: [8, x, y, 3]

I think that makes sense, and in the end deploy this endpoint like the pipeline example.
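
For reference, a minimal sketch of that slicing idea, assuming the frames are already decoded into a single NumPy array and reusing the send_request helper shown in the pipeline snippet further down; the endpoint name here is hypothetical:

import numpy as np

def iter_batches(total_frames: np.ndarray, batch_size: int = 8):
    # Yield fixed-size slices of the frame stack, e.g. [700, x, y, 3] -> [8, x, y, 3]
    for start in range(0, total_frames.shape[0], batch_size):
        yield total_frames[start:start + batch_size]

# Hypothetical usage inside the preprocess step:
# for batch in iter_batches(total_frames):
#     self.send_request(endpoint="/my_video_model/", version=None, data=batch.tolist())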

one year ago
0 Hi, I Wanted To Try Model Versioning, Suppose That I'Ve A Model And Want To Have Multiple Versions Of The Same Model And To Be Able To Have Inference On These Models(For Example

Thanks @<1523701205467926528:profile|AgitatedDove14>, this seems to solve the issue. I guess the main issue is that the delimiter is a _ instead of /. This did work; however, as you can see from the model endpoint deployment snippet, we also provide a custom aux-config file. We also had to make sure to update the name inside config.pbtxt so that Triton is happy:

From:

name: "mmdet"

To:

name: "mmdet_VERSION" -> "mmdet_1"
9 months ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Thanks for your response! I see, yep from an initial view it could work. Will certainly give it a try 🙂

However, to give you more context: in order to set up an ensemble within Triton, you also need to add an ensemble_scheduling block to the config.pbtxt file, which would be something like this:

None

I’m guessing this’ll be diffic...

one year ago
0 Hi There, I’Ve Been Trying To Play Around With The Model Inference Pipeline Following
  • Haven’t changed it, although I did change the host port to 8081 instead of 8080? Everything else seems to work fine though.
  • Sorry, what do you mean? I basically just followed the tutorial.
one year ago
0 Hello! Is There Any Way To Access The The

Yep, so Triton sets it up, but I think with the current configuration port 8002, which is where the metrics are, is not exposed.
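
As a rough illustration: once 8002 is mapped to the host (for example by publishing it in the docker-compose ports section), Triton's Prometheus-format metrics are plain HTTP and can be read directly; a minimal sketch, assuming the default localhost address:

import requests

# Triton exposes Prometheus-format metrics over HTTP on its metrics port.
# Assumes port 8002 has been published to the host by docker-compose.
metrics = requests.get("http://localhost:8002/metrics", timeout=5).text
print("\n".join(metrics.splitlines()[:10]))  # peek at the first few metric lines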

one year ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Thank you for all the answers! Yep, that worked, though is it usually safe to add this option instead of --shm-size?

Also, now I managed to send an image through curl using a local image (@img.png in curl). Seems to work through this! Getting the same gRPC size limit, but it seems like there’s a new commit that addressed it! 🎉

one year ago
0 Hi Clearml Community, Trying To Setup A Load Balancer And Follow This

I see. So what would be the reason for using a load balancer in this case? 🙂

one year ago
0 Hi There, I’Ve Been Trying To Play Around With The Model Inference Pipeline Following

Hi @<1523701205467926528:profile|AgitatedDove14>, I already did the scikit-learn examples; they work.

Also, both endpoint_a and endpoint_b work when hitting them directly within the pipeline example, but not the pipeline itself.

one year ago
0 Hi There, I’Ve Been Trying To Play Around With The Model Inference Pipeline Following

I’m running it through docker-compose. Tried both with and without Triton.

Hmm, still facing the same issue actually…

print("This runs!")
predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)

print("Doesn't get here", predict_a, predict_b)

And still, hitting the endpoints independently using curl works. Are you able to replicate this?
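
For context, a sketch of the equivalent direct check in Python, assuming the default clearml-serving HTTP route (http://<host>:8080/serve/<endpoint>) and the sklearn example payload; adjust the host port (e.g. 8081) and payload to match your setup:

import requests

base_url = "http://127.0.0.1:8080/serve"
payload = {"x0": 1, "x1": 2}  # placeholder payload for the sklearn example models

for endpoint in ("test_model_sklearn_a", "test_model_sklearn_b"):
    # Hit each endpoint directly, bypassing the pipeline
    resp = requests.post(f"{base_url}/{endpoint}", json=payload, timeout=10)
    print(endpoint, resp.status_code, resp.text)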
one year ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Okay, sorry for spamming here, but I feel like other people would find this useful. I was able to deploy the ensemble model, and I guess to complete this, I would need to individually add all the other “endpoints” independently, right?

As in, to reach something like below within Triton:
image

one year ago
0 Hello! Question About

So actually, while we’re at it, we also need to return a string from the model, which would be where the results are uploaded to (S3).

I was able to send back a URL with Triton directly, but the input/output shape mapping doesn’t seem to support strings in ClearML. I have opened an issue for it: None

Am I missing something?
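
One hedged workaround sketch, until string outputs are supported: upload the results from the serving code's postprocess step and return the S3 location as a plain JSON field instead of a Triton output tensor. The postprocess signature below follows the clearml-serving custom Preprocess template and the upload helper is a hypothetical stub, so treat this as an assumption rather than the supported API:

class Preprocess(object):
    @staticmethod
    def _upload_results(data):
        # Hypothetical stub: push the model output somewhere (e.g. S3 via boto3)
        # and return its location.
        return "s3://my-bucket/results/output.json"

    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        # Return the upload location in the JSON response rather than as a string tensor.
        return {"results_url": self._upload_results(data)}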

one year ago
0 Hello! Question About

Perfect, thank you so much!! 🙏

@<1560074028276781056:profile|HealthyDove84> This is how we’d tackle the video-to-frame ratio issue

one year ago
0 Hi Everyone, I’M New To Clearml, And Our Team Has Started Investigating Clearml Vs Mlflow. We’D Like To Try Out The K8S Setup Using The Helm Charts, But After Following

Actually, this is the error I’m getting:

0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
one year ago
0 Hello Everyone! I'M Encountering An Issue When Trying To Deploy An Endpoint For A Large-Sized Model Or Get Inference On A Large Dataset (Both Exceeding ~100Mb). It Seems That They Can Only Be Downloaded Up To About 100Mb. Is There A Way To Increase A Time

Or rather any pointers to debug the problem further? Our GCP instances have a pretty fast internet connection, and we haven’t faced that problem on those instances. It’s only on this specific local machine that we’re facing this truncated download.

I say truncated because we checked the model.onnx size on the container, and it was for example 110MB whereas the original one is around 160MB.

8 months ago
0 Hello Everyone! I'M Encountering An Issue When Trying To Deploy An Endpoint For A Large-Sized Model Or Get Inference On A Large Dataset (Both Exceeding ~100Mb). It Seems That They Can Only Be Downloaded Up To About 100Mb. Is There A Way To Increase A Time

@<1523701205467926528:profile|AgitatedDove14> this file is not getting mounted when using the docker-compose file for the clearml-serving pipeline; do we also have to mount it somehow?

The only place I can see this file being used is in the README, like so:

Spin the inference container:

docker run -v ~/clearml.conf:/root/clearml.conf -p 8080:8080 -e CLEARML_SERVING_TASK_ID=<service_id> -e CLEARML_SERVING_POLL_FREQ=5 clearml-serving-inference:latest

8 months ago
0 Hello Everyone! I'M Encountering An Issue When Trying To Deploy An Endpoint For A Large-Sized Model Or Get Inference On A Large Dataset (Both Exceeding ~100Mb). It Seems That They Can Only Be Downloaded Up To About 100Mb. Is There A Way To Increase A Time

@<1523701205467926528:profile|AgitatedDove14> Okay, we got to the bottom of this. This was actually because of the load balancer timeout settings we had, which were also set to 30 seconds and were confusing us.

We didn’t end up needing the above configs after all.

8 months ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Hi @<1523701118159294464:profile|ExasperatedCrab78>, so I’ve started looking into setting up the TritonBackends now, as we first discussed.

I was able to structure the folders correctly and deploy the endpoints. However, when I spin up the containers, I get the following error:

clearml-serving-triton        | | detection_preprocess | 1       | UNAVAILABLE: Internal: Unable to initialize shared memory key 'triton_python_backend_shm_region_1' to requested size (67108864 bytes). If yo...
one year ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

I see, yep, aux-config seems useful for sure. Would it perhaps be possible to pass a file to replace config.pbtxt completely? Formatting all the input/output shapes, and now the ensemble stuff, is starting to get quite complicated 🙂

one year ago
0 Hello Friends! I Am Trying To Play Around With The Configs For

Yes exactly, that’d be great! I’m not sure how flexible this can be (or should be), but perhaps following a pattern like CLEARML_GRPC_CONFIG_NAME would be possible?

one year ago