Answered

Hello! I’m currently using ClearML-Server as an artifact manager and ClearML-Serving for model inference, with each running on separate hosts using Docker Compose. I’ve successfully deployed a real-time inference model in ClearML-Serving, configured within its own container. Now, I need to deploy another model that requires a different environment in the same ClearML-Serving setup.
Is it possible to configure Docker Compose to run two different models in their own containers, and if so, how can ClearML-Serving be configured to direct requests to the appropriate container based on the service endpoint? I’m also interested in monitoring both models using a single Grafana dashboard. I came across a tutorial suggesting this might be possible, but I’m unsure how to implement it so that ClearML-Serving correctly routes traffic to the right container for predictions. Any advice or insights on this setup would be greatly appreciated!

  
  
Posted 6 months ago

Answers 17


Hi @<1697056701116583936:profile|JealousArcticwolf24>
Awesome deployment 🤩
Yes, if you need another scalable model-serving environment, you can just run another instance of the clearml-serving-inference container:
https://github.com/allegroai/clearml-serving/blob/7ba356efc97a6ae2159283d198d981b3c1ab85e6/docker/docker-compose.yml#L77
So you end up with two of them, one per model environment. Notice that each one should have its own unique ClearML Serving session:
https://github.com/allegroai/clearml-serving/blob/7ba356efc97a6ae2159283d198d981b3c1ab85e6/docker/docker-compose.yml#L92
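As a rough sketch, a second inference service can sit next to the first one in the same docker-compose file. The service name, host port, and the `CLEARML_SERVING_TASK_ID_2` variable below are illustrative assumptions, not part of the official compose file; the task ID must point to a separate serving session created with `clearml-serving create`:

```yaml
# Sketch only: a second clearml-serving-inference service in the same
# docker-compose.yml, pointing at its own serving session.
clearml-serving-inference-2:
  image: allegroai/clearml-serving-inference:latest
  container_name: clearml-serving-inference-2
  restart: unless-stopped
  ports:
    - "8081:8080"   # a different host port than the first instance
  environment:
    CLEARML_SERVING_TASK_ID: ${CLEARML_SERVING_TASK_ID_2:-}
    CLEARML_API_ACCESS_KEY: ${CLEARML_API_ACCESS_KEY:-}
    CLEARML_API_SECRET_KEY: ${CLEARML_API_SECRET_KEY:-}
    CLEARML_API_HOST: ${CLEARML_API_HOST:-}
```

Requests then route by host port: each serving session only knows about its own endpoints, so sending a request to the right instance's port reaches the right environment.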

  
  
Posted 6 months ago

@<1523701205467926528:profile|AgitatedDove14> Thank you for the answer! So I will be able to log everything in the same Grafana? And I don't need to run another docker-compose with a new ClearML inference stack?)

  
  
Posted 6 months ago

correct

  
  
Posted 6 months ago

@<1523701205467926528:profile|AgitatedDove14> Looks like it's not so easy( I run the model in an independent container, but I can't find its metrics in Grafana( Should I add this service into the docker-compose? If so, how can I add new models without rebuilding the whole docker-compose? Or do I just need to add the configs into the env of my Dockerfile?

  
  
Posted 6 months ago

@<1523701205467926528:profile|AgitatedDove14> And should I open ports? Or maybe just add the network to my new model's container? I can't understand what to do(

  
  
Posted 6 months ago

Can I run clearml-serving-inference in another docker-compose and use the network and environment from the main clearml-serving docker-compose?

  
  
Posted 6 months ago

But I'm not sure that it will work(

  
  
Posted 6 months ago

And it's cool that we have Kafka and so on here, but it looks like I'm not using it when I just build a container with another environment.

  
  
Posted 6 months ago

I can get predictions from the container, but I'm not able to use the whole tool stack(

  
  
Posted 6 months ago

Looks like there should be some piece that I can't find to make it complete.

  
  
Posted 6 months ago

Let's start small. Do you have Grafana enabled in your docker-compose, and can you log in to your Grafana web UI?
Notice that Grafana needs to access the Prometheus container directly, so the easiest way is to have everything in the same docker-compose.
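When both services live in the same compose file, Grafana can reach Prometheus by its service name. A minimal datasource provisioning file might look like this; the service name `prometheus`, port 9090, and the mount path are assumptions based on a default setup:

```yaml
# Hypothetical Grafana datasource provisioning file, e.g. mounted at
# /etc/grafana/provisioning/datasources/prometheus.yaml.
# The URL uses the compose service name, which assumes both
# containers share a docker-compose network.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```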

  
  
Posted 6 months ago

@<1523701205467926528:profile|AgitatedDove14> Yes) For the first two models running in the first environment I have it) I'm logging the inputs and outputs for those two models)

  
  
Posted 6 months ago

And can you see your Prometheus in your Grafana?

  
  
Posted 6 months ago

@<1523701205467926528:profile|AgitatedDove14> yes)

  
  
Posted 6 months ago

Hi @<1697056701116583936:profile|JealousArcticwolf24>, just saw the reply.
The image looks okay, but what is the query? Basically I'm trying to understand whether Grafana is connected to Prometheus, and whether Prometheus has any data in it.
Secondly, just to make sure, the Kafka service should be able to connect directly to the container running the actual inference.

  
  
Posted 5 months ago

@<1523701205467926528:profile|AgitatedDove14> Thanks) Actually I already solved the problem) I just built another docker-compose (only with the clearml-inference service) using an external network, connected to the first docker-compose where I'm running the whole bunch of containers with Grafana and Prometheus) And now I'm able to use 2 different environments in 2 different containers, but log to the same Prometheus and Grafana)
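The setup described above can be sketched as a second compose file that only runs the inference container and joins the first stack's network as an external network. The network name `clearml-serving-backend` and the variable names are assumptions for illustration; the real network name can be checked with `docker network ls`:

```yaml
# Sketch: second docker-compose file for the second model environment.
services:
  clearml-serving-inference-env2:
    image: allegroai/clearml-serving-inference:latest
    ports:
      - "8081:8080"
    environment:
      CLEARML_SERVING_TASK_ID: ${CLEARML_SERVING_TASK_ID_2:-}
      CLEARML_API_ACCESS_KEY: ${CLEARML_API_ACCESS_KEY:-}
      CLEARML_API_SECRET_KEY: ${CLEARML_API_SECRET_KEY:-}
      CLEARML_API_HOST: ${CLEARML_API_HOST:-}
    networks:
      - clearml-serving-backend

networks:
  clearml-serving-backend:
    external: true   # created by the main clearml-serving docker-compose
```

Because the container joins the existing network, the Prometheus from the first stack can scrape it and the metrics show up in the same Grafana.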

  
  
Posted 5 months ago

Nice!!!
🎊

  
  
Posted 5 months ago