Hello!! New To ClearML, Trying It Out Now

Hello!!
New to ClearML, trying it out now 🙂
I have a few questions:

  • Is there a way to easily deploy a model using the UI, or should it be done via the CLI by providing the model ID, etc.?
  • I want the models to be isolated because of package versions. So, for every model deployed, will it be served in a different Docker container, and will I need a different port?
  • I'm currently using the same server for ClearML Server and clearml-serving (in the docker-compose.yml, I set the clearml-serving-inference ports to 8082:8080). Should this work well, or should I expect issues and move to two different servers?
  • In the clearml_serving_setup documentation, it says "Edit the environment variables file (docker/example.env) with your clearml-server credentials". Do they mean I should create a ".env" file, or is the "example.env" file itself used?
    Please answer, even if you know the answer to only one of them 🙏

Thank you,
Guy

  
  
Posted 3 months ago

Answers 2


Hi @<1838387863251587072:profile|JealousCrocodile85>, sure!

Is there a way to easily deploy a model using the UI, or should it be done via the CLI by providing the model ID, etc.?

For non-LLM models there is only the CLI (which you can automate). For LLMs there is an app engine that allows you to launch them directly.
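If it helps, here is a minimal sketch of what the CLI flow looks like (the service name, endpoint name, and IDs below are placeholders; check `clearml-serving --help` on your version for the exact options):

```bash
# Create a serving service (the control task); this prints a service ID
clearml-serving create --name "my serving service"

# Register an already-trained model (by its ClearML model ID) as an endpoint
clearml-serving --id <service_id> model add \
    --engine sklearn \
    --endpoint "my_model" \
    --model-id <model_id> \
    --preprocess "preprocess.py"
```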

I want the models to be isolated because of package versions. So, for every model deployed, will it be served in a different Docker container, and will I need a different port?

I think each model will be served on a different endpoint; the port will most likely be the same.
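For example, assuming the default inference port (8080) and two registered endpoints, both models would be reachable on the same port under different paths (the endpoint names here are made up):

```bash
# Same container, same port -- the endpoint path distinguishes the models
curl -X POST "http://localhost:8080/serve/model_a" \
    -H "Content-Type: application/json" -d '{"x": [1, 2, 3]}'
curl -X POST "http://localhost:8080/serve/model_b" \
    -H "Content-Type: application/json" -d '{"x": [4, 5, 6]}'
```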

I'm currently using the same server for ClearML Server and clearml-serving (in the docker-compose.yml, I set the clearml-serving-inference ports to 8082:8080). Should this work well, or should I expect issues and move to two different servers?

So do you also have a GPU on that machine, or are you serving models that do not require a GPU? The baseline assumption is that serving (the "worker") is done on a machine separate from the ClearML server (the control plane).
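That said, with your 8082:8080 mapping, requests would simply go through the remapped host port, e.g. (the endpoint name is a placeholder):

```bash
# clearml-serving-inference listens on 8080 inside the container;
# the compose mapping 8082:8080 exposes it on host port 8082
curl -X POST "http://localhost:8082/serve/my_model" \
    -H "Content-Type: application/json" -d '{"x": [1, 2, 3]}'
```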

In the clearml_serving_setup documentation, it says "Edit the environment variables file (docker/example.env) with your clearml-server credentials". Do they mean I should create a ".env" file, or is the "example.env" file itself used?

These are the API credentials you get from the webUI (Settings -> Workspace).
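As far as I know, you edit docker/example.env in place and point docker-compose at it with --env-file, so no separate ".env" file is needed. A rough sketch of the values (the keys below come from the clearml-serving repo's example.env; double-check them against your copy):

```bash
# docker/example.env -- placeholder values, fill in your own credentials
CLEARML_WEB_HOST="https://app.clear.ml"
CLEARML_API_HOST="https://api.clear.ml"
CLEARML_FILES_HOST="https://files.clear.ml"
CLEARML_API_ACCESS_KEY="<access key from Settings -> Workspace>"
CLEARML_API_SECRET_KEY="<secret key from Settings -> Workspace>"
CLEARML_SERVING_TASK_ID="<service ID from clearml-serving create>"
```

Then something like `docker-compose --env-file example.env -f docker-compose.yml up` (run from the docker/ directory) should bring the stack up.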

  
  
Posted 3 months ago

Thank you very much, @<1523701070390366208:profile|CostlyOstrich36>!

  
  
Posted 3 months ago