Hi, I'm Eric. I'm an MLOps engineer at a company with 9 DEs, 6 DSs, and 2 MLOps engineers. I just learned about ClearML a few hours ago and I'm getting excited about it!! I'm wondering if we could replace our current MLOps platform with ClearML. Right n


Hi friends, I'm just seeing these new messages. I read these links and I agree with @<1557175205510516736:profile|ShallowSwan53> . It's nice that the webapp has these pages, but what is the workflow to actually use this registry?

Also, @<1557175205510516736:profile|ShallowSwan53> , do you have a specific workflow in mind that you're hoping to get from ClearML?

At BEN, we're experimenting with

  • BentoML for model serving. It's a Python REST framework a lot like FastAPI, but with some nice utilities for inference.
  • MLflow. We'd be open to replacing this with ClearML; I'd even push for it just for the nice integration with ClearML's experiment tracking!
    Here's our MLflow workflow:

What if I wanted to do something like:

  1. Run several experiments, logging a model artifact for each, landing in S3 as a backend.
  2. Compare the experiments and promote the best model, maybe with tags like production, click-through-rate-regressor, and v1.0.0. Advanced, stable pipelines may automate this step if the selection criteria are clear.
  3. Run a script from CI, possibly in Python, that:
     a. fetches the latest click-through-rate-regressor model with the production tag from the registry
     b. clones / installs the inference and preprocessing code that goes with that model; this could be an entire REST API with that logic. Monitoring code with something like WhyLabs or Arize.ai would go in the REST API endpoints.
     c. builds a Docker image with the REST API and the model weights baked in
     d. deploys that image to wherever you run containers (we use AWS ECS)

  (1) and (2) seem doable to me, but (3.a) I'm not sure about.

Is there a straightforward way we could run a filter query against ClearML for models matching these criteria, and then download the winner?
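For what it's worth, here's a rough sketch of how step (3.a) might look with the ClearML Python SDK (`Model.query_models` plus `get_local_copy`). The project name "ctr" and the tag values are just placeholders from the workflow above, and this assumes a configured clearml.conf or CLEARML_API_* env vars in CI — not tested against a real server:

```python
# Sketch of step (3.a): CI script that finds the newest published model
# carrying the production + click-through-rate-regressor tags and downloads
# its weights. Project name and tags below are example values.

def pick_first_with_tags(models, required_tags):
    """Return the first model carrying *all* required tags, or None.

    The SDK's tag filter may join tags with a logical OR, so we re-check
    that every required tag is actually present; query results are assumed
    to be ordered newest-first.
    """
    wanted = set(required_tags)
    for model in models:
        if wanted.issubset(set(model.tags or [])):
            return model
    return None


if __name__ == "__main__":
    # Imported lazily so the helper above works without the SDK installed.
    from clearml import Model

    required = ["production", "click-through-rate-regressor"]
    candidates = Model.query_models(
        project_name="ctr",          # hypothetical project name
        tags=required,
        only_published=True,
    )
    best = pick_first_with_tags(candidates, required)
    if best is None:
        raise SystemExit("no published model carries all required tags")
    weights_path = best.get_local_copy()  # downloads weights from the S3 backend
    print(best.id, weights_path)
```

From there the CI job would feed `weights_path` into the Docker build for steps (c) and (d).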

Posted one year ago
111 Views
0 Answers