Answered
Hi, Our Research Team Uses Both

Hi, our research team uses both local servers and cloud services to run an ML project.
In detail, we do EDA, data preprocessing, experiments, and other dirty work on our local servers (we have GPUs, of course), then deploy the product to cloud services and monitor model performance while serving. Now we want to adopt MLOps practices with the help of ClearML. I don't know whether ClearML supports our case: tracking experiments and orchestrating tasks on our local servers, while at the same time orchestrating tasks, deploying, and monitoring the product in the cloud.
Thanks!

  
  
Posted 2 years ago

Answers 3


GrittyKangaroo27, I think ClearML will be right up your alley then 🙂

  
  
Posted 2 years ago

GrittyKangaroo27 hi!

Can you please elaborate on your use case for deployment?

Besides that, I'm happy to say that ClearML supports all the cases above 🙂

Also for some further reading:
https://clear.ml/products/clearml-deploy/
https://allegro.ai/clearml/docs/rst/deploying_clearml/deploying_clearml_formats/index.html
https://github.com/allegroai/clearml-serving

  
  
Posted 2 years ago

CostlyOstrich36
Great to hear that!

In short, we hope the ClearML server can act as a bridge connecting our local servers and cloud infrastructure (local servers for development, cloud for deployment and monitoring).

For example,

  • We want to deploy ClearML somewhere on the Internet.
  • Then use this service to track experiments, orchestrate workflows, etc. on our local servers.
  • After the experiments finish, we collect the returned artifacts and save them somewhere, e.g. on local disk or in cloud storage.
  • Finally, we want to use ClearML to deploy our whole training pipeline to the cloud environment, monitor it, and continue the loop of automatically retraining the deployed model in the cloud, based on the monitoring results or a fixed schedule.
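The last bullet (retraining triggered by monitoring results or a fixed schedule) can be sketched as plain decision logic, independent of any ClearML API. Everything below — the `ACCURACY_FLOOR` threshold, the weekly interval, and the `should_retrain` helper — is a hypothetical illustration of the loop described above, not ClearML functionality:

```python
import time

# Hypothetical thresholds -- tune these for your own project.
ACCURACY_FLOOR = 0.90               # retrain if serving accuracy drops below this
RETRAIN_INTERVAL_S = 7 * 24 * 3600  # or retrain at least once a week

def should_retrain(current_accuracy: float, last_retrain_ts: float, now: float) -> bool:
    """Decide whether to kick off a retraining run.

    Fires either when monitoring shows a degraded metric, or when the
    fixed schedule is due -- mirroring the loop described above.
    """
    metric_degraded = current_accuracy < ACCURACY_FLOOR
    schedule_due = (now - last_retrain_ts) >= RETRAIN_INTERVAL_S
    return metric_degraded or schedule_due

now = time.time()
print(should_retrain(0.95, now - 8 * 24 * 3600, now))  # True: schedule is due
print(should_retrain(0.95, now, now))                  # False: healthy and recent
print(should_retrain(0.85, now, now))                  # True: metric degraded
```

In practice the monitoring side would feed `current_accuracy` from your serving metrics, and a positive decision would enqueue the training pipeline for an agent to pick up.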
  
  
Posted 2 years ago