How Do People Generally Handle Moving From Experimental Mode With Notebooks And Then Running Pipelines For Production Training And Beyond?

Posted 2 years ago

Answers 7


Hi TrickySheep9 , ClearML Evangelist here, this question is the one I live for 😉 Are you specifically asking "how do people usually do it with ClearML", or really the "general" answer?

  
  
Posted 2 years ago

GrumpyPenguin23 both in general and with ClearML 🙂

  
  
Posted 2 years ago

Nbdev is "neat", but it's ultimately another framework that you have to enforce.

Re: maturity models - you will find no love for them here 😉 mainly because they don't drive research to production.

Your described setup can easily be outshined by a ClearML deployment, but SageMaker instances are cheaper. If you have a limited number of model architectures, you can get the added benefit of tracking your S3 models with ClearML with very few code changes. As for deployment - that's another story altogether.

Maybe some of the other silent lurkers here would like to comment?

  
  
Posted 2 years ago

One thing I am looking at is nbdev from the fastai folks.

  
  
Posted 2 years ago

Well, in general there is no one answer - I could talk about it for days. In ClearML the question is really a non-issue, since if you build a pipeline from notebooks on your dev machine in R&D, it is automatically converted to Python scripts running inside containers. Where shall we begin? Maybe you could describe your typical workload and intended deployment, including latency constraints?
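To make the notebook-to-pipeline idea concrete, here is a plain-Python sketch of the pattern being described - not the ClearML API itself (the `step` decorator and `PIPELINE` registry below are hypothetical, for illustration only): each notebook cell becomes a registered step function, and a runner executes the steps in order, feeding each output into the next.

```python
# Hypothetical sketch of turning notebook-style cells into pipeline steps.
# ClearML provides its own pipeline machinery; this only illustrates the shape.

PIPELINE = []

def step(fn):
    """Register a function as a pipeline step, in definition order."""
    PIPELINE.append(fn)
    return fn

@step
def load_data():
    # In a real notebook this cell might read from S3 or a feature store.
    return [1.0, 2.0, 3.0, 4.0]

@step
def train(data):
    # Stand-in for model training: here, just compute a mean as the "model".
    return sum(data) / len(data)

def run_pipeline():
    """Execute the registered steps in order, chaining outputs to inputs."""
    result = None
    for fn in PIPELINE:
        result = fn() if result is None else fn(result)
    return result

if __name__ == "__main__":
    print(run_pipeline())  # 2.5
```

Once the steps are plain functions like this, a framework can run each one as its own script in its own container, which is the conversion described above.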

  
  
Posted 2 years ago

Currently we train from SageMaker notebooks, push models to S3, and create containers for model serving.

  
  
Posted 2 years ago