Hi, yet again I come with a problem I can't see a fix for, and this issue has been bugging me for days. I want to serve a GPT-2 model and I have the ONNX file uploaded to the server. When I try to mount the endpoint, the server will try to find model.onnx as it is int…


Following up on this: I was unable to fix the issue, but I ended up finding another complication. When I upload an ONNX model using the upload command, it keeps getting tagged as a TensorFlow model, even with the correct file structure. That leads to the previous issue, since the serving module will then search for a different format than ONNX.
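For reference, this is the workaround I'm considering: registering the weights through the Python SDK and forcing the framework tag explicitly, instead of relying on auto-detection. This is only a sketch, and I'm assuming the `framework` argument of `OutputModel` is what the serving side reads when it decides on the model format:

```python
from clearml import Task, OutputModel

# Register the ONNX weights manually and force the framework tag,
# so the model is not auto-tagged as TensorFlow.
# (Assumption: the serving helper honors OutputModel's `framework` value.)
task = Task.init(project_name="serving-models", task_name="register gpt2 onnx")

output_model = OutputModel(task=task, name="gpt2", framework="ONNX")
output_model.update_weights(weights_filename="model.onnx")
```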

As far as I could see, this comes from the helper inside the Triton engine, but as of right now I have not been able to fix it.

Is there anything I might be doing wrong?
