
Hello all,

I have a question related to ClearML Serving. I have trained a YOLOv5 model (.pt file). When I try to create the endpoint, I get the following error message while loading the model:

E0608 08:10:03.000260 110 model_lifecycle.cc:596] failed to load 'model_1' version 1: Internal: failed to load model 'model_1': PytorchStreamReader failed locating file constants.pkl: file not found
Exception raised from valid at /opt/pytorch/pytorch/caffe2/serialize/inline_container.cc:171 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7f28e03b36dc in /opt/tritonserver/backends/pytorch/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7f28e038912c in /opt/tritonserver/backends/pytorch/libc10.so)
frame #2: caffe2::serialize::PyTorchStreamReader::valid(char const*, char const*) + 0x35b (0x7f28909ef58b in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #3: caffe2::serialize::PyTorchStreamReader::getRecordID(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x57 (0x7f28909efe37 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #4: caffe2::serialize::PyTorchStreamReader::getRecord(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x5c (0x7f28909efedc in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #5: torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) + 0x131 (0x7f28919f34b1 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #6: <unknown function> + 0x41b7579 (0x7f28919dd579 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #7: <unknown function> + 0x41ba5ab (0x7f28919e05ab in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #8: torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::istream&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&) + 0x179 (0x7f28919e1b99 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #9: torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::istream&, c10::optional<c10::Device>) + 0x8f (0x7f28919e1e8f in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #10: torch::jit::load(std::istream&, c10::optional<c10::Device>) + 0xb9 (0x7f28919e1f99 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #11: <unknown function> + 0x1ee41 (0x7f28e047be41 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #12: <unknown function> + 0x247eb (0x7f28e04817eb in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #13: <unknown function> + 0x24c92 (0x7f28e0481c92 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #14: TRITONBACKEND_ModelInstanceInitialize + 0x3f6 (0x7f28e04820d6 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #15: <unknown function> + 0x10cfb2 (0x7f28e70f4fb2 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #16: <unknown function> + 0x10e732 (0x7f28e70f6732 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #17: <unknown function> + 0x10288f (0x7f28e70ea88f in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #18: <unknown function> + 0x1bc3a4 (0x7f28e71a43a4 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #19: <unknown function> + 0x1c2e38 (0x7f28e71aae38 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #20: <unknown function> + 0x2f5b00 (0x7f28e72ddb00 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #21: <unknown function> + 0xd6de4 (0x7f28e6c37de4 in /lib/x86_64-linux-gnu/libstdc++.so.6)
frame #22: <unknown function> + 0x8609 (0x7f28e7f88609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #23: clone + 0x43 (0x7f28e6922133 in /lib/x86_64-linux-gnu/libc.so.6)

I tried to write my own load method in the preprocess.py file:

def load(self, local_file_name: str) -> Optional[Any]:  # noqa
    """
    Optional: provide loading method for the model
    useful if we need to load a model in a specific way for the prediction engine to work
    :param local_file_name: file name / path to load the model from
    :return: Object that will be called with .predict() method for inference
    """
    self._model = torch.hub.load('ultralytics/yolov5', 'custom', path=local_file_name)
    return self._model

But it still fails with the same error.
Is there an example available of how to load a YOLOv5 model?
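For context, the traceback shows Triton's PyTorch backend calling `torch.jit.load` (frame #10), which expects a TorchScript archive (the format that contains the `constants.pkl` file the error complains about); a plain `.pt` checkpoint saved with `torch.save` does not have that layout. As a hedged sketch (using a small stand-in `nn.Module` rather than the actual YOLOv5 network), converting a model to TorchScript before registering the endpoint might look like:

```python
import torch
import torch.nn as nn

# Stand-in model for illustration only; in practice this would be the
# YOLOv5 network restored from the trained .pt checkpoint.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
model.eval()

# Trace with a dummy input to produce a TorchScript archive, which is
# the format torch.jit.load (and hence Triton's PyTorch backend) reads.
example = torch.randn(1, 3, 64, 64)
scripted = torch.jit.trace(model, example)
scripted.save("model_1.pt")

# Verify the archive round-trips through torch.jit.load, the same call
# that fails in the traceback above when given a plain checkpoint.
reloaded = torch.jit.load("model_1.pt")
out = reloaded(example)
```

Whether this resolves the ClearML Serving endpoint error depends on how the model was exported, so treat it as an assumption to test rather than a confirmed fix.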

  
  
Posted 8 months ago