Hi all! I recently started working with ClearML Serving. I got this example working


OK, I have a weird update... I shut down and restarted the Docker container just to get fresh logs, and now I am getting the following error from clearml-serving-triton:
```
clearml-serving-triton | clearml-serving - Nvidia Triton Engine Controller
clearml-serving-triton | Warning: more than one valid Controller Tasks found, using Task ID=433aa14db3f545ad852ddf846e25dcf0
clearml-serving-triton | ClearML Task: overwriting (reusing) task id=350a5a919ff648148a3de4483878f52f
clearml-serving-triton | 2023-01-26 15:41:41,507 - clearml.Task - INFO - No repository found, storing script code instead
clearml-serving-triton | WARNING: [Torch-TensorRT] - Unable to read CUDA capable devices. Return status: 35
clearml-serving-triton | I0126 15:41:48.077867 34 libtorch.cc:1381] TRITONBACKEND_Initialize: pytorch
clearml-serving-triton | I0126 15:41:48.077927 34 libtorch.cc:1391] Triton TRITONBACKEND API version: 1.9
clearml-serving-triton | I0126 15:41:48.077932 34 libtorch.cc:1397] 'pytorch' TRITONBACKEND API version: 1.9
clearml-serving-triton | 2023-01-26 15:41:48.210347: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
clearml-serving-triton | I0126 15:41:48.239499 34 tensorflow.cc:2181] TRITONBACKEND_Initialize: tensorflow
clearml-serving-triton | I0126 15:41:48.239547 34 tensorflow.cc:2191] Triton TRITONBACKEND API version: 1.9
clearml-serving-triton | I0126 15:41:48.239552 34 tensorflow.cc:2197] 'tensorflow' TRITONBACKEND API version: 1.9
clearml-serving-triton | I0126 15:41:48.239554 34 tensorflow.cc:2221] backend configuration:
clearml-serving-triton | {}
clearml-serving-triton | I0126 15:41:48.252847 34 onnxruntime.cc:2400] TRITONBACKEND_Initialize: onnxruntime
clearml-serving-triton | I0126 15:41:48.252884 34 onnxruntime.cc:2410] Triton TRITONBACKEND API version: 1.9
clearml-serving-triton | I0126 15:41:48.252888 34 onnxruntime.cc:2416] 'onnxruntime' TRITONBACKEND API version: 1.9
clearml-serving-triton | I0126 15:41:48.252891 34 onnxruntime.cc:2446] backend configuration:
clearml-serving-triton | {}
clearml-serving-triton | I0126 15:41:48.266838 34 openvino.cc:1207] TRITONBACKEND_Initialize: openvino
clearml-serving-triton | I0126 15:41:48.266874 34 openvino.cc:1217] Triton TRITONBACKEND API version: 1.9
clearml-serving-triton | I0126 15:41:48.266878 34 openvino.cc:1223] 'openvino' TRITONBACKEND API version: 1.9
clearml-serving-triton | W0126 15:41:48.266897 34 pinned_memory_manager.cc:236] Unable to allocate pinned system memory, pinned memory pool will not be available: CUDA driver version is insufficient for CUDA runtime version
clearml-serving-triton | I0126 15:41:48.266909 34 cuda_memory_manager.cc:115] CUDA memory pool disabled
clearml-serving-triton | E0126 15:41:48.267022 34 model_repository_manager.cc:2064] Poll failed for model directory 'test_model_pytorch': failed to open text file for read /models/test_model_pytorch/config.pbtxt: No such file or directory
clearml-serving-triton | I0126 15:41:48.267101 34 server.cc:549]
clearml-serving-triton | +------------------+------+
clearml-serving-triton | | Repository Agent | Path |
clearml-serving-triton | +------------------+------+
clearml-serving-triton | +------------------+------+
clearml-serving-triton |
clearml-serving-triton | I0126 15:41:48.267129 34 server.cc:576]
clearml-serving-triton | +-------------+-------------------------------------------------------------------------+--------+
clearml-serving-triton | | Backend     | Path                                                                    | Config |
clearml-serving-triton | +-------------+-------------------------------------------------------------------------+--------+
clearml-serving-triton | | pytorch     | /opt/tritonserver/backends/pytorch/libtriton_pytorch.so                | {}     |
clearml-serving-triton | | tensorflow  | /opt/tritonserver/backends/tensorflow1/libtriton_tensorflow1.so        | {}     |
clearml-serving-triton | | onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so        | {}     |
clearml-serving-triton | | openvino    | /opt/tritonserver/backends/openvino_2021_4/libtriton_openvino_2021_4.so | {}     |
clearml-serving-triton | +-------------+-------------------------------------------------------------------------+--------+
clearml-serving-triton |
clearml-serving-triton | I0126 15:41:48.267161 34 server.cc:619]
clearml-serving-triton | +-------+---------+--------+
clearml-serving-triton | | Model | Version | Status |
clearml-serving-triton | +-------+---------+--------+
clearml-serving-triton | +-------+---------+--------+
clearml-serving-triton |
clearml-serving-triton | Error: Failed to initialize NVML
clearml-serving-triton | W0126 15:41:48.268464 34 metrics.cc:571] DCGM unable to start: DCGM initialization error
clearml-serving-triton | I0126 15:41:48.268671 34 tritonserver.cc:2123]
clearml-serving-triton | +----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
clearml-serving-triton | | Option                           | Value                                                                                                                                                                                        |
clearml-serving-triton | +----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
clearml-serving-triton | | server_id                        | triton                                                                                                                                                                                       |
clearml-serving-triton | | server_version                   | 2.21.0                                                                                                                                                                                       |
clearml-serving-triton | | server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace |
clearml-serving-triton | | model_repository_path[0]         | /models                                                                                                                                                                                      |
clearml-serving-triton | | model_control_mode               | MODE_POLL                                                                                                                                                                                    |
clearml-serving-triton | | strict_model_config              | 1                                                                                                                                                                                            |
clearml-serving-triton | | rate_limit                       | OFF                                                                                                                                                                                          |
clearml-serving-triton | | pinned_memory_pool_byte_size     | 268435456                                                                                                                                                                                    |
clearml-serving-triton | | response_cache_byte_size         | 0                                                                                                                                                                                            |
clearml-serving-triton | | min_supported_compute_capability | 6.0                                                                                                                                                                                          |
clearml-serving-triton | | strict_readiness                 | 1                                                                                                                                                                                            |
clearml-serving-triton | | exit_timeout                     | 30                                                                                                                                                                                           |
clearml-serving-triton | +----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
clearml-serving-triton |
clearml-serving-triton | I0126 15:41:48.268710 34 server.cc:250] Waiting for in-flight requests to complete.
clearml-serving-triton | I0126 15:41:48.268717 34 server.cc:266] Timeout 30: Found 0 model versions that have in-flight inferences
clearml-serving-triton | I0126 15:41:48.268722 34 server.cc:281] All models are stopped, unloading models
clearml-serving-triton | I0126 15:41:48.268727 34 server.cc:288] Timeout 30: Found 0 live models and 0 in-flight non-inference requests
clearml-serving-triton | error: creating server: Internal - failed to load all models
clearml-serving-triton | ClearML results page:
clearml-serving-triton | configuration args: Namespace(inference_task_id=None, metric_frequency=1.0, name='triton engine', project=None, serving_id='ec2c71ce833a4f91b8b29ed5ea68d6d4', t_allow_grpc=None, t_buffer_manager_thread_count=None, t_cuda_memory_pool_byte_size=None, t_grpc_infer_allocation_pool_size=None, t_grpc_port=None, t_http_port=None, t_http_thread_count=None, t_log_verbose=None, t_min_supported_compute_capability=None, t_pinned_memory_pool_byte_size=None, update_frequency=1.0)
clearml-serving-triton | String Triton Helper service
clearml-serving-triton | {'serving_id': 'ec2c71ce833a4f91b8b29ed5ea68d6d4', 'project': None, 'name': 'triton engine', 'update_frequency': 1.0, 'metric_frequency': 1.0, 'inference_task_id': None, 't_http_port': None, 't_http_thread_count': None, 't_allow_grpc': None, 't_grpc_port': None, 't_grpc_infer_allocation_pool_size': None, 't_pinned_memory_pool_byte_size': None, 't_cuda_memory_pool_byte_size': None, 't_min_supported_compute_capability': None, 't_buffer_manager_thread_count': None, 't_log_verbose': None}
clearml-serving-triton |
clearml-serving-triton | Updating local model folder: /models
clearml-serving-triton | Error retrieving model ID bd4fdc00180642ddb73bfb3d377b05f1 []
clearml-serving-triton | Starting server: ['tritonserver', '--model-control-mode=poll', '--model-repository=/models', '--repository-poll-secs=60.0', '--metrics-port=8002', '--allow-metrics=true', '--allow-gpu-metrics=true']
clearml-serving-triton | Traceback (most recent call last):
clearml-serving-triton | File "clearml_serving/engines/triton/triton_helper.py", line 540, in <module>
clearml-serving-triton | main()
clearml-serving-triton | File "clearml_serving/engines/triton/triton_helper.py", line 532, in main
clearml-serving-triton | helper.maintenance_daemon(
clearml-serving-triton | File "clearml_serving/engines/triton/triton_helper.py", line 274, in maintenance_daemon
clearml-serving-triton | raise ValueError("triton-server process ended with error code {}".format(error_code))
clearml-serving-triton | ValueError: triton-server process ended with error code 1
clearml-serving-triton exited with code 1
```
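
For reference, Triton is complaining that /models/test_model_pytorch/config.pbtxt doesn't exist, and a few lines above that the helper logs "Error retrieving model ID bd4fdc00180642ddb73bfb3d377b05f1 []", so my guess is it never managed to fetch the model and therefore never wrote the model folder at all. As I understand it, clearml-serving normally generates the config.pbtxt from the endpoint's input/output settings. A minimal sketch of what I'd expect it to write for this endpoint (the tensor names, dims, and dtypes below are taken from the clearml-serving pytorch example and may not match every setup):

```
# /models/test_model_pytorch/config.pbtxt (sketch, assuming the MNIST-style pytorch example)
name: "test_model_pytorch"
platform: "pytorch_libtorch"
input [
  {
    name: "INPUT__0"        # input tensor name as registered with clearml-serving
    data_type: TYPE_FP32
    dims: [ 1, 28, 28 ]     # single-channel 28x28 image
  }
]
output [
  {
    name: "OUTPUT__0"       # output tensor name as registered with clearml-serving
    data_type: TYPE_FP32
    dims: [ 10 ]            # ten class scores
  }
]
```

If the model ID lookup is the real failure, fixing that (e.g. re-registering the model so the endpoint points at a model ID that still exists) should let the helper populate /models and make the Triton poll error go away.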