FranticWhale40
Moderator
2 Questions, 10 Answers
  Active since 23 February 2024
  Last activity one month ago

Reputation: 0
Badges (1): 10 × Eureka!

0 Votes 14 Answers 108 Views
one month ago
0 Votes 17 Answers 194 Views
Hi, I wanted to try model versioning, suppose that I've a model and want to have multiple versions of the same model and to be able to have inference on thes...
2 months ago
0 Hi, I wanted to try model versioning, suppose that I've a model and want to have multiple versions of the same model and to be able to have inference on these models (for example...

@<1523701205467926528:profile|AgitatedDove14> Also, could you please share the commit that fixes the issue? It'll help us address it on our end.
Thanks!

2 months ago
0 Hi, I wanted to try model versioning, suppose that I've a model and want to have multiple versions of the same model and to be able to have inference on these models (for example...

And here are the serving inference logs:

ffmpeg version 4.3.6-0+deb11u1 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 10 (Debian 10.2.1-6)
  configuration: --prefix=/usr --extra-version=0+deb11u1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-...
2 months ago
0 Hi, I wanted to try model versioning, suppose that I've a model and want to have multiple versions of the same model and to be able to have inference on these models (for example...

Yes, I'm sure that the Triton container finished syncing.
Here are the Triton logs:

I0223 15:58:32.515979 71 model_repository_manager.cc:1352] successfully loaded 'yolo_2' version 1
I0223 15:58:32.842511 71 model_repository_manager.cc:1352] successfully loaded 'yolo_1' version 1
I0223 15:58:32.842579 71 server.cc:559] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0223 15:58:32.842606 71 server.cc:586] 
+-------------+---...
2 months ago
0 Hello everyone! I'm encountering an issue when trying to deploy an endpoint for a large-sized model or get inference on a large dataset (both exceeding ~100MB). It seems that they can only be downloaded up to about 100MB. Is there a way to increase a time...

Thank you for your prompt response. As I installed ClearML using pip, I don't have direct access to the config file. Is there any other way to increase this timeout?
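
(A hedged side note on the config-file point: a pip install of ClearML still reads a local clearml.conf, normally created in the home directory by clearml-init, so a setting could in principle be added there; the exact key for this limit isn't shown in this thread and would need to be confirmed against the ClearML docs.)

# Assumption: a pip-installed ClearML uses ~/clearml.conf like any other install.
clearml-init           # creates ~/clearml.conf after prompting for server credentials
nano ~/clearml.conf    # then add the timeout/size setting referenced in the earlier reply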

one month ago
0 Hi, I wanted to try model versioning, suppose that I've a model and want to have multiple versions of the same model and to be able to have inference on these models (for example...

Sure,

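# Register version 2 of the "yolo" model with the Triton engine, exposed on the "yolov8" endpoint: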
clearml-serving --id $SERVING_ID model add \
    --name "yolo" --version 2 --project $PROJECT_NAME --engine triton --endpoint "yolov8" \
    --preprocess "./yolo/preprocess.py" \
    --input-size 3 -1 -1 --input-name "images" --input-type float32 \
    --output-size -1 -1 --output-name "output0" --output-type float32 \
    --aux-config ./yolo/config.pbtxt
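
For context, a minimal sketch of how an endpoint registered this way is typically queried once the serving container has picked it up; the host, port, and JSON payload here are assumptions, not taken from this thread, and the actual request body depends entirely on what ./yolo/preprocess.py expects:

# Hypothetical request to the clearml-serving inference container
# (the default docker-compose setup exposes it on port 8080 at /serve/<endpoint>).
curl -X POST "http://127.0.0.1:8080/serve/yolov8" \
    -H "Content-Type: application/json" \
    -d '{"url": "https://example.com/image.jpg"}'
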
2 months ago