Hello everyone! I am relatively new to ClearML and to the *-Ops concepts in general, as I am but a regular Python dev. I am currently trying to implement MLOps into our existing local infrastructure, so that we would be able to utilize automated data preproc


@<1523701087100473344:profile|SuccessfulKoala55> Thank you once again. I extracted the scripts and commands that seem to be responsible for model registration and for its inference on the GPU worker server:

register_model.py

from clearml import Task, OutputModel

# Create a task that will own the registered model
task = Task.init(project_name="LogSentinel", task_name="Model Registration")
model_path = "~/<full_local_path_to_model>/deeplog_bestloss.pth"

# Register the model weights and publish them so they can be served
output_model = OutputModel(task=task)
output_model.update_weights(model_path)
output_model.publish()
print(f"Model ID: {output_model.id}")
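One caveat in the script above: the path string is handed to update_weights() verbatim, and a leading "~" may not be expanded for you, so expanding it explicitly before registration is a safe habit. A minimal stdlib sketch (the helper name is my own):

```python
import os

def resolve_model_path(path):
    # Expand a leading "~" and normalize to an absolute path before
    # passing it to OutputModel.update_weights(), which expects a
    # concrete filesystem path to the weights file.
    return os.path.abspath(os.path.expanduser(path))
```

Used as, e.g., model_path = resolve_model_path("~/<full_local_path_to_model>/deeplog_bestloss.pth").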

Commands:

docker compose --env-file .env -f docker-compose-triton-gpu.yml up -d

clearml-serving create --project "LogSentinel" --name "deeplog-serving"

clearml-serving model add \
  --engine triton \
  --endpoint "deeplog" \
  --model-id 0c6a1c24067a49a0ac09c7e42c215b05 \
  --input-name "log_sequence" --input-type "int64" --input-size 1 10 \
  --output-name "predictions" --output-type "float32" --output-size 1 28
  
  
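Once the stack is up and the model is added, the endpoint can be exercised over clearml-serving's REST API. A hedged client sketch, assuming the default docker compose setup exposes inference on port 8080 of localhost (the host/port and helper names are my assumptions; the input/output names and shapes come from the model add command above):

```python
import json
import urllib.request

def build_payload(sequence):
    # The "deeplog" endpoint was registered with input "log_sequence"
    # of shape [1, 10] (int64): one batch of ten log-template IDs.
    if len(sequence) != 10:
        raise ValueError("deeplog expects exactly 10 log-template IDs")
    return {"log_sequence": [[int(x) for x in sequence]]}

def query_deeplog(sequence, url="http://localhost:8080/serve/deeplog"):
    # clearml-serving's inference REST API serves registered endpoints
    # under /serve/<endpoint-name>; the host and port are assumptions
    # based on the default docker compose configuration.
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(sequence)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Expected shape of the reply: "predictions" as a [1, 28] float32 array
        return json.load(resp)
```

If the reply shape looks wrong, the --input-size / --output-size values in the model add command are the first thing to re-check against the Triton model configuration.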
Posted one month ago