SweetShells3
Moderator
7 Questions, 21 Answers
  Active since 17 May 2023
  Last activity 5 months ago

Reputation

0

Badges 1

19 × Eureka!
0 Votes 5 Answers 432 Views
Hi everyone! I have a question about clearml-serving. Are there any examples of how to deploy models via clearml-serving ... model add ... with Python code? If...
8 months ago
0 Votes 2 Answers 458 Views
Hi all! I can't use the scalar tab in all experiments due to an Elasticsearch error: Error 100 : General data error (RequestError(400, 'search_phase_execution_exce...
8 months ago
0 Votes 1 Answer 169 Views
Hi everyone! Could someone tell me how to use task.launch_multi_node or share some successful examples? Except the example from https://clear.ml/docs/latest/docs...
3 months ago
0 Votes 4 Answers 510 Views
Hi everyone! I faced a problem with ClearML-serving. I've deployed an onnx model from huggingface in clearml-serving, but "Error processing request: Error: Fa...
9 months ago
0 Votes 17 Answers 420 Views
Hi everyone! I am using clearml-serving. When I try to add a new endpoint like this clearml-serving --id <> model add --engine triton --endpoint conformer...
5 months ago
0 Votes 3 Answers 424 Views
Hi everyone! I use ClearML Pipelines, and I have too many parameters in them, so I want to use a configuration file. How can I connect a configuration file (like t...
7 months ago
0 Votes 7 Answers 421 Views
Hi everyone! I am trying to run Pytorch Lightning code on SLURM with an srun script like this ( https://pytorch-lightning.readthedocs.io/en/1.2.10/clouds/slurm.html )...
9 months ago
0 Hey There! I Was Wondering If There Is Any Existing Support For Integration Between A Self-Hosted Clearml Server And A Slurm Cluster. I Can Certainly Queue Up A Slurm Job For Any Given Task, But Is There An Easy Way To Do This Automatically Whenever A Tas

Hi John @<1569133676640342016:profile|MammothPigeon75> ! How do you queue up a SLURM job for a task with distributed computation (like Pytorch Lightning)?

Please give me some help

Thank you in advance!

9 months ago
0 Hi Everyone! I Faced The Problem With Clearml-Serving. I'Ve Deployed Onnx Model From Huggingface In Clearml-Serving, But

@<1523701118159294464:profile|ExasperatedCrab78>
We use variables from the .env file inside the clearml-serving-triton image, because we use a Helm chart to spin up clearml-serving. We still face the error.

9 months ago
0 Hi Everyone! I Faced The Problem With Clearml-Serving. I'Ve Deployed Onnx Model From Huggingface In Clearml-Serving, But

@<1523701118159294464:profile|ExasperatedCrab78> Thank you! It has solved the problem!

9 months ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

Hi @<1523701205467926528:profile|AgitatedDove14>

https://github.com/allegroai/clearml-serving/issues/62

I have an issue based on that case. Could you tell me if I missed something in it?

5 months ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

Hi @<1523701205467926528:profile|AgitatedDove14>

Are there any questions or updates about the issue?

5 months ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

Hi @<1523701087100473344:profile|SuccessfulKoala55> Turns out if I delete

platform: ... 

string from config.pbtxt, it deploys the model on tritonserver (serving v1.3.0 adds a "platform" string at the end of the config file when the clearml model has a "framework" attribute). But when I try to check the endpoint with random data (with the right shape according to the config), I am getting

{'detail': "Error processing request: object of type 'NoneType' has no len()"}

error. Do you know how...

5 months ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

@<1523701205467926528:profile|AgitatedDove14> this error appears before the postprocess part.

Today I redeployed the existing endpoint with --aux-config "./config.pbtxt" and got the same error.

Before:

!clearml-serving --id "<>" model add --engine triton --endpoint 'conformer_joint' --model-id '<>' --preprocess 'preprocess_joint.py' --input-size '[1, 640]' '[640, 1]' --input-name 'encoder_outputs' 'decoder_outputs' --input-type float32 float32 --output-size '[129]' --output-name 'outpu...
5 months ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

I am getting this error in the request response:

import numpy as np
import requests

body = {
    "encoder_outputs": [np.random.randn(1, 640).tolist()],
    "decoder_outputs": [np.random.randn(640, 1).tolist()]
}
# the endpoint URL was elided in the original message
response = requests.post("<endpoint URL>", json=body)
response.json()

Unfortunately, I see nothing related to this problem in either the inference or triton pods/deployments (we use Kubernetes to spin up ClearML-serving).

5 months ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

The "after" version in the logs is the same as the config above. Unfortunately, there is no "before" version in the logs.

Endpoint config from ClearML triton task:

conformer_joint {
  engine_type = "triton"
  serving_url = "conformer_joint"
  model_id = "<>"
  version = ""
  preprocess_artifact = "py_code_conformer_joint"
  auxiliary_cfg = """default_model_filename: "model.bin"
max_batch_size: 16
dynamic_batching {
    max_queue_delay_microseconds: 100
}
input: [
        {
            name: "encoder_outputs"
      ...
5 months ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

@<1523701205467926528:profile|AgitatedDove14> I think there is no chance to pass config.pbtxt as is.

https://github.com/allegroai/clearml-serving/blob/main/clearml_serving/serving/preprocess_service.py#L358C9-L358C81

In this line, the function uses self.model_endpoint.input_name (and after that input_name, input_type and input_size), but there are no such att...

5 months ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

@<1523701205467926528:profile|AgitatedDove14> config.pbtxt in triton container (inside /models/conformer_joint) - after merge:

default_model_filename: "model.bin"
max_batch_size: 16
dynamic_batching {
    max_queue_delay_microseconds: 100
}
input: [
        {
            name: "encoder_outputs"
            data_type: TYPE_FP32
            dims: [
                1,
                640
            ]
        },
        {
            name: "decoder_outputs"
            data_type: TYPE_FP3...
5 months ago
0 Hi Everyone! I Have A Question About

@<1523701070390366208:profile|CostlyOstrich36>

I spin up the endpoint with:
`!clearml-serving --id "<>" model add --engine triton --endpoint 'modelname' --model-id '<>' --preprocess 'preprocess.py' --input-size '[-1, -1]' '[-1, -1]' '[-1, -1]' --input-name 'input_ids' 'token_type_ids' 'attention_mask' --input-type int64 int64 int64 --output-size '[-1, -1]' --output-name 'logits' --output-type float32 --aux-config name="modelname" platform="onnxruntime_onnx" default_model_filename="model.bin...

8 months ago
0 Hi Everyone! I Have A Question About

Expected behavior - pic 1
Actual behavior - pic 2

8 months ago
0 Hi Everyone! I Have A Question About

Hi @<1523701070390366208:profile|CostlyOstrich36> Yes

Or run the clearml-serving Python code without the CLI wrapper.

8 months ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

Hi @<1523701205467926528:profile|AgitatedDove14>

My preprocess file:

from typing import Any, Union, Optional, Callable

class Preprocess(object):
    def __init__(self):
        pass

    def preprocess(
            self,
            body: Union[bytes, dict],
            state: dict, 
            collect_custom_statistics_fn: Optional[Callable[[dict], None]]
        ) -> Any:
        return body["length"], body["audio_signal"]

    def postprocess(
            self,
            data: An...
5 months ago
0 Hi Everyone! I Try To Run Pytorch Lightning Code On Slurm With Srun Script Like This (

UPD: If I use --ntasks-per-node=2, then ClearML creates 2 tasks, but I need only 1.
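A minimal sketch of one workaround, guarding Task.init by SLURM rank so that only one task is created per job (SLURM_PROCID is the env var SLURM sets to each srun process's global rank; the commented-out Task.init placement and its project/task names are assumptions about the training script, not the official recipe):

```python
import os

def should_init_task() -> bool:
    # With srun --ntasks-per-node=2, SLURM starts two processes per node and
    # sets SLURM_PROCID to each process's global rank. Only letting global
    # rank 0 create the ClearML task yields one task instead of one per process.
    return os.environ.get("SLURM_PROCID", "0") == "0"

# Hypothetical placement in the training script:
# from clearml import Task
# task = Task.init(project_name="proj", task_name="run") if should_init_task() else None
```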

9 months ago
0 Hi Everyone! I Try To Run Pytorch Lightning Code On Slurm With Srun Script Like This (

@<1523701205467926528:profile|AgitatedDove14> Okay, thank you so much for your help!

8 months ago
0 Hi Everyone! I Try To Run Pytorch Lightning Code On Slurm With Srun Script Like This (

@<1523701205467926528:profile|AgitatedDove14> in this case I get AttributeError: 'NoneType' object has no attribute 'report_scalar' on trainer.fit(...) and Logger.current_logger() - I think the non-master processes are trying to log something, but have no Logger instance because they have no Task instance.

What am I supposed to do to log training correctly? Do the logs in the master process include the full training history, or do I need to concatenate logs from the different nodes somehow?
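One way to stop the non-master ranks from crashing is a small wrapper that skips the call when no logger exists - a sketch of a workaround, not the official API; `logger` here stands in for whatever Logger.current_logger() returned on that rank:

```python
def report_scalar_safe(logger, title, series, value, iteration):
    # Logger.current_logger() returns None in processes that have no Task
    # (the non-master ranks here), which is what raises the AttributeError.
    # Skipping the call when logger is None keeps those ranks from crashing;
    # metrics from the master rank are still reported as usual.
    if logger is None:
        return False
    logger.report_scalar(title, series, value=value, iteration=iteration)
    return True
```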

9 months ago
0 Hi Everyone! I Try To Run Pytorch Lightning Code On Slurm With Srun Script Like This (

@<1523701205467926528:profile|AgitatedDove14> Yes, I have some Logger.current_logger() calls in the model class.

If I turn off logging on the non-master nodes with RANK checking, won't I lose the training logs from those nodes (I mean, all the training logs are on the master node, aren't they)?

9 months ago