Hi CostlyOstrich36. Yes, you are right
```
clearml-serving --id my_service_id model add --engine triton --endpoint "test_ocr_model" --preprocess "preprocess.py" --name "test-model" --project "clear-ml-test-serving-model" --input-size 1 3 384 384 --input-name "INPUT__0" --input-type float32 --output-size 1 -1 --output-name "OUTPUT__0" --output-type int32
docker-compose --env-file example.env -f docker-compose-triton-gpu.yml up
```
for clearml-serving
AgitatedDove14 My model has a `generate` method that I would like to call. How can I get the automatically loaded model from the Preprocess object? Preprocess file:
```python
from typing import Any, Callable, Optional
from transformers import TrOCRProcessor
import numpy as np


# Notice: Preprocess class must be named "Preprocess"
class Preprocess(object):
    def __init__(self):
        self.processor = TrOCRProcessor.from_pretrained("microsoft/trocr-small-printed")

    def preprocess(self, body: dict, sta...
```
AgitatedDove14 I need to call the `generate` method of my model, but by default it calls `forward`.
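One way to sketch this (an assumption on my part, not a confirmed answer from the thread): instead of letting the serving engine run the default forward pass, load the model yourself in the `Preprocess` class and call `generate` explicitly from `process()`. The `load`/`process` method names follow the clearml-serving preprocess interface for the custom-engine flow; `DummyModel` below is a stand-in for the real TrOCR model so the sketch is self-contained.

```python
# Minimal sketch: a clearml-serving-style Preprocess class that calls
# model.generate() instead of the default forward(). DummyModel is a
# placeholder for e.g. VisionEncoderDecoderModel; in a real preprocess.py
# load() would call .from_pretrained(local_file_name) or torch.load().
from typing import Any, Optional


class DummyModel:
    """Stand-in for a transformers model exposing generate()."""

    def generate(self, inputs: Any) -> list:
        # Placeholder "token ids" so the sketch runs without transformers
        return [len(str(inputs))]


class Preprocess(object):
    """clearml-serving requires this class to be named "Preprocess"."""

    def __init__(self):
        self._model: Optional[DummyModel] = None

    def load(self, local_file_name: str) -> Any:
        # Load the model yourself so you control which method is invoked
        self._model = DummyModel()
        return self._model

    def process(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> Any:
        # Run inference here and call generate() explicitly,
        # bypassing the default forward pass
        return self._model.generate(data)
```

Note this only applies to the custom-engine path, where `process()` owns inference; with the Triton engine the forward pass runs inside Triton, so calling `generate` there is not straightforward.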
Yes, I'm
AgitatedDove14 Well then I have no idea why training is so slow with TensorBoard; the compute time for each batch is about the same.
```python
frameworks = {'tensorboard': False, 'pytorch': False}
task = Task.init(
    project_name="train_pipeline",
    task_name="test_train_python",
    task_type=TaskTypes.training,
    auto_connect_frameworks=frameworks,
)
```
OS: Linux-5.10.60.1-microsoft-standard-WSL2-x86_64-with-glibc2.29 (Ubuntu 20.04 LTS)
python_version: 3.8.10
With this setting the learning speed is slow, but if I use the setting I sent earlier, the learning speed is normal.
Hi SuccessfulKoala55, I already tested it. Training is much faster without TensorBoard.