Hello,
I've been using ClearML for a month now, and I must say it's a really good product!
I'm mostly working with HuggingFace transformers, and I integrated ClearML into my solution:
HungryArcticwolf62 A transformer model is, in the end, a PyTorch/TF model with pre/post-processing.
The PyTorch/TF model inference is done with Triton (probably the most efficient engine today), while ClearML runs the pre/post-processing on a different CPU machine (making sure we fully utilize all the HW). Does that answer the question?
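To illustrate the split, here is a minimal sketch of what the CPU-side pre/post-processing module could look like, assuming the `Preprocess` class pattern from the clearml-serving examples; the exact method signatures may differ between versions, and the tokenizer name and label set are placeholders. Triton handles the actual model forward pass in between.

```python
# Sketch of a clearml-serving style pre/post-processing module.
# Assumptions: the serving sidecar loads a class named `Preprocess` and calls
# preprocess()/postprocess() around the Triton inference call; signatures here
# follow the clearml-serving examples and may differ by version.
from typing import Any, Callable, Optional

import numpy as np
from transformers import AutoTokenizer


class Preprocess(object):
    def __init__(self):
        # Runs on the CPU serving instance, not on the Triton GPU engine.
        self.tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
        self.labels = ["negative", "positive"]

    def preprocess(
        self,
        body: dict,
        state: dict,
        collect_custom_statistics_fn: Optional[Callable[[dict], None]] = None,
    ) -> Any:
        # Turn the raw REST payload into the tensors Triton expects.
        encoded = self.tokenizer(
            body["text"], return_tensors="np", padding=True, truncation=True
        )
        return {
            "input_ids": encoded["input_ids"].astype(np.int64),
            "attention_mask": encoded["attention_mask"].astype(np.int64),
        }

    def postprocess(
        self,
        data: Any,
        state: dict,
        collect_custom_statistics_fn: Optional[Callable[[dict], None]] = None,
    ) -> dict:
        # `data` is whatever Triton returned (logits here); map to a label.
        logits = np.asarray(data)
        return {"label": self.labels[int(np.argmax(logits, axis=-1)[0])]}
```

Because tokenization and label mapping live in this module, only the tensor-in/tensor-out part of the model has to run on the Triton GPU instance.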
Latest docs here:
https://github.com/allegroai/clearml-serving/tree/dev
expect a release after the weekend 😉