AgitatedDove14 thanks for your answer, but my question is specifically about how to pass environment variables for Yandex Cloud S3 storage to clearml-serving. The documentation only has examples for Azure and the like, and if I try something similar I get a connection error to my S3 storage.
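For context, here is a minimal sketch of the connectivity check I'd expect to pass (the endpoint, bucket name, and keys are placeholders). As far as I know, ClearML's S3 driver goes through boto3 underneath, so the same endpoint and credentials have to work here too:

```python
import boto3

# Minimal connectivity check against Yandex Object Storage.
# Keys and bucket name are placeholders; the endpoint_url override is what
# distinguishes a non-AWS S3 provider from plain AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.yandexcloud.net",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)
print(s3.list_objects_v2(Bucket="<bucket-name>", MaxKeys=5))
```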
Hi CostlyOstrich36, thanks. Can you share a link about the enterprise version, please?
Hi CostlyOstrich36
@<1523701087100473344:profile|SuccessfulKoala55> Hi! I've already upgraded. Now I have WebApp: 1.12.1-397 • Server: 1.12.1-397 • API: 2.26
but I checked the deletion, and when I delete a dataset from the UI, it still doesn't delete it from the S3 bucket...
This is after setting up the configs that you sent.
Artifacts, like the datasets themselves, upload to the bucket successfully, so access to it works.
So I especially don't understand why artifacts are not deleted when they're deleted from ...
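In case it matters, here is a sketch of the SDK-side deletion I could try instead of the UI. The dataset id is a placeholder, and the delete_files parameter is from recent clearml SDK versions; treat it as an assumption if you run an older SDK:

```python
from clearml import Dataset

# Delete a dataset through the SDK instead of the UI (dataset id is a
# placeholder). delete_files=True is meant to also remove the stored files
# from the storage target (S3 here); older SDK versions may only remove the
# dataset task/metadata.
Dataset.delete(dataset_id="<dataset-id>", delete_files=True)
```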
Hi @<1523701087100473344:profile|SuccessfulKoala55>
Yes, self hosted
WebApp: 1.10.0-357 • Server: 1.10.0-357 • API: 2.24
@<1523701087100473344:profile|SuccessfulKoala55> Hi!
Can you help me with this question, please?
So the reason is the storage limit? Not the ClearML request size or something like that?
If I resize my output video as a preprocessing step (for example with ffmpeg), it looks better in the ClearML UI. Is that the right way to report media correctly?
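Roughly what I do now (paths and the 640px target width are placeholders, and a clearml Task is assumed to be already initialized):

```python
import subprocess

from clearml import Logger

# Re-encode/resize the clip before reporting it, since the raw resolution
# rendered poorly in the web UI. "-vf scale=640:-2" keeps the aspect ratio
# while forcing an even height.
subprocess.run(
    ["ffmpeg", "-y", "-i", "raw_output.mp4", "-vf", "scale=640:-2", "resized.mp4"],
    check=True,
)
# Assumes Task.init(...) was already called somewhere upstream
Logger.current_logger().report_media(
    title="inference", series="video", iteration=0, local_path="resized.mp4"
)
```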
Sorry for the screenshots, but there's an IP address in them; that's why I did it this way (a bit sloppy, I know).
AgitatedDove14 we will try to use Triton, but it's a bit hard with a transformer model.
We add all the extra packages in serving.
For now we did it like this: in preprocess.py we load our model from the S3 bucket and then use it. But maybe it's not the best solution.
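A minimal sketch of that approach (the bucket path is a placeholder, and torch.load assumes the model was pickled whole with torch.save(model, ...)):

```python
import torch

from clearml import StorageManager

# Fetch a cached local copy of the model from the bucket, then load it once
# and reuse it across requests instead of re-downloading per call.
local_path = StorageManager.get_local_copy(remote_url="s3://<bucket>/models/model.pt")
model = torch.load(local_path, map_location="cpu")
model.eval()
```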
AgitatedDove14 KindChimpanzee37 Can you help with this question, please?
AgitatedDove14 Hi. Vlad and I work together, and I think I can paraphrase his question.
We've got our clearml-serving set up and we trained our model. Now we want to add that model to serving, but we need to write a custom preprocess.py in which we call our model's generate method, and we don't exactly understand how we can load/refer to our model.
In the examples for the custom engine we've got this:
```python
class Preprocess(object):
    """
    Notice the execution flows is synchronous as ...
```
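To make the question concrete, here is roughly what we think it should look like, assuming the custom-engine wrapper hands the downloaded model file to an optional load() hook as in the clearml-serving custom example. The method signatures and the request/response shapes below are our assumptions, not the confirmed API:

```python
from typing import Any, Optional

import torch


class Preprocess(object):
    """Sketch of a custom-engine preprocess for a model served via generate()."""

    def __init__(self):
        # Called once when the endpoint spins up, not per request
        self._model = None

    def load(self, local_file_name: str) -> Optional[Any]:
        # Assumption: the model was pickled whole (torch.save(model, path)),
        # so torch.load restores an nn.Module that still exposes .generate()
        self._model = torch.load(local_file_name, map_location="cpu")
        self._model.eval()
        return self._model

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # Placeholder request schema: {"input_ids": [1, 2, 3, ...]}
        return torch.tensor([body["input_ids"]], dtype=torch.long)

    def process(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> Any:
        with torch.no_grad():
            return self._model.generate(data)

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        return {"output_ids": data.tolist()}
```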
Hi CostlyOstrich36, I'm trying to log my dataset (120 GB) to S3. Yes, it's a self-hosted server.
Hi QuaintStork2, I had a look at https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html and didn't see any variables for the bucket name and host. Access key and token are covered, but I also need the host and bucket name.
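From what I can tell, the host does not go through boto3 environment variables at all; in ClearML it lives in the sdk.aws.s3 section of clearml.conf, something like this sketch (host and keys are placeholders for Yandex Object Storage, and the bucket itself is part of the s3://bucket/... output URI rather than a credential):

```
sdk {
    aws {
        s3 {
            # Non-AWS endpoint: the host (with port) goes here, not in env vars
            host: "storage.yandexcloud.net:443"
            key: "<access-key>"
            secret: "<secret-key>"
            multipart: false
            secure: true
        }
    }
}
```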
Hi SweetBadger76, I use Yandex Cloud. I've already connected this bucket to the ClearML server (in clearml.conf) and my trained models are saved to this bucket. And yes, I've got the list of credentials I need to use.
Thanks, I will try it and report back if it works for me.
AgitatedDove14 SweetBadger76 I hope you can help with one more question.
For the test I logged my new model to the clearml-server file host and serve models from there in clearml-serving. That works with clearml-serving model add, but with clearml-serving model auto-update I don't exactly understand what happens: I see my auto-update model in the serving model list, but in the output logs Triton prints that the poll failed.
Command for adding the model (which works):
```bash
clearml-serving --id 5e4851e9ec3f4f148...
```
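And roughly what we run for auto-update (IDs, names, and paths are placeholders; the flags follow the clearml-serving README, so double-check against your version):

```bash
clearml-serving --id <serving-service-id> model auto-update \
    --engine triton \
    --endpoint "my_model" \
    --preprocess "preprocess.py" \
    --name "my model" \
    --project "my project" \
    --max-versions 2
```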
AgitatedDove14 we store a .pt model, and for inference we need the model's generate method. If we want to load the model that way we need torch.jit.load.
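To illustrate the distinction as I understand it (file names are placeholders):

```python
import torch

# torch.jit.load() restores a TorchScript archive and only exposes the
# scripted methods; a Python-level method like generate() survives only if
# the whole nn.Module was saved with torch.save(model, ...) and is restored
# with torch.load() (which also needs the model class importable).
scripted = torch.jit.load("model_scripted.pt")
full_module = torch.load("model.pt", map_location="cpu")
```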
SweetBadger76 thanks for your answer, but can you send a link to the documentation about this? I didn't see it there. For clearml-server it's okay, but in clearml-serving…
Yandex Cloud is my S3 storage provider, and I've already uploaded my models to that S3. Now I want to pass this S3 bucket's credentials to clearml-serving-triton.
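What I believe is the intended wiring (an assumption on my side, pieced together from the clearml-serving docker-compose): the key/secret/region go through the standard AWS environment variables that the serving containers read, while the non-AWS endpoint still needs the sdk.aws.s3.host setting from a clearml.conf visible inside the containers:

```bash
# docker/example.env (or the environment of docker-compose).
# Values are placeholders for the Yandex credentials.
AWS_ACCESS_KEY_ID=<yandex-access-key>
AWS_SECRET_ACCESS_KEY=<yandex-secret-key>
AWS_DEFAULT_REGION=ru-central1
```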