
any timeline on this that you are aware of?
ok, I see that now. Everything is there on the UI and webserver side though, so we went ahead and implemented it ourselves on the clearml-serving piece.
Hi Martin,
- Actually we are using ALB with a 30 seconds timeout
- we do not have GPUs instances
- docker version 1.3.0
Actually the requests are never registered by the gunicorn app, and the ALB logs show that there is no response from the target ("-").
we have tried both and got the same issue (gunicorn vs uvicorn).
No I meant creating a sync endpoint, something like:

import time

from fastapi import APIRouter, status
from pydantic import BaseModel

router = APIRouter()

class TestResponse(BaseModel):
    status: str

@router.post(
    "/sleep",
    tags=["temp"],
    response_description="Return HTTP Status Code 200 (OK)",
    status_code=status.HTTP_200_OK,
    response_model=TestResponse,
)
# def here instead of async def
def post_sleep(time_sleep: float) -> TestResponse:
    """Block for time_sleep seconds, then return OK."""
    time.sleep(time_sleep)
    return TestResponse(status="OK")
alright, so actually we noticed that the problem disappears if we use only sync endpoints. Meaning if I create a sleep endpoint that is async we get the 502, but if it's sync we don't.
I'm not sure what to do with that info, I must say, since serve_model is async for good reasons I guess
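For what it's worth, here's a minimal standalone sketch (not the clearml-serving code, just an illustration I put together) of why an `async def` endpoint that ends up calling a blocking function behaves so differently from a sync `def` one, which FastAPI runs in a threadpool: a blocking call inside a coroutine holds the whole event loop, so concurrent requests serialize instead of overlapping.

```python
import asyncio
import time

async def blocking_handler(delay: float) -> None:
    # time.sleep() holds the event loop; no other coroutine can run meanwhile
    time.sleep(delay)

async def nonblocking_handler(delay: float) -> None:
    # asyncio.sleep() yields control, so other coroutines proceed concurrently
    await asyncio.sleep(delay)

async def measure(handler, n: int, delay: float) -> float:
    """Time n concurrent 'requests' against the given handler."""
    start = time.perf_counter()
    await asyncio.gather(*(handler(delay) for _ in range(n)))
    return time.perf_counter() - start

print(asyncio.run(measure(blocking_handler, 5, 0.1)))     # ~0.5 s: serialized
print(asyncio.run(measure(nonblocking_handler, 5, 0.1)))  # ~0.1 s: overlapped
```

Under load, those serialized delays stack up past the ALB's 30 s idle timeout, which would look exactly like a 502 with no response from the target.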
that's a fair point. Actually we have switched away from siege, because we believe it was causing the issues, and are using Locust now instead. We have been running for days at the same rate and don't see any errors being reported...
yeah, I don't know. I think we are probably just trying to push too high a throughput for that box, but it's weird that the packets just get dropped; I would have assumed the response time would degrade and requests would be queued.
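That intuition matches basic queueing behaviour; a toy single-server model (assuming an unbounded queue, which a real ALB/gunicorn pipeline doesn't actually have, hence the drops) shows waits staying flat under capacity and growing without bound past it:

```python
def avg_wait(arrival_interval: float, service_time: float, n: int = 1000) -> float:
    """Average wait for n evenly spaced requests hitting a single server."""
    t_free = 0.0       # time at which the server next becomes idle
    total_wait = 0.0
    for i in range(n):
        arrive = i * arrival_interval
        start = max(arrive, t_free)   # request queues while the server is busy
        total_wait += start - arrive
        t_free = start + service_time
    return total_wait / n

print(avg_wait(1.0, 0.5))   # under capacity: 0.0, no queueing
print(avg_wait(1.0, 1.1))   # over capacity: average wait keeps growing with n
```

With a bounded queue (or a load balancer timeout), that unbounded wait turns into dropped connections instead, which is consistent with what we're seeing.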
was allow_archived removed from Task.query_tasks?
no requests are being served as in there is no traffic indeed
I can't be sure of the version, I can't check at the moment; I have 1.3.0 off the top of my head but could be way off
so they ping the web server?
what is actually setting the task status to Aborted?
my understanding was that the daemon thread was deserializing the task from the control plane every 300 seconds by default
so I still can't figure out what sets the task status to Aborted
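To make my mental model concrete, here's a hedged sketch of the refresh loop I'm describing; all the names here (`fetch_status`, `on_aborted`, the 300 s default) are my assumptions for illustration, not the actual clearml code:

```python
import threading

POLL_INTERVAL_SEC = 300  # assumed default refresh period

def poll_task_status(fetch_status, on_aborted, stop_event,
                     interval: float = POLL_INTERVAL_SEC) -> None:
    """Re-read the task state from the control plane on a fixed interval.

    Note: this loop only *observes* the status; something else
    (server-side) must have set it to "aborted" in the first place.
    """
    while not stop_event.wait(interval):
        if fetch_status() == "aborted":
            on_aborted()
            return

# Typically this would run as a daemon thread, e.g.:
# threading.Thread(target=poll_task_status, args=(...), daemon=True).start()
```

If that model is right, the polling thread is just the messenger, and the question stands: what writes the Aborted status on the server side?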
Hi Martin, thanks a lot for looking into this so quickly. Will you let me know the version number once it's pushed? Thanks!
we are actually building from our fork of the code into our own images and helm charts
hey Martin, real quick actually: on your update to the requirements.txt file, isn't that constraint on fastapi inconsistent?
how can it be >= 0.109.1 and lower than 0.96?
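Just to make the contradiction explicit, a quick sanity check with a deliberately simplified version parser (real pip/PEP 440 specifiers are more subtle, e.g. pre-releases and epochs, but the conclusion is the same): no version can sit above 0.109.1 and below 0.96 at once.

```python
def parse(v: str) -> tuple:
    # Simplified: handles plain "X.Y.Z"-style versions only
    return tuple(int(part) for part in v.split("."))

def satisfies(v: str, lower: str = "0.109.1", upper: str = "0.96") -> bool:
    # The constraint as stated: >= lower AND < upper
    return parse(v) >= parse(lower) and parse(v) < parse(upper)

# The lower bound already exceeds the upper bound, so nothing matches:
candidates = [f"0.{minor}.{patch}" for minor in range(120) for patch in range(3)]
print(any(satisfies(v) for v in candidates))  # False
```

In other words the specifier set is unsatisfiable, and pip would fail to resolve it.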
ok so I haven't looked at the latest changes after the sync this morning, but the ones we put in yesterday seem to have fixed the issue; the service is still running this morning at least.
ok great, I'll check what other changes we missed yesterday
We put back the additional changes and so far it seems that this has solved our issue. Thanks a lot for the quick turnaround on this.
Geez, I have been looking for this for a while, thanks for saving my day...again.
This being said, now I'm running into another issue: this seems to be "erasing" all the packages that had been set in the base task I'm cloning from. I can't find a method that would return these packages so that I could add to them?
I'm assuming that task.data.script.requirements is not the right way to do this...
tx that's what I was doing more or less 😆