JumpyRaven4
Moderator
4 Questions, 33 Answers
  Active since 18 May 2023
  Last activity 2 months ago

Reputation: 0
Badges: 1
32 × Eureka!
0 Votes 15 Answers 302 Views
Hi Guys, we are running clearml-serving on a kube cluster on AWS and we have noticed that we are getting some 502 errors once in a while that we can't seem t...
5 months ago
0 Votes 26 Answers 162 Views
2 months ago
0 Votes 4 Answers 579 Views
Hi Guys, I have a question regarding Model tracking. I have pipelines that use Xgboost through the scikit-learn api to perform: - Feature selection through n...
11 months ago
0 Votes 11 Answers 144 Views
2 months ago
0 Hi Guys, We Are Running Clearml-Serving On A Kube Cluster On AWS And We Have Noticed That We Are Getting Some 502 Errors Once In A While That We Can't Seem To Trace Back.

Alright, so actually we noticed that the problem disappears if we use only sync requests. Meaning, if I create a sleep endpoint that is async we get the 502, but if it's sync we don't.

5 months ago
0 Hi Guys, We Are Running Clearml-Serving On A Kube Cluster On AWS And We Have Noticed That We Are Getting Some 502 Errors Once In A While That We Can't Seem To Trace Back.

We have tried both and got the same issue (gunicorn vs uvicorn).
No, I meant creating a

import time

from fastapi import APIRouter, status
from pydantic import BaseModel

router = APIRouter()

class TestResponse(BaseModel):
    status: str

@router.post(
    "/sleep",
    tags=["temp"],
    response_description="Return HTTP Status Code 200 (OK)",
    status_code=status.HTTP_200_OK,
    response_model=TestResponse,
)
# def here instead of async def -- the sync version does not reproduce the 502s
def post_sleep(time_sleep: float) -> TestResponse:
    """Sleep for the requested number of seconds and return OK."""
    time.sleep(time_sleep)
    return TestResponse(status="OK")
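
For comparison, a sketch of what the async variant that did show the 502s presumably looked like; the exact async handler isn't posted in the thread, so this is a hypothetical reconstruction that would replace the sync handler above.

import asyncio

@router.post(
    "/sleep",
    tags=["temp"],
    response_description="Return HTTP Status Code 200 (OK)",
    status_code=status.HTTP_200_OK,
    response_model=TestResponse,
)
async def post_sleep(time_sleep: float) -> TestResponse:
    """Async version: await a non-blocking sleep instead of blocking the worker."""
    await asyncio.sleep(time_sleep)
    return TestResponse(status="OK")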
5 months ago
0 Hi Guys, We Are Running Clearml-Serving On A Kube Cluster On AWS And We Have Noticed That We Are Getting Some 502 Errors Once In A While That We Can't Seem To Trace Back.

That's a fair point. Actually, we have switched away from siege because we believe it was causing the issues, and we are using Locust now instead. We have been running for days at the same rate and don't see any errors being reported...
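
For context, a minimal locustfile along the lines of the load test described might look like the sketch below; the endpoint path, wait times, and sleep value are illustrative assumptions, not details from the thread.

from locust import HttpUser, task, between

class SleepUser(HttpUser):
    # Each simulated user waits 0.1-0.5 s between requests.
    wait_time = between(0.1, 0.5)

    @task
    def call_sleep(self):
        # Hit the test /sleep endpoint shown earlier in the thread.
        self.client.post("/sleep", params={"time_sleep": 0.1})

Run with something like locust -f locustfile.py --host <serving-url> and ramp the number of users up to the target request rate.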

5 months ago
0 Hi Everyone, I'm Trying To Setup Clearml-Serving As Per

Hi @SuccessfulKoala55,
I'm running into almost the same error (see below), but I want to connect to the free clearml server version at None so I have set up the corresponding env variables in example.env:

CLEARML_WEB_HOST=""
CLEARML_API_HOST=""
CLEARML_FILES_HOST=""
CLEARML_API_ACCESS_KEY="---"
CLEARML_API_SECRET_KEY="---"
CLEARML_SERVING_TASK_ID="---"

I have set up the right values from...

10 months ago
0 Hi Guys, I Have A Question Regarding Model Tracking. I Have Pipelines That Use Xgboost Through The Scikit-Learn API To Perform:

Hi Alex,
thanks for your answer. I'm curious about your third point about using OutputModel. I could not figure out from the documentation how you actually use it. I constructed the OutputModel object as such:

  • out = OutputModel(task, name="my_model", framework="xgboost")
    However, I could not find any method in the docs that would allow me to pass the model object to that instance; or, said otherwise, I can't understand how to use that OutputModel to register my model, which would be stored in a...
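
A hedged sketch of one way this is commonly done with the ClearML SDK: save the fitted model to a file and attach it to the OutputModel via update_weights. The placeholder model and file name are illustrative, not details from the thread; check the method signatures against your ClearML version.

import numpy as np
from xgboost import XGBClassifier
from clearml import Task, OutputModel

# Placeholder model and data just to make the sketch self-contained.
xgb_estimator = XGBClassifier(n_estimators=5)
xgb_estimator.fit(np.random.rand(20, 3), np.random.randint(0, 2, 20))

task = Task.current_task()  # assumes this runs inside an existing ClearML task
out = OutputModel(task=task, name="my_model", framework="xgboost")

# Save the fitted sklearn-API XGBoost model to a file...
xgb_estimator.save_model("my_model.json")
# ...then register that file as the weights of the OutputModel
# (uploads it to the configured storage and links it to the task).
out.update_weights(weights_filename="my_model.json")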
11 months ago
0 Hi Guys, We Are Running Clearml-Serving On A Kube Cluster On AWS And We Have Noticed That We Are Getting Some 502 Errors Once In A While That We Can't Seem To Trace Back.

I have tested with an endpoint that basically adds two numbers and never managed to trigger the 502. I'm starting to wonder if we are not simply running too many workers. I had it wrong that 2 vCPUs should mean 5 workers would be fine; I now think it should probably be closer to 2, but I'm not sure why that would lead to requests being dropped.
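
For reference, the worker-count rule being weighed up here, written as a gunicorn config sketch (the numbers are illustrative, not a recommendation from the thread):

# gunicorn.conf.py
import multiprocessing

cpu_count = multiprocessing.cpu_count()  # 2 vCPUs on the box in question

# The often-quoted default is (2 * CPUs) + 1, i.e. 5 workers for 2 vCPUs:
# workers = (2 * cpu_count) + 1
# The thread suggests something closer to one worker per vCPU instead:
workers = cpu_count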

5 months ago
0 Hi Guys, We Are Running Clearml-Serving On A Kube Cluster On AWS And We Have Noticed That We Are Getting Some 502 Errors Once In A While That We Can't Seem To Trace Back.

Actually the requests are never registered by the gunicorn app, and the ALB logs show that there is no response from the target ("-").

5 months ago
0 Hi Guys, We Are Running Clearml-Serving On A Kube Cluster On AWS And We Have Noticed That We Are Getting Some 502 Errors Once In A While That We Can't Seem To Trace Back.

Hi Martin,

  • Actually we are using an ALB with a 30-second timeout (see the config sketch below)
  • we do not have GPU instances
  • docker version 1.3.0
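
Not confirmed as the cause in this thread, but a common configuration check for intermittent 502s behind an ALB is to keep the backend's keep-alive longer than the ALB idle timeout, so the ALB never reuses a connection the worker has already closed. A gunicorn config sketch of that idea (values illustrative):

# gunicorn.conf.py
alb_idle_timeout = 30  # seconds, as configured on the ALB above

keepalive = alb_idle_timeout + 30  # keep connections open longer than the ALB idle timeout
timeout = 120                      # worker timeout, generous for slow endpoints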
5 months ago
0 Hi Guys, We Are Running Clearml-Serving On A Kube Cluster On AWS And We Have Noticed That We Are Getting Some 502 Errors Once In A While That We Can't Seem To Trace Back.

Yeah, I don't know. I think we are probably just trying to push too high a throughput for that box, but it's weird that the packets just get dropped; I would have assumed the response time would degrade and requests would be queued.

5 months ago
0 Hi Guys, I Have Been Running The Clearml-Serving For A While Now And I Realize That From Time To Time After A Couple Of Hours The Serving Task (Control Plane) That Is Configured Through The CLI Goes Into Status Abort. This Happens Even Though All The Pods

OK, so I haven't looked at the latest changes after the sync this morning, but the ones we put in yesterday seem to have fixed the issue; the service is still running this morning at least.

2 months ago