
I will actually write here what I found: trigger_on_tags and trigger_required are actually the same and get concatenated with OR. You need to make sure you prepend "__$all" to the list if AND is the behavior you want.
There is, in my opinion, a bug in the deserialization process: the triggers get de-duplicated by trigger name, yet when using trigger_project dozens of triggers are created with the same name (one per dataset in the project). This leads to random behavior dependi...
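(For anyone landing here, a minimal sketch of the "__$all" prefix trick, assuming the TriggerScheduler API from clearml.automation; the callback and its signature are placeholders:)
from clearml.automation import TriggerScheduler

def on_dataset_updated(task_id):
    # placeholder callback; replace with your own handling
    print(f"triggered by {task_id}")

scheduler = TriggerScheduler(pooling_frequency_minutes=5)
# tags are OR'ed by default; prepending "__$all" switches the list to AND semantics
scheduler.add_dataset_trigger(
    schedule_function=on_dataset_updated,
    name="all-tags-trigger",
    trigger_project="my_project",
    trigger_on_tags=["__$all", "validated", "production"],
)
scheduler.start()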
Geez, I have been looking for this for a while, thanks for saving my day...again.
Was allow_archived removed from Task.query_tasks?
That being said, now I'm running into another issue: this seems to be "erasing" all the packages that had been set in the base task I'm cloning from. Is there a method that would return these packages so that I could add to them?
Hey Martin, real quick actually: on your update to the requirements.txt file, isn't that constraint on fastapi inconsistent?
I'm assuming that task.data.script.requirements is not the right way to do this...
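(In case it helps: the workaround I sketched, under the assumption that the packages live in the internal task.data.script.requirements field, which may change between versions; names and the extra package are placeholders:)
from clearml import Task

base = Task.get_task(task_id="...")  # the base task being cloned; id elided
# assumed shape of the internal field: {"pip": "pkg1==1.0\npkg2==2.0"}
existing = (base.data.script.requirements or {}).get("pip", "")

cloned = Task.clone(source_task=base, name="clone-with-extra-packages")
cloned.set_packages(existing.splitlines() + ["my-extra-package==1.2.3"])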
So I still can't figure out what sets the task status to Aborted.
No requests are being served; as in, there is indeed no traffic.
tx that's what I was doing more or less 😆
Hey Martin, I will, but it's a bit more tricky because we have modifications in the code that I have to merge on our side
How can you be >= 0.109.1 and lower than 0.96?
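(To illustrate, a hypothetical requirements.txt line with the kind of contradictory bounds I mean; the actual constraint in the repo may differ:)
fastapi>=0.109.1,<0.96  # unsatisfiable: no version is both >= 0.109.1 and < 0.96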
Ok, so I haven't looked at the latest changes after the sync this morning, but the ones we put in yesterday seem to have fixed the issue; the service is still running this morning at least.
Hi Martin, thanks a lot for looking into this so quickly. Will you let me know the version number once it's pushed? Thanks!
I can't be sure of the version; I can't check at the moment. I have 1.3.0 off the top of my head, but I could be way off.
We put back the additional changes and so far it seems that this has solved our issue. Thanks a lot for the quick turnaround on this.
My understanding was that the daemon thread was deserializing the control-plane task every 300 seconds by default.
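(Roughly the loop I had in mind: a sketch of the behavior as I understood it, not the actual clearml-serving code; the task id is a placeholder:)
import threading
import time
from clearml import Task

def sync_loop(serving_task_id, poll_seconds=300):
    # periodically re-fetch (deserialize) the control-plane task state
    while True:
        control_task = Task.get_task(task_id=serving_task_id)
        # ... apply any updated endpoints/config from control_task here ...
        time.sleep(poll_seconds)

threading.Thread(target=sync_loop, args=("my-serving-task-id",), daemon=True).start()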
we are actually building from our fork of the code into our own images and helm charts
So they ping the web server?
Ok great, I'll check what other changes we missed yesterday.
What is actually setting the task status to Aborted?
Hi @<1523701087100473344:profile|SuccessfulKoala55> ,
I'm running into almost the same error (see below), but I want to connect to the free ClearML server version at None, so I have set up the corresponding env variables in example.env:
CLEARML_WEB_HOST="
"
CLEARML_API_HOST="
"
CLEARML_FILES_HOST="
"
CLEARML_API_ACCESS_KEY="---"
CLEARML_API_SECRET_KEY="---"
CLEARML_SERVING_TASK_ID="---"
I have set up the right values from...
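(For reference, the shape I was aiming for; the hosted-server endpoints below are my assumption based on the public ClearML docs, and the keys are placeholders:)
CLEARML_WEB_HOST="https://app.clear.ml"
CLEARML_API_HOST="https://api.clear.ml"
CLEARML_FILES_HOST="https://files.clear.ml"
CLEARML_API_ACCESS_KEY="<your-access-key>"
CLEARML_API_SECRET_KEY="<your-secret-key>"
CLEARML_SERVING_TASK_ID="<your-serving-task-id>"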
Ok, I see that now. Everything is there on the UI and web server though, so we went ahead and implemented it ourselves on the clearml-serving piece.
any timeline on this that you are aware of?
Hi Alex,
thanks for your answer. I'm curious about your third point using OutputModel. I could not figure out from the documentation how you actually use it. I constructed the OutputModel object as such:
out = OutputModel(task, name="my_model", framework="xgboost")
However, I could not find any method in the doc that would allow me to pass the model object to that instance; said otherwise, I can't understand how to use that OutputModel to register my model, which would be stored in a...
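(For anyone else reading, a minimal sketch of the registration flow I was after, assuming OutputModel.update_weights is the intended call; the project, task name, and training data are placeholders:)
import xgboost as xgb
from clearml import Task, OutputModel

task = Task.init(project_name="demo", task_name="register-model")
out = OutputModel(task=task, name="my_model", framework="xgboost")

# train a trivial booster just so the example is self-contained
booster = xgb.train({}, xgb.DMatrix([[0.0], [1.0]], label=[0, 1]))
booster.save_model("model.xgb")
# register the saved file as this OutputModel's weights
out.update_weights(weights_filename="model.xgb")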
That's a fair point. Actually, we have switched away from siege because we believe it was causing the issues, and are using Locust instead. We have been running for days at the same rate and don't see any errors being reported...
Hey, thanks a lot Alex, that's exactly what I was looking for. Cheers!