
but is it true that I can have multiple models on the same docker instance with different endpoints?
When you spin the model you can tell it any additional packages you might need
What does spin mean in this context?
clearml-serving ...
?
interesting, if I run the script from the repo main directory with python code/run.py
it still gives me the same error message
clearml.Repository Detection - WARNING - Can't get diff information for git repo in repo/code
Yeah, I found it, thanks.
I also found that you should have a deterministic ordering before you apply a fixed seed random sampling or else you will have a lot of head-scratching and assertion errors...
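To illustrate the point (a stdlib-only sketch; the item names are made up): a fixed seed only gives reproducible samples if the input ordering is deterministic first, since sets, dicts, and directory listings can vary between runs.

```python
import random

items = {"banana", "apple", "cherry", "date"}  # unordered collection
ordered = sorted(items)                        # pin a deterministic order first
rng = random.Random(42)                        # then apply the fixed seed
sample = rng.sample(ordered, 2)                # now reproducible across runs
```

Sampling directly from the unordered collection would pass on some runs and trip assertions on others, which is exactly the head-scratching described above.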
ahh, ok, well, I tried to find an example that I can extend but this was the only reference I found: https://github.com/allegroai/clearml/blob/ca384aa75c236e0a8af7c5dd85406a359c3eb703/clearml/model.py#L35
I was just looking at the model example. How does OutputModel store the binary? For example, for an xgboost model?
Is there an explicit OutputModel + xgboost example somewhere?
nah, it runs about 1 minute of a standard SQL -> dataframes -> xgboost pipeline with some file saving
git status gives correct information
python run.py param1 param2
I don't understand the link between service IDs, service tasks, and docker containers
What I'm trying to do is give the DSes a lightweight base class that they use and that is independent of clearml, and have a framework hold all the clearml-specific code. This lets them experiment outside of clearml and only switch to it once they're in an OK state. It also helps keep the clearml workspace from being polluted with half-baked ideas
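The split described above could be sketched like this (all class and method names are hypothetical; the clearml calls are shown as comments so the sketch runs without a server):

```python
class BaseExperiment:
    """Lightweight base class with no clearml dependency.
    DSes subclass this and override train()."""

    def run(self):
        return self.train(self.get_params())

    def get_params(self):
        return {}

    def train(self, params):
        raise NotImplementedError

class ClearMLExperiment(BaseExperiment):
    """Framework layer: same interface, but wires in the clearml-specific
    code, so switching is just changing the base class."""

    def run(self):
        # task = Task.init(project_name="...", task_name="...")
        # task.connect(self.get_params())
        result = super().run()
        # task.upload_artifact("result", result)
        return result
```

The design point is that the experiment code itself never imports clearml; only the framework layer does.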
these are the service instances (basically increased visibility into what's going on inside the serving containers)
But these have different task IDs and the same endpoints (from looking through the tabs)
So I am not sure why they are here and why not somewhere else
python='python3' ~/anaconda3/envs/.venv/bin/python3
and immediately complained about a missing package, which apparently I can't specify when I establish the model endpoint; instead I need to re-compose the docker container by passing an env variable to it????
I guess so, this was done by our DevOps guy and he said he is following instructions
yes, I do, I added an auxiliary_cfg
and I saw it immediately both in CLI and in the web ui
TBH our Preprocess class has an import in it that points to a file that is not part of preprocess.py, so I have no idea how you think this can work.
git-nbdiffdriver diff: git-nbdiffdriver: command not found fatal: external diff died, stopping at ...
I put two models on the same endpoint, and then only one was running.
Sorry I wanted to say "service id"
Same service-id but different endpoints
TBH the main reason I went with our API is that, because of the custom model loading, we need to use the "custom" framework anyway.
{"detail":"Error processing request: ('Expecting data to be a DMatrix object, got: ', <class 'pandas.core.frame.DataFrame'>)"}
now on to the next pain point:
I pass the IDs to the docker container as environment variables, so changing them does need a restart of the docker container, but I guess we can live with that for now
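The container-side half of that could be sketched as follows (MODEL_ID and SERVICE_ID are hypothetical variable names, not the actual ones used):

```python
import os

def get_configured_ids(env=os.environ):
    """Read the IDs the container was started with, e.g.
    docker run -e MODEL_ID=... -e SERVICE_ID=... image
    Since env vars are fixed at container start, changing an ID
    means restarting the container, as noted above."""
    model_id = env.get("MODEL_ID")
    service_id = env.get("SERVICE_ID")
    if not model_id:
        raise RuntimeError(
            "MODEL_ID not set; restart the container with -e MODEL_ID=..."
        )
    return model_id, service_id
```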