Just to be sure I understand you correctly: you're saving/dumping an sklearn model in the clearml experiment manager, then want to serve it using clearml serving, but you do not wish to specify the model input and output shapes in the CLI?
However, I do think I can already open the Huggingface PR in the meantime; it actually has relatively little to do with the second bug.
While creating it, I found that this hack should be on our side, not on Huggingface's. So I'm only going to fix issue 1 with the PR; issue 2 is ours 🙂
Hi ComfortableShark77 !
Which commands did you use exactly to deploy the model?
@<1558986839216361472:profile|FuzzyCentipede59> Would you mind sharing how you're running the training? i.e. a minimal code example so we can reproduce the issue?
Yeah, I do the same thing all the time. You can limit the number of tasks that are kept in HPO with the save_top_k_tasks_only parameter, and you can create subprojects by simply using a slash in the name 🙂 https://clear.ml/docs/latest/docs/fundamentals/projects#creating-subprojects
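For reference, a minimal sketch of what that could look like; the project name, metric names, and parameter range below are placeholders you'd replace with your own:

```python
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformIntegerParameterRange

# A slash in the project name creates a subproject automatically
task = Task.init(project_name="MyProject/HPO", task_name="optimizer")

optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",  # the template experiment to clone
    hyper_parameters=[
        UniformIntegerParameterRange(
            "General/batch_size", min_value=16, max_value=128, step_size=16
        ),
    ],
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    save_top_k_tasks_only=5,  # keep only the 5 best tasks, archive the rest
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```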
Hmm, I can't really follow your explanation. The removed file SHOULD not exist, right? 😅 And what exactly do you mean by the last sentence? An artifact is an output generated as part of a task. Can you show me what you mean with screenshots, for example?
Hi OddShrimp85
Do you have some more information than that? It could be a whole list of things 🙂
Great to hear! Then it comes down to waiting for the next Huggingface release!
Isitdown seems to be reporting it as up. Any issues with other websites?
Hi Fawad, maybe this can help you get started! They're both C++ and Python examples of Triton inference. Be careful though: the pre- and postprocessing used is specific to the model (in this case YOLOv4) and you'll have to change it to your own model's needs.
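To illustrate the kind of changes I mean, here's a rough Python sketch using Triton's HTTP client; the tensor names, input size, and the normalization are YOLOv4-style assumptions you'd swap for your own model:

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Model-specific preprocessing: YOLOv4 commonly expects a 416x416,
# normalized, channel-first float32 image; your model will differ.
image = np.random.rand(416, 416, 3).astype(np.float32)  # stand-in for a real image
blob = np.transpose(image, (2, 0, 1))[None, ...]         # NCHW, batch size 1

infer_input = httpclient.InferInput("input", list(blob.shape), "FP32")
infer_input.set_data_from_numpy(blob)

result = client.infer(model_name="yolov4", inputs=[infer_input])
raw_output = result.as_numpy("output")  # model-specific postprocessing (e.g. NMS) still needed
```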
You will have to provide more information. What other docker containers are running and how did you start the server?
Great to know you found it 😄
The point of the alias is better visibility in the Experiment Manager. Check the screenshots above for what it looks like in the UI. Essentially, setting an alias makes sure the task that gets the dataset automatically logs the ID it receives from Dataset.get(). The reason being that if you later look back at your experiment, you can also see which dataset was retrieved with .get() back then.
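In code that looks something like the snippet below; the project and dataset names are placeholders:

```python
from clearml import Dataset

# Passing alias=... makes the consuming task automatically log the
# resolved dataset ID under that name, so you can trace later which
# dataset version the experiment actually used.
dataset = Dataset.get(
    dataset_project="MyProject",
    dataset_name="training_data",
    alias="training_data",
)
local_path = dataset.get_local_copy()
```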
ExuberantBat52 If you still get the log messages, where did you specify the alias?...
Ah I see. So then I would guess it is due to the remote machine (the clearml agent) not being able to properly access your clearml server
Hi @<1523701949617147904:profile|PricklyRaven28> just letting you know I still have this on my TODO, I'll update you as soon as I have something!
Wait, is it possible to do what I'm doing but with just one big Dataset object or something?
Don't know if that's possible yet, but maybe something like the proposed querying could help here?
Unfortunately no, ClearML serving does not infer input or output shapes from the saved models as of today. Maybe you could open an issue on the ClearML serving GitHub to request it? Preferably with a clear, minimal example; that would be awesome! We'd take it into account for the next releases.
No, inputs and outputs are never set automatically 🙂 For e.g. Keras you'll have to specify them using the CLI when creating the endpoint, so Triton knows how to optimise, and also set them correctly in your preprocessing so Triton receives the format it expects.
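Whatever shapes you declare on the CLI have to match what your preprocessing produces. Here's a minimal sketch of such a preprocessing module, loosely based on the clearml-serving examples; the request field name and tensor shape are placeholders:

```python
from typing import Any
import numpy as np

class Preprocess(object):
    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # Turn the request body into the exact tensor layout Triton expects
        return np.array(body["data"], dtype=np.float32).reshape(1, -1)

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # Turn the raw model output back into a JSON-friendly response
        return {"prediction": data.tolist()}
```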
If that's true, the error should be on the combine function, no? Do you have a more detailed error log or minimal reproducible example?
That makes sense! Maybe something like the dataset querying used in the clearml hyperdatasets might be useful here? Basically you'd query your dataset to only include the samples you want, and have the query itself be a hyperparameter in your experiment?
Hey! Sorry, I didn't fully read your question and missed that you already did it. It should not be done inside the clearml-serving-triton service but instead inside the clearml-serving-inference service. That is where the preprocessing script is run, and it seems to be where the error is coming from.
Ok, good to know! Thank you very much for doing this!
Hi @<1533257278776414208:profile|SuperiorCockroach75>
I must say I don't really know where this comes from. As far as I understand, the agent should install the packages exactly as they are saved on the task itself. Can you go to the original experiment of the pipeline step in question (you can do this by selecting the step and clicking on "Full Details" in the info panel)? There, under the execution tab, you should see which version the task detected.
The task itself will try to autodetect t...
Hi VictoriousPenguin97 ! I think you should be able to change it in the docker-compose file here: https://github.com/allegroai/clearml-server/blob/master/docker/docker-compose.yml
You can map the internal 8008 port to another port on your local machine. But be sure to provide the different port number to any client that tries to connect (using clearml-init).
Interesting! I'm glad to know it's working now, only I now really want to know what caused it 😄 Let me know if you ever do find out!
Hi Oriel!
If you want to only serve an if-else model, why do you want to use clearml-serving for that? What do you mean by "online featurer"?