But that itself is running in a task, right?
I'm pretty bad with Python packaging terms, but I use pyproject.toml and `python -m build`, which generates a wheel and a tar, and I install the tar.
I was thinking such limitations would exist only for published ones
you will have to update this in your local clearml.conf, or wherever you are doing the Task.init from.
@<1523701205467926528:profile|AgitatedDove14> - any thoughts on this? Would like to use profile / IAM roles as well.
forking and using the latest code fixes the boto issue at least
Got the engine running.
curl <serving-engine-ip>:8000/v2/models/keras_mnist/versions/1
What’s the serving-engine-ip supposed to be?
For now that's a quick test, but for actual use I will need a proper model (pkl) and the .py file
The agent IP? Generally, what's the expected pattern to deploy and scale this for multiple models?
I would prefer controlled behavior over whatever version happens to be available getting used. Here we triggered a bunch of jobs that all went fine, and even the evaluations were fine, but then when we triggered an inference deploy it failed
Planning to exec into the container and run it in a loop and see what happens
Also, btw, is this supposed to be a screenshot from the community version? https://github.com/manojlds/clearml-serving/blob/main/docs/webapp_screenshots.gif
Model says PACKAGE, so that means it's fine, right?
But I don’t see a task option in Dataset.create
https://github.com/allegroai/clearml/blob/master/clearml/datasets/dataset.py#L657-L663
I don’t want to though. Will run it as part of a pipeline
But it seems to make the current task the data processing task. I don't want it to take over the task.
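Roughly what I'm doing, for reference (just a sketch, the project/dataset names and paths are placeholders):

```python
from clearml import Task, Dataset

# This step already runs as its own task inside the pipeline
task = Task.init(project_name="my_project", task_name="prepare_data")

# Creating the dataset here seems to turn the current task into the
# data-processing task, rather than getting a separate task of its own
dataset = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")
dataset.add_files("data/")
dataset.upload()
dataset.finalize()
```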
Great. Is there a good view of the roadmap?
Good question 🙂
this is what I am seeing in the logs:
No tasks in queue 9154efd8a1314550b1c7882981720861
No tasks in Queues, sleeping for 5.0 seconds
No tasks in queue 9154efd8a1314550b1c7882981720861
No tasks in Queues, sleeping for 5.0 seconds
No tasks in queue 9154efd8a1314550b1c7882981720861
No tasks in Queues, sleeping for 5.0 seconds
No tasks in queue 9154efd8a1314550b1c7882981720861
No tasks in Queues, sleeping for 5.0 seconds
K8S Glue pods monitor: Failed parsing kubectl output...
Anything that is shown in git status as untracked? So ignore .gitignored files, and maybe a param or config option to say include untracked. Anyway, it's only a nice-to-have feature.
# Python 3.6.13 | packaged by conda-forge | (default, Feb 19 2021, 05:36:01) [GCC 9.3.0]
argparse == 1.4.0
boto3 == 1.17.70
minerva == 0.1.0
torch == 1.7.1
torchvision == 0.8.2
BTW, when I started using S3, I thought I needed to specify output_uri for each task. I soon realized that you just need the prefix where you want things to go, and ClearML will take care of appending the project etc. to the path. So for most use cases, a single output URI set in the conf should work.
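i.e. per task it's just this (sketch; the bucket/prefix is a placeholder):

```python
from clearml import Task

# Per-task override: everything this task uploads goes under the given prefix,
# and ClearML appends the project/task details to the path by itself
task = Task.init(
    project_name="my_project",
    task_name="train",
    output_uri="s3://my-bucket/clearml",  # placeholder bucket/prefix
)

# Or set it once globally in clearml.conf - I believe the key is
# sdk.development.default_output_uri: "s3://my-bucket/clearml"
```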
There's also the CLI to create tasks - https://github.com/allegroai/clearml/blob/master/docs/clearml-task.md
A channel here would be good too 🙂
Found the custom backend aspect of Triton - https://github.com/triton-inference-server/python_backend
Is that the right way?
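From the README it looks roughly like this, if I'm reading it right (a sketch; INPUT0/OUTPUT0 and the echo logic are placeholders, the real model logic would go in execute):

```python
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    """Minimal python_backend model: Triton loads this class from model.py."""

    def initialize(self, args):
        # Called once when the model is loaded; args carries the model config
        pass

    def execute(self, requests):
        # Called per batch of inference requests; must return one response per request
        responses = []
        for request in requests:
            input_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            data = input_tensor.as_numpy()
            # Placeholder "model": just echo the input back
            output_tensor = pb_utils.Tensor("OUTPUT0", data)
            responses.append(pb_utils.InferenceResponse(output_tensors=[output_tensor]))
        return responses

    def finalize(self):
        # Called when the model is unloaded
        pass
```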