Got the engine running.
curl <serving-engine-ip>:8000/v2/models/keras_mnist/versions/1
What’s the serving-engine-ip supposed to be?
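For context, here's roughly the check I'm trying to run, assuming <serving-engine-ip> is just the address of the deployed serving service (the IP below is a placeholder, not my actual setup):

```python
import requests

# Assumption: the serving engine is the Triton container deployed by clearml-serving,
# exposed on port 8000; "keras_mnist" version "1" follows the example above.
SERVING_ENGINE_IP = "10.0.0.42"  # placeholder - replace with the service's IP / hostname

url = f"http://{SERVING_ENGINE_IP}:8000/v2/models/keras_mnist/versions/1"
resp = requests.get(url, timeout=5)
resp.raise_for_status()
print(resp.json())  # Triton model metadata if the model is loaded
```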
I am essentially creating an EphemeralDataset abstraction with a controlled lifecycle, such that the data is removed after a day in experiments. Additionally and optionally, data created during a step in a pipeline can be cleared once the pipeline completes.
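Roughly what I have in mind, as a sketch only (EphemeralDataset, the TTL and the cleanup hook are my own wrapper, not an existing ClearML API):

```python
from datetime import datetime, timedelta

from clearml import Dataset


class EphemeralDataset:
    """Hypothetical wrapper: a ClearML dataset with a TTL, deleted after expiry.

    This is my own abstraction; in a real implementation the expiry would be
    persisted somewhere queryable (e.g. as a dataset tag), not just in memory.
    """

    def __init__(self, name, project, ttl=timedelta(days=1)):
        self.dataset = Dataset.create(dataset_name=name, dataset_project=project)
        self.expires_at = datetime.utcnow() + ttl

    def finalize(self):
        self.dataset.upload()
        self.dataset.finalize()

    def cleanup_if_expired(self):
        # Called from a scheduled cleanup job, or at pipeline completion for
        # per-step data; Dataset.delete removes the dataset by id.
        if datetime.utcnow() >= self.expires_at:
            Dataset.delete(dataset_id=self.dataset.id)
```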
That's cool AgitatedDove14 , will try it out and pester you a bit more. 🙂
Planning to exec into the container and run it in a loop and see what happens
AgitatedDove14 either one, depending on the scenario
That makes sense - one part I am confused about: the Triton engine container hosts all the models, right? Do we launch multiple groups of these in different projects?
I used .update_weights(path), with path being the model dir containing the model.py and the config.pbtxt. Should I use update_weights_package instead?
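For context, roughly what I did vs. what I'm asking about, assuming the standard clearml OutputModel API (project, task and path names here are made up):

```python
from clearml import OutputModel, Task

task = Task.init(project_name="serving-demo", task_name="register-model")  # placeholder names
model = OutputModel(task=task, framework="triton")

# update_weights(path) uploads a single weights file; update_weights_package
# packages a whole directory - here the Triton model dir holding model.py
# and config.pbtxt - and uploads it as one artifact.
model.update_weights_package(weights_path="models/keras_mnist/1")  # placeholder path
```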
Without some sort of automation on top, it feels a bit fragile
But I don’t see a task option in Dataset.create
https://github.com/allegroai/clearml/blob/master/clearml/datasets/dataset.py#L657-L663
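For reference, this is roughly how I'm calling it today - it creates its own backing task, and I don't see a way to hand it an existing one (names are placeholders):

```python
from clearml import Dataset

# Dataset.create spins up its own task; no `task=` argument as far as I can tell.
ds = Dataset.create(
    dataset_name="my-dataset",           # placeholder name
    dataset_project="minerva-datasets",  # placeholder project
)
```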
Also btw, is this supposed to be a screenshot from the community version? https://github.com/manojlds/clearml-serving/blob/main/docs/webapp_screenshots.gif
Model says PACKAGE, so that means it's fine, right?
The pipeline code itself is pretty standard:
` pipe = PipelineController(
    default_execution_queue="minerva-default",
    add_pipeline_tags=True,
    target_project=pipelines_project,
)
for step in self.config["steps"]:
    name = self._experiment_name(step)
    pipe.add_step(
        name=name,
        base_task_project=pipelines_project,
        base_task_name=name,
        parents=self._get_parents(step),
        task_overrides...
Thanks let me try playing with these!
dataset1 -> process -> dataset2
Hey TimelyPenguin76 - I am just using the helm chart and haven't done any setup on top of that. The agentservices is running as-is from the helm chart
Maybe related to running it in a notebook. Doing a task.close() finished it as expected
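i.e. roughly this (project/task names are placeholders):

```python
from clearml import Task

task = Task.init(project_name="minerva", task_name="notebook-run")  # placeholder names
# ... notebook work ...
task.close()  # explicitly closing the task marked it completed, as expected
```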
Doing this with one step - https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_controller.py
Also the pipeline ran as per this example - https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_controller.py
This is the command that is running:
` ['docker', 'run', '-t', '-e', 'NVIDIA_VISIBLE_DEVICES=none', '-e', 'CLEARML_WORKER_ID=clearml-services:service:c606029d77784c69a30edfdf4ba291a5', '-e', 'CLEARML_DOCKER_IMAGE=', '-v', '/tmp/.clearml_agent.72r6h9pl.cfg:/root/clearml.conf', '-v', '/root/.clearml/apt-cache:/var/cache/apt/archives', '-v', '/root/.clearml/pip-cache:/root/.cache/pip', '-v', '/root/.clearml/pip-download-cache:/root/.clearml/pip-download-cache', '-v', '/root/.clearml/cache:/clea...
Essentially, if I have a dataset on which I am performing transformations and then creating other downstream datasets
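Something like this, as a rough sketch (project and paths are placeholders; the point is the parent_datasets lineage from dataset1 to dataset2):

```python
from clearml import Dataset

# dataset1 -> process -> dataset2: pull a local copy of the parent,
# transform it, and register the output as a child dataset.
parent = Dataset.get(dataset_name="dataset1", dataset_project="minerva-datasets")  # placeholder names
local_path = parent.get_local_copy()

# ... run the transformation over local_path, writing results to "processed/" ...

child = Dataset.create(
    dataset_name="dataset2",
    dataset_project="minerva-datasets",
    parent_datasets=[parent.id],
)
child.add_files("processed/")  # placeholder output directory
child.upload()
child.finalize()
```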
AgitatedDove14 - where does automation.controller.PipelineController fit in?
I just run the k8s daemon with a simple helm chart and deploy it via Terraform with the Helm provider. Nothing much to share as it's just a basic chart 🙂