Adding tags this way to a Dataset object works fine. This issue only occurred when doing this to a model.
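For reference, here's roughly the dataset-side code that works for me; a minimal sketch with placeholder project/dataset names, assuming the Dataset class exposes add_tags:

from clearml import Dataset

# Fetch an existing dataset (placeholder project/name)
dataset = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")

# Tagging the Dataset this way persists fine; the same idea on a model did not
dataset.add_tags(["validated"])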
Normally when you save a model in TensorFlow, you get a whole SavedModel, not just the weights. Is there no way to get the whole model, including the architecture?
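For illustration, a sketch of what I'd want, assuming OutputModel.update_weights_package can upload a whole SavedModel directory (all names here are placeholders):

import tensorflow as tf
from clearml import Task, OutputModel

task = Task.init(project_name="my_project", task_name="save_full_model")

# A tiny stand-in model; saving to a directory writes a full SavedModel
# (architecture + weights), not just a weights file
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.save("saved_model_dir")

# Upload the entire directory as the task's output model
output_model = OutputModel(task=task, framework="tensorflow")
output_model.update_weights_package(weights_path="saved_model_dir")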
Elastic is what ClearML uses to handle data?
The storage is basically the machine the ClearML server is on, not using S3 or anything.
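If I wanted to push artifacts to S3 instead, my understanding is it would be something like this (bucket is a placeholder), since Task.init takes an output_uri:

from clearml import Task

# output_uri redirects model/artifact uploads away from the server's local disk
task = Task.init(
    project_name="my_project",
    task_name="s3_output",
    output_uri="s3://my-bucket/clearml",  # hypothetical bucket
)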
As I go through the model.py file, I get what you're saying. The only problem is that in the case of auto-logging, I don't have the model ID for the model being saved.
It was working fine for a while but then it just failed.
So right now, I'm creating an OutputModel and passing the current task in the constructor. Then I just save the TensorFlow Keras model. When I look at the model artifact details in the ClearML UI, it's been saved the usual way, and none of the tags I added in the OutputModel constructor are there. From this it seems that ClearML is auto-logging the model, and the model isn't connected to the OutputModel object that I created.
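Roughly what my code looks like; a sketch with placeholder names, assuming auto_connect_frameworks in Task.init can switch the TensorFlow/Keras auto-logging off so only the explicit OutputModel gets registered:

import tensorflow as tf
from clearml import Task, OutputModel

# Disable the framework hook so ClearML doesn't auto-log its own model
task = Task.init(
    project_name="my_project",
    task_name="tagged_model",
    auto_connect_frameworks={"tensorflow": False},
)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.save("my_model.h5")

# Register the saved weights on the OutputModel created with our tags
output_model = OutputModel(task=task, tags=["my-tag"], framework="tensorflow")
output_model.update_weights(weights_filename="my_model.h5")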
You're saying that the model should get connected if I call up...
I shared the error above. I'm simply trying to make the yolov5 repo by Ultralytics part of my pipeline.
My draft is View Only but the cloned toy task one is in normal Draft mode.
Collecting idna==3.3
Using cached idna-3.3-py3-none-any.whl (61 kB)
Collecting importlib-metadata==4.8.2
Using cached importlib_metadata-4.8.2-py3-none-any.whl (17 kB)
Collecting importlib-resources==5.4.0
Using cached importlib_resources-5.4.0-py3-none-any.whl (28 kB)
ERROR: Could not find a version that satisfies the requirement jsonschema==4.2.1 (from -r /tmp/cached-reqsm1gu3664.txt (line 19)) (from versions: 0.1a0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8.0, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 2.0...
So when I run the task using clearml-task --repo and create a task, it runs fine. It only runs into the above error when I clone or reset the task.
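One workaround I'm thinking of trying, assuming Task.add_requirements behaves as documented (it has to be called before Task.init): override the cached pin with a version spec pip can actually resolve:

from clearml import Task

# Replace the exact jsonschema pin from the cached requirements with a
# looser spec; the agent's pip can then pick a version that exists
Task.add_requirements("jsonschema", "<4.2")
task = Task.init(project_name="my_project", task_name="pinned_reqs")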
I understand that storing data outside ClearML won't ensure its immutability. I guess this could be built into ClearML as a feature at some point in the future.
However, cloning it takes the value from the ClearML args, which somehow converts it to a string?
before pipe.add_step(train_model)?
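For context, this is roughly the wiring; a sketch with placeholder names, assuming pipeline parameters round-trip through the cloned task's hyperparameters (where everything is stored as a string, so the step has to cast it back):

from clearml import PipelineController

pipe = PipelineController(name="my_pipeline", project="my_project", version="1.0")
pipe.add_parameter(name="epochs", default=10)

# The override lands in the cloned task's Args section as a string,
# so inside the step args.epochs must be cast back to int
pipe.add_step(
    name="train_model",
    base_task_project="my_project",
    base_task_name="train_base",
    parameter_override={"Args/epochs": "${pipeline.epochs}"},
)
pipe.start(queue="default")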
Anyway, I could apparently delete things in the dataset from the local copy. Isn't it supposed to be immutable?
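The distinction as I understand it; a sketch with placeholder names, assuming get_local_copy is meant as a shared read-only cache and get_mutable_local_copy is the sanctioned way to edit:

from clearml import Dataset

dataset = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")

# Shared cache folder: treated as read-only by convention, but nothing
# actually prevents deleting files from it
read_only_path = dataset.get_local_copy()

# Explicit working copy in a folder you own, for deliberate edits
work_path = dataset.get_mutable_local_copy(target_folder="./dataset_work")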
I'm assuming the Triton serving engine is running on the serving queue in my case. Is the serving example also running on the serving queue, or is it running on the services queue? And lastly, I don't have a ClearML agent listening to the services queue; does ClearML handle this on its own?
For now, installing venv fixes the problem.
No matter how many times I run this code, it always gives the same output. The tag gets appended to the list but isn't saved, unless there's something else I'm supposed to do as well.
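The code in question, more or less; a sketch with a placeholder model ID, assuming the tags property only persists on assignment (through the setter), not on in-place list mutation:

from clearml import Model

model = Model(model_id="MODEL_ID")  # placeholder ID

# Appending mutates the returned list but never hits the setter,
# so nothing is written back to the server:
# model.tags.append("new-tag")

# Re-assigning goes through the property setter and should persist
model.tags = model.tags + ["new-tag"]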
So I took the dataset trigger from this and added it to my own test code, which needs to run a task every time the trigger is activated.
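My test code is roughly this; a sketch with placeholder IDs and queues, following the TriggerScheduler pattern from that example:

from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=3)

# On every matching dataset event, enqueue a clone of the given task
trigger.add_dataset_trigger(
    schedule_task_id="TASK_ID",   # placeholder: the task to run
    schedule_queue="default",
    trigger_project="my_dataset_project",
    name="dataset-trigger",
)
trigger.start()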
Thank you, I'll take a look
I've also mentioned it on the issue I created, but I had the problem even when I set the type to bool in parser.add_argument(type=bool).
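For what it's worth, type=bool is a known argparse footgun independent of ClearML, since bool() on any non-empty string is True. A standalone demonstration:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--flag", type=bool)
args = parser.parse_args(["--flag", "False"])

# bool("False") is True: any non-empty string is truthy
print(args.flag)  # True

# A common fix is a flag-style argument instead:
# parser.add_argument("--flag", action="store_true")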
Are there any packages other than venv required on the agent? I'm not sure exactly which packages I need there, since the function normally wouldn't need venv; it just increments a number by 1.
Would you know what the pros of learning online would be, other than the incoming data being as close as possible to the current distribution of the data over time? And would those benefits make online training worth it?
I just copied the commands from the page in order and pasted them, all of the Linux ones specifically.
Also, the steps say that I should run the serving process on the default queue, but I've run it on a queue I created called the serving queue, with an agent listening to it.
Okay so when I add trigger_on_tags, the repetition issue is resolved.