AgitatedDove14 I noticed a lot of my tasks don't contain these graphs though...
Hi SuccessfulKoala55, with the ClearML server update, does it use a newer ES docker?
Hey SuccessfulKoala55 thanks for the answer.
any ideas how I can try to fix this?
I think it's still caching environments... I keep deleting the caches (pip, vcs, venvs-*) and running an experiment. It re-creates all these folders and even prints:
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests>=2.20.0->clearml==1.6.4->prediction-service-utilities==0.1.0) (3.4)
Requirement already satisfied: charset-normalizer<4,>=2 in /root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages (from requests>=2.20.0->clearml==1.6....
clearml-agent 1.2.3
Right, I used --services-mode 2 and it still runs more than 2 tasks simultaneously
Is this a possible future feature? I have used cometML before and they have this. I'm not sure how they do it though...
Right, and why can't a particular version be found? How does it try to find Python versions?
Hi CostlyOstrich36
I added this instruction at the very end of my postprocess function: shutil.rmtree("~/.clearml")
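(Side note, just a sketch: shutil.rmtree doesn't expand "~" by itself, so that call will likely fail or do nothing; expanding the path first should actually clear the cache. The "~/.clearml" path here is the default ClearML cache location, adjust if yours differs.)

```python
import os
import shutil

# shutil.rmtree() does not expand "~", so expand it explicitly first.
cache_dir = os.path.expanduser("~/.clearml")

# ignore_errors=True avoids a FileNotFoundError if the folder is already gone.
shutil.rmtree(cache_dir, ignore_errors=True)
```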
Because sometimes it clones a cached version of a private repository, instead of cloning the requested version
I'm also not sure what this is: -H "Content-Type: application/octet-stream" -H 'NV-InferRequest:batch_size: 1 input { name: "dense_input" dims: [-1, 784] } output { name: "activation_2" cls { count: 1 } }'
Well... it initially worked, but now I get the same thing 😕 SuccessfulKoala55
I see, ok!
I will try that out.
Another thing I noticed: none of my pipeline tasks are reporting these graphs, regardless of runtime. I guess this line would also fix that?
Ah, so you're saying I can write a callback for stuff like train_loss, val_loss, etc.
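Something like this is what I had in mind, just a rough sketch assuming a Keras model and the standard ClearML logger API (the metric names are whatever Keras puts in logs, e.g. loss / val_loss):

```python
import tensorflow as tf
from clearml import Task


class ClearMLScalarCallback(tf.keras.callbacks.Callback):
    """Report Keras epoch metrics (loss, val_loss, ...) as ClearML scalars."""

    def on_epoch_end(self, epoch, logs=None):
        logger = Task.current_task().get_logger()
        for name, value in (logs or {}).items():
            # One scalar plot per metric, e.g. "loss" or "val_loss"
            logger.report_scalar(title=name, series=name,
                                 value=float(value), iteration=epoch)


# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           callbacks=[ClearMLScalarCallback()])
```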
I don't think the conf is the issue. It's been deployed and working for a long time, and models from yesterday correctly display the URL.
Well, I have run the Keras MNIST example from the clearml-serving README. Now I'm just trying to send a request to make a prediction via curl.
And then you'll hook it
I have this inside my pipeline, defined with a decorator
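For context, it's roughly this shape (a minimal sketch of a decorator-defined pipeline; the step logic, names, and project are placeholders):

```python
from clearml.automation.controller import PipelineDecorator


@PipelineDecorator.component(return_values=["data"], cache=True)
def load_data():
    # Placeholder step: return whatever the next step needs
    return [1, 2, 3]


@PipelineDecorator.component(return_values=["result"])
def postprocess(data):
    # ... my postprocess logic goes here ...
    return sum(data)


@PipelineDecorator.pipeline(name="example-pipeline", project="examples", version="0.0.1")
def run_pipeline():
    data = load_data()
    print(postprocess(data))


if __name__ == "__main__":
    # run_locally() executes the steps in the local process (handy for debugging)
    PipelineDecorator.run_locally()
    run_pipeline()
```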
Sent it to you via DM!
I'm probably sending the request all wrong, plus I'm not sure what input format the model expects.
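In case it helps to reason about the payload: the dense_input dims [-1, 784] above suggest the model wants a flattened 28x28 image per row. Here's a rough sketch in Python of building such a payload (the endpoint URL and JSON shape are assumptions, not the verified clearml-serving API; the curl example in the README is the authoritative format):

```python
import numpy as np
import requests

# A single MNIST-style image: 28x28 grayscale, flattened to 784 floats in [0, 1]
image = np.random.rand(28, 28).astype(np.float32)
payload = {"dense_input": image.reshape(1, 784).tolist()}  # batch of 1

# NOTE: placeholder URL/endpoint - replace with the serving endpoint from the README
response = requests.post("http://127.0.0.1:8080/serve/keras_mnist", json=payload)
print(response.status_code, response.text)
```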
Absolutely, I could try but I'm not sure what it entails...
