I see, ok!
I will try that out.
Another thing I noticed: none of my pipeline tasks are reporting these graphs, regardless of runtime. I guess this line would also fix that?
This is a minimal Comet example. I'm afraid I don't know what it does under the hood. There are no callbacks on the metrics tracked in model.fit,
and yet if you check out your project on the website, your training and validation losses are tracked automatically, live.
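A hedged sketch of what such a minimal example might look like (assumption: importing comet_ml before keras and creating an Experiment is what enables the automatic logging; the model, data, and configuration here are illustrative, and the API key is expected in the environment):

```python
# Sketch (assumes the `comet_ml` package): importing comet_ml *before* keras
# and creating an Experiment patches model.fit, so loss/val_loss are streamed
# to the web UI with no explicit callbacks in the training code.
try:
    from comet_ml import Experiment  # must be imported before keras

    import numpy as np
    from tensorflow import keras

    experiment = Experiment()  # reads COMET_API_KEY from the environment

    model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")

    x = np.random.rand(32, 4).astype("float32")
    y = np.random.rand(32, 1).astype("float32")
    model.fit(x, y, validation_split=0.25, epochs=2)  # metrics logged live
except Exception:  # comet_ml/tensorflow missing, or no API key configured
    Experiment = None
```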
well... it initially worked, but now I get the same thing 😕 SuccessfulKoala55
yes, I just ran steps 6-12 again from https://allegro.ai/docs/deploying_trains/trains_server_linux_mac/
right, and why can't a particular version be found? how does it try to find Python versions?
i'm probably sending the request all wrong, and i'm not sure how the model expects the input
but it's been that way for over an hour... I remember I can force the task to wait for the upload. How do I do this?
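In case it helps anyone searching later, a minimal sketch, assuming the ClearML/Trains SDK, where Task.flush with wait_for_uploads=True blocks until background uploads finish:

```python
# Sketch (assumes the `clearml` package): block the current task until all
# background artifact/model uploads have completed, instead of returning
# immediately while uploads continue in the background.
try:
    from clearml import Task

    task = Task.current_task()  # None when no task is running in this process
    if task is not None:
        task.flush(wait_for_uploads=True)  # wait for pending uploads
except ImportError:
    Task = None  # clearml is not installed here; the call above is illustrative
```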
if i enqueue the script to the services queue but run_as_service is false, what happens?
right, seems to have worked now!
AgitatedDove14 I noticed a lot of my tasks don't contain these graphs though...
tagging @<1523701205467926528:profile|AgitatedDove14> here just in case 😅
But where do you manually set the name of each task in this code? the .component has a name argument you can provide
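For reference, a hedged sketch assuming ClearML's PipelineDecorator API, where the decorator's name argument sets the step's task name (the step name and return values here are examples):

```python
# Sketch (assumes clearml's PipelineDecorator): the `name` argument of
# @PipelineDecorator.component sets the task name of that pipeline step.
try:
    from clearml.automation.controller import PipelineDecorator

    @PipelineDecorator.component(name="prepare_data", return_values=["data"])
    def prepare_data():
        # Each component runs as its own task, named via `name` above.
        return [1, 2, 3]
except Exception:  # clearml not installed, or the API differs in this version
    PipelineDecorator = None
```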
platform: "tensorflow_savedmodel"
input [
  {
    name: "dense_input"
    data_type: TYPE_FP32
    dims: [ -1, 784 ]
  }
]
output [
  {
    name: "activation_2"
    data_type: TYPE_FP32
    dims: [ -1, 10 ]
  }
]