So far I have taken one MNIST image, and done the following:
```python
from PIL import Image
import numpy as np


def preprocess(img, format, dtype, h, w, scaling):
    # convert to single-channel grayscale
    sample_img = img.convert('L')
    # resize to a 1 x (w*h) image with bilinear interpolation
    resized_img = sample_img.resize((1, w * h), Image.BILINEAR)
    resized = np.array(resized_img)
    # cast to the requested dtype (the format and scaling arguments are unused here)
    resized = resized.astype(dtype)
    return resized


# png img file
img = Image.open('./7.png')
# preprocessed img, FP32 formatted numpy array
img = preprocess(img, format, "float32", 28, 28, None)
...
```
But where do you manually set the name of each task in this code? The `.component` decorator has a `name` argument you can provide.
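For illustration, a minimal sketch assuming the pipeline is built with ClearML's `PipelineDecorator`; the step name and function below are hypothetical, not taken from the code in question:

```python
from clearml.automation.controller import PipelineDecorator

# hypothetical step: the `name` argument sets the task name shown in the UI
@PipelineDecorator.component(name="preprocess step", return_values=["preprocessed"])
def preprocess_step(image_path):
    # ... preprocessing logic would go here ...
    return image_path
```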
I am curious about the updates in version 1.0.0; where can I see some info regarding them?
Passing state information from pre- to post-processing, and the dynamic preprocessing code, for example.
Can you elaborate a bit on the token side? I'm not sure exactly what would be a bad practice here.
does this make more sense? SuccessfulKoala55
In fact I just did that yesterday. I'll let you know how it goes
sure. Removing the task.connect(args_) does not fix my situation
ah.. agent was on a different machine..
so it tries to find it under /usr/bin/python/ I assume?
This is a minimal Comet example. I'm afraid I don't know what it does under the hood. There are no callbacks on the metrics tracked in `model.fit`, and yet if you check out your project on the website, your training and validation losses are tracked automatically, live.
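For context, a rough sketch of the kind of minimal Comet + Keras example being described; the API key, project name, and model are placeholders, not the actual script:

```python
# comet_ml must be imported before tensorflow/keras so its auto-logging can hook model.fit
from comet_ml import Experiment
import tensorflow as tf

experiment = Experiment(api_key="YOUR_API_KEY", project_name="mnist-demo")  # placeholders

(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# no explicit callbacks: training and validation losses still appear live in the Comet UI
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=2)
```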
using the ClearML agent
instead of, say, the binary the task was launched with
```
platform: "tensorflow_savedmodel"
input [
  {
    name: "dense_input"
    data_type: TYPE_FP32
    dims: [ -1, 784 ]
  }
]
output [
  {
    name: "activation_2"
    data_type: TYPE_FP32
    dims: [ -1, 10 ]
  }
]
```
I'm not sure how to double-check this is the case when it happens... usually we have all requirements specified in the git repo.
But it's been that way for over 1 hour. I remember I can force the task to wait for the upload; how do I do this?
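If it helps, a hedged sketch of one way this is commonly done with the ClearML SDK, assuming the pending upload is an artifact (the artifact name and file path are hypothetical):

```python
from clearml import Task

task = Task.current_task()

# block until this specific artifact has finished uploading
task.upload_artifact("predictions", artifact_object="./preds.csv", wait_on_upload=True)

# or, before the script exits, wait for all pending uploads to complete
task.flush(wait_for_uploads=True)
```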
Hi SuccessfulKoala55! Has the docker compose been updated with this?