
Hey AgitatedDove14, thanks for the answer. What does that mean? In any case, I think it would be a nice-to-have feature.
Ok gotchu. I'll do that as soon as I can.
I think the issue is that the host is not trusted... it seems like it's looking into the index
And then you'll hook it
So far I have taken one mnist image, and done the following:
```
from PIL import Image
import numpy as np

def preprocess(img, format, dtype, h, w, scaling):
    # convert to grayscale and resize to a single 1 x (w*h) strip of pixels
    sample_img = img.convert('L')
    resized_img = sample_img.resize((1, w * h), Image.BILINEAR)
    resized = np.array(resized_img)
    resized = resized.astype(dtype)
    return resized

# png img file
img = Image.open('./7.png')

# preprocessed img, FP32 formatted numpy array (format and scaling are unused here)
img = preprocess(img, None, "float32", 28, 28, None)
```
...
Because sometimes it clones a cached version of a private repository, instead of cloning the requested version
Hi CostlyOstrich36
I added this instruction at the very end of my postprocess function: `shutil.rmtree("~/.clearml")`
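Small note on that call: shutil.rmtree does not expand "~", so a literal "~/.clearml" only matches a directory actually named "~" in the current working directory. A minimal sketch of the expanded form:

```python
import os
import shutil

# expand "~" explicitly -- shutil.rmtree does not perform tilde expansion itself
shutil.rmtree(os.path.expanduser("~/.clearml"), ignore_errors=True)
```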
Not sure why it tries to establish some http connection, or why it's /
...
```
platform: "tensorflow_savedmodel"
input [
  {
    name: "dense_input"
    data_type: TYPE_FP32
    dims: [ -1, 784 ]
  }
]
output [
  {
    name: "activation_2"
    data_type: TYPE_FP32
    dims: [ -1, 10 ]
  }
]
```
i'm just interested in actually running a prediction with the serving engine and all
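For reference, here is a minimal sketch of how one might send the preprocessed array to the Triton backend over its HTTP API. The server URL/port and the model name are assumptions that depend on how the serving engine is deployed; the input/output names and shapes follow the config above:

```python
import numpy as np
import tritonclient.http as httpclient

# assumed host/port and model name -- adjust to your serving deployment
client = httpclient.InferenceServerClient(url="localhost:8000")

# input/output names and shapes taken from the config: dense_input [-1, 784], activation_2 [-1, 10]
infer_input = httpclient.InferInput("dense_input", [1, 784], "FP32")
infer_input.set_data_from_numpy(img.reshape(1, 784).astype(np.float32))

result = client.infer(model_name="mnist_savedmodel", inputs=[infer_input])
print(result.as_numpy("activation_2"))  # raw scores for the 10 digit classes
```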
I have done this but I remember someone once told me this could be an issue... Or I could be misremembering. I just wanted to double check
i'm not sure how to double check this is the case when it happens... usually we have all requirements specified with git repo
i'm guessing the cleanup_period_in_days can only actually run every day or whatever if the script is enqueued to services
if i enqueue the script to the services queue but run_as_service is false, what happens?
SuccessfulKoala55 I can't get it to work... I tried using the pip conf locally and it works, but the agent doesn't seem to be able to install the package
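In case it's useful, the agent has its own pip index settings in clearml.conf, so a private index configured in the local pip conf also needs to be made visible to the agent. A minimal sketch, with a placeholder index URL:

```
# ~/clearml.conf on the agent machine (placeholder URL)
agent {
    package_manager {
        extra_index_url: ["https://my.private.pypi/simple"]
    }
}
```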
right, seems to have worked now!
I think it's still caching environments... I keep deleting the caches (pip, vcs, venvs-*) and running an experiment. It re-creates all these folders and even prints:
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests>=2.20.0->clearml==1.6.4->prediction-service-utilities==0.1.0) (3.4)
Requirement already satisfied: charset-normalizer<4,>=2 in /root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages (from requests>=2.20.0->clearml==1.6....
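One thing to check is the agent's venv cache, which is separate from the pip cache and is configured in clearml.conf. A sketch based on the default config layout, assuming the commented-out path is what enables/disables caching:

```
# ~/clearml.conf -- venv caching is controlled here
agent {
    venvs_cache {
        # leaving "path" unset / commented out disables venv caching
        # path: ~/.clearml/venvs-cache
        max_entries: 10
        free_space_threshold_gb: 2.0
    }
}
```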
Hi SuccessfulKoala55 , do you have an update on this?
hiya Jake, how do I inject this with the extra_docker_shell_script setting?
it's from the github issue you sent me but i don't know what the "application" part is or the "NV-InferRequest:...."
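For context, extra_docker_shell_script lives in the agent section of clearml.conf and takes a list of shell lines that the agent executes inside the docker container at startup. A minimal sketch with a placeholder command:

```
# ~/clearml.conf on the agent (placeholder command)
agent {
    extra_docker_shell_script: ["apt-get install -y some-package"]
}
```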