
I have done this but I remember someone once told me this could be an issue... Or I could be misremembering. I just wanted to double check
@<1523701087100473344:profile|SuccessfulKoala55> hey Jake, how do I check how many envs it caches? Running `ls -la .clearml/venvs-cache` gives me two folders
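For reference, a minimal sketch of the relevant section of the agent's clearml.conf (key names from the default config; defaults may differ by version). As far as I know, each subfolder under venvs-cache is one cached environment, capped by max_entries:
` agent {
    venvs_cache {
        # maximum number of cached virtual environments kept on disk
        max_entries: 10
        # caching is enabled by setting the cache path
        path: ~/.clearml/venvs-cache
    }
} `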
I think it's still caching environments... I keep deleting the caches (pip, vcs, venvs-*) and running an experiment. It re-creates all these folders and even prints:
` Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests>=2.20.0->clearml==1.6.4->prediction-service-utilities==0.1.0) (3.4)
Requirement already satisfied: charset-normalizer<4,>=2 in /root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages (from requests>=2.20.0->clearml==1.6.... `
hi SuccessfulKoala55! has the docker compose been updated with this?
But where do you manually set the name of each task in this code?
the `.component` has a `name` argument you can provide
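A minimal sketch of that with the pipeline decorator (the step function and its name here are made up for illustration):
` from clearml.automation.controller import PipelineDecorator

# name= sets the task name this step shows up under in the UI
@PipelineDecorator.component(name="preprocess_step", return_values=["data"])
def preprocess_step(source):
    # runs as its own task when the pipeline executes
    data = source
    return data `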
So far I have taken one MNIST image and done the following:
` from PIL import Image
import numpy as np

def preprocess(img, dtype, h, w):
    # convert to single-channel grayscale
    sample_img = img.convert('L')
    # resize to the model's expected spatial size
    resized_img = sample_img.resize((w, h), Image.BILINEAR)
    # flatten to shape (1, h*w) to match the model's dims [-1, 784]
    resized = np.array(resized_img).reshape(1, h * w)
    return resized.astype(dtype)

# png img file
img = Image.open('./7.png')
# preprocessed img, FP32-formatted numpy array
img = preprocess(img, "float32", 28, 28) `
...
platform: "tensorflow_savedmodel" input [ { name: "dense_input" data_type: TYPE_FP32 dims: [-1, 784] } ] output [ { name: "activation_2" data_type: TYPE_FP32 dims: [-1, 10] } ]
ah.. agent was on a different machine..
instead of, say, the binary the task was launched with
tagging @<1523701205467926528:profile|AgitatedDove14> here just in case 😅
so it tries to find it under /usr/bin/python/
I assume?
I understand! this is my sysadmin's message:
"if nothing else, they could publish a new elasticsearch image of 7.6.2 (ex. 7.6.2-1) which uses a newer patched version of JDK (1.13.x but newer than 1.13.0_2)"
I'm not sure how to double-check this is the case when it happens... usually we have all requirements specified in the git repo
` logger.report_media(
    title=name_title,
    series="Nan",
    iteration=0,
    local_path=fig_nan,
    delete_after_upload=delete_after_upload,
)
clearml_task.upload_artifact(
    name=name_title,
    artifact_object=fig_nan,
    wait_on_upload=True,
) `
right, and why can't a particular version be found? how does it try to find python versions?
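My understanding (an assumption, not confirmed in this thread) is that the agent resolves an interpreter matching the python version recorded on the task, e.g. python3.8 on the PATH, unless one is pinned explicitly in the agent's clearml.conf:
` agent {
    # force a specific interpreter instead of resolving
    # one from the task's recorded python version
    python_binary: "/usr/bin/python3.8"
} `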
yeah, that's fair enough. is it possible to assign CPU cores? I wasn't aware
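When the agent runs tasks in docker mode, one way to pin cores is through extra docker arguments in clearml.conf (a sketch; the core range here is arbitrary):
` agent {
    # pass-through arguments for docker run; pin the container to cores 0-3
    extra_docker_arguments: ["--cpuset-cpus=0-3"]
} `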