i don't think the conf is an issue. it's been deployed for a long time and working. models from yesterday correctly display the url
right, seems to have worked now!
Yeah, I simply used a different port but I got this output:
` (prediction_module) emilio@unicorn:~/clearml-serving$ docker run -v ~/clearml.conf:/root/clearml.conf -p 9501:9501 -e CLEARML_SERVING_TASK_ID=7ce187d2218048e68fc594fa49db0051 -e CLEARML_SERVING_POLL_FREQ=5 clearml-serving-inference:latest
CLEARML_SERVING_TASK_ID=7ce187d2218048e68fc594fa49db0051
CLEARML_SERVING_PORT=
CLEARML_USE_GUNICORN=
EXTRA_PYTHON_PACKAGES=
CLEARML_SERVING_NUM_PROCESS=
CLEARML_SERVING_POLL_FREQ=5
CLEARML_DEFAULT... `
if i enqueue the script to the services queue but run_as_service is false, what happens?
I see, ok!
I will try that out.
Another thing I noticed: none of my pipeline tasks are reporting these graphs, regardless of runtime. I guess this line would also fix that?
they are taking longer than 30 secs, but admittedly not much longer: 1-3 minutes
yes, I just ran steps 6-12 again from https://allegro.ai/docs/deploying_trains/trains_server_linux_mac/
` platform: "tensorflow_savedmodel"
input [
  {
    name: "dense_input"
    data_type: TYPE_FP32
    dims: [ -1, 784 ]
  }
]
output [
  {
    name: "activation_2"
    data_type: TYPE_FP32
    dims: [ -1, 10 ]
  }
] `
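for anyone following along: given that Triton config, a prediction request just needs a JSON body with a flat 784-value `dense_input` vector. a minimal sketch with only the standard library — the endpoint name and port are assumptions, substitute whatever you registered with clearml-serving:

```python
import json
import urllib.request

# Hypothetical endpoint name/port; use the model endpoint you actually registered.
url = "http://127.0.0.1:9501/serve/keras_mnist"

# dense_input expects 784 FP32 values (a flattened 28x28 image), per the config
# above; an all-zeros vector is enough to exercise the endpoint.
payload = json.dumps({"dense_input": [0.0] * 784}).encode("utf-8")

request = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)

# Uncomment once the serving container is up; the response should contain an
# "activation_2" vector of 10 class scores, per the output section of the config.
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read()))
```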
I am curious about the updates on version 1.0.0, where can I see some info regarding this?
Passing state information from pre to postprocessing and the dynamic preprocessing code thing, for example
Not sure why it tries to establish some http connection, or why it's /
...
Hi SuccessfulKoala55 , do you have an update on this?
it's from the github issue you sent me but i don't know what the "application" part is or the "NV-InferRequest:...."
sure. Removing the task.connect(args_) does not fix my situation
I have done this but I remember someone once told me this could be an issue... Or I could be misremembering. I just wanted to double check
Ok, going to ask the server admins, will keep you posted, thanks!
@<1523701087100473344:profile|SuccessfulKoala55> hey Jake, how do i check how many envs it caches? doing ls -la .clearml/venvs-cache gives me two folders
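for reference, the cache size is controlled in the agent section of clearml.conf; a sketch of the relevant keys (values here are illustrative, check them against your own agent config):

```
agent {
    venvs_cache: {
        # maximum number of cached virtual environments kept on disk
        max_entries: 10
        # skip caching if free disk space drops below this threshold
        free_space_threshold_gb: 2.0
        # where the cached venvs live (what `ls -la` above is listing)
        path: ~/.clearml/venvs-cache
    }
}
```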
I am tagging AgitatedDove14 since I sort of need an answer asap...!
well.. it initially worked but now i get the same thing 😕 SuccessfulKoala55
well, i have run the keras mnist example that is in the clearml-serving README. Now I'm just trying to send a request to make a prediction via curl
` Using cached repository in "/root/.clearml/vcs-cache/DeployKit_cloud.git.3e6952dd2fa4054e353465fe2d40daa3/DeployKit_cloud.git"
fatal: Could not read from remote repository. `
i'm not sure how to double check this is the case when it happens... usually we have all requirements specified with git repo
when an agent launches a task, it builds a venv, copies the code, runs it, etc. In my case, the code writes files (such as downloaded data or model files) into subfolders, and I'm interested in recovering the entire folder structure.
this is because if I run a different task, everything from the previous task is overwritten.
furthermore, I need the folder structure for other things downstream