If 3e5962dd
is the commit it's trying to clone, it doesn't exist because I deleted it.
It should be cloning a more up-to-date version of the repository.
I'm also not sure what this is: -H "Content-Type: application/octet-stream" -H 'NV-InferRequest: batch_size: 1 input { name: "dense_input" dims: [-1, 784] } output { name: "activation_2" cls { count: 1 } }'
I can do curl http://localhost:8080, but it's a remote server, so unless I do X forwarding I can't browse it.
Hi SuccessfulKoala55, I'm having some issues with this. I've set a concurrency limit of 2, and I can see 3 workers running.
Yeah, that's fair enough. Is it possible to assign CPU cores? I wasn't aware.
AgitatedDove14 I noticed a lot of my tasks don't contain these graphs though...
Hiya Jake, how do I inject this with the extra_docker_shell_script setting?
um, this line is not doing anything for me 🤔
controller_clearml_task = Task.current_task()
controller_clearml_task.set_resource_monitor_iteration_timeout(seconds_from_start=10)
"this means the elasticsearch feature set remains the same. and JDK versions are usually drop-in replacements when on the same feature level (ex. 1.13.0_2 can be replaced by 1.13.2)"
In particular, I ran the agent with clearml-agent daemon --queue test-concurrency --create-queue --services-mode 2 --docker "ubuntu:20.04" --detached and enqueued 4 tasks to it that sleep 15 minutes.
I can see all 4 tasks running, see:
Right, I used --services-mode 2, and it still runs more than 2 tasks simultaneously.
I'm probably sending the request all wrong, and I'm not sure what input format the model expects.
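For what it's worth, here's a minimal sketch of how that request body might be built, assuming the model expects raw little-endian float32 bytes for the [-1, 784] dense_input described in the NV-InferRequest header above. The endpoint path and the byte layout are my assumptions, not confirmed:

```python
import numpy as np

# Hypothetical single MNIST-style sample: batch of 1, 784 float32 features,
# matching the dims [-1, 784] declared for "dense_input".
batch = np.random.rand(1, 784).astype(np.float32)
payload = batch.tobytes()  # raw little-endian float32 bytes

# 1 sample * 784 values * 4 bytes per float32 = 3136 bytes
print(len(payload))  # 3136

# The POST itself (left commented out; the URL path is a guess):
# import requests
# requests.post(
#     "http://localhost:8080/serve/dense_model",
#     data=payload,
#     headers={
#         "Content-Type": "application/octet-stream",
#         "NV-InferRequest": (
#             'batch_size: 1 '
#             'input { name: "dense_input" dims: [-1, 784] } '
#             'output { name: "activation_2" cls { count: 1 } }'
#         ),
#     },
# )
```

If the server rejects that, the first thing I'd double-check is whether it wants raw bytes at all or a JSON body instead.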
Because sometimes it clones a cached version of a private repository, instead of cloning the requested version
I expected to see 2 tasks running, and then when those completed the remaining 2 would start. Is this not the expected behavior?
Do I need to set the extra_index_url in clearml.conf as well?
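In case it helps, here's roughly what I mean, a sketch of a clearml.conf fragment (the key path agent.package_manager.extra_index_url is how I understand it, and the index URL is just a placeholder):

```
# clearml.conf (on the agent machine) -- sketch only
agent {
    package_manager {
        # extra pip repositories searched in addition to PyPI
        extra_index_url: ["https://your.private.index/simple"]
    }
}
```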
This is a minimal Comet example. I'm afraid I don't know what it does under the hood... There are no callbacks on the metrics tracked in model.fit, and yet if you check out your project on the website, your training and validation losses are tracked automatically, live.
Hey SuccessfulKoala55, thanks for the answer. Any ideas how I can try to fix this?
In fact I just did that yesterday. I'll let you know how it goes
AgitatedDove14 yeah it should be..
I think it's still caching environments... I keep deleting the caches (pip, vcs, venvs-*) and running an experiment. It re-creates all these folders and even prints:
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests>=2.20.0->clearml==1.6.4->prediction-service-utilities==0.1.0) (3.4)
Requirement already satisfied: charset-normalizer<4,>=2 in /root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages (from requests>=2.20.0->clearml==1.6....