So I've been testing bits and pieces individually.
For example, I made a custom image for the VMSS nodes, which is based on Ubuntu and has multiple CUDA versions installed, as well as conda and docker pre-installed.
I've managed to test the setup script, so that it executes on a pristine node and results in a compute node being added to the relevant queue, but that's been executed manually by me, as I have the credentials to log on via SSH.
And I had to do things to get the clearml-server the ma...
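For the record, the core of that setup script is roughly along these lines (server URL, credentials and queue name are placeholders, not the real values):

```bash
#!/bin/bash
# Sketch of the node setup script: configure the agent credentials and
# attach the node to the relevant queue. All values below are placeholders.
export CLEARML_API_HOST="https://api.your-clearml-server"
export CLEARML_API_ACCESS_KEY="<access_key>"
export CLEARML_API_SECRET_KEY="<secret_key>"

python -m pip install clearml-agent

# Start the agent as a daemon servicing the GPU queue, running tasks in docker.
clearml-agent daemon --queue gpu_queue --docker --detached
```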
AgitatedDove14
Just compared two uploads of the same dataset, one to Azure Blob and the other to local storage on clearml-server.
The local storage didn't report any statistics, so it might be confined to the cloud storage method, and specifically Azure.
If my memory serves me correctly, I think it happened on weights saving as well; let me just check an experiment log and see.
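For reference, the two uploads were along these lines; the project/dataset names and Azure URL here are placeholders rather than the exact ones I used:

```python
from clearml import Dataset

# Same local folder uploaded twice: once to Azure Blob, once to the default
# clearml-server fileserver (output_url=None).
for name, url in [
    ("azure-copy", "azure://mystorageaccount.blob.core.windows.net/clearml-data"),
    ("local-copy", None),
]:
    ds = Dataset.create(dataset_project="upload-tests", dataset_name=name)
    ds.add_files("./my_dataset")
    ds.upload(output_url=url)  # upload statistics only appeared for the Azure copy
    ds.finalize()
```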
I was thinking that I could run it on the compute node in the environment that the agent is executed from, but actually it is the environment inside the docker container that the Triton server is executing in.
Could I use the `clearml-agent build` command and the Triton serving engine task ID to create a docker container that I could then use interactively to run these tests?
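Something like this, I imagine (the target image name is a placeholder):

```bash
clearml-agent build --id <triton_task_id> --docker --target triton-debug
docker run -it --gpus all --entrypoint /bin/bash triton-debug
```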
My bad you are correct, it is as you say.
I am a bit confused because I can see configuration sections for Azure storage in the clearml.conf files, but these are on the client PC and the clearml-agent compute nodes.
So do these parameters have to be set on the clients and compute nodes individually, or is it something that can be set on the server?
AgitatedDove14 Thanks for that.
I suppose the same would need to be done for any client PC running clearml from which you are submitting dataset upload jobs?
That is, if the dataset is local to my laptop, or on a development VM that is not in the clearml system, and from there I want to submit a copy of the dataset, then I would need to configure the storage section in the same way as well?
I assume the account name and key refers to the storage account credentials that you can f...
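For anyone following along, the section I'm looking at in clearml.conf is shaped roughly like this (the account and container values are placeholders):

```
sdk {
    azure.storage {
        containers: [
            {
                account_name: "mystorageaccount"
                account_key: "<storage-account-key>"
                container_name: "clearml-data"
            }
        ]
    }
}
```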
I have managed to create a docker container from the Triton task and run it in interactive mode; however, I get a different set of errors, but I think these are related to the command-line arguments I used to spin up the docker container, compared to the command used by the clearml orchestration system.
My simplified docker command was: docker run -it --gpus all --ipc=host task_id_2cde61ae8b08463b90c3a0766fffbfe9
However, looking at the Triton inference server object logging, I can see there...
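Presumably the fuller command also needs Triton's service ports published, something like this (8000/8001/8002 per Triton's standard HTTP/gRPC/metrics defaults):

```bash
docker run -it --gpus all --ipc=host \
    -p 8000:8000 -p 8001:8001 -p 8002:8002 \
    task_id_2cde61ae8b08463b90c3a0766fffbfe9
```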
AgitatedDove14 that started out a lot shorter, and I read it twice, but I think it answers your question.....
AgitatedDove14 I would love to help the project.
I am just about to move house, which is stressful enough without a global pandemic(!), so until that's completed I won't commit to anything. However, once settled in the new place, and I have a bit more time, I would very much welcome contributing.
Oh cool!
So when the agent fires up, it gets the hostname, which you can then get from the API and pass back to take down a specific resource if it is deemed idle?
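Something along these lines with the APIClient, presumably (treat this as a sketch of the approach rather than the actual autoscaler logic):

```python
from clearml.backend_api.session.client import APIClient

# List registered workers with their last activity, so an external script can
# match a worker's hostname back to a VMSS instance and decide to remove it.
client = APIClient()
for worker in client.workers.get_all():
    print(worker.id, worker.last_activity_time)
```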
Yup, I can confirm that's the case.
I have literally just installed the latest commit via the master branch, and it works.
I think so.
I am doing this with one hand tied behind my back at the moment because I'm waiting to get an Azure AD App and Services policy set up, to enable the autoscaler to authenticate with the Azure VMSS via the Python SDK.
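Once that's in place, the authentication and scaling calls themselves should look roughly like this (the subscription, resource group and VMSS names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# DefaultAzureCredential picks up the AD App (service principal) credentials
# from the AZURE_CLIENT_ID / AZURE_TENANT_ID / AZURE_CLIENT_SECRET env vars.
credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, "<subscription-id>")

# e.g. scale the VMSS out by one node
vmss = compute.virtual_machine_scale_sets.get("<resource-group>", "<vmss-name>")
vmss.sku.capacity += 1
compute.virtual_machine_scale_sets.begin_create_or_update(
    "<resource-group>", "<vmss-name>", vmss
).result()
```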
When I run the commands you suggested above on the compute node, but on the host system within the conda environment I installed to run the agent daemon from, I get the same issues as we appear to have seen when executing the Triton inference service.
```
(py38_clearml_serving_git_dev) edmorris@ecm-clearml-compute-gpu-002:~$ python
Python 3.8.10 (default, May 19 2021, 18:05:58)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
...
```
I think I failed in explaining myself. I meant: instead of multiple CUDA versions installed on the same host/docker, wouldn't it make sense to just select a different out-of-the-box docker with the right CUDA directly from the public nvidia dockerhub offering? (This is just another argument on the Task that you can adjust.) Wouldn't that be easier for users?
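e.g. roughly like this (the image tag is just an example from NVIDIA's registry):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="train")
# The agent will run this task inside the given container, so each task can
# pick the CUDA version it needs instead of baking them all into the VM image.
task.set_base_docker("nvcr.io/nvidia/cuda:11.4.3-cudnn8-runtime-ubuntu20.04")
```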
Absolutely aligned with you there, AgitatedDove14. I understood you correctly.
My default is to work with native VM images, a...
Mr AgitatedDove14 Good spot sir!
Sounds like a good candidate, I will test now and report back.
I don't have a Scooby Doo what that pickle file is.
This is very cool, any reason for not using dockers for the multiple CUDA versions?
AgitatedDove14 my inexperience in using them a lot until recently. I can see how that is a better solution, and it's something I am actively trying to improve my understanding and use of.
I am now relatively comfortable with producing a Dockerfile, for example, although I've not got as far as making any docker-compose related things yet.
AgitatedDove14 apologies, I read my previous message, and I think perhaps it came across as way more passive-aggressive than I was intending. Amazing how missing a few words from a sentence can change the entire meaning!
What I meant to say was, it's going to be a busy few months for us whilst we move house, so I didn't want to say I'd contribute and then disappear for two months!
I've been working on an Azure load balancer example, heavily based on the AWS example. The load balanc...
SuccessfulKoala55 I am not that familiar with AWS. Is that essentially a port forwarding service, where you have a secure end point that redirects to the actual server?
Understood.
SuccessfulKoala55 I point you to my disclaimer above......
SuccessfulKoala55 WearyLeopard29 could this be a potential idea?
It appears the setup here is for apps on different ports, which seems to me to be exactly the clearml problem?
So could we extrapolate and put in an API app and a FILESERVER app description with the correct ports?
https://gist.github.com/apollolm/23cdf72bd7db523b4e1c
```
# the IP(s) on which your node server is running. I chose port 3000.
upstream app_geoforce {
    server 127.0.0.1:3000;
}

upstream app_pcodes {
    server 12...
```
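Extrapolating to clearml, I'd imagine something along these lines (assuming the default clearml-server ports: 8080 web, 8008 API, 8081 fileserver; domain names are placeholders):

```
upstream clearml_api {
    server 127.0.0.1:8008;
}

server {
    listen 443 ssl;
    server_name api.your-domain-name;
    location / {
        proxy_pass http://clearml_api;
    }
}
# ...and similarly an upstream/server pair for the fileserver on 8081
# and the web UI on 8080.
```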
SuccessfulKoala55 New issue on securing server ports opened on clearml-server repo.
Ohhhhhhhhhhhhhhhhhhhh...... that makes sense.
AgitatedDove14 Ok I can do that.
I was just thinking it through.
Would this be best if it were executed in the Triton execution environment?
SuccessfulKoala55
SUCCESS!!!
This appears to be working.
Set up certificates using `sudo certbot --nginx`.
Then edit the default configuration file in /etc/nginx/sites-available:
```
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name your-domain-name;
    ssl_certificate /etc/letsencrypt/live/your-domain-name/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain-name/privkey.pem;
...
```