Answered
How to make the ClearML-Server web service secure using SSL by setting up NGINX?

Hi everyone,

Does anyone have any pointers on how to make the ClearML-Server web service secure using SSL by setting up NGINX?

I have played around with it a bit in relation to getting a JupyterHub setup working over HTTPS; however, I think that was more luck than a clear understanding of how I achieved it.

Securing ClearML-Server would seem to be slightly more involved given the interactions between the fileserver, API server and web server, so I was wondering if anyone had an example NGINX configuration and could furnish me with some steps on how to secure the webserver?

  
  
Posted 3 years ago

Answers 18


VivaciousPenguin66 your docs were helpful, I got SSL running, but my question remains:
have you kept the needed HTTP services accessible and only run the authentication via HTTPS?

```
api_server: "http://<my-clearml-server>:8008"
web_server: " "
files_server: "http://<my-clearml-server>:8081"
```

My current state is that the webserver is accessible via both HTTP and HTTPS, on 8080 and 443.

  
  
Posted 3 years ago

Hi VivaciousPenguin66
Thanks for sharing, giving it a try now.

After you set up the webserver to point to 443 with HTTPS, what did you do with the rest of the HTTP services ClearML is using?

Does the webserver on 8080 remain accessible, and are you pointing to it in your ~/clearml.conf?
What about the apiserver and fileserver (8008 & 8081)?

  
  
Posted 3 years ago

Ohhhhhhhhhhhhhhhhhhhh... that makes sense.

  
  
Posted 3 years ago

VivaciousPenguin66 might be. Note that the webserver actually acts as a reverse proxy for the fileserver, so you can shut off access to port 8081 and rely on the fact that <webserver-address>/files will be redirected to the fileserver (as long as you configure your ClearML SDK accordingly; see the sketch below).
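For illustration, a client-side ~/clearml.conf api section under that setup might look roughly like the following. This is only a sketch based on the description above: the hostname is a placeholder, and routing the API server over HTTPS assumes NGINX is also proxying port 8008, which is not shown in this thread.

```
api {
    # assumption: NGINX terminates TLS on 443 in front of the web UI
    web_server: "https://my-clearml-server"
    # assumption: the API server is also proxied over HTTPS by NGINX
    api_server: "https://my-clearml-server:8008"
    # route file traffic through the webserver's /files reverse proxy instead of port 8081
    files_server: "https://my-clearml-server/files"
    credentials {
        access_key: "<your-access-key>"
        secret_key: "<your-secret-key>"
    }
}
```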

  
  
Posted 3 years ago

SuccessfulKoala55 New issue on securing server ports opened on the clearml-server repo:

https://github.com/allegroai/clearml-server/issues/78

  
  
Posted 3 years ago

SuccessfulKoala55 WearyLeopard29 could this be a potential approach?
The setup here is for apps on different ports, which seems to me to be exactly the ClearML problem.
So could we extrapolate and add an API app and a FILESERVER app definition with the correct ports? (See the sketch after the config below.)

https://gist.github.com/apollolm/23cdf72bd7db523b4e1c

```
# the IP(s) on which your node server is running. I chose port 3000.
upstream app_geoforce {
    server 127.0.0.1:3000;
}

upstream app_pcodes {
    server 127.0.0.1:3001;
}

# Point http requests to https
server {
    listen 0.0.0.0:80;
    server_name your-domain-name;           # domain was stripped in the original post
    server_tokens off;
    return 301 https://$host$request_uri;   # redirect target was stripped in the original post
}

# the secure nginx server instance
server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/public.crt;
    ssl_certificate_key /etc/nginx/ssl/private.rsa;

    server_name your-domain-name;           # domain was stripped in the original post
    access_log /var/log/nginx/myapp.log;
    error_log /var/log/nginx/myapp_error.log;
    # pass the request to the node.js server with the correct headers and much more can be added, see nginx config options

    location /favicon.ico { alias /home/ubuntu/img/favicon_rc.ico; }

    location / {
        # auth_basic "Restricted";
        # auth_basic_user_file /home/ubuntu/app/.htpasswd;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Ssl on;

        proxy_pass http://app_geoforce;     # upstream URL was stripped in the original post
        proxy_redirect off;
    }

    location /pcodes/ {
        rewrite /pcodes/(.*)$ /$1 break;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Ssl on;

        proxy_pass http://app_pcodes;       # upstream URL was stripped in the original post
        proxy_redirect off;
    }
}
```
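For illustration, adapting that multi-upstream pattern to ClearML's three services might look roughly like the sketch below. The subdomain names are hypothetical and the sketch assumes the ClearML containers keep listening on localhost ports 8080 (web), 8008 (API) and 8081 (files); this is not a configuration taken from the ClearML docs.

```
# Sketch only: TLS terminated by NGINX, ClearML containers still on localhost ports.
upstream clearml_web   { server 127.0.0.1:8080; }
upstream clearml_api   { server 127.0.0.1:8008; }
upstream clearml_files { server 127.0.0.1:8081; }

server {
    listen 443 ssl;
    server_name app.example.com;            # hypothetical subdomain
    ssl_certificate     /etc/nginx/ssl/public.crt;
    ssl_certificate_key /etc/nginx/ssl/private.rsa;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://clearml_web;
    }
}

server {
    listen 443 ssl;
    server_name api.example.com;            # hypothetical subdomain
    ssl_certificate     /etc/nginx/ssl/public.crt;
    ssl_certificate_key /etc/nginx/ssl/private.rsa;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://clearml_api;
    }
}

server {
    listen 443 ssl;
    server_name files.example.com;          # hypothetical subdomain
    ssl_certificate     /etc/nginx/ssl/public.crt;
    ssl_certificate_key /etc/nginx/ssl/private.rsa;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://clearml_files;
    }
}
```

The clearml.conf on the clients would then point api_server, web_server and files_server at these HTTPS endpoints.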

  
  
Posted 3 years ago

Understood.
SuccessfulKoala55 I point you to my disclaimer above... 😬

  
  
Posted 3 years ago

VivaciousPenguin66 note that for the fileserver, access is indeed public and is not controlled by access tokens (see the sketch below for one way to restrict it at the proxy).
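For illustration only: if the fileserver is exposed through the reverse proxy as described above, one common mitigation is HTTP basic auth on that route at the NGINX level. The /files/ path and the htpasswd file location are assumptions for this sketch (to be placed inside the HTTPS server block), not something ClearML ships, and any programmatic clients would then also need to send the same credentials.

```
# Sketch: basic-auth protect the proxied fileserver route (inside the HTTPS server block).
location /files/ {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;   # assumed htpasswd location

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8081/;           # fileserver kept on localhost only
}
```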

  
  
Posted 3 years ago

WearyLeopard29 no, I wasn't able to do that, although I didn't explicitly try.
I was wondering if this is as high a security risk as the web portal?
Access is controlled by keys, whereas the web portal is not.
I admit I'm a data scientist, so any proper IT security person would probably end up a shivering wreck in the corner of the room if they saw some of my common security practices. I do try to be secure, but I am not sure how good I am at it.

  
  
Posted 3 years ago

Absolutely SuccessfulKoala55

  
  
Posted 3 years ago

Hi VivaciousPenguin66 I was trying to do something similar. I was able to secure the webapp with something like this (the cert was at the LB level), but were you able to secure the ports for the fileserver and API server?

  
  
Posted 3 years ago

Hurrah! Can I ask you to open a GitHub issue with these details, including any required installations? I'd love to add that to the documentation...

  
  
Posted 3 years ago

(image attachment)

  
  
Posted 3 years ago

SuccessfulKoala55
SUCCESS!!!

This appears to be working.
Set up the certificates using `sudo certbot --nginx`.

Then edit the default configuration file in /etc/nginx/sites-available:

```
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {

    listen 443;
    server_name your-domain-name;

    ssl_certificate           /etc/letsencrypt/live/your-domain-name/fullchain.pem;
    ssl_certificate_key       /etc/letsencrypt/live/your-domain-name/privkey.pem;

    ssl on;
    ssl_session_cache  builtin:1000  shared:SSL:10m;
    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log            /var/log/nginx/jenkins.access.log;

    location / {

        proxy_set_header        Host $host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass          http://localhost:8080;                           # URL was stripped in the original post
        proxy_read_timeout  90;

        proxy_redirect      http://localhost:8080 https://your-domain-name;  # URLs were stripped in the original post
    }

}
```
Pointing your browser at http://your-domain-name or https://your-domain-name will result in the browser being forwarded to port 8080 on the local host machine via port 443 over SSL.

  
  
Posted 3 years ago

Oh, it's a load balancer, so it does that and more.
But I suppose the point holds: it provides an endpoint for external locations and then handles the routing to the correct resources.

  
  
Posted 3 years ago

SuccessfulKoala55 I am not that familiar with AWS. Is that essentially a port-forwarding service, where you have a secure endpoint that redirects to the actual server?

  
  
Posted 3 years ago

Hi VivaciousPenguin66, I think someone here already tried that, but with some difficulty. We ourselves do that using ELB on AWS.

  
  
Posted 3 years ago

I have changed the configuration file created by Certbot to listen on port 8080 instead of port 80; however, when I restart the NGINX service, I get errors relating to bindings.

```
server {
    listen 8080 default_server;
    listen [::]:8080 ipv6only=on default_server;
```

Restarting the service results in the following errors:

```
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2021-05-28 11:08:40 UTC; 5s ago
     Docs: man:nginx(8)
  Process: 70457 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
  Process: 70458 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=1/FAILURE)

May 28 11:08:39 clearml-server-head nginx[70458]: nginx: [emerg] bind() to 0.0.0.0:8080 failed (98: Unknown error)
May 28 11:08:39 clearml-server-head nginx[70458]: nginx: [emerg] bind() to [::]:8080 failed (98: Unknown error)
May 28 11:08:39 clearml-server-head nginx[70458]: nginx: [emerg] bind() to 0.0.0.0:8080 failed (98: Unknown error)
May 28 11:08:39 clearml-server-head nginx[70458]: nginx: [emerg] bind() to [::]:8080 failed (98: Unknown error)
May 28 11:08:40 clearml-server-head nginx[70458]: nginx: [emerg] bind() to 0.0.0.0:8080 failed (98: Unknown error)
May 28 11:08:40 clearml-server-head nginx[70458]: nginx: [emerg] bind() to [::]:8080 failed (98: Unknown error)
May 28 11:08:40 clearml-server-head nginx[70458]: nginx: [emerg] still could not bind()
May 28 11:08:40 clearml-server-head systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
May 28 11:08:40 clearml-server-head systemd[1]: nginx.service: Failed with result 'exit-code'.
May 28 11:08:40 clearml-server-head systemd[1]: Failed to start A high performance web server and a reverse proxy server.
```

I think this may be something to do with how Docker interacts with port bindings. Any ideas?
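For what it's worth, a bind failure on 0.0.0.0:8080 usually means another process already owns that port; on a standard ClearML Server install the webserver container publishes host port 8080, which would explain it. A quick way to confirm (commands shown for illustration):

```
# Show which process is listening on port 8080 (likely docker-proxy for the
# ClearML webserver container).
sudo ss -ltnp '( sport = :8080 )'

# List running containers and their published ports to confirm.
docker ps --format '{{.Names}}\t{{.Ports}}'
```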

  
  
Posted 3 years ago