Only you will have access to material uploaded to the SaaS files server; it has the same security as the web and API servers.
you can of course use your S3 bucket to store these resources
Port forwarding is an unconventional and inconvenient setup; you should configure a public address instead.
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_aws_ec2_ami has some info and links.
For a more secure option, I'd follow this guide to create a user/password environment: https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_config/#web-login-authentication
and use an AWS ELB to securely expose the three services through it.
You can then configure your client with the exposed addresses:
web_server: "http://[exposed web address]:8080"
api_server: "http://[exposed API address]:8008"
files_server: "http://[exposed files address]:8081"
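For reference, a minimal sketch of how those three addresses would look in the client-side clearml.conf; the host names are placeholders, not real endpoints:

```
# clearml.conf -- client connection settings (placeholder hosts)
api {
    # address exposed for the web UI
    web_server: "http://my-clearml.example.com:8080"
    # address exposed for the API server
    api_server: "http://my-clearml.example.com:8008"
    # address exposed for the files server
    files_server: "http://my-clearml.example.com:8081"
}
```

With an ELB in front, each of these would point at the listener that forwards to the matching service port.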
Another simple option would be our hosted SaaS at https://app.clear.ml
Hello DepravedSheep68 ,
In order to store your info in the S3 bucket you will need two things:
1. Specify the URI where you want to store your data when you initialize the task (see the output_uri parameter of the Task.init function: https://clear.ml/docs/latest/docs/references/sdk/task#taskinit )
2. Specify your S3 credentials in the clearml.conf file (which you did)
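The two steps above can be sketched as follows; the bucket name, region, and credential values are placeholders for illustration:

```
# clearml.conf -- S3 credentials section (placeholder values)
sdk {
    aws {
        s3 {
            key: "YOUR_AWS_ACCESS_KEY_ID"
            secret: "YOUR_AWS_SECRET_ACCESS_KEY"
            region: "us-east-1"
        }
    }
}
```

Then in code, passing output_uri="s3://my-bucket/clearml" (a placeholder bucket) to Task.init directs the task's models and artifacts to that bucket using the credentials above.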
DeterminedCrab71 are the artifacts and results sent to the ClearML EC2 server accessible to anyone, or would they be perfectly private and confidential?
Also, it would be great if you could direct me to the relevant documentation where all of the artifacts and plots displayed in the clearml-server are stored to a particular S3 bucket, and all of the information displayed in the server comes from the S3 bucket!
DepravedSheep68 , can you please give a bit more context on the error? Also can you show an example of your usage?
You can try the following:
sdk.development.default_output_uri: "s3://<YOUR_BUCKET>" in your clearml.conf, and in code simply use
Task.init(..., output_uri=True) . Let's see if that setup works.
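As a sketch, the config-side half of that setup would look like this; the bucket name is a placeholder:

```
# clearml.conf -- default destination for task outputs (placeholder bucket)
sdk {
    development {
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```

With this in place, calling Task.init with output_uri=True tells the task to upload its outputs to the configured default URI, so the bucket only needs to be set once per machine rather than in every script.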