For clarification, I want to store artifacts and models (and maybe tasks too) the way the web UI lets you specify an external S3 bucket when you make a folder, but with OneDrive.
I.e., in such a way that I can still see them in the UI but don't necessarily have them stored on the server.
This answered my question. I ended up setting up MinIO running off an external hard drive that backs up to OneDrive.
Though I am still having some problems getting the client to connect to MinIO (see recent thread)
Found the problem. Some port rules between my server and client were blocking it. Some autossh port forwarding solved my problem.
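For anyone hitting the same thing, a quick way to confirm the forwarded MinIO port is actually reachable from the client (a minimal sketch; the host and port are assumptions for my tunnel setup):

import socket

# Assumed setup: MinIO forwarded to localhost:9000 via the ssh tunnel
with socket.create_connection(("127.0.0.1", 9000), timeout=5):
    print("MinIO port reachable through the tunnel")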
clearml-init
ClearML SDK setup process
Please create new clearml credentials through the settings page in your clearml-server web app (e.g. None )
Or create a free account at None
In settings page, press "Create new credentials", then press "Copy to clipboard".
Paste copied configuration here:
api {
web_server: http://:8080
api_server: http://:8008
file...
Thank you!
I wasn’t getting my hopes up on storing tasks elsewhere :)
For the models & artifacts, is there a parameter to change the default save/load location to something else?
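A minimal sketch of what I was after, using the output_uri parameter of Task.init (the host and bucket name are placeholders for my setup):

from clearml import Task

# output_uri redirects model/artifact uploads away from the ClearML server;
# the host:port here must match a credentials entry in clearml.conf
task = Task.init(
    project_name="demo",
    task_name="minio-storage-test",
    output_uri="s3://my-minio-host:9000/clearml-artifacts",
)

If I understand the docs correctly, there is also a default_output_uri setting under sdk.development in clearml.conf that does the same thing globally.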
This is from running it with the credentials I got from the non-self-hosted ClearML instance:
clearml-init
ClearML SDK setup process
Please create new clearml credentials through the settings page in your clearml-server web app (e.g. None )
Or create a free account at None
In settings page, press "Create new credentials", then press "Copy to clipboard".
Paste copied configuratio...
None of the default ports are changed, and in firewalld I have ports 8080, 8081, and 8008 open for TCP.
With only those Docker containers running, I'm having this issue. In a few hours I'm going to test on an additional machine to confirm.
Yes I will give it a try and get back to you.
This is after script completion on the client end, with the file path leading to /clearml/task.py.
Never mind 😆, as you thought, it seems I just needed a few more F5s on my dashboard. Thank you very much for this.
And again thank you for the help with this.
This error is thrown by a failed .get() call on the StorageHandler object. I looked at the .__dict__.keys() list of the StorageHandler, and I don't see any way to access the dictionary directly.
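For context, the failing call looks roughly like this (a sketch, assuming the class is StorageHelper from clearml.storage.helper; the URL is a placeholder for my bucket):

from clearml.storage.helper import StorageHelper

# StorageHelper.get() resolves credentials for the URL's scheme and host;
# it fails when no matching credentials entry is found in clearml.conf
helper = StorageHelper.get("s3://my-minio-host:9000/clearml-artifacts")
print(list(helper.__dict__.keys()))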
I am able to capture ClearML experiments on the ClearML server running on the same machine as MinIO.
I placed the same key and secret in the global locations under s3 { }, and this did not change anything.
I'll put in the actual copy-paste later tonight. Thank you for the help.
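In the meantime, to sanity-check the key/secret against MinIO outside of ClearML, a direct boto3 call like this should work (the endpoint and credentials are placeholders for my setup):

import boto3

# Talk to MinIO directly, bypassing ClearML, to confirm the credentials work
s3 = boto3.client(
    "s3",
    endpoint_url="http://my-minio-host:9000",
    aws_access_key_id="<key>",
    aws_secret_access_key="<secret>",
)
print(s3.list_buckets()["Buckets"])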
>>> print(json.dumps(config_obj.get("sdk"), indent=2))
{
"storage": {
"cache": {
"default_base_dir": "~/.clearml/cache"
},
"direct_access": [
{
"url": "file://*"
}
]
},
"metrics": {
"file_history_size": 100,
"matplotlib_untitled_history_size": 100,
"images": {
"format": "JPEG",
"quality": 87,
"subsampling": 0
},
"tensorboard_single_series_per_graph": false
},
"network": {
"file_upload_retries":...
Is that being used as a dictionary key?
credentials: [
    # specifies key/secret credentials to use when handling s3 urls (read or write)
    {
        # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
        host: "...:9000"
        key: "*********"
        secret: "******************"
        multipart: false
        secure: false
    }
]
}
boto3 {
pool_connections: 512
max_multipart_concurrency: 16
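If I understand the matching correctly, the host:port in the credentials entry above has to match the host:port in the s3:// URL that gets uploaded to. A sketch of an upload that should pick up that entry (the host and bucket are placeholders):

from clearml import StorageManager

# The s3:// URL's host:port must match the credentials host in clearml.conf
# ("...:9000" in my config) for ClearML to select that key/secret
StorageManager.upload_file(
    local_file="model.pkl",
    remote_url="s3://my-minio-host:9000/clearml-artifacts/model.pkl",
)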
The exact error I am getting is:
line 1095, in output_uri
raise ValueError("Could not get access credentials for '{}' "
I discovered part of the problem. I did not have boto3 installed on this conda env.
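A quick check I should have run earlier, to confirm boto3 is importable in the active conda env:

# Verify boto3 is importable in the active environment
try:
    import boto3
    print("boto3", boto3.__version__)
except ImportError:
    print("boto3 is missing; install it with: pip install boto3")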
No error, just failure to upload it seems
@AgitatedDove14
ClearML Python package version: 1.91
Python version: 3.9.15
The server is running the docker-compose on RHEL.
MinIO is on the same server, and ports 9000 and 9001 are open for TCP.
I changed Docker's default address space from 172.xxx.xxx.xx to another range. This is not the issue, as I can replicate it without the modified address space.
See the configuration file below; I'm running the global-section test now.
aws {
s3 {...
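For reference, the "global section" test refers to putting the key/secret directly under s3 { } rather than inside a per-host credentials entry; roughly like this (values are placeholders for my setup):

aws {
    s3 {
        # global credentials, used when no per-host credentials entry matches
        key: "<key>"
        secret: "<secret>"
        region: ""
        credentials: [
            # per-host entries go here
        ]
    }
}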

