```
Name: clearml
Version: 1.1.6
Summary: ClearML - Auto-Magical Experiment Manager, Version Control, and MLOps for AI
Home-page:
Author: Allegroai
Author-email: clearml@allegro.ai
License: Apache License 2.0
Location: /home/junpyo/.conda/envs/minikube/lib/python3.8/site-packages
Requires: psutil, numpy, jsonschema, requests, urllib3, PyYAML, python-dateutil, future, attrs, pathlib2, Pillow, furl, pyjwt, six, pyparsing
Required-by:
```
I'm using Ubuntu 18.04, and the Docker command was `docker-compose -f /opt/clearml/docker-compose.yml up -d`, as described in the docs.
I changed my directory to where clearml-data is installed and just ran the same command with `sudo ./clearml-data`, and it works.
The weird thing is that after I run `clearml-data create ...`, I can see the project and dataset are created in draft mode, while the CLI spits out the error above.
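For context, the flow I'm running is roughly this (project and dataset names here are placeholders):

```bash
# Create a new dataset entry; it shows up on the server in draft mode
clearml-data create --project my_project --name my_dataset

# Stage local files into the dataset
clearml-data add --files ./data

# Upload the staged files and finalize the dataset version
clearml-data close
```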
In step #9 of this doc: https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_linux_mac
It says "Grant access to the Dockers, depending upon the operating system."
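If I'm reading that step right, on Linux it boils down to something like this (assuming the default /opt/clearml install path):

```bash
# Let the ClearML Server containers (which run as uid/gid 1000) own the data folders
sudo chown -R 1000:1000 /opt/clearml
```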
TimelyPenguin76 I think the http://clear.ml web server is down. I can't access the web server or the ClearML API.
SuccessfulKoala55 The file was created after running clearml-init on the CLI.
Thank you for your support, Jake.
Even though I uploaded files named 001 to 010, only 004, 005, and 010 exist on the fileserver.
CostlyOstrich36 I'm using WebApp: 1.3.1-169 • Server: 1.3.1-169 • API: 2.17 Thanks.
```
Error: Insufficient permissions for files_server: http://localhost:8081
```
This is my whole output.
BTW, is there a possibility this problem is related to the access permissions of the /opt/clearml folder?
SuccessfulKoala55
The dataset file URL is set on upload and stored on the server
This might be the reason. I think the server IP on machine A is set to "localhost:port".
Then, after I change "localhost" to "<server IP>" on server A and re-upload the dataset, will it be accessible remotely?
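Something like this sketch is what I have in mind, assuming Dataset.upload's output_url parameter controls the stored file URLs (<server IP> and the names are placeholders):

```python
from clearml import Dataset

# Re-create the dataset so its file URLs are written with a reachable address
ds = Dataset.create(dataset_name="my_dataset", dataset_project="test_project")
ds.add_files(path="./data")

# output_url decides where the files are uploaded and which URL the server stores
ds.upload(output_url="http://<server IP>:8081")
ds.finalize()
```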
What is the desired output when I type `ls -al` in the clearml folder?
SuccessfulKoala55 OMG, I solved the problem. The cause was the permissions on the clearml-data executable.
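For anyone hitting the same thing, the check I'd suggest is roughly:

```bash
# See who owns the clearml-data entry point and whether it is executable
ls -l "$(which clearml-data)"

# Make it executable for the current user instead of falling back to sudo
chmod +x "$(which clearml-data)"
```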
```
>>> d = Dataset.get(dataset_name="Anonymous task (user@beryl 2022-03-23 04:05:19)", dataset_project="test_project").get_local_copy()
Retrying (Retry(total=2, connect=2, read=5, redirect=5, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0c23e4b490>: Failed to establish a new connection: [Errno 111] Connection refused')': /test_project/Anonymous%20task%20%28user%2540beryl%202022-03-23%2004%253A05%253A19%29.c05641c2e1c74389b471fb...
```
I just used the clearml-init command and pasted the API key from the localhost web server, like below. Is it normal that there's no files_server key in the api dict?

```
api {
    web_server:
    api_server:
    credentials {
        "access_key" = "my key"
        "secret_key" = "my key"
    }
}
```
CostlyOstrich36
I'm taking a look to see if it's possible.
Thank you for the response. Dataset.squash works fine, but squash downloads all the datasets before squashing, so I don't think it's right for me since the dataset size is huge. I'll try uploading all at once. BTW, is this a bug, or did I do something wrong?
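For reference, this is roughly how I called it (the name and IDs are placeholders):

```python
from clearml import Dataset

# squash merges the listed dataset versions into one new dataset, but it
# first pulls a local copy of every input, which is the expensive part for me
merged = Dataset.squash(
    dataset_name="squashed_dataset",
    dataset_ids=["<dataset_id_1>", "<dataset_id_2>"],
)
```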
AbruptCow41 Yes, it's possible to do so, but I wanted to upload in parallel if I can, and I'm wondering whether it's a kind of bug.
http://<IP>:8081/ranix_pp_result/2022%252F04%252F04.29adfa8dbfa24489a0d9e9[…]tifacts/data/dataset.29adfa8dbfa24489a0d9e959d947b971.zip
How can I check SDK version?
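(For later readers: this is what I ended up using, assuming the package exposes __version__:)

```python
import clearml

# Prints the installed SDK version string, e.g. "1.1.6"
print(clearml.__version__)
```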
Is this bug resolved in the latest version?
I'm not sure how Airflow workers run. What I'm trying to do is upload "different files" to "one clearml-dataset" in parallel. My DAG looks like below; each task from "transform_group" executes ClearML-related dataset tasks. Sorry for my bad explanation.
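Concretely, each task in "transform_group" does something like this sketch (names are placeholders):

```python
from clearml import Dataset

def upload_part(part_dir: str) -> None:
    # Every parallel Airflow task fetches the same draft dataset...
    ds = Dataset.get(dataset_name="my_dataset", dataset_project="test_project")
    # ...appends its own files and uploads; the concurrent uploads are
    # where some files go missing on the fileserver
    ds.add_files(path=part_dir)
    ds.upload()
```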
TimelyPenguin76 It's working now. Thx for your support.
It says some files were not deleted on several experiments. But the files were also not deleted on a project where the error did not occur.
Also, I tried putting the fileserver information into the configuration file, but nothing changed.
Also, none of the datasets have any dependency (parent) on another dataset.
I just added it manually to the config file and it works fine.
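For completeness, after the manual edit my api section looks roughly like this (host and keys are placeholders; 8080/8008/8081 are the default web/API/fileserver ports):

```
api {
    web_server: http://<server IP>:8080
    api_server: http://<server IP>:8008
    files_server: http://<server IP>:8081
    credentials {
        "access_key" = "my key"
        "secret_key" = "my key"
    }
}
```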