But I don't want to create a new dataset; my dataset already exists and was downloaded by a previous task.
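To be clear, what I want is something like the following (just a sketch; the project and dataset names here are placeholders, not my real ones):

```python
from clearml import Dataset

# Fetch the existing dataset (already registered by a previous task)
# instead of creating a new one; the names are placeholders.
dataset = Dataset.get(dataset_project="simclr", dataset_name="my_dataset")

# Returns a local cached copy; if a previous task already downloaded it,
# the cached folder is reused rather than downloaded again.
local_path = dataset.get_local_copy()
print(local_path)
```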
No particular information in the console (no error), and no network error either.
DeterminedCrab71 You are right. If I understand correctly, HTTP.FILE_BASE_URL is undefined, so the file to delete is described as "misc" instead of "fc", and I guess the system is then unable to launch the deletion of the file.
My files (fs) are deleted, but I have the same issue as reported by SuperiorPanda77, with some undefined value that is said not to be deleted. I guess that since my deleteFileServerSources command works but exits with a strange return value, the other commands in the chain, addFaieldDeletedFiles and deleteProjectFromRoot, are not executed (file src/app/webapp-common/shared/entity-page/entity-delete/base-delete-dialog.effects.ts).
OK, it works. I also need to specify :80 in the output_uri in my Python file!
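For reference, this is roughly what it looks like now (a sketch; the project/task names and the "clearml" bucket are taken from elsewhere in this thread, so treat them as assumptions):

```python
from clearml import Task

# The explicit :80 port is what makes the MinIO address parse correctly;
# "s3://minio.10.68.0.250.nip.io/clearml" (no port) did not work for me.
task = Task.init(
    project_name="simclr",   # assumed project name
    task_name="train",       # assumed task name
    output_uri="s3://minio.10.68.0.250.nip.io:80/clearml",
)
```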
I defined the HOST (ENDPOINT) like this, but it doesn't change anything.
It asks me for credentials for the root server (minio.10.68.0.250.nip) and not for the bucket where the data is stored (minio.10.68.0.250.nip/simclr); only this bucket has read/write permission.
Most of the time it is due to bad parsing of the IP address. You need to be sure the IP address is parsed correctly, and for that I need to specify the port used by my MinIO server even though it is the standard HTTP port (80). So 'address:80' works, but "address" alone does not.
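A quick way to check the parsing outside of the training script is to upload a small file directly; this is only a sketch, and the local file name and target path are made up:

```python
from clearml import StorageManager

# Hypothetical check: if the explicit :80 port is missing from the URL,
# the MinIO address is not parsed correctly and the upload fails.
remote_url = StorageManager.upload_file(
    local_file="check.txt",  # any small local file
    remote_url="s3://minio.10.68.0.250.nip.io:80/clearml/debug/check.txt",
)
print(remote_url)
```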
I was unable to define FILE_BASE_URL inside the Docker container. I modified the HTTP constant in app.constants.ts with hard-coded values, compiled the webapp again (npm) and replaced it in my Docker container, and now it works...
I use a proxy and the port is 80; do I need to write it?
sudo docker logs clearml-fileserver
This gives no info at all. Maybe I should increase the log level to debug. The only message I got is about "werkzeug", the default server module of Flask, which shouldn't be used for production deployments (by the way, why not use gunicorn as the entrypoint in docker-compose?).
It seems that I should define this variable via an environment variable in ConfigurationService.globalEnvironment.
I think I have my answer; this is hard-coded in the agent: base_cmd += ( (['--name', name] if name else []) + ['-v', conf_file+':'+DOCKER_ROOT_CONF_FILE] + (['-v', host_ssh_cache+':'+mount_ssh] if host_ssh_cache else []) + ...
I tried to modify all the docker_internal_mounts entries, but the mount point for the clearml.conf file still remains the same. Maybe it is defined on the server side?
I took a look at src/app/webapp-common/shared/entity-page/entity-delete/base-delete-dialog.effects.ts.
I see that an error is raised in the mergeMap at line 125, but I'm not familiar enough with TypeScript to find out why.
My configuration.json is {"fileBaseUrl": "http://file.10.68.0.250.nip.io"}, but HTTP.FILE_BASE_URL still remains undefined. Is something missing?
Yes, so far I have gone back to the old address 🙂
Yes, I even got an "upload finished" message and the whole process runs to the end.
By default, I put nothing in the task, but then I use a ClearMLSaver like this: ClearMLSaver(logger, output_uri="…"), where clearml is my bucket.
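Roughly how it is wired up (a sketch; the model/trainer are placeholders, the bucket URL with the explicit :80 port is an assumption based on the rest of this thread, and the import path depends on the pytorch-ignite version):

```python
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import Checkpoint
from ignite.contrib.handlers.clearml_logger import ClearMLLogger, ClearMLSaver

model = nn.Linear(10, 2)                      # placeholder for the real model
trainer = Engine(lambda engine, batch: None)  # placeholder for the real training loop

clearml_logger = ClearMLLogger(project_name="simclr", task_name="train")  # assumed names
# "clearml" bucket and the explicit :80 port are taken from elsewhere in the thread
saver = ClearMLSaver(clearml_logger, output_uri="s3://minio.10.68.0.250.nip.io:80/clearml")
checkpoint_handler = Checkpoint({"model": model}, saver, n_saved=2)
trainer.add_event_handler(Events.EPOCH_COMPLETED, checkpoint_handler)
```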
My artifacts are now deleted, but the directories where the artifacts were stored are not deleted.
In docker-compose, the image was allegroai/clearml:latest when I pulled the Docker images. When I launch it, after installation the WebApp shows the following information: "WebApp: 1.3.1-169 • Server: 1.3.1-169 • API: 2.17".
Files are stored on the same box where Docker is running, and there is a mount point between the file server container and the host itself.
What I don't understand is the list of artifacts that were not deleted.
When I deleted the experiment, I obtained the following window:
ClearML results page: http://clearml.10.68.0.250.nip.io/projects/300ec77013504f51a7f295226c3f7e40/experiments/5418cf58b64f425a9a17fbd4af6cfee8/output/log
Traceback (most recent call last):
  File "/app/.clearml/venvs-builds/3.8/code/__init__.py", line 287, in <module>
    [train_data, test_data, train_loader, test_loader, nb_class] = import_data(root_database, train_path, test_path,
  File "/app/.clearml/venvs-builds/3.8/code/__init__.py", line 153, in import_data
...
As my ClearML server runs in Docker, I have no idea where http://clearml.10.68.0.250.nip.io/projects/300ec77013504f51a7f295226c3f7e40/experiments/5418cf58b64f425a9a17fbd4af6cfee8/output/log is actually stored.
From your point of view, could it be related to the SDK client that triggers the upload? To the urllib request?
only a "upload failed" and no data in my S3 bucket
I tried that, but it was not so easy, because there is a Python executable, "update_from_env", that empties the configuration.json file. So I created a file in /mnt/external_files/configs and my configuration.json was read.
The addresses seem strange; is this the hostname?
I use the nip.io service to have subdomains: clearml.domain, api.domain and file.domain, which all point to the same host.