Yes I see:
"The default location for output models and other artifacts. If True is passed, the default files_server will be used for model storage. In the default location, ClearML creates a subfolder for the output. The subfolder structure is the following: <output destination name> / <project name> / <task name>.<Task ID>"
So it makes a folder in the output destination <project_name>/<task name>.<Task ID>. It is not possible to specify the full output destination right?
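For reference, a minimal sketch of how that base destination is set via Task.init's output_uri parameter; the project name, task name, and bucket path below are hypothetical:

```python
from clearml import Task

# Point output models/artifacts at a base destination; ClearML then
# appends <project name>/<task name>.<Task ID> underneath it.
task = Task.init(
    project_name="medical-images",        # hypothetical project name
    task_name="test-predictions",         # hypothetical task name
    output_uri="s3://my-bucket/clearml",  # or True for the default files_server
)
```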
I see, will it be possible in the future to directly write custom/unsupported formats to that folder? We are working with very big files, and having them stored in multiple locations is something we try to avoid.
Okay, so I have to first save the generated image somewhere and then with logger.report_media it is copied to the folder?
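A sketch of that two-step flow; the image content and path below are stand-ins for the real prediction output:

```python
import numpy as np
from PIL import Image
from clearml import Task

task = Task.init(project_name="medical-images", task_name="report-media-demo")
logger = task.get_logger()

# 1. Save the generated prediction image to a local file first...
img = Image.fromarray(np.zeros((64, 64), dtype=np.uint8))  # stand-in for a real prediction
img.save("/tmp/prediction_slice_042.png")

# 2. ...then report it; ClearML uploads a copy to the files server.
logger.report_media(
    title="predictions",
    series="slice_042",
    iteration=0,
    local_path="/tmp/prediction_slice_042.png",
)
```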
It is for storing the predictions a trained model makes, so two different models create slightly different images.
Old legacy code that has its own folder structure per experiment. I can also do it the other way around. Does task.get_output_destination() return the folder including project name and <task_name>.<task_id>?
I see, task.get_output_destination() returns a URL like http://localhost:8081. Is it possible to get the folder with the artifacts/models?
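Based on the subfolder structure quoted above, the full path can be composed by hand; a sketch, not an official API (names are hypothetical):

```python
from clearml import Task

project_name = "medical-images"  # hypothetical
task = Task.init(project_name=project_name, task_name="test-predictions")

# get_output_destination() returns only the base (e.g. http://localhost:8081);
# per the docs quote above, the task subfolder is <project>/<task name>.<task id>.
base = task.get_output_destination()
full = f"{base}/{project_name}/{task.name}.{task.id}"
print(full)
```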
Yes, now a new unique folder is created per experiment where the predictions are saved. That works. The only thing is that there is now both the folder ClearML makes for an experiment and the folder that stores the results, so two folders with artifacts per experiment. I was wondering if there was a more efficient solution and whether they could be combined.
It is the folder that ClearML creates and the folder we create ourselves to store the predictions.
Okay, I am working with medical images, and when running a testing script I want to save the predictions (also big medical images, of another modality). What happens when I do logger.upload_artifact(..)? Is a file copied to this folder?
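A sketch of what that call looks like in the SDK, where upload_artifact lives on the task object; the file path below is hypothetical:

```python
from clearml import Task

task = Task.init(project_name="medical-images", task_name="upload-demo")

# upload_artifact registers the local file and uploads a copy of it
# to the task's output destination (files server / bucket).
task.upload_artifact(
    name="prediction_volume",
    artifact_object="/data/predictions/case_001.nii.gz",  # hypothetical path
)
```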
btw the same happens when I try this on localhost
# ClearML SDK configuration file
api {
    # Notice: 'host' is the api server (default port 8008), not the web server.
    api_server:
    web_server:
    files_server:

    # Credentials are generated using the webapp,
    # Override with os environment: CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY
In the process MyProcess other processes are created via a ProcessPoolExecutor. In these processes calls to logger.report_matplotlib_figure are made, but I get the same issue when I remove these calls.
It looks like I don't have hanging issues when I use mp.set_start_method('spawn') at the top of the script.
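A minimal sketch of that workaround; the pool work function here is a stand-in for the real figure-reporting code:

```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

def work(i: int) -> int:
    # the real code reports matplotlib figures here via logger.report_matplotlib_figure
    return i * i

if __name__ == "__main__":
    # forcing 'spawn' instead of the default 'fork' avoided the hangs
    mp.set_start_method("spawn")
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(work, range(8))))
```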
I don't have a fully reproducible example that I can share, sorry for that.
Yes that is the error I get when trying to launch a custom slack alert service (when not running it locally)
I am using GitLab, and I can create an access token. From the GitLab page:
"Personal Access Tokens
You can generate a personal access token for each application you use that needs access to the GitLab API."
However, now I have an access token, not a username/password. Is there also an option to authenticate with the access token?
I don't see that option in my ~/clearml.conf?
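For the record, a sketch of how git credentials go into the agent section of clearml.conf, with the GitLab personal access token used as the password; the values below are placeholders:

```
agent {
    # with GitLab, the personal access token goes in git_pass
    git_user: "myusername"
    git_pass: "<personal-access-token>"
}
```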
I was looking for all the metric names, similar to what you get when clicking '+ metric' in the customize-columns menu. But it turns out I will implement it in a different way, so it's not needed anymore.
Okay finalize works. I was looking here: https://github.com/allegroai/clearml/blob/master/docs/datasets.md
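For reference, a minimal sketch of the flow from that doc that ends in finalize; the dataset name, project, and path are hypothetical:

```python
from clearml import Dataset

ds = Dataset.create(dataset_name="ct-scans", dataset_project="medical-images")
ds.add_files("/data/ct_scans")  # hypothetical local folder
ds.upload()                     # push the files to the files server / storage
ds.finalize()                   # lock the dataset version
```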
I created it with clearml-init, nothing special. It looks like:
# ClearML SDK configuration file
api {
    # Notice: 'host' is the api server (default port 8008), not the web server.
    api_server:
    web_server:
    files_server:

    # Credentials are generated using the webapp,
    # Override with os environment: CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY
    credentials {"access_key": "NMWOE5C3RGTX473M9D3M", "secret_key": "L@G50jO+TJ23#8Eerp1E$4y=elUt11P!BL...
Yes, I add these metrics as extra columns and then I sort on them. I want to know which experiments perform best in daylight, for example, or which at night. Therefore I think a … is not the right choice.
With a name of 98 characters, errors like 'munmap_chunk(): invalid pointer', 'double free or corruption (!prev)' or 'free(): invalid next size (normal)' occur later in my script. But maybe this is not related to ClearML but to the filesystem. Just wondering if there is a maximum length on the ClearML side.
Is there a way to test this? It seems my git user and token are correct. I can do git clone https://<NAME_TOKEN>:<ACCESSTOKEN>@gitlab.com/mycompany/repo.git
However, when starting the service it fails with:
cloning: git@gitlab.com:mycompany/repo.git
Using user/pass credentials - replacing ssh url 'git@gitlab.com:mycompany/repo.git' with https url 'https://gitlab.com/mycompany/repo.git'
2021-01-18 20:04:08
User aborted: stopping task (3)
And is there an easy way to get all the metrics associated with a project?
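A sketch of one way to pull them programmatically with the Python SDK; the project name below is hypothetical:

```python
from clearml import Task

# collect the last reported scalar values for every task in a project
for task in Task.get_tasks(project_name="medical-images"):
    metrics = task.get_last_scalar_metrics()  # {title: {series: {"last", "min", "max"}}}
    print(task.name, metrics)
```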