Yes, that is the error I get when trying to launch a custom Slack alert service (when not running it locally)
I restarted the cleanup service. Now I get some messages like this:
2021-07-16 12:39:46,736 - clearml.storage - ERROR - Failed creating storage object file:// Reason: 'NoneType' object has no attribute 'replace'
2021-07-16 12:39:46,736 - clearml.Task - ERROR - Failed deleting None: 'NoneType' object has no attribute 'delete'
WARNING:root:Could not delete Task ID=eb11c92928af477e9e732d0cad47a57e, sequence item 0: expected str instance, NoneType found
any idea?
I see this indeed when I create a new project with an empty description. Is this also possible for older projects created before clearml 1.0? For these projects the button is not there
Yes, it did. I think that makes sense
It is the same with rc4. Under the Variables tab it keeps hanging on 'Collecting data...'. OS: Ubuntu 18.04, PyCharm CE 2020.3
I am using gitlab, I can create an access token. From the gitlab page:
"Personal Access Tokens
You can generate a personal access token for each application you use that needs access to the GitLab API."
However, I now have an access token, not a username/password. Is there also an option to authenticate with the access token?
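What I have in mind is something like this in clearml.conf; a sketch, assuming the agent clones over HTTPS and that a GitLab personal access token can be used in place of the password (the values below are placeholders):
```
agent {
    git_user: "mygitlabuser"                 # placeholder username
    git_pass: "glpat-XXXXXXXXXXXXXXXXXXXX"   # personal access token instead of a password
}
```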
Yes, I add these metrics as extra columns and then I sort them. I want to know which experiments performs best in daylight for example or which during night. Therefore I think a is not the right choice
Some of the experiments are done on a GCP instance instead of the local server on which we also run ClearML. The experiments running on GCP report to the same local clearml server, but the IP address for clearml configured on the GCP instance is different (and then forwarded). Is this the problem?
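For reference, the api section of clearml.conf on the GCP instance looks roughly like this; the address is a placeholder for the forwarded IP and the ports are the defaults:
```
api {
    web_server: http://34.120.10.5:8080     # hypothetical forwarded address of the local server
    api_server: http://34.120.10.5:8008
    files_server: http://34.120.10.5:8081
    credentials {
        access_key: "..."
        secret_key: "..."
    }
}
```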
Upgrading to what? 2020.3.3 is the latest version: https://www.jetbrains.com/pycharm/download/other.html
Okay, I am working with medical images, and when running a testing script I want to save the predictions (also big medical images, of another modality). What happens when I do logger.upload_artifact(..)? Is a file then copied to this folder?
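For concreteness, this is the kind of call I mean; a sketch with made-up project/task names and file path, assuming the method lives on the Task object in recent clearml versions:
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="predictions-upload")  # hypothetical names

# upload_artifact reads the file from disk and uploads a copy to the configured
# storage (files server / output destination); the original file stays where it is
task.upload_artifact(name="prediction", artifact_object="predictions/case_001.nii.gz")
```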
Old legacy code that has its own folder structure per experiment. I can also do it the other way around. Does task.get_output_destination() return the folder including project name and <task_name>.<task_id>?
I see task.get_output_destination() returns a URL like http://localhost:8081. Is it possible to get the folder with the artifacts/models?
It is the folder that clearml creates and the folder we create ourselves to store the predictions
Yes I see:
"The default location for output models and other artifacts. If True is passed, the default files_server will be used for model storage. In the default location, ClearML creates a subfolder for the output. The subfolder structure is the following: <output destination name> / <project name> / <task name>.<Task ID>"
So it makes a folder in the output destination: <project name>/<task name>.<Task ID>. It is not possible to specify the full output destination, right?
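For reference, what I am doing now is roughly this; a sketch with placeholder names, assuming output_uri sets the base location and ClearML still appends the <project name>/<task name>.<Task ID> subfolder underneath it, as quoted above:
```python
from clearml import Task

task = Task.init(
    project_name="medical-imaging",       # hypothetical project name
    task_name="inference",                # hypothetical task name
    output_uri="s3://my-bucket/clearml",  # base destination; the subfolder is still added under it
)
```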
It is for storing the predictions a trained model makes, so two different models create slightly different images
I see. Will it be possible in the future to write custom/unsupported formats directly to the folder? Because we are working with very big files, having them stored in multiple locations is something we try to avoid
Yes, I wanted confirmation that this is also a good solution for datasets with medical images
This was when using one task in a multiprocessing.Pool and the next one in the main process. I switched to running all tasks in a separate process via ProcessPoolExecutor and now it runs fine 👍 (version 0.17.5)
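For reference, the setup I switched to looks roughly like this; a simplified sketch with made-up names, where every Task gets its own worker process:
```python
from concurrent.futures import ProcessPoolExecutor
from clearml import Task

def run_experiment(name):
    # each worker process creates and closes its own Task
    task = Task.init(project_name="examples", task_name=name)  # hypothetical names
    # ... actual training / evaluation code ...
    task.close()

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        list(pool.map(run_experiment, ["exp-a", "exp-b"]))
```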
Yes, other than the link it generates, it works fine
And is there an easy way to get all the metrics associated with a project?
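Something like this is what I had in mind; a sketch, assuming Task.get_tasks and get_last_scalar_metrics work the way I think, with a placeholder project name:
```python
from clearml import Task

# fetch all tasks in a project and print their last reported scalar values
tasks = Task.get_tasks(project_name="examples")  # hypothetical project name
for t in tasks:
    # returns a nested dict: {metric_title: {series: {"last": ..., "min": ..., "max": ...}}}
    print(t.name, t.get_last_scalar_metrics())
```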
I enqueue the service to the services queue; I have not done anything myself with agents
Ubuntu 18.04 and Python 3.6. The subprocess is created by subclassing multiprocessing.Process and then calling the .start() method
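Roughly like this; a stripped-down sketch of that pattern, with the worker body left as a placeholder:
```python
import multiprocessing

class Worker(multiprocessing.Process):
    def run(self):
        # code here executes in the child process
        pass

if __name__ == "__main__":
    p = Worker()
    p.start()  # spawns the subprocess, which calls run()
    p.join()
```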