
Oh great, thanks! Was trying to figure out how the method knows that the docker image ID belongs to ECR. Do you have any insight into that?
using this method training_task.set_model_label_enumeration(label_map)
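For anyone else landing here, this is roughly the full shape of that call; the project/task names and the label_map values below are just placeholders:
```
from clearml import Task

# Placeholder project/task names
training_task = Task.init(project_name="examples", task_name="label-enum-demo")

# Hypothetical label -> id mapping; the real one comes from your own dataset
label_map = {"background": 0, "cat": 1, "dog": 2}

# Attach the enumeration to the task so models it registers carry the mapping
training_task.set_model_label_enumeration(label_map)
```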
Nope, from a remote server. It was that I had installed the package from git locally, so when pushing the task, clearml assumed it should also install from git. I since installed the package from the private pypi and it all works as expected now 🙂
I removed it and I still get the same error 😞
Only downside, which is not related to clearml, is that codeartifact authorisation tokens have to have a minimum lifespan of 15 mins. Usually, setting up envs before task execution takes less than a couple minutes, so the token lingers in the background. Nonetheless, all works as expected!
By script, you mean entering these two lines separately as a list for that extra_docker_shell_scripts argument?
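For context, the shape I had in mind in clearml.conf was something like the below; the key name and both commands are my guess at what was meant, not something I've verified:
```
agent {
    # Shell lines run inside the docker container before the task environment
    # is set up; repository/domain/account values are placeholders
    extra_docker_shell_script: [
        "pip install awscli",
        "aws codeartifact login --tool pip --repository my-repo --domain my-domain --domain-owner 123456789012",
    ]
}
```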
Just upgraded matplotlib, going to test now
Awesome, thank you. I will give that a try later this week and update if it worked as expected! May finally solve my private dependencies issue 😂
I don't think we explicitly pass the package path to the agent. I expect it to run a regular pip install but it seems to be doing it via git somehow
2021-03-01 20:51:55,655 - clearml.Task - INFO - Completed model upload to s3://15gifts-clearml/artefacts/pre-engine-traits/logistic-regression-paths-and-sales-tfidf-device-brand.8d68e9a649824affb9a9edf7bfbe157d/models/tfidf-logistic-regression-1614631915-8d68e9a649824affb9a9edf7bfbe157d.pkl *****
2021-03-01 20:52:01
2021-03-01 20:51:57,207 - clearml.Task - INFO - Waiting to finish uploads
It's a seaborn heatmap that needs to be plotted. Not sure if that is useful at all
Thanks AnxiousSeal95, will check it out! 🙂
I don't think it's that. It's a 20kb file upload. This was the last message just printed: ClearML Monitor: Could not detect iteration reporting, falling back to iterations as seconds-from-start
Thanks maestro. Will give this a go
While we're here, how can I return the model accuracy (or any performance metric for that matter) given a model or models belonging to a particular task? Is this information stored anywhere or do I need to explicitly log this data somehow?
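To make that concrete, this is roughly what I'm after, although I'm not sure these are the right calls; all names and values below are made up:
```
from clearml import Task

# During training: report the metric explicitly (placeholder value)
task = Task.init(project_name="examples", task_name="metrics-demo")
task.get_logger().report_scalar(
    title="accuracy", series="test", value=0.92, iteration=0
)
task.close()

# Later / elsewhere: fetch the task and read back the last reported scalars
fetched = Task.get_task(project_name="examples", task_name="metrics-demo")
print(fetched.get_last_scalar_metrics())
```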
On my local I have clearml 0.17.4
Yeah, it's not urgent. I will change the labels around to avoid this error 🙂 thanks for checking!
Locally or on the remote server?
ECR access should be enabled as part of the role the agent instance assumes when it runs a task
Sorry, just revisiting this as I'm only getting around to implementation now. How do you pass the ECR container ID to the defined task?
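In case it clarifies the question, this is what I was picturing; the image URI is a placeholder and I'm not sure set_base_docker is the intended way to do it:
```
from clearml import Task

task = Task.init(project_name="examples", task_name="ecr-image-demo")

# Placeholder ECR image URI - account, region, repo and tag are made up
task.set_base_docker("123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-repo:latest")

# When an agent running in docker mode picks this up, it should pull that image
task.execute_remotely(queue_name="default")
```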
As in an object from memory directly, without having to export the file first. I thought boto3 can handle this, but looking at the docs again, it doesn't look like it. File-like objects is their term, so maybe an export is required
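For reference, the sort of thing I was hoping for is below; my understanding is that an in-memory io.BytesIO buffer counts as a file-like object for boto3, though I haven't tested this, and the bucket/key names are placeholders:
```
import io
import pickle

import boto3

model = {"weights": [0.1, 0.2]}  # stand-in for the real model object

# Serialise straight into an in-memory buffer instead of exporting a file first
buffer = io.BytesIO(pickle.dumps(model))

# upload_fileobj accepts any file-like object, including BytesIO
s3 = boto3.client("s3")
s3.upload_fileobj(buffer, "my-bucket", "models/my-model.pkl")
```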
Okay, solved the problem. It is using the version that is locally installed (on my laptop). Is there a way to prevent this? Perhaps a requirements.txt or something like that?
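Something like this is what I had in mind, although I'm not sure it's the intended mechanism; the package name and version are just examples:
```
from clearml import Task

# Must be called before Task.init so the agent installs the pinned version
Task.add_requirements("clearml", "1.1.1")

task = Task.init(project_name="examples", task_name="pinned-requirements-demo")
```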
Ok, that explains a lot. The new user was using version 1.x.x and I was using version 0.17.x. That is why I think my task was being drafted and his was being aborted.
There is no specific use case for draft mode - it was just the mode I understood to be used for enqueuing a newly created task, but I assume that aborted now has the same functionality.
```
import matplotlib.pyplot as plt
import seaborn as sns

# preds_confusion_percentage, score, TRANSFORM_TYPE, model_export_name and task
# come from earlier in the script

# Plot the confusion matrix for predictions
sns.heatmap(
    preds_confusion_percentage, annot=True, fmt=".3f", linewidths=.5,
    square=True, cmap='Blues_r'
)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
title_str = f'Accuracy Score: {round(score, 2)}\n{TRANSFORM_TYPE}'
plt.title(title_str, size=15)

# Report the current matplotlib figure to ClearML
task.logger.report_matplotlib_figure(
    title=f"Performance Heatmap - {model_export_name}",
    series="Device Brand Predictions",
    iteration=0,
    figure=plt,
)
```
Could it be how I am trying to log the figure manually?
I thought nothing should be stored locally on the agent? Shouldn't all files be logged to the storage rather than the instance itself?
That's a good question, which I don't have an answer to 😅 I was hoping to be able to store the config file in some kind of secrets vault and authenticate via some in-memory credential or something like that