Basically, set my Host storage as Azure
AgitatedDove14 , let me clarify what I meant: say I have all the data from a finished run, like checkpoints, test and train log dirs, and the scripts that were used to train the model. How would I upload all of that to the ClearML server without retraining, so that 'Scalars', 'Debug Samples', 'Hyperparameters', everything shows up on the ClearML server like it normally does?
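For reference, something along these lines is what I'm after (a rough sketch only; the paths, project/task names, and the TensorBoard-parsing approach are my own illustration, not a confirmed ClearML workflow):
` from trains import Task
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Open a fresh task to hold the already-finished run's results
task = Task.init(project_name="My Project", task_name="imported run")  # placeholder names
logger = task.get_logger()

# Replay scalars out of the existing TensorBoard event files
ea = EventAccumulator("/path/to/train_logdir")  # placeholder path
ea.Reload()
for tag in ea.Tags().get("scalars", []):
    for event in ea.Scalars(tag):
        logger.report_scalar(title=tag, series=tag, value=event.value, iteration=event.step)

# Attach existing checkpoints as artifacts
task.upload_artifact(name="checkpoint", artifact_object="/path/to/checkpoint.pt")  # placeholder path `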
Oh! With the sleep() function? Let me try it again
Thanks 🙂
Thanks AgitatedDove14 , I'll go through these and get back to you
Right, parsing the TB is too much work, I'll look into the material you sent. Thanks!
Indeed, sleep() did the trick, but it's going into the Debug Samples tab and not Plots, any reason why? Earlier (with Task.init() followed by Task.get_task()) the same plt plots got reported to 'Plots'.
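(Side note: an explicit logger flush before the script exits might be a cleaner alternative to the sleep(); Logger does have a flush() call, though whether it fully removes the need for the sleep here is my assumption:)
` import time
from trains import Task

task = Task.get_task(task_id='task_id')  # placeholder task ID
logger = task.get_logger()
# ... report figures here ...
logger.flush()    # push queued reports to the server before the process exits
time.sleep(5)     # small grace period for async uploads (assumption: may still be needed) `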
Understood, I'll look into it!
My use case is: let's say I'm training with a file called train.py in which I have Task.init(). Now, after the training is finished, I generate some more graphs with a file called graphs.py and want to attach/upload them to this finished training task. That's when I realised Task.get_task() is not working as intended, but it is when I have a Task.init() before it.
` from trains import Task
import matplotlib.pyplot as plt
import numpy as np
import time

task = Task.get_task(task_id='task_id')
for i in range(0, 10):
    x_1 = np.random.rand(50)
    y_1 = np.random.rand(50)
    x_2 = np.random.rand(50)
    y_2 = np.random.rand(50)
    plt.figure()
    plt.scatter(x_1, y_1, alpha=0.5)
    plt.scatter(x_2, y_2, alpha=0.5)
    # Plot will be reported automatically
    # plt.show()
    task.get_logger().report_matplotlib_figure(title="My Plot Title", serie...
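(For context, the truncated call above is Logger.report_matplotlib_figure; a complete invocation looks roughly like this, where the series name is my placeholder:)
` task.get_logger().report_matplotlib_figure(
    title="My Plot Title",
    series="My Plot Series",   # placeholder series name
    iteration=i,
    figure=plt,
    report_image=False,        # False keeps it an interactive plot instead of a debug image
) `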
No, the sample code I sent above works as intended and uploads to 'Plots'. But the main code I've written, which is almost identical to the sample, behaves differently.
AgitatedDove14 , thanks a lot! I'll get back with a script in a day or two.
Okay, my bad, the code snippet I sent is correctly uploading it to 'Plots' but in the actual script I use this:
` def plot_graphs(input_df, output_df, task):
    # Plotting all the graphs
    for metric in df_cols:
        # Assigning X and Y axes data
        in_x = input_df["frameNum"]
        in_y = input_df[metric]
        out_x = output_df["frameNum"]
        out_y = output_df[metric]
        # Creating a new figure
        plt.figure()
        plt.xlabel('Frame Number')
        ...
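For completeness, a self-contained sketch of what I assume the full function looks like (the y-labels, plot style, and series names are my guesses, and df_cols is passed in explicitly here):
` import matplotlib.pyplot as plt

def plot_graphs(input_df, output_df, task, df_cols):
    # Plot each metric for input vs. output and report it to the given task
    for metric in df_cols:
        plt.figure()
        plt.xlabel('Frame Number')
        plt.ylabel(metric)  # assumption: y-axis label is the metric name
        plt.plot(input_df["frameNum"], input_df[metric], label='input')
        plt.plot(output_df["frameNum"], output_df[metric], label='output')
        plt.legend()
        task.get_logger().report_matplotlib_figure(
            title=metric, series=metric, iteration=0, figure=plt
        ) `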
I tried it, and it is uploading to Debug Samples (with Task.get_task()) using task.get_logger().report_matplotlib_figure(), but with a Task.init() it's uploading to Plots.
Checking with the RC package now
It's going to Debug Samples with the RC too
0.16.1-320
you mean 0.16?
Ah my bad, I picked up the version from the docker-compose file :D
Hey SuccessfulKoala55 , I'm new to Trains and want to set up an Azure Storage Container to store my model artifacts. I see that we can do this by providing an output_uri in Task.init(), but is there another way to send all the artifacts to Azure instead of using Task.init()? Like setting a variable somewhere, so that whenever I run my tasks I know the artifacts will get stored in Azure even if I don't provide an output_uri
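(For anyone who hits this later: as far as I understand, the SDK section of trains.conf has a default_output_uri setting that does exactly this; the account and container names below are placeholders:)
` sdk {
    development {
        # Used whenever Task.init() is called without an explicit output_uri
        default_output_uri: "azure://<account-name>.blob.core.windows.net/<container-name>"
    }
} `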
Are you doing plt.imshow ?
Nope
And yes, I set the report_image=False
Thanks a lot SuccessfulKoala55 🙂
This is what I get with the ' https:// ', this is at least getting a response from Azure
` 2020-12-03 13:48:49,667 - trains.Task - INFO - No repository found, storing script code instead
TRAINS results page: http://<IP>:8080/projects/<hash>/output/log
2020-12-03 13:48:51,505 - trains.Task - INFO - Waiting for repository detection and full package requirement analysis
2020-12-03 13:48:53,315 - trains.Task - INFO - Finished repository detection and package analysis
2020-12-03 13:48:53,315 -...
I did that, and it works flawlessly. I swear I installed the Azure blob storage package multiple times, anyway thanks a lot again for the detailed debugging O:)
Oh it worked! I did the pip install multiple times earlier, but to no avail. I think it's because of the env variables? Let me try to unset those and provide it within the trains.conf
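(This is the trains.conf block I'm moving the credentials into; to my knowledge this is the standard azure.storage section under sdk, with placeholder values:)
` sdk {
    azure.storage {
        containers: [
            {
                account_name: "<account-name>"
                account_key: "<account-key>"
                container_name: "<container-name>"
            }
        ]
    }
} `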
Yes, I copied and pasted the configuration file correctly, and it points to the right server (running on Azure). I created a clearml.conf file on another Azure VM (not under the VPN) and there it worked fine. The 'on-premise' server fails to connect to the ClearML server because of the VPN, I think.
` Verifying credentials ...
Error: could not verify credentials: key=xxxxxx secret=xxxxx
Enter user access key: `
AgitatedDove14 , here's the code snippet you requested
Should we also provide credentials for the Storage Account on the Web UI under 'Profile' section?
` 2020-12-03 13:31:27,296 - trains.storage - ERROR - Azure blob storage driver not found. Please install driver using "pip install 'azure.storage.blob>=2.0.1'"
Traceback (most recent call last):
File "test.py", line 3, in <module>
task = Task.init(project_name="Test", task_name="debugging")
File "/home/sam/VirtualEnvs/test/lib/python3.8/site-packages/trains/task.py", line 461, in init
task.output_uri = cls.__default_output_uri
File "/home/sam/VirtualEnvs/test/lib/python3.8/site-...
Ahh okay, commented out the whole thing and got the same error as earlier (Could not get access credentials)
This is the whole error dump:
` 2020-12-03 13:31:27,296 - trains.storage - ERROR - Azure blob storage driver not found. Please install driver using "pip install 'azure.storage.blob>=2.0.1'"
Traceback (most recent call last):
File "test.py", line 3, in <module>
task = Task.init(project_name="Test", task_name="debugging")
File "/home/sam/VirtualEnvs/test/lib/python3.8/site-packages/trains/task.py", line 461, in init
task.output_uri = cls.__default_output_uri
File "/home/sam/Virtu...