Verifying credentials ... Error: could not verify credentials: key=xxxxxx secret=xxxxx Enter user access key:
Thanks AgitatedDove14 , I'll go through these and get back to you
I tried it about 2-3 months ago with trains-init (same use-case as this one) and it failed that time too.
Could it be the credentials are actually incorrect?
Highly unlikely, like I said, I generated a new set of credentials from the Web-UI and it worked perfectly fine for an Azure VM (not under the VPN).
Are you doing `plt.imshow` ?
Nope
And yes, I set the report_image=False
So clearml-init can be skipped, and I provide the users with a template and ask them to append the credentials at the top, is that right? What about the "Credential verification" step of the clearml-init command? That won't take place in this workflow, right? Will that be a problem?
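For reference, a template of the kind described above might look like the sketch below (the server addresses are made-up examples, and the placeholder keys are where each user would paste their own credentials):

```
# Sketch of a clearml.conf template to hand out -- addresses are examples.
api {
    web_server: http://clearml-server.example.com:8080
    api_server: http://clearml-server.example.com:8008
    files_server: http://clearml-server.example.com:8081
    credentials {
        "access_key" = "PASTE_ACCESS_KEY_HERE"
        "secret_key" = "PASTE_SECRET_KEY_HERE"
    }
}
```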
AgitatedDove14 , is there a way to set the default output URI flag so that if and when a new user creates a clearml.conf the URI is already in it? I was hoping that there's a universal flag somewhere. Asking this because I want all the Models and Artifacts to be stored in one place and the users shouldn't have to edit their configuration files.
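If I understand ClearML's configuration layout correctly, the flag being asked about here is `sdk.development.default_output_uri` in clearml.conf; a sketch of the relevant fragment (the Azure container path is a made-up example):

```
sdk {
    development {
        # New tasks default their model/artifact output to this URI
        default_output_uri: "azure://mycontainer/clearml-artifacts"
    }
}
```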
AgitatedDove14 , I'll have a look at it and let you know. According to you the VPN shouldn't be a problem right?
Hey SuccessfulKoala55 , I'm new to Trains and want to set up an Azure Storage Container to store my model artifacts. I see that we can do this by providing an output_uri in Task.init() but is there another way to send all the artifacts to Azure instead of using the Task.init()? Like setting a variable somewhere, so that whenever I run my tasks I know the artifacts will get stored in Azure even if I don't provide an output_uri
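One environment-variable approach, assuming ClearML honors `CLEARML_DEFAULT_OUTPUT_URI` (the container path here is a made-up example):

```shell
# Sketch: set a global default output URI so tasks pick it up
# without an explicit output_uri argument. Path is hypothetical.
export CLEARML_DEFAULT_OUTPUT_URI="azure://mycontainer/clearml-artifacts"
echo "$CLEARML_DEFAULT_OUTPUT_URI"
```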
I tried setting the variables with export but got this error:
```
Traceback (most recent call last):
  File "test.py", line 1, in <module>
    from trains import Task
  File "/home/sam/VirtualEnvs/test/lib/python3.8/site-packages/trains/__init__.py", line 4, in <module>
    from .task import Task
  File "/home/sam/VirtualEnvs/test/lib/python3.8/site-packages/trains/task.py", line 28, in <module>
    from .backend_interface.metrics import Metrics
  File "/home/sam/VirtualEnvs/test/lib/pyth...
```
I did that, and it works flawlessly. I swear I installed the Azure blob storage package multiple times, anyway thanks a lot again for the detailed debugging O:)
What if I just copy a clearml.conf file and edit out the tokens? Could that work?
It is http btw, I don't know why it logged https://
Okay, my bad, the code snippet I sent is correctly uploading it to 'Plots' but in the actual script I use this:
```python
def plot_graphs(input_df, output_df, task):
    # Plotting all the graphs
    for metric in df_cols:
        # Assigning X and Y axes data
        in_x = input_df["frameNum"]
        in_y = input_df[metric]
        out_x = output_df["frameNum"]
        out_y = output_df[metric]
        # Creating a new figure
        plt.figure()
        plt.xlabel('Frame Number')
        ...
```
No, the sample code I sent above works as intended, uploads to 'Plots'. But the main code that I've written, which is almost identical to the sample code, behaves differently.
Basically, set my Host storage as Azure
Yep it does, thanks AgitatedDove14 :)
Right, parsing the TB is too much work, I'll look into the material you sent. Thanks!
It's going to Debug Samples with the RC too
No, those env variables aren't set.
Yes, copied and pasted the configuration file correctly, points to the right server (running on Azure). I created a clearml.conf file on another Azure VM (not under the VPN) and there it worked fine. The 'on-premise' server fails to connect to the ClearML server because of the VPN I think
Understood, I'll look into it!
My use case is this: I'm training with a file called train.py which has Task.init() in it. After the training is finished, I generate some more graphs with a file called graphs.py and want to attach/upload them to this training task which has finished. That's when I realised Task.get_task() is not working as intended, but it is when I have a Task.init() before it.
I tried it, and with Task.get_task() it uploads to Debug Samples via task.get_logger().report_matplotlib_figure() , but with a Task.init() it uploads to Plots.
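The flow being compared, as an untested pseudocode-level sketch (the task ID and figure are placeholders; the method names follow ClearML's Logger API, but as noted above the destination tab may differ depending on whether the task comes from Task.init or Task.get_task):

```
# Sketch: report a matplotlib figure to an existing (finished) task
from clearml import Task
import matplotlib.pyplot as plt

task = Task.get_task(task_id="<existing-task-id>")  # placeholder ID

fig = plt.figure()
plt.plot([0, 1], [0, 1])

task.get_logger().report_matplotlib_figure(
    title="My Plot", series="series A", figure=fig, iteration=0
)
```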
Checking with the RC package now
```
*   Trying X.Y.Z.W:8080...
* TCP_NODELAY set
* Connected to X.Y.Z.W (X.Y.Z.W) port 8080 (#0)
> GET /debug.ping HTTP/1.1
> Host: X.Y.Z.W:8080
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Found
< Location:
< Connection: close
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Content-Security-Policy: frame-ancestors
<
* Closing connection 0
```
Indeed, sleep() did the trick but it's going into the Debug Samples tab and not the Plots, any reason why? Earlier (with Task.init() followed by Task.get_task()) the same plt plots got reported to 'Plots'.
AgitatedDove14 , Let me clarify, I meant, let's say I have all the data like checkpoints, test and train logdirs, scripts that were used to train a model. So, how would I upload all of that to the ClearML server without retraining a model, so that the 'Scalars', 'Debug Samples', 'Hyperparameters', everything show up on ClearML server like they generally do?
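One way to stage that kind of data for upload, as a stdlib-only sketch: bundle the finished run directory (checkpoints, logdirs, scripts) into a single archive, which could then be attached to a task with something like task.upload_artifact() (the directory layout below is hypothetical, and the ClearML call is only mentioned in a comment):

```python
import os
import shutil
import tempfile

def archive_run(run_dir: str, out_dir: str) -> str:
    """Zip a finished run directory and return the archive path.
    The resulting file could be passed to task.upload_artifact()."""
    base = os.path.join(out_dir, os.path.basename(run_dir.rstrip(os.sep)))
    return shutil.make_archive(base, "zip", run_dir)

# Minimal demo with a throwaway run directory
tmp = tempfile.mkdtemp()
run = os.path.join(tmp, "run1")
os.makedirs(os.path.join(run, "checkpoints"))
open(os.path.join(run, "checkpoints", "model.ckpt"), "w").close()

archive = archive_run(run, tmp)
print(os.path.basename(archive))  # → run1.zip
```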
Yeah I noticed that too! Ports are configured properly in the conf file though
Got this, times out after 5 tries:
```
2021-02-08 18:49:25,036 - clearml - WARNING - InsecureRequestWarning: Certificate verification is disabled! Adding certificate verification is strongly advised. See: Retrying (Retry(total=239, connect=240, read=239, redirect=240, status=240)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='X.Y.Z.W', port=8015): Read timed out. (read timeout=3.0)",)': /auth.login
```
Thanks a lot SuccessfulKoala55 🙂
0.16.1-320
you mean 0.16?
Ah my bad, I picked up the version from docker-compose file :D