
Hey SuccessfulKoala55, I'm new to Trains and want to set up an Azure Storage container to store my model artifacts. I see that we can do this by providing an output_uri in Task.init(), but is there another way to send all the artifacts to Azure instead of passing it through Task.init()? Like setting a variable somewhere, so that whenever I run my tasks I know the artifacts will get stored in Azure even if I don't provide an output_uri.
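(A minimal sketch of the global alternative I'm asking about, assuming the standard trains.conf layout; the account name, container, and exact URI format are placeholders rather than verified values:)

```
# trains.conf (clearml.conf in newer versions) -- sketch only, values are placeholders
sdk {
    development {
        # with this set, every Task.init() in this environment defaults to Azure,
        # so output_uri does not have to be passed explicitly each time
        default_output_uri: "azure://myaccount.blob.core.windows.net/mycontainer"
    }
}
```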
Yep it does, thanks AgitatedDove14 :)
It says the container name is optional in azure.storage. If I don't provide the container name and remove it from the end of the default_uri, would that work?
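(For reference, the azure.storage section I mean looks roughly like this in trains.conf; the account name, key and container below are placeholders:)

```
sdk {
    azure.storage {
        containers: [
            {
                account_name: "myaccount"
                account_key: "mykey"
                # container_name is the field I'm asking about leaving out
                container_name: "mycontainer"
            }
        ]
    }
}
```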
Oh, it worked! I did the pip install multiple times earlier, but to no avail. I think it's because of the env variables? Let me try to unset those and provide the credentials within trains.conf instead.
I tried setting the variables with export, but got this error:
```
Traceback (most recent call last):
  File "test.py", line 1, in <module>
    from trains import Task
  File "/home/sam/VirtualEnvs/test/lib/python3.8/site-packages/trains/__init__.py", line 4, in <module>
    from .task import Task
  File "/home/sam/VirtualEnvs/test/lib/python3.8/site-packages/trains/task.py", line 28, in <module>
    from .backend_interface.metrics import Metrics
  File "/home/sam/VirtualEnvs/test/lib/pyth...
```
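(The export I tried was along these lines; the variable names are what I understood the Azure credentials to be read from, so treat them as an assumption:)

```
# shell environment variables for the Azure account (names assumed, values are placeholders)
export AZURE_STORAGE_ACCOUNT="myaccount"
export AZURE_STORAGE_KEY="mykey"
```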
Thanks AgitatedDove14, I'll go through these and get back to you.
No, the sample code I sent above works as intended and uploads to 'Plots'. But the main code that I've written, which is almost identical to the sample code, behaves differently.
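(In case it helps to compare against explicit reporting: a minimal sketch using the logger's matplotlib call that recent trains/clearml versions expose; the parameter names, in particular report_image, are from memory and should be treated as assumptions.)

```python
import matplotlib.pyplot as plt
from trains import Task

task = Task.init(project_name="examples", task_name="figure reporting")

fig = plt.figure()
plt.plot([0, 1, 2], [0, 1, 4])

# explicit reporting instead of the automatic matplotlib binding;
# report_image=False is expected to land under 'Plots', True under 'Debug Samples'
task.get_logger().report_matplotlib_figure(
    title="my figure",
    series="example",
    iteration=0,
    figure=fig,
    report_image=False,
)
```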
No, those env variables aren't set.
So clearml-init can be skipped, and I provide the users with a template and ask them to append the credentials at the top, is that right? What about the "Credential verification" step in the clearml-init command? That won't take place in this pipeline, right? Will that be a problem?
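(The template I have in mind is just the credentials block at the top of clearml.conf, roughly like this; server URLs and keys are placeholders:)

```
# top of the clearml.conf template handed to each user (placeholders throughout)
api {
    web_server: https://app.example.com
    api_server: https://api.example.com
    files_server: https://files.example.com
    credentials {
        access_key: "USER_ACCESS_KEY"
        secret_key: "USER_SECRET_KEY"
    }
}
```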
AgitatedDove14, thanks a lot! I'll get back with a script in a day or two.
It's going to Debug Samples with the RC too
AgitatedDove14, I'll have a look at it and let you know. According to you, the VPN shouldn't be a problem, right?
Understood, I'll look into it!
My use case is this: let's say I'm training with a file called train.py in which I have Task.init(). Now, after the training is finished, I generate some more graphs with a file called graphs.py and want to attach/upload them to this training task which has finished. That's when I realised Task.get_task() is not working as intended, but it is when I have a Task.init() before it.
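(A minimal sketch of what graphs.py is trying to do; the project/task names are placeholders, and depending on the version a finished task may need to be re-opened before it accepts new uploads:)

```python
# graphs.py -- attach extra results to the already finished training task
from trains import Task

# fetch the finished task by project and name (task_id works as well);
# the names here are placeholders
task = Task.get_task(project_name="examples", task_name="train run")

# upload the newly generated graphs as an artifact of that task
task.upload_artifact(name="extra_graphs", artifact_object="graphs.png")
```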
AgitatedDove14, let me clarify. I meant, let's say I have all the data like checkpoints, test and train logdirs, and the scripts that were used to train a model. How would I upload all of that to the ClearML server without retraining the model, so that 'Scalars', 'Debug Samples', 'Hyperparameters', everything shows up on the ClearML server like they generally do?
Right, parsing the TB is too much work, I'll look into the material you sent. Thanks!
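(For completeness, a rough sketch of the manual import I have in mind, minus the full TB parsing; names, paths and values are placeholders:)

```python
from trains import Task

# create a fresh task that will hold the previously generated outputs
task = Task.init(project_name="examples", task_name="import old run")

# record the hyperparameters that were originally used (placeholder values)
task.connect({"lr": 0.001, "batch_size": 32})

# upload checkpoints and the TensorBoard log directories as artifacts
task.upload_artifact(name="checkpoints", artifact_object="./checkpoints")
task.upload_artifact(name="train_logdir", artifact_object="./logs/train")
task.upload_artifact(name="test_logdir", artifact_object="./logs/test")

# scalars only show up if they are reported explicitly, e.g. from values
# parsed out of the old logs (the part that would need TB parsing)
logger = task.get_logger()
logger.report_scalar(title="loss", series="train", value=0.25, iteration=100)
```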