and when looking at the running task, I still see the credentials
Now I see the watermarks are 2GB
Thanks very much
Now something else is failing, but I'm pretty sure it's on my side now... So have a good day and see you in the next question 😄
I just think that if I use "report_table" I might as well be able to download it as CSV or something
and also in the extra_vm_bash_script
variables, I have them under export TRAINS_API_ACCESS_KEY
and export TRAINS_API_SECRET_KEY
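i.e. the relevant part of my config looks roughly like this (a sketch from memory — the extra_vm_bash_script field and key values here are placeholders, not my real config):

```
extra_vm_bash_script: """
    export TRAINS_API_ACCESS_KEY="<access-key>"
    export TRAINS_API_SECRET_KEY="<secret-key>"
"""
```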
it seems apiserver_conf
doesn't even change
even though I apply append
If the credentials don't have access to the autoscale service, obviously it won't work
okay, that's acceptable
I'll check the version tomorrow, about the current_task call, I tried before and after - same result
No absolutely not. Yes I do have a GOOGLE_APPLICATION_CREDENTIALS environment variable set, but nowhere do we save anything to GCS. The only usage is in the code which reads from BigQuery
What do you mean by submodules?
She did not push, I told her she does not have to push before executing as trains figures out the diffs.
When she pushes - it works
There are many other packages in my environment which are not listed
I don't think I can; this is a private IP, and creating a dummy example of a pipeline and execution would take more time than I can dedicate to this
I believe that is why MetaFlow chose conda
as their package manager, because it can take care of these kind of dependencies (even though I hate conda 😄 )
I guess not many tensorflowers running agents around here if this wasn't brought up already
For example I have a DATA_DIR
environment variable which points to the directory where disk-data is stored
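Concretely, the code just reads it like this (minimal sketch; the "./data" fallback is made up for illustration, not what we actually use):

```python
import os
from pathlib import Path

# DATA_DIR points at the directory where the disk-data is stored;
# the "./data" default here is only an illustrative fallback.
data_dir = Path(os.environ.get("DATA_DIR", "./data"))
```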
but shouldn't the :latest
make it redownload the right image?
when I specify --packages
I should manually list them all, no?
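The only workaround I can think of is dumping the whole environment and pointing at that instead (plain pip workflow, nothing trains-specific):

```shell
# Snapshot every installed package instead of hand-listing them.
python3 -m pip freeze > requirements.txt
```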
UptightCoyote42 - How are these images available to all agents? Do you host them on Docker Hub?
cool, didn't know about the PAT