Answered
Hello, I am getting `ValueError: Could not get access credentials for '`

Hello, I am getting ValueError: Could not get access credentials for 's3://my-bucket', check configuration file ~/trains.conf but I did specify them in my trains.conf file:
sdk.aws.s3.key = ****
sdk.aws.s3.secret = ****
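For reference, a minimal `sdk.aws.s3` section of `~/trains.conf` in the dotted HOCON form used above (all values here are placeholders):

```
# trains.conf fragment; values are placeholders
sdk.aws.s3.key = "****"
sdk.aws.s3.secret = "****"
sdk.aws.s3.region = ""
```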

  
  
Posted 3 years ago

Answers 30


Yes, hopefully they have a different exception type so we can differentiate... :) I'll check

  
  
Posted 3 years ago

ImportError sounds so out of place, it should not be a problem :)

  
  
Posted 3 years ago

After some investigation, I think it could come from the way you catch errors when checking the creds in trains.conf: when I passed the AWS creds using env vars, another error popped up: https://github.com/boto/botocore/issues/2187 , linked to boto3

  
  
Posted 3 years ago

JitteryCoyote63, when the agent is running a job it prints its configuration at the beginning. Do you see the correct credentials there? (You will not see the secret, but you will see the access key.)

  
  
Posted 3 years ago

AgitatedDove14 That's a good point: the experiment failing with this error does show the correct AWS key:
...
sdk.aws.s3.key = *****
sdk.aws.s3.region = ...

  
  
Posted 3 years ago

The second seems like a botocore issue:
https://github.com/boto/botocore/issues/2187

  
  
Posted 3 years ago

So most likely trains was masking the original error; it might be worth investigating to help other users in the future

  
  
Posted 3 years ago

but I also make sure to write the trains.conf to the root directory in this bash script:
echo "sdk.aws.s3.key = *** sdk.aws.s3.secret = ***" > ~/trains.conf
...
python3 -m trains_agent --config-file "~/trains.conf" ...

  
  
Posted 3 years ago

I'm so glad you mentioned the cron job, it would have taken us hours to figure out

  
  
Posted 3 years ago

What's the exact error you are getting?
(Maybe this is a privilege error on the cache folder. What are the folders it is using? You can see them in the configuration as well.)

  
  
Posted 3 years ago

The region is empty; I never entered it and it worked

  
  
Posted 3 years ago

And the correct region ?

  
  
Posted 3 years ago

JitteryCoyote63 are you calling:
my_task.output_uri = "s3://my-bucket"
in the code itself?
Why not with Task.init(output_uri=...)?
Also, since this is running remotely there is no need for that; use Execution -> Output -> Destination and put it there, it will do everything for you 🙂
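The suggestion above as a sketch (project and task names are hypothetical, and this assumes the trains SDK is installed and a server is configured, so it will not do anything useful without them):

```python
def start_task(bucket_uri="s3://my-bucket"):
    # Deferred import so the sketch can be read without trains installed.
    from trains import Task

    # Passing output_uri at init time (instead of assigning
    # my_task.output_uri afterwards) lets trains set up the output
    # destination as part of task creation.
    task = Task.init(
        project_name="my_project",   # hypothetical
        task_name="my_experiment",   # hypothetical
        output_uri=bucket_uri,
    )
    return task
```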

  
  
Posted 3 years ago

Ohh "~/trains.conf" is root probably

  
  
Posted 3 years ago

AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
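These are the standard variable names read by boto3/botocore (which trains uses for S3). They need to be set in the environment of the process that launches the agent; for example, from Python (all values here are placeholders):

```python
import os

# Placeholder values; in practice set these in the shell profile,
# systemd unit, or cron environment that launches the agent.
os.environ["AWS_ACCESS_KEY_ID"] = "AKIAXXXXPLACEHOLDER"
os.environ["AWS_SECRET_ACCESS_KEY"] = "placeholder-secret"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"
```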

  
  
Posted 3 years ago

what would be the name of these vars?

  
  
Posted 3 years ago

I'll try to pass these values using the env vars

  
  
Posted 3 years ago

😄

  
  
Posted 3 years ago

I will probably just use everywhere an absolute path to be robust against different machine user accounts: /home/user/trains.conf

  
  
Posted 3 years ago

And I can verify that ~/trains.conf exists in the su home folder

  
  
Posted 3 years ago

I will probably just use everywhere an absolute path to be robust against different machine user accounts: /home/user/trains.conf

That sounds like good practice

Other than the wrong trains.conf, I can't think of anything else... Well, maybe if you have AWS environment variables with credentials? They will override the conf file
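The override behavior described here can be sketched as a small resolution function (an illustration of the precedence described in this thread, not the actual trains implementation):

```python
import os

def resolve_aws_key(conf_value=None):
    # Environment variables win over the trains.conf value,
    # per the precedence described above.
    return os.environ.get("AWS_ACCESS_KEY_ID") or conf_value

os.environ.pop("AWS_ACCESS_KEY_ID", None)
print(resolve_aws_key("key-from-conf"))   # key-from-conf
os.environ["AWS_ACCESS_KEY_ID"] = "key-from-env"
print(resolve_aws_key("key-from-conf"))   # key-from-env
```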

  
  
Posted 3 years ago

Any chance ?

  
  
Posted 3 years ago

File "devops/valid.py", line 80, in valid(parse_args) File "devops/valid.py", line 41, in valid valid_task.output_uri = args.artifacts File "/data/.trains/venvs-builds/3.6/lib/python3.6/site-packages/trains/task.py", line 695, in output_uri ", check configuration file ~/trains.conf".format(value)) ValueError: Could not get access credentials for 's3://ml-artefacts' , check configuration file ~/trains.conf

  
  
Posted 3 years ago

So the problem comes when I do
my_task.output_uri = "s3://my-bucket"
trains in the background checks if it has access to this bucket, and it is not able to find/read the creds
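A rough sketch of the kind of access check that runs behind that assignment (hypothetical; the real trains code path differs, and actually calling this function requires boto3 and network access):

```python
def can_access_bucket(bucket_name):
    # Deferred import so the sketch parses without boto3 installed.
    import boto3
    from botocore.exceptions import BotoCoreError, ClientError

    try:
        # head_bucket fails if credentials are missing, invalid,
        # or lack permission on the bucket.
        boto3.client("s3").head_bucket(Bucket=bucket_name)
        return True
    except (BotoCoreError, ClientError):
        return False
```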

  
  
Posted 3 years ago

AgitatedDove14 This seems to be consistent even if I specify the absolute path to /home/user/trains.conf

  
  
Posted 3 years ago

Without the envs, I had the error:
ValueError: Could not get access credentials for 's3://my-bucket', check configuration file ~/trains.conf
After using the envs, I got the error:
ImportError: cannot import name 'IPV6_ADDRZ_RE' from 'urllib3.util.url'

  
  
Posted 3 years ago

JitteryCoyote63 see if upgrading the packages as they suggest somehow fixes it.
I have the feeling this is the same problem (the first error might be trains masking the original error)

  
  
Posted 3 years ago

AgitatedDove14 Yes exactly, I tried the fix suggested in the github issue urllib3>=1.25.4 and the ImportError disappeared 🙂

  
  
Posted 3 years ago

JitteryCoyote63 what am I missing?
What are the errors you are getting (with / without the envs)?

  
  
Posted 3 years ago

(btw, yes, I adapted it to use Task.init(... output_uri=...))

  
  
Posted 3 years ago