
"Attach inline policy to user" (aws_s3.yml task) fails when using assumed IAM role #29

Open · copelco opened this issue Jul 29, 2020 · 3 comments
Labels: bug (Something isn't working)

copelco (Member) commented Jul 29, 2020

As far as I can tell, Ansible's iam_policy module doesn't work with an assumed IAM role due to a limitation of the underlying boto library.

This gist provides a workaround: run assumed-role-credentials.py to export temporary credentials before running the playbook that includes the aws_s3.yml tasks.
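
For context, the failure shows up when the AWS CLI profile relies on role assumption, which the legacy boto library can't resolve. An illustrative ~/.aws/config entry (the profile and role names here are hypothetical):

# ~/.aws/config
[profile MY-PROJECT-AWSCLI-PROFILE]
role_arn = arn:aws:iam::123456789012:role/MY-PROJECT-DEPLOY-ROLE
source_profile = MY-PROJECT-BASE-PROFILE
region = us-east-1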

copelco added the bug label on Jul 29, 2020

copelco commented Jul 29, 2020

Maybe sts_assume_role could be used to obtain temporary access credentials before the iam_policy call.
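
Something like this untested sketch, perhaps (the role ARN, variable names, and policy lookup are placeholders):

# tasks/aws_s3.yml (sketch): assume the role first, then hand the
# temporary credentials to iam_policy explicitly.
- name: Assume the deployment role
  community.aws.sts_assume_role:
    role_arn: "arn:aws:iam::123456789012:role/MY-PROJECT-DEPLOY-ROLE"
    role_session_name: "ansible-iam-policy"
  register: assumed_role

- name: Attach inline policy to user
  community.aws.iam_policy:
    iam_type: user
    iam_name: "{{ aws_s3_user }}"
    policy_name: "{{ aws_s3_policy_name }}"
    policy_json: "{{ lookup('template', 's3-policy.json.j2') }}"
    state: present
    aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
    aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
    security_token: "{{ assumed_role.sts_creds.session_token }}"

(The sts_assume_role module is boto3-based, so it should be able to run with the base profile's static keys.)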


copelco commented Jul 29, 2020

Example script to add to the local project:

# deploy/boto-temporary-creds.py
import boto3

# boto3 (unlike the legacy boto library) can resolve an assume-role
# profile, so use it to obtain the temporary credentials.
session = boto3.Session(profile_name="MY-PROJECT-AWSCLI-PROFILE")
credentials = session.get_credentials().get_frozen_credentials()

# Print shell export statements. AWS_SECURITY_TOKEN is the legacy
# variable name read by boto; AWS_SESSION_TOKEN is read by boto3/botocore.
print(f'export AWS_ACCESS_KEY_ID="{credentials.access_key}"')
print(f'export AWS_SECRET_ACCESS_KEY="{credentials.secret_key}"')
print(f'export AWS_SECURITY_TOKEN="{credentials.token}"')
print(f'export AWS_SESSION_TOKEN="{credentials.token}"')

Then:

python deploy/boto-temporary-creds.py
# copy the printed export statements and run them in your shell
# (or in one step: eval "$(python deploy/boto-temporary-creds.py)")
export AWS_ACCESS_KEY_ID="..."
# ...
# now re-run the Ansible playbook that failed
ansible-playbook deploy...


vkurup (Contributor) commented Jan 29, 2021

I can confirm the same issue; I hit it when I tried to move the CI IAM user creation into this role.

Here's my initial traceback:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: boto.provider.ProfileNotFoundError: Profile "saguaro-cluster" not found!
fatal: [staging]: FAILED! => changed=false 
  module_stderr: |-
    Traceback (most recent call last):
      File "/home/vkurup/.ansible/tmp/ansible-tmp-1611955984.4850554-685814-113242977458548/AnsiballZ_iam.py", line 102, in <module>
        _ansiballz_main()
      File "/home/vkurup/.ansible/tmp/ansible-tmp-1611955984.4850554-685814-113242977458548/AnsiballZ_iam.py", line 94, in _ansiballz_main
        invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
      File "/home/vkurup/.ansible/tmp/ansible-tmp-1611955984.4850554-685814-113242977458548/AnsiballZ_iam.py", line 40, in invoke_module
        runpy.run_module(mod_name='ansible_collections.community.aws.plugins.modules.iam', init_globals=None, run_name='__main__', alter_sys=True)
      File "/home/vkurup/.pyenv/versions/3.9.0/lib/python3.9/runpy.py", line 210, in run_module
        return _run_module_code(code, init_globals, run_name, mod_spec)
      File "/home/vkurup/.pyenv/versions/3.9.0/lib/python3.9/runpy.py", line 97, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "/home/vkurup/.pyenv/versions/3.9.0/lib/python3.9/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/tmp/ansible_iam_payload_4h5g948t/ansible_iam_payload.zip/ansible_collections/community/aws/plugins/modules/iam.py", line 869, in <module>
      File "/tmp/ansible_iam_payload_4h5g948t/ansible_iam_payload.zip/ansible_collections/community/aws/plugins/modules/iam.py", line 708, in main
      File "/home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/boto/iam/connection.py", line 66, in __init__
        super(IAMConnection, self).__init__(aws_access_key_id,
      File "/home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/boto/connection.py", line 1091, in __init__
        super(AWSQueryConnection, self).__init__(
      File "/home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/boto/connection.py", line 551, in __init__
        self.provider = Provider(self._provider_type,
      File "/home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/boto/provider.py", line 201, in __init__
        self.get_credentials(access_key, secret_key, security_token, profile_name)
      File "/home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/boto/provider.py", line 296, in get_credentials
        raise ProfileNotFoundError('Profile "%s" not found!' %
    boto.provider.ProfileNotFoundError: Profile "saguaro-cluster" not found!

Then, when I switched my [default] profile to use my assume-role creds, I got this error (presumably because boto 2 ignores the role_arn/source_profile settings and so finds no static keys at all):

TASK [caktus.django-k8s : Create CI user] *************************************************************************************************************************************************************************
task path: /home/vkurup/dev/ansible-role-django-k8s/tasks/aws_ci.yml:12
<staging> ESTABLISH LOCAL CONNECTION FOR USER: vkurup
<staging> EXEC /bin/sh -c 'echo ~vkurup && sleep 0'
<staging> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/vkurup/.ansible/tmp `"&& mkdir "` echo /home/vkurup/.ansible/tmp/ansible-tmp-1611957437.7826958-689522-184058437163028 `" && echo ansible-tmp-1611957437.7826958-689522-184058437163028="` echo /home/vkurup/.ansible/tmp/ansible-tmp-1611957437.7826958-689522-184058437163028 `" ) && sleep 0'
redirecting (type: modules) ansible.builtin.iam to community.aws.iam
Loading collection amazon.aws from /home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/ansible_collections/amazon/aws
Using module file /home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/ansible_collections/community/aws/plugins/modules/iam.py
<staging> PUT /home/vkurup/.ansible/tmp/ansible-local-689424kim03r1d/tmp42nb4ph0 TO /home/vkurup/.ansible/tmp/ansible-tmp-1611957437.7826958-689522-184058437163028/AnsiballZ_iam.py
<staging> EXEC /bin/sh -c 'chmod u+x /home/vkurup/.ansible/tmp/ansible-tmp-1611957437.7826958-689522-184058437163028/ /home/vkurup/.ansible/tmp/ansible-tmp-1611957437.7826958-689522-184058437163028/AnsiballZ_iam.py && sleep 0'
<staging> EXEC /bin/sh -c '/home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/bin/python3.9 /home/vkurup/.ansible/tmp/ansible-tmp-1611957437.7826958-689522-184058437163028/AnsiballZ_iam.py && sleep 0'
<staging> EXEC /bin/sh -c 'rm -f -r /home/vkurup/.ansible/tmp/ansible-tmp-1611957437.7826958-689522-184058437163028/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_iam_payload_lu1058d0/ansible_iam_payload.zip/ansible_collections/community/aws/plugins/modules/iam.py", line 708, in main
  File "/home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/boto/iam/connection.py", line 66, in __init__
    super(IAMConnection, self).__init__(aws_access_key_id,
  File "/home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/boto/connection.py", line 1091, in __init__
    super(AWSQueryConnection, self).__init__(
  File "/home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/boto/connection.py", line 568, in __init__
    self._auth_handler = auth.get_auth_handler(
  File "/home/vkurup/.pyenv/versions/3.9.0/envs/philly-hip/lib/python3.9/site-packages/boto/auth.py", line 1018, in get_auth_handler
    raise boto.exception.NoAuthHandlerFound(
fatal: [staging]: FAILED! => changed=false 
  invocation:
    module_args:
      access_key_ids: null
      access_key_state: null
      aws_access_key: null
      aws_ca_bundle: null
      aws_config: null
      aws_secret_key: null
      debug_botocore_endpoint_logs: false
      ec2_url: null
      groups: null
      iam_type: user
      key_count: 1
      name: hip-staging-ci-user
      new_name: null
      new_path: null
      password: null
      path: /
      profile: null
      region: null
      security_token: null
      state: present
      trust_policy: null
      trust_policy_filepath: null
      update_password: always
      validate_certs: true
  msg: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials

Running the boto-temporary-creds.py script, exporting the variables it prints, and then re-running the deploy works.
