
Initially I launched a brand-new Windows Server 2016 EC2 instance. I assigned an IAM role with full S3 access to the instance when launching it, installed the AWS CLI, opened a CMD window, and ran "aws s3 ls". It listed all my buckets. All working fine.

I then created an AMI from this instance and launched a new instance from that AMI, assigning the same S3 full-admin IAM role. "aws s3 ls" still worked.

Then, after a number of days, when I repeated the same process (launching an instance from the same AMI), "aws s3 ls" stopped working with the following error:

Unable to locate credentials. You can configure credentials by running "aws configure". 

This has happened many times. Every time I rebuild a new Windows Server, install the CLI, and assign the S3 full-admin role to the instance, it works. Then, after a number of days, when I launch a new instance from the exact same AMI, "aws s3 ls" stops working.

It is so mysterious! Can someone shed some light on this please?

  • For each new EC2 instance you launch, you must assign the role to it. The role does not get bundled into the AMI. Commented Sep 5, 2017 at 1:24

6 Answers


The IAM role does not get bundled with your AMI.

So when you launch your new EC2 instances from your AMI, you must assign the IAM role to the new EC2 instances that are launched. Without the role assigned at launch or afterwards, the CLI cannot find the credentials.
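If an instance was launched without the role, the profile can also be attached afterwards from another machine using the AWS CLI. A minimal sketch — the instance ID and profile name here are hypothetical placeholders:

```powershell
# Attach an instance profile to an already-running instance.
# i-0123456789abcdef0 and MyS3AdminProfile are placeholder values;
# substitute your own instance ID and instance profile name.
aws ec2 associate-iam-instance-profile `
    --instance-id i-0123456789abcdef0 `
    --iam-instance-profile Name=MyS3AdminProfile
```

After attaching, the CLI on the instance should pick up credentials from the metadata service without a reboot.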

  • Another way to say this might be "IAM Roles are associated with instances rather than AMI machine templates". Just in case it's not clear to everyone. Commented Sep 5, 2017 at 1:28
  • I understand that the IAM role is not bundled with the AMI. Every time I launch a new instance from this AMI, I assign the same role to the new instance, but it still stops working one day. Commented Sep 5, 2017 at 23:37

I had the same problem recently with a Windows EC2 instance deployed from my own AMI. As @Kugel (https://serverfault.com/users/138998/kugel) mentioned in their answer, running the Get-NetRoute command shows that the NextHop values do not change in the new instance: it keeps the network configuration of the base instance from which the AMI was generated.

To resolve this, I created new net routes with the new NextHop value and deleted the old ones. Note that New-NetRoute is the cmdlet that creates a route (Set-NetRoute only modifies existing routes). Taking the output given by @Kugel as an example, the first new route is created with:

New-NetRoute -InterfaceIndex 3 -DestinationPrefix "169.254.169.254/32" -RouteMetric 15 -NextHop 10.2.6.1

Repeat the previous command for the rest of the destination prefixes (169.254.169.123/32, 169.254.169.249/32, 169.254.169.250/32, 169.254.169.251/32 and 169.254.169.253/32).

Once the new net routes have been created, you can delete the old ones with:

Remove-NetRoute -NextHop 10.2.4.1 

The previous command deletes every route that has that IP as its NextHop.
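The per-prefix repetition above can be collapsed into one loop. A sketch using the example values from this thread (old gateway 10.2.4.1, new gateway 10.2.6.1, interface index 3), which you would substitute with your own:

```powershell
# Recreate the metadata routes against the new gateway, then drop the stale ones.
$oldGateway = "10.2.4.1"   # gateway inherited from the base instance (example value)
$newGateway = "10.2.6.1"   # gateway of the current subnet (example value)
$ifIndex    = 3            # interface index from Get-NetRoute output (example value)

$prefixes = @(
    "169.254.169.254/32", "169.254.169.253/32", "169.254.169.251/32",
    "169.254.169.250/32", "169.254.169.249/32", "169.254.169.123/32"
)

foreach ($prefix in $prefixes) {
    New-NetRoute -InterfaceIndex $ifIndex -DestinationPrefix $prefix `
                 -NextHop $newGateway -RouteMetric 15
}

# Remove every remaining route still pointing at the old gateway.
Remove-NetRoute -NextHop $oldGateway -Confirm:$false
```

Run this from an elevated PowerShell session, since modifying routes requires administrator rights.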



I found this question while having the same issue. What ended up solving it for me is this answer, which I have quoted below so as not to post an answer that is only a link.

There is a script called InitializeInstance.ps1 that resets some configuration information.

For example, if the instance has changed subnets, it might not work correctly due to cached routing rules. InitializeInstance.ps1 can correct this.

As the top comment on that answer points out:

In Windows Instance, this script is located here: C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1
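If you already have such an instance, the script can be told to run again on the next boot. A sketch, assuming the EC2Launch v1 path quoted above; the -Schedule switch schedules initialization for the next startup:

```powershell
# Schedule EC2Launch initialization to run again on the next boot,
# which resets the network (and metadata route) configuration.
& "C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1" -Schedule

# Reboot so the scheduled initialization actually runs.
Restart-Computer
```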


This happens to me all the time with new Windows Server 2019 instances. I create an instance, configure it, snapshot it, and then when I launch the AMI in a different subnet or AZ, the IAM roles are broken.

What I found is that the routing gets horribly broken. You see, AWS uses "magic" link-local IPs set up on the machine to retrieve various things, including IAM credentials.

When you snapshot a machine, the Windows route table gets snapshotted with it, and when you launch an AMI the routing table is not updated, so it ends up pointing at the wrong gateway.

Here's my example:

PS C:\Users\Administrator> Get-NetRoute

ifIndex DestinationPrefix             NextHop  RouteMetric ifMetric PolicyStore
------- -----------------             -------  ----------- -------- -----------
3       255.255.255.255/32            0.0.0.0  256         15       ActiveStore
1       255.255.255.255/32            0.0.0.0  256         75       ActiveStore
3       224.0.0.0/4                   0.0.0.0  256         15       ActiveStore
1       224.0.0.0/4                   0.0.0.0  256         75       ActiveStore
3       169.254.169.254/32            10.2.4.1 15          15       ActiveStore
3       169.254.169.253/32            10.2.4.1 15          15       ActiveStore
3       169.254.169.251/32            10.2.4.1 15          15       ActiveStore
3       169.254.169.250/32            10.2.4.1 15          15       ActiveStore
3       169.254.169.249/32            10.2.4.1 15          15       ActiveStore
3       169.254.169.123/32            10.2.4.1 15          15       ActiveStore
1       127.255.255.255/32            0.0.0.0  256         75       ActiveStore
1       127.0.0.1/32                  0.0.0.0  256         75       ActiveStore
1       127.0.0.0/8                   0.0.0.0  256         75       ActiveStore
3       10.2.6.255/32                 0.0.0.0  256         15       ActiveStore
3       10.2.6.138/32                 0.0.0.0  256         15       ActiveStore
3       10.2.6.0/24                   0.0.0.0  256         15       ActiveStore
3       0.0.0.0/0                     10.2.6.1 0           15       ActiveStore
3       ff00::/8                      ::       256         15       ActiveStore
1       ff00::/8                      ::       256         75       ActiveStore
3       fe80::b8bf:5c16:bc0d:562f/128 ::       256         15       ActiveStore
3       fe80::/64                     ::       256         15       ActiveStore
1       ::1/128                       ::       256         75       ActiveStore

Note how the default gateway is 10.2.6.1, which is correct for this subnet, yet all the magic routes still point to the previous gateway, 10.2.4.1, from the old subnet.

To fix this you have to repair the routes manually. If the instance is in an Auto Scaling group, you may need a PowerShell script that runs after boot to fix the routes.
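A sketch of such a check: look up the current default gateway, then flag any 169.254.169.x route whose NextHop disagrees with it (cmdlets as in the output above; run elevated):

```powershell
# Find metadata routes whose NextHop no longer matches the default gateway.
$defaultGw = (Get-NetRoute -DestinationPrefix "0.0.0.0/0" |
              Sort-Object -Property RouteMetric |
              Select-Object -First 1).NextHop

# On-link routes (NextHop 0.0.0.0) are fine; only gateway mismatches are stale.
$stale = Get-NetRoute |
    Where-Object { $_.DestinationPrefix -like "169.254.169.*" -and
                   $_.NextHop -ne "0.0.0.0" -and
                   $_.NextHop -ne $defaultGw }

$stale | Format-Table ifIndex, DestinationPrefix, NextHop
```

A boot-time script could then recreate each stale route with $defaultGw as the NextHop and remove the old one.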


This is a very old question, but there was pretty much zero help for a similar issue that is current (Q1 2024). Even on the latest instance metadata version (IMDSv2), you may find yourself in a situation where the iam endpoint simply doesn't exist while the other metadata does:

$token = Invoke-RestMethod -Method PUT -Headers @{ "X-aws-ec2-metadata-token-ttl-seconds" = 21600 } -Uri http://169.254.169.254/latest/api/token

# You should see your role name here
Invoke-RestMethod -Headers @{ "X-aws-ec2-metadata-token" = $token } -Uri http://169.254.169.254/latest/meta-data/iam/security-credentials/

In my case /meta-data/iam/* was missing, as was the iam key itself.

The cause was attaching my instance profile and role after the instance had already completed its first boot. It turns out there are initialization scripts that run on first boot, and if your instance profile and role aren't attached to the machine at that time, you are out of luck. My instance was new enough that I didn't dig into how to re-run those scripts, though I'm sure it's possible. When I terminated the instance and started a new one with the role attached from the start, the iam key was suddenly there and the AWS CLI and PowerShell tools were happy.
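To see whether the iam key is present at all, you can list the top-level metadata keys with the same IMDSv2 token flow as in the snippet above:

```powershell
# Request an IMDSv2 session token, then list the top-level metadata keys.
# "iam/" should appear in the listing when an instance profile is attached.
$token = Invoke-RestMethod -Method PUT `
    -Headers @{ "X-aws-ec2-metadata-token-ttl-seconds" = 21600 } `
    -Uri http://169.254.169.254/latest/api/token

Invoke-RestMethod -Headers @{ "X-aws-ec2-metadata-token" = $token } `
    -Uri http://169.254.169.254/latest/meta-data/
```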


When launching a new Windows EC2 instance from a custom AMI, everything may seem to work fine: network access is available, the IAM role is attached, and you can browse websites. Yet the AWS CLI or SDKs might still throw the error: "Unable to locate credentials. You can configure credentials by running 'aws configure'".

I encountered this while running aws sts get-caller-identity on a Windows EC2 instance placed in a private subnet. Despite having internet access through a NAT gateway and correct IAM role permissions, the CLI couldn't retrieve credentials.

After some investigation, I discovered the root cause was the default gateway. Since the EC2 instance was created from an existing server's AMI, it inherited the old network configuration, including a gateway pointing to the original subnet rather than the current one. This broke communication with the Instance Metadata Service (IMDS), which is essential for retrieving temporary credentials and is only reachable at the link-local address 169.254.169.254.

To troubleshoot, I did the following:

Opened PowerShell as Administrator

Ran route print to inspect the routing table

Ran ipconfig /all to confirm the incorrect default gateway (e.g., 10.21.11.1)

To fix it, I used the built-in EC2Launch module provided by AWS:

PowerShell:

Import-Module "C:\ProgramData\Amazon\EC2-Windows\Launch\Module\Ec2Launch.psd1"

Add-Routes

These commands automatically added the correct default route for the new subnet. A follow-up route print confirmed the correct gateway was now in place. After this fix, running:

aws sts get-caller-identity

successfully returned the expected identity response, confirming that the EC2 could now access IMDS and retrieve credentials via its instance profile.

Key Takeaways:

EC2s launched from custom AMIs may retain old network routes, including outdated default gateways.

Even if the IAM role is correctly attached and permissions are in place, an incorrect route can break access to IMDS.

Always check:

route print for routing

ipconfig /all for current gateway info

Use Add-Routes from the EC2Launch module to fix the routing automatically.

Pro Tip:

Add the following command to your User Data script when launching Windows instances from AMIs:

PowerShell:

Import-Module "C:\ProgramData\Amazon\EC2-Windows\Launch\Module\Ec2Launch.psd1"

Add-Routes

This ensures your EC2 instance has a proper gateway set at boot time, avoiding credential and metadata access issues.
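As a sketch, Windows EC2 user data expects the script body wrapped in <powershell> tags, so the complete user-data field would look like this (same EC2Launch v1 module path as above):

```powershell
<powershell>
# Runs at boot via EC2 user data; re-derives the metadata routes
# for the subnet the instance actually landed in.
Import-Module "C:\ProgramData\Amazon\EC2-Windows\Launch\Module\Ec2Launch.psd1"
Add-Routes
</powershell>
```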

This experience taught me that credential errors aren't always permission-related—they can stem from low-level networking misconfigurations inherited during AMI-based provisioning. Double-checking routes can save hours of unnecessary debugging.
