What you did was set every file in the filesystem to mode 2770:
-rwxrws--- 1 username agroup 2 Feb 19 23:07 thefilename
That's the setgid bit in the group column: on a directory it means new files created inside inherit the group agroup, and on an executable it means it runs with that group's privileges.
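If you want to see how far the damage spread, something like this will show which files actually carry that mode (a rough check; adjust the starting path and expect some noise):

# Files with the setgid bit set (the leading 2 in 2770), this filesystem only
find / -xdev -perm -2000 -ls 2>/dev/null | head -50

# Files that are exactly mode 2770
find / -xdev -perm 2770 -ls 2>/dev/null | head -50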
I've never borked an AWS image quite that badly, but I've seen a few problems that kill them.
FIRST: Revert to your last snapshot taken before you hosed the file modes.
You don't do periodic snapshots?
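If some do exist after all, the AWS CLI can show what's available for a given volume. A sketch, with a placeholder volume ID:

# List your own snapshots of the affected volume
aws ec2 describe-snapshots --owner-ids self \
    --filters Name=volume-id,Values=vol-0123456789abcdef0 \
    --query 'Snapshots[].[SnapshotId,StartTime]' --output table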
SECOND: Look at your backups. Is it going to be more work to rebuild the box from scratch, or to restore your data from backups?
What? You don't have backups either?
Then the last-ditch standard recovery method would be something like:
- Create a new instance from a current AMI, ideally the same distro as your broken machine. It can be something small like a t3.nano
- Detach the volumes from your broken machine and attach them to the new instance as sdf, sdg, sdh, and so on
- Log into your new instance as root and, for each of your broken instance's disks, run:
fsck /dev/xvdX
mkdir /sdX
mount /dev/xvdX /sdX
cd /sdX
ls -l
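If there are several data volumes, a small loop saves some typing. A sketch, assuming the rescue instance sees them as /dev/xvdf through /dev/xvdh; adjust the letters to match what you attached:

for d in f g h; do
    fsck -y /dev/xvd$d    # -y auto-answers repair prompts; drop it to review each fix
    mkdir -p /sd$d
    mount /dev/xvd$d /sd$d
done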
At this point you need to decide whether it's worth running chmod over and over to fix your problem, or whether to copy the data to your new instance and set it up again from scratch.
If you go the chmod route, manually change into each directory and chmod each file to what it should be. Keep two windows open and compare a healthy live host's files against the mounted broken disks. Make sure you're changing the RIGHT files - check often!
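If you have a healthy machine running the same distro, you can use it as a reference instead of eyeballing every mode. A rough sketch for one directory tree, assuming the broken disk ends up mounted at /sdf; review the generated script before running it, and note it won't cope with filenames containing spaces:

# On the healthy reference host: write one chmod line per file under /etc
find /etc -exec stat -c 'chmod %a %n' {} + > /tmp/fix-etc-modes.sh

# Copy the script to the rescue instance, point it at the mounted disk, then run it
sed -i 's| /etc| /sdf/etc|' /tmp/fix-etc-modes.sh
sh /tmp/fix-etc-modes.sh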
When you've done the lot, shut down the temp machine, detach the disks in the EC2 web console, and reattach them to the old machine under the same device names they came from. NOTE: the root drive is attached as sda1, not sda, but all other volumes are attached as sdb through sdz.
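The same shuffle works from the AWS CLI if you prefer it to the console. A sketch with placeholder volume and instance IDs; substitute your own, and make sure each volume goes back under its original device name:

# Detach a data volume from the rescue instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0

# Reattach it to the original instance under its original device name
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf

# The root volume goes back as /dev/sda1
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
    --instance-id i-0123456789abcdef0 --device /dev/sda1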
Either way, you should set up automated snapshots or backups, or both!
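Even a dumb cron job is better than nothing, though Amazon Data Lifecycle Manager or AWS Backup are the proper tools for this. A minimal sketch, assuming the AWS CLI is configured on the box and the volume ID is a placeholder:

#!/bin/sh
# Drop this in /etc/cron.daily/ to take one snapshot per day of a data volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "daily snapshot $(date +%F)"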
To prevent yourself from doing this exact same thing again, alias chmod to
chmod --preserve-root
But this only refuses recursive operation on / itself; it won't protect any other directory.
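The alias itself goes in your shell startup file (assuming bash):

# ~/.bashrc: make chmod refuse to operate recursively on / itself
alias chmod='chmod --preserve-root'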
Also, don't put sudo in front of commands just out of habit.