I remember having to terminate an entire Amazon EC2 instance because I somehow lost access to it via SSH. Well, looking back, termination really wasn’t necessary. Using the simple process described below, you can easily regain access to the AWS EC2 instance that you locked yourself out of.
These instructions were originally written for Ubuntu. But the same basic concepts should apply to any Linux distro.
Steps To Regain Access
Step 1: Take a snapshot of the root EBS volume associated with your EC2 instance.
NOTE: If you need to maintain 100% uptime, you can launch a new EC2 instance from the snapshot you just created and then associate that new instance with your current Elastic IP address. That way, your server keeps running while you troubleshoot and fix the access issue on the original volume.
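For example, assuming the AWS CLI is configured and using placeholder IDs (substitute your own volume, instance, and allocation IDs), the snapshot and the optional Elastic IP re-association would look roughly like this:
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Backup before recovery"
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0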
Step 2: Boot up another EC2 instance.
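If you prefer the AWS CLI over the console, launching this helper instance might look something like the sketch below (the AMI, key pair, and subnet IDs are placeholders for values from your own account):
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro --key-name my-recovery-key --subnet-id subnet-0123456789abcdef0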
Step 3: Stop the instance you got locked out of, detach its root EBS volume (AWS won’t let you detach the root volume of a running instance), and attach that volume to the instance you created in Step #2 above.
NOTE: If the volume is already attached when the new instance boots, Linux may boot up using the “corrupt” volume and you will still be locked out. To avoid this, attach the volume to the new instance only AFTER the new instance is up and running.
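The detach and attach can also be done from the CLI once the new instance is running; the volume and instance IDs below are placeholders:
aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaaa
aws ec2 attach-volume --volume-id vol-0aaaaaaaaaaaaaaaa --instance-id i-0bbbbbbbbbbbbbbbb --device /dev/sdf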
Step 4: Run lsblk. You should see the default 8GB root volume and the second volume you just attached. Note the device name of the second volume.
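On a typical Ubuntu instance the output will look something like this (sizes and device names will vary; here the attached volume appears as xvdf with a single partition, xvdf1):
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part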
Step 5: Make a temporary directory at the root of the filesystem to act as our mount point. We’ll name this directory “recovery”. You could also use the default /mnt directory.
sudo mkdir /recovery
Step 6: Mount the attached volume (the one you got locked out of) using the device name you noted in Step #4 above. This name depends on the device path you chose when you attached the volume in Step #3 (you may have simply accepted the default suggested by AWS). For example, if you attached the volume as /dev/sdf, it will show up in Linux as /dev/xvdf. Assuming we’re working with /dev/xvdf, run the following command to mount its first partition:
sudo mount /dev/xvdf1 /recovery
Step 7: Now you can just…
cd /recovery/home
and fix your login issue.
If you lost your access key, edit the file “/recovery/home/ubuntu/.ssh/authorized_keys”. You can copy the public key from the new Ubuntu instance that you know you have access to. Worst case, you can even copy the .ssh directory or the entire “/home/ubuntu” folder from the new instance to the volume you got locked out of.
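For example, assuming you’re logged in as the default ubuntu user on the recovery instance and its own authorized_keys file already contains a public key you can use, copying it across might look like this (paths follow the /recovery mount point used above):
sudo mkdir -p /recovery/home/ubuntu/.ssh
sudo cp ~/.ssh/authorized_keys /recovery/home/ubuntu/.ssh/authorized_keys
sudo chown -R 1000:1000 /recovery/home/ubuntu/.ssh   # ubuntu is typically uid/gid 1000 on stock Ubuntu AMIs
sudo chmod 700 /recovery/home/ubuntu/.ssh
sudo chmod 600 /recovery/home/ubuntu/.ssh/authorized_keys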
Step 8: If you’re working with passwords instead of keys, then instead of Step #7 above, just chroot into your new mount (sudo chroot /recovery) and run passwd for the user you normally log in as. Reset the password and make any other changes you like (for example, within the “/etc/ssh/sshd_config” file).
Press control-D or type exit to exit the chroot.
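A quick sketch of that password reset, assuming the account you log in with is the default ubuntu user:
sudo chroot /recovery
passwd ubuntu   # reset the password for the user you normally log in as
exit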
Step 9: Unmount the volume:
sudo umount /recovery
Step 10: Stop the recovery instance and detach the now-fixed EBS volume.
Step 11: Re-attach the fixed EBS volume to your original Amazon EC2 instance as its boot device (/dev/sda1). Boot it up and confirm that you now have access.
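Via the CLI, the shuffle back looks roughly like this (placeholder IDs: i-0bbb... is the recovery instance, i-0aaa... is the original instance, vol-0aaa... is the fixed volume):
aws ec2 stop-instances --instance-ids i-0bbbbbbbbbbbbbbbb
aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaaa
aws ec2 attach-volume --volume-id vol-0aaaaaaaaaaaaaaaa --instance-id i-0aaaaaaaaaaaaaaaa --device /dev/sda1
aws ec2 start-instances --instance-ids i-0aaaaaaaaaaaaaaaa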
Step 12: After confirming that the login issue is fixed, clean up all unused artifacts: EC2 instances, EBS volumes, and any snapshots you no longer need.
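For example, once access is confirmed, the cleanup might look like this (all IDs are placeholders for the helper resources you created along the way):
aws ec2 terminate-instances --instance-ids i-0bbbbbbbbbbbbbbbb
aws ec2 delete-volume --volume-id vol-0cccccccccccccccc
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0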
As a general rule, always grab an EBS volume snapshot before making configuration changes. This will save you from risky lockout situations going forward.