
How To Shrink Amazon EBS Volumes (Root Or Non-Root)

Last updated on September 10th, 2020 at 01:08 pm by Ehi Kioya

Amazon Web Services (AWS) makes it very easy to expand EBS volumes. You just right-click on the volume, select modify, and enter the new, larger volume size. Done. To shrink Amazon EBS volumes, however, is a whole different matter – there is no way to do this directly using the AWS console.

And if you made the mistake of allocating too much disk space to your EBS volume when originally setting up your Amazon EC2 instance, you are generally stuck with an over-sized volume that incurs unnecessary month-to-month disk space charges.

In this article, I describe a roundabout technique that I have often used to save some bucks in scenarios where I mistakenly over-allocated Amazon EBS when setting up EC2.

What we will basically be doing is this: We will create a new, smaller EBS volume and copy over the contents of the old, over-sized volume. Then we will delete the old volume and work with the new, smaller one going forward.

RELATED READING: Swap File For Ubuntu On Amazon EC2 – Why And How?

This process could get tricky though, since most of the work will be done via the EC2 command line (or terminal). But if you follow the below steps carefully, you shouldn’t lose any data and you shouldn’t encounter any issues with EBS drives refusing to boot up.

Steps To Shrink Amazon EBS Volumes

Step 1: IMPORTANT: Stop your Amazon EC2 instance and take a snapshot of the current volume – the over-sized one that you’re looking to shrink. This is an important step because it is the only way to go back to your “last known good” state should something go terribly wrong.
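For those who prefer the AWS CLI, the stop-and-snapshot step can be sketched like this. The instance and volume IDs below are hypothetical placeholders, and the commands are echoed as a dry run rather than executed:

```shell
# Hypothetical IDs -- substitute your own.
INSTANCE_ID="i-0123456789abcdef0"
VOLUME_ID="vol-0123456789abcdef0"

# Build the commands; they are echoed below as a dry run.
STOP_CMD="aws ec2 stop-instances --instance-ids $INSTANCE_ID"
WAIT_CMD="aws ec2 wait instance-stopped --instance-ids $INSTANCE_ID"
SNAP_CMD="aws ec2 create-snapshot --volume-id $VOLUME_ID --description pre-shrink-backup"

# Remove the echoes to execute for real.
echo "$STOP_CMD"
echo "$WAIT_CMD"
echo "$SNAP_CMD"
```

The wait command blocks until the instance has fully stopped, so the snapshot is taken from a quiesced volume.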

Step 2: Now you need to create a new, smaller EBS volume. The size of this volume should be the size you want to shrink to.
Exactly how you create this volume depends on whether the volume you are trying to shrink is a root volume or not.

For a non-root volume: Just create the EBS volume directly from the AWS console by clicking “Create Volume”. That said, I generally recommend going the root volume route EVEN IF the volume you are creating will not be used as a root volume.

For a root volume: Create a new Amazon EC2 instance with the same operating system as the one on your existing instance. The EC2 instance creation process will ask you to add storage. Enter the volume size that you want to shrink to. When done, detach the volume from the instance and terminate the instance. Going this route saves you the stress of formatting the volume, creating partitions, setting the boot flag, etc. If you are attempting to resize a root volume and don’t go this route, you will quite possibly end up with an unbootable EC2 instance.

Step 3: Create another new Amazon EC2 instance with the same operating system as your existing instance. A micro or small instance should do. We only need this temporarily to run some commands.

Step 4: Detach the old, over-sized volume from its instance and attach it to the instance you created in step #3 above as /dev/sdf (it will appear in the instance as /dev/xvdf).

Step 5: Attach the new, smaller volume (created in step #2) as /dev/sdg (it will appear as /dev/xvdg).
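Steps 4 and 5 can also be done from the AWS CLI with attach-volume. Again, the IDs below are hypothetical and the commands are echoed as a dry run:

```shell
# Hypothetical IDs -- the temporary instance from step 3 and the two volumes.
TEMP_INSTANCE="i-0aaaaaaaaaaaaaaa1"
OLD_VOLUME="vol-0bbbbbbbbbbbbbb01"   # over-sized source volume
NEW_VOLUME="vol-0cccccccccccccc01"   # smaller target volume

ATTACH_OLD="aws ec2 attach-volume --instance-id $TEMP_INSTANCE --volume-id $OLD_VOLUME --device /dev/sdf"
ATTACH_NEW="aws ec2 attach-volume --instance-id $TEMP_INSTANCE --volume-id $NEW_VOLUME --device /dev/sdg"

# Remove the echoes to execute for real.
echo "$ATTACH_OLD"
echo "$ATTACH_NEW"
```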

Step 6: Power on the new EC2 instance and SSH into it.
 
Step 7: Run this command:

sudo e2fsck -f /dev/xvdf1

Where “1” is the partition number you wish to resize. For simplicity, I’ll assume that you are not working with multiple partitions. So, just go ahead and run the command as is.
NOTE: Make sure that xvdf1 is not mounted; otherwise, you may run into errors. If for some weird reason xvdf1 is mounted (this happened to me once), you may need to detach and reattach all 3 volumes on the instance. Technically, the only mounted volume should be the root volume that came with our new temporary EC2 instance.
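A quick sanity check before running e2fsck: this sketch (assuming the device names used above) reports whether the partition is currently mounted by consulting /proc/mounts:

```shell
# Check whether /dev/xvdf1 is mounted before running e2fsck on it.
DEV=/dev/xvdf1
if grep -qs "^$DEV " /proc/mounts; then
    STATE="mounted"
    echo "$DEV is mounted. Unmount it first: sudo umount $DEV"
else
    STATE="not mounted"
    echo "$DEV is not mounted. Safe to run e2fsck."
fi
```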

Step 8: If the last command runs without errors, go ahead and run this command:

sudo resize2fs -M -p /dev/xvdf1

Step 9: The last line of the resize2fs output above tells you how many 4k blocks are on your file system. Now you need to calculate the number of 16MB blocks required for the copy.
Use this formula:
blockcount * 4 / (16 * 1024)
Where blockcount is from the last line of the resize2fs output.
Round this number up to give yourself a small buffer. Save the result. We will use it soon.
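The calculation in step 9 can be done right in the shell. Using ceiling division bakes the round-up into the result; the blockcount below is a hypothetical example, so substitute the number from your own resize2fs output:

```shell
# Hypothetical blockcount -- take the real value from the last line
# of the resize2fs output (the number of 4k blocks).
BLOCKCOUNT=2500000

# Number of 16MB blocks = blockcount * 4 / (16 * 1024), rounded up
# via ceiling division: (a + b - 1) / b.
COUNT=$(( (BLOCKCOUNT * 4 + 16 * 1024 - 1) / (16 * 1024) ))
echo "dd count: $COUNT"
```

With the hypothetical blockcount of 2500000, this prints a count of 611.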

Step 10: If you don’t yet have a partition on your new volume (/dev/xvdg1), use fdisk to create one.
NOTE: This step would only apply if you created a non-root volume in step #2 above. If you created a root volume, you can completely ignore this step and move on.

Step 11: Now run this command:

sudo dd bs=16M if=/dev/xvdf1 of=/dev/xvdg1 count=SavedResultFromPreviousStep

This may take a while to complete depending on how much data you have. This step performs the actual copying of data from the old, over-sized drive to the new, smaller drive.
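As Andrii points out in the comments below, newer versions of coreutils (Ubuntu 16.04 and later) support a status=progress flag on dd, which is handy for a long copy like this one. A sketch of the same command with that flag, echoed as a dry run with a hypothetical count:

```shell
# COUNT is the 16MB block count saved in step 9 (hypothetical value here).
COUNT=611
DD_CMD="sudo dd bs=16M if=/dev/xvdf1 of=/dev/xvdg1 count=$COUNT status=progress"

# Remove the echo to execute for real.
echo "$DD_CMD"
```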

Step 12: When the copy finishes, run the following two commands:

sudo resize2fs -p /dev/xvdg1
sudo e2fsck -f /dev/xvdg1

These commands will resize and check that everything is good with your new file system.

Step 13: If everything is good, shut down the temporary instance. Detach both the old and new volumes. Attach the new, small volume to your original EC2 instance, this time as the boot device (/dev/sda1).

Note: As pointed out by Adriaan van Wyk in the comments, AWS naming convention is different depending on AMI (see here). But it will usually be either /dev/sda1 or /dev/xvda. So, your small new volume might need to be mounted to /dev/xvda. Please confirm. Or you could just try /dev/sda1 first, and if it doesn’t work, try /dev/xvda next.
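Step 13 can likewise be sketched with the AWS CLI. The IDs below are hypothetical and the command is echoed as a dry run; swap /dev/sda1 for /dev/xvda if your AMI uses the other naming convention:

```shell
# Hypothetical IDs -- your original instance and the new, smaller volume.
ORIG_INSTANCE="i-0123456789abcdef0"
NEW_VOLUME="vol-0cccccccccccccc01"

ATTACH_ROOT="aws ec2 attach-volume --instance-id $ORIG_INSTANCE --volume-id $NEW_VOLUME --device /dev/sda1"

# Remove the echo to execute for real.
echo "$ATTACH_ROOT"
```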

Step 14: Boot up your original EC2 instance and test that everything works well with the new, small volume.

Step 15: When you are satisfied that all is well, you may go ahead and delete all the unused artifacts – the temporary EC2 instance you created, the old, over-sized EBS volume, and the snapshot of the old volume.

Have you ever had to shrink Amazon EBS volumes? Did this article help you accomplish that? Please feel free to share your experience or contributions using the comments section below.



Filed Under: AWS, Azure, Cloud, Cloud Computing, Linux & Servers Tagged With: Amazon EBS, Amazon EC2, AWS, Cloud Computing


Comments

  1. Ezequiel Gioia says

    October 15, 2018 at 12:39 pm

    Hi Ehi,

    Great article! I was wondering if you will consider rsync an option for non-root volumes. I faced some shrink issues with big volumes (6 TiB), particularly it takes too much time to shrink.

    Regards,
    Ezequiel

    • Ehi Kioya says

      October 15, 2018 at 1:09 pm

      Hi Ezequiel, I have never tried rsync for this purpose and I’m not sure if it will offer any speed advantages.

      The dd command used above worked quite fast and flawlessly for me. However, if you do try the rsync command, please let me know how it goes. And if you get a significant speed boost, do share your rsync command line here as well to help others who may experience the same problem as you.

  2. guy harris says

    January 4, 2019 at 4:08 am

When you terminate the instance, the volume is terminated as well.

  3. Andrii says

    January 11, 2019 at 9:17 am

It’s good to run dd with the status=progress option, but this is only available on Ubuntu 16.04+.
    sudo dd bs=16M if=/dev/xvdf1 of=/dev/xvdg1 count=SavedResultFromPreviousStep status=progress

    • Ehi Kioya says

      February 11, 2019 at 9:50 pm

      Hi Andrii, I have never used the status=progress flag. But thanks for pointing it out for everyone’s benefit. I’ll consider adding this flag the next time I use the dd command.

  4. Murielle says

    February 11, 2019 at 9:40 pm

    Thank you very much Kioya.
I have seen a lot of tutorials on shrinking AWS volumes on the web, but this is the most straightforward and it works perfectly. You saved me $60 monthly with my oversized Amazon EC2.
    Great job

    • Ehi Kioya says

      February 11, 2019 at 9:47 pm

      Hi Murielle,

      True, many people who have attempted to explain the process of shrinking Amazon EBS volumes either provide unnecessary technical detail or leave out some crucial step. This is why people often end up with unbootable volumes when they attempt to shrink Amazon EBS volumes. I wrote this article partly as a reference for myself, so I made sure to capture the exact details needed to make this process successful on the first try.

      I’m happy to know this helped you save some cash. I’ve seen significant recurring monthly savings myself as well.

      Glad I could help.

  5. Victor says

    April 13, 2019 at 3:41 pm

    Hi!
    In “Step 7”, my ‘xvdf1’ is mounted:

    xvda 202:0 0 8G 0 disk
    └─xvda1 202:1 0 8G 0 part
    xvdf 202:80 0 80G 0 disk
    └─xvdf1 202:81 0 80G 0 part /
    xvdg 202:96 0 25G 0 disk
    └─xvdg1 202:97 0 25G 0 part

    But detaching and attaching again the volumes didn’t help.
    What can I do?

    Thanks.

  6. Chris Beecher says

    May 7, 2019 at 3:14 pm

    Same as Victor’s comment.

I enjoyed the post. It seems so clean and direct, but after I added the drives to my new instance I cannot get past “e2fsck -f /dev/xvdf1”. I did an unmount of /dev/xvdf, and it told me it was unmounted. Then I tried the e2fsck and got the message “/dev/xvdf1 was busy”, so I unmounted /dev/xvdf1 and it took. But then I got “sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the ‘nosuid’ option set or an NFS file system without root privileges?”. I am still sshing in as ubuntu, but it now gives me this message even for “sudo fdisk -l”.

  7. Dominic Desbiens says

    May 31, 2019 at 9:51 pm

    Hi,

I’m new to self web hosting (learning more about Linux and self hosting on Amazon). I created too big a volume on my first real “production” instance (30 GB). Your tutorial was really simple to understand and it worked like a charm. Many thanks for it.

    • Ehi Kioya says

      May 31, 2019 at 10:00 pm

      Hi Dominic. Great to know this worked flawlessly for you. And welcome to Linux and Amazon cloud. You’re in for a treat 🙂

  8. Thomas Lange says

    August 2, 2019 at 5:46 am

    Thx for the instructions.

Worked (almost) flawlessly, although I had some strange behavior when attaching devices and then powering on the machine.

    AWS mounted a different volume (the one I wanted to resize in fact).

    So instead I started the EC2 Instance and then attached the devices. As such they were not used as the root device.

  9. Deepak says

    August 6, 2019 at 9:47 am

    Excellent. Precise. Superb.
    Great work my friend.
    Thanks for sharing this wonderful tutorial.

    • Ehi Kioya says

      August 6, 2019 at 9:56 am

      You’re welcome Deepak. Glad to know it helped.

  10. Jeff Kee says

    October 18, 2019 at 11:53 am

    After running the dd command to copy the disk, naturally the disk showed 0 files when I mounted it just to see.
    When I ran df -H to see the usage, I saw a “negative” usage. nvme2n1p1 is the NEW smaller drive, while nvme1n1p1 is the original large drive

    After resize2fs, I think the partition shrinks to what it needs to be, regardless of the physical drive size, which is 3TB and 0.3 TB, respectively, which I guess is normal.

    [root@ip-172-31-41-211 snap]# lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    nvme1n1 259:0 0 3T 0 disk
    ├─nvme1n1p1 259:1 0 3T 0 part /mnt/snap
    └─nvme1n1p128 259:2 0 1M 0 part
    nvme2n1 259:3 0 300G 0 disk
    ├─nvme2n1p1 259:4 0 300G 0 part /mnt/small
    └─nvme2n1p128 259:5 0 1M 0 part
    nvme0n1 259:6 0 8G 0 disk
    ├─nvme0n1p1 259:7 0 8G 0 part /
    └─nvme0n1p128 259:8 0 1M 0 part

    [root@ip-172-31-41-211 small]# df -H
    Filesystem Size Used Avail Use% Mounted on
    devtmpfs 4.2G 0 4.2G 0% /dev
    tmpfs 4.2G 0 4.2G 0% /dev/shm
    tmpfs 4.2G 402k 4.2G 1% /run
    tmpfs 4.2G 0 4.2G 0% /sys/fs/cgroup
    /dev/nvme0n1p1 8.6G 1.5G 7.2G 18% /
    /dev/nvme1n1p1 93G 86G 6.2G 94% /mnt/snap
    /dev/nvme2n1p1 89G -229G 317G – /mnt/small
    tmpfs 829M 0 829M 0% /run/user/1000

    I unmounted, then ran the resize2fs.
    Question here:
    Is it normal for the sudo resize2fs -p /dev/nvme2n1p1 command to take forever?

    On a data block set of less than 100G of data (just under 90G filled on a 300G HDD) it took overnight it seems. In fact it took so long, I had to close my laptop, open it at home.. and then the “broken pipe” happened and I could not even see the results.

    Also the dd command took 26 hours on less than 100G.. seems abnormally high.

    Am I missing something?

    • Ehi Kioya says

      October 18, 2019 at 12:31 pm

      Hi Jeff Kee,

      The time taken by the resize2fs command can vary depending on loads of different things. But if you suspect there might be a problem, try opening another terminal window/session and running the dmesg (display message or driver message) command which can help you see I/O errors (if any).

      You can also try the df -h command in a new window to see whether available space is changing.

      In any case, it does seem to be taking too long in your case. Especially that dd command you mentioned. 26 hours sounds way too long! I have never had it take that long in my experience but I can’t be too sure why that is happening for you.

      As pointed out by Andrii in a comment above, you may want to run the dd command with the status=progress option. That way you can actually view the progress and know if something got stuck or anything.

      • Jeff Kee says

        October 22, 2019 at 2:50 am

One of my mistakes, I realize, was that I actually mounted the partitions using sudo mount /dev/[deviceid] [mountdir], whereas I think I’m supposed to run sudo e2fsck and similar commands against unmounted devices (physically attached, but not actually mounted into the filesystem).

        I’m running it again. There were a bunch of messages like these:

        Clearing orphaned inode 394851 (uid=0, gid=0, mode=0100644, size=4)
        Clearing orphaned inode 395045 (uid=0, gid=0, mode=0100644, size=4)
        Clearing orphaned inode 395039 (uid=0, gid=0, mode=0100644, size=4)
        Clearing orphaned inode 394822 (uid=0, gid=0, mode=0100600, size=0)
        Clearing orphaned inode 394826 (uid=0, gid=0, mode=0100600, size=32768)
        Clearing orphaned inode 962761 (uid=0, gid=0, mode=0100644, size=0)
        Clearing orphaned inode 38142264 (uid=27, gid=27, mode=0100600, size=0)
        Clearing orphaned inode 38142224 (uid=27, gid=27, mode=0100600, size=0)
        Clearing orphaned inode 38142223 (uid=27, gid=27, mode=0100600, size=0)
        Clearing orphaned inode 38142222 (uid=27, gid=27, mode=0100600, size=0)
        Clearing orphaned inode 38142221 (uid=27, gid=27, mode=0100600, size=0)

The sudo resize2fs command is running on 25952535 (4k) blocks, and has been taking at least half an hour so far and counting, on a 3TB drive with about 100G of data (managed to purge hundreds of gigs of meaningless data, hence the shrink-down project haha!)

        We’ll see how this goes! 😀

        • Jeff Kee says

          October 22, 2019 at 6:44 am

          [root@ip-xxxxxxx ec2-user]# sudo resize2fs -M -p /dev/nvme1n1p1
          resize2fs 1.42.9 (28-Dec-2013)
          Resizing the filesystem on /dev/nvme1n1p1 to 25952535 (4k) blocks.
          Begin pass 2 (max = 16714965)
          Relocating blocks XXXXXXXXXXXXXXXXX———————–

          So far 4 hours in.. this seems abnormally long! The disk is 3TB with 100G of data.

          • Ehi Kioya says

            October 22, 2019 at 7:09 am

            Yeah that’s a long time. But considering that your disk is 3TB in size originally (which is quite large) and previously held much more than 100GB of data, maybe this is not abnormally long after all.

            On the positive side of things, you are at least able to see that progress is happening. So the process is not stuck. I’d say, just wait it out.

            Let us know if it completes successfully. Thanks.

  11. littlepear says

    February 18, 2020 at 12:27 am

    What if I am dealing with 2 partitions on a non-root volume. How will this command differ?
    sudo e2fsck -f /dev/xvdf1

  12. Adriaan van Wyk says

    February 26, 2020 at 9:17 am

    Hi Ehi,

    Thanks for this walkthrough – it did the trick.

    I ran into the same issue as Thomas Lange at Step 6, and used the same workaround he did. Looks like AWS is perfectly happy attaching volumes to a running instance.

    Also, had to make one more amendment: in Step 13, instead of mounting the smaller drive as “/dev/sda1”, I had to mount it as “/dev/xvda”. Apparently the naming convention differs by AMI:

    https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html

    Thanks again!

    • Ehi Kioya says

      February 26, 2020 at 2:22 pm

Hi Adriaan. Thanks for letting us know what worked for you. And your information about different naming conventions for different AMIs is also very useful. I have updated Step 13 of the article to mention that the boot device might be /dev/xvda depending on the user’s AMI.

  13. Ritu says

    April 21, 2020 at 3:07 pm

    Hi Ehi,
    Thank you for the detailed article. I was able to successfully shrink a non-root ext file system drive. When I am trying to shrink a root xfs file system, I am getting stuck on step 7. Do we have to do it differently for xfs file systems? Thank you for your time.

    lsblk output
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda 202:0 0 80G 0 disk
    ├─xvda1 202:1 0 1M 0 part
    └─xvda2 202:2 0 80G 0 part /
    xvdf 202:80 0 1.5T 0 disk
    ├─xvdf1 202:81 0 1M 0 part
    └─xvdf2 202:82 0 1.5T 0 part
    xvdg 202:96 0 100G 0 disk
    ├─xvdg1 202:97 0 1M 0 part
    └─xvdg2 202:98 0 100G 0 part

    sudo e2fsck -f /dev/xvdf2 output
    e2fsck 1.42.9 (28-Dec-2013)
    ext2fs_open2: Bad magic number in super-block
    e2fsck: Superblock invalid, trying backup blocks…
    e2fsck: Bad magic number in super-block while trying to open /dev/xvdf2

    The superblock could not be read or does not describe a correct ext2
    filesystem. If the device is valid and it really contains an ext2
    filesystem (and not swap or ufs or something else), then the superblock
    is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193
