Comparing the AWS EC2 x1.16xlarge vs. r3.8xlarge: time to upgrade?
AWS released the EC2 x1.16xlarge instance as a way for more users to access the gargantuan memory-focused family at a lower price and technical footprint. Let’s explore the cost efficiency of the new x1.16xlarge compared with multiple r3.8xlarge instances to see if the new addition to the X1 family is worth the upgrade.
What does the new X1 instance size bring to the table?
When we last explored the X1 family, we learned that the x1.32xlarge became a hit with users wanting to run SAP HANA (high-performance analytic appliance) in-memory databases. It also became very popular with enterprise-class businesses running high-performance compute workloads, such as Apache Spark and Hadoop.
The addition of the x1.16xlarge adds a more affordable, yet monstrous memory-optimized instance to EC2. Just like its larger sibling, the 16xlarge features Intel’s Turbo Boost 2.0, AVX 2.0, AES-NI (improved encryption support), and TSX-NI (improved hardware transactional memory support).
To get a better idea of the cost efficiency of the intro-level X1 instance, we compare it to the specs and pricing of the largest R3 instance available: the r3.8xlarge. As a quick recap, the R3 family is AWS’s memory-optimized family featuring instances designed to run in-memory databases, distributed memory processing, and other memory-intensive operations.
Price and spec comparison
According to the AWS EC2 documentation, each x1.16xlarge instance features two Intel Xeon E7 8880 v3 (Haswell) processors running at 2.3GHz, providing 64 vCPUs in total. They are equipped with 976 GB of RAM (featuring Single Device Data Correction, or SDDC+1), 1,920 GB of SSD storage, 10 Gbps of network bandwidth, and dedicated bandwidth to EBS volumes (up to 5 Gbps) at no additional cost.
The r3.8xlarge instances use Intel Xeon E5-2670 v2 (Ivy Bridge) processors providing 32 vCPUs, 244 GB of memory, and 2 x 320 GB of SSD storage. They also feature EC2’s Enhanced Networking for more efficient use of processing power.
Price per GB per hour comparison
With memory optimization as the leading selling point for both the X1 and R3 families, we compare the x1.16xlarge with the r3.8xlarge to see which is more cost efficient from a memory standpoint:
| Instance | Memory | Linux On-Demand Price per Hour | Price per GB per Hour |
| --- | --- | --- | --- |
| x1.16xlarge | 976 GB | $6.669 per hour | $0.0068 per GB per hour |
| r3.8xlarge | 244 GB | $2.660 per hour | $0.010 per GB per hour |
The x1.16xlarge offers four times (4x) the memory at 68% of the r3.8xlarge’s price per GB per hour (using our Linux On-Demand example above). Granted, one x1.16xlarge instance costs about two and a half times (2.5x) as much as one r3.8xlarge, but the r3.8xlarge can’t beat the memory cost efficiency of the x1.16xlarge.
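The per-GB figures above can be reproduced with a quick calculation using the Linux On-Demand hourly rates quoted in the table (rounding aside):

```python
# Price per GB of memory per hour, from the Linux On-Demand rates above.
instances = {
    "x1.16xlarge": {"memory_gb": 976, "price_per_hour": 6.669},
    "r3.8xlarge": {"memory_gb": 244, "price_per_hour": 2.660},
}

for name, spec in instances.items():
    per_gb = spec["price_per_hour"] / spec["memory_gb"]
    print(f"{name}: ${per_gb:.4f} per GB per hour")
```

Dividing the two per-GB rates is where the "68%" figure comes from: $0.0068 / $0.010 ≈ 0.68.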
When would migrating make sense?
Current R3 users can take a look at their EC2 costs and usage with a cloud cost management tool to determine if migrating makes sense. Getting an operational history of R3 instance costs and usage can paint a picture of how efficiently users actually put their instances to work.
Many AWS users provision clusters of R3 instances to run distributed memory processing, such as Solr, and high-performance applications that require real-time in-memory database access, like Neo4j and Titan. If these types of users haven’t already opted for the x1.32xlarge, they might be running a number of r3.8xlarge instances with various levels of utilization, and the cheaper, smaller footprint of the x1.16xlarge might look appealing.
Let’s use an example of four r3.8xlarges as a comparison. This cluster provides access to just as much memory as one x1.16xlarge (976 GB of memory) to handle memory-intensive tasks. In that case, migrating makes sense if the current workload makes the most of the allocated 976 GB of memory between the four r3.8xlarge instances. If so, there’s a strong argument for moving to the x1.16xlarge since users can access the same amount of memory for a cheaper hourly rate:
Linux On-Demand Example:
Cost per hour of four r3.8xlarge instances: $10.64 per hour
Cost per hour of one x1.16xlarge: $6.669 per hour
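The comparison above can be sketched in a few lines, using the Linux On-Demand rates quoted earlier:

```python
# Hourly cost of four r3.8xlarge instances vs. one x1.16xlarge,
# using the Linux On-Demand rates quoted above.
r3_price, x1_price = 2.660, 6.669

cluster_cost = 4 * r3_price         # 4 x 244 GB = 976 GB across the cluster
savings = cluster_cost - x1_price   # same 976 GB on a single x1.16xlarge

print(f"r3 cluster: ${cluster_cost:.2f}/hr, x1.16xlarge: ${x1_price:.3f}/hr")
print(f"hourly savings: ${savings:.3f}")
```

At roughly $3.97 saved per hour, a fully utilized cluster would see the difference add up quickly.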
Not only would R3 users improve their memory cost efficiency, but they would also be able to put the newer-generation Intel processors and updated encryption and transactional memory features (AES-NI and TSX-NI) to work. Purchasing Reserved Instance hours for the x1.16xlarge lowers the hourly rate even more (e.g., for Linux users, the price drops to $4.579 hourly).
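As a rough sketch of the Reserved Instance upside, here is the annual difference between the On-Demand rate and the $4.579 Linux RI rate quoted above, assuming 24/7 usage and ignoring any upfront RI payment:

```python
# Rough annual savings from the x1.16xlarge Linux RI rate quoted above.
# Assumes 24/7 usage; ignores any upfront payment on the RI.
HOURS_PER_YEAR = 8760
on_demand, reserved = 6.669, 4.579

annual_savings = (on_demand - reserved) * HOURS_PER_YEAR
print(f"approximate annual savings: ${annual_savings:,.2f}")
```

Actual savings depend on the RI term, payment option, and how consistently the instance runs.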
When would migrating not make sense?
If the costs of migrating to and maintaining the x1.16xlarge are too high, it might not make sense to upgrade. These costs could include the working hours to prepare a migration (dev and test hours) and other contingency costs, such as support and quality assurance. If the R3 instances had Reserved Instance hours purchased, there would also be the task of selling the old RIs on the AWS Marketplace and purchasing new x1.16xlarge RIs, which can yield various additional costs and affect cash outlays until the “break-even point” of the new RI is reached.
You’ll want to check whether you’re getting the most out of your current setup as well. Using a cloud cost management tool to review EC2 costs and usage can reveal whether the instances are being utilized to their potential. If utilization is high, and performance is perhaps bottlenecked, there might be a case to upgrade. If the instances are underutilized, there might be an opportunity to stop using some instances to cut down on the bill and improve cost efficiency.
It’ll be interesting to see if AWS continues to create smaller footprints, such as an x1.8xlarge, to make the X1 family even more accessible to users to run their memory-hungry operations.
Have the right cost and usage data ready to make the migration case
Whether the new x1.16xlarge or a current fleet of R3s is the right fit, understanding your actual EC2 costs and usage is key to running a cost-efficient environment. Using a cloud cost management tool, such as Cloudability, is a way to monitor these data points daily. Then if bottlenecks appear, or underutilization is present, you and your engineering and operations teams can act accordingly to improve AWS EC2 cost efficiency.
To see this type of cloud cost management at work, we invite anyone to get in touch for a free trial.