AWS Placement Groups
The EC2 service attempts to spread all of a region's EC2 instances across the underlying hardware to minimise correlated failures. We can influence this placement by creating a placement group for a group of interdependent EC2 instances, choosing a strategy that suits the workload.
AWS EC2 Placement Group Strategies are as follows:-
1. Cluster
Packs interdependent instances close together in one AZ (Availability Zone). With Enhanced Networking enabled (recommended), instances get up to 10 Gbps of bandwidth between them.
This strategy enables the low-latency network performance needed for tightly coupled node-to-node communication, which is crucial for high performance computing (HPC) applications.
Pros:- Great network performance, with low latency and up to 10 Gbps bandwidth between instances.
Cons:- If the rack (hardware) fails, all the instances fail at the same time.
Use cases:-
- Big data jobs that need to complete fast.
- Applications that need extremely low latency with high throughput.
- Recommended for applications where the majority of traffic flows between the instances in the group.
Ways to launch instances in Cluster Strategy:-
- Single launch request with all the instances needed for that placement group
- Use the same instance type for all instances in that placement group
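Both recommendations above can be sketched with the AWS CLI: create the group, then launch every instance of one type in a single request. The group name, AMI ID, and instance type below are illustrative placeholders, not values from this article.

```shell
# Create the cluster placement group (name "hpc-cluster" is a placeholder)
aws ec2 create-placement-group \
    --group-name hpc-cluster \
    --strategy cluster

# Launch all instances for the group in a single request, all one type
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5n.18xlarge \
    --count 4 \
    --placement GroupName=hpc-cluster
```

Launching everything at once lets EC2 find hardware with capacity for the whole group up front, instead of failing partway through.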
Errors and solutions:-
- Insufficient Capacity Error — occurs when trying to add more instances to a placement group later, or when trying to launch more than one instance type into a placement group
Solution — Stop and start all the running instances in the placement group, then try the launch again. Starting the instances may migrate the placement group to different hardware that has capacity for all the requested instances.
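The stop-and-start recovery described above might look like the following sketch; the instance IDs are placeholders for the group's actual members.

```shell
# Instance IDs below are placeholders for the placement group's members
IDS="i-0123456789abcdef0 i-0fedcba9876543210"

# Stop every instance in the group and wait until they are fully stopped
aws ec2 stop-instances --instance-ids $IDS
aws ec2 wait instance-stopped --instance-ids $IDS

# Starting them again may migrate the group to hardware with enough capacity
aws ec2 start-instances --instance-ids $IDS

# Then retry the original launch request that failed
```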
Maximum network throughput between instances depends on the instance types involved. For applications that need high throughput, it is recommended to choose an instance type whose network connectivity meets the requirement.
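To check an instance type's advertised network performance before choosing it, you can query the EC2 API; the instance type below is only an example.

```shell
# Look up the advertised network performance of a candidate instance type
aws ec2 describe-instance-types \
    --instance-types c5n.18xlarge \
    --query "InstanceTypes[].NetworkInfo.NetworkPerformance"
```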
2. Spread
A group of instances that are placed on distinct hardware: each EC2 instance runs on a different rack.
Pros:-
- Can spread across AZs in the same region
- Reduced risk of simultaneous failures that can occur when instances share the same equipment
- EC2 instances run on different physical hardware
- Since the Spread strategy provides access to distinct hardware, it is suitable for mixing instance types or launching instances over time
Cons:- Limited to 7 running instances per AZ per placement group
Eg:- In a region with two Availability Zones, we can run 14 instances in the group, with 7 instances in each AZ. If you try to launch an eighth instance into the same AZ in the same spread placement group, the launch fails.
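Creating and using a spread placement group follows the same CLI pattern as above; the group name and launch parameters here are placeholders.

```shell
# Create the spread placement group (name is a placeholder)
aws ec2 create-placement-group \
    --group-name critical-spread \
    --strategy spread

# Instances can be launched over time and can mix instance types;
# each one lands on distinct hardware, up to 7 per AZ
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --count 1 \
    --placement GroupName=critical-spread
```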
Use Cases:-
- Applications that need to maximise availability
- Critical applications where each instance must be isolated from the failure of the others
- Recommended for applications with a small number of critical instances that should be kept separate from each other to avoid simultaneous failures
Errors and solutions:-
- When starting or launching an EC2 instance in a spread placement group, if there is insufficient distinct hardware to fulfil the request, the request fails. EC2 makes more distinct hardware available over time, so you can try the request again later.
3. Partition
This strategy spreads instances across logical partitions such that groups of instances in one partition do not share underlying hardware with groups of instances in other partitions.
This reduces the likelihood of correlated hardware failures for the application.
AWS EC2 service divides the group in to logical segments called partitions. Each partition within a placement group has its own set of racks. Each rack has its own network and power source.
Instances in a partition do not share racks with instances in other partitions in the same placement group, which contains the impact of a single hardware failure to the associated partition.
Can span across multiple AZs
Up to 7 partitions per AZ. The number of instances that can be launched into a partition placement group is limited only by the limits of your AWS account.
When launching EC2 instances, the EC2 service distributes them evenly across the partitions specified for the placement group, or the user can launch instances into a specific partition for more control over where they are placed.
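Both launch modes described above can be sketched with the CLI; the group name, partition count, and run-instances parameters are illustrative.

```shell
# Create a partition placement group with 3 partitions (name is a placeholder)
aws ec2 create-placement-group \
    --group-name kafka-brokers \
    --strategy partition \
    --partition-count 3

# Let EC2 distribute instances evenly across the partitions...
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.xlarge \
    --count 3 \
    --placement GroupName=kafka-brokers

# ...or target a specific partition explicitly
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.xlarge \
    --count 1 \
    --placement "GroupName=kafka-brokers,PartitionNumber=0"
```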
Partition placement groups offer visibility into the partitions in the group: you can see which instance is in which partition via the instance metadata service.
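From inside an instance, the partition number can be read from instance metadata; the sketch below assumes IMDSv2 (token-based access) is in use.

```shell
# IMDSv2: fetch a session token, then read this instance's partition number
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/placement/partition-number
```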
Use Cases:-
Large distributed and replicated workloads such as Hadoop, Kafka, Cassandra, HDFS, and HBase. These applications can use visibility into the other partitions to make intelligent data replication decisions for application availability and durability.