

      Policy-Based Storage

      With profile-driven storage, vSphere administrators can use storage capabilities and VM storage profiles to ensure VMs reside on storage that provides the necessary levels of capacity, performance, availability, and redundancy. Profile-driven storage is built on two key components:

      • Storage capabilities, leveraging vSphere’s storage awareness APIs

      • VM storage profiles

      Storage capabilities are either supplied by the storage array itself (if the array supports vSphere’s storage awareness APIs), defined by a vSphere administrator, or both. These storage capabilities represent various attributes of the storage solution.

      VM storage profiles define the storage requirements for a VM and its virtual disks. You create VM storage profiles by selecting the storage capabilities that must be present for the VM to run. Datastores that have all the capabilities defined in the VM storage profile are compliant with the VM storage profile and represent possible locations where the VM could be stored.
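      The matching rule is simple enough to sketch in a few lines of plain Python. Everything below is illustrative – the names are made up for this example and nothing here is a VMware API – but it captures the compliance test just described: a datastore is a valid placement candidate only if its capabilities are a superset of what the profile requires.

```python
# Illustrative model of VM storage profile compliance: a datastore is
# compliant when it advertises every capability the profile requires.
def compliant_datastores(profile_capabilities, datastores):
    """Return names of datastores whose capabilities cover the profile's."""
    return [name for name, caps in datastores.items()
            if profile_capabilities <= caps]

# Hypothetical datastores and the capabilities they advertise.
datastores = {
    "gold-ds":   {"replication", "ssd", "raid-10"},
    "silver-ds": {"ssd", "raid-5"},
    "bronze-ds": {"raid-5"},
}

# A profile requiring replicated, SSD-backed storage matches only gold-ds.
print(compliant_datastores({"replication", "ssd"}, datastores))  # ['gold-ds']
```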

      This functionality gives you much greater visibility into storage capabilities and helps ensure that the underlying storage is indeed providing the appropriate functionality for each VM. You can explore these storage capabilities extensively by using Virtual Volumes (VVols) or Virtual SAN (VSAN).

      Refer to Table 1.1 to find out which chapter discusses profile-driven storage in more detail.

      vSphere High Availability

      In many cases, high availability – or the lack of high availability – is the key argument used against virtualization. The most common form of this argument sounds more or less like this: “Before virtualization, the failure of a physical server affected only one application or workload. After virtualization, the failure of a physical server will affect many more applications or workloads running on that server at the same time. We can’t put all our eggs in one basket!”

VMware addresses this concern with another feature present in ESXi clusters called vSphere High Availability (HA). Once again, because of the naming conventions (clusters, high availability), many traditional Windows administrators will have preconceived notions about this feature. Those notions, however, are incorrect: vSphere HA does not function like a high-availability configuration in Windows. The vSphere HA feature provides an automated process for restarting VMs that were running on an ESXi host at the time of a server failure (or other qualifying infrastructure failure, as I’ll describe in Chapter 7, “Ensuring High Availability and Business Continuity”). Figure 1.3 depicts the VM migration that occurs when an ESXi host that is part of an HA-enabled cluster experiences failure.

Figure 1.3 The vSphere HA feature will restart any VMs that were previously running on an ESXi host that experiences server or storage path failure.

      The vSphere HA feature, unlike DRS, does not use the vMotion technology as a means of migrating VMs to another host. vMotion applies only to planned migrations, where both the source and destination ESXi hosts are up and functioning properly. In a vSphere HA failover situation, there is no anticipation of failure; it is not a planned outage, which means there is no time to perform a vMotion operation. vSphere HA is intended to minimize unplanned downtime due to the failure of a physical ESXi host or other infrastructure components. I’ll go into more detail in Chapter 7 on what kinds of failures vSphere HA helps protect against.
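      The text here stays at the conceptual level, but for a concrete feel, here is a minimal sketch using VMware’s open-source pyVmomi Python SDK to enable vSphere HA on a cluster. The vCenter hostname, credentials, and cluster name are placeholders, and the unverified SSL context is for lab use only.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (placeholder host and credentials).
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()

# Locate the cluster by its (placeholder) name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Prod-Cluster")
view.Destroy()

# Enabling vSphere HA is a single dasConfig change on the cluster.
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(enabled=True))
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```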

      vSphere HA Improvements from vSphere 5

      vSphere HA received a few notable improvements in the vSphere 5.0 release. First, scalability was significantly improved; you could run up to 512 VMs per host (up from 100 in earlier versions) and 3,000 VMs per cluster (up from 1,280 in earlier versions). Second, vSphere HA integrated more closely with the intelligent placement functionality of vSphere DRS, giving vSphere HA a greater ability to restart VMs in the event of a host failure. The third and perhaps most significant improvement was the complete rewrite of the underlying architecture for vSphere HA; this entirely new architecture, known as Fault Domain Manager (FDM), eliminated many of the constraints found in earlier versions of VMware vSphere.

      By default, vSphere HA does not provide failover in the event of a guest OS failure, although you can configure vSphere HA to monitor VMs and restart them automatically if they fail to respond to an internal heartbeat. This feature, called VM Failure Monitoring, uses a combination of internal heartbeats and I/O activity to detect whether the guest OS inside a VM has stopped functioning; if it has, the VM can be restarted automatically.
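      Continuing the hedged pyVmomi sketch from the HA discussion (reusing the cluster object located there), VM monitoring rides on the same HA configuration structure; the threshold values below are illustrative, not recommendations.

```python
from pyVmomi import vim

# "vmMonitoringOnly" turns on guest heartbeat monitoring without
# application monitoring.
das = vim.cluster.DasConfigInfo(
    enabled=True,
    vmMonitoring="vmMonitoringOnly",
    defaultVmSettings=vim.cluster.DasVmSettings(
        vmToolsMonitoringSettings=vim.cluster.VmToolsMonitoringSettings(
            enabled=True,
            failureInterval=30,    # seconds without heartbeats before a reset
            minUpTime=120,         # grace period after boot, in seconds
            maxFailures=3,         # cap automatic resets...
            maxFailureWindow=3600  # ...within this rolling window (seconds)
        )))
cluster.ReconfigureComputeResource_Task(
    vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)
```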

      With vSphere HA, it’s important to understand that there will be an interruption of service. If a physical host or storage device fails, vSphere HA restarts the VM, and while the VM is restarting, the applications or services provided by that VM are unavailable. For users who need even higher levels of availability than vSphere HA can provide, vSphere Fault Tolerance (FT), described in the next section, can help.

      vSphere Fault Tolerance

      Although vSphere HA provides a certain level of availability for VMs in the event of physical host failure, this might not be good enough for some workloads. vSphere Fault Tolerance (FT) might help in these situations.

As I described in the previous section, vSphere HA protects against unplanned physical server failure by providing a way to automatically restart VMs upon physical host failure. This need to restart a VM in the event of a physical host failure means that some downtime – generally less than 3 minutes – is incurred. vSphere FT goes even further and eliminates any downtime in the event of a physical host failure. vSphere FT maintains a mirrored secondary VM on a separate physical host that is kept in lockstep with the primary VM. For a single-vCPU VM, vSphere FT uses the older vLockstep technology, which is based on VMware’s earlier “record and replay” functionality; vSphere’s newer Fast Checkpointing technology supports FT for VMs with one to four vCPUs. Everything that occurs on the primary (protected) VM also occurs simultaneously on the secondary (mirrored) VM, so that if the physical host for the primary VM fails, the secondary VM can immediately step in and take over without any loss of connectivity. vSphere FT will also automatically re-create the secondary (mirrored) VM on another host if the physical host for the secondary VM fails, as illustrated in Figure 1.4. This ensures protection for the primary VM at all times.
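As a rough pyVmomi illustration of the same idea (assuming the vCenter connection from the earlier HA sketch, with “db01” as a placeholder name for the VM to protect), asking vCenter to protect a VM with FT comes down to creating its secondary:

```python
from pyVmomi import vim

# Locate the VM to protect (placeholder name "db01"); `content` comes
# from the vCenter connection in the earlier HA sketch.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "db01")
view.Destroy()

# host=None lets vCenter choose a compatible host on which to create
# and place the mirrored secondary VM.
task = vm.CreateSecondaryVM_Task(host=None)
```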

Figure 1.4 vSphere FT provides protection against host failures with no downtime experienced by the VMs.

      In the event of multiple host failures – say, if the hosts running both the primary and secondary VMs fail – vSphere HA will restart the primary VM on another available server, and vSphere FT will automatically create a new secondary VM. Again, this ensures protection for the primary VM at all times.

      vSphere FT can work in conjunction with vMotion. As of vSphere 5.0, vSphere FT is also integrated with vSphere DRS, although this feature does require Enhanced vMotion Compatibility (EVC). VMware recommends 10GbE networking between hosts when running multiple FT-protected VMs with multiple vCPUs.

      vSphere Storage APIs for Data Protection and VMware Data Protection

      One of the most critical aspects of any network, not just a virtualized infrastructure, is a solid backup strategy as defined by a company’s disaster recovery and business continuity plan. To help address organizational backup needs, VMware vSphere 6.0 has two key components: the vSphere Storage APIs for Data Protection (VADP) and VMware Data Protection (VDP).

      VADP is a set of application programming interfaces (APIs) that backup vendors leverage in order to provide enhanced backup functionality for virtualized environments. VADP enables functionality like file-level backup and restore; support for incremental, differential, and full-image backups; native integration with backup software; and support for multiple storage protocols.
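      One concrete VADP building block is Changed Block Tracking (CBT), the mechanism on which incremental and differential image backups rest. The hedged pyVmomi sketch below reuses the `vm` object from the FT example and assumes a snapshot exists to anchor the query; deviceKey 2000 is conventionally the first virtual disk.

```python
from pyVmomi import vim

# Enable CBT on the VM (takes effect after the next power cycle or
# snapshot operation).
vm.ReconfigVM_Task(vim.vm.ConfigSpec(changeTrackingEnabled=True))

# Ask which disk areas changed. changeId="*" means "every block ever
# written"; a backup tool would instead pass the changeId saved from its
# previous run to receive only the deltas since then.
snapshot = vm.snapshot.currentSnapshot
changes = vm.QueryChangedDiskAreas(snapshot=snapshot, deviceKey=2000,
                                   startOffset=0, changeId="*")
for area in changes.changedArea:
    print(f"back up {area.length} bytes at offset {area.start}")
```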

      On its own, though, VADP is just a set of interfaces,
