1.2. Hypervisors and containers
Clearly, virtualization needs hardware, which can be standard. We speak of commodity hardware (white boxes), with open specifications, produced en masse to achieve particularly low prices; we will discuss it further in the chapter on open source software (Chapter 4). There are various ways of placing virtual machines on physical equipment, and they can be classified into three broad categories, as shown in Figures 1.4–1.6. The first two figures correspond to hypervisors, and the third corresponds to containers.
Figure 1.4. Paravirtualization. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
A paravirtualization hypervisor is a program that is executed directly on the hardware platform and hosts virtual machines whose operating systems have been modified so that their instructions can be executed directly on that hardware. The platform is able to support the guest operating systems together with their drivers. The classic hypervisors in this category include Citrix XenServer (open source), VMware vSphere, VMware ESX, Microsoft Hyper-V Server and KVM (open source). These programs are also known as type-1, or bare-metal, hypervisors.
The second category of hypervisor, the type-2 hypervisor, is a program that is executed on the hardware platform and supports native operating systems, that is, operating systems without any modification. The native operating system, when started by the hypervisor, is executed on the device through an emulator, so that the underlying device can take all of its instructions into account. The guest operating systems are unaware that they are virtualized, so they do not require any modification, in contrast to paravirtualization. Examples of this type of virtualization include Microsoft Virtual PC, Microsoft Virtual Server, Parallels Desktop, Parallels Server, Oracle VM VirtualBox (free), VMware Fusion, VMware Player, VMware Server, VMware Workstation and QEMU (open source).
Figure 1.5. Virtualization by emulation. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
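To make this more concrete, here is a minimal Python sketch, assuming a host on which the libvirt daemon and its Python bindings are installed: it connects to a local KVM/QEMU hypervisor (the connection URI is our choice) and lists the virtual machines the hypervisor hosts.

```python
# Minimal sketch: querying a KVM/QEMU hypervisor through libvirt.
# Assumes the libvirt daemon and the libvirt-python bindings are installed;
# the URI "qemu:///system" targets the local system hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
if conn is None:
    raise RuntimeError("failed to open a connection to the hypervisor")

for dom in conn.listAllDomains():            # every defined virtual machine
    state, _ = dom.state()
    running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
    print(f"{dom.name()}: {running}, {dom.maxMemory() // 1024} MB allocated")

conn.close()
```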
The third type leaves behind the previous hypervisor systems and runs several machines simultaneously as containers. In this case, we speak of an isolator. An isolator is a program that isolates the execution of applications in environments called contexts, or execution zones. Thus, the isolator is able to run the same application multiple times, in multi-instance mode. This solution performs very well, because it adds practically no overhead, but the environments are more difficult to isolate.
Figure 1.6. Virtualization by containers. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
In summary, this last solution facilitates the execution of applications in execution zones. In this category, we can cite Linux-VServer, chroot, BSD Jail and OpenVZ, as well as most container solutions, such as Docker.
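To illustrate the multi-instance mode, the following minimal Python sketch, assuming a running Docker daemon and the Docker SDK for Python, starts the same image three times, each instance in its own isolated context; the image name and the number of instances are arbitrary choices for the example.

```python
# Sketch: running the same application several times as isolated containers.
# Assumes the Docker daemon is running and the "docker" Python SDK is installed;
# the nginx image and the number of instances are arbitrary.
import docker

client = docker.from_env()

instances = [
    client.containers.run("nginx:alpine", detach=True, name=f"web-{i}")
    for i in range(3)                      # three instances of the same application
]

for c in instances:
    c.reload()                             # refresh status from the daemon
    print(c.name, c.status)                # each container has its own context

for c in instances:                        # clean up the example containers
    c.stop()
    c.remove()
```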
1.3. Kubernetes
Kubernetes (also called K8s) is an open source system for deploying, scaling and managing containerized applications. It was originally created by Google, which donated it to the Cloud Native Computing Foundation. The platform automates the deployment, scaling and operation of application containers across clusters of servers. This open source software works with a whole range of container technologies, such as Docker.
The Kubernetes architecture is shown in Figure 1.7. We can see Pods, which are containers or groups of containers hosted on servers belonging to a hardware cluster. etcd is the persistent store for the cluster’s configuration data. The scheduler’s goal is to share the workload among the servers, thus managing the execution of the Pods in the best possible way. Finally, the kubelet is responsible for the execution state of each server.
Figure 1.7. Architecture of the Kubernetes orchestrator
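To make these components concrete, the minimal Python sketch below, assuming the official Kubernetes Python client and a local kubeconfig giving access to a cluster, asks the API server (whose state is persisted in etcd) where the scheduler has placed each Pod, and whether each node's kubelet reports it as ready.

```python
# Sketch: querying the Kubernetes API for Pod placement across the cluster.
# Assumes the official "kubernetes" Python client and a valid local kubeconfig.
from kubernetes import client, config

config.load_kube_config()                  # credentials for the target cluster
v1 = client.CoreV1Api()

# Each Pod record reflects the scheduler's placement decision,
# persisted by the API server in etcd.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")

# The kubelet on each node reports its server's state back to the cluster.
for node in v1.list_node().items:
    ready = next(c.status for c in node.status.conditions if c.type == "Ready")
    print(f"node {node.metadata.name}: Ready={ready}")
```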
1.4. Software networks
Software networks have numerous properties that are novel in comparison to hardware networks. To begin with, we can easily move virtual machines around, because they are simply programs. Thus, we can migrate a router from one physical node to another. Migration may occur when a physical node begins to fail, or when a node is overloaded, or for any other reason decided on in advance. Migration of a node does not actually involve transporting the whole of the code for the machine, which would, in certain cases, be rather cumbersome and time-consuming. In general, the program needing to be migrated is already present in the remote node, but it is idle. Therefore, we merely need to begin running the program and send it the configuration of the node to be moved. This requires the transmission of relatively little data, so the latency before the migrated machine starts up is short. In general, we can even let both machines run at once, and change the routing so that the data only flow through the migrated node. We can then shut down the first router.
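The migration sequence just described can be sketched as follows; the classes, attributes and figures here are purely illustrative, not a real API, and simply mirror the steps in the text: activate the idle copy, transfer the configuration, let both routers run while the routing is changed, then shut down the original.

```python
# Hypothetical sketch of the migration sequence described in the text;
# the class and attribute names are illustrative, not a real API.

class VirtualRouter:
    def __init__(self, node, config=None, active=False):
        self.node, self.config, self.active = node, config, active

def migrate_router(old: VirtualRouter, target_node: str, routes: dict) -> VirtualRouter:
    # 1. The router program is already present on the remote node, but idle:
    #    activate it instead of copying the whole machine image.
    new = VirtualRouter(target_node, active=True)

    # 2. Only the configuration travels, so the start-up latency is short.
    new.config = old.config

    # 3. Both routers run at once while the routing is changed so that
    #    the data flow only through the migrated node.
    for flow in routes:
        routes[flow] = target_node

    # 4. The original router can then be shut down.
    old.active = False
    return new

# Example: move a router from a failing node to a healthy one.
r1 = VirtualRouter("node-A", config={"ospf": "area 0"}, active=True)
routes = {"10.0.0.0/24": "node-A"}
r2 = migrate_router(r1, "node-B", routes)
print(routes, r1.active, r2.active)        # traffic now goes through node-B
```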
More generally, we carry out what is known as urbanization: we migrate the virtual machines between physical machines until we obtain optimal performance. Urbanization is widely used to optimize energy consumption or workload distribution, as well as to optimize the cost of the software networks or to make them highly reliable or resilient. For example, in order to optimize energy consumption, we need to bring the virtual machines together on shared nodes and switch off all the nodes that are no longer active. In actual fact, these machines are not shut down but placed on standby, which still consumes a small amount of energy, but only a very small one. The major difficulty with urbanization arises when it is necessary to optimize all the operational criteria at the same time, because they are often incompatible – for example, optimizing consumption and performance simultaneously.
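As a minimal sketch of the energy side of urbanization, the following Python fragment packs virtual machines onto as few physical nodes as possible with a simple first-fit-decreasing heuristic, so that the empty nodes can be placed on standby; the capacity and load figures are invented for the example.

```python
# Sketch: greedy consolidation (first-fit decreasing) of virtual machines onto
# physical nodes, so that unused nodes can be placed on standby.
# Node capacities and VM loads are arbitrary illustrative numbers.

def consolidate(vm_loads, node_capacity, node_count):
    """Pack VMs onto as few nodes as possible; return per-node placements."""
    nodes = [{"used": 0.0, "vms": []} for _ in range(node_count)]
    for vm, load in sorted(vm_loads.items(), key=lambda x: -x[1]):
        for node in nodes:                 # first node with enough headroom
            if node["used"] + load <= node_capacity:
                node["used"] += load
                node["vms"].append(vm)
                break
    return nodes

vms = {"router1": 0.4, "firewall": 0.3, "router2": 0.5, "dpi": 0.2}
placement = consolidate(vms, node_capacity=1.0, node_count=4)

for i, node in enumerate(placement):
    state = "active" if node["vms"] else "standby"   # empty nodes go on standby
    print(f"node {i}: {state}, load {node['used']:.1f}, vms {node['vms']}")
```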
A very important characteristic mentioned earlier is isolation: the software networks must be isolated from one another, so that an attack on one network does not affect the other networks. Isolation is complex because, at the same time, we need to share the common resources and be sure that, at every moment, each network has access to its own resources, negotiated when the software network was established. In general, a token-based algorithm is used. Every virtual device on every software network receives tokens according to the resources attributed to it. For example, on a physical node, ten tokens might be distributed to network 1, five tokens to network 2 and one token to network 3. The networks spend their tokens as they perform certain tasks, such as the transmission of n bytes. At all times, each device keeps its own tokens and is thus guaranteed the minimum data rate determined when the resources were allocated. However, a problem arises if a network has no packets to send, because then it does not spend its tokens. A network may still hold all of its tokens when the other networks have already spent all of theirs. In this case, so as not to immobilize the system, we allocate negative tokens to the other two networks, which can then exceed the usage rate defined when their resources were allocated. When the sum of the remaining tokens less the negative tokens reaches zero, the machine’s basic tokens are redistributed. This enables us to maintain isolation while still sharing the hardware resources. In addition, we can attach a certain priority to a software network while preserving the isolation, by allowing that particular network to spend its tokens as a matter of priority over the other networks. This is a relative priority, because each network can, at any moment, recoup its basic resources.
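The token mechanism is only described here in prose; the following Python sketch is one possible reading of it, with invented class and network names, using the 10/5/1 token allocation of the example above: an active network may go into negative tokens when the others are idle, and the basic tokens are redistributed once the net balance reaches zero.

```python
# Hypothetical sketch of the token-based isolation described in the text;
# the class and network names are illustrative, the 10/5/1 allocation follows
# the example given for one physical node.

class TokenIsolator:
    def __init__(self, allocation):
        self.allocation = dict(allocation)   # basic tokens per software network
        self.tokens = dict(allocation)       # current balance (may go negative)

    def send(self, network, cost=1):
        # Tokens are only spent on real work (e.g. transmitting n bytes),
        # so an idle network keeps its tokens and its guaranteed minimum rate.
        self.tokens[network] -= cost         # negative tokens allow a busy
                                             # network to exceed its share
        # When the remaining tokens less the negative ones reach zero,
        # the node's basic tokens are redistributed to every network.
        if sum(self.tokens.values()) <= 0:
            self.tokens = dict(self.allocation)

iso = TokenIsolator({"net1": 10, "net2": 5, "net3": 1})
for _ in range(8):
    iso.send("net2")         # net2 stays busy and goes into negative tokens
print(iso.tokens)            # {'net1': 10, 'net2': -3, 'net3': 1}
```

In this reading, net1 and net3 still hold their full allocation after net2 has overspent, which is the point of the mechanism: the shared hardware is fully used, yet each network can recover its guaranteed share at any time.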