
outsources almost all of the company’s IT and networks.

      Figure I.7 shows the functions of the different types of Cloud in comparison with the classical model in operation today.


      Figure I.7. The different types of Clouds

      The main issue for a company that operates in the Cloud is security. Indeed, nothing prevents the Cloud provider from scrutinizing the data, or – as happens far more commonly – the data from being requisitioned by the countries in which the physical servers are located, requests with which the providers must comply. The rise of sovereign Clouds is also noteworthy: here, the data are not allowed to cross geographical borders. Most states insist on this for their own data.

      In the techniques that we will examine in detail hereafter, we find SDN (Software-Defined Networking), whereby multiple forwarding tables are defined; only datacenters have sufficient processing power to perform all the operations needed to manage these tables. One of the problems is determining the necessary size of the datacenters, and where to build them. Very roughly, there is a whole range of sizes, from absolutely enormous datacenters with a million servers, to femto-datacenters with the equivalent of only a few servers, and everything in between.
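      To make the idea of a forwarding table concrete, here is a minimal sketch of an SDN-style flow table in Python. It matches only on a (source, destination) pair with a `'*'` wildcard; real tables (e.g. in OpenFlow switches) match many more header fields, and the rule set, addresses and action names below are purely illustrative.

```python
class FlowTable:
    """Toy SDN flow table: ordered rules, first match wins."""

    def __init__(self):
        self.rules = []  # list of ((src, dst), action), most specific first

    def add_rule(self, src, dst, action):
        self.rules.append(((src, dst), action))

    def lookup(self, src, dst):
        # '*' acts as a wildcard, as in OpenFlow-style match fields.
        for (r_src, r_dst), action in self.rules:
            if r_src in (src, '*') and r_dst in (dst, '*'):
                return action
        # Table miss: in SDN, the packet is punted to the central controller.
        return 'send_to_controller'


table = FlowTable()
table.add_rule('10.0.0.1', '10.0.0.2', 'forward_port_3')
table.add_rule('*', '10.0.0.9', 'drop')
```

      A lookup that matches no rule returns `'send_to_controller'`, which is precisely the mechanism by which the centralized controller discussed below learns about new flows.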

      Figure I.8 shows the rise of infrastructure costs over time. We can see that an increase in speed implies a rise in infrastructure costs, whereas the income of telecommunications operators stagnates, partly due to very strong competition to acquire new markets. It is therefore absolutely necessary to find ways to narrow the gap between costs and income. Among other reasons, two aspects are essential to launch a new generation of networks: network automation using an autopilot, and the choice of open source software, in order to reduce the number of network engineers needed and to avoid license costs for commercial software. Let us examine these two aspects before studying the reasons to turn to this new software network solution.

      The automation of network piloting is the very first reason for the new generation. The concept of the autopilot created here is similar to that of a plane’s autopilot. However, unlike a plane, a network is very much a distributed system. To achieve an autopilot, we must gather all knowledge about the network – that is, contextualized information – either in all nodes, if we want to distribute this autopilot, or in a single node, if we want to centralize it. Centralization was chosen for obvious reasons: simplicity, and the avoidance of the network congestion that would be caused by packets carrying this knowledge to every node. This is the most important paradigm of this new generation of networks: centralization. The network is no longer a decentralized system; it becomes centralized. It will be necessary to pay attention to the security of this center by duplicating or triplicating the controller, which is the name given to this central system.

      The controller is the control device that must contain all knowledge about users, applications, nodes and network connections. From there, smart systems will be able to steer packets through the infrastructure to provide the best possible quality of service for all the clients using the network. As we will see later on, the promising autopilot for the 2020s is being finalized: the open source ONAP (Open Network Automation Platform).

      This tendency towards open source software raises questions such as: what will become of network and telecom suppliers if everything comes from open source software? Is security ensured with these millions of lines of code, in which bugs will inevitably occur? And so on. We will answer these questions in Chapter 4, on open source software.

      The rise of this new generation of networks, based on datacenters, has an impact on the energy consumption of the world of ICT. In 2019, this consumption is estimated to account for 7% of the total carbon footprint. However, this proportion is increasing very quickly with the rapid rollout of datacenters and mobile network antennas. By way of example, a datacenter containing a million servers consumes approximately 100 MW. A Cloud provider with 10 such datacenters would consume 1 GW, the equivalent of one unit of a nuclear power plant. This total number of servers has already been reached or surpassed by 10 well-known major companies. Similarly, there are already more than 10 million 2G/3G/4G antennas in the world. Given an average consumption of 1,500 W per antenna (2,000 W for 3G/4G antennas, but significantly less for 2G antennas), this represents around 15 GW worldwide.
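      The arithmetic behind these figures can be checked in a few lines. The only assumption added here is the per-server draw of roughly 100 W, which is what the text’s 100 MW figure for a million-server datacenter implies.

```python
# Back-of-the-envelope check of the energy figures quoted above.
# Assumption (not stated explicitly in the text): ~100 W per server,
# as implied by 100 MW for a one-million-server datacenter.

watts_per_server = 100
servers_per_datacenter = 1_000_000
datacenter_mw = watts_per_server * servers_per_datacenter / 1e6  # megawatts

datacenters = 10
provider_gw = datacenters * datacenter_mw / 1000                 # gigawatts

antennas = 10_000_000
watts_per_antenna = 1500          # average across 2G/3G/4G
antennas_gw = antennas * watts_per_antenna / 1e9                 # gigawatts

print(datacenter_mw, provider_gw, antennas_gw)  # 100.0 1.0 15.0
```

      The totals come out to 100 MW per datacenter, 1 GW for a ten-datacenter provider, and 15 GW for the worldwide antenna park, matching the figures above.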

      Continuing in the same vein, the carbon footprint produced by energy consumption in the world of ICT is projected to reach 20% by 2025 if nothing is done to control the current growth. It is therefore absolutely crucial to find solutions to offset this rise. We will come back to this in the last chapter of this book, but some solutions already exist and are beginning to be used. Virtualization is one good solution: multiple virtual machines are hosted on a common physical machine, and a large number of servers are placed in standby mode (low power) when not in use. Processors also need the ability to drop to very low operating speeds whenever possible. Indeed, power consumption rises strongly with processor speed. When the processor has nothing to do, it should almost stop, and speed up again when the workload increases.
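      The mechanism alluded to here is dynamic voltage and frequency scaling (DVFS). A minimal sketch, assuming the standard simplified CMOS model in which dynamic power is P = C · V² · f and the supply voltage V itself scales roughly with frequency f, so slowing an idle processor saves power superlinearly. The constants below are illustrative, not measurements of any real chip.

```python
def dynamic_power(freq_ghz, capacitance=1.0, volts_per_ghz=0.4):
    """Simplified CMOS dynamic power model: P = C * V^2 * f,
    with the (idealized) assumption that V scales linearly with f."""
    voltage = volts_per_ghz * freq_ghz
    return capacitance * voltage ** 2 * freq_ghz


full_speed = dynamic_power(3.0)   # processor running at 3.0 GHz
near_idle = dynamic_power(0.5)    # throttled down to 0.5 GHz

# Under this model, power scales with f^3: a 6x frequency drop
# yields a 6^3 = 216x reduction in dynamic power.
print(full_speed / near_idle)
```

      This cubic relationship is why letting an idle processor "almost stop", as described above, pays off far more than the frequency reduction alone would suggest; static (leakage) power, ignored here, limits the savings in practice.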


      Figure I.8. Speed of terminals based on the network used

      Mobility also raises the issues of addressing and identification. An IP address can be interpreted in two different ways: for identification, to determine who the user is, and for localization, to determine the user’s position. The difficulty lies in handling these two functions simultaneously. Thus, when a customer moves far enough to leave the sub-network with which he/she is registered, a new IP address must be assigned to the device, which is fairly complex from the point of view of identification. One possible solution, as we can see, is to give two IP addresses to the same user: one reflecting his/her identity and the other his/her location.
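      The two-address idea above is the identifier/locator split (the approach taken, for example, by LISP). A minimal sketch: each device keeps a stable identifier address, while a mapping system records its current locator address, which changes on each move. The addresses and function names below are illustrative, not part of any real protocol implementation.

```python
# Toy identifier/locator mapping: the identifier never changes,
# only the locator is updated when the device moves.

mapping = {}  # identifier -> current locator


def register(identifier, locator):
    """Initial attachment: bind the identity to its first location."""
    mapping[identifier] = locator


def handover(identifier, new_locator):
    """Mobility event: only the locator changes; identity is untouched."""
    mapping[identifier] = new_locator


def resolve(identifier):
    """Correspondents look up the current locator before sending."""
    return mapping.get(identifier)


register('id:2001:db8::42', 'loc:198.51.100.7')
handover('id:2001:db8::42', 'loc:203.0.113.9')  # user moved to a new sub-network
```

      After the handover, `resolve` returns the new locator while the identifier remains stable, which is exactly what spares the identification layer from the address change described above.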
