
Figure 5.5 Gateway DSA engine upward propagation of spectrum sensing information.

      As Figure 5.5 shows, fusion will result in updates in the gateway DSA engine information repository and this can trigger upward updates to the following entities:

      1 The central arbitrator. The gateway DSA engine now sees a different spectrum awareness map than the one it last reported to the central arbitrator DSA engine, so the gateway updates the central arbitrator with the new spectrum map.

      2 The peer gateways. The updates to the central arbitrator DSA engine, which are sent over the same control plane, also reach the peer gateways. A minimal sketch of this upward propagation follows the list.
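      The sketch below illustrates this trigger logic under stated assumptions. The names used (GatewayDSAEngine, ControlPlaneLink, on_fusion, send_update) are hypothetical and not part of any system described in this book; the code only shows the idea of propagating the fused spectrum awareness map upward when it differs from the last reported map.

```python
# Minimal sketch (hypothetical names, not from the book): a gateway DSA engine
# that propagates its fused spectrum awareness map upward only when the map has
# changed since the last update sent to the central arbitrator.

from dataclasses import dataclass, field


class ControlPlaneLink:
    """Stand-in for the control plane toward the central arbitrator DSA engine."""

    def send_update(self, spectrum_map: dict) -> None:
        # Placeholder: the same control-plane message also reaches peer gateways.
        print("Propagating spectrum awareness map upward:", spectrum_map)


@dataclass
class GatewayDSAEngine:
    uplink: ControlPlaneLink
    spectrum_map: dict = field(default_factory=dict)        # band -> occupancy estimate
    last_reported_map: dict = field(default_factory=dict)

    def on_fusion(self, sensing_reports: list) -> None:
        """Fuse local and subordinate sensing reports, then propagate upward if changed."""
        for report in sensing_reports:
            # Simple illustrative fusion rule: keep the worst-case occupancy per band.
            for band, occupancy in report.items():
                self.spectrum_map[band] = max(self.spectrum_map.get(band, 0.0), occupancy)

        if self.spectrum_map != self.last_reported_map:
            # The gateway now sees a different awareness map than the one it last
            # reported, so it updates the central arbitrator DSA engine.
            self.uplink.send_update(dict(self.spectrum_map))
            self.last_reported_map = dict(self.spectrum_map)


gateway = GatewayDSAEngine(uplink=ControlPlaneLink())
gateway.on_fusion([{"band_A": 0.7}, {"band_A": 0.4, "band_B": 0.2}])
```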

      Notice the asynchronous aspects of using cognitive engines, where each cognitive engine works based on the information it collects and decisions occur as a result of fusion. Although these engines are not synchronized in time with each other, they work collectively, sometimes using heuristic algorithms,6 to optimize the use of spectrum resources. Meanwhile, DSA is offered as a set of cloud services at any point of the heterogeneous network hierarchy regardless of the status of the control plane. Even if some spectrum awareness propagation messages are lost, DSA services remain available whenever a service is requested.

      The thread in Figure 5.5 shows one possible gateway cognitive DSA engine flow. Fusion can always trigger an action at any entity, and this action can trigger a message flow that leads to further cognitive DSA engine fusions and triggers. The design of large‐scale DSA systems has to consider the amount of control traffic that can be generated from all of this information propagation upward, downward, and to peers. The design has to consider the tradeoffs mentioned in this book, which include the thresholds that trigger information dissemination. These thresholds must be selected and updated dynamically based on bandwidth availability.
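      As one illustration of such a dynamic threshold, the sketch below scales the minimum reportable change in band occupancy inversely with the available control-plane bandwidth, so that only large changes are propagated when bandwidth is scarce. The function names and constants are illustrative assumptions, not values from the book.

```python
# Illustrative sketch (assumed constants): adapt the dissemination threshold to
# the control-plane bandwidth currently available.

def dissemination_threshold(available_bw_kbps: float,
                            min_threshold: float = 0.05,
                            max_threshold: float = 0.50,
                            reference_bw_kbps: float = 64.0) -> float:
    """Return the minimum change in band occupancy that triggers an update."""
    # Scale the threshold inversely with available bandwidth, clamped to a range.
    scale = min(1.0, available_bw_kbps / reference_bw_kbps)
    return max_threshold - (max_threshold - min_threshold) * scale


def should_propagate(old_occupancy: float, new_occupancy: float,
                     available_bw_kbps: float) -> bool:
    return abs(new_occupancy - old_occupancy) >= dissemination_threshold(available_bw_kbps)


# Example: a 0.1 occupancy change is propagated on a healthy link but suppressed
# on a congested one.
print(should_propagate(0.3, 0.4, available_bw_kbps=64.0))   # True
print(should_propagate(0.3, 0.4, available_bw_kbps=8.0))    # False
```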

      A cloud service has to be evaluated by certain metrics per the NIST explanations of cloud services. The previous chapters introduced metrics in areas such as the time between detecting interference (the service request) and the time of overcoming interference, and showed how this time as a metric can depend on the entity hierarchy and the specific mix and match of this hybrid approach to DSA. It is important to note that with IaaS, there may be direct metrics and indirect metrics. For example, control traffic volume may not be measured directly as a metric, because throughput efficiency or quality of service (QoS) metrics can indirectly capture the impact of control traffic volume.7 Intuitively, the lower the control traffic volume impact, the higher the throughput efficiency achieved, and thus measuring throughput efficiency indirectly measures the impact of control traffic volume.
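      A minimal numerical example of this indirect measurement: throughput efficiency computed as the fraction of delivered bytes that carry user traffic. The byte counts below are illustrative assumptions only.

```python
# Hedged example of an indirect metric: throughput efficiency computed from user
# traffic and DSA control traffic volumes. Control traffic is never measured
# directly; its impact shows up as reduced efficiency.

def throughput_efficiency(user_bytes: int, control_bytes: int) -> float:
    """Fraction of delivered bytes that carry user traffic."""
    total = user_bytes + control_bytes
    return user_bytes / total if total else 0.0


# A heavier control plane lowers the efficiency metric, so tracking efficiency
# indirectly tracks control traffic volume impact.
print(throughput_efficiency(user_bytes=9_500_000, control_bytes=500_000))    # 0.95
print(throughput_efficiency(user_bytes=9_500_000, control_bytes=1_500_000))  # ~0.86
```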

      5.3.1 DSA Cloud Services Metrics Model

      Figure 5.6 shows the DSA cloud services model, which consists of three stages:

      1 The pre‐run stage, which defines the service and the service agreement.

      2 The runtime stage, represented by the gray rectangle, where the service is monitored and enforced to meet the service agreement.

      3 The post‐processing stage, where service accountability is measured.

Figure 5.6 DSA cloud services model.

      The first step in applying the Figure 5.6 model to DSA services is simple. One of the DSA cloud service metrics, such as response time, is selected. In typical cloud services, customer response time can be defined in the service agreement, and the customer can know the response time before purchasing the service. DSA also needs a service agreement. The definition of a service agreement with DSA can be derived from system requirements and analysis. Response time can be defined as the time between an entity reporting interference above a certain threshold (one that can cause connectivity to be lost or bandwidth to drop below a certain value) and the time a new frequency band is assigned to overcome the interference.
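      A minimal sketch of measuring this response time metric is shown below. The interface names (on_interference_report, on_frequency_assigned) and the agreement parameters are hypothetical placeholders; the code simply timestamps the interference report and checks the elapsed time against the agreed bound when a new frequency is assigned.

```python
# Sketch with hypothetical interfaces: measure DSA response time as the interval
# between an interference report crossing the agreed threshold and the moment a
# new frequency assignment is issued.

import time
from typing import Optional

INTERFERENCE_THRESHOLD_DB = -85.0   # assumed agreement parameter, illustrative only
AGREED_RESPONSE_TIME_S = 2.0        # assumed agreement parameter, illustrative only


class ResponseTimeMonitor:
    """Measures the interval between an interference report and a new assignment."""

    def __init__(self):
        self._pending = {}   # node id -> timestamp of the triggering interference report

    def on_interference_report(self, node_id: str, interference_db: float) -> None:
        if interference_db > INTERFERENCE_THRESHOLD_DB:
            self._pending[node_id] = time.monotonic()

    def on_frequency_assigned(self, node_id: str) -> Optional[float]:
        """Return the measured response time, or None if no request was pending."""
        start = self._pending.pop(node_id, None)
        if start is None:
            return None
        response_time = time.monotonic() - start
        if response_time > AGREED_RESPONSE_TIME_S:
            print(f"{node_id}: service agreement violated ({response_time:.2f} s)")
        return response_time
```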

      The gray rectangle in Figure 5.6 shows the runtime aspects of a cloud service, where the service is monitored and policies, rule sets, and configuration parameters are adapted in order to keep the service adhering to the defined agreement. With DSA as a cloud service, policy, rule set, and configuration parameter updates can be triggered by the DSA cognitive engine resource monitor, as shown in Figure 5.2, or by the decision maker, as shown in Figure 5.3. These actions are taken to make the service adhere to the service agreement during runtime. With DSA as a service, the design can create log files that are analyzed in post processing in order to evaluate DSA service accountability over a long period of time.
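      The sketch below illustrates, under assumed names and a placeholder adaptation rule, how a single runtime sample can be both logged for post processing and used to trigger a rule‐set update when the agreement is violated. It is not the book's design, only an example of the enforcement‐plus‐logging idea.

```python
# Minimal sketch (assumed names): the runtime stage of Figure 5.6. Each measured
# sample is logged for post processing, and a simple rule-set update is triggered
# when the service drifts away from the agreement.

import json
import time

AGREED_RESPONSE_TIME_S = 2.0   # assumed agreement parameter, illustrative only


def enforce_and_log(sample_response_time_s: float,
                    rule_set: dict,
                    log_path: str = "dsa_service_log.jsonl") -> dict:
    """Log one runtime sample and adapt the rule set if the agreement is violated."""
    # Log every sample so post-processing can assess service accountability later.
    record = {"t": time.time(), "response_time_s": sample_response_time_s}
    with open(log_path, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")

    # Illustrative adaptation rule: if the agreement is violated, halve the
    # dissemination threshold so spectrum changes propagate sooner.
    if sample_response_time_s > AGREED_RESPONSE_TIME_S:
        new_threshold = rule_set.get("dissemination_threshold", 0.2) * 0.5
        rule_set = {**rule_set, "dissemination_threshold": new_threshold}
    return rule_set


rules = {"dissemination_threshold": 0.2}
rules = enforce_and_log(2.7, rules)   # violation -> threshold tightened to 0.1
```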

      With DSA as a set of cloud services, while service agreements can be derived from system requirements, the design of DSA cloud services has to create metrics that help force the services to conform to the agreement and use metrics to measure service accountability. The cognitive engine‐based design has to develop an understanding of the properties of the metrics used to force the service to adhere to the service agreement, and scripted scenarios must be used to assess service accountability before system deployment.

      5.3.2 DSA Cloud Services Metrology

      Metrology is the science of measurement. With cloud services, we need to create measurements to:

      1 quantify or measure properties of the DSA cloud services

      2 obtain a common understanding of these properties.

      A DSA cloud services metric provides knowledge about a DSA service property. Collected measurements of the metric help the DSA cognitive engine estimate that property during runtime. Post‐processing analysis can provide further knowledge of the metric property.
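      As one possible runtime estimator, the sketch below maintains an exponentially weighted moving average of a measured metric (here, response time), leaving deeper analysis of the full sample set to post processing. The class name and smoothing factor are illustrative assumptions.

```python
# Illustrative sketch: a cognitive engine keeps a running estimate of a metric's
# property (here, mean response time) from collected measurements.

class MetricEstimator:
    """Exponentially weighted moving average of a DSA service metric."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha       # smoothing factor, illustrative assumption
        self.estimate = None

    def update(self, measurement: float) -> float:
        if self.estimate is None:
            self.estimate = measurement
        else:
            self.estimate = self.alpha * measurement + (1 - self.alpha) * self.estimate
        return self.estimate


estimator = MetricEstimator()
for sample in (1.8, 2.4, 1.9, 2.1):          # measured response times in seconds
    current = estimator.update(sample)
print(f"Runtime estimate of mean response time: {current:.2f} s")
```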

      It is important to look at DSA cloud services metrics not as measurements of software properties. DSA cloud services metrology measures physical aspects of the services, not functional aspects. The designer of DSA as a set of cloud services should be able to provide measurable metrics such that a service agreement can be created and evaluated during runtime and in post processing. Since the model used here is hierarchical, a metric used at different layers of the hierarchy is evaluated differently at each layer. For example, as explained in Chapter 1, response time when providing DSA as a local service should be less than the response time when providing DSA as a distributed cooperative service, which in turn is less than the response time when providing DSA as a centralized service.

      With the concept of providing DSA as a set of cloud services, the design should be able to go through an iterative process before the model is deemed workable. The design should include the following steps (a sketch of this iterative loop follows the list):

      1 Create an initial service agreement derived from requirements and design analysis.

      2 Run scripted scenarios to evaluate how the agreement is met during runtime through created metrics.

      3 Run post‐processing analysis of these scripted scenarios to gain further knowledge of the properties of the selected metrics.

      4 Refine the service agreement.
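      A compact sketch of this iterative loop is given below. The functions run_scripted_scenarios and post_process are placeholder stubs standing in for real scenario execution and post‐processing analysis, and the sample numbers are illustrative only.

```python
# Sketch of the iterative refinement loop, with stubbed evaluation steps.

def run_scripted_scenarios(agreement: dict) -> list:
    # Placeholder: would exercise the DSA service with scripted interference
    # scenarios and return measured response times in seconds.
    return [1.8, 2.4, 2.0]


def post_process(agreement: dict, response_times: list) -> dict:
    # Placeholder post-processing analysis of the scripted-scenario logs.
    worst = max(response_times)
    return {"agreement_met": worst <= agreement["response_time_s"],
            "achievable_response_time_s": worst}


def refine_dsa_service_agreement(initial_agreement: dict, max_iterations: int = 5) -> dict:
    agreement = dict(initial_agreement)                                 # step 1
    for _ in range(max_iterations):
        measurements = run_scripted_scenarios(agreement)                # step 2
        analysis = post_process(agreement, measurements)                # step 3
        if analysis["agreement_met"]:
            break
        agreement["response_time_s"] = analysis["achievable_response_time_s"]  # step 4
    return agreement


print(refine_dsa_service_agreement({"response_time_s": 2.0}))
```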
