
a three-level hybrid approach to detect intrusions. The first level of the system is a signature-based approach that filters known attacks using the black list concept. The second level is an anomaly detector that uses the white list concept to distinguish normal traffic from the attack traffic that passed through the first level. The third level uses support vector machines to classify the unknown attack traffic. The success of a hybrid method depends on many factors, notably the size of the learning sample, the choice of the base classifier, the exact manner in which the training set is modified, the choice of the combination method and, finally, the data distribution and the potential capacity of the base classifier chosen for solving the problem (Rokach 2010).
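      The pipeline just described can be summarized in a short sketch. The following Python code is only a minimal illustration of the three-level idea, not the implementation evaluated in the cited work: the signature format, the flow features, the choice of a one-class SVM as the white-list anomaly detector and every parameter value are assumptions made for the example.

import numpy as np
from sklearn.svm import OneClassSVM, SVC

class HybridIDS:
    """Minimal sketch of the three-level hybrid detector described above."""

    def __init__(self, signatures, normal_traffic, attack_traffic, attack_labels):
        # Level 1: black list of known attack signatures (hypothetical
        # format: hashable tuples of flow features).
        self.blacklist = set(signatures)
        # Level 2: white-list anomaly detector fitted on normal traffic only;
        # a one-class SVM is one possible choice, assumed here.
        self.profile = OneClassSVM(kernel="rbf", nu=0.05).fit(normal_traffic)
        # Level 3: multi-class SVM trained on labeled attack traffic, used to
        # categorize flows that reach the last level.
        self.classifier = SVC(kernel="rbf").fit(attack_traffic, attack_labels)

    def inspect(self, flow):
        # Level 1: filter known attacks by exact signature matching.
        if tuple(flow) in self.blacklist:
            return "known attack"
        # Level 2: flows matching the normal profile (+1) are let through.
        if self.profile.predict([flow])[0] == 1:
            return "normal"
        # Level 3: classify the remaining, unknown attack traffic.
        return "unknown attack, class %s" % self.classifier.predict([flow])[0]

# Toy usage with two-feature flows; all values are purely illustrative.
rng = np.random.default_rng(0)
ids = HybridIDS(
    signatures=[(9.9, 9.9)],
    normal_traffic=rng.normal(0.0, 1.0, size=(200, 2)),
    attack_traffic=rng.normal(5.0, 1.0, size=(50, 2)),
    attack_labels=rng.integers(0, 2, size=50),
)
print(ids.inspect([0.1, -0.2]))  # expected: "normal"
print(ids.inspect([9.9, 9.9]))   # expected: "known attack"

      Each level only sees the traffic that the previous level could not settle: signature matching cheaply removes known attacks, the normal-traffic profile passes legitimate flows, and the comparatively expensive SVM classifier is reserved for the residual, unknown attack traffic.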

      AI is a dual-use domain. AI systems and the manner in which they are designed can serve both civilian and military purposes and, more broadly, beneficial or harmful purposes. Given that certain tasks requiring intelligence are benign while others are not, AI is double edged in the same way that human intelligence is. Researchers in the field of AI cannot avoid producing systems that can serve harmful purposes. For example, the difference between the capacities of an autonomous drone used for delivering parcels and those of an autonomous drone used for delivering explosives is not necessarily very wide. Moreover, fundamental research aiming to improve our comprehension of AI, its capacities and its control seems to be inherently dual-use.

      AI and machine learning have an increasingly important impact on the security of citizens, organizations and states. Misuse of AI will affect the way in which we build and manage our digital infrastructure, as well as the design and distribution of AI systems, and will therefore probably require an institutional policy response. It is worth noting that the threats posed by AI misuse have been highlighted in heavily publicized contexts (for example, during a Congress hearing (Moore and Anderson 2012), a workshop organized by the White House and a report of the US Department of Homeland Security).

      The increasing use of AI for developing cyberattack techniques, combined with the lack of adequate defenses, has three major consequences.

      For many known attacks, the progress of AI is expected to enlarge the set of actors capable of conducting the attack, the speed at which they can conduct it and the set of possible targets. This is a consequence of the efficiency, scalability and ease of dissemination of AI systems. In particular, the dissemination of efficient intelligent systems can increase the number of actors who can afford specific attacks. If reliable intelligent systems are also scalable, then even actors who already have the resources required to conduct these attacks may acquire the capacity to execute them at a much faster pace.

      An example of a threat that is likely to evolve in this manner is the phishing attack. These attacks use personalized messages to obtain sensitive information or money from their victims. The attacker often poses as one of the target's friends, colleagues or professional contacts. The most advanced phishing attacks require a significant amount of skilled labor, as the attacker must identify high-value targets, research their social and professional networks, and then generate messages that are plausible to the target.

      1.4.2. Introduction of new threats

      AI progress will enable new varieties of attacks. These attacks may use AI systems to perform certain tasks more successfully than any human could.

      Because the capacities of intelligent systems are not limited to those of humans, they could enable actors to conduct attacks that would otherwise be impossible. For example, most people are unable to imitate other people's voices convincingly, or to manually create audio files resembling recordings of human speech. Yet significant progress has recently been made in developing speech synthesis systems that learn to imitate human voices. Such systems would in turn enable new methods of spreading disinformation and impersonating others.

      Moreover, AI systems could be used to control aspects of malware behavior that would be impossible to control manually. For example, a virus designed to modify the behavior of air-gapped computers, as in the case of the Stuxnet program used to disrupt the Iranian nuclear program, cannot receive commands once these computers are infected. Similar communication constraints also arise under water and in the presence of signal jammers.

      Properties of AI such as efficiency, scalability and capacities surpassing those of humans may enable attacks that are both large-scale and finely targeted. Attackers often face a trade-off between the frequency and extent of their attacks on the one hand and their efficiency on the other. For example, spear phishing is more effective than classical phishing, which does not involve adapting messages to individuals, but it is relatively costly and cannot be conducted en masse. More generic phishing attacks remain profitable despite their very low success rates, simply because of their scale. By improving the frequency and scalability of certain attacks, including spear phishing, AI systems can relax these trade-offs. Moreover, properties such as efficiency and scalability, particularly in the context of target identification and analysis, also enable finely targeted attacks. Attackers are often interested in adapting their attacks to the characteristics of their targets, aiming at targets with certain properties, such as significant assets or an association with certain political groups. Nevertheless, attackers must often strike a balance between the efficiency and scalability of their attacks and the precision of their targeting. A further example is the use of drone swarms that deploy facial recognition technology to kill specific individuals in a crowd, instead of resorting to less targeted forms of violence.
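      The trade-off described above can be made concrete with a back-of-the-envelope calculation. Every number in the following sketch is an assumption chosen purely for illustration; none comes from the text.

gain_per_success = 500.0   # assumed average return per compromised victim

# Generic phishing: huge volume, tiny success rate, near-zero unit cost.
mass_profit = 1_000_000 * (0.001 * gain_per_success - 0.001)
print(mass_profit)   # 499000.0: scale offsets the low success rate

# Spear phishing: small volume, high success rate, costly manual labor.
spear_profit = 100 * (0.1 * gain_per_success - 40.0)
print(spear_profit)  # 1000.0: profitable per message, but labor caps the scale

      In these terms, automating target research and message generation pushes the unit cost of spear phishing toward that of mass mailing while retaining its higher success rate, which is exactly the relaxation of the trade-off described above.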

      Cyberattacks are increasingly alarming in both complexity and quantity, partly as a consequence of a lack of awareness and understanding of actual security needs. This lack of support explains the insufficient momentum, attention and willingness to commit funds and resources to cybersecurity in many organizations. In order to limit the impact of cyberattacks, the following recommendations have been made (Brundage et al. 2018):

       – decision-makers should closely cooperate with technical researchers to study, prevent and limit the potential misuse of AI;

       – researchers and engineers in the AI field should take the dual-use nature of their work seriously, by allowing misuse-related considerations to influence research priorities and norms, and by proactively reaching out to the relevant actors when harmful applications are foreseeable;

       – public authorities should actively try to broaden the range of stakeholders and domain experts who are involved in discussions of these challenges.

