
In a traditional sense, cybersecurity focuses largely on the protection of data via access controls, encryption, and other methods. The idea is that when one machine sends data, only the intended recipients get to see it, and it is not modified between transmission and receipt. To put things into a physical perspective, think of mailing a letter. If one sends a postcard, there is nothing stopping every person who handles it from reading what is written on the card. If instead that letter is put in an envelope, outsiders can no longer freely read it. To take it one step further, what if there is a concern about someone opening the envelope and reading the letter? In that case, the letter can be placed in a locked box and sent to the receiver, who holds the key (this is how encryption works). The more important and private the letter, the more protection is placed on it, ensuring that the receiver is the only one to read it.
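
      To make the locked-box analogy concrete, the following is a minimal sketch of symmetric encryption in Python, assuming the third-party cryptography package; the letter text and variable names are illustrative and not drawn from the chapter.

    # Locked-box analogy: only the holder of the key can read the letter.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()    # the physical key, shared only with the receiver
    box = Fernet(key)              # the locked box

    letter = b"Dear recipient, this note is private."
    sealed = box.encrypt(letter)   # anyone may carry the box; no one can read inside

    # Only someone holding the key can open the box and recover the letter.
    assert box.decrypt(sealed) == letter

      As in the analogy, intercepting the sealed message reveals nothing useful without the key.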

      This closed‐off system poses a problem for AI cybersecurity. Gideon explains that for AI systems to function as designed, they need access to steady streams of data [5]. Going back to the letter analogy, bringing AI into the mix means the algorithms would need a special way to get inside the box, read the letter, and extract information from it. If an attacker were to discover these access points, they could exploit them by altering the data or causing the algorithm to behave in unintended ways.

      Whenever an innovation in tech disrupts the status quo, a similar pattern follows. Moisejevs explains in his article how, after the PC, Mac, and smartphone were developed, usage rose rapidly as they became more popular and more uses were found for them; shortly afterward, malware for these systems grew at a similar pace [6]. ML and AI are transitioning from infancy to the growth phase, which means that in the next few years we will continue to see more applications of AI in healthcare products, both in sheer numbers and in the depth of the roles they play. As AI deployments multiply, so too will malware, and in the healthcare setting this could have devastating effects on patients.

      When these concerns about malware are applied to healthcare, it is best to view them in two categories, much as AI itself is viewed: a digital side dealing with data, patterns, and ML, and a physical side. On the digital side, the primary concern is the protection of data, because every decision an AI makes stems from having reliable data. For example, many HCPs use AI to help diagnose patients. If a patient comes in and has various scans and tests performed, unreliable data may cause the patient to be misdiagnosed, or possibly not diagnosed at all. Another example comes from the EMR. If a patient is on chronic medication and the data is corrupted, the system might misinterpret the pattern and conclude that the patient recently picked up their medication from the pharmacy when in fact they are due for a refill. If this happens, insurance will not pay for another refill because, according to the EMR, the patient still has plenty of medication.
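
      One way to guard against the corrupted-refill scenario above is an integrity check on each record. The sketch below (in Python; the field names and secret key are hypothetical, not taken from any EMR product) attaches an HMAC tag so that a corrupted or tampered record no longer verifies and can be flagged rather than trusted.

    # Detecting corrupted EMR data with an HMAC integrity tag.
    import hashlib
    import hmac
    import json

    SECRET_KEY = b"emr-integrity-key"   # hypothetical; a real EMR would manage this securely

    def tag(record: dict) -> str:
        """Compute an integrity tag over a canonical form of the record."""
        payload = json.dumps(record, sort_keys=True).encode()
        return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

    record = {"patient": "12345", "drug": "lisinopril", "last_refill": "2023-01-02"}
    stored_tag = tag(record)

    # If the refill date is corrupted, the tag no longer matches, so the
    # system can flag the record instead of silently misreading the pattern.
    record["last_refill"] = "2023-06-01"
    assert not hmac.compare_digest(stored_tag, tag(record))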

      The key takeaway is that cybersecurity is a critical aspect of this growing AI field. Unsecured AI puts users, patients, and HCPs at unnecessarily high risk. In healthcare, the path from a cyber threat to lives being at stake is the shortest. While other fields may experience this as well, healthcare, by its nature, will experience it more often and with less time to respond. It is imperative that new designs and innovations in this field be built with cybersecurity from day one, rather than have it added on later.

      So, what is next? Based on Moisejevs' graphs, we can predict that AI will continue to grow exponentially over the next several years [6]. However, the path it takes will be determined by the innovators behind the technology. The US government has long considered America to be at the forefront of innovation, driving technological advances [4]. Yet other countries are working hard and in some areas are surpassing American innovation. To keep American innovation competitive, the US government has laid out a framework it believes is necessary for the future.

      In 2019, the National AI R&D Strategy received an update covering ground that did not exist when it was first published in 2016: the partnership between the US federal government and outside sources. These partnerships fall into four main categories: individual project‐based collaborations; joint programs to advance open, precompetitive, fundamental research; collaborations to deploy and enhance research infrastructure; and collaborations to enhance workforce development, including broadening participation [4]. These areas all strive to advance AI by linking universities and students with industry partners to yield real results.

      Several US federal agencies have already embraced these partnerships. These include, but are not limited to, the Defense Innovation Unit (DIU), the National Science Foundation (NSF), the Department of Homeland Security (DHS) with its Silicon Valley Innovation Program (SVIP), and the Department of Health and Human Services (HHS) [4]. The fact that HHS is already establishing partnerships makes clear that AI and healthcare will continue to grow together and be of great importance. The main goal of the HHS partnerships is to develop new AI pilot products and to establish research into AI and deep neural networks to further AI's uses in the healthcare field.

      To some degree, it is possible to predict what is coming for AI by looking at the current trajectory and extrapolating from it. However, this extrapolation is extremely limited. Deep neural networks existed as a basic structure for AI back in the 1980s; it was not until recently, when enough data and computational capability became available, that they became practical [4]. Without knowing which technological advances will disrupt the status quo or become available, it is impossible to predict the far future of AI. This is the underlying importance of being a top innovator and researcher in the subject, so that the United States may be first with the latest and greatest AI applications.
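
      To show what such trajectory-based extrapolation looks like in practice, and why it is so limited, here is a minimal Python sketch; the yearly application counts are invented purely for illustration. It fits exponential growth with a linear fit in log space and projects forward, blindly assuming the trend never breaks.

    # Trend extrapolation: fit exponential growth via a linear fit in log space.
    import numpy as np

    years = np.array([2015, 2016, 2017, 2018, 2019])
    apps = np.array([40, 65, 110, 180, 300])   # hypothetical counts of AI applications

    slope, intercept = np.polyfit(years, np.log(apps), 1)

    def project(year: int) -> float:
        """Project the fitted exponential trend to a future year."""
        return float(np.exp(slope * year + intercept))

    print(round(project(2022)))   # a disruptive advance (or setback) would
                                  # invalidate this projection entirely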

      When considering the intersection of AI, cybersecurity, and the healthcare industry, we have seen that a myriad of problems exist today, with more coming down the pipeline. To be prepared, several issues must be addressed. The first is also the hardest to resolve: the morality of AI. Morality is a fluid topic that changes not only over time but also with who is viewing the moral issue. It is therefore recommended that an international organization preside over AI and over which morals are implemented in current and future systems. An international organization would allow voices from all nations to be heard so that the best possible options can be decided upon. The reason this issue is the hardest to resolve is the conflict between morality and legislation: an international organization acting as a ruling body risks impeding growth, and legislation cannot keep pace with AI's rapid spread and advancement.
