The other large area of AI in healthcare has to do with a patient's electronic medical records (EMRs). Previously, a patient's chart was a physical stack of paper containing the patient's history. Collaboration between HCPs meant literally sharing these papers, either in person or by scanning and sending the file. Modern medicine, however, has moved away from these paper systems to EMRs. One benefit of applying AI and ML to EMRs is the ability to identify family members who are likely to suffer from hereditary disease [3]. In addition, EMRs in combination with AI allow HCPs to be more efficient by enabling real‐time sharing of data for collaboration among colleagues. Since AI is used heavily to identify patterns, it could also be used to predict a patient's pharmaceutical needs and to identify pharmaceutical abuse. If a patient takes medication for a chronic ailment on a regular basis, the AI could identify when the patient's supply is likely to run low and start a refill process with the pharmacy or automatically generate a request for additional refills to the HCP, helping prevent lapses in the patient's medication. Likewise, if a patient is on a highly regulated form of medication, for instance an opioid‐based product, AI could monitor this across the board: even if the patient is getting multiple prescriptions from multiple HCPs and filling them at a variety of pharmacies, as long as they are all connected to the patient's EMR, the AI would be able to identify the abuse of pharmaceutical products.
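To make the refill and abuse‐detection ideas above concrete, the sketch below shows, in simplified form, the kind of rule‐based checks an EMR‐connected system might start from. The Prescription record, the five‐day refill lead time, and the "more than one prescriber or pharmacy" heuristic are illustrative assumptions rather than features of any particular EMR product; a deployed system would rely on learned models and far richer data.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical, simplified prescription record as it might appear in an EMR feed.
@dataclass
class Prescription:
    patient_id: str
    drug: str
    is_controlled: bool      # e.g. opioid-based products
    days_supply: int         # days the dispensed quantity should last
    filled_on: date
    prescriber: str
    pharmacy: str

def needs_refill(rx: Prescription, today: date, lead_days: int = 5) -> bool:
    """Flag a medication whose supply will run out within `lead_days`."""
    runs_out = rx.filled_on + timedelta(days=rx.days_supply)
    return today >= runs_out - timedelta(days=lead_days)

def flag_possible_abuse(history: list[Prescription]) -> list[str]:
    """Flag patients holding controlled prescriptions from more than one
    prescriber or pharmacy, a crude proxy for the cross-pharmacy pattern
    described above."""
    by_patient: dict[str, list[Prescription]] = {}
    for rx in history:
        if rx.is_controlled:
            by_patient.setdefault(rx.patient_id, []).append(rx)
    flagged = []
    for patient, rxs in by_patient.items():
        prescribers = {rx.prescriber for rx in rxs}
        pharmacies = {rx.pharmacy for rx in rxs}
        if len(prescribers) > 1 or len(pharmacies) > 1:
            flagged.append(patient)
    return flagged

if __name__ == "__main__":
    today = date(2021, 6, 1)
    history = [
        Prescription("p1", "oxycodone", True, 30, date(2021, 5, 20), "dr_a", "pharm_x"),
        Prescription("p1", "oxycodone", True, 30, date(2021, 5, 25), "dr_b", "pharm_y"),
        Prescription("p2", "metformin", False, 90, date(2021, 3, 5), "dr_c", "pharm_x"),
    ]
    print([rx.drug for rx in history if needs_refill(rx, today)])  # ['metformin']
    print(flag_possible_abuse(history))                            # ['p1']
```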
Earlier, it was noted that HCPs are often overworked and understaffed. In some cases, AI has moved beyond the virtual aspect of diagnostics and made its way into the physical world, where it is often represented by carebots. These carebots can perform a variety of functions, including providing company for the elderly or delivering medication to patients [3]. Although AI and robotics are making large steps forward, it will likely still be a long time before AI carebots can perform more than simple, basic tasks or become widespread across healthcare. Carebots are not the only physical presence of AI in healthcare: AI is also assisting HCPs in surgical matters via robots and, in some cases, is even performing surgical procedures by itself [3]. These solo procedures are small, basic surgeries, but the AI is still performing the process without the control of an HCP.
Based on the current trajectory of AI and carebots, it is possible to make some assumptions about what might become the norm in the near future. Since AI is used to identify patterns, it stands to reason that a carebot could ease HCPs' burden in large facilities such as hospitals by identifying when it is time for patients to take medicine, be changed, or receive new equipment. Hospital carebots could identify and perform these simple tasks, allowing HCPs to focus on where they are needed most. In addition, with the growing development of smart cities, a carebot designed to ease mobility could connect to the smart city and not only provide GPS‐like navigation but also communicate with traffic lights and crossing signals to help the patient reach their destination safely.
1.4 Morality and Ethical Association of AI in Healthcare
Ethics and morality have a large impact on AI in healthcare. Whereas science, technology, and innovation limit what is possible, ethics and morality limit what should be done. Just because the ability to do something exists does not mean that it should be done. HCPs and AI programmers are attempting to incorporate ethics and morality into ML and final AI products by implementing standards [1]. According to Bryson, these standards are set by consensus of the masses: if the majority of people feel that something is the ethical or moral thing to do, then it becomes the standard. This is obviously highly flawed, as the standard can change based on cultural or societal needs and wants. If the standards are based on an extremist point of view, for instance that of the Taliban, does that mean that these AI carebots should refuse to assist someone they identify as a woman, a person of color, or a member of a foreign ethnic group?
Human beings are flawed creatures to begin with; just because there is a consensus on how to go about doing something does not necessarily mean that it is right. Consider the Salem witch trials. At the time, it was the consensus of the majority that women deemed to be practicing witchcraft be executed to protect the rest of the village. If the goal of AI is to have autonomous robots that function completely independently of a user, taking in new information and learning from it, how do we instill ethics into them?
The Institute of Electrical and Electronics Engineers (IEEE) has attempted to overcome this barrier by listening to as many voices in AI as possible and unifying them. The IEEE's Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems considers various voices, communities, and schools of thought and then translates them into education and training. IEEE strives to “prioritize ethical considerations in the design and development of autonomous and intelligent system” [1]. However, is that enough? Rigby does not believe so. In fact, he believes that “current policy and ethical guidelines for AI technology are lagging behind the progress AI has made in the healthcare field” [2]. When we use Moore's law to view the pace of technological innovation, we see that roughly every 18–24 months the capability of technology doubles. This pace cannot be matched by those who implement public policy or guidelines. The result is that new technology and innovation are being implemented in ways that are ethically agnostic, ethically askew, or somewhere in between; there is no standard.
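As a rough illustration of this pacing problem, the short snippet below computes how much technological capability would grow over a single policy cycle if it doubles every 18–24 months; the five‐year cycle length is an assumption chosen only for illustration, not a figure from the text.

```python
# Capability growth over a hypothetical policy cycle under Moore's-law doubling.
policy_cycle_months = 60  # assumed five-year policy/guideline cycle
for doubling_period in (18, 24):
    growth = 2 ** (policy_cycle_months / doubling_period)
    print(f"doubling every {doubling_period} months -> ~{growth:.1f}x growth "
          f"over {policy_cycle_months} months")
# doubling every 18 months -> ~10.1x growth over 60 months
# doubling every 24 months -> ~5.7x growth over 60 months
```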
Previously, as tools developed, they relied on humans to wield them. The hammer, the gun, and the computer are all examples of tools that required a human, so the human who used these tools was responsible for the outcomes. If you shoot a gun, the death that results is not the responsibility of the firearm's creator but rather falls on the person who pulled the trigger. Computers were no different: despite being able to perform tasks faster than any human, they depend on human input and direction and will not perform tasks without that guidance. However, AI is different. It has already been established that AI is able to perform tasks such as a surgical procedure without human aid. What happens when AI is faced with an ethical dilemma? Consider a hypothetical carebot that was programmed with ethics in mind and tasked with relieving the patient's pain and suffering as much as possible. If this carebot is set to take care of a terminally ill patient who is in constant pain, where does it draw the line? From a purely logical point of view, if the patient is so far gone that they are always in pain, the only way to relieve that pain is to terminate the patient. We are now entering back into the realm of science fiction, but the reality is that these are things that need to be considered today.
Another ethical dilemma that should be considered involves the use of ML. If ML's goal is to identify and capture patterns, what happens when it identifies a pattern within its own coding as unethical or immoral? This goes back to the idea that humans are flawed: depending on who created the ethical framework, and where and when it was created, that framework could itself be flawed. At that point, the AI is at a crossroads. Does it defy the limitations imposed on it by its programmed ethics, or does it continue doing something that it perceives to be ethically wrong? With the rate of technological innovation, these questions must be addressed and answered sooner rather than later.
1.5 Cybersecurity, AI, and Healthcare
There is no doubt that in this era of technological advancement and innovation, data is the currency that powers everything. Data can take any number of forms, but it is the driving force behind the decisions being made behind closed doors. Even in the case of AI, this holds true. Gibian discusses in his article how having more data allows AI to be improved and become more accurate, even surpassing human performance [5]. We see how data drives ML to make better‐informed decisions and produce better results. However, what happens when that data is corrupted or unavailable? How do we even know whether it is reliable? The answer to these questions centers on cybersecurity: securing our devices and protecting our data.