

      Mention artificial intelligence, and you’ll get all kinds of reactions, from cartoon fantasies of Rosie, the Jetsons’ robot maid, to the dystopian cityscape of Ridley Scott’s Blade Runner, James Cameron’s The Terminator, or Michael Crichton’s Westworld.

      These representations of AI are examples of artificial general intelligence, also called pure AI. From the beginning, at the Dartmouth workshop, the pioneers of AI aimed for the stars, asserting that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” By the time AI filtered down to the masses, however, it presented itself in humbler forms.

      These days, AI is pervasive, but it slips past unnoticed because it is not a manifestation of the romantic vision of writers. Instead, it is eminently practical. Pragmatic. Useful.

      Like good design, good AI is invisible. When done right, both remove some of the friction from daily life. In fact, you have probably been using artificial intelligence for longer than you realize.

      ELIZA

      In the mid-1960s, Joseph Weizenbaum at the Massachusetts Institute of Technology Artificial Intelligence Laboratory developed ELIZA, a natural-language processing program that converses in the style of a psychologist asking questions based on previous responses. With the advent of the personal computer, ELIZA escaped the MIT lab and ventured into people’s homes. You can still find implementations online.
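ELIZA worked by matching the user's words against a script of keyword patterns and reflecting them back as questions. A minimal sketch of that pattern-and-reflect loop, with invented rules rather than Weizenbaum's original DOCTOR script:

```python
import re

# Toy ELIZA-style responder: match keyword patterns and reflect the
# user's own words back as a question. The rules are invented for
# illustration; they are not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no rule matches
```

Given "I am tired", this sketch answers "Why do you say you are tired?", which is essentially all the intelligence ELIZA had: no understanding, just pattern matching and reflection.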

      Grammar check

      Spell check has been around for a long time, but that’s a simple application that doesn’t require artificial intelligence, just a fuzzy search that reacts to a not-found condition by returning items that are similar but not identical to the search term. By contrast, grammar check uses natural-language processing (NLP) and supervised machine learning (ML) to learn language rules and usage.
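The fuzzy-search behavior described above can be sketched in a few lines, using a tiny stand-in word list in place of a real dictionary:

```python
from difflib import get_close_matches

# Minimal sketch of spell check as "fuzzy search on not-found".
# The word list is an invented stand-in for a real dictionary.
DICTIONARY = ["grammar", "check", "language", "machine", "learning"]

def suggest(word):
    if word in DICTIONARY:  # exact hit: nothing to fix
        return []
    # not found: return dictionary entries similar to the input
    return get_close_matches(word, DICTIONARY, n=3, cutoff=0.6)
```

Looking up "gramar" fails the exact match and falls through to the similarity search, which returns "grammar". No rules of language are involved, which is exactly why spell check predates AI-based grammar check by decades.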

      In 1981, Aspen Software released Grammatik, an add-on diction and style checker for personal computers. In 1992, Microsoft Office embedded a grammar checker in Microsoft Word. In 2007, Grammarly launched a cloud-based grammar checker.

      Virtual assistants

      Back in the day, a virtual assistant was really virtual, as in digital, not a term to describe a remote clerical worker.

      As the personal computer gained popularity, it migrated into the homes of users, who had widely varying degrees of computer literacy and aptitude. Software providers rushed in to fill the void between computer capabilities and consumer competence.

      In 2010, three years after Clippy officially died, Apple acquired Siri, launching it on the iPhone the following year; Google Now, Alexa, Cortana, and Google Assistant followed over the next six years. This new wave of virtual assistants uses voice recognition and expands the scope beyond help with a specific computer application to help with almost every aspect of life. Like chatbots, virtual assistants can be deployed in the enterprise to enhance internal or external customer service.

      Clippy provided assistance based on Bayesian algorithms, a family of probabilistic classifiers. Modern virtual assistants use NLP to interact more like a human.
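A toy naive Bayes text classifier illustrates the probabilistic family Clippy belonged to. The training data below is invented for illustration; this is not Clippy's actual model:

```python
import math
from collections import Counter, defaultdict

# Toy naive Bayes classifier: pick the label whose log prior plus
# summed log word likelihoods (with add-one smoothing) is highest.
def train(samples):
    """samples: list of (text, label) pairs."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in samples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    vocab = {w for counts in word_counts.values() for w in counts}
    best, best_score = None, float("-inf")
    for label, n in label_counts.items():
        score = math.log(n / sum(label_counts.values()))  # log prior
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            # add-one smoothing so unseen words don't zero out a label
            score += math.log(
                (word_counts[label][word] + 1) / (total + len(vocab))
            )
        if score > best_score:
            best, best_score = label, score
    return best
```

Trained on a handful of labeled phrases, the classifier estimates which label most probably generated a new phrase, the same kind of inference Clippy used to guess whether you were, say, writing a letter.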

      Chatbots

      In 2001, ActiveBuddy launched SmarterChild on AOL Instant Messenger, a chatbot that could report the weather, do calculations and conversions, set reminders, store notes, and answer the kinds of general questions you would put to a search engine or a virtual assistant today.

      Chatbots, also called chatterbots, are text-based applications or plugins that replace the human side of a conversation with artificial intelligence. Chatbots are frequently used on websites to provide tier-one technical support, schedule appointments, find a specific product, and other simple tasks that can be accomplished without human intervention. Early implementations had a limited ability to parse language and thus required the user to formulate a question with a very specific syntax and vocabulary. Deep learning based on intent recognition has expanded the utility of chatbots by freeing the user to ask questions in everyday language.
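Intent recognition can be illustrated, in vastly simplified form, by scoring an utterance against example phrases for each intent. Production chatbots use trained neural models; the intents and phrases here are invented:

```python
import math
from collections import Counter

# Highly simplified intent recognizer: bag-of-words cosine
# similarity between the utterance and example phrases per intent.
INTENTS = {
    "schedule_appointment": ["book an appointment", "schedule a visit"],
    "find_product": ["where can I find", "looking for a product"],
    "tech_support": ["my device is not working", "need help with an error"],
}

def _vec(text):
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def detect_intent(utterance):
    v = _vec(utterance)
    scores = {
        intent: max(_cosine(v, _vec(p)) for p in phrases)
        for intent, phrases in INTENTS.items()
    }
    return max(scores, key=scores.get)
```

"I want to schedule a visit" scores highest against the scheduling examples, so the bot routes it there. Deep-learning models do the same mapping far more robustly, which is what freed users from rigid syntax.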

      

      Unlike virtual assistants such as Siri, closed-domain chatbots are trained as experts in a specific field and can resolve a large volume of questions and requests simultaneously and continuously, tying into a customer relationship management platform and allowing call center employees to focus on more nuanced issues.

      While many of these examples could be typified as business-to-consumer applications, AI began making inroads into enterprise and medical environments as early as the 1970s. Many of these applications use supervised learning, unsupervised learning, or a combination of both.

      Recommendations

      In 2001, Yahoo applied recommendation engines to streaming music with Yahoo LAUNCH, later rebranded as Yahoo Music and then Y! Music. Four years later Pandora went live, its recommendation engine powered by the Music Genome Project, a manual classification system conceived by the founders in 1999.

      In 2007, Netflix applied the recommendation engine to streaming video, powered by the CineMatch algorithm, which Netflix said was accurate to within half a star 75 percent of the time. In 2009, the company replaced CineMatch with Pragmatic Chaos, an algorithm developed by the BellKor team as its submission for the Netflix Prize, a $1 million contest for the first team to beat CineMatch’s accuracy by at least 10 percent.
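A rating-based recommender of this kind can be sketched with user-to-user collaborative filtering: predict a viewer's rating for an unseen title from the ratings of viewers with similar taste. The ratings below are invented sample data, not Netflix's:

```python
import math

# Tiny collaborative-filtering sketch: predict a user's rating for a
# movie as the similarity-weighted average of other users' ratings.
RATINGS = {
    "ann":  {"Alien": 5, "Heat": 1, "Blade Runner": 5},
    "bob":  {"Alien": 4, "Heat": 2, "Blade Runner": 5, "Clue": 2},
    "cara": {"Alien": 1, "Heat": 5, "Clue": 4},
}

def similarity(u, v):
    """Cosine similarity over the movies both users rated."""
    shared = RATINGS[u].keys() & RATINGS[v].keys()
    if not shared:
        return 0.0
    dot = sum(RATINGS[u][m] * RATINGS[v][m] for m in shared)
    nu = math.sqrt(sum(RATINGS[u][m] ** 2 for m in shared))
    nv = math.sqrt(sum(RATINGS[v][m] ** 2 for m in shared))
    return dot / (nu * nv)

def predict(user, movie):
    """Similarity-weighted average of other users' ratings."""
    pairs = [(similarity(user, v), r[movie])
             for v, r in RATINGS.items()
             if v != user and movie in r]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else None
```

Here ann's tastes track bob's, so the predicted rating for Clue lands near bob's low score rather than cara's high one. Systems like CineMatch and Pragmatic Chaos refined this basic idea with far more data and far more sophisticated models.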

      Medical diagnosis

      Attempts to apply AI to medicine date back to the 1970s, but a wealth of data is a fundamental prerequisite for effective AI, and the real push to digitize medical records and test results didn’t gain momentum until the twenty-first century. It’s no surprise that the primary application is diagnostics. A 2013 review of three large U.S. studies estimated that about 12 million patients, roughly 5 percent of adults seeking outpatient care, are significantly misdiagnosed each year.

      Today, machine learning, natural-language processing, and other AI techniques are being brought to bear on medical diagnosis.

      Traditional breast cancer screening involves radiologists examining X-ray film for telltale signs of cancer. The approach is reliable most of the time, but it produces false negatives (about 20 percent of the time, radiologists fail to find cancer that is present) and false positives (radiologists mistakenly conclude cancer is present when it is not). About 50 percent of women who get annual mammograms receive at least one false positive over a ten-year period.
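A ten-year figure near 50 percent can arise from compounding a modest per-screen rate. Assuming, for illustration only, a 7 percent false-positive rate per screen and independence between years (neither number is from the study cited):

```python
# Probability of at least one false positive across ten annual
# screens. The 7 percent per-screen rate and the independence
# assumption are illustrative, not taken from the text's source.
per_screen_fp = 0.07   # assumed false-positive rate per mammogram
screens = 10           # one screen per year for ten years

p_at_least_one = 1 - (1 - per_screen_fp) ** screens
print(f"{p_at_least_one:.0%}")  # prints 52%
```

Small per-test error rates compound quickly over repeated screening, which is why reducing false positives even slightly matters at population scale.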

      In 2020, Google Health used DeepMind to train a model to recognize cancer in mammogram X-rays of U.K. patients. Compared to human radiologists, the model reduced false negatives by 2.7 percent and false positives by 1.2 percent. When they extended the project
