
that may need to be resolved.

      The manner of access may be push (the user is notified that information is available) or pull (the user queries for the information). Query can entail degrees of information availability: waiting (already displayed—just needs to be looked at or touched); ready to display upon request; or in need of fetching, with some time lag—perhaps even with a notification when it does arrive.
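The access-mode and availability distinctions above can be sketched as a small taxonomy. This is an illustrative sketch only; the enum and function names are our own, not terms from the text:

```python
from enum import Enum, auto

class AccessMode(Enum):
    PUSH = auto()   # system notifies the user that information is available
    PULL = auto()   # user queries for the information

class Availability(Enum):
    WAITING = auto()            # already displayed; just needs a look or touch
    READY_ON_REQUEST = auto()   # can be displayed immediately upon request
    NEEDS_FETCH = auto()        # must be retrieved, with some time lag

def respond_to_query(availability: Availability) -> str:
    """System behavior for a pull-style query (illustrative only)."""
    if availability is Availability.WAITING:
        return "attend"               # user simply looks at / touches the display
    if availability is Availability.READY_ON_REQUEST:
        return "display"              # render the information now
    return "fetch-then-notify"        # fetch with lag, optionally notify on arrival

print(respond_to_query(Availability.NEEDS_FETCH))  # fetch-then-notify
```

A designer might use such a taxonomy to enumerate the access states an interface must handle before deciding which modality serves each one.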

      The interaction’s information origin may be endogenous or exogenous. Origin is key to a user’s conceptual understanding of what a display element means, and relevant to how it should be portrayed to the user. Data to be conveyed might be sourced endogenously from the primary user, whether voluntarily or through sensing (e.g., current running pace or effort; one’s personal emotive state, which you wish to share with someone else; time elapsed since you last stood up). Or it might come from outside, exogenously: time for a meeting to start, a target that has been met, a notification of an externally derived event, a feature available in media being felt in a virtual environment or identified by an automatic algorithm in media that is being perused.

      A signal in any modality may occur once, recur occasionally or periodically, or appear continuously—from an information stream or channel being monitored, or from discrete events. Consider how an interface could provide, notify, or guide with differing degrees and types of recurrence.

      Information may be supplied in the user’s attentional foreground or background. Interface attentional demand is a spectrum, from signals that target a user’s full attention to ambient presentation [Weiser and Brown 1996, MacLean 2009], and a focus of considerable current study [Roda 2011]. As with other modalities, haptic sensations can be designed to fall almost anywhere in that spectrum. Information parameters that justify varying salience include urgency (time criticality), importance, and the user’s context. Guiding information (often continuous) may be designed for conscious or non-conscious use, or both. Mechanisms to modulate a user’s attentional demand include the perceptual salience of a given signal element, and recruiting additional perceptual modalities to reinforce (amplify) a percept.

      The haptic signal can play several roles as part of a “team” of sensory channels involved in a multimodal design (Figure 3.1).

      A haptic signal can work with other senses to provide reinforcing information about the same percept, or complementary information about a separate one. An example of a reinforcing multimodal display is when an automobile driver is informed of an upcoming turn with a visual map display showing the turn approaching, an auditory voice (“In one kilometer, turn left on Elm Drive”), and vibration of the left side of the seat or steering wheel as the turn approaches.

      Alternatively, a visual map might show a bird’s-eye overview, while the vibration gives graded information about how far away the turn is. In this case, the visual and haptic information complement one another—even though they are both related to the navigation task, each gives a different part of the overall picture.

      When modalities give redundant (reinforcing) information, one may be primary—the one without which users would be lost, even if they are aided by a secondary one. Often the secondary modality’s benefit is that it differs in the quality or timing of information display. In the previous driving example, the driver might have a general sense of location in mind and need just a little nudge to distinguish which of several choices is the correct turn—here, the low-detail, easy-to-absorb haptic tap is just right, and having to look away at a detailed map is overkill.

       Figure 3.1 Roles a haptic element can take in a multimodal interaction.

      A given modality’s information must be coordinated in temporality and sequencing with respect to others. Information in a multimodal display can be presented at varying levels of detail at different times depending on need. For instance, the holistic display system can be considered as a state machine, or alternatively as detail that fades in and out. In these different states (or levels of detail zoom), modalities may play different roles.

      One approach is frontline notification followed by backup detail. A haptic signal can present an easily processed initial notification with low information density; the user can then follow up by querying a visual modality for more detail at a better time. In [S1], a smartwatch vibrates to notify the user of a message (haptics as frontline modality); audio/visual information is then displayed when the user looks at the watch, and possibly queries it for additional detail.

      Alternatively, action can be followed by confirmation. A user’s interaction with a device can be actively confirmed, either immediately (button feedback) or later (message sent). In [S2], a user presses a button (visual interaction first) and receives a follow-up vibration as confirmation.
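Treating the holistic display as a state machine, the two sequencing patterns just described—frontline notification with backup detail, and action followed by confirmation—can be sketched as transitions. The state and event names here are illustrative assumptions, not taken from [S1] or [S2]:

```python
# (state, event) -> (next_state, output_action)
TRANSITIONS = {
    # [S1]-style: haptic frontline notification, then visual backup detail
    ("idle", "message_arrives"): ("notified", "vibrate"),          # haptic frontline
    ("notified", "user_looks"): ("detail_shown", "show_summary"),  # visual backup
    ("detail_shown", "user_queries"): ("detail_shown", "show_more"),
    # [S2]-style: visual action first, haptic confirmation afterward
    ("idle", "button_pressed"): ("confirming", "none"),
    ("confirming", "action_done"): ("idle", "vibrate"),            # haptic confirmation
}

def step(state: str, event: str) -> tuple:
    """Advance the display state machine; ignore events with no transition."""
    return TRANSITIONS.get((state, event), (state, "none"))

state, action = step("idle", "message_arrives")
print(state, action)  # notified vibrate
```

Making the role assignments explicit in a table like this lets a designer check that each modality is used consistently across states.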

      We have outlined some possible structures of interaction models, general types of roles that haptic elements might play within them, and some design parameters that must be resolved. Here we will look at how the haptic element might work, using examples that researchers have already studied.

       Guidance Targets and Constraints

      We spoke earlier of guiding as an interactive goal that can benefit from multimodal coordination. Even for this specific goal, the haptic channel can take many forms. We’ll give examples that vary on the recurrence/continuity parameter, spanning both kinesthetic (via force feedback) and tactile varieties.

      Haptics can provide virtual constraints and fields. In a virtual space (3D virtual environment, driving game, or 2D graphical user interface), a force feedback device makes it possible to render force fields that assist the user in traversing the space or accomplishing a task. These “virtual fixtures” were first described as perceptual overlays: concrete physical abstractions (walls, bead-on-wire) superposed on a rendered environment [Rosenberg 1993], a metaphor for real-world fixtures such as a ruler used to assist in drawing a straight line. This concept is a fertile means of constructing haptic assistance [Bowyer et al. 2014]; it has been used repeatedly in areas such as teleoperated surgical assistance [Lin and Taylor 2004], and efficient implementations have been devised, e.g., for hand-steadying [Abbott et al. 2003].
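A common way to render such a fixture is as a penalty spring: no force in free space, and a restoring force proportional to penetration once the proxy crosses the constraint—the haptic analogue of a ruler edge. This is a minimal sketch; the stiffness value is an illustrative choice, and a real device would also need damping and stability tuning:

```python
def fixture_force(x, y, wall_y=0.0, k=500.0):
    """Force (N) from a horizontal virtual wall/ruler edge at y = wall_y.

    Returns zero force in free space; once the device proxy penetrates
    below the wall, a spring of stiffness k (N/m, illustrative value)
    pushes it back toward the constraint surface.
    """
    penetration = wall_y - y           # how far below the wall the proxy is
    if penetration <= 0.0:
        return (0.0, 0.0)              # free space: no force
    return (0.0, k * penetration)      # push back out, normal to the wall

print(fixture_force(0.2, -0.01))  # (0.0, 5.0): 1 cm penetration -> 5 N upward
```

Stacking several such one-sided constraints yields walls, corridors, or the bead-on-wire abstraction described above.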

      Haptics can predict user goals. To provide guidance without getting in the way, the designer must know something of what the user will want to do; but if the user’s goal were fully known, the motion could be automated and guidance would not be needed. In dynamic environments like driving, a fixture can be exploited as a means of sharing control between driver and automation system. The road ahead is a potential fixture basis, and a constraint system can draw the vehicle toward the road while leaving actual control up to the driver [Forsyth and MacLean 2006].
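Shared control of this kind is often implemented by blending the driver’s input with a soft attraction toward the path. The sketch below shows the blending idea only; the gains and the blend parameter are illustrative assumptions, not values from Forsyth and MacLean [2006]:

```python
def shared_steering(driver_torque, lateral_error, k_fixture=2.0, blend=0.5):
    """Blend driver input with a soft fixture pulling toward the lane center.

    lateral_error: signed offset of the vehicle from the road centerline (m).
    blend: fixture authority, 0 = driver alone, 1 = fixture alone.
    The fixture never takes full control; it biases the steering torque.
    """
    fixture_torque = -k_fixture * lateral_error            # pull back toward center
    return (1.0 - blend) * driver_torque + blend * fixture_torque

# Driver holds the wheel straight (0 torque) while drifting 0.5 m right:
print(shared_steering(0.0, 0.5))  # -0.5: a gentle corrective nudge to the left
```

Because the driver’s torque is always part of the sum, the driver can override the fixture—control is shared rather than surrendered, as the text describes.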

      Haptics can layer guidance onto graphical user interfaces (GUIs), or alternatively be built from scratch into visuo-haptic interfaces. Researchers have often sought to add guiding haptic feedback to GUIs, essentially layering a haptic abstraction on top of one designed for visual use. This has been tricky to get right, and some argue the need to start from scratch. Smyth and Kirkpatrick [2006] developed a bimanual system whereby one hand uses a force feedback device to set parameters in a complex drawing program while the mouse hand independently draws—an example of complementary roles of the two modalities. Some guidelines emerged: design for rehearsal; use vision for controlling novel tasks and haptics for routine tasks; and use haptic constraints to compensate for the inaccuracies of proprioception.

      Haptics can provide discrete cues. That most familiar of haptic media, the vibrotactile buzz, has been well studied for guidance cueing: of spatial direction [Gray et al. 2013], walking speed [Karuei and MacLean 2014], timing awareness [Tam et al. 2013], and posture [Tan et al. 2003, Zheng et al. 2013]. In Section
