HHS Author Manuscripts
Author manuscript; available in PMC: 2023 Nov 8.
Published in final edited form as: Web4All (2022). 2022 Apr 27;2022:24. doi: 10.1145/3493612.3520448

From the Lab to People’s Home: Lessons from Accessing Blind Participants’ Interactions via Smart Glasses in Remote Studies

Kyungjun Lee 1,*, Jonggi Hong 2,*, Ebrima Jarjue 3, Ernest Essuah Mensah 4, Hernisa Kacorri 5
PMCID: PMC10631802  NIHMSID: NIHMS1790343  PMID: 37942017

Abstract

Researchers have adopted remote methods, such as online surveys and video conferencing, to overcome challenges in conducting in-person usability testing, such as participation, user representation, and safety. However, remote user evaluation on hardware testbeds is limited, especially for blind participants, as such methods restrict access to observations of user interactions. We employ smart glasses in usability testing with blind people and share our lessons from a case study conducted in blind participants’ homes (N = 12), where the experimenter can access participants’ activities via dual video conferencing: a third-person view via a laptop camera and a first-person view via smart glasses worn by the participant. We show that smart glasses hold potential for observing participants’ interactions with smartphone testbeds remotely; on average 58.7% of the interactions were fully captured via the first-person view compared to 3.7% via the third-person. However, this gain is not uniform across participants as it is susceptible to head movements orienting the ear towards a sound source, which highlights the need for a more inclusive camera form factor. We also share our lessons learned when it comes to dealing with lack of screen reader support in smart glasses, a rapidly draining battery, and Internet connectivity in remote studies with blind participants.

Keywords: remote method, user study, blind people, smart glasses

1. INTRODUCTION

Approaches and tools for supporting remote user studies play an important role in human-computer interaction as they can help overcome some of the limitations of in-person studies, such as low recruitment of representative participants [38]. With the recent COVID-19 outbreak [37], remote methods have received more attention both to comply with government guidance and to ensure the safety and comfort of all involved. In response, the accessibility community has employed online surveys [20, 29, 30], audio calls [1, 19, 23, 26, 41], and video conferencing [42, 49] to relieve the safety concerns of participants with visual impairments related to transportation and face-to-face interactions [40].

Cameras available on laptops or attached to desktops with video conferencing tools, such as Google Meet [21], Microsoft Teams [35], and Zoom [56], have enabled experimenters to observe participants’ interactions [25], which can be supplementary to traditional remote methods for usability testing [45]. Similar setups have also been employed in remote usability testing with blind participants [49]. However, the visual information from such a stationary camera may be limited in that it only captures participants’ activities from a single third-person viewpoint, which may not show participants’ real-time interactions with the user interface of smartphone or smartwatch testbeds (i.e., study prototypes). Screen sharing from those devices can allow such observations, but it is not trivial for participants; this is especially the case for blind participants due to its inaccessibility [34]. In this paper, we explore ways to overcome these challenges associated with remote observations of blind participants’ interactions via video conferencing with smart glasses.

Why smart glasses in remote user studies with the blind.

Typically, during in-person studies where blind participants are asked to interact with a testbed or a working application, experimenters employ a plethora of methods for collecting data including but not limited to: (i) video recordings via a carefully framed static camera placed on markings in front or to the left/right of the participant to ensure systematic recordings across participants; (ii) field notes based on their own observations; and (iii) application logs capturing fine-grained interaction data. Experimenters are also strategically seated close to the participant for observation, guidance, and troubleshooting as the testbed or prototype may not be fully functional. Replicating this experimental setup in blind participants’ homes can be challenging. Laptops with video conferencing are usually placed in front of the participant. Thus, the third-person viewpoint may not capture the necessary context for the interactions. On the other hand, smart glasses worn by the participant and equipped with camera and video conferencing functionalities can provide access to the first-person viewpoint, which may increase remote access and support communication.

We explore these premises with the following research questions: RQ1 “What do smart glasses worn by blind people capture during a remote user study?” and RQ2 “How can this be leveraged to increase access to their interactions with the system being evaluated and better support experimenter-participant communication?” Specifically, we investigate the feasibility and challenges of using smart glasses in video conferencing for a user study that, due to the pandemic, had to move from a lab to blind participants’ houses. To that end, we devise a remote experimental setup and protocol for observing blind participants’ interactions with smartphone testbeds and object stimuli. As depicted in Figure 1, the experimenter can access participants’ activities via dual video conferencing where the third-person view is captured by a laptop camera and the first-person view from smart glasses worn by the participant. Both devices are connected to the same video conference call, but, to prevent audio echo, only the laptop sound is used. After multiple iterations of piloting among blind and sighted researchers, the protocol is deployed in the homes of 12 blind participants serving as a case study. Through peer debriefing and fine-grained analysis of study recordings, we explore the feasibility of this approach in terms of access (i.e., by looking at the visual information captured by the smart glasses versus the laptop), support (i.e., by looking at the experimenter-participant communication), and logistics (i.e., by reflecting on our experiences with handling delivery and troubleshooting).

Figure 1:

Dual video conferencing in our remote study design. While interacting with a testbed on a smartphone, a blind participant wearing smart glasses communicates with the experimenter through dual video conferencing. Two video streams are being sent to the experimenter: one from the participant’s smart glasses and the other from a laptop camera facing the participant on the same Zoom call.

We find that the video conferencing from the smart glasses provided the experimenter with real-time remote access to blind users’ interactions. On average 58.7% of the video frames from the smart glasses fully captured the interactions. This was strikingly high compared to those captured by the laptop camera (3.7%). More so, the experimenter could leverage partially visible interactions on the smart glasses and triangulate them with audio cues such as the screen reader output on the testbed. We also find that the smart glasses supported the experimenter in providing guidance and making observations in an unobtrusive way. Interruptions related to camera aiming were minimal and only occurred for a few participants, usually during onboarding.

While these findings are promising, what can be captured by smart glasses is not uniform among participants. More interactions were captured for those who became blind later in life than for those who were born blind and tended to “turn their head to orient their ear towards a sound source” [31, 33, 51]. This highlights the importance of the camera field of view for the inclusion of blind users. The lack of screen reader support on smart glasses, a rapidly draining battery, and a dependency on Internet connection called for the nearby presence of the experimenter. Even workarounds like long power cables and portable chargers can fail, as cable disconnection can be inaccessible for blind participants to spot. This unpredictability suggests that smart glasses should be used as a complement to a typical laptop video call or phone call setup rather than a substitute.

This paper’s contributions are: (i) a novel approach for facilitating first-person access to participants’ interactions in remote studies via smart glasses; (ii) empirical results with blind participants on the effect of the camera field of view and sound source on what is being captured by their head-worn cameras; and (iii) insights into the challenges and logistics involved in conducting remote studies and employing state-of-the-art smart glasses in them.

2. RELATED WORK

Our work draws upon existing literature in accessibility for conducting remote user studies with a focus on studies that involve participants with visual impairments. To provide broader context for our approach, we discuss prior work in remote sighted guidance and assistive smart glasses for this population.

2.1. Remote User Studies in Accessibility

User evaluation with a large number of participants with disabilities or older adults is often challenging for accessibility researchers [38]. Typically, our community involves a small number of representative users (median sample size 13 [32]) in researchers’ local area [44]. In some cases, remote user study methods are adopted to overcome constraints in participants’ location and time [38]. Real-world deployments are also employed (e.g., [9]), though they require the development of fully accessible and functioning applications, which can be challenging in prototype evaluation. Discussions on the benefits of remote user evaluation have often revolved around reducing travel cost [43] and decoupling the effects of time and space [6].

After the outbreak of COVID-19 [37], many accessibility researchers switched to a remote format for their user studies and shared best practices with the community [14, 45, 53]. With a focus on remote studies that involve people with visual impairments over the past two years (Table 1), we observe that common approaches employed by researchers spanned online surveys, audio calls, and video conferencing. The majority conducted either an online survey [4, 15, 20, 29, 30] or an interview via a call [1, 19, 23, 26, 41, 52]; few did both [12, 46]. Some of them were able to evaluate their prototypes after guiding participants to install on their devices either a web browser extension [26, 52] or a smartphone app [1, 23]. One of them [26] also employed screen recording in their testbed, which was deployed on a web browser on participants’ laptops. We see screen sharing in other studies that do not involve audio calls. However, they are also restricted to laptops, either web browsers [46] or digital artboards [42]. More importantly, we do not see any prior work employing this screen sharing feature on smartphones. Instead, we see researchers (e.g., [49]) recording participants’ interactions with their prototype system using a laptop camera (i.e., a camera that provides only a stationary, third-person view). While recording from a third-person point of view may serve as a proxy for recording users’ interactions from a stationary camera in an in-person user study, it has its limitations. For example, it would be difficult for experimenters to remotely observe both the user interface on the smartphone and participants’ interactions with it at the same time. We believe that cameras embedded in smart glasses can play a role in filling this gap as they can provide a mobile, first-person perspective, which may or may not capture both the interface and the participant’s interactions with it. To our knowledge, using smart glasses for this task has not been previously explored with blind participants, whose idiosyncratic head movements could potentially affect what is being captured by the camera [27].

Table 1:

Approaches in remote studies with blind and low vision participants during the pandemic (2020–2021).

Prior work | Online text | Audio call | Video call | Screenshare | Technology
Gleason et al., 2020 [19] | | ✓ | | | laptop/phone: audio call
Gonçalves et al., 2020 [20] | ✓ | | | | web browser: online survey
Akter et al., 2020 [4] | ✓ | | | | web browser: online survey
Troncoso et al., 2020 [49] | | | ✓ | | laptop: video call
Engel et al., 2020 [15] | ✓ | | | | web browser: online survey
Lee et al., 2020 [26] | | ✓ | | ✓ | laptop: audio call, screen share
Saha et al., 2020 [41] | | ✓ | | | laptop/phone: audio call
Ahmetovic et al., 2021 [1] | | ✓ | | | phone: audio call
Lee et al., 2021 [29] | ✓ | | | | web browser: online survey
Leporini et al., 2021 [30] | ✓ | | | | web browser: online survey
Siu et al., 2021 [46] | ✓ | ✓ | | ✓ | web browser: online survey; computer: audio call, screen share
Chung et al., 2021 [12] | ✓ | ✓ | | | web browser: online survey; phone: audio call
Jain et al., 2021 [23] | | ✓ | | ✓ | laptop: screen share; phone: audio call, app logging
Schaadhardt et al., 2021 [42] | | | ✓ | ✓ | laptop: video call, digital artboards
Wang et al., 2021 [52] | | ✓ | | | laptop: audio call

2.2. People with Visual Impairments, Remote Guidance, and Assistive Smart Glasses

Many studies have looked at the potential of providing remote sighted guidance to people with visual impairments. The context for guidance varies from having access to visual surroundings (e.g., objects [9] and people [7]) and supporting indoor navigation [11, 24, 39], to remote mobility training [8, 13]. For this remote guidance from the sighted crowd, people with visual impairments often share their camera view either on their smartphone or smart glasses and post associated questions. Upon receiving this information, sighted people either provide an answer or ask for more information by interacting with the users with visual impairments. In particular, our community has provided suggestions on remote guidelines that sighted people should follow to help people with visual impairments in indoor navigation [24] and remote mobility training [8, 13] when communicating via video calls. This access paradigm has long moved from research labs to real-world applications, such as BeMyEyes [17], TapTapSee [2], and Aira [3]. Inspired by and complementary to these efforts, our work expands this concept of remote guidance from brief, one-off interactions to longer experimenter-participant communication in remote user evaluation and explores the potential of smart glasses for this task.

The use of smart glasses by people with visual impairments is not new in the accessibility field. Typically, prior work has been interested in automatically leveraging the input from the smart glasses’ camera to help people with visual impairments navigate indoors [5, 18, 54], read text [22, 47, 55], or detect objects [16] and people [28, 36, 48] without explicit help from sighted people. Assuming that the field of view of the camera worn by users with visual impairments can capture the area of interest to them, all of this prior work proposes assistive systems that interpret the visual input and convert it into non-visual formats, such as audio and haptics. While many focused on assistive applications of smart glasses for people with visual impairments, Lee et al. focused on blind people’s camera aiming behaviors in the context of pedestrian detection [27]. They observed that smart glasses worn by blind people tend to capture a passerby well in a corridor but may exclude a passerby from the camera frame at close range due to the wearer’s tendency to orient towards the sound source (i.e., the passerby’s voice in their case). Our discussion is more related to the latter in that we also investigate how smart glasses worn by blind participants help experimenters observe the participants’ activity. While the contexts differ (tracking people versus tracking the interface of the smartphone blind people may be interacting with), we anticipate that there could be similar challenges. Thus, our work could contribute to a broader understanding of smart glasses and their potential and limitations for the blind community.

3. METHOD

Our team, including four sighted researchers (R1, R2, R4 & R5) and one blind researcher (R3), used an iterative design process to devise a remote user study approach for the evaluation of a smartphone testbed employing two different camera form factors: a laptop camera and a camera embedded in smart glasses. Then, we explored the potential and limitations of this approach in a remote study with 12 blind participants, serving as a case study.

3.1. Step 1: Exploration of Smart Glasses

Prior to leveraging smart glasses in remote studies, two sighted researchers (R1 and R5) had extensive experience with employing this technology for in-person studies with blind participants. Specifically, in collaboration with other sighted and blind collaborators, they used a pair of Vuzix Blade smart glasses [50] to explore blind people’s camera aiming behaviors for pedestrian detection [27] and understand its social acceptability [28].

In this initial investigation, blind participants were walking or standing, and the goal was for the smart glasses to capture any nearby pedestrians. While this scenario is different from being seated and interacting with a smartphone, it allowed us to gain the following insights that we leveraged in our approach:

  • Smart glasses capture the blind wearer’s viewpoint.

  • Blind people tend to appreciate the ability to aim the camera on the smart glasses without using their hands.

  • Blind people may not feel comfortable wearing earbuds along with smart glasses as they limit access to the surroundings — prior versions of the Vuzix Blade were not equipped with speakers; more so, the form factor of many smart glasses including the Vuzix Blade conflicts with bone-conduction headphones.

  • Camera viewpoint is susceptible to head movements, which can be challenging as some blind people may “turn their head to orient their ear towards a sound source” [33, 51].

Since the smart glasses tend to capture the blind wearer’s viewpoint, we saw the potential of using this device in a remote evaluation study setup, especially for the purpose of observation. The Vuzix Blade runs on Android and supports Zoom video conferencing [56]. Zoom on the smart glasses could enable the wearer to share their camera’s field of view and communicate with others (e.g., the experimenter) on the Zoom session through the built-in camera and mic. However, the Vuzix Blade does not support TalkBack (the screen reader feature for Android), which is critical to blind users¹. Also, although we found that the smart glasses can automatically launch Zoom at startup, sight is necessary for navigating the visual user interface on the glasses to join a specific Zoom meeting. To work around this inaccessibility, a sighted experimenter needs to set up the Zoom session on the smart glasses before delivering them to blind participants. This is a major limitation, and more work is needed to investigate such inaccessibility issues on smart glasses.
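Because the Vuzix Blade is an Android device, the pre-delivery Zoom setup could in principle be scripted over the Android debug bridge rather than performed through the visual interface. The sketch below is purely illustrative and was not part of our protocol; the meeting number, passcode, and the assumption that the installed Zoom client handles a standard join link opened via an intent are all hypothetical.

```python
# Hypothetical sketch (not our protocol): pre-joining a Zoom meeting on
# Android-based smart glasses over adb before handing them to a participant.
# The meeting number/passcode are placeholders, and we assume the installed
# Zoom client handles a standard https join link opened via a VIEW intent.
import subprocess

MEETING_ID = "0000000000"   # placeholder meeting number
PASSCODE = "000000"         # placeholder passcode

def join_zoom_via_adb(meeting_id: str, passcode: str) -> None:
    """Open a Zoom join link on the connected glasses via an Android intent."""
    url = f"https://zoom.us/j/{meeting_id}?pwd={passcode}"
    subprocess.run(
        ["adb", "shell", "am", "start",
         "-a", "android.intent.action.VIEW",
         "-d", f"'{url}'"],   # quote so '?' survives the device shell
        check=True,
    )

if __name__ == "__main__":
    join_zoom_via_adb(MEETING_ID, PASSCODE)
```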

3.2. Step 2: Iterative Design via Piloting

We explored how smart glasses can be deployed in a remote study with blind participants in the context of evaluating smartphone applications (case study). To this end, we conducted four pilot sessions where we iterated on the approach for incorporating the smart glasses in a predefined remote study protocol. All pilot sessions were conducted remotely by R2, who is sighted, and R3, who is blind. During each session, R3 focused on clarifying, reviewing, and checking the study procedures from the blind participant’s perspective. After each session, R2 and R3 met with R1, R4, and R5 to reflect. The planned case study, discussed in Section 3.3, included: an interview, a series of tasks with Testbed A, a series of tasks with Testbed B, and an open-ended questionnaire.

The use of smart glasses was critical for capturing participants’ interactions with the mobile applications serving as testbeds. However, since the smart glasses had to be on and set up with Zoom, our first pilot was merely to estimate the duration of receiving the equipment and conducting the initial interview, which lasted about an hour. The second pilot focused on tasks with Testbed A, which took about 48 minutes. It highlighted challenges in connecting to the WiFi in blind people’s homes and battery life for the smart glasses (the initial version of the Vuzix Blade did not last more than 30 minutes). In the third pilot, we focused on the remaining tasks with Testbed B and the last questionnaire, which took about 2 hours. Here, we opted for an upgraded version of the Vuzix Blade, which had a built-in speaker. However, its battery life still remained limited (no more than 40 minutes). For our last pilot, we shortened questionnaires and updated the study protocol to reflect our observations. We obtained a WiFi hotspot and connected all devices: laptop, smartphone, and the upgraded Vuzix Blade, which was now also connected to a portable charger. The overall session, including the interview, tasks with Testbeds A & B, and open-ended questions, lasted 2 hours. The cable connecting the smart glasses to the charger ended up being too short, constraining R3’s movements. We switched to a longer cable (1.2 m). In a nutshell, we saw that:

Wearing earphones limits participants’ interactions.

Given a quiet environment during the study, participants listen to the device they interact with and the experimenter at the same time. This makes it all the more critical for smart glasses to have built-in speakers that facilitate both listening and communicating.

Video streaming drains smart glasses’ battery.

Battery life for smart glasses is critical and quite susceptible to video streaming. Portable batteries work but can limit natural interactions.

Reliable and fast internet connection is critical.

Insufficient network bandwidth may cause frequent lagging and freezing, as it can be quickly exhausted by the two cameras (on the laptop and the smart glasses) streaming video simultaneously and by the testbed apps sending over photos and logs. Portable WiFi devices that provide 5G Nationwide or 4G LTE data speeds may help.
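As a rough back-of-envelope illustration (all bitrates, photo sizes, and rates below are assumed values for the sketch, not measurements from our study), two concurrent Zoom uplinks plus testbed photo uploads can add up quickly:

```python
# Back-of-envelope uplink estimate for a dual-camera remote session.
# All numbers are assumptions for illustration, not measurements.
zoom_stream_mbps = 1.5      # assumed uplink per 720p Zoom video stream
num_streams = 2             # laptop camera + smart glasses
photo_mb = 3.0              # assumed size of one testbed photo (megabytes)
photos_per_minute = 4       # assumed photo-taking rate during a task

video_mbps = zoom_stream_mbps * num_streams
photo_mbps = photo_mb * 8 * photos_per_minute / 60   # MB/min -> Mbit/s

print(f"Video uplink:  {video_mbps:.1f} Mbps")
print(f"Photo uploads: {photo_mbps:.1f} Mbps")
print(f"Sustained uplink needed: {video_mbps + photo_mbps:.1f} Mbps")
```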

Internet configurations can be challenging and inaccessible.

Connecting devices to the Internet can be a nontrivial task for many, especially when they are not familiar with the study devices. For blind participants, it can also be inaccessible. Providing a portable WiFi hotspot can also spare participants from having to connect study devices to their home Internet; devices can remember the portable WiFi access point and automatically connect to it when available.

Smartphone screen recording may not be reliable.

In our pilot, screen recording needed to be enabled at the start of each task and stopped once completed, but some recordings stopped without a notification when the smartphone screen went off due to a screen timeout or an accidental button push.

Nearby presence may be vital for troubleshooting.

Technical difficulties and disruptions can be frustrating without support and lead to a confounding effect. Beyond step-by-step instructions, it might be necessary to have the experimenter stand by near participants’ houses in case of hardware troubleshooting.

Real-time logging might help monitor task progress.

Visible real-time logging (e.g., server logs on the testbeds) can serve as supplementary information for the sighted experimenter to track participants’ task progress in addition to Zoom calls.
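As one possible illustration of such logging (a minimal sketch; the endpoint, port, and event fields are assumptions rather than our testbeds’ actual API), a testbed app could post a small JSON event after each participant action, and the experimenter could watch the timestamped lines this server prints:

```python
# Minimal sketch of real-time task-progress logging (illustrative only; the
# endpoint, port, and event fields are assumptions, not our testbeds' API).
# The testbed app would POST a small JSON event after each participant action;
# the experimenter watches the timestamped lines printed by this server.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventLogger(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        stamp = time.strftime("%H:%M:%S")
        # e.g. {"participant": "P1", "task": "Task1A", "action": "photo_taken"}
        print(f'[{stamp}] {event.get("participant", "?")} '
              f'{event.get("task", "?")}: {event.get("action", "?")}')
        self.send_response(204)
        self.end_headers()

    def log_message(self, fmt, *args):   # silence default per-request logging
        pass

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), EventLogger).serve_forever()
```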

3.3. Step 3: Case Study

3.3.1. Participants.

A total of 12 blind participants were recruited through emailing lists and local organizations — our remote user study was reviewed and approved by the IRB (#1255427–6) at the University of Maryland, College Park. As shown in Table 2, six blind participants self-reported as female and the other six as male. Their ages ranged from 33 to 70 (Mean = 54.3, SD = 15.2). Eight participants were totally blind while the other four were legally blind. Five participants (P1, P5, P6, P10, P11) reported having light perception. As depicted in Figure 1, blind participants, located in their homes, communicated with the experimenter, located in a car near the participant’s home, via dual video conferencing. The experimenter monitored participants’ activities by having access to the two video streams in the same Zoom call and real-time server logs on a separate window. The first video stream was captured by the camera in the smart glasses with the sound muted, and the second by a laptop camera facing participants. All Zoom sessions were recorded.

Table 2:

Demographic information of participants in our study.

PID | Age | Gender | Vision level | Age of onset
P1 | 39 | Female | Totally blind* | Birth
P2 | 67 | Male | Legally blind | 55
P3 | 62 | Female | Totally blind | Birth
P4 | 32 | Male | Legally blind | 20
P5 | 66 | Male | Totally blind* | 46
P6 | 61 | Male | Totally blind* | 41
P7 | 70 | Male | Legally blind | Birth
P8 | 50 | Female | Legally blind | 45
P9 | 69 | Female | Totally blind | 55
P10 | 66 | Female | Totally blind* | Birth
P11 | 33 | Female | Totally blind* | Birth
P12 | 36 | Male | Totally blind | Birth

(*) Asterisks indicate light perception.

3.3.2. Materials.

As shown in Figure 2, participants received two packages. The first was a bag with four devices: a fully charged iPhone with the two testbed apps (Testbed A and Testbed B) already installed, a fully charged MacBook with the Zoom call initiated, a pair of fully charged Vuzix Blade glasses connected via a 1.2-meter cable to a portable charger and with a Zoom call already initiated, as well as a 5G mobile hotspot device. The second was a box containing the 15 stimuli objects for Testbed A and three snacks, separated in a plastic bag, for Testbed B.

Figure 2:

Two groups of study materials put into two different packages: a box (left) and a reusable shopping tote bag (right). Each package has a unique texture. Participants were sent the study materials prior to their study session.

The containers (i.e., the shopping bag, the box, and the plastic bag) were chosen to have different textures distinguishable by touch. Participants were asked to pick up each of these containers at different points during the study. For example, they were first asked to set up the laptop in front of them, wear the smart glasses, and bring out the phone. Then, they interacted with the objects following later instructions about the testbeds, described in Section 3.3.4.

3.3.3. Environment.

All participants completed the study session remotely from their houses. They were instructed to find a sitting area in which they felt comfortable setting up the laptop and interacting with the stimuli objects. Before starting the session, the experimenter helped participants orient the laptop camera so that it captured the participant’s upper body.

3.3.4. Tasks.

After the initial interview, including questions related to demographics as well as experience and attitudes towards technology, participants performed two sets of tasks. The tasks involved two different smartphone testbeds for object recognition, which we refer to as Testbed A and Testbed B respectively in this paper. Testbed A was pretrained by the experimenter to recognize the 15 stimuli objects. Testbed B was trained in real time by the participants to recognize the three snacks that were given to them.

Each set of tasks started with an onboarding session for the associated testbed and was followed by one or four more tasks for Testbeds A and B, respectively. Before and after each set, participants answered task-related questions verbally. During the session, participants could freely interact with the stimuli objects. For example, some participants placed object stimuli on their table and took photos while others held them in their hands and took photos. Below we include more details on the tasks. However, the participants’ feedback on the testbeds and study findings related to the testbeds are beyond the scope of this paper, so we do not include them in our results.

Set of tasks for Testbed A
  • OnboardingA: Pick up objects from the box and find Testbed A on the smartphone.

  • Task1A: Take photos of the 15 stimuli objects using Testbed A on the smartphone for testing.

Set of tasks for Testbed B
  • OnboardingB: Pick up objects from the plastic bag and find Testbed B on the smartphone.

  • Task1B: Take photos of the three snacks using Testbed B on the smartphone for training.

  • Task2B: Review the visual characteristics of the photos as indicated by Testbed B on the smartphone.

  • Task3B: Take photos of the three snacks using Testbed B on the smartphone for testing.

  • Task4B: Find specific options in Testbed B on the smartphone and change them.

3.4. Step 4: Peer Debriefing and Video Analysis

After conducting the case study with all 12 blind participants, R2 met with R1 and R5 for peer debriefing to organize findings and observations. R2 shared his experiences with carrying out the study, focusing on the planning and execution of our study design.

Based on the debriefing analysis, the three researchers created a coding scheme for annotating the Zoom videos recorded by both the laptop camera and the smart glasses. To answer RQ1 “What do smart glasses worn by blind people capture during a remote user study?”, they constructed Visibility. To help answer RQ2 “How can this be leveraged to increase access to their interactions with the system being evaluated and better support experimenter-participant communication?”, they constructed Guidance. R1 coded video frames from the laptop and smart glasses cameras by following these definitions:

Visibility

  • Fully visible: A set of video frames fully shows a stimuli object, or a testbed’s interface and a participant’s interaction.

  • Partially visible: A set of video frames partially shows a stimuli object, a testbed’s interface, or a participant’s interaction.

  • Not visible: A set of video frames shows neither a stimuli object, a testbed’s interface, nor a participant’s interaction.

Guidance

  • Count: The number of occurrences where the experimenter provides blind participants with camera aiming guidance.

Zoom videos from the laptop camera and smart glasses were recorded for all participants except P5 and P8. Videos from P5 and P8 were not annotated since their smart glasses got disconnected from the portable battery in the middle of their study sessions, which led to incomplete video recordings. During the video annotation, the annotator focused on frames where participants performed specific actions that were necessary for each task on a testbed. For example, during the onboarding session for Testbed A, the annotator looked at frames where participants interacted with a testbed on a smartphone or picked up stimuli objects from the package. In this example, the annotator checked whether the videos captured the stimuli objects or the smartphone screen and participants’ interaction with it, which is the information the experimenter needs to check during that task.
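For concreteness, per-participant percentages like those reported in Section 4.1 can be aggregated from these Visibility codes roughly as follows (a minimal sketch; the data layout and example counts are illustrative and not our annotation tooling):

```python
# Sketch of aggregating Visibility codes into percentages per participant
# (labels follow the coding scheme above; the example counts are made up).
from collections import Counter

LABELS = ("fully visible", "partially visible", "not visible")

def visibility_percentages(coded_frames):
    """coded_frames: one Visibility label per annotated video frame."""
    counts = Counter(coded_frames)
    total = sum(counts[label] for label in LABELS)
    return {label: 100.0 * counts[label] / total for label in LABELS}

# Illustrative input: the glasses' frames for one participant on one task.
example = (["fully visible"] * 60
           + ["partially visible"] * 25
           + ["not visible"] * 15)
print(visibility_percentages(example))
# {'fully visible': 60.0, 'partially visible': 25.0, 'not visible': 15.0}
```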

4. FINDINGS AND OBSERVATIONS

We enrich our lessons learned during the pilot studies with observations and findings from the peer debriefing and the fine-grained analysis of study recordings, a total of 17 hours and 50 minutes of videos from 10 participants. Specifically, we discuss results on the feasibility of this approach in terms of access (i.e., by looking at the visual information captured by the smart glasses versus the laptop), support (i.e., by looking at the experimenter-participant communication), and logistics (i.e., by reflecting on our experiences with handling delivery and troubleshooting).

4.1. Access: What is Captured

Focusing on the part of the study where participants’ interactions with the testbeds are critical, we find that overall the camera from the smart glasses captures much more information than the static camera from the laptop, as can be observed by comparing Figure 3(a) to Figure 3(b). This observation seems to be consistent across all tasks that involve interactions with the mobile phone and object stimuli. More so, as shown in Figure 4, it is also consistent across all participants². We find that on average, 58.75% (sd=3.88%) of participants’ interactions throughout the tasks were fully visible via the smart glasses compared to only 3.69% (sd=6.90%) via the laptop. However, this gain in visibility was not uniform across participants. For some (e.g., P6) it was as high as 96.86% on average (sd=8.30%), and for others (e.g., P3) as low as 19.83% (sd=23.80%).

Figure 3:

A comparison between smart glasses and laptop cameras in terms of percentage of video frames across the tasks where overall participant interactions with the phone and stimuli were fully visible, partially visible, or not visible.

Figure 4:

A comparison between the smart glasses and laptop cameras in terms of the percentage of video frames across the tasks where each participant’s interactions with the phone and stimuli were fully visible.

An interesting observation is that the video frames from the smart glasses that captured participants’ interactions with the testbeds tended to do so in an unobtrusive way, as shown in Figure 5(a); this resembles more closely an in-lab setup. In contrast, the few video frames from the laptop where the testbeds or stimuli were fully visible often resulted from a conscious attempt by the participants to share with the experimenter; as shown in Figure 5(b), participants had to deviate from a study task and point their phone or stimuli to the laptop camera. Typically, this occurred at the start of the onboarding tasks, as illustrated in Figure 3(b).

Figure 5:

Contrasting examples from the study showing how the two cameras capture complementary information.

Many factors can explain the low visibility of interactions from the laptop camera. First, the camera is static. Thus, participants easily moved out of its field of view. Second, participants were seated during the study and faced the laptop, a typical setup in Zoom calls, which may be appropriate for capturing people’s faces but not necessarily their hands, their phones, the objects, or their interactions with them. Asking participants to move the laptop, tilt it, find an ideal framing, restrict their movements, or show the phone every now and then can be disruptive or affect study outcomes.

The camera in the smart glasses overcomes some of these challenges. However, other obstacles remain for accessing interactions via this form factor for all blind participants. Specifically, we observed higher percentages of visibility from the glasses for those participants who became blind later in life compared to those who were born blind. Late blind participants typically aimed their gaze towards the mobile device. Thus, the testbed interface and stimuli were typically included in the smart glasses’ camera field of view. However, participants who were congenitally blind often aimed their ears towards the mobile device when interacting with it and anticipating audio feedback (e.g., from the screen reader). One of these participants (P3) maintained this head orientation even when taking photos of object stimuli. As a result, only 19.83% of P3’s video frames on average (sd=23.80%) were annotated as ‘fully visible’. Even then, the experimenter was able to leverage many of the video frames from P3’s smart glasses where the interactions were ‘partially visible’; 35.15% of the video frames on average (sd=9.44%). The experimenter triangulated this partial information with the testbed’s audio feedback to monitor study progress.

4.2. Support: What is Communicated

Our case study with blind participants demonstrated the potential of smart glasses for interactive communication in a real-time remote setup. The experimenter particularly found smart glasses useful to guide blind participants on picking up stimuli objects in front of them and to observe their actions unobtrusively. More so, observations were mostly done without interruptions (e.g., without asking participants to move stimuli objects or the phone screen towards the laptop camera field of view). Not being obtrusive or interruptive was critical for this particular study, where any priming on object and testbed camera manipulation during tasks that involved taking photos of the objects with the testbed could affect study results. More so, having real-time access to these interactions during participants’ photo-taking was important for the experimenter to gain insights into their behaviors and have a better context for the photos that were being collected and analyzed later.

A unique aspect of deploying smart glasses in studies with blind participants is that audio cues from the testbeds can often help overcome instances of limited visibility. Applications serving as testbeds have to be accessible. Thus, they are typically designed to provide audio feedback to participants (e.g., by being compatible with screen readers). This audio feedback, which is typically responsive to blind participants’ actions on the interface, also serves as a good cue for the experimenter. It complements visual cues from the video frames where interactions are ‘partially visible’ or ‘not visible’.

From the annotation of all the videos, we find that the experimenter had to provide guidance for camera aiming only six times; i.e., asking participants to make the stimuli visible to the smart glasses or the laptop camera by moving either their head or their hand slightly. These visibility guidance events mostly occurred at the start of the study in tasks related to the first testbed or during the onboarding for the second testbed. In only one of these six instances did the experimenter ask a participant (P2) to show an object to the laptop camera. In all other instances, visibility was achieved more quickly through guidance for the smart glasses. Although one participant (P10) struggled with following the guidance to bring a stimuli object into the camera view of the smart glasses, the other participants (P1, P4, P12) promptly reacted to the experimenter’s guidance and captured their stimuli objects with the smart glasses — P1 was given the guidance two times (once during Task1A and once during OnboardingB). In the peer debriefing, the experimenter stated that it was easy to provide camera aiming guidance for the smart glasses.

4.3. Logistics: What is Handled

4.3.1. Study equipment delivery.

All the study equipment needed to be delivered to participants and set up before the remote study session. Since our remote method involved expensive, fragile hardware devices (i.e., a smartphone, a laptop, a pair of smart glasses, and a mobile hotspot device), we had to ensure the safe delivery of the study equipment. Instead of relying on a third-party shipping service, one of our team members took responsibility for the equipment delivery. This member sanitized all the study equipment and left it in front of participants’ houses. Then, participants picked it up. The study material was grouped according to its purpose and placed into several different containers with different textures. Grouping the study material into distinguishable containers was effective for communication between the experimenter and participants. More specifically, the experimenter was able to provide participants with clear instructions about which material to pick up, helping them find the right study material among the others.

4.3.2. Reliable network connection.

Even within the same local area, we observed variance in network latency, although the mobile hotspot device deployed in our study supported up to 5G Nationwide data speeds. Often, such latency made it difficult for participants to communicate with the experimenter and for the experimenter to observe participants’ activities remotely in real time.

Moreover, we observed that the mobile hotspot device sometimes overheated due to the overuse of network data. This issue typically emerged when participants took many photos (they were allowed to take as many photos as they wanted in some of the tasks). These tasks caused a lot of network traffic since the hotspot device needed to send a large number of photos to a remote server and receive real time results from the server at a photo level. All these occurred while the mobile hotspot also supported dual video conferencing on participants’ laptop and smart glasses. The overheated mobile hotspot device first led to increasing network latency and then stopped working; thus, the experimenter had to restart the hotspot device after cooling it down.

4.3.3. Local troubleshooting.

When a remote user study involves multiple hardware devices that participants may not use in their daily lives, troubleshooting is unavoidable. Although we noticed that blind participants learned how to put on the smart glasses quickly, local troubleshooting of hardware devices, including a MacBook, an iPhone, and the Vuzix Blade smart glasses, was necessary for some participants. For example, the experimenter found remote instructions insufficient to help some participants address unexpected software issues, especially when they were not familiar with default settings on the laptop or mobile device — some of them were used to their customized configurations. One challenge with dual video conferencing (a laptop and smart glasses that are co-located) is that it can lead to voice echoing when both mics are on. This happened in a few cases when participants accidentally activated the mic on their smart glasses. The experimenter managed to mute one of the mics to address the issue, but noticed that there was no remote way to unmute either one when both mics were accidentally muted. This occurred once with a participant who was not familiar with Zoom; thus, the experimenter had to go and fix the issue locally. Furthermore, hardware issues, such as cable disconnection or hardware malfunctions, were typically inaccessible for blind participants to spot. Thus, when participants faced such hardware-related issues, the experimenter had to fix them locally after retrieving the hardware device from them. In our case study, the experimenter was standing by near participants’ houses while remotely conducting the user study. However, we suggest having at least two experimenters during a remote user study session — one conducting the study remotely and the other troubleshooting device issues locally.

5. DISCUSSION

In this section, we first reflect on lessons learned from accessing blind participants’ interactions via smart glasses in remote studies, while discussing the implications for designing inclusive smart glasses and employing this technology in remote studies. We then discuss limitations in our study that may affect the generalizability of our findings as well as future work.

5.1. Implications

Our study and findings provide evidence for the potential of smart glasses in remote usability testing with blind participants. We see how researchers who plan remote studies with this population and those who are interested in designing more inclusive smart glasses could benefit from the following insights:

Increasing remote access.

Video conferencing via smart glasses can provide real-time remote access to blind users’ interactions with a mobile application and stimuli from a first-person perspective. Our analysis indicates that this approach surpasses typical video conferencing setups employing a laptop camera. Specifically, we find a striking difference between participants’ interactions being fully visible via the smart glasses (on average, 58.7%) and the laptop (on average, 3.7%). Even interactions that are only partially visible to the smart glasses can help the experimenter triangulate participants’ interactions with audio cues, such as screen reader output indicating users’ actions.

Supporting real-time communication.

Video conferencing via smart glasses can support real-time experimenter-participant communication. Our analysis indicates that the video stream from the smart glasses supported the experimenter in providing guidance related to study tasks and making observations in an unobtrusive way. Interruptions related to camera aiming were minimal and only occurred for a few participants, usually during onboarding; for almost all of them, visibility was more quickly achieved via the smart glasses than the laptop.

Exploring camera field of view for inclusion.

What is captured by the smart glasses may relate to the age of onset of blindness, as the camera viewpoint is susceptible to head movements, especially when the camera field of view is limited. Our analysis of video frames from the Vuzix Blade, which has a 64-degree horizontal field of view, indicates a higher percentage of visibility from the glasses for participants who became blind later in life compared to those who were born blind and tend to “turn their head to orient their ear towards a sound source” [31, 33, 51]. This observation provides additional evidence for prior work indicating how a limited field of view in smart glasses for pedestrian detection can exclude blind users who could benefit the most from this technology [27]. While smart glasses with a wider camera angle could potentially help, they can also lead to image distortions [48]. Our study highlights the need to explore the effect of the camera field of view on the inclusion of blind users, especially those with early onset who may exhibit head movements distinct from those of sighted people.
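For intuition, a rough geometric sketch of what a 64-degree horizontal field of view covers (the 40 cm glasses-to-phone distance is an assumed, illustrative value, not a measurement from our study):

```python
# Rough geometry of what a 64-degree horizontal FOV covers; the 40 cm
# phone-holding distance is an assumed, illustrative value.
import math

fov_deg = 64          # horizontal field of view of the Vuzix Blade camera
distance_m = 0.40     # assumed distance from the glasses to the handheld phone

width_m = 2 * distance_m * math.tan(math.radians(fov_deg / 2))
print(f"Horizontal coverage at {distance_m:.2f} m: {width_m:.2f} m")
# About 0.5 m wide, so a modest head turn toward a sound source can easily
# move a handheld phone out of frame.
```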

Prioritizing screen reader support for smart glasses.

It is surprising to see how even smart glasses that run on operating systems supporting accessibility do not include many of those features. For example, accessibility on the Vuzix Blade, the smart glasses in our study, is very limited. The Vuzix Blade is an Android device, but it does not support TalkBack, the Google screen reader that is typically included on Android devices. Even smart glasses that are specifically designed for people with visual impairments often opt for “hands-free” interactions via voice command support rather than providing a full screen reader experience (e.g., for touch interactions). While there are some workarounds that can be employed in remote studies (e.g., setting up the Zoom call ahead of time), unexpected challenges may still arise. For example, participants can accidentally activate or deactivate the mic on the Zoom call on their smart glasses by touch. As with the early Web, accessibility in wearable devices appears to be an afterthought. This cannot continue if wearable technology is to be effective; an inclusive design process is essential to make it accessible for all.

Overcoming the rapidly draining battery.

Power remains one of the main challenges in smart glasses. This drawback of limited battery is exacerbated in remote studies where smart glasses are used for video conferencing. Even workarounds like long power cables and portable chargers can fail as cable disconnection or hardware malfunctions can be inaccessible for blind participants to spot; e.g., it happened to two participants in our case study. This unpredictability suggests that smart glasses should be used as a complement to the typical call setup (e.g., video call on a laptop or a phone call) between the experimenter and participant in remote studies.

We see how some of the insights above, which are directly tied to our smart glasses approach for accessing blind participants’ interactions with a mobile device and object stimuli, may be adapted for other testbeds (e.g. smartwatch applications), settings (e.g. when in movement), or populations (e.g. studies where accessing head and gaze information is critical or when the first-person perspective can provide more context). More so, our pilot sessions and case study with blind participants may offer more practical insights into the challenges and logistics for those planning to conduct their studies remotely, independently of whether they employ smart glasses or not. For example, lessons learned relate to (i) use of different textures for helping participants quickly distinguish study materials, (ii) use of real-time logging for monitoring task progress when screen recording is not an option, (iii) use of multiple internet hotspots separating video streaming from testbed network traffic, (iv) nearby presence of an experimenter in addition to a remote experimenter, and (v) duplicating some of the study equipment to reduce risks associated with dependency on equipment delivery.

5.2. Limitations and Future Work

There are many limitations that could impact the generalizability of our findings. Our observations come from a single case study (as well as pilot sessions) conducted by one experimenter, on a single evaluation task (mobile applications for object recognition), in a given area within one country (Maryland, the United States).

The participant pool was small (N = 12) although it is typical for a user study in accessibility [32] and human-computer interaction [10], balanced in terms of male and female participants, and somewhat diverse in terms of age, vision level, and age of onset.

More importantly, the insights we obtained from the case study are limited as they were solely derived from peer debriefing among researchers and analysis of video recordings of our blind participants. They do not include explicit input from blind participants on their experiences with the smart glasses or the overall remote study, and on how their remote experiences may differ from any prior studies they may have attended in person in research labs. The case study itself, with the goal of understanding blind participants’ experiences with particular testbeds (beyond the scope of this work), already took almost 2 hours. Thus, the researchers opted not to ask participants any further questions, meaning that this work does not cover potential challenges that may arise from deploying smart glasses in the houses of blind participants. In contrast to a static laptop camera whose viewpoint could be fixed to a specific area in participants’ houses, the viewpoint of the smart glasses is dynamic and dictated by their head movements. The smart glasses may accidentally capture scenes and information that the blind participant may not feel comfortable sharing. Investigating potential privacy risks of deploying smart glasses or any always-on camera in blind people’s houses is a topic that we believe is critical to explore in the future.

6. CONCLUSION

In this work, we examined the feasibility and challenges of using smart glasses for a user study that, due to the pandemic, had to move from a lab to blind participants’ houses. Taking an iterative approach, we devised a remote experimental setup and protocol in which an experimenter can observe a participant’s interactions with smartphone testbeds and object stimuli via dual video conferencing: (i) a laptop camera facing the participant and (ii) smart glasses worn by the participant. We shared our findings, observations, and lessons learned from the pilot sessions and a case study with 12 blind participants. Specifically, we found that smart glasses could help the experimenter view participants’ interactions with a testbed and allow the experimenter to communicate with participants without asking them to deviate from their tasks. We observed that there was a difference between video streams captured by late blind and congenitally blind participants; smart glasses worn by those who became blind later in life tended to capture their interactions more often than smart glasses worn by those who were born blind. These observations seem to echo prior work indicating that smart glasses with a narrow field of view can be more susceptible to differences in head movements, such as directing one’s ear, instead of one’s eye gaze, towards the source of sound. Last, we shared our experiences with attempting to overcome challenges in conducting a remote user study with blind participants, such as the lack of screen reader support on the smart glasses, the limited battery of the smart glasses, and ensuring Internet connectivity from participants’ houses.

CCS CONCEPTS.

• Human-centered computing → Accessibility design and evaluation methods.

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their constructive feedback on an earlier version of this paper. This work is funded in part by NIDILRR (#90REGE0008) and NSF (#1816380). The opinions and results herein are those of the authors and not necessarily those of the funding agencies.

Footnotes

1. To our knowledge, there are currently no commercially available smart glasses that support screen readers and remote video conferencing.

2. Please note that the results for P5 and P8 are not included since their video recordings were incomplete and thus were excluded from the analysis.

Contributor Information

Kyungjun Lee, University of Maryland College Park, MD, USA.

Jonggi Hong, Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA.

Ebrima Jarjue, University of Maryland College Park, MD, USA.

Ernest Essuah Mensah, Apple Inc. Cupertino, CA, USA.

Hernisa Kacorri, University of Maryland College Park, MD, USA.

REFERENCES

  • [1].Ahmetovic Dragan, Bernareggi Cristian, Keller Kristian, and Mascetti Sergio. 2021. MusA: Artwork Accessibility through Augmented Reality for People with Low Vision. Association for Computing Machinery, New York, NY, USA. 10.1145/3430263.3452441 [DOI] [Google Scholar]
  • [2].TapTapSee. 2021. Assistive Technology for the Blind and Visually Impaired. https://taptapseeapp.com/
  • [3].Aira. 2021. Connecting you to real people instantly to simplify daily life. https://aira.io/
  • [4].Akter Taslima, Ahmed Tousif, Kapadia Apu, and Swaminathan Swami Manohar. 2020. Privacy Considerations of the Visually Impaired with Camera Based Assistive Technologies: Misrepresentation, Impropriety, and Fairness. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (Virtual Event, Greece) (ASSETS ‘20). Association for Computing Machinery, New York, NY, USA, Article 32, 14 pages. 10.1145/3373625.3417003 [DOI] [Google Scholar]
  • [5].Shurug Al-Khalifa and Muna Al-Razgan. 2016. Ebsar: Indoor guidance for the visually impaired. Computers & Electrical Engineering 54 (2016), 26–39. [Google Scholar]
  • [6].Sieker Andreasen Morten, Villemann Nielsen Henrik, Ormholt Schrøder Simon, and Stage Jan. 2007. What Happened to Remote Usability Testing? An Empirical Study of Three Methods. Association for Computing Machinery, New York, NY, USA, 1405–1414. 10.1145/1240624.1240838 [DOI] [Google Scholar]
  • [7].Baranski Przemyslaw and Strumillo Pawel. 2015. Field trials of a teleassistance system for the visually impaired. In 2015 8th International Conference on Human System Interaction (HSI). IEEE, 173–179. [Google Scholar]
  • [8].Barrett-Lennard Amy, et al. 2016. The ROAM Project Part 1: Exploring new frontiers in video conferencing to expand the delivery of remote O&M services in regional Western Australia. International Journal of Orientation & Mobility 8, 1 (2016), 101–118. [Google Scholar]
  • [9].Bigham Jeffrey P and Cavender Anna C. 2009. Evaluating existing audio CAPTCHAs and an interface optimized for non-visual use. In Proceedings of the SIGCHI conference on human factors in computing systems. 1829–1838. [Google Scholar]
  • [10].Caine Kelly. 2016. Local Standards for Sample Size at CHI. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ‘16). Association for Computing Machinery, New York, NY, USA, 981–992. 10.1145/2858036.2858498 [DOI] [Google Scholar]
  • [11].Chaudary Babar, Paajala Iikka, Keino Eliud, and Pulli Petri. 2017. Tele-guidance based navigation system for the visually impaired and blind persons. In eHealth 360°. Springer, 9–16. [Google Scholar]
  • [12].Chung SeungA, Park Soobin, Park Sohyeon, Lee Kyungyeon, and Oh Uran. 2021. Improving Mealtime Experiences of People with Visual Impairments. Association for Computing Machinery, New York, NY, USA. 10.1145/3430263.3452421 [DOI] [Google Scholar]
  • [13].Dewald Hong Phangia, Smyth Catherine A, et al. 2013. Feasibility of orientation and mobility services for young children with vision impairment using teleintervention. International Journal of Orientation & Mobility 6, 1 (2013), 83–92. [Google Scholar]
  • [14].Dixon Emma, Shetty Ashrith, Pimento Simone, and Lazar Amanda. 2021. Lessons Learned from Remote User-Centered Design with People with Dementia. In Dementia Lab Conference. Springer, 73–82. [Google Scholar]
  • [15] Christin Engel, Karin Müller, Angela Constantinescu, Claudia Loitsch, Vanessa Petrausch, Gerhard Weber, and Rainer Stiefelhagen. 2020. Travelling More Independently: A Requirements Analysis for Accessible Journeys to Unknown Buildings for People with Visual Impairments. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (Virtual Event, Greece) (ASSETS ’20). Association for Computing Machinery, New York, NY, USA, Article 27, 11 pages. 10.1145/3373625.3417022
  • [16] M. R. Everingham, B. T. Thomas, T. Troscianko, et al. 1999. Head-mounted mobility aid for low vision using scene classification techniques. The International Journal of Virtual Reality 3, 4 (1999), 3.
  • [17] Be My Eyes. 2021. Bringing sight to blind and low-vision people. https://www.bemyeyes.com/about
  • [18] Alexander Fiannaca, Ilias Apostolopoulos, and Eelke Folmer. 2014. Headlock: a wearable navigation aid that helps blind cane users traverse large open spaces. In Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility. ACM, 19–26.
  • [19] Cole Gleason, Amy Pavel, Himalini Gururaj, Kris Kitani, and Jeffrey Bigham. 2020. Making GIFs Accessible. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (Virtual Event, Greece) (ASSETS ’20). Association for Computing Machinery, New York, NY, USA, Article 24, 10 pages. 10.1145/3373625.3417027
  • [20] David Gonçalves, André Rodrigues, and Tiago Guerreiro. 2020. Playing With Others: Depicting Multiplayer Gaming Experiences of People With Visual Impairments. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility. 1–12.
  • [21] Google. 2021. Google Meet: Secure video conferencing for everyone. https://meet.google.com/
  • [22] Jonathan Huang, Max Kinateder, Matt J. Dunn, Wojciech Jarosz, Xing-Dong Yang, and Emily A. Cooper. 2019. An augmented reality sign-reading assistant for users with reduced vision. PloS one 14, 1 (2019), e0210630.
  • [23] Mohit Jain, Nirmalendu Diwakar, and Manohar Swaminathan. 2021. Smartphone Usage by Expert Blind Users. Association for Computing Machinery, New York, NY, USA. 10.1145/3411764.3445074
  • [24] Rie Kamikubo, Naoya Kato, Keita Higuchi, Ryo Yonetani, and Yoichi Sato. 2020. Support Strategies for Remote Guides in Assisting People with Visual Impairments for Effective Indoor Navigation. Association for Computing Machinery, New York, NY, USA, 1–12. 10.1145/3313831.3376823
  • [25] Armin Kohlrausch and Steven van de Par. 2005. Audio-visual interaction in the context of multi-media applications. In Communication acoustics. Springer, 109–138.
  • [26] Hae-Na Lee, Sami Uddin, and Vikas Ashok. 2020. TableView: Enabling Efficient Access to Web Data Records for Screen-Magnifier Users. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (Virtual Event, Greece) (ASSETS ’20). Association for Computing Machinery, New York, NY, USA, Article 23, 12 pages. 10.1145/3373625.3417030
  • [27] Kyungjun Lee, Daisuke Sato, Saki Asakawa, Chieko Asakawa, and Hernisa Kacorri. 2021. Accessing Passersby Proxemic Signals through a Head-Worn Camera: Opportunities and Limitations for the Blind. Association for Computing Machinery, New York, NY, USA. 10.1145/3441852.3471232
  • [28] Kyungjun Lee, Daisuke Sato, Saki Asakawa, Hernisa Kacorri, and Chieko Asakawa. 2020. Pedestrian detection with wearable cameras for the blind: A two-way perspective. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–12.
  • [29] Yunjung Lee, Hwayeon Joh, Suhyeon Yoo, and Uran Oh. 2021. AccessComics: An Accessible Digital Comic Book Reader for People with Visual Impairments. Association for Computing Machinery, New York, NY, USA. 10.1145/3430263.3452425
  • [30] Barbara Leporini, Marina Buzzi, and Marion Hersh. 2021. Distance Meetings during the Covid-19 Pandemic: Are Video Conferencing Tools Accessible for Blind People? Association for Computing Machinery, New York, NY, USA. 10.1145/3430263.3452433
  • [31] Jörg Lewald. 2002. Opposing effects of head position on sound localization in blind and sighted human subjects. European Journal of Neuroscience 15, 7 (2002), 1219–1224. 10.1046/j.1460-9568.2002.01949.x
  • [32] Kelly Mack, Emma McDonnell, Dhruv Jain, Lucy Lu Wang, Jon E. Froehlich, and Leah Findlater. 2021. What Do We Mean by “Accessibility Research”? A Literature Survey of Accessibility Papers in CHI and ASSETS from 1994 to 2019. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 371, 18 pages. 10.1145/3411764.3445412
  • [33] Françoise Martinez. 1977. Les informations auditives permettent-elles d’établir des rapports spatiaux? Données expérimentales et cliniques chez l’aveugle congénital [Can auditory information establish spatial relations? Experimental and clinical data on the congenitally blind]. (1977). 10.3406/psy.1977.28188
  • [34] Mei Miao, Hoai Anh Pham, Jens Friebe, and Gerhard Weber. 2016. Contrasting usability evaluation methods with blind users. Universal Access in the Information Society 15, 1 (2016), 63–76.
  • [35] Microsoft. 2021. Microsoft Teams: Meet, chat, call, and collaborate in just one place. https://www.microsoft.com/en-us/microsoft-teams/group-chat-software
  • [36] Cecily Morrison, Edward Cutrell, Martin Grayson, Anja Thieme, Alex Taylor, Geert Roumen, Camilla Longden, Sebastian Tschiatschek, Rita Faia Marques, and Abigail Sellen. 2021. Social Sensemaking with AI: Designing an Open-ended AI experience with a Blind Child. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
  • [37] World Health Organization. 2020. WHO Director-General’s opening remarks at the media briefing on COVID-19. https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---11-march-2020
  • [38] Helen Petrie, Fraser Hamilton, Neil King, and Pete Pavan. 2006. Remote usability evaluations with disabled people. In Proceedings of the SIGCHI conference on Human Factors in computing systems. 1133–1141.
  • [39] Paymon Rafian and Gordon E. Legge. 2017. Remote Sighted Assistants for Indoor Location Sensing of Visually Impaired Pedestrians. ACM Trans. Appl. Percept. 14, 3, Article 19 (July 2017), 14 pages. 10.1145/3047408
  • [40] L. Penny Rosenblum, Paola Chanes-Mora, C. Rett McBride, Joshua Flewellen, Niranjani Nagarajan, Rosemary Nave Stawasz, and Bonnielin Swenor. 2020. Flatten Inaccessibility: Impact of COVID-19 on Adults Who Are Blind or Have Low Vision in the United States. American Foundation for the Blind. https://www.afb.org/research-and-initiatives/flatten-inaccessibility-survey
  • [41] Abir Saha and Anne Marie Piper. 2020. Understanding Audio Production Practices of People with Vision Impairments. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (Virtual Event, Greece) (ASSETS ’20). Association for Computing Machinery, New York, NY, USA, Article 36, 13 pages. 10.1145/3373625.3416993
  • [42] Anastasia Schaadhardt, Alexis Hiniker, and Jacob O. Wobbrock. 2021. Understanding Blind Screen-Reader Users’ Experiences of Digital Artboards. Association for Computing Machinery, New York, NY, USA. 10.1145/3411764.3445242
  • [43] Jerry Schnepp and Brent Shiver. 2011. Improving deaf accessibility in remote usability testing. In The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility. 255–256.
  • [44] Andrew Sears and Vicki Hanson. 2011. Representing users in accessibility research. In Proceedings of the SIGCHI conference on Human factors in computing systems. 2235–2238.
  • [45] Joschua Thomas Simon-Liedtke, Way Kiat Bong, Trenton Schulz, and Kristin Skeide Fuglerud. 2021. Remote Evaluation in Universal Design Using Video Conferencing Systems During the COVID-19 Pandemic. In International Conference on Human-Computer Interaction. Springer, 116–135.
  • [46] Alexa F. Siu, Danyang Fan, Gene S-H Kim, Hrishikesh V. Rao, Xavier Vazquez, Sile O’Modhrain, and Sean Follmer. 2021. COVID-19 Highlights the Issues Facing Blind and Visually Impaired People in Accessing Data on the Web. Association for Computing Machinery, New York, NY, USA. 10.1145/3430263.3452432
  • [47] Lee Stearns, Leah Findlater, and Jon E. Froehlich. 2018. Design of an Augmented Reality Magnification Aid for Low Vision Users. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, 28–39.
  • [48] Lee Stearns and Anja Thieme. 2018. Automated Person Detection in Dynamic Scenes to Assist People with Vision Impairments: An Initial Investigation. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (Galway, Ireland) (ASSETS ’18). Association for Computing Machinery, New York, NY, USA, 391–394. 10.1145/3234695.3241017
  • [49] Nelson Daniel Troncoso Aldas, Sooyeon Lee, Chonghan Lee, Mary Beth Rosson, John M. Carroll, and Vijaykrishnan Narayanan. 2020. AIGuide: An Augmented Reality Hand Guidance Application for People with Visual Impairments. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility. 1–13.
  • [50] Vuzix. 2021. Vuzix Blade Smart Glasses. https://www.vuzix.com/products/blade-smart-glasses-upgraded
  • [51] M. C. Wanet and C. Veraart. 1985. Processing of auditory information by the blind in spatial localization tasks. Percept Psychophys 38, 1 (Jul 1985), 91–96. 10.3758/bf03202929
  • [52] Ruolin Wang, Zixuan Chen, Mingrui Ray Zhang, Zhaoheng Li, Zhixiu Liu, Zihan Dang, Chun Yu, and Xiang ‘Anthony’ Chen. 2021. Revamp: Enhancing Accessible Information Seeking Experience of Online Shopping for Blind or Low Vision Users. Association for Computing Machinery, New York, NY, USA. 10.1145/3411764.3445547
  • [53] Rachel Wood, Emma Dixon, Salma Elsayed-Ali, Etka Shokeen, Amanda Lazar, and Jonathan Lazar. 2021. Investigating Best Practices for Remote Summative Usability Testing with People with Mild to Moderate Dementia. ACM Transactions on Accessible Computing (TACCESS) 14, 3 (2021), 1–26.
  • [54] Yuhang Zhao, Elizabeth Kupferstein, Hathaitorn Rojnirun, Leah Findlater, and Shiri Azenkot. 2020. The effectiveness of visual and audio wayfinding guidance on smartglasses for people with low vision. In Proceedings of the 2020 CHI conference on human factors in computing systems. 1–14.
  • [55] Yuhang Zhao, Sarit Szpiro, and Shiri Azenkot. 2015. Foresee: A customizable head-mounted vision enhancement system for people with low vision. In Proceedings of the 17th international ACM SIGACCESS conference on computers & accessibility. 239–249.
  • [56] Zoom Video Communications, Inc. 2021. Zoom: In this together. Keeping you securely connected wherever you are. https://zoom.us/
