Abstract
Zooming on large tactile displays can result in a loss of orientation, especially if the user’s reference point disappears from the visible area afterwards. To avoid such displacement we developed a focus zoom approach which keeps the currently focused element as the central point for zooming. In this paper, we compare this approach with a conventional midpoint zoom (where the center of the output area is maintained after zooming) on the touch-sensitive BrailleDis 7200 device. In a study with four blind and eight blindfolded sighted participants we show that the focus zoom significantly reduces the displacement of the focused element on the tactile output area. Locating the focus after a focus zoom takes significantly less time, reduces the overall workload and is also preferred by the users.
1 Introduction
To give visually impaired users adequate access to graphical information, large tactile displays have been developed over the past several years (see [18]). Especially for graphics exploration tasks the display should be as large as possible to allow for bi-manual reading on the tactile output area [17], applying the concept of active touch (see [3]). Novel two-dimensional pin-matrix devices consist of up to 120 \(\times \) 60 pins (e. g. the BrailleDis 7200 [11]) and can therefore present much more information at once than conventional single-line Braille displays. For instance, interactive tactile graphics can be presented in addition to simple Braille output.
Compared to conventional visual screens, the resolution of even such a large tactile display is very low (10 dpi). Furthermore, the intake capacity of the tactual sense is considerably lower than that of the visual sense [2], which results in a more time-consuming perception. For these reasons, interaction and presentation techniques have to be adapted. In particular, appropriate zooming and panning techniques are important for interacting with small screens and are therefore often targeted in current research on visual interaction (e. g. comparison of conventional techniques [5], alternative strategies for map navigation [14], zoomable soft keyboards [8, 9], etc.).
Zooming is also necessary when dealing with graphical applications on large tactile displays. Blind and visually impaired users not only prefer zooming functionalities for exploring detailed diagrams, they can also improve their accuracy compared to having no zooming possibilities [13].
In the following, we give a brief overview of existing zooming techniques on two-dimensional tactile displays. Afterwards we present a novel zooming approach for the BrailleDis 7200 and compare it with the conventional zooming on this device in a user study.
2 Related Work
Some approaches for large tactile displays are based on semantic zooming, i. e. the amount of information is adapted to the current zoom level. In this way, enlargement shows more and more details while downsizing removes details to simplify the image. For instance, Rotard et al. use semantic zooming methods for presenting Scalable Vector Graphics (SVG) [15].
Furthermore, there are also approaches in which the algorithm automatically decides which zoom levels are uninformative and, therefore, should not be shown to the user [12, 13]. This means only zoom levels that are significantly different from the previous level are shown, while the cognitive grouping of information is preserved. In a user study with blind subjects, a significant improvement in correct answers was shown compared to a conventional zoom [13]. There was no difference in response time, but fewer clicks were used. However, a large tactile display was not used for exploring the virtual diagram; instead, a single Braille cell was mounted on a mouse which was moved across a graphics tablet.
If zooming is to be realized independently of any knowledge about the presented content, semantic zooming is not applicable. Instead, geometric zooming is necessary. In graphical applications it normally leads to a continuous change of scale. The applicability of such continuous zooming to tactile displays is unclear. Alternatively, providing 25 discrete zoom levels seems to be enough for handling a haptic zoomable interface [20]. In general, the perception of zooming differs greatly between blind and sighted people, as no overview is available when fingers explore the tactile display sequentially. Because of the continuous change of scale, visual zooming appears to sighted people rather as a morphing of the presented content. This makes it much easier to maintain the context, compared to the kind of “page flipping feeling” in tactual perception that some blind people have described to us.
Several approaches to geometric haptic zooming already exist. The simplest one is a kind of midpoint zoom as used, for example, in the Tangram workstation [1]. In this 1:1 adoption of the visual zooming metaphor of common graphical user interfaces, the zoom is performed at the center of the current view port. However, previous user studies showed that blind participants often were confused by clipped objects after zooming [1]. This especially occurs if an object is near the borders and the user performs a zoom-in operation whose enlargement is big enough to move the object outside the visible range (see also Fig. 1, right). As maintaining the context after zooming is rather complex in tactual exploration anyway, a large displacement of the content should be avoided.
To let the user define the position to be zoomed, another approach is to use the finger position as the center for zooming. One example is the Touch Zoom in the system of Shimada et al. [16]. The original zooming functionality of this system is also based on the conventional midpoint approach. In their paper, Shimada et al. did not compare the conventional zoom with the Touch Zoom; users only rated subjectively how useful and easy to use the new Touch Zoom was [16]. The zoom gestures of the HyperReader system [10] are another example of the finger position approach. Here, the starting point of the circular or semi-circular gesture is used as the center for zooming. However, for intuitive and usable gesture interaction, touch recognition has to be reliable. It can be affected by external influences, such as technical problems, or by ambiguous values resulting from multitouch input.
3 A Novel Zooming Approach on the BrailleDis Device
Both zooming approaches currently used on the BrailleDis 7200 device, namely ‘Midpoint Zoom’ [1] and ‘HyperReader Zoom Gestures’ [10], are not optimal due to the problems mentioned above (clipping of objects after zooming and unreliable touch recognition, respectively). Therefore, another approach combining the advantages of both zooming functionalities was implemented. First, it uses hardware buttons instead of gesture input; second, it is not performed at the center of the current view but at the center point of the currently focused element (Footnote 1). Unlike the zooming gesture, the reference point is not the finger but the system focus. In the following we call this zooming approach ‘Focus Zoom’.
The focused element seems appropriate for that purpose as it is often the target of the user’s current attention. A typical scenario for using the zooming abilities of a large pin-matrix device is the exploration of graphical applications where the spatial arrangement or layout is important, such as a tactile image [1] or map [19] application. In information retrieval or editing tasks within such applications, the element of interest will be actively marked or selected by the user.
Both the implemented midpoint zoom and the focus zoom are based on a fixed scaling ratio: we use a factor of 1.5, i.e. after a zoom-in operation the content is 50% bigger than before. This seems to be a reasonable zoom factor as it provides a sufficiently large difference between two zoom levels while avoiding too many zooming steps.
The difference between midpoint and focus zoom is the center point used for recalculating the view after zooming (compare Fig. 1). The center for midpoint zooming is always the center of the output area. After a zoom operation, the position of the view port (offset, Footnote 2) has to be recalculated from the hypothetical new center and the center of the output area:

newOffset = centerOfOutputArea - newCenter
In contrast, the offset for the focus zoom results from the difference between the old and the new center of the focused shape:

newOffset = oldOffset + (oldCenter - newCenter)
In both cases the hypothetical new center is the product of the old center and the scale factor: newCenter = oldCenter * (newZoom / oldZoom). Here, the old center is either the center of the output area (midpoint zoom) or the center of the focused shape (focus zoom), in each case expressed in the coordinates of the rendered content.
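As a minimal sketch of the two offset calculations, the following Python fragment implements the formulas above. The coordinate model is our assumption: a content point whose rendered coordinate (relative to the content origin, already scaled by the zoom level) is c appears on the display at c + offset; all identifiers are illustrative and not taken from the Tangram workstation.

def new_center(old_center, old_zoom, new_zoom):
    # newCenter = oldCenter * (newZoom / oldZoom)
    f = new_zoom / old_zoom
    return (old_center[0] * f, old_center[1] * f)

def midpoint_zoom_offset(output_center, old_offset, old_zoom, new_zoom):
    # The old center is the output-area center expressed in rendered
    # content coordinates (display position minus current offset).
    old_center = (output_center[0] - old_offset[0],
                  output_center[1] - old_offset[1])
    nc = new_center(old_center, old_zoom, new_zoom)
    # newOffset = centerOfOutputArea - newCenter: the content point under
    # the display center stays under the display center.
    return (output_center[0] - nc[0], output_center[1] - nc[1])

def focus_zoom_offset(focus_center, old_offset, old_zoom, new_zoom):
    # The old center is the focused shape's center in rendered content
    # coordinates.
    nc = new_center(focus_center, old_zoom, new_zoom)
    # newOffset = oldOffset + (oldCenter - newCenter): the shape's position
    # on the display does not move.
    return (old_offset[0] + focus_center[0] - nc[0],
            old_offset[1] + focus_center[1] - nc[1])

For example, zooming in by one step (old_zoom = 1.0, new_zoom = 1.5) with the focused shape rendered at (40, 20) leaves the shape’s display position unchanged under the focus zoom, whereas the midpoint variant keeps only the display center fixed.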
Our hypothesis is that the focus zoom is more efficient than the midpoint zoom. We expect it to reduce the user’s loss of orientation, as the context changes less when the focused element does not move after zooming.
4 Experimental Setup
To investigate the above hypothesis, we conducted a user study comparing the midpoint and focus zoom approaches on the BrailleDis 7200 device. The study was designed to answer the following research questions:
1. Which zooming approach is more efficient?
2. Which zooming approach reduces the workload?
3. Which zooming approach is preferred by the users?
4.1 Participants
Twelve participants with a mean age of 33 years took part in the study. Four of them were blind; the others were blindfolded sighted people. The demographic data of the subjects are shown in Table 1.
As no Braille knowledge or experience in tactual shape recognition was necessary for the study, we also recruited blindfolded sighted people to increase the number of data sets. Their tactual acuity may be inferior to that of blind people [4], but individual differences can be isolated by using a within-group design [7]. This means every participant performed all test conditions, as described below.
4.2 Materials, Task and Procedure
To compare the zooming approaches, the participants were given a set of focus locating tasks: the focused element had to be quickly regained on the tactile display after a zoom operation was executed. This kind of task is common in scenarios where a tactile graphic is explored on a two-dimensional pin-matrix device.
We prepared three test images, each consisting of 18 shapes of different size and form randomly spread over the document (see also Fig. 2). In each single task, one of these shapes was selected randomly (each shape was chosen only once). Furthermore, the current view port of the pin-matrix device was also placed randomly, but it was ensured that the focused shape was visible in the initial output.
In a short training phase, the two zooming approaches as well as the subsequent task were explained to the participant. Furthermore, an example was shown in which the focused shape moved outside the visible view after zooming so that panning became necessary. To enable the user to locate the shape in this situation, panning operations were explained and briefly trained. After the training, three test runs were conducted.
The three test images were randomly assigned to the three test runs. Each single task (one trial) within these test runs consisted of the following phases:
1. searching for the focused element
2. zoom operation (triggered by the test supervisor)
3. retrieving the focused element as fast as possible
Each test run consisted of ten different zooming conditions (single tasks, see Table 2), which were presented in random order. In the first test run, each zooming approach was assigned five times, in random order, to the zooming conditions. In the second test run, one of the zooming approaches (either focus or midpoint zoom) was used for all ten zooming conditions; in the third test run, the remaining zooming approach was used. Each participant had to complete all three test runs (within-group design), giving 30 trials per participant in total. Before each trial, the user was told by which scale factor the current output would be changed after zooming (see ‘zoom mode’ in Table 2). Based on this information, the user could form expectations (a mental model) of what would happen to the focused element. A sketch of this trial plan follows below.
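As a minimal sketch of the trial plan, the following Python fragment generates the 30 trials for one participant. The assignment of the two approaches to the second and third run is randomized here for illustration; the paper does not state how it was counterbalanced.

import random

ZOOM_CONDITIONS = list(range(10))  # the ten zooming conditions of Table 2

def plan_test_runs():
    runs = []
    # Run 1: each approach five times, in random order over the conditions.
    approaches = ['focus'] * 5 + ['midpoint'] * 5
    random.shuffle(approaches)
    conditions = random.sample(ZOOM_CONDITIONS, len(ZOOM_CONDITIONS))
    runs.append(list(zip(conditions, approaches)))
    # Runs 2 and 3: a single approach for all ten conditions each.
    for approach in random.sample(['focus', 'midpoint'], 2):
        conditions = random.sample(ZOOM_CONDITIONS, len(ZOOM_CONDITIONS))
        runs.append([(c, approach) for c in conditions])
    return runs  # 3 runs x 10 trials = 30 trials per participant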
4.3 Apparatus and Measurements
The Tangram workstation (see [1]) was used for presenting the graphical shapes on the BrailleDis 7200 device (see Fig. 2). The graphic files were shown in LibreOffice Draw (Footnote 3), captured and converted into a 10 dpi binary tactile image. This image was sent to the BrailleDis 7200 device, which has a touch-sensitive tactile output area consisting of 120 \(\times \) 60 pins. The size of the tactile area is 30 \(\times \) 15 cm, which allows users to explore the content with both hands.
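A rough sketch of the capture-and-convert step, assuming a simple per-pixel threshold (the actual conversion pipeline of the Tangram workstation is not detailed in this paper) and using the Pillow library for illustration:

from PIL import Image

PINS_X, PINS_Y = 120, 60  # 10 dpi on the 30 x 15 cm output area

def to_pin_matrix(screenshot_path, threshold=128):
    img = Image.open(screenshot_path).convert('L')  # grayscale capture
    img = img.resize((PINS_X, PINS_Y))              # downsample to pin grid
    pixels = img.load()
    # True = pin raised (dark pixel), False = pin lowered
    return [[pixels[x, y] < threshold for x in range(PINS_X)]
            for y in range(PINS_Y)]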
The focused shape was tactually marked by a blinking frame (its bounding box) at a frequency of about 1.7 Hz. Note that for locating the target element before and after zooming it was not necessary to recognize its shape, but only to detect the blinking frame (Footnote 4). As the frame moves to a greater or lesser extent after zooming with either method, it is not enough for the user to just touch the previous location. A task/trial was considered successful as soon as one edge of the bounding box was felt and reported by the participant; to indicate this, the user gave oral feedback (“stop”).
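The blinking focus marker can be sketched as follows; display.set_pins is a hypothetical device interface (the real BrailleDis API is not described here), and only the bounding-box frame computation and the roughly 1.7 Hz toggling follow the paper.

import time

def frame_pins(x0, y0, x1, y1):
    # Pins on the border of the bounding box (x0, y0)-(x1, y1), inclusive.
    pins = set()
    for x in range(x0, x1 + 1):
        pins.update({(x, y0), (x, y1)})
    for y in range(y0, y1 + 1):
        pins.update({(x0, y), (x1, y)})
    return pins

BLINK_HZ = 1.7  # blinking frequency reported above

def blink(display, bbox, duration_s):
    pins, raised = frame_pins(*bbox), False
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        raised = not raised
        display.set_pins(pins, raised)    # hypothetical device call
        time.sleep(1.0 / (2 * BLINK_HZ))  # toggle every half period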
During the tasks, the following data were recorded in logfiles:
- focused shape: name and center position (before and after the zooming operation)
- zoom level: before and after the zooming operation
- offset of the view port: before and after the zooming operation
- time: when the zooming operation was executed and when the subject successfully found the shape again
From these data, the task completion time as well as the distance between the target shape’s center positions before and after zooming were calculated (see the sketch below). Moreover, the user’s workload for each of the two zooming approaches was measured using the NASA-TLX (Task Load Index, see [6]) after the second and third test run. For this, the participant had to verbally give a rating between 0 and 100% for each of the TLX factors. At the end of the test, the user was asked which zoom approach he or she preferred. Besides the demographic data, these values were recorded in a questionnaire.
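A minimal sketch of how these measures can be derived from the log entries; using the Euclidean distance over pin coordinates for the displacement is our assumption, as the paper does not name the metric.

import math

def displacement(center_before, center_after):
    # Distance (in pins) the focused shape's center moved due to zooming.
    return math.hypot(center_after[0] - center_before[0],
                      center_after[1] - center_before[1])

def completion_time(t_zoom_executed, t_shape_found):
    # Seconds between the zoom operation and the participant's oral 'stop'.
    return t_shape_found - t_zoom_executed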
5 Results and Discussion
Nearly all participants used both hands for exploring the tactile output area. Only participant S4 used just her right hand (all fingers and palm). B2 and S2 used their palms in addition to their fingers the whole time, B3 and S3 used only their fingertips, and the other subjects added their palms in some cases, for instance when the fingers alone were not enough to quickly detect the blinking focus. Although the blinking pins made some mechanical sound, according to the participants it did not serve as a clue for locating the focus.
A comparison of completion time in midpoint and focus zoom conditions for each participant is presented in Fig. 3. The mean displacements of the focused element are compared in Fig. 4.
Regarding the time needed to locate the focused element after a zoom operation, the focus zoom (mean completion time = 2.7 s, SD = 0.8) is more efficient than the midpoint zoom (mean completion time = 3.4 s, SD = 0.8). Comparing the average times of blind (midpoint zoom: mean = 2.5 s, SD = 0.5; focus zoom: mean = 2.1 s, SD = 0.9) and sighted (midpoint zoom: mean = 3.8 s, SD = 0.6; focus zoom: mean = 3.0 s, SD = 0.6) participants separately, both user groups needed nearly a quarter more time to locate the focused element after using the midpoint zoom. We suspect this is mainly due to the higher displacement of the focused element after zooming with the midpoint approach (mean distance = 25.2 pins, SD = 2.7) compared to the focus zoom (mean distance = 6.6 pins, SD = 2.6). Paired t-tests show that the differences in completion time (\(t = 3.581\), \(df = 11\), \(p < 0.01\)) and in the displacement of the focused element’s center point after zooming (\(t = 15.093\), \(df = 11\), \(p < 0.001\)) are significant. The sketch below illustrates this analysis.
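A minimal sketch of this analysis with SciPy; the per-participant means below are placeholder values for illustration, not the study’s raw data.

from scipy import stats

# Per-participant mean completion times in seconds (n = 12, within-group).
midpoint = [3.1, 4.0, 3.5, 2.9, 3.8, 3.4, 2.5, 4.2, 3.0, 3.6, 3.9, 2.8]
focus    = [2.5, 3.1, 2.8, 2.2, 3.0, 2.6, 1.9, 3.4, 2.4, 2.9, 3.2, 2.3]

t, p = stats.ttest_rel(midpoint, focus)  # paired t-test, df = n - 1 = 11
print('t = %.3f, df = %d, p = %.4f' % (t, len(midpoint) - 1, p))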
Trials in which the participants performed panning operations were not included in the analysis of completion times. In some cases panning was necessary because the focused element was no longer visible in the view port after zooming. While this never happened in the focus zoom condition, it occurred in 21 out of 180 midpoint zoom trials. The average completion time for these trials is more than ten times higher (mean = 35.5 s, SD = 35.1) than for the other midpoint zoom trials (mean = 3.4 s, SD = 0.8). The high standard deviation of the panning times can be explained by inexperienced users who panned in the wrong direction and therefore needed up to two minutes to locate the focused element. Even apart from these extreme cases, reorientation after a midpoint zoom will in practice be much more time-consuming than the roughly 25% overhead relative to the focus zoom mentioned above.
The distance values in the focus zoom condition require further explanation. Normally, the focus zoom results in no or only very small displacements of the focused element (Footnote 5). However, in some cases there can be considerable displacements after a zoom-out operation due to keeping the content within the visible range of the pin-matrix device (see Fig. 5). For instance, the positioning of the document at the smallest zoom level (the whole image is visible) is always the same (Fig. 5, right) and does not depend on the zooming approach used. Such an adaptation seems necessary to allow for a consistent presentation of content on the pin-matrix device.
This effect also results in a more time-consuming search in zoom-out conditions (see also Fig. 6): the completion time of zoom-out trials (mean = 3.4 s, SD = 1.1) is significantly greater than that of zoom-in trials (mean = 2.8 s, SD = 0.7; \(t = -2.902\), \(df = 11\), \(p < 0.05\)). Especially the zoom-out by three steps at once (factor = 4.5, see Fig. 5) doubles the search time on average from 2.6 s (average time in one- or two-step zoom-out conditions) to 5.3 s. On the other hand, the tested zoom-in conditions (one, two and three steps) show no significant difference in completion time (\(F_{2,220} = 0.578\), \(p > 0.5\)). By and large, the advantage of the focus zoom over the midpoint zoom in terms of completion time is significant in zoom-in conditions (\(t = 3.139\), \(df = 11\), \(p < 0.01\)) but not in zoom-out conditions (\(t = 1.307\), \(df = 11\), \(p = 0.22\)). As shown in Fig. 6, these results hold for both user groups.
The participants assessed the workload-related factors of the TLX significantly lower for the focus zoom than for the midpoint zoom (see Fig. 7; \(t = 4.950\), \(df = 5\), \(p < 0.01\)). Note that low TLX values are better than high values (Footnote 6). Although the individual ratings partially deviate greatly from each other, every participant perceived the focus zoom as less demanding than, or at most equally demanding as, the midpoint zoom in all factors. Thus, with respect to the overall workload, the focus zoom has clear benefits over the midpoint zoom.
All participants liked the focus zoom approach very much. Nine of the 12 participants would prefer it over the midpoint zoom, while the other three (one sighted and two blind users) had no preference. Instead, they would like to have both zooming possibilities because they think the suitability of the zoom method highly depends on the current task. For instance, if only the focused element is of interest, the focus zoom seems more appropriate. On the other hand, they would prefer the midpoint zoom for better keeping the global context.
Regardless of the positive results for the focus zoom, there are some restrictions to its efficiency. If the bounding box is very big compared to the element itself, the blinking frame can be far away from parts of the object (e.g. in case of a long diagonal line). On the one hand, the user may have difficulties matching the blinking focus with the corresponding object; on the other hand, the object and its center can be within the current view port while the blinking bounding box is outside it, and therefore not touchable.
6 Conclusion and Outlook
In this paper we compared two different zooming approaches, namely midpoint and focus zoom, on the pin-matrix device BrailleDis 7200. The task in our study with four blind and eight blindfolded sighted participants was to retrieve the focused element in a tactile graphic after a zooming operation. While the midpoint zoom maintains the middle of the output area, the focus zoom takes the currently focused element as the central point for zooming.
Our results showed that, at least in focus locating tasks, the focus zoom is not only more efficient but also preferred by the users. It allows users to better keep their orientation when dealing with single tactile graphic elements, as it minimizes the displacement of the focused object on the tactile output area after zooming. This in turn reduces the need for time-consuming panning operations. Besides, the overall workload for the focus zoom is significantly lower than that for the midpoint zoom.
These results could be shown for both user groups, blind as well as blindfolded sighted people. In fact, the average values for the two tested zooming conditions show that both the blind and the sighted users were about 25% faster with the focus zoom, regardless of their tactile or visual abilities. Of course, the blind users were faster than the sighted users in all conditions, but the absolute time was not important for our analysis. Independent of accessibility issues, and even for users who are unfamiliar with large tactile displays, a focus-centered zooming approach can support focus finding tasks on two-dimensional tactile displays.
In the end, multiple zooming approaches can be provided redundantly to allow for efficient interaction on tactile pin-matrix devices in various tasks. On the BrailleDis 7200 the user can choose between the above-mentioned zooming methods, namely gesture input, midpoint and focus zoom. For instance, the midpoint zoom is applied if no element is selected, and zoom gestures can enable the user to define a fixation point that is independent of the system focus.
Notes

1. The center point of an element is represented by the center of its bounding box.
2. Note that in our case, the offset is \(\le \)0 as it defines how the content is placed in relation to the currently used view port.
3.
4. Locating the blinking pins is the major challenge in a focus finding task on the BrailleDis device. The recognition of a shape is quite a different task, which is not part of our test. The participants could trust that the focused shape was inside the bounding box. In a real-life scenario on the pin-matrix device, the user must find the blinking pins first and can then continue the image exploration. By concentrating only on finding the focus, we reduce the complexity of the task.
5. Small displacements of one or two pins may occur due to rounding errors.
6. For instance, a low TLX performance factor means that a user was very successful in performing a task (“How successful were you in accomplishing what you were asked to do?”; 0 = perfect, 100 = failure).
References
Bornschein, J., Prescher, D., Weber, G.: Collaborative creation of digital tactile graphics. In: Proceedings of the 17th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 117–126 (2015)
Franke, H.W.: Sehen und Erkennen (See and recognize). Bild der Wissenschaft 5/75 (1975). (in German)
Gibson, J.: Observations on active touch. Psychol. Rev. 69(6), 477–491 (1962)
Goldreich, D., Kanics, I.M.: Tactile acuity is enhanced in blindness. J. Neurosci. 23(8), 3439–3445 (2003)
Gutwin, C., Fedak, C.: Interacting with big interfaces on small screens: a comparison of fisheye, zoom, and panning techniques. In: Proceedings of Graphics Interface 2004, pp. 145–152. Canadian Human-Computer Communications Society (2004)
Hart, S.G., Staveland, L.E.: Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Adv. Psychol. 52, 139–183 (1988)
Lazar, J., Feng, J.H., Hochheiser, H.: Research Methods in Human-Computer Interaction. Wiley, London (2010)
Oney, S., Harrison, C., Ogan, A., Wiese, J.: ZoomBoard: a diminutive QWERTY soft keyboard using iterative zooming for ultra-small devices. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI 2013, pp. 2799–2802. ACM, New York (2013)
Pollmann, F., Wenig, D., Malaka, R.: HoverZoom: making on-screen keyboards more accessible. In CHI 2014 Extended Abstracts on Human Factors in Computing Systems, pp. 1261–1266. ACM, New York (2014)
Prescher, D., Weber, G., Spindler, M.: A tactile windowing system for blind users. In: Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 91–98 (2010)
Prescher, D.: Redesigning input controls of a touch-sensitive pin-matrix device. In: Proceedings of the International Workshop on Tactile/Haptic User Interfaces for Tabletops and Tablets, pp. 19–24 (2014)
Rastogi, R., Pawluk, D.T.: Automatic, intuitive zooming for people who are blind or visually impaired. In: Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 239–240 (2010)
Rastogi, R., Pawluk, T.V., Ketchum, J.: Intuitive tactile zooming for graphics accessed by individuals who are blind and visually impaired. IEEE Trans. Neural Syst. Rehabil. Eng. 21(4), 655–663 (2013)
Robbins, D.C., Cutrell, E., Sarin, R., Horvitz, E.: ZoneZoom: map navigation for smartphones with recursive view segmentation. In: Proceedings of the Working Conference on Advanced Visual Interfaces, AVI 2004, pp. 231–234. ACM, New York (2004)
Rotard, M., Otte, K., Ertl, T.: Exploring scalable vector graphics for visually impaired users. In: Miesenberger, K., Klaus, J., Zagler, W.L., Burger, D. (eds.) ICCHP 2004. LNCS, vol. 3118, pp. 725–730. Springer, Heidelberg (2004). doi:10.1007/978-3-540-27817-7_108
Shimada, S., Yamamoto, S., Uchida, Y., Shinohara, M., Shimizu, Y., Shimojo, M.: New design for a dynamic tactile graphic system for blind computer users. In: Proceedings of SICE Annual Conference, pp. 1474–1477. IEEE (2008)
Shimada, S., Murase, H., Yamamoto, S., Uchida, Y., Shimojo, M., Shimizu, Y.: Development of directly manipulable tactile graphic system with audio support function. In: Miesenberger, K., Klaus, J., Zagler, W., Karshmer, A. (eds.) ICCHP 2010. LNCS, vol. 6180, pp. 451–458. Springer, Heidelberg (2010). doi:10.1007/978-3-642-14100-3_68
Vidal-Verdú, F., Hafez, M.: Graphical tactile displays for visually-impaired people. IEEE Trans. Neural Syst. Rehabil. Eng. 15(1), 119–130 (2007)
Zeng, L., Weber, G.: Audio-haptic browser for a geographical information system. In: Miesenberger, K., Klaus, J., Zagler, W., Karshmer, A. (eds.) ICCHP 2010. LNCS, vol. 6180, pp. 466–473. Springer, Heidelberg (2010). doi:10.1007/978-3-642-14100-3_70
Ziat, M., Gapenne, O., Stewart, J., Lenay, C., Bausse, J.: Design of a haptic zoom: levels and steps. In: Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pp. 102–108. IEEE (2007)
Acknowledgments
We thank all participants as well as Jens Bornschein for his work on the extension mechanism of the Tangram workstation. The Tangram project was sponsored by the Federal Ministry of Labour and Social Affairs (BMAS) under the grant number R/FO125423. The paper is part of the Mosaik project which is also sponsored by the BMAS under the grant number 01KM151112. Only the authors of this paper are responsible for its content.