
Performance assessment of an architecture with adaptative interfaces for people with special needs

  • Experience Report
  • Published:
Empirical Software Engineering

Abstract

People in industrial societies carry more and more portable electronic devices (e.g., smartphones or consoles) with some kind of wireless connectivity support. Interaction with auto-discovered target devices present in the environment (e.g., the air conditioning of a hotel) is not straightforward, since devices may provide inaccessible user interfaces (e.g., in a foreign language that the user cannot understand). Scalability to multiple concurrent users and response times are still problems in this domain. In this paper, we assess an interoperable architecture that enables interaction between people with some kind of special need and their environment. The assessment, based on performance patterns and antipatterns, aims to detect performance issues and to enhance the architecture design so as to improve system performance. As a result of the assessment, the initial design changed substantially: we refactored the design according to the Fast Path pattern and The Ramp antipattern, and resources were correctly allocated. Finally, the required response time was met in all system scenarios. For a specific scenario, response time was reduced from 60 seconds to less than 6 seconds.


Notes

  1. http://www.inredis.es/default.aspx

  2. The reader should note that we have added some grey notes in the UML diagrams. They are performance annotations that will be explained in Section 3.

  3. http://www.inredis.es/consorcio.aspx

  4. http://www.sce.carleton.ca/rads/index.html

  5. See the top grey note in Fig. 19 for an illustrative example.

  6. For example, blind people interact with tactile interfaces by means of an immediate audible feedback.

  7. http://www.un.org/disabilities/default.asp?id=18

  8. Note that this is not a limitation to evaluate the architecture.

  9. We recall that resources are represented: a) in the sequence diagrams by life-lines, b) in the GSPN by shared places, highlighted in red in the figures.

  10. The ASSM process was introduced in Section 2 and it is detailed in Appendix A.5.

  11. According to EASTIN (www.eastin.eu), the principal Assistive Technology Information Network in Europe, the number of Assistive Products available in the EU increased to more than 39,221 products in 2009.

  12. From all the Assistive Products in the marketplace, we considered those that can be integrated into the architecture, i.e., Assistive Software products.

  13. http://argouml.tigris.org/

References

  • Abrams M, Phanouriou C, Batongbacal AL, Williams SM, Shuster JE (1999) UIML: an appliance-independent XML user interface language. Comput Netw 31(11-16):1695–1708


  • Ajmone Marsan M, Balbo G, Conte G, Donatelli S, Franceschinis G (1995) Modelling with generalized stochastic Petri nets. Wiley

  • Alvargonzález M, Etayo E, Gutiérrez JA, Madrid J (2010) Arquitectura orientada a servicios para proporcionar accesibilidad. In: Proceedings of 5th Jornadas Científico-Ténicas en Servicios Web y SOA (JSWEB’10), (In Spanish)

  • Bass L, Clements P, Kazman R (2005) Software Architecture in Practice. SEI Series in Software Engineering. Addison-Wesley

  • Becker S, Koziolek H, Reussner R (2009) The Palladio component model for model-driven performance prediction. J Syst Softw 82(1):3–22


  • Bergenti F, Poggi A (2000) Improving UML Designs Using Automatic Design Pattern Detection. In: Proceeding 12th, International Conference on Softw Engineering and Knowledge Engineering (SEKE’00), pp 336–343

  • Bernardi S, Merseguer J (2007) Performance evaluation of UML design with stochastic well-formed nets. J Syst Softw 80(11):1843–1865


  • Brown W, Malveau R, McCormick H, Mowbray T (1998) AntiPatterns: refactoring software, architectures, and projects in crisis. John Wiley

  • Card SK, Robertson GG, Mackinlay JD (1991) The information visualizer: an information workspace. In: Proceeding SIGCHI Conference on Human Factors in Computing Systems (CHI’91), ACM, pp 181–186

  • Catalán E, Catalán M (2010) Performance Evaluation of the INREDIS framework. Technical report, Departament d’Enginyeria Telemàtica, Universitat Politècnica de Catalunya

  • Chappell D (2004) Enterprise Service Bus. O’Reilly Media Inc.

  • Chi CF, Tseng LK, Jang Y (2012) Pruning a decision tree for selecting computer-related assistive devices for people with disabilities. IEEE Trans Neural Syst Rehabil Eng 20(4):564–573


  • Chiola G, Franceschinis G, Gaeta R, Ribaudo M (1995) GreatSPN 1.7: graphical editor and analyzer for timed and stochastic Petri nets. Perform Eval 24:47–68


  • Cortellessa V, Di Marco A, Inverardi P (2011) Model-Based Software Performance Analysis. Springer

  • Cortellessa V, Di Marco A, Trubiani C (2012) An approach for modeling and detecting software performance antipatterns based on first-order logics. Softw Syst Model, pp 1–42

  • Cortés U, Annicchiarico R, Vázquez-Salceda J, Urdiales C, Cañamero L, López M, Sànchez-Marrè M, Caltagirone C (2003) Assistive technologies for the disabled and for the new generation of senior citizens: the e-Tools architecture. AI Commun 16(3):193–207


  • Distefano S, Scarpa M, Puliafito A (2011) From UML to Petri nets: the PCM-based methodology. IEEE Trans Softw Eng 37(1):65–79


  • Dugan-Jr RF, Glinert EP, Shokoufandeh A (2002) The sisyphus database retrieval software performance antipattern. In: Proceeding 3rd International Workshop on Software and Performance (WOSP’02). ACM, pp 10–16

  • Gamma E, Helm R, Johnson R, Vlissides J (1995) Design Patterns: elements of reusable object-oriented software. Addison–Wesley

  • Giménez R, Pous M, Rico-Novella F (2012) Securing an interoperability architecture for home and urban networking: implementation of the security aspects in the INREDIS interoperability architecture. In: Proceedings 26th international conference on advanced information networking and applications workshops (WAINA’12), IEEE Comput Soc, vol 0, pp 714–719

  • Gómez-Martínez E, Merseguer J (2006) ArgoSPE: model-based software performance engineering. In: Proceedings 27th international conference on applications and theory of Petri nets and other models of concurrency (ICATPN’06). Springer-Verlag, LNCS, vol 4024, pp 401–410. http://argospe.tigris.org

  • Gómez-Martínez E, Merseguer J (2010) Performance modeling and analysis of the universal control hub. In: Proceedings 7th European Performance Engineering Workshop (EPEW’10). Springer, LNCS, vol 6342, pp 160–174

  • Gómez-Martínez E, Ilarri S, Merseguer J (2007) Performance analysis of mobile agents tracking. In: Proceedings 6th international workshop on software and performance (WOSP’07). ACM, pp 181–188

  • Gómez-Martínez E, Linaje M, Iglesias-Pérez A, Sánchez-Figueroa F, Preciado JC, González-Cabero R, Merseguer J (2013) Interacting with inaccessible smart environments: conceptualization and evaluated recommendation of assistive software. Submitted for publication

  • González-Cabero R (2010) A semantic matching process for detecting and reducing accessibility gaps in an ambient intelligence scenario. In: Proceedings 4th International Symposium of Ubiquitous Computing and Ambient Intelligence (UCAmI’10). IBERGACETA Publicaciones, pp 315–324

  • de Gooijer T, Jansen A, Koziolek H, Koziolek A (2012) An industrial case study of performance and cost design space exploration. In: Proceedings 3rd ACM/SPEC international conference on performance engineering (ICPE’12), ACM, pp 205–216

  • Grand M (1998) Patterns in Java: a catalog of reusable design patterns illustrated with UML, vol 1. Wiley

  • Grand M (2001) Java Enterprise Design Patterns: patterns in Java vol 3. Wiley

  • Hermanns H, Herzog U, Katoen JP (2002) Process algebra for performance evaluation. Theor Comput Sci 274(1-2):43–87


  • Huber N, Becker S, Rathfelder C, Schweflinghaus J, Reussner RH (2010) Performance modeling in industry: a case study on storage virtualization. In: Proceedings 32nd ACM/IEEE international conference on software engineering (ICSE’10). ACM, pp 1–10

  • INREDIS (2010) Deliverable-78.2.1. Final guide to a generic platform deployment

  • International Standard Organization (2009) ISO 24756:2009–Framework for specifying a common access profile (CAP) of needs and capabilities of users, systems, and their environments

  • International Standard Organization (2011) ISO 9999:2011–Assistive products for persons with disability – Classification and terminology

  • Isa MA, Jawawi DNA (2011) Comparative evaluation of performance assessment and modeling method for software architecture. In: Software Engineering and Computer Systems, Communications in Computer and Information Science, vol 181, Springer-Verlag, pp 764–776

  • Jin Y, Tang A, Han J, Liu Y (2007) Performance evaluation and prediction for legacy information systems. In: Proceedings 29th International conference on software engineering (ICSE’07). IEEE Comput Soc, pp 540–549

  • Kadouche R, Abdulrazak B, Giroux S, Mokhtari M (2009) Disability centered approach in smart space management. Int J Smart Home 3(3):13–26


  • Kauppi T (2003) Performance analysis at the software architectural level. In: Technical Report 512, VTT Technical Research Centre of Finland

  • Kounev S (2006) Performance modeling and evaluation of distributed component-based systems using queueing Petri nets. IEEE Trans Softw Eng 32(7):486–502


  • Koziolek A, Koziolek H, Reussner R (2011) PerOpteryx: automated application of tactics in multi-objective software architecture optimization. In: Proceeding 7th international conference on the quality of software architectures (QoSA’11), ACM, pp 33–42

  • Koziolek H, Schlich B, Becker S, Hauck M (2012) Performance and reliability prediction for evolving service-oriented software systems. Empir Softw Eng:1–45

  • Lazowska E, Zahorjan J, Scott G, Sevcik K (1984) Quantitative system performance: computer system analysis using queueing network models. Prentice-Hall

  • Lea D (1999) Concurrent programming in Java: design principles and patterns, 2nd edn. Addison-Wesley Longman Publishing Co. Inc

  • Levandoski JJ, Ekstrand MD, Ludwig M, Eldawy A, Mokbel MF, Riedl J (2011) RecBench: benchmarks for evaluating performance of recommender system architectures. PVLDB 4(11):911–920


  • Liang S, Fodor P, Wan H, Kifer M (2009) OpenRuleBench: an analysis of the performance of rule engines. In: Proceedings 18th International Conference on World Wide Web. ACM, pp 601–610

  • Liu Y, Fekete A, Gorton I (2005) Design-level performance prediction of component-based applications. IEEE Trans Softw Eng 31(11):928–941


  • Liu Y, Gorton I, Zhu L (2007) Performance prediction of service-oriented applications based on an enterprise service bus. In: Proceedings 31st Annual International Computer Software and Applications Conference (COMPSAC’07). IEEE Computer Society, pp 327–334

  • Llinás P, Montoro G, García-Herranz M, Haya P, Alamán X (2009) Adaptive interfaces for people with special needs. In: Proceedings 10th International Work Conference on Artificial Neural Networks Part II: Distributed computing. Artificial intelligence, bioinformatics, soft computing, and ambient assisted living (IWANN’09), Springer-Verlag, pp 772–779

  • Miller RB (1968) Response time in man-computer conversational transactions. In: Proceedings AFIPS fall joint computer conference (AFIPS’68), vol 33, pp 267–277

  • Murua A, González I, Gómez-Martínez E (2011) Cloud-based assistive technology services. In: Proceedings Federated Conference on Computer Science and Information Systems (FedCSIS’11), pp 985–989

  • Nielsen J (1993) Usability Engineering. Morgan Kaufmann

  • Object Management Group (OMG) (2011) A UML profile for modeling and analysis of real time embedded systems (MARTE) Version 1.1. http://www.omgmarte.org/

  • Petriu DC, Woodside CM (2002) Software performance models from system scenarios in use case maps. In: Proceedings 12th international conference on computer performance evaluation. Modelling Techniques and Tools (TOOLS’02), Springer-Verlag, pp 141–158

  • Phanouriou C (2000) UIML: a device-independent user interface markup language. Tech. rep. Virginia Polytechnic Institute and State University

  • Pooley RJ, Abdullatif AAL (2010) CPASA: Continuous Performance Assessment of Software Architecture. In: Proceedings 17th IEEE International Conference and Workshops on the Eng of Computer-Based Systems (ECBS’10), IEEE Comput Soc, pp 79–87

  • Pous M, Serra-Vallmitjana C, Giménez R, Torrent-Moreno M, Boix D (2012) Enhancing accessibility: mobile to ATM case study. In: Proceedings IEEE consumer communications and networking conference (CCNC’12). IEEE Computer Society, pp 404–408

  • Prud’hommeaux E, Seaborne A (2006) SPARQL Query Language for RDF. http://www.w3.org/TR/rdf-sparql-query/

  • Q-ImPrESS (2009) Q-ImPrESS consortium: project website. http://www.q-impress.eu

  • QoSA (2005–2014) International ACM Sigsoft Conference on the Quality of Software Architectures, SIGSOFT

  • Runeson P, Höst M (2009) Guidelines for conducting and reporting case study research in software engineering. Empir Softw Eng 14(2):131–164


  • Sainz F, Casacuberta J, Díaz M, Madrid J (2011) Evaluation of an accessible home control and telecare system. In: Proceedings 13th Human-Computer Interaction Conference (INTERACT’11), Springer-Verlag, LNCS, vol 6949, pp 527–530

  • Schmidt DC, Stal M, Rohnert H, Buschmann F (2000) Pattern-oriented software architecture: patterns for concurrent and networked objects, 2nd edn. Wiley

  • Sereno M, Balbo G (1997) Mean value analysis of stochastic Petri nets. Perform Eval 29(1):35–62


  • Smith CU (1990) Performance engineering of software systems. Addison–Wesley

  • Smith CU, Williams LG (2000) Software performance antipatterns. In: Proceedings 2nd International Workshop on Software and Performance (WOSP’00). ACM, pp 127–136

  • Smith CU, Williams LG (2001) Software performance antipatterns: common performance problems and their solutions. In: Proceedings 27th International Conference Computer Measurement Group (CMG’01), pp 797–806

  • Smith CU, Williams LG (2002a) New software performance antipatterns: More ways to shoot yourself in the foot. In: Proceedings 28th International Conference Computer Measurement Group (CMG’02), pp 667–674

  • Smith CU, Williams LG (2002b) Performance solutions: a practical guide to creating responsive, scalable software. Addison–Wesley

  • Smith CU, Williams LG (2003) More new software antipatterns: Even more ways to shoot yourself in the foot. In: Proceedings 29th International Conference Computer Measurement Group (CMG’03), pp 717–725

  • Stephanidis C (2001) Adaptive techniques for universal access. User Model User-Adap Inter 11:159–179


  • Tribastone M, Gilmore S (2008) Automatic translation of UML sequence diagrams into PEPA models. In: Proceeding 5th international conference on the quantitative evaluation of systems (QEST’08), pp 205–214

  • UML-SPT (2005) UML Profile for Schedulability, Performance and Time Specification. Version 1.1, formal/05-01-02

  • W3C (2012) OWL 2 Web Ontology Language. http://www.w3.org/TR/owl2-overview/

  • Williams LG, Smith CU (2002) PASA: a method for the performance assessment of software architectures. In: Proceedings 3rd international workshop on software and performance (WOSP’02). ACM, pp 179–188

  • Woodcock A, Fielden S, Bartlett R (2012) The user testing toolset: a decision support system to aid the evaluation of assistive technology products. Work: J Prevention, Assessment Rehabil 41:1381–1386


  • Woodside C, Petriu D, Petriu D, Shen H, Israr T, Merseguer J (2005) Performance by Unified Model Analysis (PUMA). In: Proceedings 5th International Workshop on Software and Performance (WOSP’05), ACM, pp 1–12

  • Woodside CM, Neilson JE, Petriu DC, Majumdar S (1995) The stochastic rendezvous network model for performance of synchronous client-server-like distributed software. IEEE Trans Comput 44(1):20–34


  • Woodside M, Franks G, Petriu DC (2007) The future of software performance engineering. In: Future of Software Engineering (FOSE’07). IEEE Comput Soc, pp 171–187

  • Woodside M, Petriu DC, Merseguer J, Petriu DB, Alhaj M (2013) Transformation challenges: from software models to performance models. Softw Syst Modeling (in Press)

  • XHTML (2010) Extensible HyperText Markup Language. http://www.xhtml.org/

  • Zimmermann G, Vanderheiden GC (2007) The universal control hub: An open platform for remote user interfaces in the digital home. In: Proceedings 12th International Conference Human-Computer Interaction (HCI’07). Springer, LNCS, vol 4551, pp 1040–1049


Acknowledgments

The research described in this paper arises from a Spanish research project called INREDIS (INterfaces for RElations between Environment and people with DISabilities). INREDIS is led by Technosite and funded by CDTI (Industrial Technology Development Centre), under the CENIT (National Strategic Technical Research Consortia) Programme, in the framework of the Spanish government’s INGENIO 2010 initiative. The opinions expressed in this paper are those of the authors and are not necessarily those of the INREDIS project’s partners or of the CDTI. José Merseguer has been supported by CICYT DPI2010-20413 and GISED (partially co-financed by the Aragonese Government (Ref. T27) and the European Social Fund). We would like to thank José Antonio Gutiérrez for his work in the experimental tests and Marta Alvargonzález, Esteban Etayo and Fausto Sainz for their help. Last but not least, the authors thank the anonymous reviewers for their valuable help to improve this work.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Elena Gómez-Martínez.

Additional information

Communicated by: Brian Robinson

Appendices

Appendix A: Design of the System

1.1 A.1 First Interaction Scenario

First Interaction, depicted in the UML sequence diagram (SD) in Fig. 19, consists of the creation of the INREDIS initial interface, which acts as the user's access medium to the environment. It lists all the available devices and services along with their current state and related information, and allows the user to select which one she wants to interact with. Its creation involves two processes, detailed in the following sections:

  • The calculation of the INREDIS perimeter for that particular user (Perimeter Calculation).

  • The generation of the initial interface in terms of this newly calculated perimeter (Initial Interface Generation).

Fig. 19

UML SD representing the user’s first interaction

1.1.1 A.1.1 Perimeter Calculation Process

The user perimeter represents the list of devices and services available to the user at a given moment. This information is stored in the KB, but it is calculated by the AMS. This module makes the necessary updates in the KB, keeping up to date the user's situation, the state of the surrounding devices, and the current state and information of the available services. Figure 20 shows the SD describing this process.

Fig. 20

UML SD showing the Perimeter Calculation process

This task involves the following steps. First, the AMS updates the current location of the user, starting with the setAbsoluteLocation() method, which updates the information about the user in the KB. After setting the user's current location, the AMS updates the current status of each device in the user's INREDIS perimeter. It first requests the list of devices and services in the user's perimeter (the KB getUserPerimeterServices() method) and, for each of these devices:

  • It requests the current state of each device from the Interoperability Gateway module (the getState() method). The Interoperability Gateway obtains this information transparently, regardless of whether the device is exposed as a Web service or as a UCH Target.

  • It updates their current state in the KB accordingly (the KB setState() method).

    A similar process is performed for the services in the user's INREDIS perimeter:

  • It requests information about their current state from the services in the perimeter (the getServiceInfo() method).

  • It updates their current state in the KB accordingly (the KB setState() method).
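The steps above can be sketched in code. This is a minimal illustrative Python sketch of the Perimeter Calculation loop of Fig. 20, not INREDIS code: the `PerimeterCalculator` wrapper and the snake_case method names are our stand-ins for the AMS, KB, and Interoperability Gateway messages in the diagram.

```python
# Illustrative sketch of the Perimeter Calculation process (Fig. 20).
# The kb and gateway collaborators are hypothetical stand-ins for the
# Knowledge Base and Interoperability Gateway life-lines.

class PerimeterCalculator:
    def __init__(self, kb, gateway):
        self.kb = kb            # Knowledge Base (KB)
        self.gateway = gateway  # Interoperability Gateway

    def update_perimeter(self, user_uri, location):
        # 1. Update the user's current location in the KB.
        self.kb.set_absolute_location(user_uri, location)
        # 2. Refresh the state of every device in the user's perimeter.
        devices, services = self.kb.get_user_perimeter_services(user_uri)
        for device in devices:
            # The gateway hides whether this is a Web service or UCH Target.
            state = self.gateway.get_state(device)
            self.kb.set_state(device, state)
        # 3. A similar pass for the services in the perimeter.
        for service in services:
            info = service.get_service_info()
            self.kb.set_state(service, info)
```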

1.1.2 A.1.2 Initial Interface Generation Process

Once the system has ensured that the interaction is possible, the first interface is created, see Fig. 21.

Fig. 21

UML SD modelling the Initial Interface Generation

Before creating the initial interface, the system first has to guarantee that the user is able to interact with her controller device. In consequence, it is necessary to determine the assistive technology required to enable such interaction. The ATS is the module responsible for this task, and also for determining how this software should be configured (the AskAT() method). Using the user URI (Uniform Resource Identifier), the ATS queries the KB to obtain the user's profile, which conforms to the CAP (International Standard Organization 2009). With such a profile, the ATS creates the list of the necessary assistive technology along with its configuration. The next step is the creation of the interface generator context, where variables such as the user URI, the controller device URI, the navigation graph and its variables are stored. Now the initial interface is created. As we have stated, interfaces are defined with a set of User Interface Markup Language (UIML) interfaces. The initial interface is no exception, but instead of using a static UIML document, for the first interaction the UIML interface definition is generated: a UIML document that contains all the available devices and services, allowing the user to choose among them. The Generator module, which generates the UIML documents, delegates the creation of this interface to the Initial Creator module.

The Initial Creator creates what we refer to as an abstract interface. It is a UIML document that still includes some context-dependent variables that have not been substituted, and a set of initialization rules that have not been executed. Such interfaces are made concrete by the Injector module (the concretizeInterfaze() method). This module executes the initialization rules and retrieves context-related values from the IG context.

Once the UIML interface has been made concrete, it is time to determine the process that transforms this UIML document into an accessible XHTML user interface. For that we use a collection of XSLT transformations that address different UIML components and users' special needs, and which, after being applied to the UIML document, translate it into an XHTML document tailored to the user's concrete needs. The Decisor module of the Interface Generator is in charge of determining the set of transformations (the chooseAdaptationTransformation() method). It does so by communicating with the KB (the getTransformations() method), which, given a user URI and the user's controller URI, determines the proper transformation to be applied. The selection of this transformation takes into account many orthogonal aspects, such as the user's special needs and preferences and the controller's interaction capabilities, see (González-Cabero 2010).

Once the concrete UIML interface has been generated and the proper XSLT transformations have been selected, a set of parameters is needed to tailor the transformations. We call them the adaptation parameters; they include the final interface language and other lower-level implementation details. They are determined in a manner analogous to the selection of the XSLT transformations.

The Decisor module (the chooseAdaptationParameters() method) gathers this information by asking the KB for information about the user, and it also takes into account information contained in the IG context. Once the IG possesses all the necessary information (i.e., the initial interface as a concrete UIML interface, the set of transformations, and the adaptation parameters), it invokes the adaptInitialInterface() method of the Adaptor module. It returns the XHTML document that represents the initial user interface.

Finally, there may be some transformations that must be applied to the initial interface XHTML document that stem from the set of assistive software provided and configured by the ATS. They are applied by the Adaptor module (the applyATTransformation() method). After these transformations have been applied, the final version of the interface has been created and can be delivered to the user’s controller device.
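The whole generation process described in this section can be summarized as a sequence of module calls. The following Python sketch is illustrative only: the module objects and the snake_case names are our hypothetical stand-ins for the AskAT(), concretizeInterfaze(), chooseAdaptationTransformation(), chooseAdaptationParameters(), adaptInitialInterface() and applyATTransformation() messages of Fig. 21.

```python
# Illustrative sketch of the Initial Interface Generation pipeline (Fig. 21).
# All collaborators (ats, initial_creator, injector, decisor, adaptor) are
# hypothetical stand-ins for the modules in the sequence diagram.

def generate_initial_interface(user_uri, controller_uri, context,
                               ats, initial_creator, injector, decisor, adaptor):
    # 1. Determine the assistive technology needed for this user/controller pair.
    at_list = ats.ask_at(user_uri, controller_uri)
    # 2. Create the abstract UIML interface listing devices and services.
    abstract_uiml = initial_creator.create(context)
    # 3. Make it concrete: run initialization rules, substitute context variables.
    concrete_uiml = injector.concretize_interface(abstract_uiml, context)
    # 4. Select the XSLT transformations and the adaptation parameters.
    xslt = decisor.choose_adaptation_transformation(user_uri, controller_uri)
    params = decisor.choose_adaptation_parameters(user_uri, context)
    # 5. Produce the XHTML interface, then apply assistive-software transforms.
    xhtml = adaptor.adapt_initial_interface(concrete_uiml, xslt, params)
    return adaptor.apply_at_transformation(xhtml, at_list)
```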

1.2 A.2 Navigation Scenario

As we have already stated, user interaction with a device often implies navigating through different atomic interfaces. Figure 22 shows the SD of the Navigation process.

Fig. 22

UML SD modelling a simple navigation

The interaction starts with the interface requesting a navigation to the Starting Point, which acts as a gateway between the user interface and the system. The Interaction Enacter is the module that handles the navigation between interfaces. This is because the Navigation activity is considered a subclass of the Device Interaction activity: from a user perspective, the kind of buttons that perform device interaction activities are the same as those that allow the user to navigate within the complex interface. To handle the request, the Interaction Enacter accesses the navigation graph of the complex interface and determines which interface should be generated next. This information is stored in the context of the IG. Finally, a new interface generation process starts; as the context of the IG has been updated with the next interface to be rendered, this is the one that is rendered.
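A minimal sketch of this navigation step, assuming the navigation graph can be modelled as a mapping from (current view, pressed button) to the next view; the class and attribute names are ours, not the project's.

```python
# Illustrative sketch of navigation handling (Fig. 22). The navigation
# graph is modelled as a dict mapping (current view, pressed button) to
# the next view; this representation is a hypothetical simplification.

class InteractionEnacter:
    def __init__(self, navigation_graph, ig_context):
        self.graph = navigation_graph
        self.context = ig_context  # Interface Generator (IG) context

    def navigate(self, button):
        current = self.context["current_view"]
        # Look up the next view in the complex interface's navigation graph.
        next_view = self.graph[(current, button)]
        # Store it in the IG context so the next generation renders it.
        self.context["current_view"] = next_view
        return next_view
```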

1.2.1 A.2.1 Interface Generation Process

INREDIS devices UIML interfaces are composed of two types of documents:

  • Views, a set of UIML documents that describe the structure and interaction elements of each atomic interface. As described in (Abrams et al. 1999), the use of UIML allows the abstract and platform-independent definition of user interfaces.

  • Navigability graph, which defines how and under what conditions the complex interface navigates through the different views. Only one view is shown at a time; we refer to it as the current view.

Generating an interface for a user means transforming the current view of an interface into an accessible XHTML document, taking into account the characteristics of the user and the needed assistive technology. Most of the process is identical to the one defined for the first interaction. The difference is that this process does not adapt the initial interface, but transforms the current view of the device interface that the user is using at present (which, like the initial interface created by the Initial Creator, is an abstract UIML interface).

The first part of the diagram (Fig. 23), related to the detection of accessibility gaps and the determination of the necessary assistive technology, is the same as the one defined for the first interaction with the INREDIS system. When the Generator module receives the request to generate an interface by means of the generateInterface() method, the first step is to determine the current view of the interface. This information is stored in the context of the IG (the getCurrentView() method). The current view is a URL that points to the location of the abstract UIML document that should be used as the starting point of the final user interface. The Generator module invokes the retrieveXMLSource() method of the helper class Resource Manager, and retrieves an abstract UIML interface.

The rest of the steps of the generation of the interface are exactly the same as those described for the first interaction. Instead of using the abstract UIML interface created by the Initial Creator, they use the abstract UIML interface retrieved by the Resource Manager from the current view URL of the interface (Fig. 23).

Fig. 23

UML SD modelling the Interface Generation process
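The only step that differs from the first interaction can be sketched as follows; `resource_manager` and `pipeline` are hypothetical stand-ins for the Resource Manager helper and the shared tail of the generation process.

```python
# Illustrative sketch of the view-retrieval step in Interface Generation
# (Fig. 23): the only difference from the first interaction is where the
# abstract UIML document comes from. All names here are hypothetical.

def generate_interface(ig_context, resource_manager, pipeline):
    # The current view is a URL stored in the IG context (getCurrentView()).
    view_url = ig_context["current_view"]
    # Retrieve the abstract UIML document from that URL (retrieveXMLSource()).
    abstract_uiml = resource_manager.retrieve_xml_source(view_url)
    # From here on, the steps are the same as for the first interaction.
    return pipeline(abstract_uiml)
```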

1.3 A.3 Device Interaction Scenario

The interactions with devices and services are realized by the Interaction Enacter, see the SD in Fig. 24.

Fig. 24

UML SD representing a user’s device interaction

Once initialized, this module executes the action involved in the device/service interaction. In order to do so, it invokes the executeAction() method of the Interoperability Gateway, a class that decouples the Interaction Enacter from the underlying technology used to interact with the device. The executeAction() method may result in:

  • A setValue() method invocation, in case the device is exposed as a UCH Target. The user interaction is translated into the change of one or more values of the UCH Target.

  • An invokeWS() method invocation, in case the device is exposed as a Web service (or when there is no device and we are dealing with a Web service invocation).

The result is stored in the context of the IG, for later use if needed. Once the interaction has been carried out, a new interface is generated (invoking the Orchestrator getInteface() method). This new interface is generated to make sure that it reflects the changes and the latest state after the interaction with the device.
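The dispatch performed by executeAction() can be sketched as follows; the target descriptor and the `uch`/`ws_client` collaborators are hypothetical, and only the setValue()/invokeWS() split comes from the diagram.

```python
# Illustrative sketch of the Interoperability Gateway dispatch (Fig. 24):
# executeAction() hides whether the target is a UCH Target or a Web
# service. The descriptor format and collaborators are stand-ins.

class InteroperabilityGateway:
    def __init__(self, uch, ws_client):
        self.uch = uch            # UCH infrastructure
        self.ws_client = ws_client  # Web service client

    def execute_action(self, target, action):
        if target["kind"] == "uch":
            # Device exposed as a UCH Target: translate the interaction
            # into the change of one or more values of the target.
            return self.uch.set_value(target["id"], action)
        # Device exposed as a Web service, or a pure Web service call.
        return self.ws_client.invoke_ws(target["id"], action)
```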

1.4 A.4 Back to Top Scenario

This process, illustrated in the SD of Fig. 25, means going back to the device initial interface.

Fig. 25

UML SD representing the back-to-top process

As in the case of the navigation, the Interaction Enacter is the module that handles the back-to-top process. This is because this activity is considered a subclass of the Device Interaction activity: from a user perspective, the kind of buttons that perform device interaction activities are the same as those that allow going back to the device top interface. In order to keep all the information in the KB up to date, we begin by updating the location of the device and recalculating the information about the user's INREDIS perimeter. The next step is to generate the top interface of the device, which is done using the Interface Generation process that we have already described in Section A.2.1.

1.5 A.5 Assistive Software Selection Mechanism

Assistive technology is the hardware or software that is added to or incorporated within a system and that increases accessibility for an individual, as defined in International Standard Organization (2011). Assistive Software (AS) is understood as a piece of software used to increase our ability to manage some kind of information in a digital device. The AS selection mechanism (ASSM) enables the environment to automatically select the most suitable AS for a given interaction with a specific electronic target device, taking advantage of the user's context (user, controller device and target device) and considering the possible discrepancies between the user and the environment, namely in the case of functional diversity.

The ASSM uses different ontology-based knowledge to achieve this goal. The process consists of five main activities, one of them split into six; see the UML activity diagram in Fig. 26. The complete selection mechanism is described by Gómez-Martínez et al. (2013). The following is a summarized description of each activity.

Fig. 26

UML Activity Diagram of the AS selection process

Detecting discrepancies

The first activity detects any accessibility issues that might prevent the user from being able to use a controller. In order to detect discrepancies we use a set of specific rules stored in the KB that compare the characteristics of the interaction that the user is able to perform with those that the controller is able to emit/receive. The complete catalogue of rules is specified by González-Cabero (2010).
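A toy illustration of this first activity follows; it is not the published rule catalogue. Under the simplifying assumption that interaction capabilities can be modeled as sets of modality labels, each "rule" reduces to checking whether a modality the controller requires is one the user can perform.

```python
# Hypothetical discrepancy detection: compare the interaction modalities the
# user can perform with those the controller emits/receives. The real system
# uses KB-stored rules (González-Cabero 2010); this set difference is a sketch.

def detect_discrepancies(user_modalities, controller_modalities):
    """Return the controller modalities the user cannot handle."""
    return sorted(controller_modalities - user_modalities)

user = {"touch", "speech-output"}         # what the user is able to perform
controller = {"touch", "visual-output"}   # what the controller emits/receives
gaps = detect_discrepancies(user, controller)  # ['visual-output']
```

An empty result means no discrepancy, so the interaction can proceed without AS mediation.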

Checking feasibility

Each discrepancy found in the previous step is analyzed to determine whether mediation by AS can enable the interaction. The following activities ascertain which AS is most appropriate.

Matching by History log

When the user has already employed the system to interact with the same target using the same context, it is possible to retrieve the most suitable AS without further reasoning, just by querying the KB and retrieving the matching set from the AS History.
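This history shortcut can be sketched in a few lines. The data layout (a lookup keyed by the user/target/context triple) is an assumption for illustration; the real system queries the AS History in the KB.

```python
# Hypothetical "matching by history log": if the same (user, target, context)
# tuple is in the AS History, reuse the previously selected AS set and skip
# the whole reasoning pipeline; otherwise fall through to matching by score.

def match_by_history(history, user, target, context):
    return history.get((user, target, context))  # None -> run scoring instead

history = {("ana", "ac-unit-1", "smartphone"): ["screen-reader-x"]}
hit = match_by_history(history, "ana", "ac-unit-1", "smartphone")
miss = match_by_history(history, "ana", "tv-3", "smartphone")
```

Because the hit path avoids the reasoner entirely, it is much cheaper than the scoring path that follows.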

Matching by score

This activity triggers the reasoning process, where four subsets of concepts are simultaneously queried in the KB using parallel activities. It is divided into the following sub-activities:

  • Retrieve Standard Fulfillment This activity performs an evaluation where the best scoring AS will be those that follow worldwide accessibility standards established by recognized accessibility entities.

  • Retrieve Privacy This activity checks that the AS complies with the data protection measures issued by security bodies. It is important to note that, according to many national laws, when an AS complies with a data protection act level, it also complies with certain data protection measures; this is taken into account here via rules that assert those facts in the KB. This is the case of, e.g., the Federal Data Protection and Information Commission of Switzerland or the Ley Orgánica de Protección de Datos in Spain.

  • Retrieve Ballot This activity increases the score for those AS with the best user reviews. These reviews are drawn from all the system’s users but they are not linked to any individual user, to maintain privacy about users’ functional diversity.

  • Retrieve Deploy Method This scoring is the simplest; it simply fosters the use of AS deployed as SaaS (Software as a Service).

  • Retrieve Setup Utilities This activity needs the output of Retrieve Deploy Method, so it is not executed in parallel with the others. All of the concepts are scored in this activity to take into account the ease of access and use of the AS.

  • Weighted Matching This is the final activity of matching by score. It focuses on adapting the matching to the user’s preferences and the Domain Experts’ assumptions.

    The user has previously been asked to state the level of importance of each criterion by means of the user profile stored in the KB. With the weighting system, all the roles involved in the selection are taken into account (i.e., domain experts, the user, and all system users at once through the reviews).
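The structure of "matching by score" — four scoring activities in parallel, Retrieve Setup Utilities afterwards because it needs the deploy method, and a final weighted combination — can be sketched as below. All scores, weights, and candidate names are invented for the example; the real activities query the KB ontologies.

```python
# Minimal sketch of matching by score. The four independent scorers run in
# parallel; setup scoring depends on the deploy-method score; Weighted
# Matching combines everything with weights taken from the user profile.
from concurrent.futures import ThreadPoolExecutor

def score_standards(asw): return {"reader": 0.9, "magnifier": 0.6}[asw]
def score_privacy(asw):   return {"reader": 0.8, "magnifier": 0.8}[asw]
def score_ballot(asw):    return {"reader": 0.5, "magnifier": 0.9}[asw]
def score_deploy(asw):    return 1.0 if asw == "reader" else 0.4  # SaaS favoured
def score_setup(asw, deploy_score):       # runs after Retrieve Deploy Method
    return 0.7 * deploy_score

def weighted_match(candidates, weights):
    with ThreadPoolExecutor() as pool:    # the four parallel activities
        parts = {
            name: dict(zip(candidates, pool.map(fn, candidates)))
            for name, fn in [("std", score_standards), ("priv", score_privacy),
                             ("ballot", score_ballot), ("deploy", score_deploy)]
        }
    parts["setup"] = {a: score_setup(a, parts["deploy"][a]) for a in candidates}
    totals = {a: sum(weights[k] * parts[k][a] for k in weights)
              for a in candidates}
    # Sorting activity: descending order of weighted score.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranking = weighted_match(
    ["reader", "magnifier"],
    {"std": 2, "priv": 2, "ballot": 1, "deploy": 1, "setup": 1})
```

The weights dictionary plays the role of the user profile: raising, say, the privacy weight shifts the ranking toward privacy-compliant AS without changing any individual scorer.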

Sorting

This is the final activity of the whole process. It orders the set of AS products/services produced by the weighted matching activity in descending score order.

B: GSPN Overview

A PN system is a tuple 𝒩 = (P, T, Pre, Post, M₀), where P and T are the sets of places and transitions, and Pre and Post are the ∣P∣ × ∣T∣ sized, natural-valued pre- and post-incidence matrices. For instance, Post[p, t] = w means that there is an arc from t to p with multiplicity w. When all weights are one, the PN is ordinary. Graphically, places and transitions are represented by circles and bars, respectively; arcs are shown as arrows.

C = PostPre is the incidence matrix of the net. For pre- and postsets we use the conventional dot notation, e.g., t = {pP: Pre [p,t] ≥ 1}, that can be extended to sets of nodes. If 𝒩′ is the subnet of 𝒩, defined by P′ ⊆ P and T′ ⊆ T, then Pre′ = Pre[P′, T′], Post′ = Post[P′, T′] and M 0 = M 0 [P′]. Subnets defined by a subset of places (transitions), with all their adjacent transitions (places), are called P- (T-) subnets.

A marking M is a ∣P∣ sized, natural-valued vector, and M₀ is the initial marking vector. A transition t is enabled in M iff M ≥ Pre[P, t]; its firing, denoted by \( {M}\stackrel {t}{\rightarrow } {M'}\), yields a new marking M′ = M + C[P, t]. The set of all reachable markings is denoted RS(𝒩, M₀). An occurrence sequence from M is a sequence of transitions σ = t₁ … tₖ … such that \( {M}\stackrel {t_{1}}{\rightarrow } {M_{1}}\dots {M_{k-1}}\stackrel {t_{k}}{\rightarrow }\dots \). Given σ such that \( {M}\stackrel {\sigma }{\rightarrow } {M'}\), and denoting by σ̄ the ∣T∣ sized firing count vector of σ, then M′ = M + C · σ̄ is known as the state equation of 𝒩.
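The definitions above can be made concrete with a tiny worked example: a net with two places and two transitions in which a single token circulates. The net itself is invented for illustration; matrices are indexed [place][transition] as in the definitions.

```python
# Worked PN example: |P| = 2, |T| = 2, one token circulating p0 -> p1 -> p0.

Pre  = [[1, 0],   # t0 consumes from p0; t1 consumes from p1
        [0, 1]]
Post = [[0, 1],   # t0 produces into p1; t1 produces into p0
        [1, 0]]
# Incidence matrix C = Post - Pre.
C = [[Post[p][t] - Pre[p][t] for t in range(2)] for p in range(2)]

def enabled(M, t):
    # t is enabled in M iff M >= Pre[P, t] componentwise.
    return all(M[p] >= Pre[p][t] for p in range(len(M)))

def fire(M, t):
    # Firing yields M' = M + C[P, t].
    assert enabled(M, t)
    return [M[p] + C[p][t] for p in range(len(M))]

M0 = [1, 0]
M1 = fire(M0, 0)   # M0 --t0--> [0, 1]
M2 = fire(M1, 1)   # back to the initial marking [1, 0]
```

Summing the firing counts of t0 and t1 into σ̄ = (1, 1) and applying the state equation M′ = M₀ + C · σ̄ reproduces M2 = M0, as expected for a cyclic sequence.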

A GSPN is a tuple 𝒢 = (𝒩, Π, Λ, r), where 𝒩 is a PN system and the set of transitions T is partitioned into two subsets, Tₜ and Tᵢ, of timed and immediate transitions, respectively. Timed transitions are depicted as thick white bars; immediate ones are depicted as thin black bars.

Π is a natural-valued, ∣T∣ sized vector that specifies the priority level of each transition; timed transitions have zero priority, and immediate transitions have priority greater than zero. A transition t ∈ T, enabled in marking M, can fire if no transition t′ ∈ T with Π[t′] > Π[t] is enabled in M.

Immediate transitions fire in zero time. The firing delay of a timed transition, instead, is a random variable distributed according to a negative exponential probability distribution function with rate parameter λ (i.e., mean \(\frac {1}{\lambda }\)). Λ is then the non-negative, real-valued, ∣Tₜ∣ sized vector of transition firing rates (accordingly, a transition's firing delay is the inverse of its firing rate). The positive, real-valued vector r is ∣Tᵢ∣ sized and specifies the weights of the immediate transitions for probabilistic conflict resolution.
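The two firing policies can be sketched as follows. This is a simulation-style illustration under invented rates and weights, not part of the analytical GSPN solution used in the paper: immediate conflicts are resolved by a weighted random choice over r, while enabled timed transitions race with exponentially distributed delays of mean 1/λ.

```python
# Sketch of GSPN firing policies. Transition names, rates, and weights are
# invented for the example.
import random

def choose_immediate(enabled_imm, r):
    """Weighted random choice among enabled immediate transitions."""
    total = sum(r[t] for t in enabled_imm)
    x = random.uniform(0, total)
    for t in enabled_imm:
        x -= r[t]
        if x <= 0:
            return t
    return enabled_imm[-1]   # guard against floating-point rounding

def race_timed(enabled_timed, lam):
    """Sample exponential delays (mean 1/lambda); the minimum wins the race."""
    delays = {t: random.expovariate(lam[t]) for t in enabled_timed}
    winner = min(delays, key=delays.get)
    return winner, delays[winner]

random.seed(0)
winner, delay = race_timed(["t_fast", "t_slow"], {"t_fast": 10.0, "t_slow": 0.1})
chosen = choose_immediate(["t1", "t2"], {"t1": 3.0, "t2": 1.0})
```

Over many samples, t_fast wins the race far more often than t_slow (its mean delay is 100 times shorter), and t1 is chosen about three times as often as t2, matching the ratio of their weights.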

C: List of Performance Patterns and Antipatterns

Table 2 Performance patterns
Table 3 Performance antipatterns

D: GSPN Models of the System

Fig. 27

GSPN representing Navigation scenario

Fig. 28

GSPN representing Device Interaction scenario

Fig. 29

GSPN representing Back To Top scenario


About this article

Cite this article

Gómez-Martínez, E., Gonzalez-Cabero, R. & Merseguer, J. Performance assessment of an architecture with adaptative interfaces for people with special needs. Empir Software Eng 19, 1967–2018 (2014). https://doi.org/10.1007/s10664-013-9297-1


Keywords

Navigation