1 Introduction

Our primary motivation is to reduce the human administrator overhead required to ensure that services which are non-compliant with business and/or technical requirements are not selected during automatic service mashup and composition.

Most automatic approaches to service composition have primarily been concerned with repairing and recomposing a composite service when its atomic services become unavailable. However, few solutions exist that force recomposition when new policy requirements, driven by business objectives or technical necessities, are introduced into the system at run time while non-compliant services are still up, running, and otherwise available for composition. For example, a business-level requirement may be introduced into a system disallowing customer data from being stored beyond service borders (e.g., Canadian banks’ data on US servers), or customers may prefer that their VMs avoid multitenancy during VM migration. On the technical side, examples include changing the ciphersuite key length or the cipher type for encrypted communications, after which services offering weaker versions of these should no longer be composed with. In another example, SaaS-layer applications have knowledge of run-time conditions that can hint to the PaaS or IaaS layers about additional resource requirements needed to meet the SaaS business or technical requirements (e.g., avoiding the aforementioned allocation of a VM beyond borders, avoiding multi-tenancy, or using strong encryption).

In current setups, system administrators would have to discover such non-compliant services and somehow block them to avoid their being selected for automatic composition. If the administrators are in control of those services, they would shut them down manually by turning them off or undeploying them. If they are not in control, they would have to implement network- or application-based firewall rules to block them. If the number of services is large, this introduces a scalability problem and increases the time required to remedy the non-compliance issues.

Our design of the policy-based broker solution allows such business or technical requirements to be introduced at run time, removing non-compliant services from consideration without human administrator overhead.

2 Evaluation

This simulation attempts to tackle three different challenges of the service composition problem. First, developing an approach to generate all possible solutions to the service composition problem. Second, treating constraints as an important factor in service composition and including external constraints, which have not been addressed in previous approaches. Third, designing the mechanisms required to apply the effects of constraints to working and future service composition plans. A constraint-based model using a planning model is designed to express the required concepts, such as context and services. Unlike other constraint-based approaches [5] that only consider constraints related to service customers and providers, we identify a new kind of constraint (external constraints) that needs to be considered in the service composition process. Policies provide a high-level description of what we want without dealing with how to achieve it; thus, policies are a suitable mechanism for determining whether the goals were achieved, using existing policy refinement techniques [1].

To evaluate the complexity of composition algorithms, many approaches have performed extensive experiments on available test sets such as [3]. However, we could not find any test set that supports the definition of constraints and effects discussed in this paper. To examine the effectiveness of the brokerage algorithm and the policy enforcement approach, we designed an architecture (Fig. 1) [2] for a service brokerage in which to implement the required components. Moreover, GIPSY [4], a distributed multi-tier architecture for evaluating programs in a distributed and demand-driven manner, is used to generate the required data. At the time of submitting this paper our results are not statistically significant; however, our preliminary results show a significant improvement in the adaptability of composite services in the face of different constraints, specifically external constraints.

Need for a context-aware system. When we consider the context of both the user and the environment, services are updated according to the context of the environment, such as the vicinity of the user’s location, and the services need to be aware of that environment. Different services such as shopping, travel, and restaurants overlap semantically over a common parameter, i.e., location. Furthermore, when different services are used in conjunction with each other, they provide even more context-oriented results. For example, using travel services in collaboration with civil security services yields optimized solutions in which the user can not only plan a trip but also avoid places where security may have been compromised or a state of emergency has arisen. However, in order to combine these services in the manner of data integration, a service brokerage is put forward. This broker works as a mediator between clients and services and also as an adaptation mechanism that takes the requirements for service selection and composition into consideration.

2.1 Application Domain

The simulation testbed is a context-aware software-as-a-service (SaaS) application that aims to help clients find the most closely matching service providers based on the clients’ required specifications. This framework differs from other service providers in that it attempts to enhance its results by providing a layer of integration among different services. This means that the context-aware service does not only consider the clients’ context, but also adds relevant parameters provided by other services. For example: “A value-added context-aware service would provide a list of shopping offers in the vicinity of the client while considering other effective factors such as the client’s restrictions, transportation limitations due to disabilities, or even emergency situations in the area of the provided offers.” The goal is to provide collaboration among different services, which were originally conceived independently of each other. This integration is done via a Service Broker, which continuously monitors services and performs updates on the results of the demands stored in the DST database. This basic architecture is depicted in Fig. 1.

Client. The Client is a Demand Generator Tier (DGT) that generates Service Demands from the users via a front-end application.

Fig. 1. The high-level architecture of a brokerage simulation

These demands are sent to the Broker; the Client also receives Service Responses once they have been processed and displays them to the end users.

Broker. The Broker’s goal is to provide the client with contextual allocation and contextual scheduling of services. It is composed of a Demand Generator Tier and a Demand Worker Tier (DWT). Further, it contains two important components: the Service-Recommender component and the Fusion-and-Monitoring component. The former performs all the logic involved in selecting and ranking the different options for a Service Demand (Service Composition); in addition, it adds context values that are unique to each Service Demand. The latter is in charge of Service Discovery based on the application user’s location. It includes a Monitor-and-Observer and a Constraint-Generator; these allow the DWTs that are not constrained to be retrieved from a database containing all the registered DWTs (cinemas, shops, etc.).

Store/Database. The store (DST) is where all the Client and Security Service Demands are stored, and from which they are retrieved by the Broker and the DWTs (once a demand has been processed). It is implemented as a database with a table with two fields: a unique id and a value, represented as a string. Furthermore, the security services also store information about security events in the store.

Services/DWTs. The Services are Shops, Museums, Cinemas, and/or Restaurants. They check the Store/Database for demands of their respective type, take a demand if its type matches theirs, interpret the context in a manner specific to the type of demand and the service’s own constraints, and store the result directly back in the Store/Database so that the Broker can retrieve and use it accordingly. The Security Service is composed of a DGT and is responsible for exploring and discovering security alerts in the vicinity of the application user’s location.

2.2 Scalability

Scalability testing was done using the DGTSimulator with subtopic-specific demands such as ShoppingDemands, RestaurantDemands, and SecurityDemands. Since the response time (the difference between the sending timestamp and the result-reading timestamp) is preserved regardless of the number of items stored on the DST, this demonstrates the space-time scalability of the system for our specific demand types (Table 1); a sketch of this measurement follows the table.

Table 1. Response time w.r.t. increasing demands vs demand workers

3 Conclusion and Future Work

The extension of GIPSY to accommodate a service-broker architecture proved challenging but successful, in the sense that introducing such a service broker object to manage multiple different services and to filter their responses based on other demands and services does not have a significant impact on the demand response time, while keeping the space-time scalability of the system intact. Evidently, for the type of service we are proposing, the turnaround time for a demand ultimately depends on the efficiency of the service providers in responding to that demand. However, by keeping the nature of all the service providers equal, we were able to demonstrate that neither the number of incoming demands nor the number of these standardized service providers has an impact on the system performance. Finally, the suitability of GIPSY as a distributed web service provider is very promising, even though further testing and extensions to other, more elaborate services are needed. We are currently working to establish a test set that fully considers service constraints and other factors using GIPSY and our own data set.