
QoS Computation and Policing in Dynamic Web Service Selection

Yutu Liu
Department of Computer Science
Texas State University
San Marcos, Texas
Anne H.H. Ngu
Department of Computer Science
Texas State University
San Marcos, Texas
Liangzhao Zeng
IBM T.J. Watson Research Center
New York, USA

hn12@txstate.edu

Abstract:

The emerging Service-Oriented Computing (SOC) paradigm promises to enable businesses and organizations to collaborate in an unprecedented way by means of standard web services. To support rapid and dynamic composition of services in this paradigm, web services that meet requesters' functional requirements must be located and bound dynamically from a large and constantly changing pool of service providers based on their Quality of Service (QoS). In order to enable quality-driven web service selection, we need an open, fair, dynamic and secure framework to evaluate the QoS of a vast number of web services. The fair computation and enforcement of the QoS of web services should impose minimal overhead yet still achieve sufficient trust from both service requesters and providers. In this paper, we present our open, fair and dynamic QoS computation model for web service selection, demonstrated through the implementation of, and experimentation with, a QoS registry in a hypothetical phone service provisioning market place application.

Categories and Subject Descriptors: D.2.8 [Software Engineering]: Metrics - complexity measures, performance measures
General Terms: Management, Experimentation, Algorithms


Introduction

Web services are self-describing software applications that can be advertised, located, and used across the Internet using a set of standards such as SOAP, WSDL, and UDDI [8]. Web services encapsulate application functionality and information resources, and make them available through standard programmatic interfaces. Web services are viewed as one of the promising technologies that could help business entities automate their operations on the web on a large scale through the automatic discovery and consumption of services. Business-to-Business (B2B) integration can be achieved on demand by aggregating multiple services from different providers into a value-added composite service.

In the last two years, the process-based approach to web service composition has gained considerable momentum and standardization [1]. However, with the ever increasing number of functionally similar web services being made available on the Internet, there is a need to distinguish them using a set of well-defined Quality of Service (QoS) criteria. A service composition system that can leverage, aggregate and make use of individual components' QoS information to derive the optimal QoS of the composite service is still an ongoing research problem. This is partly due to the lack of an extensible QoS model and of a reliable mechanism to compute and police QoS that is fair and transparent to both service requesters and providers.

Currently, most approaches that deal with the QoS of web services only address generic dimensions such as price, execution duration, availability and reliability [12,6]. In some domains, such generic criteria might not be sufficient; a QoS model should also include domain specific criteria and be extensible. Moreover, most current approaches rely on service providers to advertise their QoS information or to provide an interface to access the QoS values, which is subject to manipulation by the providers. Obviously, service providers may not advertise their QoS information, for example execution duration or reliability, in a ``neutral'' manner. In approaches where QoS values are collected solely through active monitoring, there is a high overhead, since QoS must be checked constantly for a large number of web services. On the other hand, an approach that relies on a third party to rate or endorse a particular service provider is expensive and static in nature. In our framework, the QoS model is extensible, and QoS information can either be provided by providers, computed based on execution monitoring by the users, or collected via requester feedback, depending on the characteristics of each QoS criterion.

In a nutshell, we propose a framework that aims at advancing the current state of the art in QoS modeling, computation and policing. We believe that an extensible, transparent, open and fair QoS model is necessary for the selection of web services, and that such a model can benefit all participants. Although this model requires all requesters to update their usage experiences with a particular type of service in a registry, the overhead on the user is not large. In return, the QoS of a particular service is actively policed by all requesters. Each requester can search the registry to get the most up-to-date QoS of listed providers. Service providers can view and change their services at any time. With such an open and dynamic definition of QoS, a provider can operate its web services to give its end-users the best user experience.
This paper is organized as follows. In Section 2, we give the details of an extensible QoS model and its computation. In Section 3, we describe the implementation of QoS registry and explain how QoS information can be collected based on active monitoring and active user feedback. Section 4 discusses the experiments that we conducted to confirm the fairness and the validity of the various parameters used in QoS computation. Finally, we discuss related work in Section 5 and conclude in Section 6.


QoS-based Service Selection

Currently, in most SOC frameworks, a service broker is responsible for brokering the functional properties of web services. In our framework, we extend the capability of the service broker to support non-functional properties as well. This is done by introducing an extensible, multi-dimensional QoS model in the service broker. We aim to evaluate the QoS of web services using an open, fair and transparent mechanism. In the following subsections, we first introduce the extensible QoS model, and then give the details of how web services are evaluated based on this model.

Extensible QoS Model

In this section, we propose an extensible QoS model, which includes generic and domain or business specific criteria. The generic criteria are applicable to all web services, for example, their pricing and execution duration. Although the number of QoS criteria discussed in this paper is limited (for the sake of illustration), our model is extensible. New criteria (either generic or domain specific) can be added without fundamentally altering the underlying computation mechanism as shown in Section 2.2. In particular, it is possible to extend the quality model to integrate non-functional service characteristics such as those proposed in [7], or to integrate service QoS metrics such as those proposed by [11].

Generic quality criteria

We consider three generic quality criteria which can be measured objectively for elementary services: (1) execution price, (2) execution duration, and (3) reputation. Criteria such as availability and reliability are not required in our model due to the use of active user feedback and execution monitoring.

Business related criteria

The number of business related criteria can vary across domains. For example, in the phone service provisioning domain, the penalty rate for early termination and the fixed monthly charge are important factors for users to consider when selecting a particular service provider. We use the generic term usability to group all business related criteria. In our chosen application, we measure usability from three aspects: transaction, compensation rate, and penalty rate.
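As an illustration only, the sketch below shows one way the criteria above could be represented as an extensible data structure; the field and criterion names are our own assumptions, not a schema defined in this paper.

    from dataclasses import dataclass

    @dataclass
    class Criterion:
        """One QoS criterion: its name, whether larger values are better,
        whether its value is known at invocation time (deterministic, see
        Section 3.1), and the group it belongs to for the second
        normalization phase."""
        name: str
        higher_is_better: bool
        deterministic: bool
        group: str  # e.g. "generic" or "usability"

    # Generic criteria applicable to any elementary service (Section 2.1.1).
    GENERIC_CRITERIA = [
        Criterion("execution_price", False, True, "generic"),
        Criterion("execution_duration", False, False, "generic"),
        Criterion("reputation", True, False, "generic"),
    ]

    # Business related criteria for the phone provisioning domain (Section 2.1.2);
    # new criteria can be appended without touching the computation mechanism.
    USABILITY_CRITERIA = [
        Criterion("transaction", True, True, "usability"),
        Criterion("compensation_rate", True, True, "usability"),
        Criterion("penalty_rate", False, True, "usability"),
    ]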


Web Service QoS Computation

The QoS registry is responsible for computing the QoS value of each service provider. Assume that there is a set of web services $ S=\{s_{1}, s_{2}, \ldots, s_{n}\}$ that have the same functional properties; the web service computation algorithm determines which service $ s_{i}$ is selected based on the end user's constraints. Using $ m$ criteria to evaluate the web services, we obtain the following matrix $ \mathbf{Q}$. Each row in $ \mathbf{Q}$ represents a web service $ s_{i}$, while each column represents one of the QoS criteria.
$\displaystyle \mathbf{Q} = \left( \begin{array}{cccc} q_{1,1} & q_{1,2} & \ldots & q_{1,m} \\ q_{2,1} & q_{2,2} & \ldots & q_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ q_{n,1} & q_{n,2} & \ldots & q_{n,m} \end{array} \right) \qquad (1)$


In order to rank the web services, the matrix $ \mathbf{Q}$ needs to be normalized. The purposes of normalization are: 1) to allow a uniform measurement of service qualities independent of units; 2) to provide a uniform index representing the service qualities of each provider, which a provider can raise or lower by adjusting a few parameters; and 3) to allow thresholds to be set on the qualities. The number of normalization phases performed depends on how the quality criteria are grouped. For the example criteria given in Section 2, we apply two phases of normalization before computing the final $ \mathbf{QoS}$ value. The second normalization provides a uniform representation of a group of quality criteria (e.g., usability) and allows a threshold to be set for that group.
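A minimal sketch of the first normalization phase is given below, assuming a column-wise max-min scaling; the exact scaling formula is an illustrative choice rather than the registry's definitive implementation. Columns where smaller values are better (price, duration, penalty rate) are inverted so that 1 is always best and 0 is always worst.

    import numpy as np

    def normalize(Q, higher_is_better):
        """Phase-1 normalization of the QoS matrix Q (n services x m criteria):
        column-wise max-min scaling, inverting columns where smaller is better
        so that 1 is best and 0 is worst."""
        Q = np.asarray(Q, dtype=float)
        col_min, col_max = Q.min(axis=0), Q.max(axis=0)
        span = np.where(col_max > col_min, col_max - col_min, 1.0)  # avoid divide-by-zero
        scaled = (Q - col_min) / span
        return np.where(higher_is_better, scaled, 1.0 - scaled)

    # Example using the price, execution duration and reputation of the two
    # providers from Table 1 (ABC first, BTT second):
    Q = [[25, 100, 2.0],
         [40,  40, 2.5]]
    print(normalize(Q, higher_is_better=[False, False, True]))
    # -> [[1. 0. 0.]
    #     [0. 1. 1.]]   ABC scores best on price; BTT on duration and reputation.

The second phase (not shown) would collapse each group of columns, such as the usability criteria, into a single index and apply group thresholds before the final weighted QoS value is computed.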


Implementation of QoS Registry

To demonstrate our proposed QoS model, we implemented a QoS registry, shown in Figure 1, within a hypothetical phone service (UPS) provisioning market place. The UPS market place is implemented using the BEA WebLogic Workshop web service toolkit. It consists of various service providers who can register to provide various types of phone services, such as long distance, local, wireless and broadband. The market place also has a few credit checking agencies which offer credit checking services for a fee. The UPS market place has web interfaces which allow a customer to log in and search for phone services based on his/her preferences. For example, the customer can specify whether the search for a particular type of service should be price or service sensitive. A price sensitive search returns a list of the service providers who offer the lowest prices; a service sensitive search returns a list of the service providers with the best-rated services. The market place also has web interfaces that allow service providers to register their web services with the QoS registry, update their existing web services in the registry, or view their QoS ranking in the market place. The registry's remaining interfaces allow requesters/end-users to give feedback on the QoS of the web services they have just consumed.
Figure 1: Architecture Diagram of QoS Registry.
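The sketch below summarizes the registry interfaces described above as a small facade; the class and method names are illustrative assumptions rather than the actual API of our implementation.

    class QoSRegistry:
        """Illustrative facade over the QoS registry's web interfaces
        (method names are assumptions, not the actual UPS market place API)."""

        def __init__(self):
            self.providers = {}   # (provider, service type) -> advertised deterministic criteria
            self.feedback = {}    # (provider, service type) -> monitored/user-reported values

        def register_service(self, provider_id, service_type, advertised_qos):
            """Provider publishes a service with its deterministic criteria
            (price, penalty rate, compensation rate, ...)."""
            self.providers[(provider_id, service_type)] = dict(advertised_qos)

        def update_service(self, provider_id, service_type, changes):
            """Providers may revise their advertised values at any time."""
            self.providers[(provider_id, service_type)].update(changes)

        def report_usage(self, provider_id, service_type, duration, rating):
            """Requester feedback and monitored execution duration feed the
            non-deterministic criteria."""
            self.feedback.setdefault((provider_id, service_type), []).append(
                {"execution_duration": duration, "reputation": rating})

        def search(self, service_type, price_sensitivity, service_sensitivity):
            """Return providers of the given type ranked by their computed QoS,
            weighted by the requester's price/service sensitivity."""
            ...  # rank using the normalization and scoring sketched in Section 2.2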

Collecting Service Quality Information

In our framework, we distinguish two types of criteria: deterministic and non-deterministic. A deterministic criterion is one whose value is known with certainty when a service is invoked, for example the execution price and the penalty rate. A non-deterministic criterion is one whose value is uncertain at invocation time, for example the execution duration. For deterministic criteria, we assume that service providers have mechanisms to advertise their values, for example through a web service quality-based XML language as described in [2]. How all service providers come to agree on the set of deterministic QoS criteria to advertise is beyond the scope of this paper. In this section, we discuss how all QoS criteria in our model can be collected in a fair, open and objective manner via active monitoring and active user feedback. We assume that a service requester either downloads a plug-in from the QoS registry for providing feedback on QoS, or accesses the services via a portal that implements the active monitoring mechanism required for QoS computation.
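The following sketch illustrates, under our own simplifying assumptions, how a requester-side plug-in or portal could combine active monitoring with active user feedback: it times the invocation (the non-deterministic execution duration) and reports it, together with the user's rating, to the registry facade sketched earlier. The stub names are hypothetical.

    import time

    def ask_user_for_rating(result):
        """Placeholder for the plug-in's feedback dialog; in a real plug-in the
        end-user would supply a reputation score after consuming the service."""
        return 4.0

    def invoke_with_monitoring(registry, provider_id, service_type, call, *args):
        """Wrap a web service invocation so that execution duration is measured
        on the requester side (active monitoring) and reported, together with
        the user's rating, back to the QoS registry. `call` stands in for the
        actual SOAP/WSDL client stub."""
        start = time.time()
        result = call(*args)                    # the actual service invocation
        duration = time.time() - start          # non-deterministic criterion
        rating = ask_user_for_rating(result)    # active user feedback
        registry.report_usage(provider_id, service_type, duration, rating)
        return result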


Experiments

We conducted a series of experiments to: 1) investigate the relationship between the QoS value and the business criteria, and 2) study the effectiveness of the price and service sensitivity factors in our QoS computation. The experiments were conducted on a Pentium computer with a 750 MHz CPU and 640 MB of RAM. We first simulated 600 users searching for services, consuming the services and providing feedback to update the QoS registry in the UPS application. This generates a set of test data for the non-deterministic quality criteria, such as reputation and execution duration, for each service provider in the QoS registry. The deterministic quality criteria (price, penalty rate) were advertised and stored in the QoS registry for each service provider prior to the simulation.
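A rough sketch of how such test data could be generated is shown below; the numeric ranges are arbitrary choices for illustration, not the distributions used in our experiments, and the registry facade is the illustrative one sketched in Section 3.

    import random

    def simulate_users(registry, offers, n_users=600):
        """Each simulated user picks a provider, 'consumes' its service, and
        reports a duration and a rating, so that the non-deterministic criteria
        (execution duration, reputation) accumulate in the registry."""
        for _ in range(n_users):
            provider_id, service_type = random.choice(offers)
            duration = random.uniform(20, 200)   # arbitrary duration range
            rating = random.uniform(1.0, 5.0)    # arbitrary reputation score
            registry.report_usage(provider_id, service_type, duration, rating)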

Table 1: Comparing Service Quality Data of Two Providers

Providers  Price  Transaction  Time out  Compensation Rate  Penalty Rate  Execution Duration  Reputation
ABC        25     yes          60        0.5                0.5           100                 2.0
BTT        40     yes          200       0.8                0.1           40                  2.5


Table 1 shows the values of the various quality criteria of two phone service providers with respect to local phone service. From this table, we can see that provider ABC has a better price, but that the criteria related to service are not very good: its time out value is small, which means it is less flexible for end-users; its compensation rate is lower than BTT's; its penalty rate is higher than BTT's; its execution duration is longer than BTT's; and its reputation is lower than BTT's as well. Using our QoS computation algorithm, can we ensure that ABC wins for a price sensitive search and BTT wins for a service sensitive search? Table 2 shows that BTT has a QoS value of $ 3.938$ with a price sensitive search and a QoS value of $ 5.437$ with a service sensitive search, while ABC has a QoS value of $ 4.281$ with a price sensitive search and a QoS value of $ 4.675$ with a service sensitive search. ABC wins the price sensitive search ( $ 4.281 > 3.938$) and BTT wins the service sensitive search ( $ 5.437 > 4.675$), which verifies our original hypothesis.

Table 2: Results of Sensitivity Experiments

Providers  Price Sensitivity  Service Sensitivity  QoS Value
ABC        1                  2                    4.675
BTT        1                  2                    5.437
ABC        2                  1                    4.281
BTT        2                  1                    3.938
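As a hedged reading of how the sensitivity factors enter the final score (the registry's exact weighting formula is not reproduced here), the sketch below weights the normalized price column by the price sensitivity and the remaining service-related columns by the service sensitivity, then sums them.

    def weighted_qos(normalized_row, groups, price_sensitivity, service_sensitivity):
        """Combine one provider's normalized criterion values into a single QoS
        score. `groups` labels each column as "price" or "service"; this weighting
        is a plausible illustration of the sensitivity factors, not the exact
        formula used by the registry."""
        score = 0.0
        for value, group in zip(normalized_row, groups):
            weight = price_sensitivity if group == "price" else service_sensitivity
            score += weight * value
        return score

Under such a weighting, raising the price sensitivity relative to the service sensitivity favors a provider such as ABC, while the opposite setting favors BTT, consistent with Table 2.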


The following three figures show the relationship between the QoS value and the price, the compensation rate, the penalty rate, and the sensitivity factors. From Figure 2, we can see that the QoS value increases exponentially as the price approaches zero. As the price reaches 9 x 20 = 180, the QoS value becomes very stable. This gives us confidence that price can be used effectively to increase QoS within a certain range. It also shows that, in our QoS computation model, QoS cannot be dominated by the price component indefinitely.
Figure 2: Relationship between QoS and Price.
Figure 3 shows the relationship between QoS and the service-related criteria. For the penalty rate, the QoS value decreases as the penalty rate increases; for the compensation rate, the QoS value increases as the compensation rate increases. This shows that the QoS computation cannot be dominated by these two components indefinitely.
Figure 3: Relationship between QoS, Compensation and Penalty Rate.
Figure 4 indicates that the QoS value increases almost four times faster with the service sensitivity than with the price sensitivity. This shows that using the same sensitivity value for both the price and service factors will not be effective for price sensitive searches in our QoS registry. To obtain effective price and service sensitivity factors for a domain, we need to analyze the sensitivity factors after the QoS registry has been in use for a while and re-adjust their values by repeating the experiment shown in Figure 4.
Figure 4: Relationship between QoS, Price and Service Sensitivity factors.


Related Work

Multiple web services may provide similar functionality but with different non-functional properties. In selecting a web service, it is important to consider both functional and non-functional properties in order to satisfy the constraints or needs of users. While it is common to have a QoS model for a particular domain, an extensible QoS model that allows the addition of new quality criteria without affecting the formula used for the overall computation of QoS values has not been proposed before. Moreover, very little research has been done on how to ensure that service quality criteria are collected in a fair, dynamic and open manner. Most previous work centers on the collection of networking-level criteria such as execution time, reliability and availability [12]. No model gives details on how business criteria such as compensation and penalty rates can be included in QoS computation and enforced in an objective manner. In a dynamic environment, service providers can appear and disappear around the clock, and can change their services at any time in order to remain competitive. Previous work has not addressed these dynamic aspects of QoS computation; for example, there is no guarantee that the QoS obtained at run time for a particular provider is indeed the most up-to-date value. Our proposed QoS computation, which is based on active monitoring and consumers' feedback, ensures that the QoS value of a particular provider is always up to date.

In [9], the author proposes a QoS model with a QoS certifier that verifies published QoS criteria. It requires all web service providers to advertise their services with the QoS certifier. This approach lacks the ability to meet the dynamics of a market place where the needs of both consumers and providers are constantly changing. For example, it does not provide methods for providers to update their QoS dynamically, nor does it give the most up-to-date QoS to service consumers. There are also no details on how to verify the QoS with the service providers in an automatic and cost-effective way.

In [10], the authors propose a QoS middleware infrastructure that requires a built-in tool to monitor QoS metrics automatically. If such a tool can be built, it needs to poll all web services to collect their QoS metrics. Such an approach requires the willingness of service providers to surrender some of their autonomy. Moreover, if the polling interval is set too long, the QoS will not be up to date; if it is set too short, it may incur a high performance overhead.

A similar approach, which emphasizes service reputation, is proposed in [5,4]. A web service agent proxy is set up to collect reputation ratings from previous usage of web services. The major problem with this approach is that there is no mechanism in place to prevent false ratings from being collected. We have a mechanism to ensure that only the true user of a service is allowed to provide feedback.

In [3], the authors discuss a model based on service level agreements (SLAs), which are used as a bridge between service providers and consumers. A penalty concept is given in its definition of an SLA; the main emphasis is on what should be done if the service provider cannot deliver the service under the defined SLA, and on the options for consumers to terminate the service under the SLA. Penalty is not included as a service quality criterion that can be used in QoS computation.

In [2], a language-based QoS model is proposed. Here, the QoS of a web service is defined using a Web Service QoS Extension Language. This XML-based language can define the components of QoS in a schema file which can be used across different platforms and is human readable.


Conclusion

The quality-based selection of web services is an active research topic in the dynamic composition of services. The main drawback of current work in dynamic web service selection is the inability to ensure that the QoS of published web services remains open, trustworthy and fair. We have presented an extensible QoS model that is open, fair and dynamic for both service requesters and service providers. We achieve the dynamic and fair computation of the QoS values of web services through secure active user feedback and active monitoring. Our QoS model concentrates on criteria that can be collected and enforced objectively. In particular, we showed how business related criteria (compensation, penalty policies and transaction) can be measured and included in QoS computation. Our QoS model is extensible, so new domain specific criteria can be added without changing the underlying computation model. We provided an implementation of a QoS registry based on our extensible QoS model in the context of a hypothetical phone service provisioning market place. We also conducted experiments which validate the formula we used in the computation of QoS values in the phone service provisioning domain. The formula for computing the QoS values can be varied for different domains by using different sensitivity factors or different criteria with their associated groupings; additional factors, different groupings, or criteria might serve certain domains better. We provided a mechanism for service providers to query the QoS computed for them by the registry, and to update their published services to become more competitive at any time.

The success of this model depends on the willingness of end-users to give feedback on the quality of the services they consume. Our future work includes ways to automate the collection of feedback data; to ensure that all users provide feedback, we need to make providing feedback very straightforward. We also want the implemented QoS registry to become intelligent, both internally and externally. Internal intelligence refers to the ability to infer the appropriate sensitivity factors to use in order to obtain the list of providers that meets end-users' requirements. External intelligence refers to the ability of the QoS registry to predict the trend of the QoS of certain providers at runtime. For example, we want a system which knows how much the QoS value decreases in the morning and increases in the evening, or at different times of the year, and takes action to notify its users. For the service provider, we want the ability to notify the provider when the QoS of its service falls below a certain threshold.

Bibliography

1
Boualem Benatallah and Fabio Casati, editors.
Distributed and Parallel Databases, Special Issue on Web Services.
Springer-Verlag, 2002.

2
Peter Farkas and Hassan Charaf.
Web services planning concepts.
Journal of WSCG, 11(1), February 2003.

3
Li-jie Jin, Vijay Machiraju, and Akhil Sahai.
Analysis on Service Level Agreement of Web Services.
Technical Report HPL-2002-180, Software Technology Laboratories, HP Laboratories, June 2002.

4
E. Michael Maximilien and Munindar P. Singh.
Conceptual Model of Web Services Reputation.
SIGMOD Record, October 2002.

5
E. Michael Maximilien and Munindar P. Singh.
Reputation and Endorsement for Web Services.
ACM SIGecom Exchanges, 3(1):24-31, 2002.

6
Daniel A. Menasce.
QoS Issues in Web Services.
IEEE Internet Computing, 6(6), 2002.

7
J. O'Sullivan, D. Edmond, and A. ter Hofstede.
What's in a Service?
Distributed and Parallel Databases, 12(2-3):117-133, September 2002.

8
M.P. Papazoglou and D. Georgakopoulos.
Service-Oriented Computing.
Communications of the ACM, 46(10):25-65, 2003.

9
Shuping Ran.
A Model for Web Services Discovery with QoS.
ACM SIGecom Exchanges, 4(1):1-10, 2003.

10
Amit Sheth, Jorge Cardoso, John Miller, and Krys Kochut.
QoS for Service-Oriented Middleware.
In Proceedings of the 6th World Multiconference on Systemics, Cybernetics and Informatics (SCI02), pages 528-534, July 2002.

11
Aad van Moorsel.
Metrics for the Internet Age: Quality of Experience and Quality of Business.
Technical Report HPL-2001-179, HP Labs, August 2001.
Also published in 5th Performability Workshop, September 2001, Erlangen, Germany.

12
Liangzhao Zeng, Boualem Benatallah, Marlon Dumas, Jayant Kalagnanam, and Quan Z. Sheng.
Quality Driven Web Services Composition.
In Proceedings of the 12th International Conference on World Wide Web (WWW), Budapest, Hungary, May 2003. ACM Press.
