Overview

The aim of the evaluation is to demonstrate the SDC technique and, in particular, to quantify the achievable performance increase for automated Web service discovery. For this, we have performed evaluation tests that assess both the design-time and the runtime operations of our two-phased discovery framework with respect to retrieval accuracy and computational performance. We use the shipment scenario from the Semantic Web Service Challenge as the use case for the performance analysis. The Challenge is a widely recognized initiative for demonstrating and comparing semantically enabled Web service discovery techniques on the basis of real-world services. We closely follow the original scenario description, which defines five Web services for package shipment from the USA to different destinations, along with several examples of concrete client requests for Web service discovery.

The evaluation tests have been run as JUnit tests for the SDC prototype implementation on a conventional laptop with a 2 GHz Intel processor and 1 GB of RAM. A concise explanation of the use case scenario and the technical realization of the evaluation tests is given at http://members.deri.at/~michaels/software/sdc/20071019/#usageExample, and the goal and Web service descriptions as well as the domain ontologies modeled for the shipment scenario are available at resourcesSWSCshipment. We refer to Chapter 6 of the PhD thesis for the comprehensive specification of the evaluation methodology and the discussion of the results.
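To illustrate how the tests are packaged, the following is a minimal sketch of an evaluation test written as a JUnit test. The SdcGraph and GoalTemplate types are simplified placeholders introduced for this sketch and do not reflect the prototype's actual API.

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.List;

    import org.junit.Test;

    // Minimal sketch of an SDC evaluation test packaged as a JUnit test.
    // SdcGraph and GoalTemplate are simplified placeholders, not the
    // prototype's actual API.
    public class SdcGraphCreationSketchTest {

        static class GoalTemplate {
            final String id;
            GoalTemplate(String id) { this.id = id; }
        }

        static class SdcGraph {
            private final List<GoalTemplate> nodes = new java.util.ArrayList<GoalTemplate>();
            void insert(GoalTemplate gt) { nodes.add(gt); }  // the real insertion also computes the graph arcs
            int size() { return nodes.size(); }
        }

        @Test
        public void createGraphAndMeasureTime() {
            List<GoalTemplate> templates = Arrays.asList(
                    new GoalTemplate("gtRoot"), new GoalTemplate("gtChild1"), new GoalTemplate("gtChild2"));
            SdcGraph graph = new SdcGraph();

            long start = System.currentTimeMillis();
            for (GoalTemplate gt : templates) {
                graph.insert(gt);                 // design-time insertion, timed as a whole
            }
            long elapsed = System.currentTimeMillis() - start;

            System.out.println("SDC graph created in " + elapsed + " ms");
            assertEquals(templates.size(), graph.size());
        }
    }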

Evaluation Results

The following provides the complete results for the evaluation of the SDC technique as presented in the PhD thesis. The evaluation consists of two tests: the first one is concerned with the correctness and the required processing times for the creation and management of SDC graphs as the design-time operations in our framework, and the second one quantifies the performance increase for runtime Web service discovery that can be achieved by the SDC technique. The detailed discussion is given in Chapter 6 of the PhD thesis; the following provides the original test implementations and results.

SDC Graph Management

In order to properly evaluate all relevant aspects, we have defined the evaluation test to consist of three parts: (1) creating the SDC graph for the shipment scenario by successively inserting all goal templates in a top-down manner, (2) creating the SDC graph by inserting the goal templates in a different order, and (3) maintenance updates of the SDC graph when goal templates and Web services are removed or added. For each test, the following provides the implementation class, the resulting SDC graph in the form of a WSML knowledge base and a graphical visualization created with the open source GraphViz tool (available at http://www.graphviz.org/) from the DOT graph representation language, the measured times for the management operations, and the log file of the test run; a sketch of the DOT export step is given after the list below.

SDC Graph Creation - Top-Down

SDC Graph Creation - Other Insertion Order

SDC Graph Maintenance
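As an illustration of the visualization step, the sketch below serializes a simplified SDC graph to the DOT language so that GraphViz can render it (e.g. with dot -Tpng sdcGraph.dot -o sdcGraph.png). The adjacency-map representation and all identifiers are illustrative only and are not the data model used by the prototype.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative sketch: serializing a simplified SDC graph to the DOT language
    // so that GraphViz can render it, e.g. `dot -Tpng sdcGraph.dot -o sdcGraph.png`.
    // The adjacency map below is a toy structure, not the prototype's data model.
    public class SdcGraphDotExport {

        public static void main(String[] args) throws IOException {
            // goal template arcs (child -> parents) of a toy graph
            Map<String, List<String>> arcs = new LinkedHashMap<>();
            arcs.put("gtShipUS", List.of("gtShipWorld"));
            arcs.put("gtShipEurope", List.of("gtShipWorld"));

            StringBuilder dot = new StringBuilder("digraph sdcGraph {\n");
            for (Map.Entry<String, List<String>> entry : arcs.entrySet()) {
                for (String parent : entry.getValue()) {
                    dot.append("  \"").append(entry.getKey())
                       .append("\" -> \"").append(parent).append("\";\n");
                }
            }
            dot.append("}\n");

            Files.writeString(Paths.get("sdcGraph.dot"), dot.toString());
            System.out.println(dot);
        }
    }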

Runtime Web Service Discovery

The main interest in evaluating the optimized runtime Web service discovery is the actual performance increase that can be achieved with the SDC technique. For this, we compare the SDC-enabled runtime discoverer with (1) a naive engine that does not apply any optimization, and (2) a reduced version of the SDC-optimized engine that does not fully exploit the knowledge captured in the SDC graph. The following provides the implementation classes, the original data, and the log files of the comparison tests as discussed in detail in Chapter 6 of the PhD thesis; an illustrative timing harness is sketched after the list below.

SDC vs. Naive, Single Web Service Discovery

SDC vs. Naive, All Web Service Discovery

SDC-full vs. SDC-light
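To indicate how such a comparison can be set up, the following sketch times two interchangeable discovery engines on the same sequence of goal instances. The DiscoveryEngine interface and both lambda implementations are placeholders for this sketch rather than the SDC prototype's classes, which resolve goal instances against the goal templates and Web services of the shipment scenario.

    import java.util.List;

    // Illustrative timing harness for comparing two discovery engines on the same
    // goal instances. Both engines here are trivial placeholders; in the real tests
    // they would be the naive matchmaker and the SDC-optimized discoverer.
    public class DiscoveryComparisonSketch {

        interface DiscoveryEngine {
            List<String> discover(String goalInstance);   // ids of the usable Web services
        }

        static long timeRun(DiscoveryEngine engine, List<String> goalInstances) {
            long start = System.nanoTime();
            for (String goal : goalInstances) {
                engine.discover(goal);
            }
            return (System.nanoTime() - start) / 1_000_000;   // elapsed time in milliseconds
        }

        public static void main(String[] args) {
            List<String> goalInstances = List.of("giShipBerlin", "giShipNewYork", "giShipTokyo");

            DiscoveryEngine naive = goal -> List.of("ws1", "ws2");   // would check every Web service
            DiscoveryEngine sdcEnabled = goal -> List.of("ws1");     // would prune the search via the SDC graph

            System.out.println("naive engine:       " + timeRun(naive, goalInstances) + " ms");
            System.out.println("SDC-enabled engine: " + timeRun(sdcEnabled, goalInstances) + " ms");
        }
    }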