AN INTEGRATED TEST ENVIRONMENT FOR DISTRIBUTED APPLICATIONS

Huey–Der Chu and John E Dobson

Centre for Software Reliability, Department of Computing Science

University of Newcastle upon Tyne, NE1 7RU, UK

ABSTRACT

Software testing is an essential component in achieving software quality. However, it is a very time-consuming and tedious activity and accounts for over 30% of the cost. In addition to its high cost, manual testing is unpopular and often inconsistently executed. Therefore, a powerful environment that automates testing and analysis techniques is needed. This paper presents a statistics-based integrated test environment (SITE) for testing distributed applications. To address two crucial issues in software testing, when to stop testing and how good the software is after testing, SITE provides automatic support for test execution, test development, test failure analysis, test measurement, test management and test planning.

Keywords: Software Test Environment, Statistics-based Testing, Distributed Applications

1 INTRODUCTION

Software testing is a very time-consuming and tedious activity; it is therefore a very expensive process and accounts for over 30% of the cost of software system development [11, 12]. In addition to its high cost, manual testing is unpopular and often inconsistently executed. To achieve the high quality required of software applications, a powerful environment that automates sophisticated testing and analysis techniques is needed. Software Testing Environments (STEs) overcome the deficiencies of manual testing by automating the test process and integrating testing tools to support a wide range of test capabilities [5]. The use of an STE provides significant benefits, as follows [15, 17]. Firstly, major productivity enhancements can be achieved by automating techniques through tool development and use. Secondly, errors made in testing activities can be reduced through formalizing the methods used. Thirdly, defined testing processes secure more accurate, more complete and more consistent testing than human-intensive, ad hoc testing processes. Fourthly, automated testing improves the likelihood that results can be reliably reproduced.

The Statistics-based Integrated Test Environment (SITE) provides a test environment based on statistical testing which secures automated support for the testing process, including modelling, specification, statistical analysis, test data generation, test results inspection and test path tracing. Testing a distributed application is very complex because such a system is inherently concurrent and non-deterministic, which adds another degree of difficulty to the analysis of the test results. Therefore, a systematic and effective test environment for distributed applications is highly desirable. To address these problems, SITE is developed on top of the PVM (Parallel Virtual Machine) software, which is used as a platform for developing tools for parallel and distributed programs.


In Section 2 of this paper, an operational environment for testing distributed software is presented. A basic architecture of automated software testing is introduced in Section 3, and an overview of our approach is given at the end of that section. In Section 4, the architecture of SITE is described and the relation of the main components is also shown. A comparison of STEs using the SAAM structure is discussed in Section 5. Section 6 summarizes our research work.

2 AN OPERATIONAL ENVIRONMENT FOR TESTING DISTRIBUTED SOFTWARE

2.1 Overview

Distributed applications have traditionally been designed as systems whose data and processing capabilities reside on multiple platforms, each performing an assigned function within a known and controlled framework contained in the enterprise. Even if the testing tools were capable of debugging all types of software components, most do not provide a single monitoring view that can span multiple platforms. Therefore, developers must jump between several testing/monitoring sessions across the distributed platforms and interpret the cross-platform gap as best they can. That is, of course, assuming that comparable monitoring tools exist for all the required platforms in the first place. This is particularly difficult when one server platform is the mainframe, as generally the more sophisticated mainframe testing tools do not have comparable PC- or Unix-based counterparts. Therefore, testing distributed applications is exponentially more difficult than testing standalone applications.

To overcome this problem, we present an operational environment for testing distributed applications based on the PVM software, as shown in Figure 1, allowing testers to track the flow of messages and data across and within the disparate platforms.

Figure 1: An operational environment for testing distributed applications (layered structure: X under UNIX workstations as hardware; PVM software, Tcl/Tk and C as software; SITE, a Statistics-based Integrated Test Environment, with VPE and XPVM software; the distributed applications at the top, taking commands from the GUI and returning test reports)

The primary goal of this operational environment is an attempt to provide a coherent, seamless environment that can serve as a single platform for testing distributed applications. At the lowest level is the UNIX system, which often plays a part in distributed and client-server systems. There is an interesting point to make about the use of UNIX in commercial IT operations. When UNIX is involved there is a very noticeable UNIX effect on testing. Because it was an engineer's system from the outset, UNIX supports testing better than almost any other environment. The UNIX community is engineering-minded, and this carries over into software engineering and hence into software testing. On top of the UNIX system run the PVM software and some related software. PVM software is a de facto standard for writing distributed and parallel applications based on the message passing paradigm; it provides a unified framework within which parallel programs can be developed in an efficient and straightforward manner using existing hardware. On top of this platform is SITE, which secures automated support for the testing process, including modelling by the VPE (Virtual Programming Environment) software, specification, statistical analysis, test data generation, test results inspection and test path tracing by the XPVM software. At the top of this environment are the distributed applications. These can use or bypass any of the facilities and services in this operational environment.
This environment receives commands from the users (testers) through a Graphical User Interface (GUI) and returns test reports to them.

The picture given in Figure 1 shows an approximate idea of how the various parts of the operational environment fit together. It gives an indication of the gross structure, so henceforth we will use it as our model.

2.2 The Distributed Computing Environment

2.2.1 PVM System

PVM software [6] is a message passing system that enables a collection of heterogeneous computer systems to be viewed as a single distributed memory parallel computer. PVM transparently handles all message routing, data conversion and task scheduling across a network of incompatible computer architectures.

The PVM computing model is simple yet very general and accommodates a wide variety of application program structures. The programming interface is deliberately straightforward, thus permitting simple program structures to be implemented in an intuitive manner. The user writes his application as a collection of cooperating tasks. Tasks access PVM resources through a library of standard interface routines. These routines allow the initiation and termination of tasks across the network as well as communication and synchronization between tasks. The PVM message-passing primitives are oriented towards heterogeneous operation, involving strongly typed constructs for buffering and transmission. Communication constructs include those for sending and receiving data structures as well as high-level primitives such as broadcast, barrier synchronization and global sum.
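To make the PVM programming model concrete, the following is a minimal sketch in C of a master task that spawns one worker and exchanges a single integer with it over the message-passing routines just mentioned. The executable name "worker" and the message tags are illustrative only; they are not taken from the paper.

/* master.c -- minimal PVM master: spawn one worker, send it an integer,
   receive the worker's reply.  Task name and tags are illustrative.     */
#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int mytid = pvm_mytid();        /* enrol this process in PVM         */
    int wtid;                       /* task id of the spawned worker     */
    int value = 42, reply;

    /* start one instance of the executable "worker" anywhere in the
       virtual machine (PvmTaskDefault lets PVM choose the host)         */
    if (pvm_spawn("worker", NULL, PvmTaskDefault, "", 1, &wtid) != 1) {
        fprintf(stderr, "spawn failed\n");
        pvm_exit();
        return 1;
    }

    pvm_initsend(PvmDataDefault);   /* new send buffer, default encoding */
    pvm_pkint(&value, 1, 1);        /* pack one integer                  */
    pvm_send(wtid, 1);              /* send with message tag 1           */

    pvm_recv(wtid, 2);              /* block until the reply (tag 2)     */
    pvm_upkint(&reply, 1, 1);
    printf("master %x got %d back from worker %x\n", mytid, reply, wtid);

    pvm_exit();                     /* leave the virtual machine         */
    return 0;
}

The matching worker would use pvm_parent(), pvm_recv() and pvm_send() to receive the value and return a reply under tag 2.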

PVM tasks may possess arbitrary control and dependency structures. In other words, at any point in the execution of a concurrent application, any task in existence may start or stop other tasks or add or delete computers from the virtual machine. Any process may communicate and/or synchronize with any other. Any specific control and dependency structure may be implemented under the PVM system by appropriate use of PVM constructs and host language control-flow statements.

Several research groups have developed software packages that, like PVM, assist programmers in using distributed computing. However, PVM stands out because of its good functionality for parallel and distributed applications. The recent publicity surrounding MPI (Message Passing Interface) has caused programmers to wonder whether they should use the existing de facto standard PVM or shift their codes to the MPI standard. In [7], the authors compared the features of the two Application Programming Interfaces (APIs) and pointed out situations where one is better suited than the other.

Programmers should evaluate the functional requirements and running environment of their application and choose the API that has the features they need. PVM software is very popular in parallel and distributed systems. Many projects run on top of PVM and there is a public PVM news group (comp.parallel.pvm) for exchanging information amongst researchers. More resources and data can be obtained from the PVM WWW homepage (http://www.epm.ornl.gov/pvm/pvm_home.html). In any case, PVM is one of the most widespread tools used in parallel and distributed computing; it is available on a large number of scalar, vector and parallel machines and, last but not least, it is free of charge.

2.2.2 VPE Software

VPE [13] is an integrated parallel computing environment with a message-passing orientation. Programs are created by drawing graphs in which compute nodes represent sequential computations and messages flow among them on arcs. Compute nodes are annotated with ordinary C or Fortran program text that contains calls to the VPE message-passing library. One VPE graph may call another via a Call node. Thus, a VPE program consists of one or more graphs. The list of graphs that make up a VPE program is stored in a project file.

VPE itself runs under X on UNIX workstations. The parallel programs it creates run under PVM, although VPE programmers make no direct use of PVM calls. VPE programs thus run on any target that supports PVM; however, the focus is on heterogeneous collections of workstations used as a virtual parallel machine.

2.2.3 XPVM Software

It is often useful and always reassuring to be able to see the present configuration of the virtual machine and the status of the hosts. It would be even more useful if the user could also see what his program is doing: what tasks are running, where messages are being sent, and so on. The PVM GUI called XPVM was developed to display this information, and more. XPVM [10] provides a graphical interface to the PVM console commands and information, along with several animated views to monitor the execution of PVM programs. These views provide information about the interactions among tasks in a parallel PVM program, to assist in debugging and performance tuning.

The tracing information from PVM is delivered to XPVM using standard PVM message operations. Any tasks spawned from XPVM automatically send back trace events that describe their PVM activity. XPVM decodes these trace messages and saves the data into a trace file which can be read in to drive the views. XPVM can be used either "real time", to display the current state of a program as it changes, or "post-mortem", by replaying the saved trace files after the program has completed.

2.3 The GUI

Graphical User Interface (GUI) programs provide more features than the structured navigation and data entry of non-GUI applications. The user can select a variety of functions from each screen. The advantages of GUIs are [16]: firstly, users with no computing experience can learn to use the interface after a brief training session; secondly, the user has multiple screens for system interaction; thirdly, full-screen interaction is possible rather than the line-oriented interaction required by command interfaces. However, GUIs are more complex than simple textual interfaces, so the cost of interface engineering is greater.

Tcl is an interpreted and very typical-looking shell-like language that is widely used in many applications. There are commands to set a variable (set), to control the flow (if, while, foreach, etc.) and to perform the usual mathematics and string operations. Of course, UNIX programs can be called (exec). Tk provides commands to build user interfaces for the X window system. With Tk, we can build GUIs entirely using Tcl and its extensions.

In this operational test environment, Tcl/Tk is used for building the command interfaces amongst the testers, SITE and the PVM software. A command may be a query, the initiation of some sub-process, or a call to a sequence of other commands to execute the test process. The XPVM system provides a GUI, written in C using the Tcl and Tk systems, to the PVM console commands and information.

3 A BASIC ARCHITECTURE OF AUTOMATED SOFTWARE TESTING

In this section, a process of automated testing is described for distributed applications. Testing is a method to validate that the behaviour of an object conforms to its requirements specification. Therefore, before testing, the requirements specification activity should specify the detailed input data, expected results and non-deterministic or deterministic behaviour of a distributed application. Formal or semi-formal specification techniques may be appropriate for expressing such a specification, which can act as a basis for test data generation, test execution and test result validation. The basic architecture of automated testing is shown in Figure 2.

3.1 Requirements Specification

Requirements specification is the activity of identifying all of the requirements necessary to develop the software and fulfil the user's needs [2]. Not only do testers need specified-behaviour information in order to detect whether or not the test results satisfy their requirements, but other software developers must have that information as well.

Figure 2: The basic architecture of automated testing (a test bed with the specification, a SMAD tree or Z specifications, connecting the test data generator, the software under test and the test results validator)

However, the functional requirements are not enough to achieve software quality. What we need to do is to add quality to the software during the engineering process. To achieve this, testers must be conscious of quality requirements at the same time as they are building in functional requirements [4]. In other words, the objective of this paper is to answer the two crucial questions in software testing:

- When to stop testing (whether distributed or not)
- How good the software is after testing

Therefore, the requirements specification must include the software requirements, the test requirements and the quality requirements [2, 3]:

- Software requirements include input, processing and output requirements. The input requirements consist of the types of input, quality characteristics of each type of input, rules for using the input and constraints on using the input. The process requirements contain exhaustive listings of the functions the software must have. The output requirements include the man-machine interface and other characteristics of the products to be generated by the software.
- Test requirements consist of a test plan with which the software is to be tested and accepted. They define the product units and product unit defectiveness for statistical sampling, the sampling methods for estimating the defect rate of the software population with which to judge software quality, the statistical inference methods and confidence level of software output population quality, the acceptable software defect rate and the test input unit generation methods.
- Quality requirements are specified by activities that analyze the user's needs for quality, convert the quality needs to requirements and document the results of the software requirements analysis. These documents must clearly be validated by users, since only they know what they want.

A requirements specification presented to a tester could be as informal as a set of notes scribbled during a meeting or as formal as a document written in a specification language. Formal languages such as Z, VDM, LOTOS, etc. [8, 14] have been promoted strongly by the academic community in recent years, although their take-up in industry has been patchy. They are particularly well suited for specification-based testing. However, the method of test data selection in these approaches was based on the deterministic testing method, and the two main issues of software testing, when to stop testing and how good the software is after testing, were only briefly discussed in these approaches.

3.2 Test Data Generator

Test data generation is the process of selecting execution paths and input data for testing. Most approaches to automatic test data generation are based on the implementation code, using either stochastic methods for generation or symbolic execution. This seems quite natural, since in a traditional software development process the implementation is usually the only "specification" that has formal semantics allowing detailed, automatic analysis. Using formal specifications, these tasks can instead be carried out against the specification. Work with formal specifications has been done as well, describing either manual or automatic test data generation [8].

A test data generator is a tool which assists a tester in generating test data for a piece of software. It takes formally recorded specification information, treats it as though it were a knowledge base or database, and applies test design rules to this base to automatically create test data. If a requirement in the knowledge base changes, new test data can be designed, generated, documented and traced.
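As an illustration only (SITE's own generator works from the SMAD tree, described later), the sketch below shows the general idea of rule-driven test data generation: a syntactic rule for an input field is turned into concrete data using a seeded pseudo-random choice, so that the same seed reproduces the same test set. The rule type and field names are hypothetical.

/* testgen.c -- illustrative rule-driven test data generation.
   The "rule" type and field ranges are hypothetical, not SITE's format. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {            /* a trivial syntactic rule: an integer field */
    const char *name;       /* field name                                 */
    int lo, hi;             /* permitted range                            */
} int_rule;

/* draw one value satisfying the rule */
static int gen_int(const int_rule *r)
{
    return r->lo + rand() % (r->hi - r->lo + 1);
}

int main(void)
{
    int_rule rules[] = { { "msg_length", 1, 1024 }, { "priority", 0, 7 } };
    unsigned seed = 12345;  /* recorded so the test set can be regenerated */
    int i, n = 5;           /* sample size, e.g. set by a quality analysis */

    srand(seed);
    for (i = 0; i < n; i++)                     /* one line per input unit */
        printf("unit %d: %s=%d %s=%d\n", i,
               rules[0].name, gen_int(&rules[0]),
               rules[1].name, gen_int(&rules[1]));
    return 0;
}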

3.3 Test Execution

Test execution is the process of feeding test data to the software and collecting information to determine the correctness of the test run. For a sequential piece of software, this process can be accomplished without difficulty. However, for distributed applications, some test cases can be very hard to execute because, with more than one process executing concurrently in a system, there are non-deterministic behaviours. Repeated executions of a distributed piece of software with the same input may exercise different paths in the software and produce different results. This is called the non-reproducible problem. Therefore, a mechanism is required in order to exercise these test cases.
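The non-reproducible problem can be seen even in a few lines of PVM code: if a collector accepts messages from any sender, the order in which concurrently running senders are observed may change from run to run although the input is identical. The sketch below assumes a hypothetical "worker" executable that sends back its task id under tag 1.

/* collector.c -- illustrative only: with a wildcard receive, the order in
   which messages from concurrent workers arrive is not determined by the
   program text, so repeated runs may observe different event orders.     */
#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int tids[2], i, who;

    pvm_mytid();                    /* enrol in the virtual machine        */
    /* start two identical workers that each send back their task id      */
    pvm_spawn("worker", NULL, PvmTaskDefault, "", 2, tids);

    for (i = 0; i < 2; i++) {
        pvm_recv(-1, 1);            /* -1: accept a message from ANY task  */
        pvm_upkint(&who, 1, 1);
        printf("message %d came from task %x\n", i, who);  /* order varies */
    }
    pvm_exit();
    return 0;
}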

3.4 Test Results Validator

Validation of test results is the process of analyzing the correctness of the test run. For sequential software, the correctness of an execution can be observed by comparing the expected outputs with the software outputs. However, for distributed applications, again because of non-determinism, there may be more than one, or possibly infinitely many, outputs for one execution. Validation of such test results is much more difficult than that of sequential test results.

The behaviour of a distributed application can be represented by sequences of communication events. Each such sequence represents a possible interaction of communication events. Generally, the event sequence is long and the number of all possible sequences is usually extremely large. Because of the non-determinism of distributed applications, using breakpoints to validate the execution result is not acceptable. To reduce the interference of testing with the system, it is required that the sequence of events transmitted during the execution be recorded in a so-called execution history file for off-line analysis. However, this analysis is erroneous and tedious work if it is done by a human. Thus, an automated analysis tool is required.

3.5 Our Approach

To guide testers in testing distributed software, a tool, the SMAD tree, which lies between formal and informal specification, is presented. Extending the concept of the SIAD/SOAD tree in FAST [3], we attempt to specify all possible delivered messages between events by means of the "Symbolic Message Attribute Decomposition" (SMAD) tree. It combines classification with syntactic structure to specify all delivered messages. In the upper level of the SMAD tree, we classify all delivered messages into three types: input messages, intermediate messages and output messages. Each type of message has a syntactic sub-tree describing the characteristics of the messages, with a happened-before relationship so that it can be determined whether messages were delivered in an order consistent with the potential causal dependencies between messages.

The SMAD tree is used to define the test case, which consists of an input message plus a sequence of intermediate messages, to resolve any non-deterministic choices that are possible during software execution, e.g., the exchange of messages between processes. In other words, the SMAD tree can be used in two ways: firstly to describe the abstract syntax of the test data (including temporal aspects) and secondly to hold data occurring during the test.

A test data input message can be generated based on the input message part of the SMAD tree and on rules for setting up the ordering of messages which are incorporated into the tree (initial event). The intermediate message part of the SMAD tree can trace the test path and record the temporally ordered relationships during the lifetime of the computation. The test results can also be inspected based on the output message part of the SMAD tree (final event), both with respect to their syntactic structure and with respect to the causal message ordering under repeated executions.

For testing distributed applications, the test strategy consists of testing on two levels: component testing and interaction testing. The component testing is based on dynamic testing which combines FAST and other testing techniques. The interaction testing can reveal potential behavioural properties of a distributed piece of software using deterministic testing.
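The paper does not give a concrete data structure for the SMAD tree. Purely as an illustration of the two-level idea, message classification on the upper level and syntactic sub-trees with ordering information below, a node might be sketched in C as follows; all names and fields are hypothetical, not SITE's format.

/* smad.h -- hypothetical sketch of an SMAD tree node; not SITE's format. */
typedef enum { MSG_INPUT, MSG_INTERMEDIATE, MSG_OUTPUT } msg_class;

struct smad_node {
    msg_class cls;              /* upper level: message classification      */
    const char *attribute;      /* name of the message attribute            */
    const char *syntax_rule;    /* syntactic rule for generation/inspection */
    struct smad_node **child;   /* syntactic sub-tree                       */
    int nchild;
    int happens_before;         /* index of a node whose message must be    */
                                /* delivered before this one (-1 if none)   */
};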

4 SITE: A STATISTICS–BASED INTEGRATED TEST ENVIRONMENT

The objective of SITE is to build a fully automated testing environment with statistical analysis. The architecture of SITE, suggested in Figure 3, consists of computational components, control components and an integrated database. The computational components include the modeller, the SMAD tree editor, the quality analyst, the test data generator, the test paths tracer, the simulator and the test results validator. There are two control components, the test manager and the test driver.


Figure 3: The architecture of SITE (the test manager and the test driver control the computational components: the modeller, the SMAD tree editor for the specification, the quality analyst, the test data generator, the test path tracer, the test results validator and the simulator, covering component testing and interaction testing around an integrated database)

SITE is designed for distributed applications according to the following test requirements:

- To set up the requirements, including the functional and quality requirements,
- To execute automated testing until the software has been sufficiently tested (when to stop testing),
- To re-execute the input units which have been tested (regression testing),
- To execute the component testing first and the interaction testing second,
- To test all "interface" paths among processes, each of which should be traversed at least once,
- To enhance testing in areas that are more critical,
- To produce test execution, test failure and test quality reports.

For a distributed application, the test environment can model the executing behaviour, edit the messages' specification into a SMAD tree file, automatically generate test data based on statistical testing, receive a test software, run the software with the generated test data, trace the test paths, recording them in the path records file for re-tests, inspect the test results and finally generate a test report for the tester.

4.1 Test Manager

Software testing is an extremely complicated process consisting of many activities and dealing with many files created during testing. The test manager has two main tasks: control management and data management.

The task of control management provides a GUI between the tester and SITE. This GUI receives commands from the tester and corresponds with the functional modules to execute the action and obtain the test results. It triggers the test driver to start a test and gets back the status report of the test execution, which is saved in the test report repository.

The task of data management provides support for creating, manipulating and accessing data files, as well as the relations among these data files, which are maintained in a persistent database in the test process. This database consists of static and dynamic data files. The static data files include a message-flow paths file, a SMAD tree file, a random number seeds file and a quality requirement file. The dynamic data files include an input unit file, a product unit file, a test paths recording file, a defect rate file, a file for the range of defect rate and a sample size file.

Figure 4: A conceptual data model for SITE (the message-flow paths, SMAD tree, random number seeds and quality requirement files describe the messages, provide seeds and sample sizes and set the acceptance level; the input unit, product unit, test paths recording, defect rate, range of defect rate and sample size files record what flows into and out of each test, its pass/failure outcome and the answers to when to stop and how good the software is)

A conceptual data model for this database is shown in Figure 4. These data files will be described more fully throughout this paper as they arise.


4.2 Modeller

The modelling activity includes [3]: modelling of inputs and outputs as well as modelling of the software. Inputs are modelled in terms of types of input data, rules for constructing inputs and sources of inputs. The modelling of output includes the crucial definitions of product unit and product unit defectiveness, on which the design and testing of the software must be based. The software itself, as distinct from its output, is modelled in terms of the description of the process being automated, rules for using inputs, methods for producing outputs, data flows, process control and methods for developing the software system.

A distributed application is a system consisting of a set of communicating processes, where each process holds its own local data and the processes communicate by message passing. In SITE, the modelling component describes the set of asynchronous processes in the distributed application to be tested, together with message-flow routines, to gather information about the application's desired behaviour from which all tests are then automatically derived.

This model is used as the basis of a specification in the SMAD tree that can be used to describe the abstract syntax of the test cases as well as to trace data occurring during the test. The message-flow routines provide an elemental function visible at the system level and constitute the point at which integration and system testing meet, which results in a more seamless flow between these two forms of testing. This information provides support for test planning (component testing and interaction testing) to the test driver, as well as to the SMAD tree editor for specifying messages among events.

The modelling of output also includes output quality planning, in which sampling methods and parameters for software testing and the acceptance procedure are determined. These parameters include firstly a definition of the defectiveness of the product unit, so that the quality of a product unit can be evaluated, and secondly an identification of the tolerance limits in defining the defectiveness of a product unit. This information provides support for test planning and test measurement to the statistical analyst.

4.3 SMAD Tree Editor

Requirements specification is the activity of identifying all of the requirements necessary to develop the software and fulfil user needs. Here, the SMAD tree is a powerful tool for representing the input/output domain in a convenient form for the crucial part of the requirements specification.

The SMAD tree editor is a graphical editor that supports editing and browsing of the SMAD tree. The SMAD tree and the model will be built at the same time. The modeller will trigger the SMAD tree editor when each message links two events during the modelling process. The result of editing will be saved in a SMAD tree file, which allows the test data generator to generate test data by a random method and the test results validator to inspect the product unit.


4.4 Test Driver

The test driver calls the software being tested and keeps track of how it performs. More specifically, it should:

- Set up the environment needed to call the software being tested. This may involve setting up and perhaps opening some files.

- Make a series of calls to operate the dynamic testing. The arguments for these calls could be read from a file or embedded in the code of the driver. If arguments are read from a file, they should be checked for appropriateness, if possible.

There are some different activities between the component testing and the interaction testing. Therefore, the test driver invokes different computational components or sub-components at the different levels of testing. This difference is shown in Figure 3.

During the component testing, the test driver triggers the test data generator to generate input according to the requirements determined by the statistical analysis of the quality analyst, makes a series of calls to execute the application, and passes the product unit to the test results validator for evaluation of the tests and the software.
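A rough sketch of this component-testing cycle is given below; the helper functions are hypothetical stand-ins for the SITE components named above, not a real API, and the interaction-testing counterpart is described in the next paragraph.

/* driver_sketch.c -- hypothetical outline of the component-testing loop.
   The helper functions are stand-ins for SITE components, not a real API. */
#include <stdio.h>
#include <stdlib.h>

static int  sample_size(void)             { return 10; }          /* from the quality analyst   */
static void generate_input_unit(int i)    { printf("gen %d\n", i); }
static void run_software_under_test(int i){ printf("run %d\n", i); }
static int  validate_product_unit(int i)  { return rand() % 100 < 3; } /* 1 = defective          */

int main(void)
{
    int n = sample_size();               /* determined iteratively by sampling */
    int i, defects = 0;

    for (i = 0; i < n; i++) {
        generate_input_unit(i);          /* saved to the input unit file        */
        run_software_under_test(i);      /* the test driver calls the software  */
        if (validate_product_unit(i))    /* the validator inspects the product  */
            defects++;
    }
    printf("sample defect rate = %g\n", (double)defects / n);  /* to the quality analyst */
    return 0;
}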

After the component testing, the test driver performs the interaction testing. It starts by calling the test data generator to generate an input message plus a sequence of intermediate messages, which are selected to correspond to the message-flow paths file, and sets up the ordering of messages using the 'happened before' relationships which are incorporated into the SMAD tree. When the test runs, the test driver invokes the test paths tracer to trace the test path and record the temporally ordered relationships into the path recordings file during the lifetime of the computation. The test results can also be saved into the product unit file for the test results validator to inspect the product unit, both with respect to its syntactic structure and with respect to the causal message ordering under repeated executions using the path recordings file.

4.5 Quality Analyst

4.5.1 Statistical Analysis For Component Testing

Testing a piece of software seeks to find the defect rate of the product unit population generated by the software. Therefore, each execution of the software in SITE is considered equivalent to 'sampling' a product unit from the population, which consists of an infinite number of units. The goal of statistics-based testing is to find certain characteristics of the population, such as the ratio of the number of defective units in the population to the total number of units in the population. Clearly a mass inspection of the population to find this rate is prohibitive. An efficient method is statistical random sampling. A sample of n units is taken randomly from the population. If it contains d defective units, then the sample defect rate, denoted by θ0, is θ0 = d/n. If n is large enough, then the rate θ0 can be used to estimate the product unit population defect rate θ.


Addressing the two major testing issues, when to stop testing and how good the software is after testing, the statistical analyst provides an iterative sampling process that dynamically determines the sample size n. It also provides a mechanism to estimate the mean, denoted by µ, of the product unit population. Once the value of µ is estimated, the product unit population defect rate θ can be computed from µ = nθ. If the value of θ is acceptable, then the product unit population is acceptable. The piece of software is acceptable only when the product unit population is acceptable. Therefore, the estimated product unit population defect rate θ can be viewed as the software quality index. The full details can be seen in [3].
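To make the sampling idea concrete, the sketch below estimates the sample defect rate θ0 = d/n after each execution and a normal-approximation confidence bound for θ; sampling stops once the upper bound falls below the acceptable defect rate. This is a generic illustration of a statistics-based stopping rule, not the exact iterative procedure of [3], and the acceptance level, confidence value and minimum sample size are illustrative.

/* sampling_sketch.c -- generic illustration of a statistics-based stopping
   rule: estimate the defect rate and stop when its upper confidence bound
   falls below the acceptable rate.  Not the exact procedure used in [3].  */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double accept_rate = 0.01;   /* acceptable defect rate, from the quality requirement */
    double z = 1.96;             /* roughly 95% confidence                                */
    int n, d = 0;

    /* simulated test outcomes (0 = pass, 1 = defective); in SITE these
       would come from the test results validator                          */
    int outcomes[] = {0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0};
    int total = (int)(sizeof outcomes / sizeof outcomes[0]);

    for (n = 1; n <= total; n++) {
        d += outcomes[n - 1];
        double theta0 = (double)d / n;                         /* sample defect rate      */
        double half   = z * sqrt(theta0 * (1.0 - theta0) / n); /* crude CI half-width     */
        double upper  = theta0 + half;                         /* upper bound on theta    */
        printf("n=%d  theta0=%.4f  upper bound=%.4f\n", n, theta0, upper);
        if (n >= 10 && upper <= accept_rate) {                 /* when to stop testing    */
            printf("stop testing: quality acceptable\n");
            break;
        }
    }
    return 0;
}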

The statistical analyst receives quality statements from a quality requirement file. The quality statement defines software quality as equivalent to p% of the product unit population being non-defective (the acceptance level). The result of the iterative sampling process, the sample size n, is dynamically saved into a sample size file to provide information to the test data generator. The values of the confidence interval are also computed and saved into a file for the range of defect rate, supporting the evaluation of software quality by the test results validator.

4.5.2 Test Coverage Analysis For Interaction Testing

The objective of interaction testing is to verify the message exchanges among processes. One reasonable coverage criterion is to require that all "interface" messages between a pair of processes should be exercised at least once. An "interface" message is a message sent out by one process and received by a different process. In SITE, we can compare the path recordings file with the message-flow paths to examine whether there are "interface" messages which have not been exercised. If so, more tests are added until the test set is sufficient for the quality level required.

4.6 Test Data Generator

After the sample size is determined, the SMAD tree file is used for automatically generating input test data through random sampling with a random number seed. The input test data are temporarily saved in the input unit file for regression tests according to the test requirements.

For interaction testing, the test data generator addresses how to select the input test data plus event sequences from the SMAD tree with the "happened before" relationship. Due to the unpredictable progress of distributed processes and the use of non-deterministic statements, multiple executions of an application with the same input may exercise different message-flow paths. Therefore, the input test data plus event sequences are generated with reference to the message-flow paths file.
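The following sketch shows, in outline only, how an intended message-flow path and a seeded input unit might be paired up for one interaction test. The path table stands in for the message-flow paths file and the seed for an entry of the random number seeds file; neither reflects SITE's real file formats.

/* interaction_gen.c -- hypothetical sketch: pick a message-flow path and a
   seeded input unit for one interaction test.  The path table stands in
   for the message-flow paths file; it is not SITE's real file format.    */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* each entry names an intended sequence of interface messages */
    const char *paths[] = {
        "P1->P2:req  P2->P3:work  P3->P1:done",
        "P1->P3:req  P3->P2:work  P2->P1:done",
    };
    int npaths = (int)(sizeof paths / sizeof paths[0]);

    unsigned seed = 777;           /* taken from the random number seeds file */
    srand(seed);

    int chosen = rand() % npaths;  /* the intended test path                  */
    int input  = rand() % 1000;    /* the input message payload               */

    printf("seed=%u  path=%s  input=%d\n", seed, paths[chosen], input);
    /* the test driver would replay this path, and the path tracer would
       record the messages actually delivered for later comparison           */
    return 0;
}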

4.7 Test Path Tracer

The reproducibility of tests is important, particularly in testing distributed applications. Therefore, we need a mechanism for tracing and recording test paths during the test. The tracer consists of correlated views that allow the tester to compare different information about a path routing in the software execution. The path tracer records events from currently executing tasks into a path records file, where the trace is played in "real time". Once a path records file has been created, the tester can replay the trace for re-tests.

XPVM [10] receives task tracing information in the form of distinct trace event messages by the standard PVM message passing mechanism. PVM uses the existing message channels and PVM daemons to route the trace event messages. Because it may not be desirable in all situations to trace all of the PVM system routines (we are only interested in tracing "interface" messages here), the PVM library instrumentation uses a trace mask to selectively determine which PVM routines are to be traced. When the trace mask for a particular routine is set, trace events are generated on each invocation of that routine by the user application. One event is recorded at the entry to each traced PVM routine. This event contains the values of any calling parameters for the specific invocation of the routine. Another event records the return from each routine, including any returned values or status codes for that invocation.
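The entry/return pattern of trace events can be pictured with a small wrapper. This is only an illustration of the idea: XPVM's instrumentation lives inside the PVM library itself and is controlled by the trace mask, not by user-level wrappers, and the event format and task id below are invented for the example.

/* trace_sketch.c -- illustration of entry/return trace events around a
   message-passing call.  XPVM instruments the PVM library internally;
   this user-level wrapper only mimics the idea.                          */
#include <stdio.h>
#include <time.h>

static void trace_event(const char *routine, const char *phase, int value)
{
    /* one line per event: routine name, entry or return, a parameter or
       status code, and a timestamp, appended to an execution history      */
    printf("%ld %s %s %d\n", (long)time(NULL), routine, phase, value);
}

static int traced_send(int dest_tid, int tag)
{
    trace_event("pvm_send", "entry", dest_tid);   /* calling parameter          */
    int status = 0;                               /* here: pvm_send(dest_tid, tag) */
    trace_event("pvm_send", "return", status);    /* returned status code       */
    (void)tag;
    return status;
}

int main(void)
{
    traced_send(0x40001, 1);   /* hypothetical destination task id and tag */
    return 0;
}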

There are two distinct trace playing modes in XPVM, which provide SITE with support for recording the test path and re-testing: firstly, "Trace OverWrite" mode is used to play traces in "real time" as they are collected; secondly, "Trace Playback" mode allows analysis of traces "post-mortem" by playing back saved trace files.

4.8 Test Results Validator

A test results validator in SITE is like a compiler. Much as a compiler reads and analyzes source code, the validator reads and analyzes the test results with the SMAD tree. It introduces the static testing method to inspect the test results during dynamic testing. The main advantage of using the SMAD tree here is that we do not need a test oracle to compute expected results. The SMAD tree can be used directly for automatic inspection of whether or not the results produced by the software are correct. In the interaction testing, the validator examines the execution of different test paths, which derive from different test data or from the same test data (repeated execution), to test the causal message ordering with the "happened before" relationships in the SMAD tree.

The validator receives the test results during test execution. After inspecting the test results, it computes the defect rate and stores it in the defect rate file, thus providing data to the quality analyst dynamically. According to the test requirements, the test failure report is produced by the validator.

4.9 Simulator

Considering the problem of testing a distributed application with many different computer systems dispersed geographically in a large network, there is no practical way to control input at the dispersed computer systems, so that, without a simulator, testing must be conducted from a few selected test computer systems.

PVM software provides the kind of function that enables a collection of heterogeneous computer systems to be viewed as a single distributed memory parallel computer. It uses the message-passing model to allow programmers to exploit distributed computing across a wide variety of computer types, including MPPs. A key concept in PVM is that it makes a collection of computers appear as one large virtual machine. SITE sits above the PVM software, which is used as a simulator for executing distributed applications under the test driver's control. The PVM software can be instrumental in gathering information about the combined hardware/software system, the software's execution behaviour and how it interacts with the hardware.

5 COMPARISON WITH OTHER TEST ENVIRONMENTS

5.1 An Overview Of The STEP Model And SAAM

Eickelmann and Richardson [5] developed the Software Test Environment Pyramid (STEP) model, which partitions the STE domain into six canonical functions (test execution, test development, test failure analysis, test measurement, test management and test planning) in a corresponding progression of test process evolution: the debugging, demonstration, destruction, evaluation and prevention periods. This correspondence is shown in Figure 5 and the full details can be seen in [5].

Figure 5: STEP Model, adapted from [5] (the canonical functional partitions, from test execution through test planning, aligned with the test process evolution periods of debugging, demonstration, destruction, evaluation and prevention)

Based on the STEP model, a comparison and analysis of three STEs [5], PROTest II (Prolog Test Environment, Version II) [1], TAOS (Testing with Analysis and Oracle Support) [15] and CITE (CONVEX Integrated Test Environment) [17], was made using the Software Architecture Analysis Method (SAAM), which provides an established method for describing and analyzing software architectures. There are three main steps in SAAM [5]: firstly, to characterize a canonical functional partition for the domain; secondly, to create a graphical diagram of each system's structure in the SAAM graphical notation; and thirdly, to allocate the functional partition onto the system structure. This done, the three STEs were analysed to determine whether or not the architecture supports specific tasks, in accordance with the qualities attributable to the system.

In the graphical notation used by SAAM there are four types of components [9]: a process (a unit with an independent thread of control); a computational component (a procedure or module); a passive repository (a file); and an active repository (a database). There are two types of connectors, control flow and data flow, either of which may be uni- or bi-directional. The notation is a concise and simple lexicon, shown in Figure 6.

Figure 6: SAAM architectural notations, adapted from [9] (processes, computational components, passive and active data repositories; uni-/bi-directional control flow and data flow connections)

5.2 SITE SAAM Description And Functional Allocation

The SAAM graphical depiction of SITE is shown in Figure 7. SITE supports statistics-based testing on top of specification-based testing, addressing the two main issues in software testing, when to stop testing and how good the software is after testing. It provides automatic support for test execution by the test driver, test development by the SMAD tree editor and the test data generator, test failure analysis by the test results validator, test measurement by the quality analyst, test management by the test manager and test planning by the modeller. These tools are integrated around an object management system [16] which includes a public, shared data model describing the data entities and relationships which are manipulable by these tools.
SITE enables early entry of the test process into the life cycle due to the definition of the quality planning and message-flow routines in the modelling. After well-prepared modelling and requirements specification are undertaken, the test process and the software design and implementation can proceed concurrently.

Figure 7: The SITE system structure and functional allocation through SAAM (the test manager, test driver, simulator, SMAD tree editor, test data generator, test path tracer, test results validator, quality analyst and modeller allocated to the six canonical functions around an object management system, producing the test planning, test execution, test failure and test quality reports)

5.3 STE Comparison

The use of SAAM provides a canonical functional partition to characterize the system structure at a component level. The functionalities supported and structural constraints imposed by the architecture are more readily identified when compared in a uniform notation [5]. A comparison of four STEs, PROTest II, TAOS, CITE and SITE, made using SAAM is shown in Figure 8.

Figure 8: A comparison of four STEs by SAAM (which of the canonical functions, test execution, test development, test failure analysis, test measurement, test management and test planning, each of PROTest II, TAOS, CITE and SITE supports)

A test process focus was also identified for each STE across the software development life cycle, shown in Figure 9.

Figure 9: Test process focus across the life cycle (implementation-based testing places test design and test case generation after coding; specification-based testing starts test design from the analysed problem with its specification, alongside software design and coding; statistics-based testing starts from the analysed problem with modelling, so that specification, software development and testing proceed together)

- PROTest II and CITE support implementation-based testing and have a destructive test process focus. This focus has a limited scope of life cycle applicability, as it initiates testing after implementation for the purpose of detecting failures.
- TAOS supports specification-based testing and has an evaluative test process focus. An evaluative test process focus provides complete life cycle support, as failure detection extends from requirements and design to code.
- SITE supports statistics-based testing and has a preventive test process focus. It focuses on fault prevention through parallel development and test processes. SITE relies on timely testing to improve software specifications by building models that show the consequences of the software specifications.

There are some differences amongst implementation-based, specification-based and statistics-based testing. With implementation-based testing, only a set of input data can be generated from an implementation; the expected outputs cannot be derived from the implementation. In this case, the existence of an oracle (in the human mind) must be assumed, and the test results have to be checked against that oracle. With specification-based testing, both the test input data and the expected outputs can be generated from a specification. Statistics-based testing sits on top of specification-based testing, with the quality plan coming before the specification.

6 CONCLUSION

The support of a fully automated test environment for distributed applications has been a significant issue for the software development process.
In this paper, an operational environment for testing distributed applications is proposed. An essential component for developing quality software in this operational environment is SITE. It consists of control components (the test manager and the test driver), computational components (the modeller, the SMAD tree editor, the quality analyst, the test data generator, the test paths tracer, the test results validator and the simulator) and an integrated database. The activities of the test process are integrated around an object management system which includes a public, shared data model describing the data entities and relationships which are manipulable by these tools. SITE addresses the two crucial requirements of software testing, when to stop testing and how good the software is after testing, and provides automated support for test execution, test development, test failure analysis, test measurement, test management and test planning.

ACKNOWLEDGEMENT

The authors would like to thank Dr C. K. Cho of Computa and Ms N. S. Eickelmann for their suggestions and corrections, which were useful for improving the paper. The work of one author, H. D. Chu, is funded by the National Science Council in Taiwan, from whom he received a fellowship to work toward a doctoral degree.

REFERENCES

[1] Belli, F. & Jack, O., Implementation-based analysis and testing of Prolog programs. In Proceedings of the International Symposium on Software Testing and Analysis, 70-80, Cambridge, Massachusetts, 1993.
[2] Cho, C. K., Quality Programming: Developing and Testing Software with Statistical Quality Control. John Wiley & Sons, Inc., New York, 1988.
[3] Chu, H. D. & Dobson, J. E., FAST: A Framework for Automating Statistics-based Testing. Technical Report 564, Dept. of Computing Science, University of Newcastle upon Tyne, 1997.
[4] Deutsch, M. S. & Willis, R. R., Software Quality Engineering: A Total Technical and Management Approach. Prentice Hall, Inc., Englewood Cliffs, NJ, 1988.
[5] Eickelmann, N. S. & Richardson, D. J., An Evaluation of Software Test Environment Architectures. In Proceedings of the 18th International Conference on Software Engineering, 353-364, Berlin, Germany, 1996.
[6] Geist, A., Beguelin, A., Dongarra, J., Jiang, W., Manchek, R. & Sunderam, V., PVM: Parallel Virtual Machine, A Users' Guide and Tutorial for Networked Parallel Computing. MIT Press, Cambridge, MA, 1994.
[7] Geist, G. A., Kohl, J. A. & Papadopoulos, P. M., PVM and MPI: a Comparison of Features. Available from http://www.epm.ornl.gov/pvm/pvm_home.html/PVMvsMPI.ps.
[8] Hörcher, H. M. & Peleska, J., Using Formal Specifications to Support Software Testing. Software Quality Journal, 4, 309-327, 1995.
[9] Kazman, R., Bass, L., Abowd, G. & Webb, W., SAAM: A Method for Analyzing the Properties of Software Architectures. In Proceedings of the 16th International Conference on Software Engineering, 81-90, Sorrento, Italy, 1994.
[10] Kohl, J. A. & Geist, G. A., XPVM 1.0 User's Guide, November 1996.
[11] Myers, G. J., The Art of Software Testing. John Wiley & Sons, New York, 1978.
[12] Norman, S., Software Testing Tools. Ovum Ltd, London, 1993.
[13] Newton, P. & Dongarra, J., Overview of VPE: A Visual Environment for Message-Passing Parallel Programming. Available from http://www.cs.utk.edu/~newton/vpe/other_docs.html.
[14] Poston, R. M., Automating Specification-Based Software Testing. IEEE Computer Society Press, Los Alamitos, CA, 1996.
[15] Richardson, D. J., TAOS: Testing with Analysis and Oracle Support. In Proceedings of the International Symposium on Software Testing and Analysis, 138-153, Seattle, Washington, 1994.
[16] Sommerville, I., Software Engineering (Fifth ed.). Addison-Wesley Publishing Company, Wokingham, England, 1996.
[17] Vogel, P. A., An Integrated General Purpose Automated Test Environment. In Proceedings of the International Symposium on Software Testing and Analysis, 61-69, Cambridge, Massachusetts, 1993.
