ISSRE 2002


Agenda

Friday, 15 November 2002

8:30-9:30
Keynote: TBA, Ravi Iyer, Professor, University of Illinois

9:30-10:00
Break

10:00-11:30

Track 1: Practical Experience with Testing
Session chair: Jeff Offutt, George Mason University, USA
  • Test Reuse in the Spreadsheet Paradigm
  • A Case Study Using the Round-Trip Strategy for State-Based Class Testing
  • An Empirical Study of Tracing Techniques from a Failure Analysis Perspective

Track 2: Reliability Prediction and Analysis
Session chair: Katerina Goseva-Popstojanova, West Virginia University, USA
  • Worst Case Reliability Prediction Based on a Prior Estimate of Residual Defects
  • A Vector Markov Model for Structural Coverage Growth and the Number of Failure Occurrences
  • Blocking-based Simultaneous Reachability Analysis of Asynchronous Message-Passing Programs

Track 3: Fast Abstracts
Session chair: Sachin Garg, Avaya Labs, USA
  • Internet, E-Business and Software
  • Testing
  • Formal Methods

11:30-1:00
Lunch

1:00-2:30

Track 1: Software Mutation
Session chair: Aditya Mathur, Purdue University, USA
  • Emulation of software faults by educated mutations at machine-code level
  • Mutation of Java Objects
  • Inter-Class Mutation Operators for Java

Track 2: Reliability Assessment
Session chair: Linda Rosenberg, NASA, USA
  • Reliability Assessment of Framework-Based Distributed Embedded Software Systems
  • Effect of Disturbances on the Convergence of Failure Intensity
  • Toward a Quantifiable Definition of Software Faults

Track 3: Fast Abstracts
Session chair: Dr. Khalid Lateef, Titan Systems, Inc., USA
  • Software Process and Metrics
  • Networked and Distributed Systems Dependability

2:30-3:30
Student Presentations

3:30-4:30
Closing Session


8:30 - 9:30am, Keynote

TBA
Ravi Iyer, Professor, University of Illinois

Bio
Ravishankar K. Iyer is Director of the Coordinated Science Laboratory (CSL) at the University of Illinois at Urbana-Champaign, where he is the George and Ann Fisher Distinguished Professor of Engineering. He holds appointments in the Department of Electrical and Computer Engineering and the Department of Computer Science, and is Co-Director of the Center for Reliable and High-Performance Computing at CSL. His research interests are in reliable and secure networked systems. He currently leads the Chameleon-ARMORs project at Illinois, which is developing adaptive architectures to support a wide range of dependability and security requirements in heterogeneous networked environments.

Professor Iyer is a Fellow of the IEEE and the ACM, and an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA). He has received several awards, including the Humboldt Foundation Senior Distinguished Scientist Award for excellence in research and teaching, the AIAA Information Systems Award and Medal for "fundamental and pioneering contributions towards the design, evaluation, and validation of dependable aerospace computing systems," and the IEEE Emanuel R. Piore Award "for fundamental contributions to measurement, evaluation, and design of reliable computing systems."


10:00 - 11:30am, Track 1: Practical Experience with Testing

Session chair:
Jeff Offutt, George Mason University, USA

Test Reuse in the Spreadsheet Paradigm
Marc Fisher II, Dalai Jin, Gregg Rothermel, Margaret Burnett

Spreadsheet languages are widely used by a variety of end users to perform many important tasks. Despite their perceived simplicity, spreadsheets often contain faults. Furthermore, users modify their spreadsheets frequently, which can render previously correct spreadsheets faulty. To address this problem, we previously introduced a visual approach by which users can systematically test their spreadsheets, see where new tests are required after changes, and request automated generation of potentially useful test inputs. To date, however, this approach has not taken advantage of previously developed test cases, so users cannot benefit from prior testing efforts when re-testing after changes. We have therefore been investigating ways to add support for test re-use to our spreadsheet testing methodology. In this paper we present a test re-use strategy for spreadsheets and the algorithms that implement it, describe their integration into our spreadsheet testing methodology, and report results of a case study examining the application of this strategy.

A Case Study Using the Round-Trip Strategy for State-Based Class Testing
G. Antoniol, L. C. Briand, M. Di Penta, Y. Labiche

A number of strategies have been proposed for state-based class testing. An important proposal made by Chow, subsequently adapted by Binder, consists of deriving test sequences that cover all round-trip paths in a finite state machine (FSM). Under a number of (rather strong) assumptions, and for traditional FSMs, it can be shown that all operation and transfer errors in the implementation are uncovered. Through experimentation, this paper investigates the strategy in the context of UML statecharts. Based on a set of mutation operators proposed for object-oriented code, we seed a significant number of faults in an implementation of a specific container class. We then investigate the effectiveness of four test teams at uncovering faults, based on the round-trip path strategy, and analyze the faults that seem to be difficult to detect. Our main conclusion is that the round-trip path strategy is reasonably effective at detecting faults (87% on average, as opposed to 69% for size-equivalent random test cases), but that a significant number of faults can be detected with high probability only by augmenting the round-trip strategy with a traditional black-box strategy such as category-partition testing. This increases the number of test cases to run, and therefore the cost of testing, so a cost-benefit analysis weighing the increased testing effort against the likely gain in fault detection is necessary.

An Empirical Study of Tracing Techniques from a Failure Analysis Perspective
Satya Kanduri and Sebastian Elbaum

Tracing is a dynamic analysis technique for continuously capturing events of interest in a running program. The occurrence of a statement, the invocation of a function, and the trigger of a signal are examples of traced events. Software engineers employ traces to accomplish various tasks, ranging from performance monitoring to failure analysis. Despite its capabilities, tracing can negatively impact the performance and general behavior of an application. To minimize that impact, traces are normally buffered and transferred to (slower) permanent storage at specific intervals. This scenario presents a delicate balance: increased buffering can minimize the impact on the target program, but it increases the risk of losing valuable collected data in the event of a failure; frequent disk transfers can ensure traced-data integrity, but they risk a high impact on the target program. We conducted an experiment involving six tracing schemes and various buffer sizes to address these trade-offs. Our results highlight opportunities for tailored tracing schemes that would benefit failure analysis.
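The buffering trade-off described in this abstract can be made concrete with a small sketch. Everything below is an illustrative toy, not the authors' instrumentation: the `TraceBuffer` class, its parameters, and the list standing in for permanent storage are assumptions of this example.

```python
# Toy sketch of buffered tracing (hypothetical class, not the authors' tool).
# Events accumulate in memory and are flushed to permanent storage only when
# the buffer fills: a larger buffer means fewer (slow) transfers, but more
# trace data at risk if the program fails before the next flush.
class TraceBuffer:
    def __init__(self, buffer_size, storage):
        self.buffer_size = buffer_size  # events held before a flush
        self.storage = storage          # stands in for (slower) permanent storage
        self.pending = []

    def record(self, event):
        self.pending.append(event)
        if len(self.pending) >= self.buffer_size:
            self.flush()

    def flush(self):
        self.storage.extend(self.pending)  # the costly disk transfer
        self.pending.clear()

    def at_risk(self):
        # events that would be lost if the target program failed right now
        return len(self.pending)


disk = []
tracer = TraceBuffer(buffer_size=4, storage=disk)
for event in range(10):
    tracer.record(event)
# two flushes moved 8 events to storage; 2 remain at risk in the buffer
```

With `buffer_size=10` there would be a single transfer but all ten events would be at risk until it happens, which is exactly the balance the experiment explores.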


10:00 - 11:30am, Track 2: Reliability Prediction and Analysis

Session chair:
Katerina Goseva-Popstojanova, West Virginia University, USA

Worst Case Reliability Prediction Based on a Prior Estimate of Residual Defects
Peter G. Bishop and Robin E. Bloomfield

In this paper we extend an earlier worst-case bound reliability theory to derive a worst-case reliability function R(t), which gives the worst-case probability of surviving a further time t given an estimate of the residual defects in the software and a prior test time T. The earlier theory and its extension are presented, and the paper also considers the case where there is a low probability of any defect existing in the program. The implications of the theory are discussed and compared with alternative reliability models.
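For context, the earlier worst-case result that this paper extends can be evaluated numerically. The formula below is stated as an assumption of this sketch (a Bishop-Bloomfield bound as commonly quoted); consult the paper for the exact theory and the extended R(t).

```python
import math

# Numerical sketch of the earlier worst-case bound referred to above (an
# assumption of this example, not the paper's R(t) extension): with N
# residual defects and prior test time T, the worst-case failure rate is
# bounded by N / (e * T), regardless of the individual defect rates.
def worst_case_failure_rate(n_defects, test_time):
    return n_defects / (math.e * test_time)

# e.g. 10 estimated residual defects after 1000 hours of testing
bound = worst_case_failure_rate(10, 1000.0)  # worst-case failures per hour
```

The striking feature of bounds of this kind is that they need only an estimate of residual defects and the test time, not the per-defect failure rates.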

A Vector Markov Model for Structural Coverage Growth and the Number of Failure Occurrences
Michael Grottke

Most software reliability growth models specify the expected number of failures experienced as a function of testing effort or calendar time. However, there are approaches to modeling the development of intermediate factors driving failure occurrences. This paper begins by presenting a model framework consisting of four consecutive relationships. It is shown that a differential equation representing this framework is a generalization of several finite failure category models.

The relationships between the number of test cases executed and expected structural coverage, and between expected structural coverage and the expected number of failure occurrences are then explored further.

A non-homogeneous Markov model allowing for partial redundancy in sampling code constructs is developed. The model bridges the gap between setups related to operational testing and systematic testing, respectively. Two extensions of the model considering the development of the number of failure occurrences are discussed.

The paper concludes by showing that the extended models fit into the structure of the differential equation presented at the beginning, which permits further interpretation.
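The gap between operational and systematic testing that the model bridges can be illustrated with two extreme sampling regimes. This baseline sketch is ours, not the paper's vector Markov model, and the uniform-sampling assumption is a deliberate simplification.

```python
# Simplified illustration (not Grottke's model): expected coverage of g code
# constructs after n test cases under two extreme sampling regimes.
# Operational testing is idealized as sampling constructs uniformly with
# replacement (full redundancy); systematic testing as sampling without
# replacement (no redundancy). The paper's non-homogeneous Markov model
# interpolates between such extremes via partial redundancy.
def expected_coverage_operational(n, g):
    # each test exercises one construct uniformly at random, with replacement
    return 1.0 - (1.0 - 1.0 / g) ** n

def expected_coverage_systematic(n, g):
    # each test exercises one previously uncovered construct
    return min(n / g, 1.0)
```

After 50 tests over 100 constructs, systematic sampling reaches 50% expected coverage while redundant random sampling lags behind it; that lag is the redundancy effect the model captures.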

Blocking-based Simultaneous Reachability Analysis of Asynchronous Message-Passing Programs
Yu Lei and Kuo-chung Tai

Reachability analysis of a concurrent program derives global states of the program and detects the existence of deadlocks and other types of faults. Due to the state space explosion problem, how to reduce the state space in reachability analysis while preserving some fault detection capabilities has been investigated for a long time. Existing reachability analysis techniques for asynchronous message-passing programs assume causal communication, which means that messages sent to a destination are received in the order they were sent.

In this paper, we present a new reachability analysis approach, called blocking-based simultaneous reachability analysis (BSRA). BSRA can be applied to asynchronous message-passing programs based on any communication scheme. The main idea of BSRA is the following. From a global state g, processes are allowed to proceed simultaneously until each of them terminates or is ready to execute a receive operation. Global states reached by such executions from g are called next blocking points of g. For each next blocking point of g, possible matches between waiting messages and receive operations are performed to produce immediate BSRA-based successor states of g. The intermediate global states from g to each of g's immediate BSRA-based successors are not saved. We describe a BSRA-based algorithm for generating reachability graphs and show that this algorithm guarantees the detection of deadlocks. Our empirical results indicate that BSRA significantly reduces the number of states in reachability graphs. Extensions of BSRA for partial-order reduction and model checking are briefly discussed.
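The "run until blocked, then match" idea can be sketched on a toy message-passing model. Everything below, including the operation encoding, the mailbox semantics, and the breadth-first search, is a simplification for illustration, not the paper's BSRA algorithm.

```python
from collections import deque

# Toy sketch of the blocking-based idea (a simplification, not the paper's
# algorithm). Each process is a list of operations: ('send', dest, msg) or
# ('recv',). From a global state, every process runs until it terminates or
# blocks on a receive (a "next blocking point"); only there do we branch on
# the possible matches between waiting messages and receive operations.
# Intermediate states between blocking points are never stored.

def advance(pcs, boxes, procs):
    # run non-blocking sends in every process until it finishes or hits a recv
    pcs, boxes = list(pcs), [list(b) for b in boxes]
    for i, prog in enumerate(procs):
        while pcs[i] < len(prog) and prog[pcs[i]][0] == 'send':
            _, dest, msg = prog[pcs[i]]
            boxes[dest].append(msg)
            pcs[i] += 1
    return tuple(pcs), tuple(tuple(sorted(b)) for b in boxes)

def find_deadlock(procs):
    # breadth-first search over blocking points only; returns a deadlocked
    # global state (blocked processes, no deliverable messages) or None
    start = advance([0] * len(procs), [[] for _ in procs], procs)
    seen, frontier = {start}, deque([start])
    while frontier:
        pcs, boxes = frontier.popleft()
        successors = []
        for i, prog in enumerate(procs):
            if pcs[i] < len(prog):             # this process waits at a 'recv'
                for msg in set(boxes[i]):      # each possible message match
                    nboxes = [list(b) for b in boxes]
                    nboxes[i].remove(msg)
                    npcs = list(pcs)
                    npcs[i] += 1
                    successors.append(advance(npcs, nboxes, procs))
        if not successors and any(pcs[i] < len(p) for i, p in enumerate(procs)):
            return pcs, boxes                  # blocked with nothing to deliver
        for s in successors:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return None                                # no deadlock reachable
```

For example, two processes that both receive before sending deadlock at the very first blocking point, while send-then-receive pairs complete without deadlock.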


10:00 - 11:30am, Track 3: Fast Abstracts

Session chair:
Sachin Garg, Avaya Labs, USA

Internet, E-Business and Software

  • Conflicting Forces for Software Reliability in e-Business
  • Dependable Web Service
  • Virtual Software Enterprises for Reliable Software Development
  • Simultaneous Data Extraction and Verification using Semistructured Constraints

Testing

  • Relationship Between Test Effectiveness and Coverage
  • Adapted Statistical Usage Testing: A Case Study
  • Scalable Source Code Debugger Agent to Provide Program Debugging in a Script Testing Environment
  • Model-Based Extreme Testing
  • A Framework for Experimental Error Propagation Analysis of Software Architecture Specifications

Formal Methods

  • Whole-Program Specifications Permit Better Abstraction and Concurrent Implementations
  • A Formal Method with TUG for Developing Reliable Programs


1:00 - 2:30pm, Track 1: Software Mutation

Session chair:
Aditya Mathur, Purdue University, USA

Emulation of software faults by educated mutations at machine-code level
João Durães and Henrique Madeira

This paper proposes a new technique to emulate software faults through selective mutations introduced at the machine-code level, and presents an experimental study on the accuracy of the injected faults. The proposed method consists of finding key programming structures at the machine-code level where high-level software faults (i.e., bugs) can be emulated. The main advantage of emulating software faults at the machine-code level is that faults can be injected even when the source code of the target application is not available, which is very important for the evaluation of COTS components and for the validation of software fault tolerance techniques in COTS-based systems. The analysis of bug reports and of common pitfalls of popular programming languages was used to define the experimental setup for the accuracy evaluation of our approach. Starting from the orthogonal defect classification (ODC), faults of each ODC class are characterized in a more detailed way, and the precision of the proposed technique is evaluated by comparing the impact (failure modes) of the high-level faults with that of the selective mutations introduced at the machine-code level. This evaluation used several real programs and many different types of faults; additionally, it includes a study of the key aspects that may affect the technique's accuracy, such as compiler optimization options, the use of different compilers for the same language, and the use of different programming languages. The portability of the technique depends mainly on the programming model of the target processor, and the results show that classes of faults such as assignment, checking, interface, and simple algorithm faults can be directly emulated using this technique.

Mutation of Java Objects
Roger T. Alexander, James M. Bieman, Sudipto Ghosh, Bixia Ji

Fault-insertion-based techniques have been used for measuring the test adequacy and testability of programs. Mutation analysis inserts faults into a program with the goal of creating mutation-adequate test sets that distinguish each mutant from the original program. Software testability is measured by calculating the probability that a program will fail on the next test input drawn from a predefined input distribution, given that the software contains a fault. Inserted faults must represent plausible errors.

It is relatively easy to apply standard transformations to mutate scalar values such as integers, floats, and character data, because their semantics are well understood. Mutating objects that are instances of user-defined types is more difficult. There is no obvious way to modify such objects in a manner consistent with realistic faults without writing custom mutation methods for each object class. We propose a new object mutation approach, along with a set of mutation operators and support tools, for inserting faults into objects that instantiate items from common Java libraries heavily used in commercial software, as well as user-defined classes. A preliminary evaluation of our technique shows that it should be effective for evaluating real-world software test suites.
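The "easy" scalar case that this abstract contrasts with object mutation can be shown in a few lines. This is a generic illustration of mutation analysis (in Python, not the authors' Java tooling): a mutation operator rewrites `+` to `-`, and a test input kills the mutant if the two versions disagree.

```python
import ast

# A classic scalar mutation operator (illustrative, not the authors' tool):
# replace every '+' with '-' in a function's AST, compile the mutant, and
# check whether a test input "kills" it, i.e. distinguishes it from the
# original program.
class AddToSub(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def total(a, b):
    return a + b

source = "def total(a, b):\n    return a + b\n"
tree = ast.fix_missing_locations(AddToSub().visit(ast.parse(source)))
namespace = {}
exec(compile(tree, "<mutant>", "exec"), namespace)
mutant = namespace["total"]

killed = total(2, 3) != mutant(2, 3)  # 5 vs -1: this input kills the mutant
weak = total(2, 0) != mutant(2, 0)    # 2 vs 2: this input does not
```

For user-defined object types there is no such obvious, semantics-preserving transformation, which is precisely the gap the proposed operators address.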

Inter-Class Mutation Operators for Java
Yu-Seung Ma, Yong-Rae Kwon, and Jeff Offutt

The effectiveness of mutation testing depends heavily on the types of faults that the mutation operators are designed to represent. Therefore, the quality of the mutation operators is key to mutation testing. Mutation testing has traditionally been applied to procedural-based languages, and mutation operators have been developed in support of most of their language features.

Object-oriented programming languages contain new language features, most notably inheritance, polymorphism, and dynamic binding. Not surprisingly, these features allow new kinds of faults, some of which are not modeled by traditional mutation operators. Although mutation operators for OO languages have previously been suggested, our work on OO faults indicates that the previous operators are insufficient to test these language features, particularly at the class testing level. This paper introduces a new set of class mutation operators for the OO language Java. These operators are based on specific OO faults and can be used to detect faults involving inheritance, polymorphism, and dynamic binding. A Java mutation tool is currently under construction.


1:00 - 2:30pm, Track 2: Reliability Assessment

Session chair:
Linda Rosenberg, NASA, USA

Reliability Assessment of Framework-Based Distributed Embedded Software Systems
Farokh B. Bastani, Sung Kim, I-Ling Yen, and Ing-Ray Chen

Distributed embedded software systems, such as sensor networks and command and control systems, are complex systems with stringent performance, reliability, security, and safety constraints. These are also long-lived systems that must be continually upgraded and evolved to incorporate enhanced functionality. One approach for achieving high quality and evolvability for these systems is to organize them in the form of application-oriented frameworks that allow the system to be composed from orthogonal aspects that can be independently developed, evolved, and certified.

In this paper, we define a general framework that allows a distributed embedded system to have relatively independent aspects, including "plug-and-play" capability. We present conditions under which the reliability of the system can be inferred from the reliability of the individual aspects. The approach is illustrated for a framework-based distributed sensor network.

Effect of Disturbances on the Convergence of Failure Intensity
Joao W. Cangussu, Aditya P. Mathur and Ray A. DeCarlo

We report a study to determine the impact of four types of disturbances on the failure intensity of a software product undergoing system test. Hardware failures, the discovery of a critical fault, and attrition in the test team are examples of disturbances that will likely affect the convergence of the failure intensity to its desired value. Such disturbances are modeled as impulse, pulse, step, and white-noise inputs. Our study examined, in quantitative terms, the impact of such disturbances on the convergence behavior of the failure intensity. Results from this study reveal that the behavior of the state model, proposed elsewhere, is consistent with what one might predict. The model is useful in that it provides a quantitative measure of the delay one can expect when a disturbance occurs.

Toward a Quantifiable Definition of Software Faults
John C. Munson, Allen P. Nikora

An important aspect of developing models that relate the number and type of faults in a software system to a set of structural measurements is defining what constitutes a fault. By definition, a fault is a structural imperfection in a software system that may lead to the system's eventually failing. A measurable and precise definition of what faults are makes it possible to identify and count them accurately, which in turn allows the formulation of models relating fault counts and types to other measurable attributes of a software system. Unfortunately, the most widely used definitions are not measurable: there is no guarantee that two different individuals looking at the same set of failure reports and the same set of fault definitions will count the same number of underlying faults. The incomplete and ambiguous nature of current fault definitions adds a noise component to the inputs used in modeling fault content. If this noise component is sufficiently large, any attempt to develop a fault model will produce invalid results.

As part of our on-going work in modeling software faults, we have developed a method of unambiguously identifying and counting faults. Specifically, we base our recognition and enumeration of software faults on the grammar of the language of the software system. By tokenizing the differences between a version of the system exhibiting a particular failure behavior, and the version in which changes were made to eliminate that behavior, we are able to unambiguously count the number of faults associated with that failure. With modern configuration management tools, the identification and counting of software faults can be automated.
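The tokenize-and-diff step can be sketched with the standard library. The crude tokenizer and the C-like fragment below are assumptions of this example; the authors' method works on the actual grammar of the system's language.

```python
import difflib

# Sketch of counting fault regions by diffing the token streams of a faulty
# version and its repaired version (illustrative only; a whitespace split
# stands in for a real grammar-based tokenizer).
def tokens(source):
    return source.split()

def changed_regions(faulty, fixed):
    # each non-'equal' opcode marks one contiguous region of changed tokens
    matcher = difflib.SequenceMatcher(a=tokens(faulty), b=tokens(fixed),
                                      autojunk=False)
    return [op for op in matcher.get_opcodes() if op[0] != "equal"]

faulty = "if ( x > 0 ) y = x"
fixed  = "if ( x >= 0 ) y = x"
regions = changed_regions(faulty, fixed)  # one region: '>' became '>='
```

Because the comparison is mechanical, two people applying it to the same pair of versions obtain the same count, which is the repeatability property argued for above.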


1:00 - 2:30pm, Track 3: Fast Abstracts

Session chair:
Dr. Khalid Lateef, Titan Systems, Inc., USA

Software Process and Metrics

  • Software Reliability Prediction is not a Science...Yet
  • A Software Development Life Cycle Model for Low Maintenance and Concurrency
  • A Methodology for Reliable Concurrent Programming
  • A Hierarchical Classification for Software Health Indicators
  • Combining Process Simulation and Orthogonal Defect Classification for Improving Software Dependability

Networked and Distributed Systems Dependability

  • Error Detection in Distributed Systems
  • A diagnosis service for CORBA-based applications
  • Reliable Computation Using Unreliable Network Tools
  • Failure Detection in Telephone Switching Systems
  • A Hierarchical Trade-off Assessment Model and the Systematic Evaluation of Networked Systems
  • QoS Assurance in Unreliable Networks





All rights reserved © 2002–2003, issre2002.org.
Header photo #6 courtesy of the Annapolis & Anne Arundel County Conference & Visitors Bureau.