ISSRE 2002


Agenda

Thursday, 14 November 2002

8:30-9:30
  Keynote: Everyday Dependability for Everyday Needs, Mary Shaw, A.J. Perlis Professor of Computer Science, Carnegie Mellon University

9:30-10:00
  Break

10:00-11:30
  Track 1: Testing with Formal Methods
  Session chair: Jean-Claude Laprie, LAAS-CNRS, France
    • Testing Processes from Formal Specifications with Inputs, Outputs and Data Types
    • Saturation Effects in Testing of Formal Models
    • Informal Proof Analysis Towards Testing Enhancement
  Track 2: Reliability Modeling
  Session chair: Sachin Garg, Avaya Labs, USA
    • Heterogeneous Software Reliability Modeling
    • A Reliability Estimator for Model Based Software Testing
    • Reliability Prediction and Sensitivity Analysis Based on Software Architecture
  Track 3: Failure Detection and Recovery
  Session chair: Michael R. Lyu, Computer Science & Engineering Dept., The Chinese University of Hong Kong, China
    • Automatic Failure Detection, Logging, and Recovery for High-Availability Java Servers
    • The Impact of Recovery Mechanisms on the Likelihood of Saving Corrupted State
    • The Architecture and Performance of Automatically Generated Dependability Wrappers
  Track 4: Student Posters

11:30-1:00
  Lunch

1:00-2:00
  Keynote: Donald Ferguson, IBM Fellow and IBM WebSphere Architect

2:00-2:30
  Break

2:30-4:00
  Track 1: Assessment of Testing
  Session chair: Harald Stieber, University of Applied Sciences, Germany
    • Metrics for Measuring the Effectiveness of Software-Testing Tools
    • Optimal Allocation of Testing Resources for Modular Software Systems
    • On Estimating Testing Effort Needed to Assure Field Quality in Software Development
  Track 2: High Availability Software Maintenance
  Session chair: Norm Schneidewind, Naval Postgraduate School, USA
    • A Framework for Live Software Upgrade
    • Modeling and Analysis of Software Rejuvenation in Cable Modem Termination System
    • Dependability Analysis of a Client/Server Software System with Rejuvenation
  Track 3: Fast Abstracts
  Session chair: Ram Chillarege, Chillarege Inc., USA
    • Security
    • Component-based/object-oriented software reliability
    • Modeling

4:00-4:15
  Break

4:15-5:45
  Track 1: Testing Technologies
  Session chair: Yashwant Malaiya, Colorado State University, USA
    • Data Coverage Testing
    • Genes and Bacteria for Automatic Test Cases Optimization in the .NET Environment
    • Fault Detection Capabilities of Coupling-based OO Testing
  Track 2: System Analysis
  Session chair: Sherif Yacoub, HP Labs, USA
    • Improving Usefulness of Software Quality Classification Models Based on Boolean Discriminant Functions
    • Fault Contribution Trees for Product Families
    • Automatic Synthesis of Dynamic Fault Trees from UML System Models
  Track 3: Panel: Open-source Software: More or Less Secure and Reliable? (Jeff Offutt, GMU; Ron Ritchey, Booz-Allen; Brendan Murphy, Microsoft; Mike Shaver, Cluster File Systems/Mozilla)
  Track 4: Student posters continued (ends at 5:00pm)

5:45-6:30
  Break

6:30-10:00
  Banquet


8:30 - 9:30am, Keynote

Everyday Dependability for Everyday Needs
Mary Shaw, A.J. Perlis Professor of Computer Science, Carnegie Mellon University

Abstract
Everyday software must be sufficiently dependable for the needs of everyday people. Everyday people can usually intervene when software misbehaves, and problems with their software are usually irritating but not catastrophic. Everyday software must thus provide cost-effective service with reasonable amounts of human attention. Dependability for these everyday needs arises from matching dependability levels to actual needs, achieving reasonably low failure rates at reasonable cost, providing understandable mechanisms to recognize and deal with failure, and enabling creation of individually-tailored systems and configurations from available resources. This leads to different challenges from mission-critical systems that operate autonomously and risk catastrophic failure.

Much everyday software depends on inexpensive or free information resources available dynamically over the internet or through retail channels. Increasingly, this software is composed by its users rather than by professionals, and the resulting software uses information resources in ways that the resources' creators could not anticipate. Software development in this setting requires methods that tolerate incomplete knowledge, pursue value rather than simply capability, and base reasoning on aggregate rather than fully-detailed information. We will identify research challenges that arise from the need for everyday dependability and give examples of research that addresses these challenges.

Bio
Mary Shaw is the Alan J. Perlis Professor of Computer Science, Co-Director of the Sloan Software Industry Center, and member of the Institute for Software Research International, the Computer Science Department, and the Human Computer Interaction Institute at Carnegie Mellon University. Her research interests in computer science lie primarily in the areas of programming systems and software engineering, particularly value-driven software design, software architecture, programming languages, specifications, and abstraction techniques. She has also participated in developing innovative curricula in Computer Science from the introductory to the doctoral level.

She has been a member of the Carnegie Mellon faculty since completing the Ph.D. degree at Carnegie-Mellon in 1972. From 1992 to 1999 she served as the Associate Dean for Professional Education. In 1997-98 she was a Fellow of the Center for Innovation in Learning. From 1984 to 1987 she served as Chief Scientist of Carnegie Mellon's Software Engineering Institute. She had previously earned a BA (cum laude) from Rice University and worked in systems programming and research at the Research Analysis Corporation and Rice University.

Dr. Shaw is an author or editor of seven books and more than one hundred forty papers and technical reports. In 1993 she received the Warnier Prize for contributions to software engineering. She is a Fellow of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), and the American Association for the Advancement of Science (AAAS). She is also a member of Sigma Xi, the New York Academy of Sciences, and IFIP Working Group 2.10 on Software Architecture.

Further information is available at URL http://www.cs.cmu.edu/~shaw/


10:00 - 11:30am, Track 1: Testing with Formal Methods

Session chair:
Jean-Claude Laprie, LAAS-CNRS, France

Testing Processes from Formal Specifications with Inputs, Outputs and Data Types
Gregory Lestiennes, Marie-Claude Gaudel

Deriving test cases from formal specifications of communicating processes has been studied for a while. Several effective methods have been proposed based on specifications corresponding to FSM (Finite State Machines), LTS (Labelled Transition Systems), etc.

However, these approaches are based on models limited to some finite set of actions, excluding the possibility of exchanging typed values between processes.

More realistic models of communication between processes are provided by I/O automata defined by Lynch or IOTS (Input Output Transition Systems).

This article presents a test derivation and selection method based on a model of communicating processes with inputs, outputs and data types, which is closer to actual implementations of communication protocols.

In a few words, a notion of "exhaustive test set" is derived from the semantics of the formal notation and from the definition of a correct implementation, assuming some "testability hypotheses" on the implementation. Then a finite test set is selected via some "selection hypotheses".

Making the testability conditions explicit allows the definition of an exhaustive test set that improves on the one proposed by Tretmans for IOTS in the case without data types. Moreover, the selection hypotheses make it possible to deal with data types while keeping the test set finite.

Saturation Effects in Testing of Formal Models
Tim Menzies, David Owen, Bojan Cukic

Formal analysis of software is a powerful tool, but it can be too costly. Random search of formal models can reduce that cost, but it is theoretically incomplete. However, random search of finite state machines exhibits an "early saturation effect": it quickly yields nearly everything that a much longer search would find. Hence, the theoretical problem of incompleteness does not arise, provided testing continues until after the saturation point. Such a random search is rapid, consumes little memory, is simple to implement, and can handle very large formal models (in one experiment shown here, over $10^{178}$ states).
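
To make the saturation idea concrete, here is a small illustrative sketch (not the authors' tool or models): a random walk over a randomly generated finite state machine, recording how many distinct states are reached under increasing step budgets. The FSM size, transition density, and budgets are arbitrary assumptions.

```python
# Illustrative sketch of the "early saturation" effect: a random walk over a
# randomly generated finite state machine quickly stops finding new states.
# All sizes and budgets below are arbitrary assumptions, not the authors' setup.
import random

random.seed(0)

N_STATES = 2000          # assumed model size
OUT_DEGREE = 3           # assumed transitions per state
transitions = {s: [random.randrange(N_STATES) for _ in range(OUT_DEGREE)]
               for s in range(N_STATES)}

def random_search(steps, start=0):
    """Walk the FSM at random and return how many distinct states were reached."""
    seen = {start}
    state = start
    for _ in range(steps):
        state = random.choice(transitions[state])
        seen.add(state)
    return len(seen)

for budget in (10, 100, 1_000, 10_000, 100_000):
    print(f"{budget:>7} steps -> {random_search(budget)} distinct states reached")
# Typically the count rises quickly and then flattens: longer searches add
# little beyond the saturation point.
```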

Informal Proof Analysis Towards Testing Enhancement
Guillaume Lussier, Helene Waeselynck

This paper aims at verifying properties of generic fault-tolerance algorithms. Our goal is to enhance the testing process with information extracted from the proof of the algorithm, whether this proof is formal or informal: ideally, testing is intended to focus on the weak parts of the proof (e.g., unproved lemmas or doubtful informal evidence). We use the Fault-Tolerant Rate Monotonic Scheduling algorithm as a case study. This algorithm was proven by informal demonstration, but two faults were revealed afterwards. In this paper, we focus on the analysis of the informal proof, which we restructure in a semiformal proof tree based on natural deduction. From this proof tree, we extract several functional cases and use them for testing a prototype of the algorithm. Experimental results show that a flawed informal proof does not necessarily provide relevant information for testing. It remains to investigate whether formal (partial) proofs allow better connection with potential faults.


10:00 - 11:30am, Track 2: Reliability Modeling

Session chair:
Sachin Garg, Avaya Labs, USA

Heterogeneous Software Reliability Modeling
Wen-Li Wang, Mei-Hwa Chen

A number of Markov-based software reliability models have been developed for measuring software reliability. However, the application of these models is strictly limited to software that satisfies the Markov properties. The objective of our work is to expand the application domain of Markov-based models, so that most software can be modeled and reliability can be measured at the architecture level. To overcome the limitations of the Markov properties, our model takes execution history into account and addresses both deterministic and probabilistic software behaviors. Each state represents the executions of one or more components, depending on the architectural style. In addition, the executions of a component are depicted by distinct states when those executions are influenced by past states. Furthermore, we construct loops to prevent unbounded state expansion and utilize a binomial tree structure to account for all the different execution processes. We show that Markov models are applicable even to software that does not fully satisfy the Markov properties, significantly improving the state of the art in architecture-based software reliability modeling.

A Reliability Estimator for Model Based Software Testing
Kirk Sayre, Jesse Poore

This paper presents a reliability estimator specifically formulated to take advantage of software testing performed using Markov chain models of the use of the software under test. The reliability estimator presented in this paper is useful in the absence of observed failures, behaves in a predictable manner, can make use of pretest reliability information, and has an associated variance.

Reliability Prediction and Sensitivity Analysis Based on Software Architecture
Swapna S. Gokhale and Kishor S. Trivedi

Prevalent approaches to characterize the behavior of monolithic applications are inappropriate to model modern software systems, which are heterogeneous and built using a combination of components picked off the shelf, those developed in-house, and those developed contractually. Development of techniques to characterize the behavior of such component-based software systems based on their architecture is then absolutely essential. Earlier efforts in the area of architecture-based analysis have focused on the development of composite models, which are quite cumbersome due to their inherent largeness and stiffness. In this paper we develop an accurate hierarchical model to predict the performance and reliability of component-based software systems based on their architecture. This model accounts for the variance of the number of visits to each module, and thus provides predictions closer to those provided by a composite model. The approach developed in this paper enables the identification of performance and reliability bottlenecks. We also develop expressions to analyze the sensitivity of the performance and reliability predictions to changes in the parameters of individual modules. In addition, we demonstrate how the hierarchical model can be used to assess the impact of changes in the workload on the performance and reliability of the application. We illustrate the performance and reliability prediction as well as sensitivity analysis techniques with examples.
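
As a rough illustration of the hierarchical idea (not the authors' model, and with made-up transition probabilities and module reliabilities), the sketch below derives expected visit counts for each module from a discrete-time Markov chain of control flow, approximates system reliability as the product of module reliabilities raised to their expected visit counts, and computes the sensitivity of that estimate to each module's reliability.

```python
# Rough sketch of architecture-based hierarchical reliability prediction
# (an illustration of the general idea, not the authors' model).
# Expected visits V_i to each module come from the DTMC of control flow;
# system reliability is then approximated as R ~ prod_i R_i ** V_i.
import numpy as np

# Assumed application architecture: modules 0..2 plus an absorbing exit state.
P = np.array([            # P[i, j]: probability control moves from i to j
    [0.0, 0.6, 0.4, 0.0],
    [0.0, 0.0, 0.7, 0.3],
    [0.2, 0.0, 0.0, 0.8],
    [0.0, 0.0, 0.0, 0.0],  # exit
])
R = np.array([0.999, 0.995, 0.990])   # assumed per-module reliabilities

Q = P[:3, :3]                          # transient part of the chain
N = np.linalg.inv(np.eye(3) - Q)       # fundamental matrix
visits = N[0, :]                       # expected visits, starting in module 0

reliability = float(np.prod(R ** visits))
print("expected visits:", visits.round(3))
print("system reliability:", round(reliability, 6))

# Sensitivity of the estimate: dR/dR_i = V_i * R / R_i, since R = prod_j R_j**V_j.
for i in range(3):
    print(f"dR/dR_{i} ~ {visits[i] * reliability / R[i]:.4f}")
```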


10:00 - 11:30am, Track 3: Failure Detection and Recovery

Session chair:
Michael R. Lyu, Computer Science & Engineering Dept., The Chinese University of Hong Kong, China

Automatic Failure Detection, Logging, and Recovery for High-Availability Java Servers
Reinhard Klemm, Navjot Singh

Many server systems such as e-commerce and telecommunications servers are partially or completely implemented in the Java programming language. One reason why many developers prefer Java over more traditional server implementation languages such as C++ is the perception that reliable code can be produced much more quickly in Java. However, the Java Runtime Environment (JRE) does not detect many server execution failures. Moreover, the JRE does not produce detailed failure logs that would aid in the problem analysis, and it does not restore the full availability of a server once a software failure has occurred. This lack of built-in detection, logging, and recovery mechanisms forces developers of high-availability Java servers to design and implement customized availability-enhancing solutions. In this article, we present the application-independent Java Application Supervisor (JAS), an extension of the JRE. JAS can automatically detect and resolve a variety of application execution problems and failures. A set of simple user-specified policies guides the failure detection, logging, and recovery process in JAS. We describe the features, usage, and architecture of JAS, and an experiment showing the effectiveness of JAS.

The Impact of Recovery Mechanisms on the Likelihood of Saving Corrupted State
Subhachandra Chandra, Peter M. Chen

Recovery systems must save state before a failure occurs to enable the system to recover from the failure. However, recovery will fail if the recovery system saves any state corrupted by the fault. The frequency and comprehensiveness of how a recovery system saves state has a major effect on how often the recovery system inadvertently saves corrupted state. This paper explores and measures that effect. We measure how often software faults in the application and operating system cause real applications to save corrupted state when using different types of recovery systems. We find that generic recovery techniques, such as checkpointing and logging, work well for faults in the operating system. However, we find that they do not work well for faults in the application because the very actions taken to enable recovery often corrupt the state upon which successful recovery depends.

The Architecture and Performance of Automatically Generated Dependability Wrappers
Christof Fetzer and Zhen Xiao

Improving the dependability of distributed services is increasingly important as more and more systems depend on the availability of remote services. The dependability of services has to be addressed in all phases of the life-cycle and at all levels. In this paper we describe a system that can generate wrappers for shared libraries. The system can generate various wrappers like robustness wrappers, buffer overflow detection wrappers, and retry wrappers. Generating wrappers automatically simplifies the task of targeting the wrapper to the life-cycle of an application, e.g., a wrapper in the debugging phase might abort the execution of an application while a wrapper used in the deployed phase should try to keep the application running. We describe the architecture of the generator, the problems of generating wrappers for library functions, and our solutions to these problems. Based on a set of properties declared for a function, the generator can create a variety of wrappers. Performance measurements indicate that the overhead of the generated wrappers is small.
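
For flavour only: the paper wraps shared C libraries, but the general idea of generating a wrapper from a few declared properties of a function can be sketched in Python. The property names (non_null_args, retries, fallback) are invented for this example.

```python
# Toy illustration of generating dependability wrappers from declared
# properties of a function (the paper targets shared C libraries; this
# Python sketch only conveys the idea, and the property format is invented).
import functools
import time

def make_wrapper(func, *, non_null_args=(), retries=0, retry_delay=0.1,
                 fallback=None):
    """Generate a robustness/retry wrapper based on declared properties."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Robustness check: reject argument positions declared as must-not-be-None.
        for index in non_null_args:
            if args[index] is None:
                raise ValueError(f"{func.__name__}: argument {index} must not be None")
        # Retry wrapper: re-attempt the call on failure, then fall back.
        for attempt in range(retries + 1):
            try:
                return func(*args, **kwargs)
            except Exception:
                if attempt == retries:
                    if fallback is not None:
                        return fallback
                    raise
                time.sleep(retry_delay)
    return wrapper

# Usage: wrap a flaky parser so callers see either a result or a safe default.
def parse_config(text):
    return dict(line.split("=", 1) for line in text.splitlines() if line)

safe_parse_config = make_wrapper(parse_config, non_null_args=(0,),
                                 retries=2, fallback={})
print(safe_parse_config("a=1\nb=2"))
```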


1:00 - 2:00pm, Keynote

TBA
Donald Ferguson, IBM Fellow and IBM WebSphere Architect

Bio
Dr. Ferguson is one of 55 IBM Fellows in IBM's engineering community of 160,000 technical professionals. Don is the chief architect and technical lead for IBM's WebSphere platform and family of products. Don's most recent efforts have focused on Web services, business process management, Grid services and application development for WebSphere.

Donald Ferguson earned a Ph.D. in Computer Science from Columbia University in 1989. His thesis studied the application of economic models to the management of system resources in distributed systems. Don joined IBM Research in 1987 and initially led research and advanced development efforts in the areas of file system performance (Hiperbatch), tuning database buffer pools (DB2), goal-oriented performance management and tuning of operating systems (MVS Workload Manager), and workload balancing for parallel transaction processing systems (CICSPLEX/SM).

Starting in 1993, Don focused his efforts on distributed, object-oriented systems. This work centered on CORBA-based SM solutions and frameworks, and evolved into an effort to define frameworks and system structure for CORBA-based object transaction monitors. The early design and prototype of these systems produced the IBM Component Broker and WebSphere family of products.

Don has earned two Corporate Awards (EJB Specification, WebSphere), four Outstanding Technical Awards, and several division awards at IBM. He was the co-program-committee chairman for the First International Conference on Information and Computation Economies. He received a best paper award for work on database buffer pools, has over 24 technical publications, and holds 7 granted or pending patents. He has given approximately ten invited keynote speeches at technical conferences. Don was elected to the IBM Academy of Technology in 1997 and was named a Distinguished Engineer on April Fool's Day, 1998; no one is sure whether the joke was on IBM or Don. He was named an IBM Fellow on May 30, 2001.


2:30 - 4:00pm, Track 1: Assessment of Testing

Session chair:
Harald Stieber, University of Applied Sciences, Germany

Metrics for Measuring the Effectiveness of Software-Testing Tools
James B. Michael, Bernard J. Bossuyt, and Byron B. Snyder

The levels of quality, maintainability, testability, and stability of software can be improved and measured through the use of automated testing tools throughout the software development process. Automated testing tools assist software engineers to gauge the quality of software by automating the mechanical aspects of the software-testing task. Automated testing tools vary in their underlying approach, quality, and ease-of-use, among other characteristics. In this paper we propose a suite of objective metrics for measuring tool characteristics, as an aid in systematically evaluating and selecting automated testing tools.

Optimal Allocation of Testing Resources for Modular Software Systems
Chin-Yu Huang, Jung-Hua Lo, Sy-Yen Kuo, and Michael R. Lyu

In this paper, based on software reliability growth models with a generalized logistic testing-effort function, we study three optimal resource allocation problems in modular software systems during the testing phase: 1) minimization of the number of remaining faults when a fixed amount of testing-effort and a desired reliability objective are given, 2) minimization of the required amount of testing-effort when a specific number of remaining faults and a desired reliability objective are given, and 3) minimization of the cost when the number of remaining faults and a desired reliability objective are given. Several useful optimization algorithms based on the Lagrange multiplier method are proposed, and numerical examples are presented. Our methodologies provide practical approaches to optimizing testing-resource allocation with a reliability objective. In addition, we introduce the testing-resource control problem and compare different resource allocation methods. Finally, we demonstrate how these analytical approaches can be employed in integration testing. Using the proposed algorithms, project managers can allocate limited testing resources easily and efficiently and thus achieve the highest reliability objective during software module and integration testing.
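
To give a feel for this style of optimization, the sketch below solves a simplified variant of problem 1 using exponential software reliability growth models rather than the paper's generalized logistic testing-effort function: allocate a fixed testing-effort budget W across modules to minimize the expected number of remaining faults. The Lagrange condition gives a closed form per module, and the multiplier is found by bisection; all module parameters are invented.

```python
# Illustrative Lagrange-multiplier allocation of a testing-effort budget W
# (a simplified exponential-SRGM variant, not the paper's generalized
# logistic testing-effort model; all module parameters are made up).
# Minimize sum_i a_i * exp(-b_i * w_i)  subject to  sum_i w_i = W, w_i >= 0.
import math

a = [120.0, 80.0, 60.0, 40.0]      # expected initial faults per module
b = [0.010, 0.020, 0.015, 0.030]   # fault-detection rates per unit effort
W = 200.0                          # total testing-effort budget

def allocation(lam):
    """Optimal w_i for a given multiplier: w_i = max(0, ln(a_i b_i / lam) / b_i)."""
    return [max(0.0, math.log(ai * bi / lam) / bi) for ai, bi in zip(a, b)]

# Bisection on lambda: total allocated effort decreases as lambda grows.
lo, hi = 1e-9, max(ai * bi for ai, bi in zip(a, b))
for _ in range(100):
    mid = (lo + hi) / 2
    if sum(allocation(mid)) > W:
        lo = mid
    else:
        hi = mid

w = allocation(hi)
remaining = sum(ai * math.exp(-bi * wi) for ai, bi, wi in zip(a, b, w))
print("effort per module:", [round(x, 1) for x in w])
print("expected remaining faults:", round(remaining, 1))
```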

On Estimating Testing Effort Needed to Assure Field Quality in Software Development
Osamu Mizuno, Eijiro Shigematsu, Yasunari Takagi, and Tohru Kikuno

In practical software development, software quality is generally evaluated by the number of residual defects. To keep the number of residual defects within a permissible value, too much effort is often assigned to software testing.

In this paper, we develop a statistical model to determine the amount of testing effort needed to assure field quality. The model explicitly includes design, review, and test (including debug) activities.

First, we construct a linear multiple regression model that clarifies the relationship between the number of residual defects and the effort assigned to design, review, and test activities. We then confirm the applicability of the model by statistical analysis of actual project data.

Next, we derive an equation from the model to determine the test effort; its parameters are the permissible number of residual defects, the design effort, and the review effort. The equation thus determines the test effort needed to assure the permissible number of residual defects. Finally, we conduct an experimental evaluation using actual project data and show the usefulness of the equation.
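
A minimal sketch of the regression-and-invert scheme, with fabricated project data rather than the paper's data or coefficients: fit residual defects as a linear function of design, review, and test effort, then solve the fitted equation for the test effort that meets a permissible defect count.

```python
# Minimal sketch of the regression-and-invert idea: fit
#   residual_defects ~ c0 + c1*design + c2*review + c3*test
# on fabricated project data, then solve for the test effort that keeps
# residual defects at a permissible level. Not the paper's actual model.
import numpy as np

# Assumed historical project data: effort in person-days, defects after release.
design = np.array([30, 45, 25, 60, 40, 50])
review = np.array([10, 20,  8, 25, 15, 18])
test   = np.array([40, 55, 30, 80, 50, 65])
residual_defects = np.array([12, 6, 15, 3, 8, 5])

X = np.column_stack([np.ones_like(design), design, review, test])
coef, *_ = np.linalg.lstsq(X, residual_defects, rcond=None)
c0, c1, c2, c3 = coef

def required_test_effort(permissible, design_eff, review_eff):
    """Invert the fitted equation for the test-effort term."""
    return (permissible - c0 - c1 * design_eff - c2 * review_eff) / c3

print("coefficients:", coef.round(3))
print("test effort for <=5 residual defects:",
      round(required_test_effort(5, design_eff=45, review_eff=15), 1))
```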


2:30 - 4:00pm, Track 2: High Availability Software Maintenance

Session chair:
Norm Schneidewind, Naval Postgraduate School, USA

A Framework for Live Software Upgrade
Lizhou Yu, Gholamali C. Shoja, Hausi A. Muller, Anand Srinivasan

The demand for continuous service in mission- and safety-critical software applications is increasing. For these applications, it is unacceptable to shut down and restart the system during a software upgrade. This paper examines issues relating to on-line upgrades of mission- and safety-critical software applications. We believe that a dynamic architecture and communication model provides an excellent foundation for runtime software evolution. To address these issues, we designed and implemented a framework covering four main areas: a dynamic architecture and communication model, reconfiguration management, the upgrade protocol, and the upgrade technique. The framework can be used for on-line upgrading of multi-task software applications that provide multiple mission-critical services. In the framework discussed in the paper, the ability to make runtime modifications is considered at the software architecture level. The dynamic architecture and communication model makes it possible for software applications to add, remove, and hot-swap modules on the fly. The transition scenario is specified by the upgrade protocol. The framework also provides a mechanism for maintaining state consistency. To ensure a reliable upgrade, a two-phase commit protocol is used to implement atomic upgrade transactions. In addition, a command-line interface facilitates reconfiguration management. A simulation study of the proposed framework was carried out for live software upgrade of several practical applications. The results of the simulation are also presented.
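
A very small sketch of the atomic-upgrade step mentioned above, with invented interfaces (the authors' framework also covers state transfer, the upgrade protocol, and reconfiguration management): a coordinator asks every affected module to prepare, and swaps implementations only if all modules vote yes.

```python
# Toy two-phase-commit upgrade coordinator (invented interfaces; this is a
# sketch of the atomic-upgrade idea only, not the authors' framework).
class Module:
    def __init__(self, name, impl_version):
        self.name = name
        self.version = impl_version

    def prepare_upgrade(self, new_version):
        """Phase 1: vote yes only if the module can quiesce safely."""
        return True          # a real module would drain requests and save state

    def commit_upgrade(self, new_version):
        """Phase 2: hot-swap the implementation."""
        self.version = new_version

    def abort_upgrade(self):
        pass                 # resume normal operation with the old version

def upgrade(modules, new_version):
    # Phase 1: collect votes from every module touched by the upgrade.
    if all(m.prepare_upgrade(new_version) for m in modules):
        # Phase 2: everyone agreed, so apply the swap everywhere.
        for m in modules:
            m.commit_upgrade(new_version)
        return True
    for m in modules:
        m.abort_upgrade()
    return False

mods = [Module("router", "1.0"), Module("billing", "1.0")]
print("upgrade committed:", upgrade(mods, "2.0"))
print({m.name: m.version for m in mods})
```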

Modeling and Analysis of Software Rejuvenation in Cable Modem Termination System
Yun Liu, Yue Ma, James J. Han, Haim Levendel, and Kishor S. Trivedi

In order to reduce system outages and the associated downtime cost caused by the "software aging" phenomenon, we propose to use software rejuvenation as a proactive system maintenance technique deployed in a CMTS (Cable Modem Termination System) cluster system. Different rejuvenation policies are studied from the perspective of implementation and availability. To evaluate these policies, stochastic reward net models are developed and solved by SPNP (Stochastic Petri Net Package). Numerical results show that significant improvement in capacity-oriented availability and decrease in downtime cost can be achieved. The optimization of the rejuvenation interval in the time-based approach and the effect of the prediction coverage in the measurement-based approach are also studied in this paper.
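
A much simplified illustration of why rejuvenation can pay off, using a four-state continuous-time Markov chain rather than the paper's stochastic reward nets, and with invented rates: the system ages from a robust state into a failure-probable one, may fail from there, and can instead be rejuvenated; steady-state availability is compared with and without rejuvenation.

```python
# Simplified four-state CTMC for time-based rejuvenation (illustration only;
# the paper uses stochastic reward nets solved with SPNP, and these rates are
# invented). States: 0 robust, 1 failure-probable, 2 failed, 3 rejuvenating.
import numpy as np

def availability(rejuvenation_rate):
    aging, failure, repair, rejuv_done = 1/240.0, 1/72.0, 1/2.0, 1/0.5  # per hour
    Q = np.zeros((4, 4))
    Q[0, 1] = aging
    Q[1, 2] = failure
    Q[1, 3] = rejuvenation_rate
    Q[2, 0] = repair
    Q[3, 0] = rejuv_done
    np.fill_diagonal(Q, -Q.sum(axis=1))      # generator rows sum to zero

    # Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation.
    A = np.vstack([Q.T[:-1], np.ones(4)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)
    return pi[0] + pi[1]                     # system is up in states 0 and 1

print("availability without rejuvenation:", round(availability(0.0), 6))
print("availability with rejuvenation   :", round(availability(1/48.0), 6))
```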

Dependability Analysis of a Client/Server Software System with Rejuvenation
Hiroyuki Okamura, Satoshi Miyahara and Tadashi Dohi

Long-running software systems are known to experience software aging, a phenomenon in which the accumulation of errors during execution leads to performance degradation and eventually results in failure. To counteract this phenomenon, an active fault management approach called software rejuvenation is particularly useful. It essentially involves gracefully terminating an application or a system and restarting it in a clean internal state.

In this paper, we deal with dependability analysis of a client/server software system with rejuvenation. Three dependability measures of the server process (steady-state availability, loss probability of requests, and mean response time of tasks) are derived from the well-known hidden Markovian analysis under the time-based software rejuvenation scheme. In numerical examples, we investigate the sensitivity of the dependability measures to some model parameters.


2:30 - 4:00pm, Track 3: Fast Abstracts

Session chair:
Ram Chillarege, Chillarege Inc., USA

Security

  • Security Testing using a Susceptibility Matrix
  • Security modeling and quantification of intrusion tolerant systems

Component-based/object-oriented software reliability

  • Fault Insertion in Concurrent Object-Oriented Programs for Mutation Analysis and Testability Measurement
  • Search-based Execution-Time Analysis in Component-Oriented Real-Time Application Development
  • Structurally Guided Testing
  • Verified Systems by Composition from Verified Components

Modeling

  • A Discrete Stochastic Logistic Equation and a Software Reliability Growth Model
  • Uncertainty Analysis of Software Reliability Based on Method of Moments
  • An Intuitive and Practical Method for Reliability Analysis of Complex Systems
  • Unified Modeling Framework and Comprehensive Offline Analysis for Quantifiable Adaptive Real-Time Systems
  • Good Enough Reliability Certification for Extreme Programming


4:15 - 5:45pm, Track 1: Testing Technologies

Session chair:
Yashwant Malaiya, Colorado State University, USA

Data Coverage Testing
Pornrudee Netisopakul, Lee J. White, John Morris and Daniel Hoffman

Data coverage testing employs automated test generation to systematically generate test data sets of increasing size. Given a program and a test model, it can be shown theoretically that there exists a sufficiently large test data set size N such that testing with a data set larger than N does not detect more faults. A number of experiments have been conducted using a set of C++ STL programs, comparing data coverage testing with two other testing strategies: statement coverage and random generation. These experiments validate the theoretical analysis for data coverage, confirming the predicted sufficiently large N for each program.

Genes and Bacteria for Automatic Test Cases Optimization in the .NET Environment
Benoit Baudry, Franck Fleurey, Jean-Marc Jézéquel, Yves Le Traon

The level of confidence in a software component is often linked to the quality of its test cases. This quality can in turn be evaluated with mutation analysis: faulty components (mutants) are systematically generated to check the proportion of mutants detected ("killed") by the test cases. But while generating a basic set of test cases is easy, improving its quality may require prohibitive effort. This paper focuses on automating test optimization. We looked at genetic algorithms to solve this problem and modeled it as follows: a test case can be considered a predator, while a mutant program is analogous to prey. The aim of the selection process is to generate test cases able to kill as many mutants as possible. To overcome disappointing experimental results on the studied .NET system, we propose a slight variation on this idea, no longer at the "animal" level (lions killing zebras) but at the bacteriological level. The bacteriological level better reflects the test case optimization issue: it introduces a memorization function and suppresses the crossover operator. We describe this model and show how it behaves on the case study.
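
A bare-bones sketch of the bacteriological loop described above: a memorized set keeps every test case that killed a previously unkilled mutant, and candidates are only mutated, never crossed over. The "mutants killed" relation is a toy stand-in for real mutation analysis, and the test-case encoding is invented.

```python
# Bare-bones bacteriological-style test optimization (illustration only):
# a memory set accumulates test cases that kill new mutants; candidate test
# cases are mutated but never crossed over. The kill relation below is a toy
# stand-in for running real mutation analysis.
import random

random.seed(1)
N_MUTANTS = 50

def mutants_killed(test_case):
    """Toy stand-in for running a test case against the mutants."""
    rng = random.Random(hash(test_case))
    return frozenset(m for m in range(N_MUTANTS) if rng.random() < 0.15)

def mutate(test_case):
    """Perturb one parameter of the test case (the bacteriological operator)."""
    params = list(test_case)
    i = random.randrange(len(params))
    params[i] += random.choice([-1, 1])
    return tuple(params)

memory, killed = [], set()
population = [tuple(random.randrange(10) for _ in range(3)) for _ in range(5)]

for generation in range(200):
    candidate = random.choice(population)
    new_kills = mutants_killed(candidate) - killed
    if new_kills:                       # memorization: keep useful "bacteria"
        memory.append(candidate)
        killed |= new_kills
    population.append(mutate(candidate))
    population = population[-10:]       # bounded population, no crossover

print(f"mutation score: {len(killed)}/{N_MUTANTS} "
      f"with {len(memory)} memorized test cases")
```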

Fault Detection Capabilities of Coupling-based OO Testing
Roger T. Alexander, A. Jefferson Offutt, James M. Bieman

Object-oriented programs cause a shift in focus from software units to the way software components are connected. Thus, we are finding that we need less emphasis on unit testing and more on integration testing. The compositional relationships of inheritance and aggregation, especially when combined with polymorphism, introduce new kinds of integration faults, which can be covered using testing criteria that take the effects of inheritance and polymorphism into account. This paper demonstrates, via a set of experiments, the relative effectiveness of coupling-based OO testing criteria. These criteria are all more effective at detecting faults due to the use of inheritance and polymorphism.


4:15 - 5:45pm, Track 2: System Analysis

Session chair:
Sherif Yacoub, HP Labs, USA

Improving Usefulness of Software Quality Classification Models Based on Boolean Discriminant Functions
T.M. Khoshgoftaar and N. Seliya

The cost-effectiveness of software reliability control endeavors can be increased if a software quality estimate is available prior to system tests and operations. If all the likely fault-prone modules were identified prior to operations, enhanced software reliability could be obtained. Boolean Discriminant Functions (BDFs) have been applied in the past as quality classification models. Their simplicity and ease of interpretation make BDFs an attractive technique for software quality prediction.

Quality classification models based on BDFs provide stringent rules for classifying modules as not fault-prone (nfp), and thereby predict a large number of modules as fault-prone (fp). Such models are of limited practical use from the software quality assurance and software management points of view: given the large number of modules predicted as fp, project management faces the difficult task of cost-effectively deploying the always-limited reliability improvement resources across all the fp modules.

This paper proposes the use of "Generalized Boolean Discriminant Functions" (GBDFs) as a solution for improving the practical and managerial usefulness of classification models based on BDFs. In addition, the use of GBDFs avoids the need to build complex hybrid classification models in order to improve the usefulness of BDF-based models. A case study of a full-scale industrial software system is presented to illustrate the promising results obtained with the proposed GBDF-based classification technique.
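
To make the BDF-versus-GBDF contrast concrete (metrics and thresholds are invented, and the definitions are simplified relative to the paper): a BDF labels a module not fault-prone only when every metric is within its threshold, while a k-of-n relaxation accepts modules that satisfy most conditions, which shrinks the set predicted as fault-prone.

```python
# Illustrative contrast between a Boolean Discriminant Function (a module is
# "not fault-prone" only if ALL metrics are within thresholds) and a
# generalized, k-of-n relaxation. Metrics and thresholds are invented and
# this simplifies the paper's definitions.
THRESHOLDS = {"loc": 300, "cyclomatic": 15, "churn": 40, "fan_out": 12}

def bdf_is_nfp(module):
    return all(module[m] <= t for m, t in THRESHOLDS.items())

def gbdf_is_nfp(module, k=3):
    satisfied = sum(module[m] <= t for m, t in THRESHOLDS.items())
    return satisfied >= k          # "at least k of n" conditions suffice

modules = [
    {"loc": 120, "cyclomatic":  8, "churn": 10, "fan_out":  4},
    {"loc": 450, "cyclomatic":  9, "churn": 12, "fan_out":  5},   # one metric high
    {"loc": 800, "cyclomatic": 30, "churn": 90, "fan_out": 20},
]

for mod in modules:
    print(mod,
          "BDF:", "nfp" if bdf_is_nfp(mod) else "fp",
          "GBDF:", "nfp" if gbdf_is_nfp(mod) else "fp")
# The second module is flagged fault-prone by the strict BDF but not by the
# k-of-n relaxation, illustrating how GBDFs shrink the fp list.
```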

Fault Contribution Trees for Product Families
Dingding Lu and Robyn R. Lutz

Software Fault Tree Analysis (SFTA) provides a structured way to reason about the safety or reliability of a software system. As such, SFTA is widely used in mission-critical applications to investigate contributing causes to possible hazards or failures. In this paper we propose an approach similar to SFTA for product families. The contribution of the paper is to define a top-down, tree-based analysis technique, the Fault Contribution Tree Analysis (FCTA), that operates on the results of a product-family domain analysis and to describe a method by which the FCTA of a product family can serve as a reusable asset in the building of new members of the family. Specifically, we describe both the construction of the fault contribution tree for a product family (domain engineering) and the reuse of the appropriately pruned fault contribution tree for the analysis of a new member of the product family (application engineering). The paper describes several challenges to this approach, including evolution of the product family, handling of subfamilies, and distinguishing the limits of safe reuse of the FCTA, and suggests partial solutions to these issues as well as directions for future work. The paper illustrates the techniques with examples from applications to two product families.

Automatic Synthesis of Dynamic Fault Trees from UML System Models
Ganesh J Pai, Joanne Bechta Dugan

The reliability of a computer-based system may be as important as its performance and its correctness of computation. It is important to estimate system reliability at the conceptual design stage, since reliability can influence subsequent design decisions and may often be pivotal in making trade-offs or in establishing system cost. In this paper, we present a UML-based framework for modeling computer-based systems that permits automated dependability analysis during design. An algorithm to automatically synthesize dynamic fault trees (DFTs) from the UML system model is then presented. We succeed both in embedding the information needed for reliability analysis within the system model and in generating the DFT. Thereafter, we evaluate our approach using examples of real systems: we analytically compute system unreliability from our algorithmically generated DFTs and compare the results with the analytical solution of manually developed DFTs. Our solutions produce the same results as the manually generated DFTs.
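
As background for readers new to fault trees, and as a deliberately static simplification (dynamic fault trees add sequence-dependent gates such as spares and priority-AND, usually solved with Markov models): under an independence assumption, an AND gate's probability is the product of its inputs, and an OR gate's is one minus the product of the complements.

```python
# Static fault-tree evaluation under an independence assumption (background
# illustration only; dynamic fault trees add sequence-dependent gates and are
# typically solved via Markov models, as in the paper).
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Basic:
    prob: float                      # failure probability of a basic event

@dataclass
class Gate:
    kind: str                        # "AND" or "OR"
    inputs: List[Union["Gate", Basic]]

def top_event_probability(node) -> float:
    if isinstance(node, Basic):
        return node.prob
    probs = [top_event_probability(child) for child in node.inputs]
    product = 1.0
    if node.kind == "AND":
        for p in probs:
            product *= p
        return product
    for p in probs:                  # OR gate: 1 - product of complements
        product *= (1.0 - p)
    return 1.0 - product

# Example: the system fails if the bus fails OR both redundant processors fail.
tree = Gate("OR", [Basic(0.001),
                   Gate("AND", [Basic(0.01), Basic(0.01)])])
print("top-event probability:", round(top_event_probability(tree), 6))
```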


4:15 - 5:45pm, Track 3: Panel: Open-source Software: More or Less Secure and Reliable?

Jeff Offutt, GMU
Ron Ritchey, Booz-Allen
Brendan Murphy, Microsoft
Mike Shaver, Cluster File Systems/Mozilla

Open-source software (OSS) follows a new development and business model whereby the software is created and modified by various individuals who work largely independently and usually for no pay. An essential difference is that the code for OSS is freely available to everyone, and anyone can contribute modifications. An ongoing and sometimes contentious debate concerns the quality of OSS, with ISSRE being particularly interested in the quality attributes of reliability and security. One argument is that closed-source software will always be superior because it is created with well-defined processes by professional, paid programmers. A counter-argument is that open-source software is modified and reviewed by a large number of individuals in an open way, allowing more problems to be found and corrected. For security, OSS advocates claim that the number of people looking at the code allows vulnerabilities to be corrected easily. The detractors claim that restricting access to the source helps prevent hackers from finding and exploiting vulnerabilities.

While both of these analytical arguments may sound reasonable, there is a shortage of facts and evidence. This panel will discuss the issues in an open and respectful way and put the arguments into an objective scientific context, with the goal of clarifying the issues enough that it becomes possible to gather experimental, scientific evidence.





