ISSRE 2002


    Agenda

Wednesday, 13 November 2002: Industry Practice Day
8:00-8:30
Opening Remarks

8:30-9:30
Keynote: Is it time to redefine software engineering?, Amitabh Srivastava, Distinguished Engineer/Director, Programmer Productivity Research Center, Microsoft Research

9:30-10:00
Break

10:00-11:30
Track 1: Automated Testing
- Automating Reliability Testing
- Model-based Approach to Security Test Automation
- Automated Generation of Self-Checking Function Tests
Track 2: Software Reliability
- Requirements Risk versus Reliability
- A Taxonomy of Causes of Software Vulnerabilities in Internet Software
- Predicting the Impact of Requirement Changes on Software Reliability
Track 3: Modeling and Analysis
- Evaluation of S-dependence in Software Reliability Modeling
- Steady State Markov Modeling and Its Excel Implementation for 1 to N Redundant System with Imperfect Switchover
- A New Testing-Path Coverage Measure: Testing-Domain Metrics Based on a Software Reliability Growth Model

11:30-1:00
Lunch

1:00-2:00
Keynote: Achieving quality in a dynamic environment, Craig Miller, CTO, Dimension Data North America

2:00-2:30
Break

2:30-4:00
Track 1: Empirical Software Engineering
- Network Vulnerability From Memory Abuse and Experimented Software Defect Detection
- Failure Acceleration reduces error propagation: Fault Injection Experiments on NFS
- Maturity of Testing Technique Knowledge
Track 2: Trustworthy Systems
- Lessons Learned in Developing Trustworthy Software for Safety Critical Systems
- Predicting Latent Software Faults: A Commercial Telecom Case Study
- Quality Assurance for Document Understanding Systems
Track 3: Panel
- Everything You Wanted to Know About Software Reliability Engineering But Didn't Know Who to Ask... William W. Everett, Karama Kanoun, Dr. Michael R. Lyu, John D. Musa, Dr. Norman F. Schneidewind, Prof. Mladen A. Vouk

4:00-4:15
Break

4:15-5:45
Track 1: Managing Software Quality
- e-Business Reliability through CMMI and Six Sigma
- Quality management metrics for software development
- The Overlap of Six Sigma and Software Reliability
Track 2: New Paradigms and Techniques
- Will AOP Improve Software Quality?
- Using Aspect-Oriented Programming to Address Security Concerns
- Automatic Fault Tolerance for Applications
Track 3: Panel
- Risk and Security Management in Outsourcing



8:30 - 9:30am, Keynote

Is it time to redefine software engineering?
Amitabh Srivastava, Distinguished Engineer
Director, Programmer Productivity Research Center
Microsoft Research

Bio
Amitabh Srivastava is a Distinguished Engineer and Director of the Programmer Productivity Research Center at Microsoft Research. He joined Microsoft Research in 1997 as a Senior Researcher, where he has worked tirelessly to create innovative tools and technologies that improve the performance and quality of Microsoft software. His vision and energy led to the creation of the Programmer Productivity Research Center, which he has led since its inception. Before joining Microsoft, Srivastava was the founder and Chief Technical Officer of TracePoint Inc., a spin-off from Digital Equipment Corporation based on the binary code modification technology he invented at Digital's Western Research Labs in Palo Alto, CA. Prior to joining Digital, Srivastava worked on Lisp machines at Texas Instruments Research Labs in Dallas, TX. Srivastava is the author of the OM, ATOM, and SCOOPS software systems, which resulted in products for Digital Equipment and Texas Instruments on the Alpha and PC platforms, and he led the design and development of the Vulcan system at Microsoft.


10:00 - 11:30am, Track 1: Automated Testing

Automating Reliability Testing
J. C. Widmaier, William W. Everett

Application software used in domains where the consequences of failure are costly should be of high quality and reliability. Measuring reliability from a structured testing process helps quantify the risk (rate) of failure and direct corrective measures to the code so that the user's desired functional reliability is achieved. However, deriving reliability estimates adds extra work to the software development cycle and would be more easily accepted if it were automated or made transparent to the testing organization. This experience report discusses the development and use of a prototype tool, ART (Automated Reliability Testing). ART was developed under Government contract to streamline the entire process of modeling software requirements, generating test cases, executing them, gathering failure data, and finally estimating reliability. Teradyne Corporation, a world leader in test automation for hardware and software, subcontracted with SPRE, Inc., a company specializing in software reliability, to develop the prototype and use it to validate the reliability of a production application for the customer. Although the project proved that automating the entire testing and reliability estimation process is possible and can deliver the desired reliability estimation capability, the process remains too procedurally involved for immediate deployment.

Model-based Approach to Security Test Automation
Mark Blackburn, Robert Busser, Aaron Nauman, Ramaswamy Chandramouli

Security functional testing is a costly activity typically performed by security evaluation laboratories. These laboratories have struggled to keep pace with increasing demand to test numerous product variations. This paper summarizes the results of applying a model-based approach to automate security functional testing. The approach involves developing models of security function specifications (SFS) as the basis for automatic test vector and test driver generation. In this application, security properties were modeled and the resulting tests were executed against Oracle and Interbase database engines through a fully automated process. The findings indicate that the approach, already proven successful in a variety of other application domains, is promising for security functional testing.

Automated Generation of Self-Checking Function Tests
Amit Paradkar

This paper describes a new unified test generation method which simultaneously addresses the following critical issues in software function testing: (1) selection of appropriate combinations of parameter values for testing individual operations, (2) selection of an appropriate sequence of operation invocations, (3) generation of test oracles in the form of self-checking sequences, and (4) generation of negative test cases. Our method exploits a novel mutation scheme applied to operations specified as pre- and postconditions (actions) on parameters and state variables; a set of novel abstraction techniques which result in a new form of compact transition system, called a quasi-reachability graph; and techniques developed for planning under resource constraints to automatically generate self-checking positive and negative test cases with appropriate parameter values. The test cases generated using our approach target detection of certain faults in an implementation. We discuss the reduction techniques used in our method to control the size of the generated test suite. We also report our experiences with using our method in an industrial setting.


10:00 - 11:30am, Track 2: Software Reliability

Requirements Risk versus Reliability
Norman F. Schneidewind

Problem Definition
While software design and code metrics have enjoyed some success as predictors of software quality, the measurement field is stuck at this level of achievement. If measurement is to advance to a higher level, we must shift our attention to the front end of the development process, because it is during requirements analysis that errors are inserted into the process.

A requirements change may induce ambiguity and uncertainty in the development process that cause errors in implementing the changes. Subsequently, these errors propagate through later phases of development and maintenance. These errors may result in significant risks associated with implementing the requirements. For example, reliability risk (i.e., risk of faults and failures induced by changes in requirements) may be incurred by deficiencies in the process (e.g., lack of precision in requirements).

Potential Solution
Identify the attributes of requirements that cause software to be unreliable, and quantify the relationship between requirements risk and reliability. If these attributes can be identified, then policies can be recommended to NASA for recognizing these risks and avoiding or mitigating them during development. Extend and validate our work in this area on the Space Shuttle to Goddard Space Flight Center and Jet Propulsion Laboratory software projects.

A Taxonomy of Causes of Software Vulnerabilities in Internet Software
Frank Piessens

At the root of almost every security incident on the Internet are one or more software vulnerabilities, i.e. security-related bugs in the software that can be exploited by an attacker to perform actions he should not be able to perform. Analysis of vulnerability alerts as distributed by organisations like CERT or SANS, and analysis of causes of actual incidents shows that many vulnerabilities can be traced back to a relatively small number of causes: software developers are making the same mistakes over and over again.

The goal of this paper is to propose a structured taxonomy of the most frequently occurring causes of vulnerabilities. Such a taxonomy can be useful in a number of scenarios: as an aid for developers to avoid common pitfalls, as didactic material for students in software engineering, or as a "checklist" for software testers or auditors.

Predicting the Impact of Requirement Changes on Software Reliability
Jon Peterson, Meng Lai Yin

Software reliability prediction is a major concern for business applications and large-scale mission-critical systems. Up until now, most software reliability prediction methods have been based solely on failure data. From the collected failure data, the best-fit software reliability prediction model can be identified and applied to predict the product's future software reliability. The problem with this approach is that the failure data can be affected by requirement changes, which might occur at any time during the software development period. The question that follows is: "What will be the effects of requirement changes on the software reliability prediction?"

This paper presents the observed correlation between requirement changes and coding defects for several projects. First, there is a latency in the precipitation of failures from the time a requirement change occurs, since faults resulting from requirement changes often take time to surface. This increase in software failure data triggers changes to the software reliability prediction, the second topic explored in this report. What we have learned from our experience is that timing is a critical issue: when the requirement changes occur during the software development period, and how long it takes to complete a change, are the two major factors that impact the software reliability prediction.

Nevertheless, it is inevitable that software requirements change, at any time during development. For these cases, this report gives software project managers an idea of the impact on the software reliability prediction and its ultimate effect on managing the software project.


10:00 - 11:30am, Track 3: Modeling and Analysis

Evaluation of S-dependence in Software Reliability Modeling
Zhibin Tan

S-dependency among successive software runs is observed in practice. When software runs result in either success or failure, two parameters P and Q are defined to describe the s-dependence: P (Q) is the probability that a software run results in success (failure) given that the previous run resulted in success (failure). When there is no change in the software and its input domain, P and Q are assumed to be constants. However, little has been said about how to determine P and Q. This paper aims at estimating P and Q by treating them as two independent random variables that follow Beta distributions. A Bayesian technique is applied to update their distributions based on the outcomes of a sequence of software runs, and the estimates are then defined as the means of the updated distributions. A simulation study shows that the estimates approach their true values with a finite number of software runs. Two reliability models based on the dependencies among a sequence of software runs are proposed to evaluate software reliability. By applying the reliability models to software builds and releases using testing data, the reliability of each build and release can be evaluated, and the changes in software reliability along builds and releases can be traced and monitored. It is expected that software reliability can be predicted once the relationship between the dependence among successive software runs and certain software metrics is established.
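
As a rough illustration of the Beta-Bernoulli updating the abstract describes (a sketch only, not the authors' model; the uniform priors, variable names, and posterior-mean point estimates are assumptions), P and Q can be updated from a sequence of pass/fail outcomes like this:

    # Illustrative sketch: Bayesian (Beta-Bernoulli) updating of the
    # dependence parameters P and Q from a sequence of run outcomes.
    def estimate_P_Q(outcomes, prior=(1.0, 1.0)):
        """outcomes: list of booleans, True = successful run.
        Returns posterior-mean estimates of
          P = Pr(success | previous run succeeded)
          Q = Pr(failure | previous run failed)."""
        aP, bP = prior  # Beta(aP, bP) posterior parameters for P
        aQ, bQ = prior  # Beta(aQ, bQ) posterior parameters for Q
        for prev, curr in zip(outcomes, outcomes[1:]):
            if prev:   # previous run succeeded -> this transition updates P
                aP, bP = (aP + 1, bP) if curr else (aP, bP + 1)
            else:      # previous run failed -> this transition updates Q
                aQ, bQ = (aQ, bQ + 1) if curr else (aQ + 1, bQ)
        return aP / (aP + bP), aQ / (aQ + bQ)

    # Example: mostly successful runs with occasional clustered failures.
    runs = [True, True, True, False, False, True, True, True, False, True]
    P_hat, Q_hat = estimate_P_Q(runs)
    print("P ~ %.2f, Q ~ %.2f" % (P_hat, Q_hat))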

Steady State Markov Modeling and Its Excel Implementation for 1 to N Redundant System with Imperfect Switchover
Haihong Zhu, Jim Huang, Madhav Marathe

1:N active-standby and 1+N load-sharing are typical configurations used in redundant systems and networks. Many factors affect the availability of such systems, including MTBF, MTTR, switchover coverage, and planned outages. Markov modeling is often used to assess system or network availability because of its ability to capture complexity and dependencies using states and state transitions. However, the Markov modeling itself may become computationally intensive and thus unwieldy for use in engineering practice. In this paper, we present the design, validation, and implementation of a spreadsheet-based tool for 1:N and 1+N redundancy Markov modeling. By combining the power of Markov modeling with the spreadsheet's functionality and flexibility, we have produced a user-friendly, easy-to-use tool for engineers.
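
For flavor, a minimal steady-state Markov sketch for a single active-standby pair with imperfect switchover might look like the following (an assumed toy model, not the authors' spreadsheet tool; the failure rate, repair rate, and coverage values are placeholders):

    import numpy as np

    lam = 1.0 / 10000.0   # per-unit failure rate (1/MTBF), placeholder
    mu  = 1.0 / 4.0       # repair rate (1/MTTR), placeholder
    c   = 0.98            # switchover coverage, placeholder

    # States: 0 = both units up, 1 = one unit up after a covered failure,
    #         2 = system down (uncovered switchover or second failure).
    Q = np.array([
        [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
        [      mu,  -(mu + lam),                lam],
        [     0.0,           mu,                -mu],
    ])

    # Solve pi * Q = 0 with sum(pi) = 1 by replacing one balance equation.
    A = Q.T.copy()
    A[-1, :] = 1.0
    b = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)

    print("steady-state availability ~ %.6f" % (pi[0] + pi[1]))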

A New Testing-Path Coverage Measure: Testing-Domain Metrics Based on a Software Reliability Growth Model
Takaji Fujiwara and Shigeru Yamada

In the case of embedded software systems, since various operating systems are used, it is difficult to measure testing-path coverage. The manager therefore has to determine the stopping time of testing based on the convergence of the cumulative number of detected faults and the prespecified delivery time to the users. Such a determination, based on the manager's experience or intuition, is ambiguous. In this paper, we investigate the relationship between the testing-domain ratio derived from a testing-domain dependent software reliability growth model and the testing-path coverage. We then show that the testing-domain rate, defined as the increasing ratio of the testing-paths in the modules and functions of the software system that are influenced by the executed test cases, is useful as an alternative measure of testing-coverage metrics.


1:00 - 2:00pm, Keynote

Achieving quality in a dynamic environment
Craig Miller, CTO, Dimension Data North America

Reliable software and predictable development have been our holy grail for decades, but despite our best efforts and huge improvements in process and product quality, software is still among the most error-ridden products on the market, and software projects still miss schedules and budgets. The rigor of good processes is undercut by the dynamic business environment we must operate in. We consistently rail against poorly defined and changing requirements. Perhaps, instead, we should accept that vagueness and change are inherent and largely irreducible, and tailor our processes to maximize quality given this constraint. Extreme Programming and Extreme Project Management attempt to do just this, but are the XPs a step forward or a step back to chaos?

Bio
Craig Miller is Dimension Data's Chief Technology Officer for North America. In this role he is responsible for ensuring that the company is at the forefront of changing technology. He has worked exclusively in the area of electronic commerce since 1985, when it was spelled EDI. Systems he has developed are in use at more than 2000 companies across the United States. His first web-centric application was CATEX, the online reinsurance exchange, which was deployed in 1997. For this work he earned recognition from the Smithsonian Institution for "heroic achievement in the advancement of information technology". He also founded GreenOnline.Com, the premier site for business related to the environment. Prior to joining Dimension Data he was a Chief Scientist at Science Applications International Corporation, where he founded their successful Electronic Commerce RAD Lab. He holds a Ph.D. in Systems Engineering from the University of Virginia.


2:30 - 4:00pm, Track 1: Empirical Software Engineering

Network Vulnerability From Memory Abuse and Experimented Software Defect Detection
Jun Xu, Christopher Hoang Pham

While the majority of software developers concern themselves with features, performance, CPU usage, and similar criteria, many neglect memory management, even though memory is one of the most fundamental resources for software operation. The consequences of this negligence are severe: memory can be exhausted by malicious applications, leading to system malfunction, and, most seriously, the system becomes vulnerable to security attacks.

To address some of the common run-time problems that result from poor memory management, the authors developed tools to detect these problems early in the development cycle and isolate them at the source-code level. A practice for alleviating memory-management defects in important phases of the SEI software development model was also evaluated as a solution.

This paper shares with the reader the non-proprietary observed data, methods, and technology that were developed and leveraged to address severe memory-abuse issues in both the off-line and run-time domains.

Failure Acceleration reduces error propagation: Fault Injection Experiments on NFS
Ram Chillarege, Murthy Devarakonda, Kumar Goswamy

This paper provides new insight into the design of system-level fault injection experiments. A matched pair of experiments is conducted at two different levels of failure acceleration, studying its effect on two key measures: probability of failure and error propagation. In the second experiment of the matched pair, failure acceleration approached the maximum achievable in this system.

These results are valuable to experimentalists in systematic design of such experiments. Specific results from the study are:

1) The probability of failure increased from around 53% to 65% as acceleration approached its maximum in this system.

2) Error propagation in the same situation came down from 38% to around 19%. This new, significant result seems counterintuitive until one understands the full impact of what failure acceleration achieves.

3) Note that error propagation, in fault injection, destroys the single-fault model --- a typical design point for many fault-tolerant systems. Thus, failure acceleration actually helps drive fault injection experiments towards their design point, yielding realistic estimates of measures such as coverage.

Maturity of Testing Technique Knowledge
A.M. Moreno, S. Vegas

Engineering disciplines are characterised by the use of mature knowledge, by means of which they can achieve predictable results. Unfortunately, the knowledge used in software engineering can be considered of relatively low maturity, and developers are guided by reasoning based on intuition, fashion, or market-speak rather than by facts or undisputed statements proper to an engineering discipline. This paper analyses the maturity level of knowledge about testing techniques by examining existing empirical studies of these techniques. For this purpose, three categories of knowledge of increasing maturity are presented, and the results of the empirical studies are placed in these three categories.


2:30 - 4:00pm, Track 2: Trustworthy Systems

Lessons Learned in Developing Trustworthy Software for Safety Critical Systems
Samuel Keene, Gavin Watt

This paper describes the key processes, practices, and tools used in our attempt to develop the most trustworthy software possible. When building a practical-size real-time system on a near-term development schedule, we are forced to use commercial off-the-shelf (COTS) software and components; it is not practical to do otherwise. An example is the AIX operating system, with its millions of lines of code. One could conceivably develop a tailored version of such code incorporating only the minimal set of functionality deemed necessary. This would minimize the code extent and the number of initial faults (latent faults at delivery), since latent fault content varies in proportion to code extent. However, commercial systems such as AIX benefit from the extensive testing done by the manufacturer and from the massive accumulation of field experience gained from wide deployment across many diverse users.

There is also a need to consider the impact of such use in an integrated systems approach, which must consider the operational profile of the installed system in an environment not necessarily anticipated by the individual COTS vendors in testing their products. This is where the safety and reliability problems become manifest (system integration and integration testing), and where the lessons learned can best be applied.

We have demonstrated extraordinarily high reliability from such COTS code. This high reliability results from the disciplined execution processes incorporated in our operational profile and the structure imposed on our software and its operating environment. The reliability- and safety-enabling structures will be reported, along with the lessons learned from this vital development experience.

Predicting Latent Software Faults: A Commercial Telecom Case Study
Robert W. Stoddard

I plan to summarize the use of CASRE and its inherent statistical models within the Motorola PCS handset division over the past 18 months. My talk will cover not only the actual results and the correlation of predictions to actuals, but also the challenges of introducing such a capability within a fast-paced commercial environment. I will also summarize the challenges and lessons learned from this activity and indicate aspects we plan to address and improve. Learning-curve times and issues will also be covered. Finally, I will discuss how this activity integrates into the overall product development process and the roles and responsibilities associated with successful use of this technology and tool.

Quality Assurance for Document Understanding Systems
Sherif Yacoub

Document understanding is a field concerned with the semantic analysis of documents to extract human-understandable information and codify it into machine-readable form. Document understanding systems provide a means to automatically extract meaningful information from a raster image of a document and thus to create information-rich content usable in many end-user applications such as search and retrieval. To process a large volume of data, such as the collection of books and journals produced by a publisher, content understanding systems must run non-stop in an automated fashion and in an unattended operation mode. Ensuring the quality of the output of such a system is challenging due to several factors, including the unattended nature of the system and the massive amount of data (terabytes), which can give rise to a considerable number of exceptions. Automated quality assurance (QA) techniques are essential to the successful operation of a large-scale document understanding system. In this paper, we propose QA techniques that are essential for a document understanding system and describe their automation. Keywords: quality assurance, document understanding, content remastering.


2:30 - 4:00pm, Track 3: Panel

Everything You Wanted to Know About Software Reliability Engineering But Didn't Know Who to Ask
William W. Everett, Karama Kanoun, Dr. Michael R. Lyu, John D. Musa, Dr. Norman F. Schneidewind, Prof. Mladen A. Vouk


4:15 - 5:45pm, Track 1: Managing Software Quality

e-Business Reliability through CMMI and Six Sigma
Robert B. Wen

e-Business reliability refers to the probability of failure-free operation of Internet-based business applications. Designing and managing a web site to ensure reliability and high performance despite peak loads and problems is an important challenge, and it touches many aspects of web-based applications. The best practice for achieving highly reliable software is to apply standards to the development process. One example of such a paradigm is the Capability Maturity Model Integration (CMMI) developed by the Software Engineering Institute at Carnegie Mellon University; another is Six Sigma, developed by Motorola. This paper describes the use of CMMI and Six Sigma guidance for improving an organization's processes and its ability to manage development so as to achieve highly reliable e-Business. Furthermore, a case study demonstrates the advantage of achieving e-Business reliability through CMMI and Six Sigma.

Quality management metrics for software development
John S. Osmundson, James B. Michael, Martin J. Machniak, Mary Alice Grossman

It can be argued that the quality of software management has an effect on the degree of success or failure of a software-development program. We have developed a metric for measuring the quality of software management along four dimensions: requirements management, estimation/planning management, people management, and risk management. The quality management metric (QMM) for a software-development program manager is a composite score obtained via a questionnaire administered to both the program manager and a sample of his or her peers. The QMM is intended both to characterize the quality of software management and to serve as a template for improving software-management performance. We administered the questionnaire to measure the performance of managers who were responsible for large software-development programs within the US Department of Defense. Informal verification and validation of the metric compared the QMM score to an overall program-success score for the entire program and resulted in a positive correlation.

The Overlap of Six Sigma and Software Reliability
Robert W. Stoddard

I plan to discuss how our (Motorola) vision of Six Sigma applied to software development heavily overlaps with the basic concepts and activities of a software reliability program. In this talk, I will go into detail on the areas of overlap and on how Six Sigma is proving to be a very effective tool in the deployment of software reliability activities. Examples will be discussed to show that Six Sigma also provides a customer-centric approach to software reliability techniques.


4:15 - 5:45pm, Track 2: New Paradigms and Techniques

Will AOP Improve Software Quality?
Roger Alexander

Aspect-oriented technology (AOT) is a new programming paradigm that is receiving considerable attention from research and practitioner communities alike. It deals with concerns that crosscut the modularity of traditional programming mechanisms, and its objectives include a reduction in the amount of code written and higher cohesion. As with any new technology, aspect-oriented technology has both benefits and costs. What are those benefits, and are they worth it? Is AOT a viable technology that can help improve the quality of industrial software? What new problems come with it? Understanding these questions and their answers is crucial if this technology is to be successfully adopted by industry.

Using Aspect-Oriented Programming to Address Security Concerns
Viren Shah

Software security has become an overriding concern for commercial and government entities in recent years. One of the main impediments to improving the state of security in software applications is the dearth of security experts. This lack of security knowledge leads to developers defaulting into the role of security experts, or to security being ignored. A complementary issue is that security needs to be enforced in a consistent manner across an entire system, which is difficult to ensure in the present climate of big, distributed, and diverse teams of software developers. Finally, current solutions to common security problems are usually point solutions that work well against only one type of attack (e.g., only buffer overruns or only race conditions). We have an approach to dealing with security issues that we feel addresses all of the above concerns. We have designed and built an Aspect-Oriented Security Framework (AOSF) based on the AOP paradigm. Within this framework, we have implemented solutions to such security issues as buffer overruns, race conditions, and input sanitization, among others. We have also begun to address more complex issues, such as proper use of encryption and function ordering. This system has been used successfully on several open source systems (e.g., wu-ftpd) as well as on in-house applications.
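
As a loose illustration of the weaving idea behind aspect-oriented security (not the AOSF implementation described above), a Python decorator can play the role of an aspect that applies one input-sanitization check uniformly across many functions; the function names and the check itself are hypothetical:

    import functools
    import re

    # Hypothetical cross-cutting "security aspect": reject strings that look
    # like shell metacharacters or path traversal before any decorated
    # function ever sees them.
    SUSPICIOUS = re.compile(r"[;|&`$]|\.\./")

    def sanitize_inputs(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for value in list(args) + list(kwargs.values()):
                if isinstance(value, str) and SUSPICIOUS.search(value):
                    raise ValueError("rejected suspicious input to %s: %r"
                                     % (func.__name__, value))
            return func(*args, **kwargs)
        return wrapper

    @sanitize_inputs
    def fetch_file(path):
        # Imagine this touches the filesystem on behalf of a remote user.
        return "serving " + path

    print(fetch_file("reports/2002/summary.txt"))   # passes the check
    # fetch_file("../../etc/passwd")                # would raise ValueError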

Automatic Fault Tolerance for Applications
Michael Spertus

We describe a solution for transparently adding software fault tolerance against common program errors to existing applications.

One of the most serious challenges facing software operations organizations today is the problem of maintaining system availability while running imperfect software. A recently released NIST study estimates that software defects cost the U.S. economy $59.5 billion, with 64% of that cost borne by end users.

We describe what we believe is a significant new approach to this problem: a technology for automatically retrofitting existing applications to be fault tolerant, with no rebuilding or access to source required. This technology operates by wrapping selected calls between the application and its runtime environment and identifying application misuse of the abstractions provided by its language runtime environment. It reports this information and then automatically takes action to protect against the consequences of this misuse.
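
A toy sketch of this "wrap calls between the application and its runtime" idea (an assumed illustration only, not the product described here) might interpose on a resource API to detect and tolerate misuse such as using a handle after it has been closed:

    # Assumed illustration: interposing on runtime calls to detect and
    # tolerate misuse (read-after-close, double close) instead of failing.
    class GuardedFile:
        def __init__(self, path, mode="r"):
            self._f = open(path, mode)
            self._closed = False

        def read(self, *args):
            if self._closed:
                # Misuse detected: report it and return a harmless value
                # instead of letting the error propagate.
                print("warning: read() after close(); returning empty data")
                return ""
            return self._f.read(*args)

        def close(self):
            if self._closed:
                print("warning: double close() suppressed")  # report misuse
                return
            self._f.close()
            self._closed = True

    f = GuardedFile(__file__)
    f.close()
    data = f.read()   # misuse is reported and a safe default is returned
    f.close()         # double close is reported and suppressed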

We will give extensive empirical results based on large-scale field deployments of this technology.


4:15 - 5:45pm, Track 3: Panel

Risk and Security Management in Outsourcing
Satyam Computers et al. & Ram Chillarege

Outsourcing, a major trend in the software industry, has added a new dimension to the challenge of Information Technology (IT) management. When software development was in-house, the security and risk of IT assets were a core responsibility of the CIO. Often, these were wrapped into the overall organizational responsibility and handled as an integral part of the CIO's security strategy. As the outsourcing of IT functions and IT-enabled services increases, risk and security management take on a new dimension: the assets are no longer directly managed by the office of the CIO, while the responsibility remains. The problem was restricted in scope when only non-core functions were outsourced. However, as greater parts of the asset base are outsourced, the issues have become larger. With new sources of threat arising and surprising us, the problem's magnitude and challenge are greater than initially anticipated.

This session will cover a broad range of subjects in security and risk management in the outsourcing business. We will have three speakers who provide three different perspectives on the problem.

Talk 1. The outsourcing company's perspective.
Talk 2. The customer's perspective.
Talk 3. A quality and risk management case study.

Talk 1 will cover current best practice in setting up different grades of development sites and infrastructure: security issues in networking, data, and information; training and people issues; creating bonded facilities; import-export restrictions; and so on.

Talk 2 will cover the types of issues faced by a US customer. The risks and security issues, combined with the customer's business model, shed light on asset management, future growth, and competitiveness.

Talk 3 will cover a case study that discusses quality and risk management from a recently executed project.





