WOSA 2002
Second Workshop on Software Assessment
November 12, 2002
Annapolis, MD, USA

 
Time          Presentation
08:30-09:00 Welcome, Norman F. Schneidewind, Department of Information Sciences, Naval Postgraduate School, Monterey, CA, nschneid@nps.navy.mil
09:00-10:00 A Framework for Experimental Error Propagation Analysis of Software Architecture Specification, Hany Ammar, Lane Dept. of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, ammar@cemr.wvu.edu

Abstract: Early assessment of software quality attributes plays a central role in developing better quality software. Error propagation between software system components is a quantitative factor that reflects on the reliability of a software product. We introduce a framework for experimental error propagation analysis that addresses the problem of estimating error propagation at the architecture design phase. Our approach is based on fault injection and post-simulation trace analysis: we compute error propagation estimates by comparing faulty-run traces with a reference, fault-free trace. We use this framework to experimentally study error propagation in a medium-sized real-time system. We believe that this framework can be further extended to allow for experimental analysis of change propagation and requirements propagation.
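
As a concrete illustration of the trace-comparison idea, here is a minimal sketch in Python. It is not the authors' tool; the traces and the deviation criterion are hypothetical, and a real analysis would compare full simulation traces for each component of the architecture.

```python
# Sketch of error propagation estimation by trace comparison.
# A "trace" here is the (hypothetical) sequence of values a component
# emits during one simulation run.

def propagation_estimate(reference_trace, faulty_traces):
    """Fraction of faulty runs whose trace at a target component
    deviates from the fault-free reference trace."""
    deviating = sum(trace != reference_trace for trace in faulty_traces)
    return deviating / len(faulty_traces)

# Fault-free reference run observed at component B, and traces of B
# after injecting faults into component A on three separate runs.
reference = [0, 1, 1, 2]
faulty_runs = [
    [0, 1, 1, 2],   # fault masked: no propagation to B
    [0, 9, 1, 2],   # fault propagated to B
    [0, 1, 7, 2],   # fault propagated to B
]

# The fault reached B in 2 of 3 runs, so the estimate is about 0.67.
print(propagation_estimate(reference, faulty_runs))
```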

10:00-10:30 Break
10:30-11:30 Uncertainty Analysis of the Operational Profile and Software Reliability, Katerina Goseva-Popstojanova and Sunil Kamavaram, Lane Dept. of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, {katerina, sunil}@csee.wvu.edu

Abstract: Many architecture-based software reliability models have been proposed in the past. Regardless of the accuracy of these models, if considerable uncertainty exists in the estimates of the operational profile and component reliabilities, then significant uncertainty exists in the calculated system reliability. Uncertainty analysis of the operational profile and software reliability is therefore of essential importance. In this talk we use two approaches for uncertainty analysis. In the first approach we use source entropy, a well-known concept from information theory, to quantify the uncertainty in the operational profile and in software reliability models. We further estimate the execution rate and uncertainty of each component using the theory of Markov chains and conditional entropy, respectively. The second approach is based on the method of moments. It allows us to calculate system reliability moments based on (1) the knowledge of the software architecture, reflected in the expression of the system reliability as a function of component reliabilities, and (2) estimates of the moments of component reliabilities obtained from component failure data. We apply both approaches to a case study from the European Space Agency that consists of almost 10,000 lines of C code. The faulty versions of the program were obtained by reinserting the faults discovered during integration testing and operational usage. Component traces obtained during random testing according to the known operational profile were used to build the software architecture. The uncertainty analysis clearly provides a richer set of measures than the traditional point estimate of software reliability. These measures can be used for guiding the allocation of testing effort, making quantitative claims about the quality of software subjected to different operational usages, and certifying the reliability of component-based systems.
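
The entropy measures referred to here are standard information-theoretic quantities. The following Python sketch (the operational profile and transition matrix are invented for illustration) shows how the source entropy of an operational profile and the conditional entropy of a Markov chain over component transitions can be computed:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; zero-probability terms contribute nothing."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical operational profile: probabilities of the operation types.
profile = [0.50, 0.25, 0.15, 0.10]
print(f"operational profile entropy: {entropy(profile):.3f} bits")

# Hypothetical Markov chain over three components: P[i, j] is the
# probability that control transfers from component i to component j.
P = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])

# Stationary distribution pi solves pi = pi * P (left eigenvector for
# eigenvalue 1), giving each component's long-run execution rate.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

# Conditional entropy H(next | current): each row's entropy weighted
# by how often the chain visits that row's component.
cond = sum(pi[i] * entropy(P[i]) for i in range(len(pi)))
print(f"conditional entropy of component transitions: {cond:.3f} bits")
```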

11:30-12:30 A Process Simulation Approach to Software Quality Prediction and Management, Peter Lakey, Cognitive Concepts, Webster Groves, MO, plakey01@earthlink.net

Abstract: This talk offers direction for research and application work in the field of software process modeling and simulation by describing a vision of the future state of this technology. The discussion begins with an example of a recent software process simulation model. The positive aspects of this example are highlighted, including the manner in which it contributes to this evolving field. Then the shortcomings of the simulation model are exposed, mainly in terms of its limited practical value to a wide selection of software projects. Following that introductory discussion, a vision of where the software modeling and simulation field can be in the future is proposed and illustrated with another example. Finally, a road map for achieving this vision is outlined. The intention is to inspire workshop attendees to form a collaborative effort. The purpose of the example dynamic software process model is to assist software managers in planning and estimating activities prior to a development cycle, and to enable them to better manage and control effort, schedule, and quality throughout the software development and test processes. The scope of the model is a complete software development cycle for a single project.
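
As a hint of what such a dynamic model computes, here is a toy Python sketch of a defect injection and detection loop over a development-and-test cycle. It is not Lakey's model; every rate and parameter below is invented purely for illustration:

```python
# Toy software process simulation: weekly time steps through development
# and test. All parameters are invented for illustration only.

weeks = 30
staff = 5                   # developers on the project
productivity = 100          # lines of code per developer-week
defects_per_kloc = 8        # defect injection rate during development
detect_rate = 0.25          # fraction of latent defects found per test week
dev_weeks = 20              # development phase length; testing afterwards

code = 0.0
latent_defects = 0.0
found = 0.0
for week in range(weeks):
    if week < dev_weeks:                      # development phase
        written = staff * productivity
        code += written
        latent_defects += written / 1000 * defects_per_kloc
    else:                                     # test phase
        detected = detect_rate * latent_defects
        latent_defects -= detected
        found += detected

print(f"{code:.0f} LOC, {found:.1f} defects found, "
      f"{latent_defects:.1f} still latent at release")
```

Varying the staffing, schedule, or detection rate and re-running the loop is what lets a manager explore effort, schedule, and quality trade-offs before committing to a plan.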

12:30-13:30 Lunch
13:30-14:30 DDP: A Practical Quantitative Early-Lifecycle Risk Assessment and Risk Mitigation Planning Tool and Process, Martin S. Feather, Steven L. Cornford, Jet Propulsion Laboratory, Pasadena, CA, {Martin.S.Feather, Steven.L.Cornford}@jpl.nasa.gov; James D. Kiper, Miami University, OH, kiperjd@muohio.edu

Abstract: DDP's purpose is to help early-lifecycle planning of cost-effective risk reduction strategies. This is a crucial but challenging time: decisions have considerable leverage over the development to follow, but detailed knowledge (e.g., design knowledge) is sparse. DDP fills this niche with a simple, scalable quantitative model of "Objectives", "Risks" (whose occurrence would adversely impact Objectives, including quantitative estimates of how much they impact Objectives), and "Mitigations" (including quantitative estimates of how much they reduce Risks). The data in a DDP model are typically experts' estimates, gathered in interactive sessions. Custom tool support for DDP facilitates on-the-fly data capture, performs cost and risk calculations, and presents information back to users through a variety of cogent visualizations. DDP has been applied to help plan the development of individual technologies (both hardware and software) for use on spacecraft and, in ongoing work, in the planning for an entire spacecraft. DDP relies heavily on expert estimates, and formal validation of a collection of such estimates is problematic. Nevertheless, the results of DDP applications on the whole agree with experts' overall expectations, but in most applications they have also led to surprising insights that, upon reflection and further study, the experts have agreed are both valid and valuable.
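
A back-of-the-envelope Python sketch of the kind of calculation a DDP-style model performs appears below. The objectives, risks, mitigations, and all numbers are hypothetical, and the real DDP tool is far richer; the sketch only shows how mitigations can discount risk likelihoods and how expected objective attainment and cost fall out:

```python
# Toy DDP-style model: Objectives, Risks that impact them, and
# Mitigations that reduce Risks. All names and numbers are hypothetical.

objectives = {"imaging": 1.0, "telemetry": 0.6}   # objective weights

# risk -> (likelihood, {objective: fraction of objective lost if it occurs})
risks = {
    "sensor drift":  (0.3, {"imaging": 0.8}),
    "link dropouts": (0.2, {"telemetry": 0.5}),
}

# mitigation -> (cost, {risk: fractional reduction in likelihood})
mitigations = {
    "calibration routine": (2.0, {"sensor drift": 0.7}),
    "redundant antenna":   (5.0, {"link dropouts": 0.9}),
}

def attainment(selected):
    """Expected weighted objective attainment after applying mitigations."""
    total = 0.0
    for obj, weight in objectives.items():
        expected_loss = 0.0
        for risk, (likelihood, impacts) in risks.items():
            for mit in selected:
                _, reductions = mitigations[mit]
                likelihood *= 1 - reductions.get(risk, 0.0)
            expected_loss += likelihood * impacts.get(obj, 0.0)
        total += weight * max(0.0, 1 - expected_loss)
    return total

plan = ["calibration routine"]
cost = sum(mitigations[m][0] for m in plan)
print(f"attainment {attainment(plan):.3f} at cost {cost}")
```

Enumerating or searching over subsets of mitigations with such a model is what turns expert estimates into a cost-effective risk reduction plan.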

14:30-15:30 Requirements Risk versus Reliability, Norman F. Schneidewind, Department of Information Sciences, Naval Postgraduate School, Monterey, CA, nschneid@nps.navy.mil

Abstract: While software design and code metrics have enjoyed some success as predictors of software quality, the measurement field is stuck at this level of achievement. If measurement is to advance to a higher level, we must shift our attention to the front end of the development process, because it is during requirements analysis that errors are inserted into the process. A requirements change may induce ambiguity and uncertainty in the development process that cause errors in implementing the changes. Subsequently, these errors propagate through later phases of development and maintenance, and may result in significant risks associated with implementing the requirements. For example, reliability risk (i.e., the risk of faults and failures induced by changes in requirements) may be incurred by deficiencies in the process (e.g., lack of precision in requirements). A potential solution is to identify the attributes of requirements that cause the software to be unreliable, and to quantify the relationship between requirements risk and reliability. If these attributes can be identified, then policies can be recommended to NASA for recognizing these risks and avoiding or mitigating them during development.
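
One simple way to start quantifying such a relationship (a hypothetical Python illustration, not Schneidewind's method; the metric and data are invented) is to correlate a candidate requirements-risk attribute with observed failures and fit a first-cut predictive model:

```python
import numpy as np

# Hypothetical per-module data: number of requirements changes (the
# candidate risk attribute) and failures observed after release.
req_changes = np.array([0, 1, 2, 3, 5, 8, 9, 12])
failures    = np.array([0, 0, 1, 1, 2, 4, 5, 7])

# Strength of the association between the risk attribute and reliability.
r = np.corrcoef(req_changes, failures)[0, 1]

# A first-cut predictive model: failures as a linear function of changes.
slope, intercept = np.polyfit(req_changes, failures, 1)

print(f"correlation r = {r:.2f}")
print(f"predicted failures for a module with 6 changes: "
      f"{slope * 6 + intercept:.1f}")
```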

15:30-16:00 Break
16:00-17:00 Open Discussion and Concluding Remarks: All attendees are invited and encouraged to discuss the presentations.


