
Statistical Procedures for Certification of Software Systems

Thomas Stieltjes Institute for Mathematics

© Corro Ramos, Isaac (2009)
A catalogue record is available from the Eindhoven University of Technology Library
ISBN: 978-90-386-2098-5
NUR: 916
Subject headings: Bayesian statistics, reliability growth models, sequential testing, software release, software reliability, software testing, stopping time, transition systems
Mathematics Subject Classification: 62L10, 62L15, 68M15
Printed by Printservice TU/e
Cover design by Paul Verspaget
This research was supported by the Netherlands Organisation for Scientific Research (NWO) under project number 617.023.047.

Statistical Procedures for Certification of Software Systems
Thesis

submitted in fulfilment of the requirements for the degree of doctor
at the Eindhoven University of Technology, by authority of the
Rector Magnificus, prof.dr.ir. C.J. van Duijn, to be defended in
public before a committee appointed by the Doctorate Board
on Tuesday 15 December 2009 at 16.00

by

Isaac Corro Ramos

born in Seville, Spain

This thesis has been approved by the promotors:
prof.dr. K.M. van Hee
and
prof.dr. R.W. van der Hofstad

Copromotor:
dr. A. Di Bucchianico

Contents
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 The importance of software testing . . . . . . . . . . . . . . . 1
1.1.2 Software failure vs. fault . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 Black-box vs. model-based testing . . . . . . . . . . . . . . . 3
1.1.4 When to stop testing . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Goal and outline of the thesis . . . . . . . . . . . . . . . . . . . . . . 4
2 Probability Models in Software Reliability and Testing 9
2.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Stochastic processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Counting processes . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.2 Basic properties . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.3 Property implications . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Software testing framework . . . . . . . . . . . . . . . . . . . . . . . 18
2.3.1 Common notation . . . . . . . . . . . . . . . . . . . . . . . . 18
2.3.2 Reliability growth models . . . . . . . . . . . . . . . . . . . . 19
2.3.3 Stochastic ordering and reliability growth . . . . . . . . . . . 21
2.4 Classification of software reliability growth models . . . . . . . . . . 23
2.4.1 Previous work on model classification . . . . . . . . . . . . . 23
2.4.2 Classification based on properties of stochastic processes . . . 26
2.5 General order statistics models . . . . . . . . . . . . . . . . . . . . . 27
2.5.1 Jelinski-Moranda model . . . . . . . . . . . . . . . . . . . . . 30
2.5.2 Geometric order statistics model . . . . . . . . . . . . . . . . 32
2.6 Non-homogeneous Poisson process models . . . . . . . . . . . . . . 33
2.6.1 Goel-Okumoto model . . . . . . . . . . . . . . . . . . . . . . 35
2.6.2 Yamada S-shaped model . . . . . . . . . . . . . . . . . . . . . 36
2.6.3 Duane (power-law) model . . . . . . . . . . . . . . . . . . . . 37
2.7 Linking GOS and NHPP models . . . . . . . . . . . . . . . . . . . . 38
2.7.1 A note on NHPP-infinite models . . . . . . . . . . . . . . . . 40
2.8 Bayesian approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.9 Some other models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.9.1 Schick-Wolverton model . . . . . . . . . . . . . . . . . . . . . 42
3 Statistical Inference for Software Reliability Growth Models 45
3.1 Data description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.2 Trend analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.3 Model type selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.4 Model estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.4.1 ML estimation for GOS models . . . . . . . . . . . . . . . . . 56
Jelinski-Moranda model . . . . . . . . . . . . . . . . . . . . . 57
3.4.2 ML estimation for NHPP models . . . . . . . . . . . . . . . . 58
Goel-Okumoto model . . . . . . . . . . . . . . . . . . . . . . 58
Duane (power-law) model . . . . . . . . . . . . . . . . . . . . 59
3.5 Model validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.6 Model interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4 A New Statistical Software Reliability Tool 65
4.1 General remarks about the implementation . . . . . . . . . . . . . . 65
4.2 Main functionalities . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.2.1 Data menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2.2 Graphics menu . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2.3 Analysis menu . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.2.4 Help menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.3 Two examples of applying reliability growth models in software development . . . . . . . . . . . . . . . . . . 78
4.3.1 Administrative software at an insurance company . . . . . . . 78
4.3.2 A closable dam operating system . . . . . . . . . . . . . . . . 83
5 Statistical Approach to Software Reliability Certification 89
5.1 Previous work on software reliability certification . . . . . . . . . . . 90
5.1.1 Certification procedure based on expected time to next failure 90
5.1.2 Certification procedure based on fault-free system . . . . . . . 92
5.2 Bayesian approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.3 Bayesian release procedure for software reliability growth models with
independent times between failures . . . . . . . . . . . . . . . . . . . 97
5.3.1 Jelinski-Moranda and Goel-Okumoto models . . . . . . . . . 99
Case 1: N and λ deterministic . . . . . . . . . . . . . . . . . 99
Case 2: N known and fixed, λ Gamma distributed . . . . . . 100
Case 3: N Poisson distributed, λ known and fixed (Goel-Okumoto model) . . . . . . . . . . . . . . . . . . . . 102
Case 4: N Poisson and λ Gamma distributed (full Bayesian approach) . . . . . . . . . . . . . . . . . . . . 103
5.3.2 Run model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Case 1: N and p deterministic . . . . . . . . . . . . . . . . . 107
Case 2: N Poisson distributed, p known and fixed . . . . . . 107
Case 3: N known and fixed, p Beta distributed . . . . . . . . 109
Case 4: N Poisson and p Beta distributed (full Bayesian approach) . . . . . . . . . . . . . . . . . . . . 110
6 Performance of the Certification Procedure 111
6.1 Jelinski-Moranda model . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.1.1 Case 1: N and λ deterministic . . . . . . . . . . . . . . . . . 111
6.1.2 Case 2: N known and fixed, λ Gamma distributed . . . . . . 112
6.1.3 Case 3: N Poisson distributed, λ known and fixed (Goel-Okumoto model) . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.1.4 Case 4: N Poisson and λ Gamma distributed (full Bayesian approach) . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.2 Run model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
6.2.1 Case 1: N and p deterministic . . . . . . . . . . . . . . . . . 124
6.2.2 Case 2: N Poisson distributed, p known and fixed . . . . . . 124
6.2.3 Case 3: N known and fixed, p Beta distributed . . . . . . . . 126
6.2.4 Case 4: N Poisson and p Beta distributed (full Bayesian approach) . . . . . . . . . . . . . . . . . . . . . . . . . 127
7 Model-Based Testing Framework 131
7.1 Labelled transition systems and a diagram technique for representation . . 132
7.2 Example of modelling software as a labelled transition system . . . . 134
7.3 Error distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.3.1 Binomial distribution of error-marked transitions . . . . . . . 137
7.3.2 Poisson distribution of error-marked transitions . . . . . . . . 139
7.4 Testing process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.5 Walking Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
7.5.1 Walking function update for labelled transition systems . . . 146
7.5.2 Walking function update for acyclic workflow transition systems . . 148
7.6 Common notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
8 Statistical Certification Procedures 155
8.1 Certification procedure based on the number of remaining error-marked
transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
8.2 Certification procedure based on the survival probability . . . . . . . 157
8.3 Practical application . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
8.3.1 General setup . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
8.3.2 Performance of the stopping rules . . . . . . . . . . . . . . . 162
9 Testing the Test Procedure 167
9.1 Generating random models . . . . . . . . . . . . . . . . . . . . . . . 167
9.2 Quality of the procedure . . . . . . . . . . . . . . . . . . . . . . . . . 170
9.3 Stresser: a tool for model-based testing certification . . . . . . . . . 173
9.3.1 Creating labelled transition systems . . . . . . . . . . . . . . 173
9.3.2 Error distribution . . . . . . . . . . . . . . . . . . . . . . . . 173
9.3.3 Parameters of testing: walking strategy and stopping rule . . 175
9.3.4 Collecting results . . . . . . . . . . . . . . . . . . . . . . . . . 175
9.3.5 Further remarks . . . . . . . . . . . . . . . . . . . . . . . . . 176
Summary 177
Bibliography 179
Index 191
About the author 195

Chapter 1
Introduction
In this chapter we first give a brief overview of software testing theory. We emphasize the different approaches to software testing found in the literature and the common problems that have been studied during the past four decades. Afterwards, we introduce the main goals of our research and the outline of this thesis.
1.1 Motivation
The main goal of this section is to provide a clear motivation for our work. We first discuss the importance of software testing in Section 1.1.1. A common problem in software testing theory is that the terminology used is often confusing. For that reason, in Section 1.1.2 we introduce consistent terminology that will be used throughout this thesis. In Section 1.1.3 we present the two main approaches to software testing (black-box and model-based testing). Finally, in Section 1.1.4 we consider the decision problem of when to stop testing and the role of statistical models in answering this question.
1.1.1 The importance of software testing
Our first goal is to answer the question of why software testing is important. If a software user is asked, the answer would likely be that software often fails. The study of software systems during the past decades has revealed that practically all software systems contain faults, even after they have passed an acceptance test and are in operational use. Software faults are of a special nature since they are due to human design or implementation mistakes. Since humans are fallible (and so are software developers), software systems will have faults. Software systems are becoming so complex that, even if the number of possible test cases is theoretically finite, which is not always the case (for example, if unbounded input strings are allowed, then the number of test cases is infinite), their execution takes an unacceptably long time in practice. Hence, it is impossible from a practical, or even theoretical, point of view to test them exhaustively. Therefore, it is most likely that complex software systems have faults. We can improve upon this situation by designing rigorous test procedures. A test can be defined as the act of executing software with test cases with the purpose of finding faults or showing correct software execution (cf. Jorgensen (2002)[Chapter 1]). A test case is associated with the software behaviour, since after its execution testers are able to determine whether a software system has met the corresponding specifications or not. Testing the software against specific acceptance criteria or requirements is a way to determine whether the software meets the quality demands. In that sense, testing can be regarded as a procedure to measure the quality of the software. Testing also helps to detect (and repair) faults in the system. As long as faults are found and repaired, the number of remaining faults should decrease (although new faults may be introduced during the repair phase), resulting in a more reliable system. Here testing can be regarded as a procedure to improve software quality. Sound test designs should include lists of inputs and expected outputs, as well as documentation of the performed tests. Tests must be checked in order to avoid executing test cases without prior analysis of the requirements, and to avoid mistaking test faults for real software faults. There is a vast literature on software testing, starting in the 1970s, with Myers (1979) being one of the first monographs in the field. For more recent works we refer to Beizer (1990), Jorgensen (2002) or Patton (2005).
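The infeasibility of exhaustive testing can be made concrete with a back-of-the-envelope calculation (a hypothetical illustration, not taken from the thesis; the execution rate is an invented, optimistic assumption): even a function of just two 32-bit integer arguments has an input space far too large to enumerate.

```python
# Back-of-the-envelope illustration (hypothetical numbers): counting the
# test cases needed to exhaustively test a function of two 32-bit integers.
cases = (2 ** 32) ** 2                  # every input pair: 2^64 test cases

tests_per_second = 10 ** 9              # assumed, very optimistic rate
seconds_per_year = 60 * 60 * 24 * 365

years_needed = cases / (tests_per_second * seconds_per_year)
print(f"{cases:.3e} test cases, about {years_needed:.0f} years at 1e9 tests/s")
```

At these assumed rates the run would take centuries, which is the practical sense in which exhaustive testing is impossible even when the input space is finite.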
1.1.2 Software failure vs. fault
The definition of a software fault is a delicate matter, since vague or confusing definitions are often found in the software testing literature. In this thesis we adopt the following terminology: when a deviation of software behaviour from user requirements is observed, we say that a failure has occurred. On the other hand, a fault (error, bug, etc.) in the software is defined as an erroneous piece of code that causes the occurrence of a failure. For us, a software fault occurs when at least one of the following rules (cf. Patton (2005)[Chapter 1]) is true:
1. The software does not do something that its specifications say it should do.
2. The software does something that its specifications say it should not do.
3. The software is difficult to understand, hard to use, slow or (in the software tester’s eyes) will be viewed by the end user as just plain “not right”.
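The failure/fault distinction can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the function and its one-line specification are invented for the example): the fault is a single erroneous line of code, and a failure is each observed deviation from the specification.

```python
# Hypothetical example: the specification says absolute_value(x) must
# return x for x >= 0 and -x otherwise.

def absolute_value(x):
    if x < 0:
        return x        # the FAULT: an erroneous line (should be -x)
    return x

def meets_specification(x, observed):
    # The user requirement against which behaviour is judged.
    expected = x if x >= 0 else -x
    return observed == expected

# A FAILURE occurs whenever the observed behaviour deviates from the
# requirement; the single fault only causes failures for some inputs.
failing_inputs = [x for x in (-3, -1, 0, 2)
                  if not meets_specification(x, absolute_value(x))]
print(failing_inputs)   # only the negative inputs expose the fault
```

One fault can thus produce many failures (here, one per negative input), while inputs such as 0 and 2 reveal nothing, which is why a system can pass an acceptance test and still contain faults.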
There are many types of software faults, each with its own impact on the use of software systems. Classifications of software faults provide insight into the factors that lead to programming mistakes and help to prevent these faults in the future. Faults can be classified in several ways according to different criteria: impact on the system (severity), difficulty and cost of repairing, frequency of occurrence, etc. Taxonomies of software faults have been widely studied in the software testing literature (see e.g. Basili and Perricone (1984), Beizer (1990)[Chapter 2], Du and Mathur (1998), Sullivan and Chillarege (1991) and Tsipenyuk et al. (2005)). One of the main problems with this kind of classification is ambiguity. Most of the authors agree that their classification schemes cannot avoid this ambiguity, since the interpretation of the categories is subject to the point of view of the corresponding fault analyst. The following two classification schemes give a good overview of software fault taxonomies. One of the first classifications of software faults can be found in Myers (1979)[Chapter 3], where faults are classified into seven different categories: data-reference (uninitialized variables, array references out of bounds, etc.), data-declaration (variables not declared, attributes of a variable not stated, etc.), computation (division by zero, computations on non-arithmetic variables, etc.), comparison (incorrect Boolean expressions, comparisons between variables of different types, etc.), control-flow (infinite loops,

)
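Some of Myers's categories correspond directly to runtime errors that a modern language makes explicit. The snippet below is a hypothetical Python rendering (Myers's own examples were not tied to any particular language) of a data-reference fault and a computation fault from the list above.

```python
# Hypothetical Python illustrations of two of Myers's fault categories.

def last_element(items):
    # Data-reference fault: array reference out of bounds
    # (valid indices run from 0 to len(items) - 1).
    return items[len(items)]

def mean(values):
    # Computation fault: division by zero when the list is empty.
    return sum(values) / len(values)

# Each fault only causes a failure on particular inputs, which is one
# reason such faults can survive an acceptance test.
for fn, arg in [(last_element, [1, 2, 3]), (mean, [])]:
    try:
        fn(arg)
    except (IndexError, ZeroDivisionError) as exc:
        print(f"{fn.__name__} failed with {type(exc).__name__}")
```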