CSDP Preparation Course Module V: Software Testing

1
CSDP Preparation Course
Module V: Software Testing
2
Specifications
  • The exam-specific topics covered in this module are listed below and form the basis for the outline of its content.
  • A. Types of Tests
  • B. Test Levels
  • C. Testing Strategies
  • D. Test Design
  • E. Test Coverage of Code
  • F. Test Coverage of Specifications
  • G. Test Execution
  • H. Test Documentation
  • I. Test Management

3
Objectives
  • After completing this module, you should be able to:
  • Determine the purpose of testing
  • Identify the types of tests
  • Identify the process and strategies for performing:
  • Unit testing
  • Integration testing
  • System testing
  • Acceptance testing
  • Discuss test planning and other test documentation

4
Organization
  • The organization of information for each specification topic is as follows:
  • Topic Content Slides - detail the important issues concerning each topic and support the module objectives
  • Topic Reference Slides - detail the sources for the topical content and provide additional references for review
  • Topic Quiz Slides - allow students to prepare for the exam

5
Introduction
  • Definition of Testing SW04: an activity performed for evaluating product quality, and for improving it, by identifying defects and problems.
  • Software testing consists of the dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite execution domain, against the expected behavior.
  • The current view of testing is that the right attitude towards quality is one of prevention: it is obviously much better to avoid problems than to correct them.

6
Introduction - 2
  • Software Testing is the process of executing a
    computer program for the purpose of finding
    errors.
  • Software Testing is not:
  • -- A means of determining that errors are not
    present in the program
  • -- A proof that the software performs as
    required
  • -- A demonstration that all logic paths are
    executed
  • -- The only means of verifying that the system
    meets the requirement specifications
  • -- The sole means of demonstrating the quality
    of the software.

7
Introduction - 3
  • The purpose of Testing:
  • Testing is a process of executing a program with the intent of finding errors
  • -- A good test is one that has a high probability of finding an undiscovered error
  • -- A good test uses a minimum number of tests to find a maximum number of errors
  • -- A successful test is one that finds an undiscovered error
  • Testing cannot show the absence of defects; it can only show that software errors are present.

8
Introduction - 4
  • Major issues in Software Testing:
  • Convincing stakeholders that thorough testing is vital to the success of a software project
  • Convincing project managers not to cancel or reduce testing when the project is running short on time or money
  • Convincing programmers that independent testing
    is worthwhile
  • Faulty software requirements (resulting in faulty
    functional testing)
  • Testing is time consuming and expensive (and
    tempting to reduce to save money and time)
  • Errors are often difficult to reproduce
    (particularly without proper test planning)
  • Fixing errors found during testing (should be
    fixed as a separate effort)
  • Lack of explicit test planning, test procedures,
    and test cases

9
Introduction - 5
  • Testing Principles:
  • All functional tests should be traceable to software requirements
  • Tests should be planned at the beginning of the project (requirements phase)
  • The Pareto principle applies (80% of the errors will be found in 20% of the code)
  • Testing begins with software units and ends with system tests
  • Exhaustive testing is not possible
  • Testing is more effective when conducted by a
    third party.

10
Introduction - 6
  • Software Testing Fundamentals
  • Testing-Related Terminology
  • a) Definitions of Testing-Related Terminology SW04, pp5-2
  • b) Faults vs. Failures SW04, pp5-2

11
Introduction - 7
  • Key Issues
  • a) Test selection criteria/Test adequacy
    criteria (or stopping rules) SW04, pp5-3
  • b) Testing effectiveness/Objectives for testing
    SW04, pp5-3
  • c) Testing for defect identification SW04,
    pp5-3
  • d) The oracle problem SW04, pp5-3
  • e) Theoretical and practical limitations of
    testing SW04, pp5-3
  • f) The problem of infeasible paths SW04,
    pp5-3
  • g) Testability SW04, pp5-3
  • Relationships of testing to other activities

12
Introduction References
  • SW04 Guide to the Software Engineering Body of
    Knowledge Chapter 5

13
Introduction References - 2
  • LIST OF STANDARDS
  • (IEEE610.12-90) IEEE Std 610.12-1990 (R2002), IEEE Standard Glossary of Software Engineering Terminology, IEEE, 1990.
  • (IEEE829-98) IEEE Std 829-1998, Standard for Software Test Documentation, IEEE, 1998.
  • (IEEE982.1-88) IEEE Std 982.1-1988, IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE, 1988.
  • (IEEE1008-87) IEEE Std 1008-1987 (R2003), IEEE Standard for Software Unit Testing, IEEE, 1987.
  • (IEEE1044-93) IEEE Std 1044-1993 (R2002), IEEE Standard for the Classification of Software Anomalies, IEEE, 1993.
  • (IEEE1228-94) IEEE Std 1228-1994, Standard for Software Safety Plans, IEEE, 1994.
  • (IEEE12207.0-96) IEEE/EIA 12207.0-1996 // ISO/IEC 12207:1995, Industry Implementation of International Standard ISO/IEC 12207:1995, Standard for Information Technology - Software Life Cycle Processes, IEEE, 1996.

14
Introduction Quiz
  • Two terms often associated with testing are
  • a) Faults and Failures
  • b) Verification and Validation
  • c) Alpha and Beta
  • d) Functional and Structural

15
A. Types of Tests
  • The types of tests are:
  • Unit testing
  • Integration testing
  • System testing
  • Acceptance testing

16
A. Types of Tests - 2
  • V Model (Testing Model)

17
A. Types of Tests - 3
  • X Model (Testing Model)

18
A. Types of Tests - 4
  • Unit Testing:
  • Unit testing is the testing of the smallest component of the software system, i.e., a unit or a module
  • Unit testing is normally done by the programmer (coder) who coded the module
  • Unit testing may or may not be part of the test plan
  • Unit testing normally involves white-box testing
  • Unit testing can arguably be replaced by inspections and peer reviews.

19
A. Types of Tests - 5
  • Unit testing normally includes:
  • -- Statement testing: Every statement in the module is executed at least once
  • -- Branch testing: Every decision point in the code is executed at least once
  • -- Loop testing: Every loop is repeated at least two times
  • -- Path testing: Every distinct path through the code is executed at least once (a sketch of these coverage levels follows below).
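
A minimal sketch of how these coverage levels differ in practice (the function and test values below are hypothetical, not from the module):

    def classify(values):
        """Return 'empty', 'all-positive', or 'mixed' for a list of numbers."""
        if not values:                    # decision point 1
            return "empty"
        for v in values:                  # loop
            if v <= 0:                    # decision point 2
                return "mixed"
        return "all-positive"

    # Statement coverage: every line runs at least once across the tests.
    assert classify([]) == "empty"
    assert classify([1, -2]) == "mixed"
    assert classify([1, 2]) == "all-positive"   # also repeats the loop twice

    # Here these three cases happen to give branch coverage too (each decision
    # takes both its true and false side). Path coverage is stricter still:
    # every distinct route through the code, which grows quickly with loops.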

20
A. Types of Tests - 6
  • Verifying the Software Unit
  • Types of Verification Techniques:
  • Testing: The process of executing a computer program for the purpose of finding errors
  • Code Walkthroughs: A peer review for the purpose of finding errors
  • Code Inspections: A peer review for the purpose of finding errors (more formal than walkthroughs)
  • Code Reviews: A management/technical review of a software project for the purpose of assessing progress
  • Formal Proof Techniques: Involve proving, using mathematical arguments, that a program is consistent with its specifications

21
A. Types of Tests - 7
  • Integration testing
  • Integration testing has two purposes:
  • Finding errors in implementing the design.
  • Finding errors in interfacing between components.
  • Tests are derived from the architectural design of the system
  • Integration Testing Methods (a sketch of stubs and drivers follows this list):
  • Bottom-up testing: The bottom components of the system are tested first; requires component drivers
  • Top-down testing: The top components of the system are tested first; requires component stubs
  • Big-bang testing: All components are tested at once; also called a smoke test
  • Sandwich testing: Combines a top-down strategy with a bottom-up strategy; requires the selection of a target area
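
A minimal sketch of stubs and drivers (all names here are hypothetical, not from the module): in top-down integration a stub stands in for a lower-level component that is not yet integrated, while in bottom-up integration a driver is throwaway test code that exercises a low-level component before any higher layer exists to call it.

    # Top-level component under test: depends on a lower-level inventory service.
    def place_order(item, qty, inventory):
        if inventory.in_stock(item, qty):
            return "accepted"
        return "backordered"

    # Top-down: a stub replaces the real, not-yet-integrated inventory component.
    class InventoryStub:
        def in_stock(self, item, qty):
            return qty <= 5          # canned answer, no real logic

    assert place_order("widget", 3, InventoryStub()) == "accepted"
    assert place_order("widget", 9, InventoryStub()) == "backordered"

    # Bottom-up: a driver simply calls the low-level component directly.
    def driver_test_inventory(real_inventory):
        assert real_inventory.in_stock("widget", 1)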

22
A. Types of Tests - 8
  • System Testing:
  • Functional testing: Do the system's functional requirements perform as specified in the requirements specifications?
  • Performance testing: Do the system's nonfunctional requirements perform as specified in the requirements specifications? (Nonfunctional requirements are performance, external interfaces, design constraints, and quality attributes)
  • Stress testing: The use of input data that is equal to or exceeds the capacity of the system.
  • Regression testing: Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements. IEEE Std 610.12-1990

23
A. Types of Tests - 9
  • Functional Testing
  • Functional testing compares the system's performance with its requirements
  • Functional testing does end-to-end testing
  • Each function can be associated with those system components that accomplish it (sometimes called a thread)
  • The tests are normally done one function at a time
  • Functional tests should:
  • -- Have a high probability of detecting a fault
  • -- Use an independent testing team
  • -- Know the expected actions and outputs
  • -- Test both invalid and valid inputs
  • -- Have a stopping criterion

24
A. Types of Tests - 10
  • Types of Performance tests:
  • Stress tests: Evaluate the system when all variables are at their extreme settings
  • Volume tests: Evaluate the system for handling large amounts of data
  • Timing tests: Evaluate the requirements for time to respond to a user and time to perform a function
  • Recovery tests: Evaluate the system's response to the presence of faults and the effects of failures
  • Quality tests: Evaluate the requirements for such quality attributes as reliability, maintainability, availability, usability, security, etc.

25
A. Types of Tests - 11
  • Stress Testing
  • Modeled after hardware stress testing, in which excess voltage and excess speed are applied to determine what will break first, thereby showing the weak spots in the system.
  • In software, stress testing can involve:
  • -- Adding more terminals to the system than it was originally designed for
  • -- Increasing the CPU cycle time beyond normal
  • -- Increasing the capacity of data structures
  • This approach may reveal areas that are likely to fail under abnormal conditions

26
A. Types of Tests - 12
  • Regression Testing
  • Regression testing is a selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements. IEEE Std 610.12-1990
  • A regression test identifies new faults that may have been introduced as old faults are corrected
  • It is primarily used after system maintenance is performed (a minimal example follows below)
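
As a hedged illustration (the bug and function names below are hypothetical), a regression test is often just the reproducing case for a previously fixed fault, kept in the suite so the fault cannot silently return after later maintenance:

    def parse_price(text):
        """Parse a price string such as '$1,234.56' into a float.
        An old (since-fixed) fault crashed on the thousands separator."""
        return float(text.lstrip("$").replace(",", ""))

    def test_regression_comma_grouping():
        # Reproducer for the old fault; keeps it from being reintroduced.
        assert parse_price("$1,234.56") == 1234.56

    def test_existing_behavior_still_holds():
        # Regression suites also re-check behavior that should be unchanged.
        assert parse_price("$7.00") == 7.0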

27
A. Types of Tests - 13
  • Acceptance Test
  • A test run by the customer, or on behalf of the customer, to determine acceptance or rejection of the system
  • Four types of acceptance tests:
  • -- Benchmark test: A set of test cases that reflect expected uses of the system
  • -- Pilot test: A system installed on an experimental basis
  • -- Alpha test: An in-house system test with the developing staff assuming the role of the user
  • -- Beta test: An external system test with a select subset of the eventual users of the system

28
A. References
  • SW04 Guide to the Software Engineering Body of
    Knowledge - Chapter 5
  • IEEE Std 610.12-1990

29
A. Quiz
  • ________ refers to ensuring correctness from
    phase to phase of the software development cycle.
  • a) Verification
  • b) Validation
  • c) Testing
  • d) None of the above
  • _________ involves checking the software against
    the requirements
  • a) Verification
  • b) Validation
  • c) Testing
  • d) None of the above

30
B. Test Levels
  • Software testing is usually performed at different levels along the development and maintenance processes. That is to say, the target of the test can vary: a single module, a group of such modules (related by purpose, use, behavior, or structure), or a whole system. SW04, pp5-3
  • Three big test stages can be conceptually distinguished, namely Unit, Integration, and System.
  • Unit Testing: Unit testing verifies the functioning in isolation of software pieces which are separately testable. SW04, pp5-3

31
B. Test Levels - 2
  • Integration testing is the process of verifying
    the interaction between software components.
    SW04, pp5-4
  • System testing is concerned with the behavior of
    a whole system. System testing is usually
    considered appropriate for comparing the system
    to the non-functional system requirements, such
    as security, speed, accuracy, and reliability.
    SW04, pp5-4

32
B. Test Levels - 3
  • Objectives of Testing Testing is conducted in
    view of a specific objective, which is stated
    more or less explicitly, and with varying degrees
    of precision. Stating the objective in precise,
    quantitative terms allows control to be
    established over the test process. SW04, pp5-4
  • The sub-topics listed below are the kinds of testing most often cited in the literature.
  • Acceptance/qualification testing SW04, pp5-4
  • Installation testing SW04, pp5-4
  • Alpha and beta testing SW04, pp5-4

33
B. Test Levels - 4
  • Conformance testing/Functional testing/Correctness
    testing SW04, pp5-4
  • Reliability achievement and evaluation SW04,
    pp5-4
  • Regression testing SW04, pp5-4
  • Performance testing SW04, pp5-5
  • Stress testing SW04, pp5-5

34
B. Test Levels - 5
  • Back-to-back testing SW04, pp5-5
  • Recovery testing SW04, pp5-5
  • Configuration testing SW04, pp5-5
  • Usability testing SW04, pp5-5
  • Test-driven development SW04, pp5-5

35
B. References - 1
  • SW04 Guide to the Software Engineering Body of
    Knowledge - Chapter 5

36
B. References - 2
  • LIST OF STANDARDS
  • (IEEE610.12-90) IEEE Std 610.12-1990 (R2002), IEEE Standard Glossary of Software Engineering Terminology, IEEE, 1990.
  • (IEEE829-98) IEEE Std 829-1998, Standard for Software Test Documentation, IEEE, 1998.
  • (IEEE982.1-88) IEEE Std 982.1-1988, IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE, 1988.
  • (IEEE1008-87) IEEE Std 1008-1987 (R2003), IEEE Standard for Software Unit Testing, IEEE, 1987.
  • (IEEE1044-93) IEEE Std 1044-1993 (R2002), IEEE Standard for the Classification of Software Anomalies, IEEE, 1993.
  • (IEEE1228-94) IEEE Std 1228-1994, Standard for Software Safety Plans, IEEE, 1994.
  • (IEEE12207.0-96) IEEE/EIA 12207.0-1996 // ISO/IEC 12207:1995, Industry Implementation of International Standard ISO/IEC 12207:1995, Standard for Information Technology - Software Life Cycle Processes, IEEE, 1996.

37
B. Quiz
  • ________ is associated with formal proofs of
    correctness
  • a) Validation
  • b) Verification
  • c) Testing
  • d) All the above
  • ________ is concerned with executing the software
    with test data.
  • a) Validation
  • b) Verification
  • c) Testing
  • d) All the above

38
C. Testing Strategies
  • Test techniques
  • Based on the software engineer's intuition and experience:
  • a) Ad hoc testing SW04, pp5-5
  • b) Exploratory testing SW04, pp5-5
  • Specification-based techniques
  • a) Equivalence partitioning SW04, pp5-5
  • b) Boundary-value analysis SW04, pp5-5
  • c) Decision table SW04, pp5-6
  • d) Finite-state machine-based SW04, pp5-6
  • e) Testing from formal specifications SW04,
    pp5-6
  • f) Random testing SW04, pp5-6

39
C. Testing Strategies - 2
  • Test techniques
  • Code-based techniques
  • a) Control flow-based criteria SW04, pp5-6
  • b) Data flow-based criteria SW04, pp5-6
  • c) Reference models for code-based testing
    (flow graph, call graph) SW04, pp5-6
  • Fault-based techniques SW04, pp5-6
  • a) Error guessing SW04, pp5-6
  • b) Mutation testing SW04, pp5-6
  • Usage-based techniques
  • a) Operational profile SW04, pp5-7
  • b) Software Reliability Engineered Testing SW04, pp5-7

40
C. Testing Strategies - 3
  • Test techniques
  • Techniques based on the nature of the application SW04, pp5-7
  • Object-oriented testing
  • Component-based testing
  • Web-based testing
  • GUI testing
  • Testing of concurrent programs
  • Protocol conformance testing
  • Testing of real-time systems
  • Testing of safety-critical systems
  • Selecting and combining techniques SW04, pp5-7
  • a) Functional and structural
  • b) Deterministic vs. random

41
C. Testing Strategies - 4
  • Functional Testing
  • Functional testing addresses itself to whether the program produces the correct output.
  • -- Focuses on the functional requirements of the software; also called black-box testing
  • The functional strategy uses only the requirements defined in the specification as the basis for testing
  • Attempts to find errors of the following categories:
  • -- Incorrect or missing functions
  • -- Interface errors
  • -- Errors in data structures or external database access
  • -- Performance errors
  • -- Initialization and termination errors
  • Bases the test on the external view of the system

42
C. Testing Strategies - 5
  • Structural Testing
  • Structural testing: The testing strategy is based on deriving test data from the structure of a system. The structural strategy is based on the detailed design.
  • -- Focuses on the control structure of the system design; also called white-box or glass-box testing.
  • Focuses on:
  • -- Path testing: Exercising all independent paths within a module at least once
  • -- Branch testing: Exercising all logical decisions on both their true and false sides
  • -- Loop testing: Executing all loops at their boundaries and within their operational bounds
  • -- Exercising internal data structures to ensure their correctness and availability.
  • Bases the test on the internal structure of the software

43
C. Testing Strategies - 6
  • Static Analysis
  • A testing technique that does not involve the execution of the software with data. It directly analyzes the form and structure of a product without executing the product.
  • Uses static analysis tools to scan the source text of a program and detect possible faults and anomalies.
  • Includes:
  • -- Program proving
  • -- Symbolic execution
  • -- Anomaly analysis

44
C. Testing Strategies - 7
  • Dynamic analysis
  • Dynamic analysis requires that the software be
    executed and relies on instrumenting the program
    to measure internal data and logic states as well
    as outputs.
  • The process of evaluating a program based on
    execution of the program
  • Involves execution or simulation of a development
    activity product to detect errors by analyzing
    the response of a product to sets of input data.
  • The software is exercised through the use of test
    cases
  • The resulting data is compared with the expected data to check for errors

45
C. Testing Strategies - 8
  • Dynamic Analysis
  • Dynamic functional: This technique executes test cases without giving consideration to the detailed design of the software.
  • Classified into:
  • -- Domain testing
  • -- Random testing
  • -- Adaptive perturbation testing
  • -- Cause-effect graphing

46
C. Testing Strategies - 9
  • Dynamic Analysis
  • Dynamic-Structural: This technique executes test cases that are created based upon an analysis of the software.
  • Classified into:
  • -- Domain testing
  • -- Computation testing
  • -- Automatic test data generation
  • -- Mutation analysis

47
C. References
  • SW04 Guide to the Software Engineering Body of
    Knowledge - Chapter 5

48
C. References - 2
  • LIST OF STANDARDS
  • (IEEE610.12-90) IEEE Std 610.12-1990 (R2002), IEEE Standard Glossary of Software Engineering Terminology, IEEE, 1990.
  • (IEEE829-98) IEEE Std 829-1998, Standard for Software Test Documentation, IEEE, 1998.
  • (IEEE982.1-88) IEEE Std 982.1-1988, IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE, 1988.
  • (IEEE1008-87) IEEE Std 1008-1987 (R2003), IEEE Standard for Software Unit Testing, IEEE, 1987.
  • (IEEE1044-93) IEEE Std 1044-1993 (R2002), IEEE Standard for the Classification of Software Anomalies, IEEE, 1993.
  • (IEEE1228-94) IEEE Std 1228-1994, Standard for Software Safety Plans, IEEE, 1994.
  • (IEEE12207.0-96) IEEE/EIA 12207.0-1996 // ISO/IEC 12207:1995, Industry Implementation of International Standard ISO/IEC 12207:1995, Standard for Information Technology - Software Life Cycle Processes, IEEE, 1996.

49
C. Quiz
  • What kind of testing has been included under both structural and functional strategies?
  • a) Computation testing
  • b) Domain testing
  • c) Random testing
  • d) None of the above

50
D. Test Design
  • Attributes of a Test Design:
  • Determine the features to be tested (or not tested)
  • Select the test cases to be used
  • Select the process to test the features
  • Determine the pass/fail criteria
  • Design the software test as soon as possible after the establishment of requirements. This helps:
  • -- Non-testable requirements to be found
  • -- Quality to be built in
  • -- Costs to be reduced
  • -- Time to be saved

51
D. References
  • IEEE Std 829-1998, Standard for Software Test Documentation.
  • SMC 2003 IEEE CBT/Testing 07
  • Thayer & Dorfman, Software Engineering Volume 1: The Development Process
  • Coward, P. David, "A Review of Software Testing"

52
D. Quiz
  • An engineer is tasked to verify a software release for a mission-critical system. The plan is for the release of software for verification to occur on a Monday, with verification complete the following Friday. The release turns out not to be available until Thursday. The best route for the engineer is to
  • a) verify release criteria regardless of time line
  • b) do whatever testing they can by Friday
  • c) volunteer to work the weekend
  • d) relax release criteria

53
E. Test Coverage of Code
  • Test coverage of code: The amount of code actually executed during the test process. It is stated as a percentage of the total instructions executed or paths traversed.
  • Statement coverage is the number of statements exercised by the test set divided by the total number in the module being measured; also known as the test effectiveness ratio
  • 100% coverage is achieved by having each line of code exercised at least once; 100% coverage is required, but this alone is not a sufficient indicator of a good test
  • Manual development of test data results in coverage as low as 60-80% (a measurement sketch follows below)
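
A rough sketch of how statement coverage can be measured with Python's built-in trace hook (in practice a tool such as coverage.py does this; the function under test here is hypothetical):

    import inspect
    import sys

    def square_or_zero(x):
        if x > 0:
            return x * x
        return 0

    executed = set()

    def trace(frame, event, arg):
        # Record each line executed inside the function under test.
        if event == "line" and frame.f_code.co_name == "square_or_zero":
            executed.add(frame.f_lineno)
        return trace

    sys.settrace(trace)
    square_or_zero(3)        # exercises only the positive branch
    sys.settrace(None)

    source, _ = inspect.getsourcelines(square_or_zero)
    total = len(source) - 1  # crude statement count: body lines only
    print(f"statement coverage ~ {100 * len(executed) / total:.0f}%")  # ~67%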

54
E. Test Coverage of Code - 2
  • Automated Test Tools
  • Static analysis:
  • -- Code analyzers: Check for proper syntax
  • -- Structure checkers: Check for structural flaws
  • -- Data analyzers: Check for errors in data structures
  • -- Sequence checkers: Check the sequence of events
  • Dynamic analysis: Monitors the activities of the program under operation
  • Test execution tools:
  • -- Capture and replay: Captures the activities of the test and replays them as necessary
  • -- Test case generators: Develop test cases for complete test coverage (test case generators can result in coverage of over 90%)

55
E. References
  • SMC 2003 IEEE CBT/Testing 07
  • Sommerville, Ian, Software Engineering, 5th edition, Addison-Wesley, 1996, Chapter 18.
  • Thayer & Dorfman, Software Engineering Volume 1: The Development Process
  • Coward, P. David, "A Review of Software Testing"

56
E. Quiz
  • Which of the following do NOT affect the accuracy of the reliability estimate during statistical testing?
  • I. The validity of the usage profile.
  • II. The number of test cases executed.
  • III. The programming language used to implement the code.
  • IV. The cyclomatic complexity of the code.
  • a) I and II only
  • b) II and III only
  • c) III and IV only
  • d) I and III only

57
F. Test Coverage of Specifications
  • Requirements Specifications
  • Test data is derived from the software requirements specifications
  • Test coverage: The degree to which a given test or set of tests addresses all specified requirements for a given system or component
  • Sample coverage of the functional domain is achieved by having at least one sample value from each functional equivalence partition
  • Boundary coverage of the functional domain is achieved by having at least one test case on each side of each equivalence partition boundary
  • A decision table represents logical relationships between input conditions and output actions
  • Random testing, in which tests are generated purely at random (a sketch of sample and boundary coverage follows below)
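
A small sketch of sample and boundary coverage for a hypothetical requirement ("accept ages 18 through 65, inclusive"):

    def is_eligible(age):
        """Hypothetical requirement: accept ages 18 through 65, inclusive."""
        return 18 <= age <= 65

    # Equivalence partitions: below range, in range, above range.
    # Sample coverage: at least one value from each partition.
    assert not is_eligible(10)   # partition: age < 18
    assert is_eligible(40)       # partition: 18 <= age <= 65
    assert not is_eligible(80)   # partition: age > 65

    # Boundary coverage: a test case on each side of each partition boundary.
    assert not is_eligible(17)
    assert is_eligible(18)
    assert is_eligible(65)
    assert not is_eligible(66)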

58
F. Test Coverage of Specifications - 2
  • Design Description
  • Test data is derived from the software design descriptions:
  • Path testing: Exercising all independent paths within a module
  • Branch testing: Exercising all logical decisions on both their true and false sides
  • Loop testing: Executing all loops at their boundaries and within their operational bounds
  • Interface testing: Testing interfaces between modules and groups of modules
  • Code-based testing: Test data derived from a flowchart

59
F. References
  • SMC 2003 IEEE CBT/Testing 07
  • IEEE/EIA 12207.0-1996, Standard for Information Technology - Software Life Cycle Processes, 6.3 Quality Assurance Process
  • Thayer & Dorfman, Software Engineering Volume 1: The Development Process
  • Coward, P. David, "A Review of Software Testing"

60
F. Quiz
  • Quality assurance may be applied to
  • I. Requirements
  • II. Design
  • III. Code
  • IV. Testing
  • a) I and II only
  • b) I, II, and III only
  • c) I, II, III, and IV
  • d) IV only

61
G. Test Execution
  • Why not check all of the paths?
  • There are too many paths (each branch or loop creates one to many more paths than can reasonably be tested in a finite period of time)
  • There can be a number of infeasible paths (a path that cannot be reached by a test case)

62
G. Test Execution - 2
  • Infeasible paths: An infeasible path is one which cannot be executed because no input can satisfy the combination of conditions in its branching statements
  • For example:
  • 1. Begin
  • 2. Read A
  • 3. If A > 15
  • 4. Then B = B + 1
  • 5. Else C = C + 1
  • 6. If A < 12
  • 7. Then D = D + 1
  • 8. End.
  • What path can never be executed? (A sketch of the answer follows below.)
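
One way to see the answer is to render the fragment in Python (a sketch; the variables are assumed to start at zero) and record which branch combination each input actually takes:

    def fragment(a):
        b = c = d = 0
        taken = []
        if a > 15:
            b = b + 1
            taken.append("then-1")
        else:
            c = c + 1
            taken.append("else-1")
        if a < 12:
            d = d + 1
            taken.append("then-2")
        else:
            taken.append("else-2")
        return tuple(taken)

    # Only three of the four branch combinations ever occur: no value of A
    # can satisfy both A > 15 and A < 12, so ("then-1", "then-2") is infeasible.
    print({fragment(a) for a in range(0, 30)})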

63
G. Test Execution - 3
  • Dead code: Code that is isolated by a jump statement and is never executed.
  • Example:
  • X1: Go to Y
  • X2: ...
  • X3: ...
  • X4: ...
  • Y: ...
  • If the code block X2-X4 is not entered by another jump statement, this block of code is dead, or isolated (a Python rendering follows below)
  • This is typically caused by a programmer who has created a patch to fix a problem and is too lazy to remove the old code.
  • This is an excellent place to hide a virus.
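
The same situation in a hypothetical Python snippet: statements after an unconditional transfer of control can never run unless something else jumps into them.

    def apply_patch(x):
        return x + 1    # the patch: control unconditionally leaves here
        x = x * 2       # dead code: nothing can ever reach these lines
        return x

    # Static analyzers flag such unreachable code, and some compilers
    # may drop it entirely.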

64
G. References
  • SMC 2003 IEEE CBT/Testing 07
  • Thayer & Dorfman, Software Engineering Volume 1: The Development Process
  • Coward, P. David, "A Review of Software Testing"

65
G. Quiz
  • ________ testing addresses itself to whether the
    program produces the correct output.
  • a) Regression
  • b) Functional
  • c) Static
  • d) System

66
H. Test Documentation
  • Software Testing Documents:
  • Test plan: Describes the scope, approach, resources, and schedule for testing
  • Test design specification: Specifies the details of the test approach
  • Test procedure specification: Specifies the sequence of actions for the execution of tests
  • Test case specification: Specifies inputs, predicted results, and a set of execution conditions for each test.
  • Test log: A chronological record of the execution of tests
  • Test incident report: Reports any event which requires investigation; also called a Discrepancy Report (DR)
  • Test summary report: Summarizes testing activities and results.

67
H. Test Documentation - 2
  • Testing Documents
  • Document sequence: 1. Test Plan → 2. Test Design → 3. Test Cases → 4. Test Procedures → 5. Test

  Test level            Requirements   Design   Implementation   Test
  Unit Testing          1              2,3      4,5
  Integration Testing   1              2        3,4              5
  System Testing        1              2        3,4              5
  Acceptance Testing    1              2        3                4,5

  (The numbers indicate which of the documents above are produced in each life-cycle phase, for each test level.)
68
H. Test Documentation - 3
  • Test Documentation Integration

69
H. References
  • IEEE Std 829-1998, Standard for Software Test Documentation.
  • SMC 2003 IEEE CBT/Testing 07
  • Thayer & Dorfman, Software Engineering Volume 1: The Development Process
  • Coward, P. David, "A Review of Software Testing"

70
H. Quiz
  • ________ testing is the name given to the
    functional testing that follows modification.
  • a) Regression
  • b) Functional
  • c) Static
  • d) Integration

71
I. Test Management
  • Test Planning:
  • Establishing the major tasks in conducting the test, for example:
  • -- Develop the test strategy
  • -- Develop the process
  • -- Develop the test cases
  • -- Conduct the test
  • -- Make the final report
  • Identifying the test items
  • Establishing any constraints in conducting the tests
  • Developing a testing schedule
  • Computing the cost and resources required to conduct the tests
  • Identifying the major testing risks
  • Identifying the software and hardware necessary to conduct the tests

72
I. Test Management - 2
  • The Role of the Software System Engineer:
  • Review and approve the test plan
  • Determine the quantity and quality of testing to be used
  • Develop the acceptance criteria
  • Manage and control the test readiness review (TRR) to determine:
  • -- If the software is ready for testing
  • -- If the test specifications and procedures are adequate to test the software
  • Observe and/or conduct integration, system, and acceptance tests
  • Review and verify the test results
  • Turn the final system over to the customer

73
I. Test Management - 3
  • Testing Effectiveness:
  • Reduce the number of errors passed on to the customer
  • Reduce the cost of effective testing
  • -- Example: Inspections are arguably more effective than unit testing in finding errors
  • Test management pitfalls:
  • -- Absence of a testing policy or strategy
  • -- Failing to plan for the testing
  • -- Insufficient time allocated for testing
  • -- Reducing testing when the project begins to overrun its budget
  • -- Ineffective testing (testing strategies not suitable for the application)

74
I. Test Management - 4
  • Independent Test Team
  • Testers need to take an independent view of the software to be tested:
  • -- A software developer can find only a small portion of his/her own errors (leaving the rest of them to be delivered)
  • -- It is difficult to test one's own work destructively
  • Solution: Use an independent test team
  • -- An independent test team is an organization, independent from the authority of the development manager, that is given responsibility for testing the software being developed.

75
I. Test Management - 5
  • Independent Test Team
  • Advantages of an independent test team:
  • -- Does not have a vested interest in the program
  • -- Brings a new dimension to the software system
  • -- Independently assists in the verification of the software design
  • -- Has a goal to find errors in the system
  • Disadvantages of an independent test team:
  • -- The team must learn the system
  • -- The independent-team approach gives the appearance of being more expensive than a non-independent test team
  • -- Could cause resentment within the development team, thereby resulting in their lack of cooperation

76
I. Test Management - 6
  • Essential Ingredients of a Good Test Team:
  • Detailed understanding of the system
  • System and testing experience
  • Knowledge of the internal structure of the system to be tested
  • Ability to create test cases
  • Ability to select test strategies
  • Use of comprehensive and sophisticated testing tools and simulations.

77
I. References
  • SMC 2003 IEEE CBT/Testing 07
  • Thayer & Dorfman, Software Engineering Volume 1: The Development Process
  • Coward, P. David, "A Review of Software Testing"

78
I. Quiz
  • The two prominent strategy dimensions are
  • a) Structural and functional
  • b) Functional/structural and static/dynamic
  • c) Static and dynamic
  • d) Static and Integration

79
ADDITIONAL MATERIAL
  • Test Related Measures
  • Evaluation of the program under test SW04,pp5-7
  • Evaluation of the tests performed SW04,pp5-8
  • Test Process
  • Practical Considerations SW04, pp5-8
  • Test Activities SW04, pp5-8

80
Test Related Measures
  • Evaluation of the program under test SW04, pp5-7
  • a) Program measurements to aid in planning and designing testing
  • b) Fault types, classification, and statistics
  • c) Fault density
  • d) Life test, reliability evaluation
  • e) Reliability growth models

81
Test Related Measures - 2
  • Evaluation of the tests performed SW04, pp5-8
  • a) Coverage/thoroughness measures
  • b) Fault seeding
  • c) Mutation score
  • d) Comparison and relative effectiveness of
    different techniques

82
References
  • SW04 Guide to the Software Engineering Body of
    Knowledge - Chapter 5

83
References - 2
  • LIST OF STANDARDS
  • (IEEE610.12-90) IEEE Std 610.12-1990 (R2002), IEEE Standard Glossary of Software Engineering Terminology, IEEE, 1990.
  • (IEEE829-98) IEEE Std 829-1998, Standard for Software Test Documentation, IEEE, 1998.
  • (IEEE982.1-88) IEEE Std 982.1-1988, IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE, 1988.
  • (IEEE1008-87) IEEE Std 1008-1987 (R2003), IEEE Standard for Software Unit Testing, IEEE, 1987.
  • (IEEE1044-93) IEEE Std 1044-1993 (R2002), IEEE Standard for the Classification of Software Anomalies, IEEE, 1993.
  • (IEEE1228-94) IEEE Std 1228-1994, Standard for Software Safety Plans, IEEE, 1994.
  • (IEEE12207.0-96) IEEE/EIA 12207.0-1996 // ISO/IEC 12207:1995, Industry Implementation of International Standard ISO/IEC 12207:1995, Standard for Information Technology - Software Life Cycle Processes, IEEE, 1996.

84
Quiz
  • Aiming to demonstrate that there are no faults is
  • a) constructive
  • b) destructive
  • c) both
  • d) none of the above

85
Test Process
  • Practical Considerations SW04, pp5-8
  • a) Attitudes/Egoless programming
  • b) Test guides
  • c) Test process management
  • d) Test documentation and work products
  • e) Internal vs. independent test team
  • f) Cost/effort estimation and other process
    measures
  • g) Termination
  • h) Test reuse and test patterns

86
Test Process - 2
  • Test Activities SW04, pp5-8
  • a) Planning
  • b) Test-case generation
  • c) Test environment development
  • d) Execution
  • e) Test results evaluation
  • f) Problem reporting/Test log
  • g) Defect tracking

87
References
  • SW04 Guide to the Software Engineering Body of
    Knowledge - Chapter 5

88
References - 2
  • LIST OF STANDARDS
  • (IEEE610.12-90) IEEE Std 610.12-1990 (R2002), IEEE Standard Glossary of Software Engineering Terminology, IEEE, 1990.
  • (IEEE829-98) IEEE Std 829-1998, Standard for Software Test Documentation, IEEE, 1998.
  • (IEEE982.1-88) IEEE Std 982.1-1988, IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE, 1988.
  • (IEEE1008-87) IEEE Std 1008-1987 (R2003), IEEE Standard for Software Unit Testing, IEEE, 1987.
  • (IEEE1044-93) IEEE Std 1044-1993 (R2002), IEEE Standard for the Classification of Software Anomalies, IEEE, 1993.
  • (IEEE1228-94) IEEE Std 1228-1994, Standard for Software Safety Plans, IEEE, 1994.
  • (IEEE12207.0-96) IEEE/EIA 12207.0-1996 // ISO/IEC 12207:1995, Industry Implementation of International Standard ISO/IEC 12207:1995, Standard for Information Technology - Software Life Cycle Processes, IEEE, 1996.

89
Quiz
  • Aiming to find faults is a _________ process.
  • a) destructive
  • b) constructive.
  • c) both
  • d) None of the above