Software Testing Techniques
Group No. 7: Qasim Ali (Roll No. 1425), Assad Ali (1407), Akber Ali (1406), Touseef Khadim (1427)
SOFTWARE TESTING
Software testing is the process of verifying and validating that a software application or program:
1. meets the business and technical requirements that guided its design and development, and
2. works as expected.
Its main purposes are verification, validation, and defect finding.
Verification
The verification process confirms that the software meets its technical specifications. A "specification" is a description of a function in terms of a measurable output value given a specific input value under specific preconditions. A simple specification may be along the lines of: "a SQL query retrieving data for a single account against the multi-month account-summary table must return these eight fields <list>, ordered by month, within 3 seconds of submission."
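A specification like the one above is measurable, so the verification check can be written directly as an automated test. A rough sketch in Python, using an in-memory SQLite table as a stand-in for the account-summary table (the schema and field names are illustrative assumptions, not from the slides):

```python
import sqlite3
import time

# Hypothetical schema standing in for the "multi-month account-summary
# table" named in the specification; the eight fields are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE account_summary (
    account_id TEXT, month TEXT, opening REAL, closing REAL,
    credits REAL, debits REAL, fees REAL, interest REAL)""")
conn.execute("INSERT INTO account_summary VALUES ('A1','2024-01',0,10,10,0,0,0)")

start = time.monotonic()
cursor = conn.execute(
    "SELECT * FROM account_summary WHERE account_id = ? ORDER BY month",
    ("A1",))
rows = cursor.fetchall()
elapsed = time.monotonic() - start

# Verification asserts exactly the measurable outputs the spec names:
assert len(cursor.description) == 8, "spec: exactly eight fields"
assert elapsed < 3.0, "spec: results within 3 seconds of submission"
```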
Validation
The validation process confirms that the software meets the business requirements. A simple example of a business requirement is: "After choosing a branch office name, information about the branch's customer account managers will appear in a new window. The window will present manager identification and summary information about each manager's customer base: <list of data elements>." Other requirements provide details on how the data will be summarized, formatted, and displayed.
Defect finding
A defect is a variance between the expected and actual result. The defect's ultimate source may be traced to a fault introduced in the specification, design, or development (coding) phases.
INTRODUCTION
Software testing is the process used to identify the correctness, completeness, and quality of developed computer software. It is the process of executing a program/application under positive and negative conditions, by manual or automated means. It checks:
• Specification
• Functionality
• Performance
OBJECTIVES
• Uncover as many errors (or bugs) as possible in a given product.
• Demonstrate that a given software product matches its requirement specifications.
• Validate the quality of the software using minimum cost and effort.
• Generate high-quality test cases, perform effective tests, and issue correct and helpful problem reports.
Error, Bug, Fault & Failure
• Error: a human action that produces an incorrect result and thereby introduces a fault.
• Bug: the presence of an error at the time of execution of the software.
• Fault: a state of the software caused by an error.
• Failure: a deviation of the software from its expected result; it is an event.
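A contrived Python fragment can make the chain concrete (the function and values are invented for illustration): the programmer's error leaves a fault in the code, and executing that code produces the failure.

```python
def average(values):
    # Fault: a coding error left "len(values) - 1" where "len(values)"
    # belongs, so the software is in an incorrect state.
    return sum(values) / (len(values) - 1)

# Failure: executing the faulty code deviates from the expected result.
result = average([2, 4, 6])   # expected 4.0
print(result)                  # actual 6.0 -- the failure event
```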
SDLC (Software Development Life Cycle)
• The standard model used worldwide to develop software.
• A framework that describes the activities performed at each stage of a software development project.
• Necessary to ensure the quality of the software.
• The logical steps taken to develop a software product.
Testing Life Cycle
Project Initiation → System Study → Test Plan → Design Test Cases → Execute Test Cases (manual/automated) → Report Defects → Regression Test → Analysis → Summary Reports
Test Plan
A test plan is a systematic approach to testing a system, i.e., the software. The plan typically contains a detailed understanding of what the eventual testing workflow will be.
Test Case
A test case is a specific procedure for testing a particular requirement. It will include:
• identification of the specific requirement tested
• test case success/failure criteria
• the specific steps to execute the test
• test data
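Expressed in code, those four parts map naturally onto an automated test. A hedged sketch using Python's unittest, where the requirement ID and the window_title function are hypothetical placeholders (the branch-window requirement echoes the validation example earlier):

```python
import unittest

def window_title(branch):
    # Hypothetical function under test.
    return f"Account Managers - {branch}"

class TestBranchWindow(unittest.TestCase):
    """Test case for hypothetical requirement REQ-017: choosing a
    branch office opens a window identifying that branch."""

    def test_title_includes_branch_name(self):
        branch = "Faisalabad"                # test data
        title = window_title(branch)          # steps to execute the test
        self.assertIn(branch, title)          # success/failure criteria

if __name__ == "__main__":
    unittest.main()
```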
Verification vs Validation
• Verification: the software should conform to its specification. (Are we building the product right?)
• Validation: the software should do what the user really requires. (Are we building the right product?)
Testing Methodologies
• Black box testing
• White box testing
Black box testing
• No knowledge of the internal program design or code is required.
• Tests are based on requirements and functionality.
White box testing
• Knowledge of the internal program design and code is required.
• Tests are based on coverage of code statements, branches, paths, and conditions.
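The difference shows up in how tests are chosen. In the sketch below (the classify function is invented for illustration), the black-box tests come from the stated requirement alone, while the white-box tests are picked by reading the code so that every branch, including error handling, is exercised:

```python
def classify(age):
    # Hypothetical function under test.
    if age < 0:
        raise ValueError("age cannot be negative")
    elif age < 18:
        return "minor"
    else:
        return "adult"

# Black-box tests: derived from the requirement alone (inputs -> outputs),
# with no reference to the code's structure.
assert classify(5) == "minor"
assert classify(30) == "adult"

# White-box tests: chosen by reading the code so every branch is covered,
# including the error path a requirements-only view might miss.
try:
    classify(-1)                     # exercises the ValueError branch
except ValueError:
    pass
else:
    raise AssertionError("negative age should be rejected")
assert classify(17) == "minor"       # boundary just below the age-18 branch
assert classify(18) == "adult"       # boundary on the branch itself
```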
Black box testing
[Diagram: test inputs and events, derived from the requirements, are fed to the system as a black box, and only the outputs are observed.]
White box testing
[Diagram: test data is derived from the component code; the tests run against the code and the test outputs are checked.]
V-Model: test levels
Each specification level on the left of the V is checked by a matching test level on the right:
• Business Requirements ↔ Acceptance Testing
• Project Specification ↔ Integration Testing in the Large
• System Specification ↔ System Testing
• Design Specification ↔ Integration Testing in the Small
• Code ↔ Component Testing
V-Model: late test design
[Diagram: the same V, but the tests for each level, from business requirements down to code, are designed only just before they are run.] Design tests early? "We don't have time to design tests early."
V-Model: early test design
[Diagram: the same V, with tests for each level designed as soon as its specification is written, and run later when the matching test level is reached: design tests early, run tests late.]
Early test design
• Test design finds faults.
• Faults found early are cheaper to fix.
• The most significant faults are found first.
• Faults are prevented, not built in.
• No additional effort is required; test design is simply re-scheduled earlier.
• Requirement changes are triggered by test design, so ambiguities surface early.
Early test design helps to build quality in and stops fault multiplication.
Experience report: Phase 1
• Plan: 2 months development, 2 months testing. Actual: released because it "has to go in", but it didn't work.
• Results: 150 faults found in testing; 50 faults found by users in the first month; users not happy.
• Quality: fraught, with lots of developer overtime.
Experience report: Phase 2
• Plan: 2 months development, 6 weeks testing. Actual: acceptance test took a full week (versus half a day in Phase 1) and the release went in on time.
• Results: 50 faults found in testing; 0 faults found by users in the first month; happy users!
• Quality: smooth, not much for developers to do.
Testing Levels
• Unit testing
• Integration testing
• System testing
UNIT TESTING
• Tests each module individually.
• Follows white box testing (exercises the logic of the program).
• Done by developers.
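A minimal example of a developer-written unit test, exercising one module in isolation (the discount function is a hypothetical module under test):

```python
import unittest
from math import isclose

def discount(price, percent):
    # Module under test: a single function, tested in isolation.
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

class DiscountUnitTest(unittest.TestCase):
    def test_normal_case(self):
        self.assertTrue(isclose(discount(200.0, 25), 150.0))

    def test_invalid_percent_rejected(self):
        # White-box: the developer knows the guard clause exists.
        with self.assertRaises(ValueError):
            discount(100.0, 120)

if __name__ == "__main__":
    unittest.main()
```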
INTEGRATION TESTING
• Once all the modules have been unit tested, integration testing is performed.
• It is systematic testing: tests are produced to identify errors associated with interfacing.
• Types: Big Bang, Top Down, Bottom Up, and Mixed integration testing.
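An interface-level sketch (both classes are invented for illustration): each module may pass its own unit tests, and the integration test checks that they agree across the interface, where mismatched arguments, units, or rounding would otherwise hide:

```python
import unittest

# Two already unit-tested modules, now exercised through their interface.
class TaxCalculator:
    def tax(self, amount):
        return round(amount * 0.17, 2)

class InvoiceService:
    def __init__(self, calculator):
        self.calculator = calculator   # the interface under test

    def total(self, net):
        return net + self.calculator.tax(net)

class InvoiceIntegrationTest(unittest.TestCase):
    def test_modules_agree_on_the_interface(self):
        # Errors here (wrong argument order, wrong units, wrong rounding)
        # are exactly the interfacing defects integration testing targets.
        service = InvoiceService(TaxCalculator())
        self.assertEqual(service.total(100.0), 117.0)

if __name__ == "__main__":
    unittest.main()
```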
SYSTEM TESTING
• The system as a whole is tested to uncover requirement errors.
• Verifies that all system elements work properly and that overall system function and performance have been achieved.
• Types: Alpha Testing, Beta Testing, Acceptance Testing, Performance Testing.
• Alpha Testing: carried out by the test team within the developing organization.
• Beta Testing: performed by a selected group of friendly customers.
• Acceptance Testing: performed by the customer to determine whether to accept or reject delivery of the system.
• Performance Testing: carried out to check whether the system meets the non-functional requirements identified in the SRS document.
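As a rough illustration of a performance check against a non-functional requirement (the operation, data size, and 0.5-second threshold are assumptions, not from any real SRS):

```python
import time

def search(records, key):
    # Hypothetical operation under test.
    return [r for r in records if key in r]

records = [f"record-{i}" for i in range(100_000)]

# Illustrative non-functional requirement: a search over 100k records
# must complete within 0.5 seconds.
start = time.monotonic()
search(records, "record-99999")
elapsed = time.monotonic() - start
assert elapsed < 0.5, f"performance requirement missed: {elapsed:.3f}s"
```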
Types of Performance Testing:
• Stress Testing
• Volume Testing
• Configuration Testing
• Compatibility Testing
• Regression Testing
• Recovery Testing
• Maintenance Testing
• Documentation Testing
• Usability Testing
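Of these, regression testing is the one most teams automate: after every change, the existing suite is re-run to confirm that previously working behaviour still works. A minimal sketch (the module and its tests are hypothetical):

```python
import unittest

def slugify(title):
    # Existing, already-tested behaviour.
    return title.strip().lower().replace(" ", "-")

class RegressionSuite(unittest.TestCase):
    # Re-run unchanged after every modification to slugify(); a failure
    # here means a change has broken previously working behaviour.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Software Testing"), "software-testing")

    def test_surrounding_whitespace_dropped(self):
        self.assertEqual(slugify("  Unit  "), "unit")

if __name__ == "__main__":
    unittest.main()
```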
DISCUSSION
• To be cost-effective, testing must be concentrated on the areas where it will be most effective.
• Testing should be planned so that, whenever testing is stopped and for whatever reason, the most effective testing possible in the time allotted has already been done.
• In the absence of an organizational testing policy, too much effort and money may be spent on testing, attempting to achieve a level of quality that is impossible or unnecessary.
THANK YOU
Govt. Municipal Degree College, Faisalabad
