Agenda • Software Lifecycle • Software Testing • The Software Testing Life Cycle • Requirement Analysis • Test Planning • Test Preparation • Test Environment Readiness • Test Execution • Test Cycle Closure • TDD
Software Lifecycle ▪ Software Lifecycle: the period of time that begins when a software product is conceived and ends when the software is no longer available for use. ▪ The software lifecycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and, sometimes, a retirement phase. ▪ Note that these phases may overlap or be performed iteratively.
Testing in the Software Lifecycle ▪ Testing must be integrated into the software lifecycle to succeed. ▪ This is true whether the particular lifecycle chosen is sequential, incremental, iterative, or spiral. ▪ Proper alignment between the testing process and other processes in the lifecycle is critical for success.
Example: Sequential SDLC ▪ Requirements are defined early and formally. ▪ Testing can start early, and the system test level can follow an analytical requirements-based test strategy. ▪ Using a requirements-based strategy in a sequential model, the test team would start planning and designing tests early in the project, following an analysis of the requirements specification to identify test conditions. ▪ This planning, analysis, and design work might identify defects in the requirements, making testing a preventive activity. Failure detection would start later in the lifecycle, once system test execution began.
Example: Incremental SDLC ▪ The test team won't receive a complete set of requirements early in the project, if ever; requirements change and evolve throughout development. ▪ Rather than analyzing extensively documented requirements at the outset of the project, the test team can instead identify and prioritize the key quality risk areas associated with the content of each sprint, ▪ following an analytical risk-based test strategy.
Testing Alignment to SDLC • No matter what the lifecycle, and especially with the more fast-paced agile lifecycles, good change management and configuration management are critical for testing. • A lack of proper change management results in an inability of the test team to keep up with what the system is and what it should do.
The Software Testing Life Cycle (STLC) ◼ The sequence of activities conducted to perform software testing. ◼ Consists of six phases.
The Software Testing Life Cycle (STLC), shown as a cycle: Requirement Analysis → Test Planning → Test Preparation → Test Environment Readiness → Test Execution → Test Cycle Closure.
Requirement Analysis • Review Requirements: Requirements are reviewed by the Test Team. • Requirement Traceability Matrix (RTM): Requirements are captured in Excel or in a tool to track testing coverage (requirements vs. tests); a small sketch follows below. • Requirement Walk-through Meeting: The test team meets with the BAs and Dev to address any questions, gaps, or concerns. • Prioritize Requirements (Risk-Based Testing): The Test Lead works with BAs and SMEs to prioritize the requirements (High, Medium, Low). • Identify and Capture Questions about Requirements: Questions, gaps, or concerns are identified and captured in a query log.
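In practice the RTM is often just a spreadsheet; as a minimal sketch of the same idea, the requirement IDs and test names below are hypothetical examples, not from any real project:

```python
# Minimal sketch of a Requirement Traceability Matrix (RTM).
# Requirement IDs and test case names are hypothetical.
rtm = {
    "REQ-001": ["TC-login-valid", "TC-login-invalid-password"],
    "REQ-002": ["TC-password-reset"],
    "REQ-003": [],  # no tests mapped yet -> a coverage gap
}

def coverage_gaps(matrix):
    """Return the requirement IDs that have no tests mapped to them."""
    return [req for req, tests in matrix.items() if not tests]

if __name__ == "__main__":
    print("Requirements without test coverage:", coverage_gaps(rtm))
```

The point of the matrix is exactly this kind of check: every requirement should map to at least one test, so gaps in coverage are visible before execution starts.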
Test Planning • The Test Plan is Defined: The Test Lead defines the testing strategy for the project and captures it in a document called the Test Plan. • Test Effort Estimation: The Test Lead determines the level of effort required to complete the testing successfully.
Test Preparation • Once the requirement walk-through is completed, test preparation can begin. • Tests are Identified: The Test Team identifies the tests needed to validate each requirement. • Peer Review: Tests are reviewed by the QA Lead to ensure proper coverage between tests and requirements. • Test Cases are Created: Information about the test, step-by-step instructions, and expected results are captured (see the sketch below). • Test Cases are Reviewed and Finalized: Test cases are reviewed to ensure they are ready for execution.
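As a rough illustration of the information a written test case captures, here is a minimal sketch; the field names and the sample case are invented for illustration, not a prescribed template:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Minimal sketch of the fields a written test case typically captures.
    case_id: str
    requirement_id: str          # links the case back to the RTM
    steps: list[str] = field(default_factory=list)
    expected_result: str = ""
    priority: str = "Medium"     # High / Medium / Low, per risk-based prioritization

# Hypothetical example case
tc = TestCase(
    case_id="TC-login-valid",
    requirement_id="REQ-001",
    steps=["Open the login page",
           "Enter a valid username and password",
           "Click 'Sign in'"],
    expected_result="The user lands on the dashboard page",
    priority="High",
)
```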
Test Environment Readiness • Once the application has been developed, the Test Environment can be set up. • Test Environment Setup: The development team sets up the test environment where testing will take place. • Acceptance Test Performed by QA: The QA Team performs an acceptance test to ensure the application is ready for formal testing to begin.
Test Execution Activities • Once the Test Environment Setup and Test Preparation are completed, Test Execution can begin. • Test Execution: Test cases are executed and actual results are captured; a test either passes or fails. • Defect Reporting: If a test case fails because of a problem, the problem is reported as a defect. • Defect Resolution: Defects are resolved and then reassigned back to the test team for retesting. • Regression Testing: Testing done to ensure that new defects were not introduced by the defect fixes. • User Acceptance Testing: Testing performed by the business before accepting the application.
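A minimal sketch of how execution results and linked defects might be recorded; the statuses and record fields are illustrative only, not any particular test-management tool's schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExecutionRecord:
    # One row of an execution log: a test either passes or fails.
    case_id: str
    status: str                      # "Pass" or "Fail"
    actual_result: str
    defect_id: Optional[str] = None  # set when a failure is reported as a defect

results = [
    ExecutionRecord("TC-login-valid", "Pass", "User reached the dashboard"),
    ExecutionRecord("TC-password-reset", "Fail",
                    "Reset e-mail never arrived", defect_id="DEF-042"),
]

failed = [r for r in results if r.status == "Fail"]
print(f"Executed: {len(results)}, Failed: {len(failed)}")
for r in failed:
    print(f"  {r.case_id} -> defect {r.defect_id}: {r.actual_result}")
```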
Test Cycle Closure • Once Test Execution is complete, Test Cycle Closure can begin. • Execution Summary Report: Summary of all tests executed, identified defects, and defects not yet resolved. • Meeting with Stakeholders: The execution summary report is presented to the stakeholders for sign-off. • Lessons Learned Meeting: Meeting to discuss what went right and what went wrong.
TO STUDY AND MEMORIZE: REQUIREMENT ANALYSIS ◼ Review requirements ◼ Identify and capture questions about the requirements ◼ Meet with the SMEs and Dev to address questions, determine what is in scope and out of scope for testing, and identify requirement priorities ◼ The RTM is created. TEST PLANNING ◼ The Test Plan is defined ◼ The testing effort estimate is revisited. TEST PREPARATION ◼ Identify the tests needed to verify the requirements ◼ The QA Lead reviews the identified tests to ensure nothing was missed ◼ Test cases are created ◼ Test cases are reviewed to ensure they are ready for execution. TEST ENVIRONMENT READINESS ◼ The test environment is set up by the developers ◼ QA performs an acceptance test before formal testing begins. TEST EXECUTION ◼ Tests are executed ◼ Testing results are captured ◼ Report defects and retest once resolved ◼ Retest failed and blocked tests ◼ Perform regression testing ◼ User acceptance testing is done by the business. TEST CYCLE CLOSURE ◼ The test execution summary report is created and presented to the stakeholders for sign-off ◼ Lessons learned meeting held.
Categories of Software Testing • Manual Testing: Testing performed by human testers without the use of automated tools. • Automated Testing: Testing performed using automated tools to execute test cases. • Static Testing: Testing without executing the code (e.g., code reviews, walkthroughs). • Dynamic Testing: Testing by executing the code (e.g., functional testing, performance testing).
Some test techniques • Static: Reviews, Walkthroughs, Desk-checking, Inspection, Static Analysis (Data Flow, Control Flow, Symbolic Execution), etc. • Dynamic – Structural: Statement, Branch/Decision, Branch Condition, Branch Condition Combination, LCSAJ, Definition-Use, Arcs, etc. • Dynamic – Behavioural – Functional: Equivalence Partitioning, Boundary Value Analysis, Cause-Effect Graphing, State Transition, Random, etc. • Dynamic – Behavioural – Non-functional: Usability, Performance, etc.
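To make one of these techniques concrete, here is a minimal sketch of boundary value analysis; the function under test is a made-up example that accepts ages from 18 to 65 inclusive:

```python
# Hypothetical function under test: accepts ages from 18 to 65 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Boundary value analysis: test just below, on, and just above each boundary.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"age={age}"
print("All boundary value checks passed")
```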
Types of Tests • Functional • Performance • Regression • Load • Worst case • Perfective • Exploratory • Random-Input • Certification • Stress • Usability • Real Time • Life • Collision • Security • Installation • Recovery
Test Levels • Unit • Component • Integration • System • Field • Acceptance. Integration Testing has many variants: • Component integration • System integration • System of systems • Conglomeration of systems. Acceptance: • User acceptance test • Regulatory acceptance test
VV&T • Verification: the process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase [BS 7925-1] • Validation: determination of the correctness of the products of software development with respect to the user needs and requirements [BS 7925-1] • Testing: the process of exercising software to verify that it satisfies specified requirements and to detect faults
Test Level’s Tasks ▪ Clearly defined test goals and scope ▪ Traceability to the test basis (if available) ▪ Entry and exit criteria, as appropriate both for the level and for the system lifecycle ▪ Test deliverables, including results reporting ▪ Test techniques that will be applied, as appropriate for the level, for the team, and for the risks inherent in the system ▪ Measurements and metrics ▪ Test tools, where applicable and as appropriate for the level ▪ And, if applicable, compliance with organizational or other standards
Test Strategies • Top down • Bottom up • Black box • White box • Simulation • I/O first • Alpha/Beta testing • Fault insertion • Fault-error handling • Equivalence class partitioning • Boundary value analysis • Cause-effect graphing • Error guessing • Customer defects
Specific Systems • System of systems: independent systems tied together to serve a common purpose. • Safety-critical system: a system whose failure or malfunction may result in death or serious injury to people, or loss or severe damage to equipment, or environmental harm.
System of Systems ▪ Systems of systems are independent systems tied together to serve a common purpose. ▪ Because they are independent yet tied together, they often lack a single, coherent user or operator interface, a unified data model, compatible external interfaces, and so forth.
System of Systems: Characteristics and risks • The integration of commercial off-the-shelf (COTS) software along with some amount of custom development, often taking place over a long period. • Significant technical, lifecycle, and organizational complexity and heterogeneity. This organizational and lifecycle complexity can include issues of confidentiality, company secrets, and regulations. • Different development lifecycles and other processes among disparate teams, especially (as is frequently the case) when insourcing, outsourcing, and offshoring are involved.
System of Systems: Characteristics and risks (continued) ▪ Serious potential reliability issues due to intersystem coupling, where one inherently weaker system creates ripple-effect failures across the entire system of systems. ▪ System integration testing, including interoperability testing, is essential; well-defined interfaces for testing are needed.
Lifecycle Implications ▪ Multiple levels of integration testing. ▪ Multiple versions of each system, unless all systems are built by one (large) organization with the same development approach. ▪ Long duration of development (even a small project may run about a year with 40-50 people). ▪ Large size & complexity → project breakdown → ▪ formal information/knowledge transfer ▪ transfers of responsibility ▪ handoffs ▪ The systems keep evolving.
Safety Critical System • A system whose failure or malfunction may result in death or serious injury to people, or loss or severe damage to equipment, or environmental harm.
What to measure? • Planned schedule and coverage • Requirements and their schedule, resource, and task implications for testing • Workload and resource usage • Milestones and scope of testing • Planned and actual costs • Risks: both quality and project risks • Defects, including total found, total fixed, current backlog, average closure periods, and distribution by configuration, subsystem, priority, or severity • ... and so on; there is no end to what could be measured.
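As a small illustration of the defect metrics listed above, here is a sketch that computes the backlog and average closure period from a hypothetical defect log; the defect IDs and dates are invented:

```python
from datetime import date

# Hypothetical defect log: (defect id, date found, date closed or None if still open)
defects = [
    ("DEF-001", date(2024, 3, 1), date(2024, 3, 5)),
    ("DEF-002", date(2024, 3, 2), None),
    ("DEF-003", date(2024, 3, 4), date(2024, 3, 10)),
]

total_found = len(defects)
closed = [(found, fixed) for _, found, fixed in defects if fixed is not None]
total_fixed = len(closed)
backlog = total_found - total_fixed
avg_closure_days = (
    sum((fixed - found).days for found, fixed in closed) / total_fixed
    if total_fixed else 0
)

print(f"Found: {total_found}, Fixed: {total_fixed}, Backlog: {backlog}")
print(f"Average closure period: {avg_closure_days:.1f} days")
```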
Definition ▪ Define a useful, pertinent, and concise set of quality and test metrics. ▪ Too large a set of metrics → ▪ difficult and expensive to measure ▪ confusing rather than enlightening ▪ Uniform and agreed-upon interpretation of metrics → ▪ minimizes disputes and differing opinions about the meaning of certain measures, outcomes, analyses, and trends.
Tracking ▪ Use an automated tool to ▪ reduce the time required to capture, track, analyze, report, and measure the metrics. ▪ Be sure to apply both objective and subjective analyses for specific metrics over time. ▪ Be aware of and manage the tendency for people's interests to affect the interpretation they place on a particular metric or measure; people's interests usually affect their conclusions.
Reporting • Most importantly, reporting of metrics and measures should enlighten management and other stakeholders, not confuse or misdirect them. • Reports should be easily understood, not overly complex, and certainly not ambiguous.
What is TDD? • Definition: Test-Driven Development (TDD) is a software development process in which tests are written before code. • Principle: "Red, Green, Refactor" • Red: Write a failing test • Green: Write code to pass the test • Refactor: Improve the code
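A minimal sketch of one Red-Green turn of the cycle; the add() function and its test are a made-up example, chosen only to show the order of work (test first, then just enough code to pass):

```python
import unittest

# RED: the test is written first, before the production code exists,
# and is run to confirm that it fails.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# GREEN: write just enough code to make the test pass.
def add(a, b):
    return a + b

if __name__ == "__main__":
    unittest.main()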
Benefits of TDD • Improved Code Quality: Ensures that code works as expected • Reduced Bugs: Catches bugs early in the development process • Documentation: Tests serve as documentation for code behavior • Refactoring with Confidence: Safe to refactor code without breaking functionality • Better Design: Encourages simpler, more modular code
TDD Workflow 1. Write a Test: Write a test for a new function or feature. 2. Run the Test: Ensure the test fails (Red). 3. Write Code: Write the minimum amount of code to pass the test. 4. Run the Test Again: Ensure the test passes (Green). 5. Refactor: Improve the code while keeping the test green. 6. Repeat: Continue with the next test.
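Continuing with another hypothetical example, one full turn of this loop might end up as the code below: the tests were written and run first (steps 1-2), a minimal implementation made them pass (steps 3-4), and the body was then refactored to use sum() while the tests stayed green (step 5).

```python
import unittest

# Step 5: refactored implementation; the original minimal version used an
# explicit accumulation loop, and the passing tests made the change safe.
def total(values):
    return sum(values)

class TestTotal(unittest.TestCase):
    # Step 1-2: these tests were written first and failed (Red) before total() existed.
    def test_total_of_list(self):
        self.assertEqual(total([1, 2, 3]), 6)

    def test_total_of_empty_list(self):
        self.assertEqual(total([]), 0)

if __name__ == "__main__":
    unittest.main()
```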
Best Practices • Write Simple Tests: Keep tests simple and focused on one functionality. • Test Coverage: Aim for high test coverage, but prioritize critical code paths. • Continuous Integration: Use CI/CD pipelines to run tests automatically. • Collaboration: Encourage team collaboration and code reviews. • Documentation: Maintain good documentation for tests and code.