Context-Driven Performance Testing Alexander Podelko alex.podelko@oracle.com alexanderpodelko.com/blog @apodelko July 27, 2020
About the Speaker • Alexander Podelko • Specializing in performance since 1997 • Currently Consulting Member of Technical Staff at Oracle (Stamford, CT office) • Performance testing and optimization of Enterprise Performance Management (EPM) a.k.a. Hyperion products • Board director at Computer Measurement Group (CMG) – a non-profit organization of performance and capacity professionals Disclaimer: The views expressed here are my personal views only and do not necessarily represent those of my current or previous employers. All brands and trademarks mentioned are the property of their owners.
Agenda • Context-Driven Testing • Early Performance Testing – Exploratory, Continuous • Environment / Scope / Granularity • Load generation • Performance Testing / Engineering Strategy 3
Context-Driven Testing • Context-driven approach was initially introduced by James Bach, Brian Marick, Bret Pettichord, and Cem Kaner – http://context-driven-testing.com • Declared a “school” in 2001 (Lessons Learned in Software Testing) – Became political 4
Basic Principles • The value of any practice depends on its context. • There are good practices in context, but there are no best practices. • People, working together, are the most important part of any project’s context. • Projects unfold over time in ways that are often not predictable. 5
Basic Principles • The product is a solution. If the problem isn’t solved, the product doesn’t work. • Good software testing is a challenging intellectual process. • Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products. 6
“Traditional” Approach • Load / Performance Testing is: – Last moment before deployment • Last step in the waterfall process • Checking against given requirements / SLAs • Throwing it back over the wall if requirements are not met – System-level – Realistic workload • With variations when needed: stress, uptime, etc. – Lab environment • Often scaled down – Protocol-level record-and-playback • Expensive tools requiring special skills 7
Agenda • Context-Driven Testing • Early Performance Testing – Exploratory, Continuous • Environment / Scope / Granularity • Load generation • Performance Testing / Engineering Strategy 8
Agile Development 9 • Agile development should be a rather trivial case for performance testing – You have a working system each iteration to test early, by definition – You need a performance engineer for the whole project • Savings come from detecting problems early • You need to adjust requirements for the implemented functionality – Additional functionality will impact performance
The Main Issue on the Agile Side • It doesn’t [always] work this way in practice • That is why you have “Hardening Iterations”, “Technical Debt” and similar notions • Same old problem: functionality gets priority over performance 10
The Main Issue on the Testing Side 11 • Performance Engineering teams don’t scale well – Even assuming that they are competent and effective • Increased volume exposes the problem – Early testing – Each iteration • Remedies: automation, making performance everyone’s job
Early Testing - Mentality Change 12 • Making performance everyone’s job • Late record/playback performance testing -> Early Performance Engineering • System-level requirements -> Component-level requirements • Record/playback approach -> Programming to generate load / create stubs • "Black Box" -> "Grey Box"
Continuum of Options 13 • [Diagram: testing approach as a function of how well-known the system is (New vs. Well-known); labels: Exploratory / Agile, Automated / Regression, Traditional]
Exploratory Performance Testing • Rather alien to performance testing, but probably more relevant than for functional testing • We learn about the system’s performance as we start to run tests – Only guesses for new systems • Rather a performance engineering process bringing the system to the proper state than just testing 14
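A minimal sketch of what an exploratory step-load run could look like (not from the talk: the target URL, step sizes, request counts and the degradation heuristic are all placeholder assumptions). The idea is to raise concurrency in steps and watch how response times react, learning where the system starts to degrade.

```python
# Exploratory step-load sketch: raise concurrency step by step and observe
# how median response time reacts. Target, steps and thresholds are
# illustrative placeholders.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET = "http://localhost:8080/health"   # hypothetical endpoint
STEPS = [1, 2, 5, 10, 20]                 # concurrent users per step
REQUESTS_PER_USER = 20

def one_user():
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(TARGET, timeout=10)
        timings.append(time.perf_counter() - start)
    return timings

baseline = None
for users in STEPS:
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: one_user(), range(users)))
    timings = [t for user_timings in results for t in user_timings]
    median = statistics.median(timings)
    print(f"{users:3d} users: median {median * 1000:.1f} ms")
    baseline = baseline or median
    if median > 3 * baseline:             # arbitrary "knee" heuristic
        print("Response time degraded sharply -- stop and investigate")
        break
```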
Continuous Performance Testing • Becomes a must in Agile development • Regression testing in essence – So it can’t be the only approach used (“full automation”) – Scope and scale depend on context • Views differ, and practices taken out of context may be misleading 15
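As one concrete (and hypothetical) illustration of such regression testing, a gate like the following could compare a fresh measurement against a stored baseline and fail the build on a significant slowdown; the baseline file name, metric name and tolerance are assumptions, not a prescribed format.

```python
# Minimal regression-gate sketch: fail the CI step if the current measurement
# exceeds the stored baseline by more than the allowed tolerance.
import json
import sys

BASELINE_FILE = "perf_baseline.json"   # e.g. {"checkout_p90_ms": 250}
TOLERANCE = 1.20                        # allow up to 20% degradation

def check(metric: str, current_ms: float) -> bool:
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    allowed = baseline[metric] * TOLERANCE
    ok = current_ms <= allowed
    print(f"{metric}: {current_ms:.1f} ms (allowed {allowed:.1f} ms) -> "
          f"{'OK' if ok else 'REGRESSION'}")
    return ok

if __name__ == "__main__":
    # current_ms would come from the measurement step of the pipeline
    if not check("checkout_p90_ms", float(sys.argv[1])):
        sys.exit(1)   # non-zero exit fails the CI step
```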
Different Perspectives • Consultant: need to test the system – In its current state – Why bother about automation? – External or internal • Performance Engineer – On an agile team – Need to test it each build/iteration/sprint/etc. • Automation Engineer / SDET / etc. 16
Automation: Considerations 17 • You need to know the system well enough to build meaningful automation • If the system is new, the overhead is too high – So there is almost no automation in traditional environments • If the same system is tested again and again – It makes sense to invest in setting up automation • Automated interfaces should be stable enough – APIs are usually more stable in the early stages
Time / Resource Considerations • Performance tests take time and resources – The larger the test, the more it takes • May not be an option on each check-in • Need for a tiered solution – Some performance measurements on each build – Daily mid-size performance tests – Periodic large-scale / uptime tests outside CI 18
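A sketch of how such a tiered setup could be wired up; the tier names, schedules, the environment variable and the placeholder transaction are all assumptions, not a prescribed implementation.

```python
# Tiered-trigger sketch: a quick measurement on every build, a mid-size test
# nightly, large-scale / uptime tests on a schedule outside CI.
import os
import time

def key_transaction():                 # stand-in for a real operation
    time.sleep(0.05)

def per_build_measurement():           # seconds: runs on every check-in/build
    start = time.perf_counter()
    key_transaction()
    print(f"key transaction: {(time.perf_counter() - start) * 1000:.1f} ms")

def nightly_mid_size_test():           # minutes: moderate load in the CI perf env
    print("run mid-size load test against the CI performance environment")

def periodic_large_scale_test():       # hours: full-scale / uptime, outside CI
    print("run large-scale and uptime tests in a dedicated environment")

TIERS = {"build": per_build_measurement,
         "nightly": nightly_mid_size_test,
         "periodic": periodic_large_scale_test}

if __name__ == "__main__":
    TIERS[os.environ.get("PERF_TIER", "build")]()
```

In practice the nightly and periodic tiers would invoke real load tests; the point is only that different triggers run different-sized tests.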
Agenda • Context-Driven Testing • Early Performance Testing – Exploratory, Continuous • Environment / Scope / Granularity • Load generation • Performance Testing / Engineering Strategy 19
Environment • Cloud – No more excuse of not having hardware • Lab vs. Service (SaaS) vs. Cloud (IaaS) – For both the system and load generators • Test vs. Production 20
Scenarios • System validation for high load – Outside load (service or cloud), production system – Wider scope, lower repeatability • Performance optimization / troubleshooting – Isolated lab environment – Limited scope, high repeatability • Testing in Cloud – Lowering costs (in case of periodic tests) – Limited scope, low repeatability 21
Find Your Way • If performance risk is high, it may call for a combination of environments, e.g. – Production environment to test for max load – Lab for performance optimization / troubleshooting – Limited performance environments for CI 22
Scope / Granularity • System level • Component level – Service Virtualization, etc. • Server time • Server + Network (WAN simulation, etc.) • End-to-end (User Experience) • Each may require a different approach / tools 23
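Component-level testing usually means standing in for the component's dependencies. As a hypothetical example of the simplest possible service-virtualization stub (path, payload, port and latency are made up), a fake downstream service could look like this:

```python
# Minimal service-virtualization stub: stands in for a downstream dependency
# so one component can be load-tested in isolation.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = json.dumps({"quote": 42}).encode()
SIMULATED_LATENCY_S = 0.05   # pretend the real service answers in ~50 ms

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SIMULATED_LATENCY_S)          # emulate downstream latency
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(CANNED)))
        self.end_headers()
        self.wfile.write(CANNED)

    def log_message(self, *args):                # keep the console quiet
        pass

if __name__ == "__main__":
    HTTPServer(("localhost", 9090), StubHandler).serve_forever()
```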
Agenda • Context-Driven Testing • Early Performance Testing – Exploratory, Continuous • Environment / Scope / Granularity • Load generation • Performance Testing / Engineering Strategy 24
Record and Playback: Protocol Level • [Diagram: Load Testing Tool running Virtual Users on a Load Generator, replaying protocol-level requests over the Network to the Application Server] 25
Considerations • Usually doesn't work for testing components • Each tool supports a limited number of technologies (protocols) • Scripting some technologies is very time-consuming • Workload validity is not guaranteed when there is sophisticated logic on the client side 26
Record and Playback: UI Level 27 • [Diagram: Load Testing Tool running Virtual Users through Browsers on a Load Generator, driving the Application Server over the Network]
Considerations • Scalability – Still require more resources • Supported technologies • Timing accuracy • Playback accuracy – For example, for HtmlUnit 28
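For UI-level timing, one option is to drive a real browser; the sketch below assumes Selenium with Chrome and a matching chromedriver on PATH, and the URL is a placeholder. Browser-measured load time reflects what the user actually sees, but real browsers make load generators much heavier, which is the scalability concern above.

```python
# UI-level timing sketch: read browser-measured page load time via the
# Navigation Timing API instead of protocol-level response time.
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8080/")     # hypothetical application page
    load_ms = driver.execute_script(
        "var t = window.performance.timing;"
        "return t.loadEventEnd - t.navigationStart;")
    print(f"browser-measured page load: {load_ms} ms")
finally:
    driver.quit()
```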
Programming • [Diagram: Load Testing Tool driving Virtual Users through application code / API calls on a Load Generator, over the Network to the Application Server] 29
Considerations • Requires programming / access to APIs • Tool support – Extensibility – Language support • May require more resources • Environment may need to be set up 30
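As one hypothetical example of the programming approach, a scripted virtual user in Locust (endpoint and think time are placeholders) is ordinary Python code, so it can call APIs, generate test data and work against stubs:

```python
# A virtual user expressed as code rather than a recorded script.
from locust import HttpUser, task, between

class ReportUser(HttpUser):
    wait_time = between(1, 3)        # think time between requests, seconds

    @task
    def open_report(self):
        # protocol-level API call written by hand instead of recorded
        self.client.get("/api/reports/42")
```

Such a script would typically be run with the tool's own runner (for Locust, `locust -f <script> --host <target>`).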
Production Workload • A/B testing, canary testing • Should work well if – workloads are homogeneous and can be controlled precisely – potential issues have minimal impact on user satisfaction and company image, and the changes can be easily rolled back – the architecture is fully parallel and scalable 31
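The "controlled precisely" part usually comes down to weighted routing; a toy illustration (the share and backend names are made up):

```python
# Toy canary-routing sketch: send a small, adjustable share of requests to the
# new version; rolling back means setting the share to zero.
import random

CANARY_SHARE = 0.05   # 5% of traffic to the new build

def pick_backend() -> str:
    return "app-v2-canary" if random.random() < CANARY_SHARE else "app-v1-stable"

if __name__ == "__main__":
    sample = [pick_backend() for _ in range(10_000)]
    print("canary fraction:", sample.count("app-v2-canary") / len(sample))
```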
Agenda • Context-Driven Testing • Early Performance Testing – Exploratory, Continuous • Environment / Scope / Granularity • Load generation • Performance Testing / Engineering Strategy 32
Performance Risk Mitigation • Single-user performance engineering – Profiling, WPO, single-user performance • Software Performance Engineering – Modeling, Performance Patterns • Instrumentation / APM / Monitoring – Production system insights • Capacity Planning / Management – Resource Allocation • Continuous Integration / Deployment – Ability to deploy and remove changes quickly 33
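For the single-user profiling item, even the standard-library profiler goes a long way; a minimal sketch, where the profiled function is just a stand-in for a real transaction:

```python
# Single-user profiling sketch with the standard-library profiler.
import cProfile
import pstats

def generate_report():                        # placeholder for real work
    return sorted(str(i) for i in range(200_000))

profiler = cProfile.Profile()
profiler.enable()
generate_report()
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```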
Defining Performance Testing Strategy • What performance risks do we want to mitigate? • What part of these risks should be mitigated by performance testing? • Which performance tests will mitigate the risk? • When should we run them? • What process / environment / approach / tools do we need in our context to implement them? 34
Examples • Handling full/extra load – System level, production[-like env], realistic load • Catching regressions – Continuous testing, limited scale/env • Early detection of performance problems – Exploratory tests, targeted workload • Performance optimization/investigation – Dedicated env, targeted workload 35
Summary • Testing strategy has become very non-trivial – A lot of options along many dimensions – Defined by context • “Automation” is only one part of it – Important for iterative development • Part of performance engineering strategy – Should be considered amongst other activities 36
Questions? Alexander Podelko alex.podelko@oracle.com alexanderpodelko.com/blog @apodelko
