Effective Software Testing: A Developer's Guide

Maurício Aniche, 2022

Table of Contents

  • front matter
    • forewords
    • preface
    • acknowledgments
    • about this book
      • Who should read this book
      • How this book is organized: A roadmap
      • What this book does not cover
      • About the code
      • liveBook discussion forum
    • about the author
    • about the cover illustration
  • 1 Effective and systematic software testing
    • 1.1 Developers who test vs. developers who do not
    • 1.2 Effective software testing for developers
      • 1.2.1 Effective testing in the development process
      • 1.2.2 Effective testing as an iterative process
      • 1.2.3 Focusing on development and then on testing
      • 1.2.4 The myth of “correctness by design”
      • 1.2.5 The cost of testing
      • 1.2.6 The meaning of effective and systematic
      • 1.2.7 The role of test automation
    • 1.3 Principles of software testing (or, why testing is so difficult)
      • 1.3.1 Exhaustive testing is impossible
      • 1.3.2 Knowing when to stop testing
      • 1.3.3 Variability is important (the pesticide paradox)
      • 1.3.4 Bugs happen in some places more than others
      • 1.3.5 No matter what testing you do, it will never be perfect or enough
      • 1.3.6 Context is king
      • 1.3.7 Verification is not validation
    • 1.4 The testing pyramid, and where we should focus
      • 1.4.1 Unit testing
      • 1.4.2 Integration testing
      • 1.4.3 System testing
      • 1.4.4 When to use each test level
      • 1.4.5 Why do I favor unit tests?
      • 1.4.6 What do I test at the different levels?
      • 1.4.7 What if you disagree with the testing pyramid?
      • 1.4.8 Will this book help you find all the bugs?
    • Exercises
    • Summary
  • 2 Specification-based testing
    • 2.1 The requirements say it all
      • 2.1.1 Step 1: Understanding the requirements, inputs, and outputs
      • 2.1.2 Step 2: Explore what the program does for various inputs
      • 2.1.3 Step 3: Explore possible inputs and outputs, and identify partitions
      • 2.1.4 Step 4: Analyze the boundaries
      • 2.1.5 Step 5: Devise test cases
      • 2.1.6 Step 6: Automate the test cases
      • 2.1.7 Step 7: Augment the test suite with creativity and experience
    • 2.2 Specification-based testing in a nutshell
    • 2.3 Finding bugs with specification testing
    • 2.4 Specification-based testing in the real world
      • 2.4.1 The process should be iterative, not sequential
      • 2.4.2 How far should specification testing go?
      • 2.4.3 Partition or boundary? It does not matter!
      • 2.4.4 On and off points are enough, but feel free to add in and out points
      • 2.4.5 Use variations of the same input to facilitate understanding
      • 2.4.6 When the number of combinations explodes, be pragmatic
      • 2.4.7 When in doubt, go for the simplest input
      • 2.4.8 Pick reasonable values for inputs you do not care about
      • 2.4.9 Test for nulls and exceptional cases, but only when it makes sense
      • 2.4.10 Go for parameterized tests when tests have the same skeleton
      • 2.4.11 Requirements can be of any granularity
      • 2.4.12 How does this work with classes and state?
      • 2.4.13 The role of experience and creativity
    • Exercises
    • Summary
  • 3 Structural testing and code coverage
    • 3.1 Code coverage, the right way
    • 3.2 Structural testing in a nutshell
    • 3.3 Code coverage criteria
      • 3.3.1 Line coverage
      • 3.3.2 Branch coverage
      • 3.3.3 Condition + branch coverage
      • 3.3.4 Path coverage
    • 3.4 Complex conditions and the MC/DC coverage criterion
      • 3.4.1 An abstract example
      • 3.4.2 Creating a test suite that achieves MC/DC
    • 3.5 Handling loops and similar constructs
    • 3.6 Criteria subsumption, and choosing a criterion
    • 3.7 Specification-based and structural testing: A running example
    • 3.8 Boundary testing and structural testing
    • 3.9 Structural testing alone often is not enough
    • 3.10 Structural testing in the real world
      • 3.10.1 Why do some people hate code coverage?
      • 3.10.2 What does it mean to achieve 100% coverage?
      • 3.10.3 What coverage criterion to use
      • 3.10.4 MC/DC when expressions are too complex and cannot be simplified
      • 3.10.5 Other coverage criteria
      • 3.10.6 What should not be covered?
    • 3.11 Mutation testing
    • Exercises
    • Summary
  • 4 Designing contracts
    • 4.1 Pre-conditions and post-conditions
      • 4.1.1 The assert keyword
      • 4.1.2 Strong and weak pre- and post-conditions
    • 4.2 Invariants
    • 4.3 Changing contracts, and the Liskov substitution principle
      • 4.3.1 Inheritance and contracts
    • 4.4 How is design-by-contract related to testing?
    • 4.5 Design-by-contract in the real world
      • 4.5.1 Weak or strong pre-conditions?
      • 4.5.2 Input validation, contracts, or both?
      • 4.5.3 Asserts and exceptions: When to use one or the other
      • 4.5.4 Exception or soft return values?
      • 4.5.5 When not to use design-by-contract
      • 4.5.6 Should we write tests for pre-conditions, post-conditions, and invariants?
      • 4.5.7 Tooling support
    • Exercises
    • Summary
  • 5 Property-based testing
    • 5.1 Example 1: The passing grade program
    • 5.2 Example 2: Testing the unique method
    • 5.3 Example 3: Testing the indexOf method
    • 5.4 Example 4: Testing the Basket class
    • 5.5 Example 5: Creating complex domain objects
    • 5.6 Property-based testing in the real world
      • 5.6.1 Example-based testing vs. property-based testing
      • 5.6.2 Common issues in property-based tests
      • 5.6.3 Creativity is key
    • Exercises
    • Summary
  • 6 Test doubles and mocks
    • 6.1 Dummies, fakes, stubs, spies, and mocks
      • 6.1.1 Dummy objects
      • 6.1.2 Fake objects
      • 6.1.3 Stubs
      • 6.1.4 Mocks
      • 6.1.5 Spies
    • 6.2 An introduction to mocking frameworks
      • 6.2.1 Stubbing dependencies
      • 6.2.2 Mocks and expectations
      • 6.2.3 Capturing arguments
      • 6.2.4 Simulating exceptions
    • 6.3 Mocks in the real world
      • 6.3.1 The disadvantages of mocking
      • 6.3.2 What to mock and what not to mock
      • 6.3.3 Date and time wrappers
      • 6.3.4 Mocking types you do not own
      • 6.3.5 What do others say about mocking?
    • Exercises
    • Summary
  • 7 Designing for testability
    • 7.1 Separating infrastructure code from domain code
    • 7.2 Dependency injection and controllability
    • 7.3 Making your classes and methods observable
      • 7.3.1 Example 1: Introducing methods to facilitate assertions
      • 7.3.2 Example 2: Observing the behavior of void methods
    • 7.4 Dependency via class constructor or value via method parameter?
    • 7.5 Designing for testability in the real world
      • 7.5.1 The cohesion of the class under test
      • 7.5.2 The coupling of the class under test
      • 7.5.3 Complex conditions and testability
      • 7.5.4 Private methods and testability
      • 7.5.5 Static methods, singletons, and testability
      • 7.5.6 The Hexagonal Architecture and mocks as a design technique
      • 7.5.7 Further reading about designing for testability
    • Exercises
    • Summary
  • 8 Test-driven development
    • 8.1 Our first TDD session
    • 8.2 Reflecting on our first TDD experience
    • 8.3 TDD in the real world
      • 8.3.1 To TDD or not to TDD?
      • 8.3.2 TDD 100% of the time?
      • 8.3.3 Does TDD work for all types of applications and domains?
      • 8.3.4 What does the research say about TDD?
      • 8.3.5 Other schools of TDD
      • 8.3.6 TDD and proper testing
    • Exercises
    • Summary
  • 9 Writing larger tests
    • 9.1 When to use larger tests
      • 9.1.1 Testing larger components
      • 9.1.2 Testing larger components that go beyond our code base
    • 9.2 Database and SQL testing
      • 9.2.1 What to test in a SQL query
      • 9.2.2 Writing automated tests for SQL queries
      • 9.2.3 Setting up infrastructure for SQL tests
      • 9.2.4 Best practices
    • 9.3 System tests
      • 9.3.1 An introduction to Selenium
      • 9.3.2 Designing page objects
      • 9.3.3 Patterns and best practices
    • 9.4 Final notes on larger tests
      • 9.4.1 How do all the testing techniques fit?
      • 9.4.2 Perform cost/benefit analysis
      • 9.4.3 Be careful with methods that are covered but not tested
      • 9.4.4 Proper code infrastructure is key
      • 9.4.5 DSLs and tools for stakeholders to write tests
      • 9.4.6 Testing other types of web systems
    • Exercises
    • Summary
  • 10 Test code quality
    • 10.1 Principles of maintainable test code
      • 10.1.1 Tests should be fast
      • 10.1.2 Tests should be cohesive, independent, and isolated
      • 10.1.3 Tests should have a reason to exist
      • 10.1.4 Tests should be repeatable and not flaky
      • 10.1.5 Tests should have strong assertions
      • 10.1.6 Tests should break if the behavior changes
      • 10.1.7 Tests should have a single and clear reason to fail
      • 10.1.8 Tests should be easy to write
      • 10.1.9 Tests should be easy to read
      • 10.1.10 Tests should be easy to change and evolve
    • 10.2 Test smells
      • 10.2.1 Excessive duplication
      • 10.2.2 Unclear assertions
      • 10.2.3 Bad handling of complex or external resources
      • 10.2.4 Fixtures that are too general
      • 10.2.5 Sensitive assertions
    • Exercises
    • Summary
  • 11 Wrapping up the book
    • 11.1 Although the model looks linear, iterations are fundamental
    • 11.2 Bug-free software development: Reality or myth?
    • 11.3 Involve your final user
    • 11.4 Unit testing is hard in practice
    • 11.5 Invest in monitoring
    • 11.6 What’s next?
  • Appendix. Answers to exercises
  • References
  • index