The Art of Unit Testing: With Examples in JavaScript
Roy Osherove with Vladimir Khorikov, 2024
Table of contents
- inside front cover
- Praise for the second edition
- The Art of Unit Testing
- Copyright
- dedication
- contents
- Front matter
- foreword to the second edition
- foreword to the first edition
- preface
- acknowledgments
- about this book
- What’s new in the third edition
- Who should read this book
- How this book is organized: A road map
- Code conventions and downloads
- Software requirements
- liveBook discussion forum
- Other projects by Roy Osherove
- Other projects by Vladimir Khorikov
- about the authors
- about the cover illustration
- Part 1 Getting started
- 1 The basics of unit testing
- 1.1 The first step
- 1.2 Defining unit testing, step by step
- 1.3 Entry points and exit points
- 1.4 Exit point types
- 1.5 Different exit points, different techniques
- 1.6 A test from scratch
- 1.7 Characteristics of a good unit test
- 1.7.1 What is a good unit test?
- 1.7.2 A unit test checklist
- 1.8 Integration tests
- 1.9 Finalizing our definition
- 1.10 Test-driven development
- 1.10.1 TDD: Not a substitute for good unit tests
- 1.10.2 Three core skills needed for successful TDD
- Summary
- 2 A first unit test
- 2.1 Introducing Jest
- 2.1.1 Preparing our environment
- 2.1.2 Preparing our working folder
- 2.1.3 Installing Jest
- 2.1.4 Creating a test file
- 2.1.5 Executing Jest
- 2.2 The library, the assert, the runner, and the reporter
- 2.3 What unit testing frameworks offer
- 2.3.1 The xUnit frameworks
- 2.3.2 xUnit, TAP, and Jest structures
- 2.4 Introducing the Password Verifier project
- 2.5 The first Jest test for verifyPassword
- 2.5.1 The Arrange-Act-Assert pattern
- 2.5.2 Testing the test
- 2.5.3 USE naming
- 2.5.4 String comparisons and maintainability
- 2.5.5 Using describe()
- 2.5.6 Structure implying context
- 2.5.7 The it() function
- 2.5.8 Two Jest flavors
- 2.5.9 Refactoring the production code
- 2.6 Trying the beforeEach() route
- 2.6.1 beforeEach() and scroll fatigue
- 2.7 Trying the factory method route
- 2.7.1 Replacing beforeEach() completely with factory methods
- 2.8 Going full circle to test()
- 2.9 Refactoring to parameterized tests
- 2.10 Checking for expected thrown errors
- 2.11 Setting test categories
- Summary
- Part 2 Core techniques
- 3 Breaking dependencies with stubs
- 3.1 Types of dependencies
- 3.2 Reasons to use stubs
- 3.3 Generally accepted design approaches to stubbing
- 3.3.1 Stubbing out time with parameter injection
- 3.3.2 Dependencies, injections, and control
- 3.4 Functional injection techniques
- 3.4.1 Injecting a function
- 3.4.2 Dependency injection via partial application
- 3.5 Modular injection techniques
- 3.6 Moving toward objects with constructor functions
- 3.7 Object-oriented injection techniques
- 3.7.1 Constructor injection
- 3.7.2 Injecting an object instead of a function
- 3.7.3 Extracting a common interface
- Summary
- 4 Interaction testing using mock objects
- 4.1 Interaction testing, mocks, and stubs
- 4.2 Depending on a logger
- 4.3 Standard style: Introduce parameter refactoring
- 4.4 The importance of differentiating between mocks and stubs
- 4.5 Modular-style mocks
- 4.5.1 Example of production code
- 4.5.2 Refactoring the production code in a modular injection style
- 4.5.3 A test example with modular-style injection
- 4.6 Mocks in a functional style
- 4.6.1 Working with a currying style
- 4.6.2 Working with higher-order functions and not currying
- 4.7 Mocks in an object-oriented style
- 4.7.1 Refactoring production code for injection
- 4.7.2 Refactoring production code with interface injection
- 4.8 Dealing with complicated interfaces
- 4.8.1 Example of a complicated interface
- 4.8.2 Writing tests with complicated interfaces
- 4.8.3 Downsides of using complicated interfaces directly
- 4.8.4 The interface segregation principle
- 4.9 Partial mocks
- 4.9.1 A functional example of a partial mock
- 4.9.2 An object-oriented partial mock example
- Summary
- 5 Isolation frameworks
- 5.1 Defining isolation frameworks
- 5.1.1 Choosing a flavor: Loose vs. typed
- 5.2 Faking modules dynamically
- 5.2.1 Some things to notice about Jest’s API
- 5.2.2 Consider abstracting away direct dependencies
- 5.3 Functional dynamic mocks and stubs
- 5.4 Object-oriented dynamic mocks and stubs
- 5.4.1 Using a loosely typed framework
- 5.4.2 Switching to a type-friendly framework
- 5.5 Stubbing behavior dynamically
- 5.5.1 An object-oriented example with a mock and a stub
- 5.5.2 Stubs and mocks with substitute.js
- 5.6 Advantages and traps of isolation frameworks
- 5.6.1 You don’t need mock objects most of the time
- 5.6.2 Unreadable test code
- 5.6.3 Verifying the wrong things
- 5.6.4 Having more than one mock per test
- 5.6.5 Overspecifying the tests
- Summary
- 6 Unit testing asynchronous code
- 6.1 Dealing with async data fetching
- 6.1.1 An initial attempt with an integration test
- 6.1.2 Waiting for the act
- 6.1.3 Integration testing of async/await
- 6.1.4 Challenges with integration tests
- 6.2 Making our code unit-test friendly
- 6.2.1 Extracting an entry point
- 6.2.2 The Extract Adapter pattern
- 6.3 Dealing with timers
- 6.3.1 Stubbing timers out with monkey-patching
- 6.3.2 Faking setTimeout with Jest
- 6.4 Dealing with common events
- 6.4.1 Dealing with event emitters
- 6.4.2 Dealing with click events
- 6.5 Bringing in the DOM testing library
- Summary
- Part 3 The test code
- 7 Trustworthy tests
- 7.1 How to know you trust a test
- 7.2 Why tests fail
- 7.2.1 A real bug has been uncovered in the production code
- 7.2.2 A buggy test gives a false failure
- 7.2.3 The test is out of date due to a change in functionality
- 7.2.4 The test conflicts with another test
- 7.2.5 The test is flaky
- 7.3 Avoiding logic in unit tests
- 7.3.1 Logic in asserts: Creating dynamic expected values
- 7.3.2 Other forms of logic
- 7.3.3 Even more logic
- 7.4 Smelling a false sense of trust in passing tests
- 7.4.1 Tests that don’t assert anything
- 7.4.2 Not understanding the tests
- 7.4.3 Mixing unit tests and flaky integration tests
- 7.4.4 Testing multiple exit points
- 7.4.5 Tests that keep changing
- 7.5 Dealing with flaky tests
- 7.5.1 What can you do once you’ve found a flaky test?
- 7.5.2 Preventing flakiness in higher-level tests
- Summary
- 8 Maintainability
- 8.1 Changes forced by failing tests
- 8.1.1 The test is not relevant or conflicts with another test
- 8.1.2 Changes in the production code’s API
- 8.1.3 Changes in other tests
- 8.2 Refactoring to increase maintainability
- 8.2.1 Avoid testing private or protected methods
- 8.2.2 Keep tests DRY
- 8.2.3 Avoid setup methods
- 8.2.4 Use parameterized tests to remove duplication
- 8.3 Avoid overspecification
- 8.3.1 Internal behavior overspecification with mocks
- 8.3.2 Exact outputs and ordering overspecification
- Summary
- Part 4 Design and process
- 9 Readability
- 9.1 Naming unit tests
- 9.2 Magic values and naming variables
- 9.3 Separating asserts from actions
- 9.4 Setting up and tearing down
- Summary
- 10 Developing a testing strategy
- 10.1 Common test types and levels
- 10.1.1 Criteria for judging a test
- 10.1.2 Unit tests and component tests
- 10.1.3 Integration tests
- 10.1.4 API tests
- 10.1.5 E2E/UI isolated tests
- 10.1.6 E2E/UI system tests
- 10.2 Test-level antipatterns
- 10.2.1 The end-to-end-only antipattern
- 10.2.2 The low-level-only test antipattern
- 10.2.3 Disconnected low-level and high-level tests
- 10.3 Test recipes as a strategy
- 10.3.1 How to write a test recipe
- 10.3.2 When do I write and use a test recipe?
- 10.3.3 Rules for a test recipe
- 10.4 Managing delivery pipelines
- 10.4.1 Delivery vs. discovery pipelines
- 10.4.2 Test layer parallelization
- Summary
- 11 Integrating unit testing into the organization
- 11.1 Steps to becoming an agent of change
- 11.1.1 Be prepared for the tough questions
- 11.1.2 Convince insiders: Champions and blockers
- 11.1.3 Identify possible starting points
- 11.2 Ways to succeed
- 11.2.1 Guerrilla implementation (bottom-up)
- 11.2.2 Convincing management (top-down)
- 11.2.3 Experiments as door openers
- 11.2.4 Get an outside champion
- 11.2.5 Make progress visible
- 11.2.6 Aim for specific goals, metrics, and KPIs
- 11.2.7 Realize that there will be hurdles
- 11.3 Ways to fail
- 11.3.1 Lack of a driving force
- 11.3.2 Lack of political support
- 11.3.3 Ad hoc implementations and first impressions
- 11.3.4 Lack of team support
- 11.4 Influence factors
- 11.5 Tough questions and answers
- 11.5.1 How much time will unit testing add to the current process?
- 11.5.2 Will my QA job be at risk because of unit testing?
- 11.5.3 Is there proof that unit testing helps?
- 11.5.4 Why is the QA department still finding bugs?
- 11.5.5 We have lots of code without tests: Where do we start?
- 11.5.6 What if we develop a combination of software and hardware?
- 11.5.7 How can we know we don’t have bugs in our tests?
- 11.5.8 Why do I need tests if my debugger shows that my code works?
- 11.5.9 What about TDD?
- Summary
- 12 Working with legacy code
- 12.1 Where do you start adding tests?
- 12.2 Choosing a selection strategy
- 12.2.1 Pros and cons of the easy-first strategy
- 12.2.2 Pros and cons of the hard-first strategy
- 12.3 Writing integration tests before refactoring
- 12.3.1 Read Michael Feathers’ book on legacy code
- 12.3.2 Use CodeScene to investigate your production code
- Summary
- appendix. Monkey-patching functions and modules
- A.1 An obligatory warning
- A.2 Monkey-patching functions, globals, and possible issues
- A.2.1 Monkey-patching a function the Jest way
- A.2.2 Jest spies
- A.2.3 spyOn with mockImplementation()
- A.3 Ignoring a whole module with Jest is simple
- A.4 Faking module behavior in each test
- A.4.1 Stubbing a module with vanilla require.cache
- A.4.2 Stubbing custom module data with Jest is complicated
- A.4.3 Avoid Jest’s manual mocks
- A.4.4 Stubbing a module with Sinon.js
- A.4.5 Stubbing a module with testdouble
- index