Allowing tests to drive the code we write has several benefits, and we have great resources on the subject. It is a principle we hold dear and follow whenever possible. Since every project has a particular framework with its own tooling, a slightly different strategy yields better results depending on the environment we are operating in.
Most recently, my focus has been on the TypeScript and GraphQL realm, using React and Apollo. I struggled to write my tests first because the tools and the framework were not in line with what I had in mind; I was fighting them and doing unnecessary work. As an example, when tests drive the code in a language that is not statically typed, part of the process is making sure the right message gets to the right object. With the tools available to me at the time, that felt somewhat counterproductive, since TypeScript already does so much of that work by checking the types in our code.
Another thing that was new to me is the use of Apollo. It is an excellent tool for the particular problem it solves, but it presents some new challenges when letting tests drive the code I write.
When working with this stack, a common scenario is adding a component that consumes a particular GraphQL query and displays data to the user. It involves a network request, using data that is external to the system, and making sure it is displayed correctly. By repeating this process, I learned a few things about doing TDD with TypeScript, GraphQL, React, and Apollo.
TDD (Type Driven Development 🤷)
With the power of static typing in TypeScript, the first step in writing my tests has been to comply with the API the new component exposes. Luckily, there are great libraries that generate types from a schema file. I’ve used graphql-code-generator, and it has worked well for me.
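As a sketch, a minimal graphql-code-generator config might look like the following; the file paths here are assumptions, and the `typescript` and `typescript-operations` plugins are a common starting point:

```yaml
# codegen.yml — a minimal sketch; schema and document paths are
# assumptions for illustration
schema: ./schema.graphql
documents: ./src/**/*.tsx
generates:
  ./src/generated/graphql.ts:
    plugins:
      - typescript
      - typescript-operations
```

Running the generator then produces TypeScript types for the schema and for each operation found in the documents.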
With types on my side, the first step I take when writing my tests is expressing the shape of my input and output. Having the underlying help of the type system increases confidence throughout the entire system, for both consumers and producers of the added or modified functionality. Another benefit of this approach is that, without even running the tests, any subsequent refactor starts by merely chasing the compiler messages.
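To make this concrete, here is a sketch in plain TypeScript of what expressing those shapes first can look like. The type names mimic what a generator might emit for a country query; they are assumptions for illustration, not real generated code:

```typescript
// Shapes mimicking generated types for a hypothetical country query;
// real generated names will differ.
interface Continent {
  name: string;
}

interface Country {
  name: string;
  continent: Continent;
}

// The component's contract is written down before any test runs.
interface CountryDetailsProps {
  country: Country;
}

// A plain function standing in for the component's formatting logic;
// the compiler now rejects any caller passing the wrong shape.
function describeCountry({ country }: CountryDetailsProps): string {
  return `${country.name} (${country.continent.name})`;
}

console.log(
  describeCountry({ country: { name: "Cuba", continent: { name: "America" } } })
);
// → Cuba (America)
```

Once the shapes compile, the tests only need to cover behavior; the type system already guards the structure.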
When doing TDD without types, testing in isolation usually implies a lot of mocking, making sure that all the stubs have all the attributes and receive all the messages needed. Changing the API of any of the dependencies of a piece of functionality might not even break the tests. Now, since the interfaces and type definitions are part of the system, any time they change, the type system is the safety net.
Do not fixate on implementation details
When doing TDD, both behavior and implementation details are part of the tests. A feature takes shape based on the description the tests make of it. Based on this, the lowest level description of a given behavior is the way the code is written, what other components are in use, which properties are passed down, and what’s going to be the final state of an action. This strategy works great on small units with well-defined inputs and outputs; but not so much when building visual components that are expensive to render, mount, and exercise.
Usually writing tests by focusing on the implementation details feels natural. The comfortable cycle of test, fail, make it pass. However, there is a problem with this approach for user interfaces; that’s not the way users consume it. Users won’t trigger events, simulate changes, or check on the state of the component at the end of an action.
When I write tests for user-facing components, there is another pitfall if they stay close to the implementation; it is similar to the Single Responsibility Principle, but for tests. Tying the component’s unit tests to the implementation details, to how the information is presented to users, and to how the final product looks means that the tests have more than one reason to change.
To test behavior on the components, it is a good idea to stay close to how a user interacts with the component. What actions trigger side effects and what the resulting state is won’t be part of a user’s experience. With this approach, I have the freedom to test the end goal while keeping the code loose. If I tie myself to a specific way of doing things, any refactor destroys everything built so far.
A great tool I have encountered to achieve this flow is react-testing-library. Its guiding principle is:
The more your tests resemble the way your software is used, the more confidence they can give you.
This aligns perfectly with the most compelling use case I have encountered for it. Here is an extract from an example using both Enzyme and Testing Library.
// enzyme
it('should render the country info', async () => {
  const wrapper = mount(
    <MockedProvider
      mocks={countriesWithCubaQueryMock}
      addTypename={false}
    >
      <App />
    </MockedProvider>
  );
  await wait(0);
  // we could not find the elements by text; we can only refer
  // to them if we add a test id or leak implementation details
  expect(wrapper.find(CountryDetails)).toBeDefined();
});
// react-testing-library
it('should render the country info', async () => {
  const { getByText } = render(
    <MockedProvider
      mocks={countriesWithCubaQueryMock}
      addTypename={false}
    >
      <App />
    </MockedProvider>
  );
  const countryName = await waitForElement(() => getByText('Cuba'));
  const continentName = await waitForElement(() => getByText('America'));
  expect(countryName).toBeDefined();
  expect(continentName).toBeDefined();
});
📸 your tests
Another strategy I have adopted follows the same behavior that feels natural: go deep into the details to write the tests, and after the first green-refactor cycle, wave them goodbye and add a blazing fast snapshot instead. There are many pitfalls with using snapshots; they are also not well suited for doing TDD because they change with the code. Snapshots are to be driven, not to drive. As a general rule, I usually end up with small components, and it feels like overkill to test them the usual way. All strategies have a use case, and snapshots can be handy once we reach the general shape of a component that displays some data. A useful heuristic for deciding whether to use snapshots is whether the component code is almost identical to the resulting HTML.
Testing the query?
When working on a feature that uses a query, something that helps write the tests first is the type system (yes, types again). Generating types from the schema adds the first layer of confidence and correctness. It helps to know what to expect, and tests are guided first to comply with the API. Generally, the first step with a new GraphQL query is separating what is used by each component; using fragments breaks down complex queries. A significant side effect is the reusability of such fragments.
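A sketch of what that separation can look like; the fragment and query below are illustrative, written here as plain strings (in real code they would be wrapped in a gql`...` tag from @apollo/client or graphql-tag):

```typescript
// Fragment isolating exactly what a hypothetical CountryDetails
// component consumes; the field names are illustrative.
const COUNTRY_DETAILS_FRAGMENT = `
  fragment CountryDetails on Country {
    name
    continent {
      name
    }
  }
`;

// The larger query reuses the fragment instead of repeating fields,
// so the component and its data requirements stay together.
const COUNTRIES_QUERY = `
  query Countries {
    countries {
      ...CountryDetails
    }
  }
  ${COUNTRY_DETAILS_FRAGMENT}
`;
```

Any other query that needs the same slice of a country can spread the same fragment, which is where the reusability comes from.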
The difficulty of mocking enormous queries is a big reason to avoid extra-large ones in our codebase. Once the fragments are in place, Apollo offers a mocking utility. The mocked data has to be an exact match of the query and the variables sent; otherwise, the Apollo mocked provider throws an error. The errors are not very informative, but this forces the user to be mindful of the data.
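The shape MockedProvider expects is an array of request/result pairs. A minimal sketch, using a stand-in object for the query document and made-up variables and data:

```typescript
// Stand-in for the DocumentNode a gql`...` tag would produce;
// everything below is illustrative data, not a real query.
const COUNTRY_QUERY = { kind: "Document" };

// Each mock pairs an exact request (query + variables) with the
// result Apollo should hand back; if the component's request does
// not match exactly, MockedProvider reports that no mock was found.
const countriesWithCubaQueryMock = [
  {
    request: {
      query: COUNTRY_QUERY,
      variables: { code: "CU" }, // must match the component's variables exactly
    },
    result: {
      data: {
        country: { name: "Cuba", continent: { name: "America" } },
      },
    },
  },
];

console.log(countriesWithCubaQueryMock[0].result.data.country.name);
// → Cuba
```

Passing this array as the `mocks` prop is what the earlier test extracts rely on.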
Wrapping up
If after all that you’re still with me, this is what I want you to take from this post:
- Don’t dive too deep into the implementation details or even checking that the right message gets sent to the right object; you have types.
- Use the Type System to your advantage.
- When in “React land”, if there’s a simple enough component, let your implementation details drive you at the beginning, then delete the whole thing and add a brief (very, very brief) snapshot.
- Writing snapshots first is not TDD 😉.
- Use the right tool for the job. For visual components, pick a library that mimics how the users interact. After all, you are writing a user-facing feature.
- Generate your dummy data in advance and feed it to the mocked providers when testing more complex GraphQL behavior.
- Here’s an example with these ideas.