Engineering

In Conversation: Testing

By Kevin B. on Aug 13, 2021

Erik Luetkehans joins Liam Yafuso to talk about types of testing, why we use them, and when.

I would love to start with how you think of testing. How do you frame different types of tests, and what are the ways you interact with them? 

E: When I think of testing, I think of why we do it. For me, there are a couple of different reasons. The most important, or most obvious, is to have confidence that what we create will work in the real world. We want to make sure that when we deploy our code, it will work and not explode as millions of people try to use it. Additionally, testing allows us to tell when we're done writing code. It lets us set the criteria for what we're trying to achieve ahead of time, and do so explicitly. The final reason we use tests is documentation: to explain what the code is doing. This helps when we onboard new people, or for maintenance purposes when we look back at code written months ago.

L: I want to speak a little more to something you brought up, specifying intent; it ties into the definition of done. This isn't just a testing thing; it's a TDD thing as well. With TDD, the basic premise is that it should be easier to specify what you want to happen than to implement it. So when you TDD something, you might start by calling a method that doesn't exist and just expecting the value you want at the end to appear magically. That's where you start. It's easier to write the test and use the code as if it already exists than to implement all the code that makes it happen. From there, you can work backward. By specifying your test initially, you're setting your definition of done; you're translating the story's acceptance criteria into an executable piece of code that tells you when the story is done: I should be able to call this method, and I should be able to get this answer.
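
To make that concrete, here is a minimal sketch of what that first failing test might look like, assuming Jest and TypeScript; `totalWithTax` is a hypothetical function that does not exist yet when the test is written:

```typescript
// checkout.test.ts, written before checkout.ts exists.
// The import and the call below describe the API we wish we had.
import { totalWithTax } from "./checkout";

describe("totalWithTax", () => {
  it("adds 8% tax to the subtotal", () => {
    // This fails until the module exists; making it pass is the
    // definition of done for this small piece of work.
    expect(totalWithTax(100)).toBe(108);
  });
});
```

Writing the test first forces you to decide what the call site should look like before any implementation exists.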

It would be interesting to talk about the difference between Inside Out and Outside In testing, the different layers of testing we go through, and why we use them.

E: Yeah! In the example of using tests to define what done is, this concept of testing from the outside in relates nicely. [It's equally valid to test from the inside out; it's just a different approach.] Outside in means starting from the very furthest layer of tests. In a basic web application, that would be the end-to-end test, where you're hitting the user interface with Selenium or some sort of WebDriver. It goes all the way from the front end through any kind of API layer to your data persistence and back; it's capturing the entire thing. That would be your outermost layer. A closer-in layer would be your integration-level tests, tests that hit your API and then test down from there. Your smallest layer is the unit test, where you're just looking at the individual methods or objects as you build out your code.
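
As a rough illustration of that outermost layer, here is a sketch of an end-to-end test using the `selenium-webdriver` package with Jest; the URL, element IDs, and sign-up flow are all hypothetical:

```typescript
import { Builder, By, until } from "selenium-webdriver";

it("signs a user up through the real UI", async () => {
  // Drives a real browser against a running instance of the whole stack:
  // front end, API layer, and data persistence.
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://staging.example.test/signup");
    await driver.findElement(By.id("email")).sendKeys("new-user@example.test");
    await driver.findElement(By.id("submit")).click();
    // The welcome banner only renders after the full round trip succeeds.
    await driver.wait(until.elementLocated(By.css(".welcome-banner")), 10_000);
  } finally {
    await driver.quit();
  }
}, 30_000);
```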

Going from out to in and starting with the end-to-end test is where you can directly take the contents of your task and lay out, “here's my acceptance criteria: I need to be able to see that when I click on this button, this action happens, and once that happens, I know I've completed my ticket.” That gives you the outermost knowledge that you can be done and ship the product once that happens. From there, you start defining that to achieve x, I'm going to need this API call to happen, so I know I'm making my integration tests for that. And for that API call to happen, I need to create function x and function y.
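
One layer in, the integration-level test that the end-to-end test drives out might look something like this sketch, assuming an Express app exported from `./app` and the `supertest` library; the route and payload are hypothetical:

```typescript
import request from "supertest";
import { app } from "./app"; // hypothetical Express app

describe("POST /api/users", () => {
  it("creates a user and returns it", async () => {
    // Exercises the API layer down through persistence, but skips the browser.
    const response = await request(app)
      .post("/api/users")
      .send({ email: "new-user@example.test" })
      .expect(201);

    expect(response.body.email).toBe("new-user@example.test");
  });
});
```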

I find when I apply testing in that direction, I end up writing a lot less code. I don't try to solve problems that aren't there. If I do it the other way around, I end up starting from the smallest thing I know. I often find that I tend to try to solve problems I don't need to solve. I try to optimize or abstract things too soon. I end up either generating code I don't need or taking a lot more time to accomplish the task than is required. 

L: Yeah, it's one of the classic things I try to run people through when I can. You mentioned Selenium, which in some systems is the most outside thing. For an API, it would be a contract test. That's the one I typically like to use: I make a JSON request, I should get this JSON response, and that drives out everything else. You end up with this pattern where you have one failing test, which specifies something else that you have to do. At that point, you add another failing test to cover that piece, which makes you add another failing test, and then they percolate back up. Finally, the innermost test passes, and then the next one wrapping it passes: you get the model tests to pass, then the service test to pass, then your API test passes, and then you're done.
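
A contract test at that outermost API layer might look like this sketch, asserting the shape of the JSON response rather than exact values; the endpoint and fields are hypothetical, and Jest plus the same hypothetical Express app as above are assumed:

```typescript
import request from "supertest";
import { app } from "./app"; // hypothetical Express app

it("GET /api/users/:id returns the agreed-upon JSON shape", async () => {
  const response = await request(app).get("/api/users/42").expect(200);

  // The contract: callers can rely on these fields existing with these types.
  expect(response.body).toEqual({
    id: expect.any(Number),
    email: expect.any(String),
    createdAt: expect.any(String),
  });
});
```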

You mentioned inside out; I’m curious how you feel about it, or were you just saying that you don't use it? 

E: I usually don't use it for TDD-style test writing. I think there is a world where it can be interesting for debugging; often, in that kind of scenario, you discover a break or a side effect, and from that side effect or bug, you have to work backward to figure out where it's originating. So I wouldn't use it so much when writing new code, but if you're going back for maintenance, there are times when it makes sense to start from your innermost thing and work back out; it's not what I would default to, though.

L: That's a solid use case. So using it for bug investigation: drawing a circle around the bug, and then rippling outwards from there to make sure that it is fixed across the rest of the system. Mainly because when you discover a bug, you also find a hole in your tests. Your tests were passing, but the bug was still able to exist. Some subset of the system is broken, but we don't know how much of it. So you wrap the smallest possible site with a test; once that passes, you wrap something slightly larger, and larger again, until you have your head wrapped around the whole situation.

E: Exactly, and I think that's more so when you're doing discovery on the bug. Whereas if there were an obvious reason why it is failing, then you would start from the outermost part. 

How do you know when you should test? What is your process specifically for deciding whether or not to test something? 

E: In my personal opinion, anything that is either back end or pure functionality should always have tests. It's easier to write the test first than not; it's just going to be faster. Where you run into time constraints and the need to skimp on testing is more in complex UI interactions or some kind of integration you're hitting: some third party that does a bunch of processing in the background. When that happens, it's highly dependent on the project you're working on. In an ideal world, you never skip any testing whatsoever; you always have full end-to-end and integration coverage and have it unit tested all the way through.

If you are working on an early startup project where all that needs to happen is to get VC money before the end of two months or the entire company goes away, then maybe it's okay not to worry about whether a UI feature is going to work across every single browser, in every single validation situation. You just need something that is more of a usable demo. That would be an instance where you would reduce the amount of testing you're doing, especially higher up in the stack.

Where you don’t want to do this is if you're working in a larger enterprise situation where you have multiple external teams that are going to be depending on your project or if you already have a mature user base where you have a lot of diverse users all trying to use your products. In these cases, it becomes essential to strictly adhere to having tests on pretty much everything. At that point, you're going to want to do things like cross-browser testing on every single deployment. You're going to want to make sure that this is working how you intended it to. One more edge case would be if the software will be highly distributed. If you are creating some sort of library or making components that you're expecting to be used across a vast array of sites, like some sort of advertising component, you have to make sure that it’s tested and looking good. It's going to be used across a bunch of different browsers inside other people's websites. In that kind of situation, you need robust testing. 

L: You can have scenarios where someone who's not very experienced writing tests will be slowed down significantly by them, so they would be faster writing the code without tests, at least at the beginning. The problem is that untested code is effectively technical debt. So you're giving up future speed to go fast right now, and that only makes sense if you're not very solid on how to write tests. When I interact with systems where I'm very comfortable with the testing framework, I'm faster writing the tests first. You have to realize the effects of testing down the road, because it's a lot more impactful than you might think.

There have been experiments done where if you have a clean room with a trash can in the corner and you have people go in with a piece of trash, they'll throw it in the trash can. If you have that same room with trash all over the floor or even some in the corner, they're much more likely to leave that trash around. When you see a space, you take it in. In the few moments of taking it in, you understand the status quo of the area and then perpetuate that. I think this is the same as having a bunch of untested code. When you decide not to write tests, you're making that decision not just for you right now but for everyone moving forward. You need to be fully cognizant of the decision that you're making. 

Another piece of this is that writing tests is its own thing. You can learn a framework like Rails, but you still have to learn RSpec, and that's a separate piece of technology you have to learn. If you're going to treat that as if it will take too long to learn, that's a long-term decision. You're deciding that you're never going to learn it and that you're always going to put short-term speed ahead of long-term stability. You are going to put speed ahead, or you're going to put code quality ahead. The people who can truly move fast and still deliver really solid software are the people who do not make this compromise. If you want to do both, you have to write the tests; you have to put in the time it's going to take initially.

For the question of, “We're on a tight deadline, or we're going to throw this code away; should we write tests now?” Fundamentally, where I fall on this is that you should write the tests.

So my answer to “When should you test?” is multifaceted. One part is, how comfortable are you in the testing framework? If people are uncomfortable writing tests because they haven't done it a lot, that discomfort will be the source of the pushback against writing the tests. The claim that there isn't enough time to write tests is not a problem with having enough time; it's a problem with not knowing how to write tests. People are concerned that they'll be slow while they try to figure out how to write them.

As you implement something, depending on how many tests you have, the amount of context you have to keep in your head all at once can be all of the context of the feature you're building. Everything that interacts with that feature, you have to hold in your head and keep aligned while you're writing the code, or else you'll drop something and your code will be broken in some way. You won't know that it's broken, because you wrote it to the specifications that are in your head, and some amount of context has fallen out. We have limited working memory we can use to hold stuff in our heads at any given time, and as you deal with more and more complex systems, you cap out pretty quickly. Even someone who's a genius, who can hold twice as much in their head, can only handle a system that's twice as complex; that's not that much more. Whereas if you're writing the tests, you're encoding your requirements into functional, running code: every test you write takes on some of the context that would otherwise be in your head. Now the tests hold the context of what the application is supposed to do for you. You only have to think about the subset that remains, and you can be wrong about the other parts in your head and it doesn't matter, because when you're wrong, your tests will tell you.

So for me, even on short time spans, it's better to write the tests. That said, there are things that you don't need to test. You don't need to test frameworks that you pull in. If I pull in a web framework, I can see that there are tests for the framework and assume that the core functionality specified in its docs is tested until I see otherwise. You can also get a little too pedantic, to where you're writing a test for a test. You're trying to have the tests hold the context so that the software knows what it's supposed to do; as long as you're checking those boxes, you're good. One good indicator of whether you're thinking about tests correctly: when you find and fix a bug, do you put the fix out with a test for that bug? Whenever you're fixing a bug, you should always end up with a test that would fail if the bug still existed.
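
A minimal sketch of that habit, using a hypothetical bug where an empty cart used to throw instead of totalling to zero:

```typescript
import { cartTotal } from "./checkout"; // hypothetical function

// Shipped alongside the fix: this test would fail if the bug ever came back.
it("returns 0 for an empty cart instead of throwing", () => {
  expect(cartTotal([])).toBe(0);
});
```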

E: I've got two things. First, a good indicator of whether I have a hole in my tests is if I have to log something out to the console; if I don't know what's happening there, it probably means I have a test missing. Something I think you were hitting on as well is that people skip tests because they're trying to write something and find it very difficult to test. What that tells me is, if what you're trying to write is difficult to test, your approach is probably not good. It probably means you have many side effects occurring that you're trying to mock out, which is a bad sign. Maybe you need to invert your dependencies, or what you're trying to write is attempting to take on too many responsibilities, and that's why it's difficult to test. You want to break it down smaller and smaller, ideally down to a single responsibility for whatever you're writing. If something's difficult to test, that's usually a sign that you might need a different approach, pattern, or tactic. Writing tests forces you to write better code; it usually means you will have to adopt better coding practices.
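
A rough sketch of what that inversion can look like: instead of reaching for the real clock and a real mailer inside the function, pass them in, so the test needs only hand-rolled fakes. All names here are hypothetical; Jest and TypeScript are assumed.

```typescript
type Clock = () => Date;
type Mailer = { send: (to: string, body: string) => Promise<void> };

// The side effects come in through the front door, so they are easy to swap out.
async function sendReminder(to: string, now: Clock, mailer: Mailer) {
  const hour = now().getHours();
  if (hour >= 9 && hour < 17) {
    await mailer.send(to, "Your appointment is coming up.");
  }
}

it("does not send reminders outside business hours", async () => {
  const sent: string[] = [];
  const fakeMailer: Mailer = {
    send: async (to, _body) => { sent.push(to); },
  };
  // 8 p.m. local time, injected instead of read from the system clock.
  await sendReminder("a@example.test", () => new Date(2021, 7, 13, 20, 0), fakeMailer);
  expect(sent).toHaveLength(0);
});
```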

L: I also bundle what you are talking about with knowing how to test. Knowing how to test includes knowing how to use the framework you're testing with and its syntax; it also means knowing how to test different types of things. Do you know how to write a test for a React component? Do you know how to write a contract test for the API? Do you know how to write a test for a class? Each of these, I lump under knowing how to write tests. So that goes into what you're talking about, Erik. If you know how to write tests, you'll write code in a clean and easy-to-understand way, because you'll be writing it in a way that's easy to make straightforward assertions about, and that's all the tests are doing. Tests are making clean and straightforward assertions about how something is supposed to work. If you can't create a meaningful assertion in your head of how it's supposed to work, that's where you want to start. You want to stay there and not start writing a bunch of code before you've cleared that first hurdle of, “What is the thing I'm writing supposed to do?” Testing makes you answer that before you get to move on.
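
As one of those examples, a React component test, assuming Jest and React Testing Library, boils down to one plain assertion about what the user should see; the `Greeting` component and its text are hypothetical:

```tsx
import React from "react";
import { render, screen } from "@testing-library/react";
import { Greeting } from "./Greeting"; // hypothetical component

it("greets the user by name", () => {
  render(<Greeting name="Erik" />);
  // getByText throws if the text isn't rendered, so finding it is the assertion.
  expect(screen.getByText("Hello, Erik!")).toBeTruthy();
});
```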

E: I think that's another benefit of outside-in testing. As an engineer, if you get an issue or a ticket and can't turn it into a good end-to-end test, or whatever your outermost layer is, that ticket does not tell you what you need to do. That indicates that you need to go back to the stakeholder or product owner and drive out the actual requirements.

L: It gets back to the definition of done. If you don't have a way of asserting, “This is how I know this thing is done,” then you probably don't want to keep moving forward. You want to go back and answer that question first and then come back to the code, because it's not a code question. It's a business question.

We mentioned layers; it's not as if everything falls either outside or inside. I see something like tests of the database, where your model tests actually assert calls to the database; I don't know what layer those would fall on compared to, say, unit tests of a class. I feel like unit tests of a class are their own thing, whereas database tests are almost more of an integration-type thing. I'm curious, do you write every layer of test on every project that you're on?

E: For the most part, yes. The only exception would be if you are on a small startup project where it truly is the case that if you don't deliver something very rapidly, the entire company disappears, and you are just trying to get VC capital to keep the doors open.

L: Yeah, and you just have to be careful with that. It's not a great place to be, and it can quickly turn into a situation where, because you did this in the first month, there will be a push to operate like that the next month. That's not the approach you want to take: you took on a bunch of technical debt to get that thing now, and you should be paying it off if you want to move forward. For the times when I have skipped tests, it has tended to be because I just didn't feel comfortable testing that thing; I didn't know how to do it very well, and so I felt that time crunch. In all the cases where I know how to test well, I do. If you want persistent speed, if you want speed this week to be indicative of speed next week, then we have to do this the clean way. That's kind of what defines Artium as a consultancy. We do high-end, quality work. We're going to build you something that is going to be able to last.