In our latest Twitter Space, Jenny Bramble and Jenna Charlton discuss test automation trends that drive them crazy, QA and the future of AI, the hidden costs of connecting test automation solutions to your CI/CD pipelines, their thoughts on testing in production, the myths of 100 percent coverage, and more!
This is a spicy conversation that you'll want to listen to from beginning to end, but just in case, we've shared some key takeaways below.
I want to introduce our amazing first guest. Jenna, could you tell us a little bit about yourself?
Sure… I'm Jenna Charlton, also known as @TheyWrestleTest. I have been in the testing world for…going on like 15 years at this point…And about a year and a half ago, I made the transition, took the adventure from being a tester and doing test consulting and agile consulting to product. And now I work for Functionize as the Director of Product, building things for testers… And along with that, I am working on my MFA in experience design, because I want to apply better design practices to the tools that we build, because I think better tools lead to better testers.
….I really, really appreciate that. Jenny, can you say hi to our amazing people attending live…and tell us a little bit about yourself.
Hello, all of you. Beautiful, beautiful humans out there…I'm Jenny Bramble, the Director of Quality Engineering here at Papa. I've been speaking and thinking about testing for a very, very long time. And one of the things that really…gets me excited about testing is the ways that we can use technology to make manual testers better and to help enable cultures of quality within our teams….We're asked to do more with less all the time, and how better to do more with less than to invite automation into our lives.
We can appreciate that. I wanna make sure we cover some other juicy topics. Y'all know how everyone puts out those test automation trends… I'm curious for you all, what's a current trend in test automation that just drives you crazy?
Automate everything. That drives me up the wall. Spicy take real quick: test automation is still an incredibly young discipline, especially compared to software development as a whole, or quality assurance as a whole. Test automation is still very new, very young, and we are still absolutely enamored with it. And I love that, and I love our love affair with automation.
But what it can do is get us to inappropriately apply automation, inappropriately apply tools, and then we're just automating everything. We have hundreds of tests and they're not doing much. They're not giving us a lot of value, but my gosh, are they beautiful? And I am so in love with them. So my least favorite current trend is just automating everything. Not removing tests, not deleting tests. Ah, yeah, keep your love affair with automation, but also set some boundaries. That's it. Relationship advice and automation advice.
But I thought we skipped all that and went straight to AI. We're not there yet. Jenny?
Is it too spicy to say that I think AI and testing will not be friends for another 10 years? Is that too spicy?
I may not agree…to give a little bit of context, one, you know, my company is focused in the AI space in testing…I know quite a bit about what our neural net looks like… That being said, I don't believe in everyone's AI infrastructure. I do think that some of the large language models that are coming out are going to enable really interesting new things in the testing AI space. …I also think that we are really going to start to see the cream rise to the top, and we're gonna see the difference between AI fakers and the real deal.
But my least favorite trend is actually that everybody wants to go wide right now. You know, there are wonderful tools that you can buy and wonderful frameworks that you can implement… That being said, there are a lot of vendors out there who, because they can't do one thing exceptionally well, have started to do a lot of mediocre things.
There are new players in the test automation space every day, and some of them are coming from vendors that have never been in the testing space before. I'm gonna give you some examples. There are test automation frameworks or vendors that are now trying to move into the accessibility space. I don't trust it. Accessibility takes a unique, very specific, and intentional skill set. And you don't just decide one day, "I'm gonna start bringing accessibility into my testing." You go and you learn about it. You learn the laws, you learn the requirements, you get familiar with WCAG…you don't just jump in. There's also, you know, this big push for static analysis in CI/CD that everybody wants to go wide into without the background or the infrastructure for it.
I love this though, and as a follow-up question, what's been the biggest perspective shift as you've continued to gain more experience in this automation space?
When I started as a tester, I refused to do automation. Computers are going to steal our jobs. Computers can't do anything as good as I can do it. I'm always gonna be better than a computer. So I was really very antagonistic, very, very against automation because I was frankly scared of it and didn't understand it. If anyone wants to draw parallels between that and how I feel about AI, please feel free.
But as I've gotten older… I recognize the value of tools. Like there is a place for every single tool that I can potentially have in my toolbox…Automation is something that is going to make my job different. It's not going to make my job worse and it's not going to take my job away. So that's one of the major ways that I've changed. But within my automation practice, one of the things that I've seen change over the past few years is how I approach automation. Previously, I wrote automation in order to find bugs, in order to prove that there were bugs out there. And these days I write automation in order to verify people's expectations.
Can you repeat that again?
I no longer write automation to find bugs and defects. I write automation in order to verify expectations of my systems.
…Automation doesn't find defects. It just tells you whether expectations are met. What happens when you're writing automation is you say, this is the expected state of the system. I expect the system to look like this. Automation runs and it tells you if the system meets those expectations. We don't know if that's a bug. We don't know if that's a defect. We need a human to come in and say, yes, that's a bug. Yes, that's a defect. Yes, that's wrong. Or the human will come in and say, oh, we forgot to update the test. Silly us. That's not finding a bug, that's just saying our expectations were incorrect and we need to reset them…That's one of my favorite things: stepping away from automation as a bug detection system and embracing automation as something that tells me if my expectations are being met.
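Jenny's reframing can be sketched as a tiny test. The function and field names below are made up for illustration, but the shape is the point: the test encodes an expectation, and a failure is a signal for a human to interpret, not automatically a bug.

```python
def get_cart_total(items):
    """System under test (hypothetical): sums line-item prices in cents."""
    return sum(item["price"] for item in items)

def test_cart_total_matches_expectation():
    # The test states an expectation about the system's behavior.
    items = [{"price": 499}, {"price": 1250}]
    expected = 1749
    actual = get_cart_total(items)
    # A failure here only means "expectation not met". A human then decides:
    # is the code wrong (a defect), or is the expectation stale (update the test)?
    assert actual == expected, f"expected {expected}, got {actual}"

test_cart_total_matches_expectation()
```

If the pricing logic later changes on purpose, this test fails just as loudly as it would for a real bug; only a person can tell the two apart.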
So one, I totally agree with Jenny on some of this. Automation helps us find all of the unknowns that we know about. It doesn't help us find the unknown unknowns. So it confirms what we know, but it cannot confirm what we don't know.
Now granted I did move over to the product side, but I'm still a tester at heart. But I've realized my thinking has changed because I've been exposed to so much more and I understand so much more. And I see where the people and the automation are valuable, and they're two different places. I've learned that you can stay manual your whole career and that is 100% acceptable and 100% valid. And there is a place for you that way. And you can also say, I wanna do automation and that's the only thing I wanna do. And that is 100% valid and 100% acceptable, and there's a place for you. You don't have to feel like you have to be one or the other…
We often hear all of these software quality success stories, right? From all of these customers, plastered over with logos. And it's all about connecting your test automation solutions to your CI/CD pipeline… What are the hidden costs there that we don't talk about, both from a people and from a technological perspective…?
Ooh, hidden cost is my absolute favorite topic….So one of the things that I'm doing at my company right now is we are working on getting automation in play…The first is I had to write an RFC. So we are picking what platform we're gonna use, what test automation thing we're gonna use. I…wrote an RFC, here's why we want to go with Appium, here's what I'm thinking, here's the reasons it's good. Here's why we should do test automation in the first place. And I presented it and one of my developers said, well what about this other tool? I'm like, well, you write your own, bring an RFC. And he did…But it did take time out of his schedule to put this together. We ended up going with Appium for a lot of reasons. So that's my first set of hidden costs in making that decision. And then I look around my team and I don't have an Appium developer on the team so I'm gonna have to go hire one. Y'all know how hard it is to hire people…hiring is expensive…after all of that, we pick somebody, we hire them, hooray, we're done. Right?
We are not done, because I have to get them trained up. I have to integrate them into our systems. I have to make sure that they have the support that they need to be a successful developer with my team. So there's another cost. And if I wasn't actually hiring somebody specifically to get this framework in play, I would have to take one of my team members completely off of any of the manual testing they're doing and have them write an Appium-based test suite.
The cost there is astronomical. I am losing an entire person for however long it's going to be in order to get this thing put together. And I might not see a return on what they're doing for months, like literal months because they're gonna have to ramp up. They're gonna have to put this thing in play, they're gonna have to test it, they're gonna have to go back and forth a whole bunch. It's so expensive to start this up. And Appium is a free tool. We're not paying for Appium, but I am paying for this investment in Appium…
So you know, I completely agree with Jenny as far as the human cost associated, but there are other costs too that we sometimes don't consider because they're a little outside of our purview. And these are things like, how are you right now hosting things like your CI/CD tools? Is this something that you host on-premises? Is this something you're hoping to host through AWS? [Because] there's an infrastructure cost. You may have some human cost in actually doing the work to connect those tools. However, it should be relatively simple to connect, like using API calls from Jenkins to, you know, whatever your test automation is, whether it's your internal automation or a vendor. I will say it's easier to do with a vendor cuz they've streamlined it for you and created everything already.
Doesn't mean you can't do it if you're in-house. Like if you've built something yourself, you absolutely can do it. You just have to do more of that work yourself. Of course there's a cost no matter what, in-house or a vendor. Somehow you're paying that bill. But I think the one thing that you should think about outside of everything Jenny mentioned around the human cost is that there's an infrastructure cost. And you've gotta think about that infrastructure cost and whether you are currently ready financially to support that as a business.
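Jenna mentions using API calls from Jenkins to reach your automation. As a hedged sketch: Jenkins does expose a remote-build endpoint of the form `/job/<name>/build` secured with basic auth and an API token, but the server URL, job name, and credentials below are placeholders, not any real setup.

```python
import base64
import urllib.request
from urllib.parse import quote

def build_trigger_url(base_url: str, job_name: str) -> str:
    """Construct the Jenkins remote-build URL for a named job."""
    return f"{base_url.rstrip('/')}/job/{quote(job_name)}/build"

def trigger_job(base_url: str, job_name: str, user: str, api_token: str):
    # Fire-and-forget trigger; a real pipeline step would also check the
    # HTTP response and poll the queue item for the build's result.
    url = build_trigger_url(base_url, job_name)
    creds = base64.b64encode(f"{user}:{api_token}".encode()).decode()
    req = urllib.request.Request(url, data=b"", method="POST")
    req.add_header("Authorization", f"Basic {creds}")
    return urllib.request.urlopen(req)  # network call; placeholder config
```

The "simple to connect" part is genuinely just a POST like this; the hidden cost is everything around it, such as hosting the CI server, agents, and secrets management.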
I love that because you're right, we don't often think of infrastructure as testers, but as somebody that manages now and has to think about that, I definitely have a small heart attack once in a while when we run out of GitHub actions or when we need to bump up something random because suddenly we're using way more resources. So as we're jumping into test automation, especially from a manager level, think about what it's gonna cost and prep your people for it.
Set alerts, set alerts… For both of you, you're talking about those hidden costs from an infrastructure perspective, but at the end of the day, why are we writing some of these tests to begin with, right? Like, some of it just feels like we are stroking leadership's egos, you know what I mean?
I'm curious though for you all too, what advice would you give? Because I know this is a silly topic, but this is something that always grinds my gears. You know, all of those people that would sign a POC, join our forum, and a new customer would tune in, and bless them, they would share that their desired goal is to get to 100 percent coverage. You know, a lot of people actually believe this, and this is the fantasy that they sold to their CTO...
What is your advice for someone who has bought into that fantasy?
I recommend therapy.
So outside of therapy, which you might really need if you're that into coverage… I think other folks have probably heard me talk about this.
100% coverage is a myth. 100 percent code coverage means less than zero. It is a negative metric, because code coverage doesn't mean meaningful tests and it doesn't mean quality. I can write you a hundred percent code coverage with tests that pass every single time, but I hit a hundred percent. Where's the value? The value is in understanding.
What are your customers using? What are your customers' values? What do you do as a business? What will protect your brand? What will protect you from being sued, from liability? It's identifying the risks and focusing on creating meaningful tests, both automated and manual, around those things. I don't care if that coverage means that you hit 15% coverage. You've hit what matters, not what makes somebody feel good.
Anytime that I hear people talking about a hundred percent code coverage, when I start probing into that, what I hear is that they are in a very low-trust environment and they think a hundred percent code coverage is going to somehow make them trusted. And that's not how it works. You are trying to solve an emotional problem with a hammer. When there's low trust and you say, well, if I can get to a hundred percent coverage, or if I can test everything, they'll finally trust me, they're never going to trust you, because you're never gonna be able to do that and you're just gonna hurt yourself.
Going for a hundred percent code coverage, going for 50%, going for 20%, is a bad metric that can be used as a weapon more than it can be used as a tool to help you. So when you are in that kind of situation, step back and ask what you are trying to achieve with, quote, a hundred percent coverage, unquote. What do you actually want? Do you wanna be trusted? Then start having more conversations around the types of testing you're doing. Create test artifacts. Talk about why you're not doing certain types of testing, and create that confidence, that trust that says: I am a professional who is doing professional things out of my subject matter expertise, and let me just do that.
And be a quality coach. And what I mean by that is to engage everyone. This is not a single person decision. This is a team decision and a team agreement on what we value and what matters. Get that working agreement.
…And a lot of people in software development do care about software quality and they do care about learning more about testing, but we're seeing a big shift…We're seeing all of these testing predictions for 2023 around testing in production and shifting right, and we're hearing about all the perks of it. So I'm curious, for you, Jenny and Jenna, what's a pro and con from your years of experience for testing in production?
I have recently at my job been encouraging people to just YOLO it to prod.
There are a lot of things that we can't learn from our staging environments. We can't learn how users are going to interact with something in the real world. We can't always test with real world data. There's only so much we can do in a staging environment… in a simulated environment.
If you have the ability to roll back quickly, then you should be way more risk tolerant. As long as you're not hurting people or damaging your data. Put it out there, man.
See what happens. It's gonna be awesome, I promise. And if it isn't awesome, then flip your feature flag off, roll back, make sure you do have a mitigation strategy in place for if it goes wrong, but push it to prod, learn, experiment, get wild.
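A minimal sketch of the mitigation strategy Jenny describes: gate the new code path behind a flag so that "rolling back" means flipping a boolean rather than redeploying. The flag store, flag name, and checkout functions here are all hypothetical stand-ins; in practice the flags usually live in a remote config or feature-flag service.

```python
FLAGS = {"new_checkout_flow": True}  # stand-in for a remote flag service

def is_enabled(name):
    # Unknown flags default to off, so a missing flag fails safe.
    return FLAGS.get(name, False)

def legacy_checkout(cart):
    return {"flow": "legacy", "items": len(cart)}  # the known-good path

def new_checkout(cart):
    return {"flow": "new", "items": len(cart)}  # the experiment in prod

def checkout(cart):
    # If the experiment misbehaves, flipping the flag restores the
    # legacy path immediately; no rollback deploy required.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

The caveat from the conversation still applies: this only works for changes that can coexist with the old path; a non-revertible data migration can't be flagged off.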
Jenny, I appreciate you so much. We've come a long way.
I agree in some respects, definitely, if you can roll back, if you are in a place to make fixes quickly. But it's important that before you have a conversation about what we are going to push with a lighter amount of testing, or potentially no testing, there is an understanding of what should and shouldn't go through that process. If it is a breaking change that can't be reverted, then it does not go through that process. If it's something we can roll back, if it is a simple one-line code fix, then sure, let's go ahead and throw it out and see what happens.
But the other part of shifting right that I think is super important is data collection. We talk a lot about testing in production, but that also means that we're testing the data that we're collecting, which means analyzing the monitoring data, analyzing the kinds of errors that we're seeing in production, analyzing the performance that we see in production. And that's kind of the ignored piece of this that I think is really important.
You rarely will find the actual performance issues in your performance testing. You're gonna find them in prod. So you gotta watch for them.
We need to stop throwing things over the wall to production. Monitor it, check it. If you're putting it out there, especially with limited testing, then you should be watching it like an absolute hawk. If you can't do that, then you can't YOLO to prod.
I love that so much. And as we wrap, I wanna make sure we also get super hyped about your upcoming talks at AgileTD USA, whether you're in Chicago, in the metro area, or you can make it out for one of their discounted tickets. There is nothing like seeing Jenny and Jenna live, along with all the other amazing speakers and tutorial hosts.
I adore you both so much and thank you so much for spending your time.
Join us for Agile Testing Days USA, set for May 22–24, 2023, in Chicago, IL. Jenna will deliver a keynote, "Imperfect Agile," in which they share lessons learned about themselves, their teams, and what it means to truly be agile from a perfectly imperfect agile transformation. They will also host a full-day tutorial on Fundamental Test Skills for everyone who wants to get back to basics and deepen their testing knowledge.
In Jenny's talk, "Seems Good Enough to Me: De-risking Regression," you will get a stronger understanding of the ways we can simplify and de-risk upgrades, and learn structures for working with tests to make sure that we're doing the right amount of testing.
If you want to learn more about pricing and packages, check out our AgileTD USA 2023 ticket options.