Twitter is a lot of things, but for us it’s how we got spotted by @shl, @naval, and @notationcapital and raised our first $1M. Here’s the story and some of the lessons we learned along the way.
QA Wolf gives the engineers at @possiblefinance full confidence in each release, which helps them move faster and multiplies the impact that marketing, ops, and the rest of the company can have on customers.
Ed-tech company @padlet was spending hours and hours every week maintaining their no-code automated tests, which covered just 10% of their application. By offloading end-to-end testing onto QA Wolf, Padlet’s team got to 95% coverage in 12 weeks.
It’s amazing what you can learn when you have one of the largest automated tests data sets in the world!
We analyzed the test decay rate for 20 high-velocity teams. Without constant maintenance, you can expect to lose 5–10% of your tests each week.
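To see how fast that compounds, here’s a quick sketch. The 5–10% weekly rate comes from the data above; the 500-test suite and the 7.5% midpoint are hypothetical numbers chosen for illustration.

```python
# Illustrative only: how an unmaintained suite shrinks under weekly decay.
# The decay range is from our data; the suite size is made up.
def surviving_tests(initial: int, weekly_decay: float, weeks: int) -> int:
    """Tests still passing after `weeks` of unmaintained decay."""
    return round(initial * (1 - weekly_decay) ** weeks)

# A hypothetical 500-test suite at the 7.5% midpoint:
print(surviving_tests(500, 0.075, 12))  # -> 196, i.e. ~60% of the suite lost in a quarter
```

Even at the low end of the range, roughly half of an untouched suite stops working within a few months.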
The small team of developers at @CoSell_io needs to be nimble while looking for product-market fit. By providing automated test coverage and hands-on QA expertise, QA Wolf gives them the security to move fast without breaking things.
🐺 2.9.0 is 🚀
🆕 Editing a test after a deployment is easier 😽
When you click "Edit Test" from a run, it includes the environment variables for that run.
For a @vercel or @netlify deployment, that means the same process.env.URL for that run is included.
The definitive introduction to Arrange, Act, Assert (AAA) test outlining: What it is, why we use it here at QA Wolf, and best practices for using AAA to get cost-efficient coverage.
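As a rough illustration of the pattern (in Python rather than our Playwright stack, and with an invented ShoppingCart class standing in for the system under test):

```python
# Arrange-Act-Assert: each test reads as three clearly separated steps.
# ShoppingCart is a made-up example class, not a real library.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

def test_cart_totals_items():
    # Arrange: put the system into a known starting state
    cart = ShoppingCart()
    cart.add("notebook", 4.50)
    cart.add("pen", 1.25)

    # Act: perform exactly one behavior under test
    total = cart.total()

    # Assert: verify the observable outcome
    assert total == 5.75
```

Keeping the three steps distinct makes failures easy to diagnose: a broken Arrange points at setup, a broken Assert points at the behavior itself.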
QA Wolf built automated tests for much of Chief's core features in four months, enabling their in-house QA engineers to focus on new product development and seamlessly transition from an in-person to virtual community during the COVID pandemic.
The Playwright vs Cypress debate is sure to go on, but when it comes to testing complex workflows, Playwright is better than Cypress for QA Wolf and our customers. Read why 👉
If you're focused on vanity metrics, you're going to miss what's really important: Shipping faster, doing less rework, and increasing profitability. And yes, automated testing has a role in all of those.
We manage tens of thousands of end-to-end tests for our clients. We’ve learned that a lot of the accepted wisdom about automated testing just isn’t true — here's what we've seen building, running, and maintaining large test suites.
You’re not imagining things. Everything that makes E2E regression testing difficult is infinitely more challenging when you’ve integrated with or built on top of Salesforce. Read why 👉
We teamed up with @karat to identify the top skills to look for when hiring QA engineers and how you can find the candidates that have them. Take it from us: QA engineers make up more than half of our company.
With full end-to-end test coverage and unlimited, parallel runs, the @getpequity team accelerated their release velocity from 2 weeks to 1 day and sees 66% fewer bug tickets.
When we started QA Wolf we didn’t expect to pioneer a new category, but focusing on customer needs led us there.
Instead of DIY tools or hourly contractors, QA as a Service guarantees coverage levels, 24-hour test triage, maintenance, and bug reporting.
Debugging the @makersplace platform, verifying blockchain transactions, and testing third-party integrations used to take 2 weeks. By automating their 300 manual test cases, QA Wolf took that down to a few minutes.
To run more than 2M end-to-end tests per month, like we do, takes a ton of hard work and a few hard knocks. Check out today's @bytebytego newsletter and learn the nitty-gritty details (spoiler alert: it includes Kubernetes and Docker).
The team at @ximasoftware found that maintaining flaky E2E tests took as much time as manual regressions had before. QA Wolf gives their developers 20% more time each week to focus on the product, and cuts their QA cycles from days to minutes.
After QA Wolf doubled GUIDEcx's automated E2E test coverage in less than 4 months and integrated with their deployment pipeline, GUIDEcx is saving more than $640K/year in engineering time, QA, and customer support.
3. End-to-end testing is rarely a developer’s core competency
Which is pretty evident from the low coverage levels most companies have. But we see it in research too: 70% of teams approach test design by pure intuition.
We often hear that devs should maintain automated tests for code quality. When it comes to whitebox testing we completely agree. But when devs own blackbox testing, we see productivity drop and coverage levels suffer. These are the 3 reasons why.
The job of testing in the world of generative AI involves familiarizing yourself with new territory and revisiting some familiar terrain. But the creative essence of testing remains consistent.
When you run E2E tests on PRs before merging, you exacerbate your maintenance burden. Unprepared teams fall into traps on this journey. Find out what they are and how to avoid them.
End-to-end regression testing for generative AI applications is mostly undiscovered territory. We’re blazing the trail, and here are some things we’ve mapped so far.
Attention, gearheads: Learn the details of our Kubernetes implementation that enables us to execute fully parallelized test runs, whether there are 200 tests in the run or 20,000.
Modern tools and frameworks make regular web page performance testing easier for teams. We'll provide the code samples to get your team up and running with the performance metrics you need to create a great customer experience.
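One common starting point is bucketing measurements against Google's published Core Web Vitals thresholds. The thresholds below come from that guidance; the classifier itself is a minimal sketch, not a full monitoring setup.

```python
# Sketch: rate Core Web Vitals measurements against Google's published
# thresholds (LCP in seconds, INP in milliseconds, CLS unitless).
THRESHOLDS = {
    "LCP": (2.5, 4.0),   # Largest Contentful Paint, seconds
    "INP": (200, 500),   # Interaction to Next Paint, milliseconds
    "CLS": (0.1, 0.25),  # Cumulative Layout Shift score
}

def rate(metric: str, value: float) -> str:
    """Classify a measurement as good / needs improvement / poor."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(rate("LCP", 1.9))  # -> good
print(rate("CLS", 0.3))  # -> poor
```

Wiring a classifier like this into CI lets a team fail a build, or at least flag it, when a page regresses past the "needs improvement" line.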
As you might imagine, we talk to a lot of engineering leaders about QA and end-to-end testing, and they share the same three reasons why it's such a struggle to scale end-to-end test coverage.
2. Writing and maintaining end-to-end tests limits the ability to work on new features
Maintaining end-to-end tests can take 20–40% of a developer’s time. And when maintenance is neglected, there's a chain reaction of deployment blockers.