I'm one of those types who puts significant value on the Quality Assurance department of any software development shop. So, I appreciated seeing the post at OnStartups: Business Geeks: Automated Software Testing as Competitive Advantage. The key word here is "Automated". There's really no way to build a scalable software business if you do not invest in good software engineering practices combined with automated testing.
Most startups (and, heck, even many big, established companies) do a crappy job with their Software Quality Assurance programs. Why is that?
- Most CEOs and VPs of Engineering do not understand the importance of QA. Oh, sure. They all say that "Quality is Job 1" or whatever, but when push comes to shove and the product "needs" to be shipped, quality is the first thing sacrificed. Executives feel that they can always patch the product later. That's a fine approach; it will just cost you a lot more.
- It's very hard to build a QA group with rock-star programmers. Oftentimes, junior engineers are hired into the QA department. They will grow there, but the first chance they get to write "product" code, they will try to transfer out of the department or, worse, quit and move on to a different company. Organizations need to reward QA programmers with career paths equal to those of their code-writing counterparts.
- QA is not factored into the early planning of a software product release. Software teams will spend tons of time fighting over requirements, scope, features, modules, partitioning the teams, outsourcing, private APIs, public APIs, usability, configuration files, installation, upgrade, etc. Frequently, however, the automated test suite is not given its proper attention. If you miss it while planning, you will miss it while executing. While you are doing your product design, you need to be doing your test design.
- Most organizations have no structured software development lifecycle. If you don't know when a product can transition from Alpha to Beta to GA, when it's time to release a patch vs. a hot-fix, when to release a dot-release vs. a dot-dot release, when it's time to do a major release, or the criteria by which you EOL a product -- then how do you know when it's time to test? And if you haven't invested in "Automated" tests, then you will be paying a pretty significant price each time you send something out to a customer.
- The testing cycle is often the last step before a product ships. Most companies treat testing as the final "certification" that allows them to release the product to customers. That's very bad. Testing needs to be the first thing that is done. The only way you know you are "done" is if the test suite passes.
At my last company, Cassatt, we had a rock-star group of software developers and QA engineers. We instituted what is referred to in the Agile software arena as a "Test First" philosophy. How does that work? The developers specify the unit tests at the same time that they specify the design and the interfaces. They code the tests first, then code the module. When the tests all pass, you know you are done. This frees the QA department to look beyond the unit tests and focus on multi-unit, simulation, stress, code-coverage, and GUI tests. It requires rigor and structure, and a management team that believes in the Pay Me Now form of software development.
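To make the Test First workflow concrete, here is a minimal sketch in Python's standard unittest framework. The `slugify` helper and its test names are hypothetical, invented purely for illustration; the point is the order of work: the test class below is written first, as the module's specification, and the function is implemented afterward until every assertion passes.

```python
import re
import unittest


def slugify(title):
    """Turn a title into a URL-friendly slug (implemented after its tests)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


class TestSlugify(unittest.TestCase):
    # These assertions were agreed on before slugify() existed;
    # they define what "done" means for this module.
    def test_lowercases(self):
        self.assertEqual(slugify("Hello"), "hello")

    def test_replaces_spaces_and_punctuation(self):
        self.assertEqual(slugify("Pay Me Now!"), "pay-me-now")

    def test_strips_leading_and_trailing_separators(self):
        self.assertEqual(slugify("  spaced out  "), "spaced-out")


if __name__ == "__main__":
    unittest.main()
```

Because the unit-level behavior is pinned down by the developers' own tests, QA's time is freed for the multi-unit, stress, and coverage work described above.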
You can Pay Me Now.
Tags: Extreme Programming, XP, Test-First, Quality Assurance, QA, Brian Berliner, brianberliner