TCL, the UK independent software testing consultancy, has just been acquired by the Qualitest Group, which claims to be "the world's second largest pure-play testing services provider" - so, I presume, it tries harder.
As you might expect, it offers the usual QA and testing outsourcing: it gets you more staff for that big testing effort and provides affordable onshore testing services through local test centres. This is a very modern approach to testing, and if you think it sounds like the "get a lot of cheap monkeys in to test this stuff just before it goes live" approach, I think (from talking to Qualitest) that you'd be doing it an injustice. For a start, it offers help with test process improvement, and assistance with things like FDA certification and medical device validation, which are taken really seriously.
The biggest problem with testing, to my mind, is not doing it at all; or not doing it in a structured way, which (in effect) means the same thing. Some companies are impressed by running thousands of tests (often exploring the same equivalence class or boundary condition), which the system passes with flying colours. However, since a new system almost certainly has defects - errors - in it, wouldn't it be more impressive (and a better use of resources) to run thousands of tests that fail, each failing on a different defect - which you can then address? To maximise your chances of finding defects, with limited testing resources, you need to structure your testing carefully (ideally, so you find each defect once and once only).
The next testing issue is the fallacy that you can't test anything until you've built it all - at which point you can simply hire lots of people (preferably cheap people, off-shore) to run your test cases just before you go live. However, even assuming that your test cases are well-designed, well-structured and well-documented enough for cheap labour to run them and produce usable results (which is unlikely, if you are taking this approach), what happens if you find defects? Have you allowed enough time, at the end of development, to fix them, or will they simply make system delivery late? Suppose you find a fundamental error which means expensively recoding (or, worse, redesigning) a significant part of the system? Testing is most cost-effective when done early, and the testing team should be involved from the start of design or even earlier. If what the business is asking for is basically untestable - "the system should be responsive and deal with all the customer demands we'll get", for example - it's best to get what the stakeholders actually need better defined before you start off down the wrong technical path.
If you partner with an organisation like Qualitest, which understands the testing process, it'll help you avoid these, and other, issues - as long as you involve it early enough and as long as you listen to what it says. Qualitest believes in Results Based Testing, which can give you real, facts-based, confidence in the systems you deliver to the business. Results-based testing means that you pay Qualitest for satisfying your SLAs for Key Process Indicators that relate to your business and project goals for testing (such as, according to Qualitest, finding at least 95% of the bugs; ramping service up within a week or less; or scoring at least 8 out of 10 in a quarterly customer satisfaction survey).
More than that, using an external testing organisation may well result in better quality testing than you can do yourself anyway. Developers tend to be optimistic. They tend to believe that their systems work and that what "their" system actually does is probably what its users will want it to do (and if that isn't the case, the company should hire better-quality users). Good testers, on the other hand, tend to be cynical - they need a different mind-set from developers. They tend to enjoy breaking systems. They tend to ask the system to do things which make no sense to the developer. They tend to take a perfectly good system and find bugs in it! This can be unpopular, and testers can be accused of thinking negatively or, worse, of active disloyalty. After all, a new system will presumably deliver business benefits, and the pesky testers are simply delaying delivery of those benefits - depending on the maturity of the organisation, it may not want to employ people like that; but a testing organisation like Qualitest can and will.
An effective external testing organisation is largely immune to company politics and cultural immaturities (such as a "blame culture") and can get on with the job of finding defects and improving the testing process without fear of upsetting people. Its customer list (Sage, FujiFilm Medical Systems, Ministry HealthCare, Qualcomm and Philips Healthcare, to name just a few) suggests that Qualitest is quite successful at this.
So far so good, but my chat with Qualitest did make me think of some rather more subtle issues, which I'll look at in a research note. These don't invalidate the use of an external testing organisation like Qualitest but they do, I think, usefully explore the questions an organisation adopting such an approach should consider.