One of the key success factors for companies is their ability to measure performance. They build complex evaluation systems that let them track and anticipate market evolutions and internal trends.
The entertainment industry is no different. The most successful gaming companies run high-performing systems to track user behavior and to offer users the best experiences money (or gems) can buy. They use market segmentation, track clicks, track average revenues, build statistics on top of statistics and try to translate human behavior into tangible numbers.
For the last 15 years, since gaming became complex and profitable, testing companies, whether called QA, QC or something else, have been building their own tracking systems.
Skipping the era when programmers ran their own tests and bugs were written down on paper, we arrive at the point when teams started using simple databases to store and share bugs. Back then, the defender of quality was a skilled tester who explained the test principles to others, translating or filing the bugs submitted by colleagues.
As structures grew, this responsibility for quality was passed to Lead Testers, QA Managers, Coordinators and Vice Presidents of QA, and shaped by sales people. All of them created their own tracking systems. The more people involved, the more complex the tracking systems became. The greater the demand, the greater the involvement of business and sales personnel. The tracking systems were meant not only to be efficient but also to be appealing to customers, who had to be impressed by the quality of the QA services.
I have met several models of systems that tried to measure and guarantee quality.
One of the best-known models is Benchmarking. It is a complex model built on a simple idea: choose a similar product from the past to help estimate the workload, the QA timeline and, of course, the budget. The model gives the number of people needed at each stage, the number of bugs they need to submit and the build frequency, and it tries to minimize risk.
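The core of the model can be sketched in a few lines. This is a minimal, hypothetical illustration, not a real QA tool: the function name, the linear scaling rule and the reference figures are all assumptions made up for the example.

```python
# A minimal sketch of benchmark-based estimation: scale a past
# "similar" product's QA figures linearly by the relative scope of
# the new project. Everything here is a hypothetical illustration.

def estimate_from_benchmark(reference, size_ratio):
    """Scale the reference product's QA numbers by size_ratio,
    where size_ratio = new project scope / reference scope."""
    return {
        "testers_per_stage": [round(n * size_ratio)
                              for n in reference["testers_per_stage"]],
        "expected_bugs": round(reference["expected_bugs"] * size_ratio),
        "qa_weeks": round(reference["qa_weeks"] * size_ratio),
    }

# Figures from a hypothetical past title used as the benchmark.
past_title = {
    "testers_per_stage": [2, 6, 10, 4],  # e.g. pre-alpha .. post-release
    "expected_bugs": 1800,
    "qa_weeks": 20,
}

# Plan a new project estimated at 1.5x the scope of the benchmark.
plan = estimate_from_benchmark(past_title, size_ratio=1.5)
```

The linear scaling is exactly where the model is fragile: it silently assumes the new project behaves like the old one, which is the flaw discussed next.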
The biggest flaws of this model are exactly the risks it tries to prevent: the unpredictability of the development process and shifting user expectations. For the model to work, you need a sequel with the exact same engine, the exact same design specifications and game structure, the exact same development team and, of course, the exact same QA team. If you have all of these, the model may be accurate, but your product is highly likely to be a flop. Things change, people change.
The Benchmarking model contains a component from which other, much simpler models emerged: models based on the number of bugs found by certain deadlines. These models rest everything on one simple KPI: the number of bugs.
When you choose a KPI based on the number of bugs, you start to have real issues. QA Managers impose bug quotas on teams, which means a quota for each individual. Some go even further and run competitions based on bug count: top bugger, best employee, buggy man, and so on.
The testers have a daily bug target and adapt fast: they enter every minor issue in the database and are appreciated for it. The database starts to match the target and the model looks on track. But the database fills up with Level Art, Level Design and Sound bugs, and testers do not take the time to investigate the more complex bugs that are the real game breakers for the wider community.
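The distortion is easy to show with numbers. Below is a minimal sketch contrasting a raw bug-count KPI with a severity-weighted one; the severity labels and weights are hypothetical assumptions chosen for the example, not figures from any real tracking system.

```python
# A minimal sketch of why a raw bug-count KPI misleads. The severity
# categories and their weights are hypothetical illustrations.

SEVERITY_WEIGHT = {"minor": 1, "major": 5, "critical": 20}

def raw_count(bugs):
    """The KPI criticized above: every bug counts the same."""
    return len(bugs)

def weighted_count(bugs):
    """A severity-weighted alternative: game breakers dominate."""
    return sum(SEVERITY_WEIGHT[severity] for severity in bugs)

# Tester A logs many easy cosmetic issues to hit the daily target;
# Tester B spends the day isolating one crash and one blocker.
tester_a = ["minor"] * 12
tester_b = ["critical", "major"]

# By raw count, A "wins" 12 to 2; by weighted value, B wins 25 to 12.
```

Under the raw KPI the rational tester behaves like Tester A, which is exactly how the database fills with cosmetic bugs while the game breakers go uninvestigated.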
The game is released, users are upset, the development team patches, tests are run, users leave the communities, money is lost.
When you break down all the testing models, some can be used up to a certain level and some exist purely for commercial use, but all of them have one thing in common: they all rely on the work done by this little guy called "the tester".
All models and analyses rely on the tester's ability to detect, analyze, report and track bugs; it is the tester who makes the difference between a happy customer and a flop.
If each tester on the team is aligned with the project's objectives and performs within the team, then the QA team as a whole performs. The test plan is followed, test cases are completed, the entire project is checked, bugs are correctly reported and there is a good overview of the project's development. Models can then be built, and they are relevant and helpful.
But a tester's alignment is mostly shaped by the inputs he receives at every moment of his work: from colleagues, managers, developers and the company culture. The tester needs to feel appreciated for the work he does, respected for his added value, helped when he has issues and guided to achieve more. All of this requires time and leadership at every level of the company.
I have met companies that had the most sophisticated and attractive sales packages but neglected or mistreated their people, considering them "disposable resources". While the companies' numbers and models looked sharp, the working conditions were terrible and the low quality was papered over with very detailed procedures. Those procedures worked up to a point: the companies won the contracts and started the process smoothly. But the day-to-day work was terrible. Visiting them only confirmed the above, and the contracts ended.
A wise man once told me that the only resource you cannot buy is time, and I agreed. You cannot buy time. I agreed because I also consider people not resources but values. And value is created over time, with effort, by doing things right.
The right thing for us is to create quality, and quality is all about the people.