Friday, March 25, 2005

Embedded QA

I came across a great article that covers an entire point I forgot to talk about: embedded QA.

As I mentioned at GDC, "thinking Agile" is really hard to adjust to. Our migration of QA is a perfect example of how we struggle with it.

Traditional QA is ramped up post-Alpha, in classic waterfall fashion, to rigorously test the game once all the features are implemented. We've all seen the problems in games where Alpha is the first real chance we have to play the game in a form near its final state. That first real look often raises the desire to change things even more. This is an example of why discovering product value at the end of the project is bad.

Additionally, we underused QA. Good QA people are either frustrated that their insights into a game aren't valued ("it's too late to change the game based on your suggestions"), or their role is so undervalued in some companies that people are literally hired off the street to fill it.

We had been doing Scrum for about six months before it even occurred to us to reconsider QA. Scrum is about mixing things up. The word comes from rugby, where the team moves the ball as a group and doesn't divide itself into roles as strictly as teams in other sports. So why were we separating QA into its own department that doesn't Scrum with us, when our monthly goal was to produce a vertical and complete slice of the game? I can be pretty slow at times, but I eventually "get it".

So we started embedding individuals from QA into the Scrum team. They sat in the same bullpen as the rest of the team. They got their hands a lot more dirty. Some of them could do a bit of coding, others layout, and others took on part-time associate producer roles, but their core job was to exercise functionality as it came online. It was a huge success.

Our goal with Scrum is to eliminate separate Alpha/Beta phases and QA departments (although we may need a "stabilization Sprint" thrown in to have proper releases). Darkwatch had a rather extended alpha-beta period (having to go through a management buy-out of the studio and shop the game around didn't help), so I can't claim we're there yet. Some of our largest remaining problems are:
  • Interteam build stability: how to prevent separate teams from harming each other.
  • Interteam dependencies: how teams can better support each other.
  • Zero-bug tolerance.
  • Improved automated testing: we implemented this very late in the project.
  • Improved handling of bad data in a data-driven engine.
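On that last point, here is a minimal Python sketch of what defensive handling of bad data in a data-driven engine can look like: validate each record as it loads and fail fast, rather than letting a bad value corrupt the engine at runtime. The record format and field names below are invented for illustration.

```python
# Hypothetical sketch: defensive loading of a data-driven game record.
# The "weapon" schema here is illustrative, not from the post.

def load_weapon(record: dict) -> dict:
    """Validate one data-driven weapon record, rejecting bad data early."""
    required = {"name": str, "damage": int, "range": float}
    for field, expected_type in required.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], expected_type):
            raise ValueError(f"bad type for {field}: "
                             f"{type(record[field]).__name__}")
    if record["damage"] < 0:
        raise ValueError("damage must be non-negative")
    return record

# A good record loads; a bad one fails fast with a clear error.
good = load_weapon({"name": "pistol", "damage": 10, "range": 25.0})
try:
    load_weapon({"name": "rifle", "damage": -5, "range": 100.0})
except ValueError as err:
    print(err)  # damage must be non-negative
```

Catching the bad record at load time turns a vague downstream crash into a testable, reportable failure, which is exactly the kind of check embedded QA can help automate.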
More topics to talk about...


Paul Mendoza said...

I haven't worked much on game development, but when you say that QA is part of the process the whole way through, do you mean during the engine development phase, or mainly when it comes to the gameplay aspects of the game?

Clinton Keith said...

All the way through. We wouldn't work on an engine unless we were able to demonstrate it working every iteration. If you don't have QA involved at every step, then you are saying it's OK to leave bugs in the system as future technical debt.

Anonymous said...

I know this post is 3 years old, but you never know who will come through here in the future, so here are my comments.

I totally agree with everything from the original article and Clint's post. QA needs to be in on things from the beginning. QA should be there when requirements are set, for a number of reasons.

First, in a lot of places QA represents the end users; they are the voice of the user on the team. Marketing and product managers often try to articulate the voice of the customer, but a lot of the time those customers don't actually see the product until beta test.

Second, there may be little to no documentation, and this could be QA's only chance to see what the requirements are. How can you test a game when you don't know what it is supposed to do? It is much easier to do your job if you were there when the requirements were set and the design decided on.

Third, you need to start writing test cases. This way you will know everything you are going to test ahead of time. It is easy to write test cases before the code is written; as the article Clint mentions says, all you have to do is "translate user stories into acceptance tests."

If the user story is 'The user can save the game' or 'There will be multiplayer,' you have two test cases right there. (OK, so I used easy examples, but you get the picture. Start with the basic story and dive deeper, e.g. 'you can save in the middle of the action,' 'you can save at the end of a level,' whatever.) The requirements will tell you what the test cases will be.
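To make the idea concrete, here is a minimal Python sketch of turning the 'The user can save the game' story into acceptance tests. The Game class and its save/load API are invented for illustration; a real project would exercise its own engine interface.

```python
# Hypothetical sketch: a user story translated into acceptance tests.
# The Game class below is a stand-in invented for this example.

class Game:
    """Minimal stand-in for a game with saveable progress."""
    def __init__(self):
        self.level = 1
        self._saved = None

    def save(self):
        self._saved = {"level": self.level}

    def load(self):
        if self._saved is None:
            raise RuntimeError("no saved game")
        self.level = self._saved["level"]

# Story: "The user can save the game."  Dive deeper into concrete cases:

def test_can_save_mid_game():
    game = Game()
    game.level = 3
    game.save()          # save in the middle of the action
    game.level = 1       # simulate losing progress / restarting
    game.load()
    assert game.level == 3

def test_load_without_save_fails_cleanly():
    game = Game()
    try:
        game.load()
        assert False, "expected an error when no save exists"
    except RuntimeError:
        pass

test_can_save_mid_game()
test_load_without_save_fails_cleanly()
```

Each deeper case ('save mid-action,' 'no save exists yet') becomes its own small test, which is exactly the decomposition the story-to-test translation produces.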

If you review the test cases, you can find missing assumptions in the requirements and/or design. Product Manager: 'Where's your test case for saving the game?' QA: 'What do you mean? There's no requirement to save.' Product Manager: 'Well, we need to add that, then.' Better to find this out right away than in testing just before the product is supposed to be released.

While there may not be much (or any) internal documentation, there usually is some sort of end user documentation (written or in product). QA can always volunteer for the role of doc reviewer.

Note that none of these steps require working code to implement. Do these things now, before code is written, because you're not going to get this time back down the road.


Anonymous said...

Hi there, I am a QA Lead trying to implement this sort of strategy on a new game currently in early preproduction. In terms of automated tests and test cases, how would you manage that in preproduction vs. production?


Clinton Keith said...

Hi Anon,

I'd think about building up the hooks and architecture more in preproduction, and shift toward full test passage in production.

Hope that helps.