Quality Is Undead

QR Skull

Though many give credence to the sentiment that quality is dead, testers linger like ghosts with unfinished business who cannot move on from this plane. There is never as much time for testing as we would like and some of our bug reports are regarded with as much skepticism as messages from beyond the grave. We tend to haunt the developers with questions and carefully couched criticisms, and I daresay that some programmers might like to call for an exorcism.

We may think of ourselves as shuffling zombies toward the end of a long release cycle, when testing time is often sacrificed for feature implementation and we can end up working long hours, though being thrown under the bus may not be the worst way to go. Those carefully scrutinizing our exceptional strength to endure the inevitable crunch time, our keen powers of observation in uncovering bugs, and our cold appraisal of the software will realize our true nature: surely we must be vampires instead. They may start stocking up on stakes and garlic, peering into our cubes looking for the coffins where we sleep. So far, I have no reports of testers biting co-workers, however tempting at times. After all, wouldn’t we want to pass on our dark gifts to our teammates, test-infecting them to become more like us?

Testing wants Braaaains

At STARWest 2011, James Whittaker and Jeff Payne heralded a dark future for testers without automation scripting skills. While I welcome the increasing testing prowess of software developers, their testing focus is on automation, whether at the unit level or something more closely approximating an end user’s interaction. I have started thinking of my automation test suite as my zombie horde: persistent but plodding on unthinking, repeatedly running into obstacles, requiring tending. It really wants some brains to interpret the results, maintain its eccentricities, and perhaps play some games in the backyard shed. As Michael Bolton stated at CAST 2011, balancing lots of automated checks against human-observed testing is one of the hard problems of testing: “A computer executing automated tests only makes one kind of observation; it lacks human judgement.”

Even these fast zombies are not a replacement for the thinking mind of a tester, though how to think about testing is a question of much debate. Testers from different schools seem to regard one another with a bit of hostility. Each successive school of thought seems to bury the preceding one with great ceremony, including the departed’s whole entourage for the journey to the afterlife. Those thus interred seem like mummies, desiring a terrible vengeance on the ones disturbing their eternal rest or even grave-robbing their ideas. At CAST 2011, Michael Bolton encouraged us to take a more measured tone with people who disagree with us, referencing Cem Kaner’s sentiment that “reasonable people can disagree reasonably.”

Memento Mori

With the Day of the Dead celebration occurring this week, it seems fitting to ponder our own demise as testers. Those celebrating this holiday want to encourage visits by the dead, so the living can talk to them, remembering funny events and anecdotes about the departed, perhaps even dressing up as them. Some create short poems, called calaveras (“skulls”), as mocking epitaphs of friends, describing interesting habits and attitudes or funny anecdotes.

So here is a calavera for me:

Here lies our dear departed Claire Moss
We miss her because she could test like a boss.
When a defect appeared before her eyes,
Her lilting voice would intone “Hey guys…?”
At best, she was a code monkey,
but her sense of humor was always funky.
Though she claimed her heart was in her test,
we always knew she loved us best.

I encourage you, the living, to visit me with events, anecdotes, or funny poems. Whatever undead creature your tester persona most identifies with, keep on pursuing excellence and have a happy Halloween!

Quality Stands Out

My fishie

One of the nice things about going to a science-fiction convention is that you blend into the crowd in your obscure-reference costume. You can go about your nerdy business without anyone stopping you every 5 seconds to ask for a photo op. At Dragon*Con this year, I spent much more time wandering the halls to take in the experience than following the programmed tracks of activities. One advantage of this was the premium people-watching. Some people who are passionate costumers never appear at any of the costuming panels or track sessions. Their costumes might not even fit through the doors of the track’s room!

When you’re wandering around strangely attired in public with your 40,000 closest friends, you will inevitably encounter someone else costumed as the same character. There is a moment of recognition that offers the chance for geeky high-fives and kudos for sharing your interest. The one problem with meeting geeks who get the references is they know their subject matter deeply and can spot inaccuracies in your garb. If you are attempting to replicate an iconic image of a character, they’ll spot deviations immediately. This reminds me of something Mike Lee said in his Making Apps That Don’t Suck talk: “There’s a good chance what you think is wrong with the product, no one else notices or cares about. … Your users are probably not nerds, unless you make software for people who make software and then only God can help you.” When faced with fanboys, you cannot slack off.

On the flip side, when your costume is high quality, people may not care about recognizing the nerdy reference and stop you every 5 seconds just to admire your workmanship. The design is so well-executed or intricate that they don’t care about the subject matter and just want to stare.

“If you want to be remembered, be memorable. If you want to stand out in the crowd, it helps to come up with something other than just looking like everybody else.” — Mike Lee

The real geek gold is in a high-quality obscure-reference ensemble that gets you both kinds of attention. [And if you can actually work popular culture into this mix, you’re golden.]

Big Fish in a Big Pond

I attacked the costuming problem in the same way I attack my testing: with the goal of having the best execution. I know that the users of my software are the nerds of their genre (niche market), much more intricately familiar with the nuances of their business than I. I know that missteps in the vital functions will not go unnoticed or unreported. The software must satisfy the production quality its highly specialized market demands. For niche markets, “the final product quality … is associated more with the specific needs that the product is aimed at satisfy[ing].” (Wikipedia) I studied my source material, in this case Neil Gaiman’s Sandman graphic novels, and noted all the little tell-tale character attributes that must be preserved to be faithful to the design, or in this case the many designs.

However, I know that a faithful reproduction is not what I want to deliver. I want an unexpected element in my ensemble that would transform a good idea into a great one. I was tempted to purchase a fish balloon and carry that around the con, but I was much happier when I discovered a navigable fish blimp as the perfect accessory for my Delirium. Similarly, knowing what people (human oracles) say they want in their software is only the first step in satisfying their needs, so we cannot limit our testing to only the scenarios they state they want to execute but instead we must explore beyond the known. We can be advance scouts reporting back the plausibility of satisfying those unstated needs. “The essential value of any test case lies in its ability to provide information (i.e. to reduce uncertainty).” – Cem Kaner & James Bach

Then we can take a shot at that surprise and delight that Mike Lee advocates and really wow the crowd.

Taking on Water

Recently, I have been struggling with attacking a backlog of automation test cases. I took a much needed break to spend the weekend scrapbooking with a friend. We drove out of town to attend a crop, or gathering of scrapbook hobbyists for those not in the scrapbooking scene. I certainly wasn’t the most experienced, with some scrappers having been scrapbooking for more than 15 years, but I wasn’t the most novice either since one attendee had never made a page before. I met some new friends, ate too much, made some lovely art involving photographs, and learned something useful.

The scrapbooking consultant was pleased to have some newbies attending a crop for the first time. She shared this advice with us: start scrapping your most recent photos. When we started to protest, she assured us that we would have the most energy to attack this problem rather than trying to start at the bottom of the stack of photographs that we have accumulated for years and years. Then, we would feel encouraged to continue with the project of picking up older images to stylize in our layouts.

This suggestion appealed to me since I found the most enthusiasm for a recent trip I had taken to The Wizarding World of Harry Potter. I felt less concern about leaving earlier photographs to languish in their boxes and ended up producing much more in the time I had available. Since my friend was working on the same subject matter, we encouraged each other and even collaborated on some great design ideas.

Prevent a Bail Out

Now that I am back from this refreshing play and getting down to business at work, I find that this lesson resonates with my automation work as well. The most stale test cases are much less appealing and much less fresh in the minds of the developers who collaborate with me on the automation project. In addition, we have an opportunity to make this new code more testable and more automatable rather than having to work around some part of the existing code base that wasn’t written with this end in mind. The automation code becomes more maintainable. The real win is to stop the flow of automation opportunities straight into the backlog that we then have to bail out later, effectively plugging the leak. When we approach the stories in the current sprint as automation candidates, we know that we may have some rework in the future, but that is part of writing code, whether in a product or a meta-product like automation.

ET, Phone Home!

Although I am no longer the newest recruit on my employer’s Quality team, I am still something of an alien creature to the folks back at the mothership (i.e. home office). However, I have been slowly getting to know them through video conferencing, especially my fellow Quality team members. We have been experimenting with paired exploratory testing, but in my case we cranked it up a notch to *remote* paired exploratory testing. (You know testers don’t like to keep it simple, right?) This added an interesting layer of exploration to an already exploratory experience. (This meta goes out to you, Jace and Will S.) Now, each member of the team has a Skype account, establishing a common medium for communication, and we are learning the basics together. While we contended with screen repaint, we were forced to discuss the products more in depth to make use of the lag time and to give some context for each newly displayed page. This also gave us a chance to discuss the testing process, the collaborative online Quality space, our documentation strategy, and a bit of product history. Oh yeah, and we did some testing.

Since I’m still a newbie, I pretty much expect to feel a bit lost in the woods when it comes to the rest of the company’s product suite. Paired exploratory testing (or ET for the testing aficionados among you) gave me a peek into the Daxko-verse. My fellow testers know the lay of the land and so are better positioned to provide test ideas inspired by the suite’s world as we know it – soon to be rocked by my team’s product! In return, I got to ask the naive questions about what we were looking at, what terminology meant, and how it all fits together. Sometimes, having a second set of eyes isn’t enough. You need someone to ask the dumb questions. Stand back, people, I am a professional at this.

Paired ET fosters the Agile Principles:
1. Continuous Feedback
2. Direct Communication
3. Simplicity
4. Responding to Change
5. Enjoyment

We are still working out how to run the sessions. Does the person on the product team pilot or co-pilot the session? Or do we take this rare opportunity to do some concurrent exploratory testing? How long do we test together? Do we test both products back-to-back or does that just leave us yearning for caffeine and a stretch break? Personally, I am loving this. It’s so much fun to play with the new and novel, and I hope that this livens up the regression routine for my home office folks. If nothing else, it is a great opportunity to geek out about testing methodology and learn a bit about what works in our context.

The best parts:
•Finding bugs!
•Communication
•Knowledge sharing

Can’t wait to get into it again this afternoon.

Addendum: Now that we have completed the initial experiment in the vacuum of ignorance, I am free to research other approaches to paired exploratory testing. I paid particular attention to Agile testing as a new mindset that encourages transferring testing skills to other team members so that the whole team shares responsibility for testing.

Read more from Lisa Crispin, Janet Gregory, Brian Marick, Cem Kaner, and James Bach

Composition

Programmers are an obvious choice for members of a software team. However, various points of view attribute different value to the other potential roles.

Matt Heusser‘s “How to Speak to an Agilista (if you absolutely must)” Lightning Talk from CAST 2011 referred to Agilista programmers who rejected the notion that testers are necessary. Matt elaborates that extreme programming began back in 1999, when testing as a field was not as mature, so these developers spoke of only two primary roles, customer and programmer, making an allowance that the “team may include testers, who help the Customer define the customer acceptance tests. … The best teams have no specialists, only general contributors with special skills.”

In general, Agile approaches teams as composed of members who “normally take responsibility for tasks that deliver the functionality an iteration requires. They decide individually how to meet an iteration’s requirements.” The Agile variation Scrum suggests teams of “people with cross-functional skills who do the actual work (analyse, design, develop, test, technical communication, document, etc.). It is recommended that the Team be self-organizing and self-led, but often work with some form of project or team management.” At the time, this prevailing view that the whole team should own quality diminished the significance of a distinct testing role.

Now that testing has grown, Matt encourages us to revisit this relationship to find ways to engage world-changers and their methodologies to achieve a better result. Privately questioning the value of the activities that testers contribute is more constructive than head-on public confrontation. Part of the remedy is for testers to acknowledge Agile programmers’ awesomeness and their specific skills. This produces a more favorable environment in which to discuss the need for testing, which Agilists do see, though they want to automate it.
Matt advocates observing what works in cross-functional project teams to determine patterns that work in a particular context, engaging our critical thinking skills to develop processes that support better testing. As a process naturalist, Matt reminds us that “[software teams in the wild] may not fit a particular ideology well, but they are important, because by noticing them we can design a software system to have less loss and more self-correction.”

Having not experienced this rejection by Agile zealots for myself despite working only in Agile-friendly software shops, I have struggled with commenting on his suggestions. I decided to dig a bit deeper to find some classic perspectives on roles within software teams so that I can put the XP and Agile assertions into perspective.

One such analogy for the software team is Harlan Mills’ Chief Programmer Team as “a surgical team … one does the cutting and the others give him every support that will enhance his effectiveness and productivity.” In this model, “Few minds are involved in design and construction, yet many hands are brought to bear. … the system is the product of one mind – or at most two, acting uno animo,” referring to the surgeon and copilot roles, which are respectively the primary and secondary programmer. However, this model recognizes a tester as necessary, stating “the specialization of the remainder of the team is the key to its efficiency, for it permits a radically simpler communication pattern among the members,” referring to centralized communication between the surgeon and all other team members, including the tester, administrator, editor, secretaries, program clerk, toolsmith, and language lawyer for a team size of up to ten. This large support staff presupposes that “the Chief Programmer has to be more productive than everyone else on the team put together … in the rare case in which you have a near genius on your staff–one that is dramatically more productive than the average programmer on your staff.”

Acknowledging the earlier paradigm of the CPT, the Pragmatic Programmers assert another well-known approach to team composition: “The [project’s] technical head sets the development philosophy and style, assigns responsibilities to teams, and arbitrates the inevitable ‘discussions’ between people. The technical head also looks constantly at the bigger picture, trying to find any unnecessary commonality between teams that could reduce the orthogonality of the overall effort. The administrative head, or project manager, schedules the resources that the teams need, monitors and reports on progress, and helps decide priorities in terms of business needs. [T]he administrative head might also act as a team’s ambassador when communicating with the outside world.” While these authors do not directly address the role of tester, they state that “Most developers hate testing. They tend to test gently, subconsciously knowing where the code will break and avoiding the weak spots. Pragmatic Programmers are different. We are driven to find our bugs now, so we don’t have to endure the shame of others finding our bugs later.” In contrast, traditional waterfall software teams have individuals “assigned roles based on their job function. You’ll find business analysts, architects, designers, programmers, testers, documenters, and the like. There is an implicit hierarchy here – the closer to the user you’re allowed, the more senior you are.” This juxtaposition sets up a competitive relationship between the roles rather than seeing them as striving toward the same goal.

A healthier model of cross-functional teams communicates that all of the necessary skills “are not easily found combined in one person: Test Know How, UX/Design CSS, Programming, and Product Development / Management.” This view advocates reducing communication overhead by involving all of the relevant perspectives within the team environment rather than segregating them by job function. Here, a tester role “works with product manager to determine acceptance tests, writes automatic acceptance tests, [executes] exploratory testing, helps developers with their tests, and keeps the testing focus.”

Finally, we arrive at my favorite description of a collaborative software team, found in Peopleware: “the musical ensemble would have been a happier metaphor for what we are trying to do in well-jelled work groups” since “a choir or glee club makes an almost perfect linkage between the success or failure of the individual and that of the group. (You’ll never have people congratulating you on singing your part perfectly while the choir as a whole sings off-key.)” When we think of our team as producing music together, we see that this group composed of disparate parts must together be responsible for the quality of the result, allowing for a separate testing role but not reserving a testing perspective to that one individual. All team members must pursue a quality outcome, rather than only the customer and the programmer, as Agile purists would have it. One aspect of this committed team is willingness to contend respectfully with one another, for we would readily ignore others whose perspectives had no value. Yet, when we see that all of our striving contributes to the good of the whole, the struggle toward understanding and consensus encourages us to embrace even the brief discomfort of disagreement.

Talent Scout

At CAST this year, Michael Larsen gave a talk about testing team development lessons learned from the Boy Scouts of America. I have some familiarity with the organization since my kid brother was once a boy scout, my college boyfriend was an Eagle scout, a close family friend is heavily involved in scouts, and I anticipate my husband and both of my sons will “join up” as soon as the boys are old enough. I just might be a future Den Mother.

However, when I was growing up, I joined the Girl Scouts of America through my church. We didn’t have the same models of team development, but we had some guiding principles underpinning our troop:

The Girl Scout Promise
On my honor, I will try:
To serve God and my country,
To help people at all times,
And to live by the Girl Scout Law.

The Girl Scout Law
I will do my best to be
honest and fair,
friendly and helpful,
considerate and caring,
courageous and strong, and
responsible for what I say and do,
and to
respect myself and others,
respect authority,
use resources wisely,
make the world a better place, and
be a sister to every Girl Scout.

If we as testers live up to these principles of serving, helping, and living honesty, fairness, and respect in our professional relationships, we can become the talented leaders that Michael encourages us to be:

CAST 2011 Emerging Topics: Michael Larsen “Beyond Be Prepared: What can scouting teach testing?”
From Boy Scouts: watching how people learn and how people form into groups
1960s model for team development

Team stages:

  • Forming: Arrows pointing in different directions; group comes together and they figure out how they’re going to do things
  • Storming: Arrows in direct opposition to one another
  • Norming: Arrows beginning to go in the same direction; figure out what our goal is
  • Performing: Most arrows in the same direction/aligned; objectives clear and we go for it

EDGE

  • Explain – during the forming stage, the leadership role that you take: telling people what they need to know and learn (dictatorship)
  • Demonstrate – show how to do what needs to be done, make it familiar
  • Guide – answer questions but let them do the majority of the hands-on
  • Enable – leader steps out of the way, let the person go their way, “I trust you”

Movies such as Remember the Titans and October Sky demonstrate this process.
Failure to “pivot” can prevent someone from moving through the continuum!
Without daily practice, skills can be lost or forgotten, so may need to drop to a lower stage for review.

After Michael’s lightning talk, members of the audience brought these questions for his consideration:

Q: Is this a model? Or does every team actually go through these steps?
Duration of the steps varies; some may be very brief.
Unlikely to immediately hit the ground running.

Q: What about getting stuck in the Storming mode?
Figure out who can demonstrate. If you don’t like the demonstrated idea, toss it. Just get it out there!

Q: How does this model work when one person leaves and another comes in?
Definitely affects the group, relative to the experience of the person who joins.
May not revert the group back to the very beginning of the process.
Team rallies and works to bring the new person up to the team’s level.
New member may bring totally fresh insights, so that person doesn’t necessarily fall in line.

Q: What happens when a group that is Norming gets a new leader?
Can bring the group down unless you demonstrate why you’re a good leader.
Get involved! Get your hands dirty!
Build the trust, then you can guide. Team will accept your guidance.

If this works on young squirrely kids, imagine how well this works on young squirrely developers … testers. – Michael Larsen

Spare the Rod

Paul Holland‘s interstitial Lightning Talk at CAST 2011 was a combination of gripe session, comic relief, and metrics wisdom. The audience in the Emerging Topics track proffered various metrics from their own testing careers for the assembled testers to informally evaluate.

Although I attended CAST remotely via the UStream link, I live-tweeted the Emerging Topics track sessions and was able to contribute my own metric for inclusion in the following list, thanks to the person monitoring Twitter for @AST_News:

  • number of bugs estimated to be found next week
  • ratio of bugs in production vs. number of releases
  • number of test cases onshore vs. offshore
  • percent of automated test cases
  • number of defects not linked to a test case
  • total number of test cases per feature
  • number of bug reports per tester
  • code coverage
  • path coverage
  • requirements coverage
  • time to reproduce bugs found in the field
  • number of people testing
  • equipment usage
  • percentage of pass/fail tests
  • number of open bugs
  • amount of money spent
  • number of test steps
  • number of hours testing
  • number of test cases executed
  • number of bugs found
  • number of important bugs
  • number of bugs found in the field
  • number of showstoppers
  • critical bugs per tester as a proportion of time spent testing

“Counting test cases is stupid … in every context I have come across” – Paul Holland

Paul mentioned that per tester or per feature metrics create animosity among testers on the same team or within the same organization. When confronted with a metric, I ask myself, “What would I do to optimize this measure?” If the metric motivates behavior that is counter-productive (e.g. intrateam competition) or misleading (i.e. measuring something irrelevant), then that metric has no value because it does not contribute to the goal of user value. Bad metrics lead to people in positions of power saying, “That’s not the behavior I was looking for!” To be valid, a metric must improve the way you test.

In one salient example, exceeding the number of showstopper bugs permitted in a release invokes stopping or exit criteria, halting the release process. Often, this number is an arbitrary selection that was made long before, perhaps by someone who may no longer be on staff, as Paul pointed out, and yet it prevents the greater goal of shipping the product. Would one critical bug above the limit warrant arresting a rollout months in the making?

Paul’s argument against these metrics resonated with my own experience and with the insight I gathered from attending Pat O’Toole’s Metrics that Motivate Behavior! [pdf] webinar back in June of this year:

“A good measurement system is not just a set of fancy tools that generate spiffy charts and reports. It should motivate a way of thinking and, more importantly, a way of behaving. It is also the basis of predicting and heightening the probability of achieving desired results, often by first predicting undesirable results thereby motivating actions to change predicted outcomes.”

Pat’s example of a metric that had no historical value and that instead focused completely on behavior modification introduced me to a different way of thinking about measurement. Do we care about the historical performance of a metric or do we care more about the behavior that metric motivates?

Another point of departure from today’s discussion is Pat’s prioritizing behavior over thinking. I think the context-driven people who spoke in the keynotes and in the Emerging Topics sessions would take issue with that.

Whoever spares the rod hates the child, / but whoever loves will apply discipline. – Proverbs 13:24, New American Bible, Revised Edition (NABRE)

My experience with metrics tells me that numbers accumulated over time are not necessarily evaluated at a high level but are more likely used as the basis for judging individual performance, becoming a rod of discipline rather than the protective rod of a shepherd defending his flock.

Paul did offer some suggestions for bringing metrics back to their productive role:

  • valid coverage metric that is not counting test cases
  • number of bugs found/open
  • expected coverage = progress vs. coverage

He also reinforced the perspective that the metric “100% of test cases that should be automated are automated” is acceptable as long as the overall percentage automated is low.

Metrics have recently become a particular interest of mine, but I have so much to learn about testing software that I do not expect to specialize in this topic. I welcome any suggestions for sources on the topic of helpful metrics in software testing.

I do not think it means what you think it means

When ubiquitous language isn’t

Definition:
ubiquitous = [Latin ubique everywhere; Latin ubi where] present, appearing, existing or being everywhere, especially at the same time; omnipresent; constantly encountered, widespread

For example, the passage of time is constantly encountered and occurring everywhere. We measure time in different increments, such as a year.

“What day is it? What year?” – Terminator Salvation movie

How do we define the term “year”?

1. Calendar year?
The Gregorian calendar is only one of many that have been used over time.
“There are only 14 different calendars when Easter Sunday is not involved. Each calendar is determined by the day of the week January 1 falls on and whether or not the year is a leap year. However, when Easter Sunday is included, there are 70 different calendars (two for each date of Easter).” – Wikipedia article

2. Fiscal year = financial year = budget year
This is a period used for calculating annual (“yearly”) financial statements in businesses and other organizations that “fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year” – Wikipedia article

3. Astronomical year numbering system = proleptic Gregorian calendar
This standard includes the year “0” and eliminates the need for any prefixes or suffixes by attributing the arithmetic sign to the date. This definition is used by MySQL, NASA, and non-Maya historians.

4. Billing year
Companies often expect you to sign a contract for service that may encompass the period of a year (e.g. signing a 2-year cell phone contract).

5. Year of your life/age
Count starts at zero and increments on your birthday.

6. Years of working for a company
Count starts at zero and increments on the anniversary of your hire date. This definition is often used to award benefits based on longevity (e.g. more vacation after “leveling up” having completed a given number of work years).

7. Religious year
For example, the Roman Catholic Church starts its liturgical year with the four weeks of Advent that precede Christmas. Other religious calendars include the Julian, Revised Julian, Hebrew (a.k.a. Jewish), Islamic (a.k.a. Muslim, Hijri), Hindu, Buddhist, and Bahá’í calendars.

8. National year
Some countries use nation-based calendars for internally organizing time (e.g. Chinese, Indian, Iranian/Persian, Ethiopian, Thai solar)
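To make the ambiguity concrete, here is a small sketch in Python (standard library only). The fiscal-year start month and the sample dates are my own assumptions for illustration, not anything prescribed above. It shows one date yielding three different “year” values under definitions 1, 2, and 5, and checks the 14-calendar claim from the Wikipedia quote in definition 1:

```python
import calendar
from datetime import date

def calendar_year(d: date) -> int:
    # Definition 1: the Gregorian calendar year
    return d.year

def fiscal_year(d: date, start_month: int = 10) -> int:
    # Definition 2: fiscal year; start_month=10 assumes a US-federal-style
    # fiscal year (FY N runs Oct 1 of year N-1 through Sep 30 of year N)
    return d.year + 1 if d.month >= start_month else d.year

def age_in_years(birth: date, on: date) -> int:
    # Definition 5: count starts at zero and increments on the birthday
    years = on.year - birth.year
    if (on.month, on.day) < (birth.month, birth.day):
        years -= 1
    return years

d = date(2011, 11, 2)                       # Day of the Dead, 2011
print(calendar_year(d))                     # 2011
print(fiscal_year(d))                       # 2012: already in the next fiscal year
print(age_in_years(date(1980, 12, 25), d))  # 30: birthday not yet reached

# The "14 different calendars": a non-Easter calendar is fixed by the weekday
# of January 1 plus leap-year status, and all 7 x 2 = 14 combinations occur
# within one 400-year Gregorian cycle.
configs = {(calendar.weekday(y, 1, 1), calendar.isleap(y))
           for y in range(2000, 2400)}
print(len(configs))                         # 14
```

The same instant answers “what year is it?” three different ways, which is exactly the kind of ambiguity a team’s ubiquitous language has to pin down.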

When we cannot even speak clearly about a familiar term like “year,” it should be no surprise that we have difficulty communicating on computing projects with cross-functional teams composed of individuals with different professional backgrounds. Each of us masters the jargon of our field as we encounter it, leaving us with gaping holes in our knowledge about domain-specific concepts that are essential when implementing a project.

Ubiquitous Language is “a language structured around the domain model and used by all team members to connect all the activities of the team with the software.”
Domain-Driven Design by Eric Evans

In order to succeed in designing and implementing good software, we must be willing to revisit our assumptions about terminology, avoiding the “inconceivable” situation in which two team members in a discussion are using the same word to represent different ideas. In practical terms, that means asking dumb questions like “What do you mean by that?” even when the answer appears to be obvious, or we risk replaying the old story of the blind men and an elephant.

Once we have a firm grounding in a context-specific set of words to use when speaking about the work, we can proceed, knowing full well that we will find ourselves in the same position of confusion later in the project as we iterate through modeling parts of the system again and again. Thus, we must remain vigilant for statements with multiple interpretations. In addition, Evans reminds us that a domain expert must understand this ubiquitous language so that it guides us to design a result that ultimately satisfies a business need.

Testers must consciously use the agreed upon expressions in our test ideas, test plans, test cases, and any other record of testing, whether planned or executed. Consistent usage is key in both explaining the testing approach to a new recruit and in maintaining the history of the project, including the testing effort.


Help Wanted, Apply Within

Apply Within

“I know he can get the job, but can he do the job?” — Mr. Waturi, Joe Versus the Volcano

In one of my favorite movies of all time, the protagonist Joe struggles daily through a truly dead-end job while his boss talks constantly on the phone to the unseen character Harry about hiring concerns. Hiring and retaining the right people worry even this manager, whose employees accomplish only simplistic tasks.

At the suggestion of a couple of programmer friends, I recently finished reading Peopleware by DeMarco and Lister, which also focuses on the right person for the job from a management perspective. However, the authors advocate a different approach from that of the micro-managing and oppressive Mr. Waturi: get the right people, make them happy so they won’t leave, and turn them loose. But how do managers know the right person when they see him or her?

Employee characteristics the authors emphasize:

  • intelligent, making thoughtful value judgments
  • creative
  • willing to accept responsibility
  • deeply involved in the outcome
  • energetic and enthusiastic, hellbent for success
  • trustworthy and ethical
  • protective of the well-being of the psychological self
  • dedicated to the best quality the individual can produce
  • a strong believer in the rightness of the product
  • loyal to positive environments
  • a learner, improving skills over time and gaining the proficiency to handle higher risk
  • internally motivated
  • possessing the proper mix of perspective and maturity
  • building safety, bonding rather than pretense, and forming healthy and satisfying communities
  • coaching peers
  • involved in process improvement
  • replacing chaos with order

Mr. Waturi never acknowledges Joe’s competence or gives him any autonomy but continues to hold him responsible for duties that Waturi himself prevents him from executing. As a result, Joe feels essentially no involvement in the outcome of his work, and his existing depression from post-traumatic stress only worsens.

A neurochemistry doctoral student friend recommended the book Flow, in which Mihaly Csikszentmihalyi tells us that “we must constantly reevaluate what we do, lest habits and past wisdom blind us to new possibilities” and “enjoyment depends on increasing complexity … the discovery of new challenges … the development of new skills.”

Joe simply accepts that his life will continue plodding on its weary routine way until he receives startling news that changes his life. Joe’s internal motivation is so weak that only catastrophic circumstances galvanize him into action. Csikszentmihalyi describes people in similar circumstances whose “vision to perceive challenging opportunities for action” transforms their mundane work into satisfying careers.

In her closing keynote presentation for STAREast 2011, Julie Gardiner encouraged us to take courageous action:

  1. Turn your job into a passion
  2. Retain your integrity
  3. Take your career seriously

But what does all this mean for us as software testers? In order to do our jobs most effectively, we must take these management concerns and internalize them, pushing ourselves and our companies to improve. We must take ownership of the quality of our products and encourage our non-QA team members to embrace it as well, taking pride in our skillful workmanship.

In an environment like this, we are most free to push the software to its limits. We can focus our creativity, intelligence, and judgment on the work at hand to produce better outcomes, such as more effective test cases and fewer bugs in production. We can be gentle catalysts and change agents rather than the “jolt” that Naomi Karten described in her STAREast keynote elaboration of Virginia Satir’s family therapy model.

It all comes down to self-discipline. We must first apply our intense concentration to the weaknesses within ourselves to build self-regard that strengthens us to deal with risk in a professional setting and ultimately achieve success.

Although Joe never learns to find satisfaction in his work, the movie closes with Joe hoping for a better future. We too can hope for a better future as we pursue a real-world self-transformation as the “Believers but Questioners” that DeMarco and Lister encourage us to be.
