
aclairefication

~ using my evil powers for good

Category Archives: CAST 2011

Quality Is Undead

31 Monday Oct 2011

Posted by claire in Automation, CAST 2011, STARWest 2011, Testing Humor

≈ 6 Comments

QR Skull

Though many give credence to the sentiment that quality is dead, testers linger like ghosts with unfinished business who cannot move on from this plane. There is never as much time for testing as we would like and some of our bug reports are regarded with as much skepticism as messages from beyond the grave. We tend to haunt the developers with questions and carefully couched criticisms, and I daresay that some programmers might like to call for an exorcism.

We may think of ourselves as shuffling zombies toward the end of a long release cycle, when testing time is often sacrificed for feature implementation and we can end up working long hours, though being thrown under the bus may not be the worst way to go. Those carefully scrutinizing our exceptional strength to endure the inevitable crunch time, our keen powers of observation in uncovering bugs, and our cold appraisal of the software will realize our true nature and may start stocking up on stakes and garlic, peering into our cubes looking for the coffins where we sleep, since we must surely be vampires instead. So far, I have no reports of testers biting co-workers, however tempting at times. After all, wouldn’t we want to pass on our dark gifts to our teammates, test-infecting them to become more like us?

Testing wants Braaaains

At STARWest 2011, James Whittaker and Jeff Payne heralded a dark future for testers without automation scripting skills. While I welcome the increasing testing prowess of software developers, their testing focus is on automation, whether at the unit level or something more closely approximating an end user’s interaction. I have started thinking of my automation test suite as my zombie horde: persistent but plodding and unthinking, repeatedly running into obstacles, requiring tending. It really wants some brains to interpret the results, maintain its eccentricities, and perhaps play some games in the backyard shed. As Michael Bolton stated at CAST 2011, balancing lots of automated checks against human-observed testing is one of the hard problems of testing. “A computer executing automated tests only makes one kind of observation, it lacks human judgement.”
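To make that one-observation limitation concrete, here is a minimal sketch of the kind of check my zombie horde runs; the page title, expected value, and function name are hypothetical, invented only for illustration:

```python
# Hypothetical automated check: it makes exactly one pre-programmed observation.
def check_login_redirect(page_title: str) -> bool:
    """Pass only if the post-login page title matches the expected string."""
    expected = "Dashboard - ExampleApp"  # assumed value, not from any real product
    return page_title == expected

if __name__ == "__main__":
    # In practice this value would come from a UI driver; here it is hard-coded.
    observed_title = "Dashboard - ExampleApp"
    print("PASS" if check_login_redirect(observed_title) else "FAIL")
    # The check says nothing about layout glitches, slow rendering, or odd text
    # elsewhere on the page -- observations a human tester makes without being told to.
```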

Even these fast zombies are not a replacement for the thinking mind of a tester, though how to think about testing is a question of much debate. Testers from different schools seem to regard one another with a bit of hostility. Each successive school of thought seems to bury the preceding one with great ceremony, including the departed’s whole entourage for the journey to the afterlife. Those thus interred seem like mummies, desiring a terrible vengeance on the ones disturbing their eternal rest or even grave-robbing their ideas. At CAST 2011, Michael Bolton encouraged us to take a more measured tone with people who disagree with us, referencing Cem Kaner’s sentiment that “reasonable people can disagree reasonably.”

Memento Mori

With the Day of the Dead celebration occurring this week, it seems fitting to ponder our own demise as testers. Those celebrating this holiday want to encourage visits by the dead, so the living can talk to them, remembering funny events and anecdotes about the departed, perhaps even dressing up as them. Some create short poems, called calaveras (“skulls”), as mocking epitaphs of friends, describing interesting habits and attitudes or funny anecdotes.

So here is a calavera for me:

Here lies our dear departed Claire Moss
We miss her because she could test like a boss.
When a defect appeared before her eyes,
Her lilting voice would intone “Hey guys…?”
At best, she was a code monkey,
but her sense of humor was always funky.
Though she claimed her heart was in her test,
we always knew she loved us best.

I encourage you, the living, to visit me with events, anecdotes, or funny poems. Whatever undead creature your tester persona most identifies with, keep on pursuing excellence and have a happy Halloween!

Composition

25 Thursday Aug 2011

Posted by claire in CAST 2011, Context, Soft Skills

≈ Leave a Comment

Composition

Programmers are an obvious choice for members of a software team. However, various points of view attribute different value to the other potential roles.

Matt Heusser‘s “How to Speak to an Agilista (if you absolutely must)” Lightning Talk from CAST 2011 referred to Agilista programmers who rejected the notion that testers are necessary. Matt elaborates that extreme programming began back in 1999, when testing as a field was not as mature, so these developers spoke of only two primary roles, customer and programmer, making only the allowance that the “team may include testers, who help the Customer define the customer acceptance tests. … The best teams have no specialists, only general contributors with special skills.” In general, Agile approaches teams as composed of members who “normally take responsibility for tasks that deliver the functionality an iteration requires. They decide individually how to meet an iteration’s requirements.” The Agile variation Scrum suggests teams of “people with cross-functional skills who do the actual work (analyse, design, develop, test, technical communication, document, etc.). It is recommended that the Team be self-organizing and self-led, but often work with some form of project or team management.” At the time, this prevailing view that the whole team should own quality diminished the significance of a distinct testing role.

Now that testing has grown, Matt encourages us to revisit this relationship and find ways to engage these world-changers and their methodologies to achieve a better result. Privately questioning the value of the activities testers contribute is more constructive than head-on public confrontation. Part of the friction is that testers fail to acknowledge Agile programmers’ awesomeness and specific skills; acknowledging them produces a more favorable environment for discussing the need for testing, a need Agilists do see, though they want to automate it. Matt advocates observing what works in cross-functional project teams to determine patterns that work in a particular context, engaging our critical thinking skills to develop processes that support better testing. As a process naturalist, Matt reminds us that “[software teams in the wild] may not fit a particular ideology well, but they are important, because by noticing them we can design a software system to have less loss and more self-correction.”

Having not experienced this rejection by Agile zealots myself, despite working only in Agile-friendly software shops, I have struggled to comment on his suggestions. I decided to dig a bit deeper into some classic perspectives on roles within software teams so that I could put the XP and Agile assertions into context.

One such analogy for the software team is Harlan Mills’ Chief Programmer Team as “a surgical team … one does the cutting and the others give him every support that will enhance his effectiveness and productivity.” In this model, “Few minds are involved in design and construction, yet many hands are brought to bear. … the system is the product of one mind – or at most two, acting uno animo,” referring to the surgeon and copilot roles, which are respectively the primary and secondary programmer. However, this model recognizes a tester as necessary, stating “the specialization of the remainder of the team is the key to its efficiency, for it permits a radically simpler communication pattern among the members,” referring to centralized communication between the surgeon and all other team members, including the tester, administrator, editor, secretaries, program clerk, toolsmith, and language lawyer for a team size of up to ten. This large support staff presupposes that “the Chief Programmer has to be more productive than everyone else on the team put together … in the rare case in which you have a near genius on your staff–one that is dramatically more productive than the average programmer on your staff.”

Acknowledging the earlier paradigm of the CPT, the Pragmatic Programmers assert another well-known approach to team composition: “The [project’s] technical head sets the development philosophy and style, assigns responsibilities to teams, and arbitrates the inevitable ‘discussions’ between people. The technical head also looks constantly at the bigger picture, trying to find any unnecessary commonality between teams that could reduce the orthogonality of the overall effort. The administrative head, or project manager, schedules the resources that the teams need, monitors and reports on progress, and helps decide priorities in terms of business needs. The administrative head might also act as a team’s ambassador when communicating with the outside world.” While these authors do not directly address the role of tester, they state that “Most developers hate testing. They tend to test gently, subconsciously knowing where the code will break and avoiding the weak spots. Pragmatic Programmers are different. We are driven to find our bugs now, so we don’t have to endure the shame of others finding our bugs later.” In contrast, traditional waterfall software teams have individuals “assigned roles based on their job function. You’ll find business analysts, architects, designers, programmers, testers, documenters, and the like. There is an implicit hierarchy here – the closer to the user you’re allowed, the more senior you are.” This juxtaposition sets up a competitive relationship between the roles rather than presenting them as striving toward the same goal.

A healthier model of cross-functional teams communicates that all of the necessary skills “are not easily found combined in one person: Test Know How, UX/Design CSS, Programming, and Product Development / Management.” This view advocates reducing communication overhead by involving all of the relevant perspectives within the team environment rather than segregating them by job function. Here, a tester role “works with product manager to determine acceptance tests, writes automatic acceptance tests, [executes] exploratory testing, helps developers with their tests, and keeps the testing focus.”

Finally, we have approached my favorite description of a collaborative software team as found in Peopleware: “the musical ensemble would have been a happier metaphor for what we are trying to do in well-jelled work groups” since “a choir or glee club makes an almost perfect linkage between the success or failure of the individual and that of the group. (You’ll never have people congratulating you on singing your part perfectly while the choir as a whole sings off-key.)” When we think of our team as producing music together, we see that this group composed of disparate parts must together be responsible for the quality of the result, allowing for a separate testing role but not reserving a testing perspective to that one individual. All team members must pursue a quality outcome, rather than only the customer and the programmer, as Agile purists would have it. One aspect of this committed team is willingness to contend respectfully with one another, for we would readily ignore others whose perspectives had no value. Yet, when we see that all of our striving contributes to the good of the whole, the struggle toward understanding and consensus encourages us to embrace even the brief discomfort of disagreement.

Image Credit

Talent Scout

12 Friday Aug 2011

Posted by claire in CAST 2011, Soft Skills, Training

≈ 6 Comments

Ubiquitous

At CAST this year, Michael Larsen gave a talk about testing team development lessons learned from the Boy Scouts of America. I have some familiarity with the organization since my kid brother was once a boy scout, my college boyfriend was an Eagle scout, a close family friend is heavily involved in scouts, and I anticipate my husband and both of my sons will “join up” as soon as the boys are old enough. I just might be a future Den Mother.

However, when I was growing up, I joined the Girl Scouts of America through my church. We didn’t have the same models of team development, but we had some guiding principles underpinning our troop:

The Girl Scout Promise
On my honor, I will try:
To serve God and my country,
To help people at all times,
And to live by the Girl Scout Law.

The Girl Scout Law
I will do my best to be
honest and fair,
friendly and helpful,
considerate and caring,
courageous and strong, and
responsible for what I say and do,
and to
respect myself and others,
respect authority,
use resources wisely,
make the world a better place, and
be a sister to every Girl Scout.

If we as testers live up to these principles of serving, helping, and living honesty, fairness, and respect in our professional relationships, we can become the talented leaders that Michael encourages us to be:

CAST 2011 Emerging Topics: Michael Larsen “Beyond Be Prepared: What can scouting teach testing?”
From Boy Scouts: watching how people learn and how people form into groups
1960s model for team development

Team stages:

  • Forming: Arrows pointing in different directions; group comes together and they figure out how they’re going to do things
  • Storming: Arrows in direct opposition to one another
  • Norming: Arrows beginning to go in the same direction; figure out what our goal is
  • Performing: Most arrows in the same direction/aligned; objectives clear and we go for it

EDGE

  • Explain – during the forming stage, the leadership role that you take: telling people what they need to know and learn (dictatorship)
  • Demonstrate – show how to do what needs to be done, make it familiar
  • Guide – answer questions but let them do the majority of the hands-on
  • Enable – leader steps out of the way, let the person go their way, “I trust you”

Movies such as Remember the Titans and October Sky demonstrate this process.
Failure to “pivot” can prevent someone from moving through the continuum!
Without daily practice, skills can be lost or forgotten, so may need to drop to a lower stage for review.

After Michael’s lightning talk, members of the audience brought these questions for his consideration:

Q: Is this a model? Or does every team actually go through these steps?
Duration of the steps varies, some may be very brief.
Unlikely to immediately hit the ground running.

Q: What about getting stuck in the Storming mode?
Figure out who can demonstrate. If you don’t like the demonstrated idea, toss it. Just get it out there!

Q: How does this model work when one person leaves and another comes in?
Definitely affects the group, relative to the experience of the person who joins.
May not revert the group back to the very beginning of the process.
Team rallies and works to bring the new person up to the team’s level.
New member may bring totally fresh insights, so that person doesn’t necessarily fall in line.

Q: What happens when a group that is Norming gets a new leader?
Can bring the group down unless you demonstrate why you’re a good leader.
Get involved! Get your hands dirty!
Build the trust, then you can guide. Team will accept your guidance.

If this works on young squirrely kids, imagine how well this works on young squirrely developers … testers. – Michael Larsen

Image Credit

Spare the Rod

10 Wednesday Aug 2011

Posted by claire in CAST 2011, Context, Metrics, Training

≈ 4 Comments

Ubiquitous

Paul Holland‘s interstitial Lightning Talk at CAST 2011 was a combination of gripe session, comic relief, and metrics wisdom. The audience in the Emerging Topics track proffered various metrics from their own testing careers for the assembled testers to informally evaluate.

Although I attended CAST remotely via the UStream link, I live-tweeted the Emerging Topics track sessions and was able to contribute my own metric for inclusion in the following list, thanks to the person monitoring Twitter for @AST_News:

  • number of bugs estimated to be found next week
  • ratio of bugs in production vs. number of releases
  • number of test cases onshore vs. offshore
  • percent of automated test cases
  • number of defects not linked to a test case
  • total number of test cases per feature
  • number of bug reports per tester
  • code coverage
  • path coverage
  • requirements coverage
  • time to reproduce bugs found in the field
  • number of people testing
  • equipment usage
  • percentage of pass/fail tests
  • number of open bugs
  • amount of money spent
  • number of test steps
  • number of hours testing
  • number of test cases executed
  • number of bugs found
  • number of important bugs
  • number of bugs found in the field
  • number of showstoppers
  • critical bugs per tester as proportion of time spent testing

“Counting test cases is stupid … in every context I have come across” – Paul Holland

Paul mentioned that per-tester or per-feature metrics create animosity among testers on the same team or within the same organization. When confronted with a metric, I ask myself, “What would I do to optimize this measure?” If the metric motivates behavior that is counter-productive (e.g., intra-team competition) or misleading (e.g., measuring something irrelevant), then it is worthless because it does not contribute to the goal of delivering user value. Bad metrics lead to people in positions of power saying, “That’s not the behavior I was looking for!” To be valid, a metric must improve the way you test.

In one salient example, exceeding the number of showstopper bugs permitted in a release invokes stopping or exit criteria, halting the release process. Often, this number is an arbitrary selection that was made long before, perhaps by someone who may no longer be on staff, as Paul pointed out, and yet it prevents the greater goal of shipping the product. Would one critical bug above the limit warrant arresting a rollout months in the making?
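As a hedged illustration (the threshold and bug counts below are invented, not taken from any real release), such an exit criterion reduces the ship/no-ship decision to a single comparison against a number chosen long ago:

```python
# Hypothetical release gate driven by a fixed showstopper threshold.
SHOWSTOPPER_LIMIT = 3  # arbitrary value picked long before this release

def release_allowed(showstopper_count: int) -> bool:
    """Allow the release only while the showstopper count stays within the limit."""
    return showstopper_count <= SHOWSTOPPER_LIMIT

print(release_allowed(3))  # True: ship
print(release_allowed(4))  # False: one bug over the limit halts the whole rollout
```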

Paul’s argument against these metrics resonated with my own experience and with the insight I gathered from attending Pat O’Toole’s Metrics that Motivate Behavior! [pdf] webinar back in June of this year:

“A good measurement system is not just a set of fancy tools that generate spiffy charts and reports. It should motivate a way of thinking and, more importantly, a way of behaving. It is also the basis of predicting and heightening the probability of achieving desired results, often by first predicting undesirable results thereby motivating actions to change predicted outcomes.”

Pat’s example of a metric that had no historical value and that instead focused completely on behavior modification introduced me to a different way of thinking about measurement. Do we care about the historical performance of a metric or do we care more about the behavior that metric motivates?

Another point of departure from today’s discussion is Pat’s prioritizing behavior over thinking. I think the context-driven people who spoke in the keynotes and in the Emerging Topics sessions would take issue with that.

Whoever spares the rod hates the child, / but whoever loves will apply discipline. – Proverbs 13:24, New American Bible, Revised Edition (NABRE)

My experience with metrics tells me that numbers accumulated over time are not necessarily evaluated at a high level but are more likely used as the basis for judging individual performance, becoming a rod of discipline rather than the protective rod of a shepherd defending his flock.

Paul did offer some suggestions for bringing metrics back to their productive role:

  • valid coverage metric that is not counting test cases
  • number of bugs found/open
  • expected coverage = progress vs. coverage

He also reinforced the perspective that the metric “100% of test cases that should be automated are automated” is acceptable as long as the overall percentage automated is low.
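A back-of-the-envelope sketch (the counts are made up) shows why that caveat matters: automating every test case that should be automated can coexist with a small overall automation percentage.

```python
# Hypothetical counts illustrating the two different percentages.
total_test_cases = 200
should_be_automated = 20   # cases judged worth automating
actually_automated = 20    # every one of those is automated

pct_of_automatable = 100 * actually_automated / should_be_automated  # 100%
pct_of_all_cases = 100 * actually_automated / total_test_cases       # 10%

print(f"Automated share of automatable cases: {pct_of_automatable:.0f}%")
print(f"Automated share of all cases: {pct_of_all_cases:.0f}%")
```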

Metrics have recently become a particular interest of mine, but I have so much to learn about testing software that I do not expect to specialize in this topic. I welcome any suggestions for sources on the topic of helpful metrics in software testing.

Image Credit
