
aclairefication

~ using my evil powers for good

Category Archives: Training

Talent Scout

Friday, 12 Aug 2011

Posted by claire in CAST 2011, Soft Skills, Training

≈ 6 Comments


At CAST this year, Michael Larsen gave a talk about testing team development lessons learned from the Boy Scouts of America. I have some familiarity with the organization since my kid brother was once a Boy Scout, my college boyfriend was an Eagle Scout, a close family friend is heavily involved in Scouting, and I anticipate my husband and both of my sons will “join up” as soon as the boys are old enough. I just might be a future Den Mother.

However, when I was growing up, I joined the Girl Scouts of America through my church. We didn’t have the same models of team development, but we had some guiding principles underpinning our troop:

The Girl Scout Promise
On my honor, I will try:
To serve God and my country,
To help people at all times,
And to live by the Girl Scout Law.

The Girl Scout Law
I will do my best to be
honest and fair,
friendly and helpful,
considerate and caring,
courageous and strong, and
responsible for what I say and do,
and to
respect myself and others,
respect authority,
use resources wisely,
make the world a better place, and
be a sister to every Girl Scout.

If we as testers live up to these principles of serving, helping, and living out honesty, fairness, and respect in our professional relationships, we can become the talented leaders that Michael encourages us to be:

CAST 2011 Emerging Topics: Michael Larsen “Beyond Be Prepared: What can scouting teach testing?”
From the Boy Scouts: watching how people learn and how people form into groups
A 1960s model for team development (Tuckman’s stages of group development)

Team stages:

  • Forming: Arrows pointing in different directions; group comes together and they figure out how they’re going to do things
  • Storming: Arrows in direct opposition to one another
  • Norming: Arrows beginning to go in the same direction; figure out what our goal is
  • Performing: Most arrows in the same direction/aligned; objectives clear and we go for it

EDGE

  • Explain – during the forming stage, the leadership role that you take: telling people what they need to know and learn (dictatorship)
  • Demonstrate – show how to do what needs to be done, make it familiar
  • Guide – answer questions but let them do the majority of the hands-on
  • Enable – leader steps out of the way, let the person go their way, “I trust you”

Movies such as Remember the Titans and October Sky demonstrate this process.
Failure to “pivot” can prevent someone from moving through the continuum!
Without daily practice, skills can be lost or forgotten, so may need to drop to a lower stage for review.

After Michael’s lightning talk, members of the audience brought these questions for his consideration:

Q: Is this a model? Or does every team actually go through these steps?
Duration of the steps varies, some may be very brief.
Unlikely to immediately hit the ground running.

Q: What about getting stuck in the Storming mode?
Figure out who can demonstrate. If you don’t like the demonstrated idea, toss it. Just get it out there!

Q: How does this model work when one person leaves and another comes in?
Definitely affects the group, relative to the experience of the person who joins.
May not revert the group back to the very beginning of the process.
Team rallies and works to bring the new person up to the team’s level.
New member may bring totally fresh insights, so that person doesn’t necessarily fall in line.

Q: What happens when a group that is Norming gets a new leader?
Can bring the group down unless you demonstrate why you’re a good leader.
Get involved! Get your hands dirty!
Build the trust, then you can guide. Team will accept your guidance.

If this works on young squirrely kids, imagine how well this works on young squirrely developers … testers. – Michael Larsen

Image Credit

Spare the Rod

Wednesday, 10 Aug 2011

Posted by claire in CAST 2011, Context, Metrics, Training

≈ 4 Comments


Paul Holland‘s interstitial Lightning Talk at CAST 2011 was a combination of gripe session, comic relief, and metrics wisdom. The audience in the Emerging Topics track proffered various metrics from their own testing careers for the assembled testers to informally evaluate.

Although I attended CAST remotely via the UStream link, I live-tweeted the Emerging Topics track sessions and was able to contribute my own metric for inclusion in the following list, thanks to the person monitoring Twitter for @AST_News:

  • number of bugs estimated to be found next week
  • ratio of bugs in production vs. number of releases
  • number of test cases onshore vs. offshore
  • percent of automated test cases
  • number of defects not linked to a test case
  • total number of test cases per feature
  • number of bug reports per tester
  • code coverage
  • path coverage
  • requirements coverage
  • time to reproduce bugs found in the field
  • number of people testing
  • equipment usage
  • percentage of pass/fail tests
  • number of open bugs
  • amount of money spent
  • number of test steps
  • number of hours testing
  • number of test cases executed
  • number of bugs found
  • number of important bugs
  • number of bugs found in the field
  • number of showstoppers
  • critical bugs per tester as a proportion of time spent testing

“Counting test cases is stupid … in every context I have come across” – Paul Holland

Paul mentioned that per-tester or per-feature metrics create animosity among testers on the same team or within the same organization. When confronted with a metric, I ask myself, “What would I do to optimize this measure?” If the metric motivates behavior that is counterproductive (e.g. intra-team competition) or misleading (i.e. measuring something irrelevant), then that metric has no value because it does not contribute to the goal of user value. Bad metrics lead to people in positions of power saying, “That’s not the behavior I was looking for!” To be valid, a metric must improve the way you test.

In one salient example, exceeding the number of showstopper bugs permitted in a release invokes stopping or exit criteria, halting the release process. Often, as Paul pointed out, this number is an arbitrary selection made long ago, perhaps by someone who is no longer on staff, and yet it blocks the greater goal of shipping the product. Would one critical bug above the limit warrant arresting a rollout months in the making?

Paul’s argument against these metrics resonated with my own experience and with the insight I gathered from attending Pat O’Toole’s Metrics that Motivate Behavior! [pdf] webinar back in June of this year:

“A good measurement system is not just a set of fancy tools that generate spiffy charts and reports. It should motivate a way of thinking and, more importantly, a way of behaving. It is also the basis of predicting and heightening the probability of achieving desired results, often by first predicting undesirable results thereby motivating actions to change predicted outcomes.”

Pat’s example of a metric that had no historical value and that instead focused completely on behavior modification introduced me to a different way of thinking about measurement. Do we care about the historical performance of a metric or do we care more about the behavior that metric motivates?

Another point of departure from today’s discussion is Pat’s prioritizing behavior over thinking. I think the context-driven people who spoke in the keynotes and in the Emerging Topics sessions would take issue with that.

Whoever spares the rod hates the child, / but whoever loves will apply discipline. – Proverbs 13:24, New American Bible, Revised Edition (NABRE)

My experience with metrics tells me that numbers accumulated over time are not necessarily evaluated at a high level but are more likely used as the basis for judging individual performance, becoming a rod of discipline rather than the protective rod of a shepherd defending his flock.

Paul did offer some suggestions for bringing metrics back to their productive role:

  • valid coverage metric that is not counting test cases
  • number of bugs found/open
  • expected coverage = progress vs. coverage

He also reinforced the perspective that the metric “100% of test cases that should be automated are automated” is acceptable as long as the overall percentage automated is low.

Metrics have recently become a particular interest of mine, but I have so much to learn about testing software that I do not expect to specialize in this topic. I welcome any suggestions for sources on the topic of helpful metrics in software testing.

Image Credit

Certifiable: Confessions of a Theory Nerd

Wednesday, 15 Jun 2011

Posted by claire in Certification, ISTQB, Training

≈ 6 Comments

Since I still hold the first job I obtained after college graduation, my training as a tester has been practical, sometimes specific to my circumstances. While this makes me very good at my work, I always feel that I could be better. Perhaps to satisfy that perfectionist urge, over the years I sought different ways to improve myself. One strategy that never fails me is certification. I know that some testers seek certification as a means to fulfill job description requirements or “level up” at their companies, but my situation is distinct. You see, I am a nerd. I love learning, sometimes for its own sake. Back in the day, I would have happily engaged you in conversation about tuple calculus or graph theory, but now testing software is clearly my calling and I want to be the ultimate quality engineer. I don’t feel compelled to compete with anyone else for this distinction; it is purely a competition against myself to see whether I have plateaued or can continue to improve. For this reason, I love certification as a chance to objectively measure my progress.

“I do not think it means what you think it means.” – Inigo Montoya, The Princess Bride

Both times I completed a certification class and the subsequent exam, I returned to work “fired up” for the next big push. Since my training has been largely on-the-job, I did not make distinctions between certain testing terms that my more experienced co-workers did not emphasize. Having exposure to industry-standard definitions helps me to clarify problems. In addition, the precision appeals to me as a theory nerd. With the goal of establishing a common language within my testing team, I introduced some of the things I had learned and accepted the group’s idiomatic usage as well. After all, communicating with my testing team and my development teams is the most immediate and effective use of my book learning. I found the rigor and structure of the courses helpful. I did not take these courses expecting to learn the One True Way of Software Testing. I wanted to expand my horizons to include other approaches and viewpoints. Did the certification courses make me a better tester? Yes, they did. They renewed my passion for my work. Formal training increased my confidence in my understanding of the software testing field. I brought back insights about areas for our improvement and I think we are a better department for having this input. Did I need to use a certification course to accomplish this goal? Certainly not. I don’t think that all testers need to be certified. I could have completed self-study through textbooks, websites, blogs, and industry contacts. I suppose that is similar to the perspective of recent news articles declaring college to be a waste of time. For me, the formality and structure of training classes and the certification process help to reinforce learning, which may not be true for everyone. A friend recently posted this to his blog:

There’s got to be a structure around the knowledge, and practice in applying it, for it to sink in. College works good for that. … Nowadays, I learn by reading textbooks on my own. And it’s harder, because I need to bring my own structure, motivation, and discipline.

As long as we continue to improve our testing skills, remain open to new possibilities, and continue to investigate pressing questions about the quality of our software, we can congratulate ourselves on being certifiable testing zealots, whether we are certified or not.

Image credit

Origin Story

Wednesday, 1 Jun 2011

Posted by claire in Training

≈ 2 Comments

Took the fam to see Kung Fu Panda 2 today. Traditional film can only support about 30 min of 3D, so seeing a whole movie in digital 3D for the first time was pretty cool. I loved it. Also, children have no inhibitions about talking through movies in the theatre. Fun times.

Every hero has an origin story. My favorite superhero is Batman. He overcame loss and through sheer determination – and a family fortune to buy some cool toys – turned it around to watch out for others, guarding them against the vicissitudes of the world, specifically the criminal underworld. Of course, I see Po’s origin story as an echo of Batman’s, but I’ll refrain from ruining it for my son.

Recently, some tester friends were discussing the nature of heroism on Twitter, including how it pertains to testing software. I’ve heard the word hero used with two completely different connotations in this context:

  1. Heroism as the self-sacrifice of the tester, throwing oneself on the grenade for the good of the unit.
  2. Heroism as the self-actualization of the tester, realizing one’s full professional potential, which benefits the team.

No doubt we are all familiar with work situations that embody the first definition of heroism. Inevitably, the needs of customers dictate swift resolutions to problems that require an extra effort from staff members, including software testers. We manage the current crisis, wipe our brows, and return to our regular work schedule. We don’t want to cultivate situations that necessitate this kind of deliverance: constantly fighting fires is a sign that we aren’t preventing the fires in the first place.

While admirable, accepting – or even taking initiative to seek – tough assignments may not accomplish our goals in a sustainable way. The champion who constantly faces down dangers may not be taking care to restore strength between bouts and can become burnt out. Although Sisyphus could be viewed as an absurdist hero, this is not heroism I choose to imitate. In addition, an environment with one superstar can actually prevent others on the team from cultivating their own courageous abilities. We need to take turns wrestling with the big problems so that we all become more capable, which might mean a less glamorous sacrifice such as familiar regression testing – though we could even take this opportunity to improve this routine material with the knowledge gained since our last encounter.

Then again, when we have individuals with heroic talent, do we want to spend it on tasks that do not require such skills? We must avoid the Harrison Bergeron solution of bringing heavyweights back to the average level. Would the group be better served by accomplishing the routine tasks in a more efficient way, such as automation, and devoting able testers to the remaining unaddressed challenges? We want our testing staff to take the initiative to seek knowledge in resolving tough problems so that when software crises occur our testers are prepared to face the present trials. Constantly improving our testing skills offers clear benefits to our team in an hour of decision. Our goal should be “building a fighting force of extraordinary magnitude,” where our metric is one of ability, one that develops each person’s heroism.

Preventing the formation of heroes would not stamp out circumstances that call for heroics. Why do we create or foster circumstances that routinely demand a hero to set things right? Let’s not be Lois Lane, living for the moment of rescue to admire the Superman who has saved us. We need to stop tying the lady to the tracks, to stop distressing the damsel, and to free up our heroes for the real fight, facing external threats rather than internal ones. While the value of heroism is under discussion, the real topic of conversation should be why heroism is necessary.

Go with the flow – Exploratory Learning

Friday, 6 May 2011

Posted by claire in STAREast 2011, Training

≈ 9 Comments


Registering for STAREAST nearly a year in advance gave me a lot of time to plan my conference schedule. At first, the website for the 2011 incarnation of the conference was just a stub, leaving me only previous conference sessions to peruse and time to speculate about which speakers might return.

Then, the website filled with a schedule and pages of session descriptions, allowing me to indulge in one of my favorite pastimes: juxtaposing classes in a week of schooling. My college roommate always found it a bit comical that I would happily push around little notecards with course names on them trying to see how to maximize my time for the semester’s courses. Since I have so many diverse interests, narrowing down the field of conference sessions to those I could actually attend was a challenge. I finally highlighted some session names and tucked the pages into a file folder.

You can imagine my joy when the glossy printed copy of the conference materials arrived in my mailbox within months of the conference start date. Cue another round of reading, wrangling, circling, highlighting, and underlining. Wash, rinse, repeat. Of course, this list of session selections was different. I was sure these iterations were producing better and better approximations of the conference experience I wanted to have.

When I registered on the first day of tutorials, the registration folks handed me yet another source of scheduling bliss, the official course program. However, at this time, I had registered for specific tutorial classes and so considered my schedule set in stone for the first two days of the week. I attended my first day’s registered tutorial, which was a good refresher from my certification training last year. I had come out of my certification class all fired up to implement the new strategies I had discovered only to meet the reality of slow organizational change. This year, I could use a reminder to try again.

What I didn’t expect was Lee Copeland’s advice: if a session isn’t working for you, respectfully depart and find one that really speaks to your needs. I thought how nice it was that the conference scheduling chair was encouraging us to be flexible, and I went on my merry way.

The next morning, I arrived to find that my morning tutorial and my afternoon tutorial actually conflicted, since the afternoon tutorial was really a full day in length. I decided to trade off the morning session for Dale Perry’s full-day instruction on performance testing and went in ready to learn. After an hour of helpful instruction, I realized that this type of testing was not my company’s urgent need. Since I had already forgone one type of instruction for the other, I was a bit loath to disrupt my schedule again until I thought about Lee’s suggestion.

So I gave myself permission to adopt a structured, exploratory approach to the conference and made a real-time decision for better learning. I happened to recognize the name James Bach and “crashed the party” of his session. Both presenters were clearly passionate about their subject matter and the audience had opportunities to interact and ask questions. One question that came up just before the break led me to approach both James Bach and Michael Bolton with a related question of my own. I had no idea how much that moment would affect my conference experience.

I ended up following Michael Bolton and pretty much taking over his lunch with Bernie Berger after Bernie mentioned an exercise that involved a critical thinking challenge. (Sorry, Bernie! I hope you found it entertaining to watch me flail through that exercise.) I must have said something worthwhile because Michael took me under his wing and introduced me to other great test professionals.

From that point on, I decided to design my schedule as my day – and the conference – progressed, abandoning all my careful selections and preconceived notions about what my conference experience should be. My focus changed from a script of training on specific topics that I could implement back at the office to a learning charter of growing in a more open way as a quality professional. I was simultaneously learning about my professional needs while designing and executing my conference schedule.

As the days of the conference progressed, I adapted the use of my limited time as I came to understand this skill of exploratory learning. Just as in exploratory testing, “through this process, one discovery leads to another and another as you explore” (SQE training Exploratory Testing In Practice). I adopted a session-based framework for exploratory learning that included logging the discoveries I was making during each short interview of my talented and more experienced peers. These discussions were a highly interactive process organized into a series of time-boxed missions, with my peers generously listening and offering “suggestions that might work,” to quote Gojko Adzic.

The practical notes I recorded during each session will help me add lasting value to my company and are already helping me develop into a more mature and well-rounded quality professional. More importantly, these kind people have reawakened my love for testing and turned the conference into a transformative experience for me.

Special thanks to my benefactors: Lee Copeland, Janet Gregory, Dale Perry, James Bach, Michael Bolton, Selena Delesie, Jon Bach, Dale Emery, Lisa Crispin, Greg Paskal, and Dawn Cannan. You all took time out of your busy schedules to encourage me to become excellent.

I enjoyed meeting speakers Bart Knaack, Andy Kaufman, Gojko Adzic, Robert Sabourin, Paul Carvalho, Naomi Karten, and Bindu Laxminarayan as well as hearing Julie Gardiner speak.

I also enjoyed meeting my fellow conference attendees Andrew Dempster, Greg Johnson, Roy Francis, Richard Michaels, Jeremy Hart, Yvette Francino, Susan Clever, and Niclas Reimertz.

Someday I hope to meet these additional test professionals I am now following on Twitter: Lanette Creamer, Karen Johnson, Fiona Charles, Lynn McKee, Nancy Kelln, Anne-Marie Charrett, Jerry Weinberg, Bret Pettichord, Johanna Rothman, Don Gray, Esther Derby, and Elisabeth Hendrickson.

Image credit
