
aclairefication

~ using my evil powers for good

Category Archives: Approaches

Big Visible Testing

07 Tuesday Aug 2012

Posted by claire in Agile, Approaches, CAST 2012, Context, Experiences, Experiments, Publications

≈ 2 Comments

Presented as an Emerging Topic at CAST 2012

This was my talk proposal:

I have always thought of myself as an agile tester. After all, my development teams have always delivered features in two-week sprints. My testing activities included reviewing requirements or stories before the planning meetings to assemble a list of questions and test ideas that I would use to approach the work proposed. I participated in a review before code completion that allowed for some exploratory testing, brief and informal though it may have been at times. In the past couple of years, I also planned and coded test automation.

However, over the past year, I have been transforming from a pseudo-agile tester to a true agile tester. Rather than sitting apart from the software developers in my own quality engineering department, I am now seated in the same room as the other employees from a mix of disciplines who are on my product delivery team. Rather than testing in a silo, I have been gradually increasing the visibility of testing activities through exploratory test charter management, defect backlog organization, and paired exploratory testing with both testers and non-testers. The feedback loops have shortened and the abbreviated time between activities necessitated adjusting how I provide information.

Testers are in the information business. We take the interests and concerns of the business as communicated through the product owner – or, in my case, the product owner team – combine those with the needs of the customer as expressed in the story, and further augment them with our experience using and analyzing software for deficiencies, aberrations, and oddities. We draw upon a variety of resources, including the experience and perspectives of fellow testers, heuristics, and product history, to approach the goal of delivering a product the customer values, focusing especially on the quality aspects of that value.

Now that the audience for my testing comprises a mix of disciplines and the work environment has shifted from a heavier process to transparent, quick information access, I have been experimenting with different ways to execute testing and to represent the outcomes of that testing activity so that the information consumers understand it in ways that best suit each of their perspectives.

In my brief presentation, we will examine 3 different agile team member personas and their implications for presenting and maintaining testing information as well as the inherent tensions between their distinct and various needs. I will trace my learning curve of adjusting to their needs through the various experiments I have completed in this context, though these lessons extend beyond a purely cross-functional agile product development team.

Other testers will come away with a fresh perspective on their product team members and a renewed focus on the value testing artifacts provide to a software development team.

Big Visible Testing from Claire Moss

February results for Tester Merit Badge

09 Friday Mar 2012

Posted by claire in Approaches, Experiments, Techwell, Tester Merit Badges

≈ Leave a Comment

Tester Merit Badge - Explorer

My February was pretty crazy, so I didn’t pursue all of the parts of the Explorer Tester Merit Badge this month. I skipped these parts due to time constraints:

3. How Long and How Far
4. Walk the Distance
8. Bus and Train Maps

No sweat. I’ll certainly be coming back to this one since exploratory testing (ET) is meat-and-potatoes testing for me.

Read the rest on the Techwell blog and tell me about your results!

Tester Merit Badges live with 1 week to go for February!

22 Wednesday Feb 2012

Posted by claire in Approaches, Experiments, Exploratory Testing, Techwell, Tester Merit Badges, Training

≈ 1 Comment

With only a week left in February, I’ll be wrapping up my own trial of the first Tester Merit Badge and posting my results, so I encourage you to try it for yourself. Do let me know if you’re playing along at home!

For reference, here is the introduction and here is the motivation.

Originally posted on the Techwell blog, the badge description is reposted here for convenience:


Tester Merit Badge: Explorer

Skill challenge: Try exploratory testing
Requirements
1. Know Your Maps.

Be able to explain different approaches to exploratory testing (e.g. testing tours, chartered session-based testing, freestyle)

Some web resources:
Session-based testing
Scenario testing
Exploratory testing in agile

Some books:
Crispin, Lisa; Gregory, Janet (2009). Agile Testing
Whittaker, James (2009). Exploratory Software Testing

2. North, South, East, West.

Who says scripted test cases can’t be exploratory? Just because you have a protocol written down doesn’t mean your brain turns off when you execute it. As you work through a set of instructions – perhaps drawing from a user manual if you lack test scripts or specific test cases – keep your eyes open for what is going on around you, not just what fits the happy path of the case. Make notes of testing ideas, chase down something that’s off-script, and keep track of what you do as you go along. If no test scripts or highly structured test cases exist for a particular aspect you want to test, skip ahead to requirement #3.

3. How Long and How Far.

Using an existing reference (if any) estimate the time to exploratory test an aspect of a feature of a software product. If you have been using test cases or test scripts for this testing, then use those as jumping off points for your exploration but don’t follow them. I tend to make a list of some test ideas and then pick and choose which ones to attempt during a particular exploratory session. If no ready-made resources exist for a particular aspect you want to test, skip ahead to requirement #4.
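
One lightweight way to capture that pick-and-choose list of test ideas is a small charter record with a time box. This is purely an illustrative sketch – the `Charter` structure and its field names are invented here, not part of the badge or any particular tool:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a lightweight record for one exploratory
# session. The Charter structure and field names are invented for this
# example, not part of the badge or any particular tool.
@dataclass
class Charter:
    mission: str
    time_box_minutes: int
    test_ideas: list = field(default_factory=list)
    session_notes: list = field(default_factory=list)

charter = Charter(
    mission="Explore saving and reloading a draft",
    time_box_minutes=45,
    test_ideas=[
        "save with required fields blank",
        "reload after browser refresh",
        "save two drafts with the same name",
    ],
)

# Pick and choose which ideas to attempt in this session,
# leaving the rest for a later time box.
attempted = charter.test_ideas[:2]
deferred = charter.test_ideas[2:]
print(len(attempted), len(deferred))  # 2 1
```

Comparing the time box against how long the session actually ran afterward gives you exactly the iterative feedback loop this requirement is after.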

4. Walk the Distance.

Estimate the time to exploratory test an aspect of a feature of a software product without the aid of any existing materials. Use your knowledge and experience to take an educated guess at what needs to be done and build up a guess-timate. We’re going for ballpark and not precision here. Then, try it and see how close you were. That’s the beauty of iterative learning.

5. Map Maker. Map of the Place. Make a Model.

I see these physical representations of travel as essentially the same when it comes to software testing. This one is about precisely describing what you observe, which makes it a perfect artifact of your exploratory session. This could be a site map, a pairing of requirements description snippets with implemented user interface components, or even a sketch of different paths through a block of code (if you’re doing some white box preparation for your exploratory testing). We want to show what we observed rather than what we expect, so you can explicitly record your expectations if that helps you to clear your mind for the road ahead.

6. Finding Your Way Without Map or Compass.

Freestyle! Do some testing without any more explicit structure than a time box. Give yourself 15 minutes to wander around in an application without stating a particular agenda. You might even try accessing a piece of software through a less favored access point (e.g. a traditional website viewed from a smart phone).

7. Trail Signs Traffic.

This is an opportunity to write a different kind of test guide for another tester to follow. If we stick to the trail metaphor, you want to provide indications of the way out of the woods, but you don’t dictate how the hiker travels between the boles of the trees bearing the blaze (or, if you’re a big nature nerd, the piles of rocks and sticks with their encoded messages about the trail ahead). I think Whittaker’s landmark tour is particularly apt for this example, so I recommend picking a part of your app to extract some landmarks. Avoid step-by-step instructions about how to wander between these milestones! You want to recognize the variation in execution that naturally occurs, even in the presence of a test script. In this case, doing it differently is a strength since you collectively will cover more of the application over time, although you may not encounter exactly the same scenery along the way.

8. Bus and Train Maps.

Use a publicly available source to map out a test. If you have a user manual for the application under test, that would be a good source for producing an expected route through an application. Just like a driver stuck in traffic, you don’t need to adhere to the planned route, so feel free to follow any detours that seem like better alternatives if you are feeling blocked or just want to take a more scenic route. Lacking a user manual for this particular product, try a description of some similar or competing product. Again, we’re exploring here, so having an inexact guide is no barrier to the experiment.


When you complete any or all parts of these badge “requirements” take a moment to reflect on whether the technique could be helpful to you in your regular testing work. You don’t have to migrate away from your current approach, but having some options always helps me to switch it up a bit when testing starts to feel monotonous – and I really think I’m doing it wrong when testing bores me! There is always too much testing to complete, so I certainly need to go exploring more often.

Drop me a line or post a comment here to let me know how the experiment went for you and I’ll post my own results here within the month.

Happy testing!

Image source (embroidered version of this image)

Test and Relaxation

29 Tuesday Nov 2011

Posted by claire in Approaches, Experiments, Weekend Testing

≈ Leave a Comment

hammock testing

When I was growing up, my mother enjoyed including a bit of education in our family vacations. She read to us from many sources about our intended destination, preparing us to be observant and appreciative. As a young girl, I read aloud – at her prompting – from guidebooks, tourist bureau brochures, and travel magazines. These days, my mother still e-mails me travel information from many websites, though reading aloud is now optional. Mom’s creative approach to vacation planning sought out off-the-beaten-path sights where we had a better chance of learning something. This early preparation also required us to think through the items we needed to pack to make our agenda attainable, from extra layers of clothing to special equipment.

She purposefully overloaded every day’s schedule, grouping our options for geographic areas of the destination. With three kids, she knew to expect the unexpected and that you can’t always plan for it, so instead she planned to accommodate the necessary flexibility. Sometimes the need for flexibility arose from external sources, so we always packed a map that we had studied in advance and could reference quickly on site. Likewise, we had already reviewed our transportation options so that we were familiar with the available means and routes to allow for quick on-the-spot adjustments. She raised me to embrace these interruptions, saying “sometimes the times you get lost are when you make the best discoveries.”

We joined docent-led architectural walks in Chicago, climbed the Mayan ruins in Costa Maya (Mexico), attended off-Broadway plays in New York City, attempted our limited French at the Quebec World Music Festival, and learned to play the washboard with spoons in New Orleans, though Washington DC was the mother lode of educational sight-seeing. All along the way, mom encouraged us to ask questions and to explore as we toured, capturing what we experienced and what we drew out of that in our daily journaling.

“The keyword for our vacation wasn’t relaxation, it was adventure.” — my mom

With this personal history, I found the idea of a testing vacation very natural when I participated in Weekend Testing Americas two weeks ago. In my daily work, I am familiar with exploratory testing as a chartered but loosely structured activity. I start with a time box and a list of test ideas to guide my testing in the direction of acceptance criteria for a story, but I never script steps of a test case. However, WTA presented us with this mission, should we choose to accept it:

We want to explore this application and find as many short abbreviated charters as possible.
You have 30 minutes to come up with as many “testing vacations” as you can consider. The goal is that no single vacation should take more than five minutes. Shorter is better.

I paired with Linda Rehme and we tested Google Translate in these ways:

  • testing in Firefox 8 and Chrome
  • prompt to use new feature of reordering text in the result
  • selecting alternate translations of phrases in the result
  • manually editing translations of phrases (or alternate translations) of the result
  • moving result text with capitalization
  • moving result text with punctuation
  • couldn’t reorder words within a phrase of the result text
  • re-translate to revert to the original result text
  • Listen to both source and result text
  • manually editing text of the result to include words in another language and then Listen
  • Listen didn’t work for both of us
  • icons for Listen and Virtual Keyboard displayed in Firefox 8 but not Chrome
  • different drag hardware controls (laptop touchpad, laptop nub)
  • virtual keyboard for German (Deutsch)
  • moving virtual keyboard around in the browser
  • switching virtual keyboard between Deutsch and
  • misspelling words
  • prompted to use suggested spelling
  • prompted to select detected source language
  • Turn on/off instant translation
  • translating a single word with instant turned off displaying a list of results

When time was up, our moderators prompted us, “First, describe your “vacation”. Then describe what you saw while you were on vacation. And finally, what you wished you had done while you were on vacation (because really, there’s never enough time to do everything).”

My pair of testers noticed that different browsers displayed different controls, features worked in some browsers and not in others (e.g. Listen), result phrases could be manipulated as a unit but couldn’t be broken apart, and moving result phrases around did not correct either the capitalization or punctuation. I really wanted to go down the rabbit hole of having instant translation turned off because I immediately saw that result text didn’t clear and then clicking the translate button for a single word produced a different format of results (i.e. list of options below the result text). In fact, I found myself full of other testing vacation ideas and it was hard to keep track of them as I went along, testing rapidly. The best I could do was jot them down as I went while typing up descriptions of the testing we had completed. I enjoyed the rapid pace of the testing vacation exercise with its familiar exploratory testing style.

Weekend Testers Americas: Claire, the idea when you find that you are doing too many things is to step back and try to do just one. It’s like touring the Louvre. You can’t take it all in in one sitting. (Well, you can, but it would be severe information overload. 🙂)
Claire Moss: I liked that this accommodated my “ooh shiny!” impulses, so don’t get me wrong.
Weekend Testers Americas: Yes, testing vacations and “Ooh, shiny!” go *very well together 😀

Fortunately, my mom was always up for indulging those “Ooh, shiny!” impulses on vacations as I was growing up and now I have a new way to enjoy my testing time at work: testing vacations.

[I took the liberty of correcting spelling and formatting of text from the WTA #22 session.]

Image source

ET, Phone Home!

09 Friday Sep 2011

Posted by claire in Approaches, Experiments, Exploratory Testing

≈ 1 Comment

Composition

Although I am no longer the newest recruit on my employer’s Quality team, I am still something of an alien creature to the folks back at the mothership (i.e. home office). However, I have been slowly getting to know them through video conferencing, especially my fellow Quality team members. We have been experimenting with paired exploratory testing, but in my case we cranked it up a notch to *remote* paired exploratory testing. (You know testers don’t like to keep it simple, right?) This added an interesting layer of exploration to an already exploratory experience. (This meta goes out to you, Jace and Will S.)

Now, each member of the team has a Skype account, establishing a common medium for communication, and we are learning the basics together. While we contended with screen repaint, we were forced to discuss the products more in depth to make use of the lag time and to give some context for each newly displayed page. This also gave us a chance to discuss the testing process, the collaborative online Quality space, our documentation strategy, and a bit of product history. Oh yeah, and we did some testing.

Since I’m still a newbie, I pretty much expect to feel a bit lost in the woods when it comes to the rest of the company’s product suite. Paired exploratory testing (or ET for the testing aficionados among you) gave me a peek into the Daxko-verse. My fellow testers know the lay of the land and so are better positioned to provide test ideas inspired by the suite’s world as we know it – soon to be rocked by my team’s product! In return, I got to ask the naive questions about what we were looking at, what terminology meant, and how it all fits together. Sometimes, having a second set of eyes isn’t enough. You need someone to ask the dumb questions. Stand back, people, I am a professional at this.

Paired ET fosters the Agile Principles:
1. Continuous Feedback
2. Direct Communication
3. Simplicity
4. Responding to Change
5. Enjoyment

We are still working out how to run the sessions. Does the person on the product team pilot or co-pilot the session? Or do we take this rare opportunity to do some concurrent exploratory testing? How long do we test together? Do we test both products back-to-back or does that just leave us yearning for caffeine and a stretch break? Personally, I am loving this. It’s so much fun to play with the new and novel, and I hope that this livens up the regression routine for my home office folks. If nothing else, it is a great opportunity to geek out about testing methodology and learn a bit about what works in our context.

The best parts:
  • Finding bugs!
  • Communication
  • Knowledge sharing

Can’t wait to get into it again this afternoon.

Addendum: Now that we have completed the initial experiment in the vacuum of ignorance, I am free to research other approaches to paired exploratory testing. I paid particular attention to Agile testing as a new mindset that encourages transferring testing skills to other team members so that the whole team shares responsibility for testing.

Read more from Lisa Crispin, Janet Gregory, Brian Marick, Cem Kaner, and James Bach

Skepticism: a case study

09 Monday May 2011

Posted by claire in Approaches

≈ 2 Comments

Under the hood

When Jon Bach asked me what one piece of advice I would give a new tester, I answered skepticism.  For example, the people who are giving you requirements may not have complete information.

Question your assumptions

This general approach to problem solving served me well today when my car’s maintenance light came on during my morning commute.  I called a local repair shop knowing it was time to have my oil changed and made an appointment.  Although the phone operator told me there would be a surcharge for my oil based on my car’s make and model, I questioned that judgment.  My questioning approach was affirmed when the two people at the desk each came up with a different recommended type of oil for my car based on their research.

We went back to basics and literally looked under the hood (white box testing anyone?) to check the system only to discover that it wasn’t working as expected: my oil cap was missing!  Lacking that source of confirmation, we broke out the owner’s manual and found the answer (i.e. checked requirements).

Not only did we resolve a pressing question of implementation for work that would be immediately completed but we discovered an existing problem that we assumed was not present (bug).

Most satisfying of all, because the shop providing the current service was part of the same parent company as my previous visit, they were able to provide feedback about problem results to the previous mechanics, thereby improving their system overall.

I think a key part of this interaction was shaped by my in-progress reading of Crucial Conversations by Patterson, Grenny, McMillan, and Switzler, which Andy Kaufman recommended in his StarEast keynote.  I approached the conversation as an opportunity to learn about my vehicle and allowed the experts to teach me – though even experts disagreed! They taught me how to think through the problem for myself because they perceived this as a team effort.  They also provided the criticism of their counterparts at the other location.  I didn’t need to say anything negative and I saved some money based on their original quote for the work.  Best of all, I know today’s service was better because the people helping me were more conscious of potential problems.

What’s on your reading list that is helping you?

Image credit

Once bitten, twice shy

08 Sunday May 2011

Posted by claire in Approaches

≈ Leave a Comment

Tags

Approaches

Take a bite

Back when I was in high school, my friend was driving down the highway late at night and leaned toward me, which momentarily sent us careening toward the concrete median.  I won’t say my life flashed before my eyes, but the panic must have been evident on my face since he immediately corrected the car’s path.  It left me with a lasting dread of driving in the left lane that plagues me to this day.  (Yes, I know the rest of the lanes have the excitement of other life-threatening opportunities courtesy of my fellow motorists.)

Not long ago, another friend of mine was driving through a construction zone and was pulled over for a speeding ticket.  They hit him with the maximum fine, which was double the norm due to the construction zone.  Although I wasn’t present for this occasion, it is a salient example of the sort of thing that can bite you on the road.

These examples were foremost in my mind as I was driving home from the StarEast conference last night.  I was driving in the left lane when a median and a construction zone simultaneously appeared before me, sending me retreating to the right lane where I set my cruise control to the speed limit displayed on the signs.

Although I deemed it unlikely that a cop would be waiting for someone to step out of line on that particular stretch of highway on a Saturday night, I certainly had no desire to find out by experimentation.  A co-worker of mine recently related her experience with being pulled over on a lonely stretch of highway when she had no time to argue with the officer’s judgment.  Honestly, who would want to be dragged back to the middle of nowhere to prove your innocence?  (Assuming you can…)

Sometimes, I think our test cases can adhere to the same algorithm I use in driving: That bit us once and the consequences were terrible!  We must test it every time!

James Bach mentioned in his Creative Thinking talk at StarEast that we have diminishing returns for this sort of testing.  When we know we will always test this area, we are more likely to implement correctly.

I think we as testers also can easily become complacent with areas we have tested over and over again.  This has definitely happened to me!

“Because the first bite always tastes best!” – Ramona Quimby

When we get a new feature, we think of all the interesting and malicious and foolish ways to use the system.  We get to take the first bite out of it and savor the sweetness of the quick payoff.

It’s so much more difficult to stick with testing regression cases by hand for each release that goes out the door.  We continue in our probably unnecessary task because we cannot bear the thought of missing the same consequence a second time.  We end up with additional cases that cover certain known error conditions that just lengthen our testing cycle.

Similarly, by limiting my speed and proceeding with caution to avoid the ticket, I lengthened my drive home so that my arrival time was after midnight.  Although I guaranteed that the hazards I was vigilantly monitoring did not occur, my time might have been better spent accomplishing the goal.

Likewise, I know the terrifying experience of veering toward a median, so I am much more likely to avoid that danger and the driving habits that would produce it, guarding against a known case.

Do you maintain cases that slow you down more than benefit your release cycle?  Do you continue to execute them manually?
Or do you automate these tests to free up your manual execution time for new approaches?

Image credit
