
aclairefication

~ using my evil powers for good

Category Archives: Experiments

Tester Merit Badges live with 1 week to go for February!

22 Wednesday Feb 2012

Posted by claire in Approaches, Experiments, Exploratory Testing, Techwell, Tester Merit Badges, Training

≈ 1 Comment

With only a week left in February, I’ll be wrapping up my own trial of the first Tester Merit Badge and posting my results, so I encourage you to try it for yourself. Do let me know if you’re playing along at home!

For reference, here is the introduction and here is the motivation.

Originally posted on the Techwell blog, the badge description is reposted here for convenience:


Tester Merit Badge: Explorer

Skill challenge: Try exploratory testing
Requirements
1. Know Your Maps.

Be able to explain different approaches to exploratory testing (e.g. testing tours, chartered session-based testing, freestyle)

Some web resources:
Session-based testing
Scenario testing
Exploratory testing in agile

Some books:
Crispin, Lisa; Gregory, Janet (2009). Agile Testing
Whittaker, James (2009). Exploratory Software Testing

2. North, South, East, West.

Who says scripted test cases can’t be exploratory? Just because you have a protocol written down doesn’t mean your brain turns off when you execute it. As you work through a set of instructions (perhaps drawing from a user manual if you lack test scripts or specific test cases), keep your eyes open for what is going on around you, not just what fits the happy path of the case. Make notes of testing ideas, chase down something that’s off-script, and keep track of what you do as you go along. If no test scripts or highly structured test cases exist for a particular aspect you want to test, skip ahead to requirement #3.

3. How Long and How Far.

Using an existing reference (if any), estimate the time to exploratory test an aspect of a feature of a software product. If you have been using test cases or test scripts for this testing, use those as jumping-off points for your exploration, but don’t follow them. I tend to make a list of test ideas and then pick and choose which ones to attempt during a particular exploratory session. If no ready-made resources exist for a particular aspect you want to test, skip ahead to requirement #4.

4. Walk the Distance.

Estimate the time to exploratory test an aspect of a feature of a software product without the aid of any existing materials. Use your knowledge and experience to take an educated guess at what needs to be done and build up a guesstimate. We’re going for ballpark, not precision, here. Then try it and see how close you were. That’s the beauty of iterative learning.

5. Map Maker. Map of the Place. Make a Model.

I see these physical representations of travel as essentially the same when it comes to software testing. This one is about precisely describing what you observe, which makes it a perfect artifact of your exploratory session. This could be a site map, a pairing of requirements description snippets with implemented user interface components, or even a sketch of different paths through a block of code (if you’re doing some white box preparation for your exploratory testing). We want to show what we observed rather than what we expect, so you can explicitly record your expectations if that helps you to clear your mind for the road ahead.

6. Finding Your Way Without Map or Compass.

Freestyle! Do some testing without any more explicit structure than a time box. Give yourself 15 minutes to wander around in an application without stating a particular agenda. You might even try accessing a piece of software through a less favored access point (e.g. a traditional website viewed from a smart phone).
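If you want a lightweight way to keep that timebox honest and capture ideas without breaking your flow, here is a minimal sketch in Python (the `ExploratorySession` class and its method names are my own invention for illustration, not part of any testing tool):

```python
import time


class ExploratorySession:
    """Minimal timeboxed session logger: jot ideas and observations
    as you wander, then review them when the timebox expires."""

    def __init__(self, minutes=15, clock=time.monotonic):
        self.timebox = minutes * 60       # timebox length in seconds
        self.clock = clock                # injectable clock for testability
        self.start = clock()
        self.notes = []

    def remaining(self):
        """Seconds left in the timebox (never negative)."""
        return max(0.0, self.timebox - (self.clock() - self.start))

    def note(self, kind, text):
        """Record a jotting; kind might be 'idea', 'observation', or 'bug'."""
        self.notes.append((round(self.clock() - self.start), kind, text))

    def debrief(self):
        """Group notes by kind for a quick post-session review."""
        summary = {}
        for _, kind, text in self.notes:
            summary.setdefault(kind, []).append(text)
        return summary
```

Injecting the clock keeps the timebox checkable, and the debrief groups your jottings so the review after the 15 minutes is as easy as the wandering itself.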

7. Trail Signs Traffic.

This is an opportunity to write a different kind of test guide for another tester to follow. If we stick to the trail metaphor, you want to provide indications of the way out of the woods, but you don’t dictate how the hiker travels between the boles of the trees bearing the blaze (or, if you’re a big nature nerd, the piles of rocks and sticks with their encoded messages about the trail ahead). I think Whittaker’s landmark tour is particularly apt here, so I recommend picking a part of your app and extracting some landmarks. Avoid step-by-step instructions about how to wander between these milestones! You want to recognize the variation in execution that naturally occurs, even in the presence of a test script. In this case, doing it differently is a strength, since collectively you will cover more of the application over time, although you may not encounter exactly the same scenery along the way.

8. Bus and Train Maps.

Use a publicly available source to map out a test. If you have a user manual for the application under test, that would be a good source for producing an expected route through an application. Just like a driver stuck in traffic, you don’t need to adhere to the planned route, so feel free to follow any detours that seem like better alternatives if you are feeling blocked or just want to take a more scenic route. Lacking a user manual for this particular product, try a description of some similar or competing product. Again, we’re exploring here, so having an inexact guide is no barrier to the experiment.

 

When you complete any or all parts of these badge “requirements” take a moment to reflect on whether the technique could be helpful to you in your regular testing work. You don’t have to migrate away from your current approach, but having some options always helps me to switch it up a bit when testing starts to feel monotonous – and I really think I’m doing it wrong when testing bores me! There is always too much testing to complete, so I certainly need to go exploring more often.

Drop me a line or post a comment here to let me know how the experiment went for you and I’ll post my own results here within the month.

Happy testing!

Image source (embroidered version of this image)

This could be real good

03 Friday Feb 2012

Posted by claire in Agile, Experiments, Retrospective, Scrum

≈ Leave a Comment

It's Groundhog Day!

Something is different.
– Good or bad? (Rita)
Anything different is good…
but this could be real good. (Phil)
— Groundhog Day

I’m a relatively young ScrumMaster, so I adopted a retrospective pattern that a colleague suggested to me at the beginning of my tenure. We have been using that same approach for sprint retros for 6 months and it gets the job done. Still, I found myself bored with doing the same old thing for our release retro this month and was concerned about not getting the desired benefit from the process. So I grabbed a promising technique from the Agile Retrospective Resource Wiki called the Four L’s, which Mary Gorman and Ellen Gottesdiener of EBG Consulting developed as a variation of the World Café since they “wanted some variety in eliciting feedback, collectively sharing that feedback and exploring action possibilities.”

The wiki suggests using the Four L’s for “iteration and project retrospectives as well as for retrospection of training and conference events” with a duration of 30 minutes to 2 hours. My 6-member cross-functional team used this technique to reflect on a release and limited our time to an hour, though that wasn’t a hard cutoff. In the context of our release retrospective and the hospitable space of our team meeting room, we gathered our diverse perspectives to explore questions that mattered about how our release went. I titled each of 4 large sticky notes Liked, Learned, Lacked, and Longed For, hanging them side-by-side on a single wall, which was a variation of the suggestion to move around the room. We set a timer of 15 minutes to generate initial feedback for all 4 categories simultaneously and began scribbling madly on uniformly yellow stickies with our Sharpies. Our team ran dry of serious contributions before the time was up, but I think time-boxing activities tends to drive us to get ideas out more quickly.

We read each sticky note’s single idea aloud and then clustered the notes around themes when there was overlap, listening together for patterns and insights. Then, we discussed each whole category among ourselves, hearing out each person’s comments with understanding and humor (we don’t take ourselves too seriously) since we encourage everyone’s contribution to the conversation. We were happy with our technical skills and technologies, but more importantly we have jelled as a team, or made it through the Norming phase of Tuckman’s model. Characteristically, my team identified our successes but did not dwell on them as much as our areas for improvement. We decided we might use the gathered data to address the Lacked and Longed For items. We posted the following collective discoveries prominently in our team room:

  • Iteration
  • Continuous Planning
  • Continuous Research
  • Communication
  • Feedback (outside the walls)

These needs resonate with some of the Agile Manifesto principles:

  • Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  • Business people and developers must work together daily throughout the project.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Continuous attention to technical excellence and good design enhances agility.
  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

In our efforts to optimize our agility, we are learning from our team’s past and planning for the future so that our results could be real good.

Tester Merit Badges: Finding Your Way

31 Tuesday Jan 2012

Posted by claire in Experiments, Exploratory Testing, Tester Merit Badges

≈ 4 Comments

Recently, I began blogging over at Techwell with my Active Lifestyle resolution. As a follow-up, I am writing a guest series there.

As I have mentioned before, I was a Girl Scout for a while when I was growing up and loved the exposure to new and different things that I wouldn’t have occasion to try in my everyday life, as well as the structure around life skills that would later be essential to self-sufficiency. For me, there’s nothing new about the recent enthusiasm for gamifying aspects of our day-to-day lives and, as I’ve blogged before, having structure around learning helps me to progress.

As a grown woman, I again found an interest in earning badges when a former-Girl Scout friend of mine mentioned Lauren Grandcolas’ book You Can Do It!: The Merit Badge Handbook for Grown-Up Girls to me. It’s full of experiments modeled in the style of Girl Scout badges and reduces the potential intimidation of attempting new skill acquisition.

One day, after I had seen a 3D-printed octocat at the Atlanta Mini-Maker Faire at Georgia Tech, I was searching for octocat images online and I discovered the Nerd Merit Badges that reference GitHub, the movie Office Space, and the book The Pragmatic Programmer, among other wonderful and obscure aspects of geekery. These inspired me to apply the badge concept to software testing. After all, developers shouldn’t have all the nerdy fun, though I’m pretty sure I’ve already earned my Family Tech Support and Homonyms badges… (For my word nerds, contrast with homophone here.)

Allow me to preface this experiment by recognizing the interval of time a Girl Scout takes to earn a badge is not a month. Girls fulfill these requirements over time, probably interspersing activities from several badges over the course of many months. I recognize that neither you nor I will necessarily complete all the requirements for each month’s Tester Merit Badge and that’s fine. Checking everything off the list is not the point. We’re here to learn and step outside our comfort zones, so start where you’re comfortable and stretch yourself a bit!

For the inaugural Tester Merit Badge, I have designed the Explorer badge as an introduction to exploratory testing. I am modeling it after the requirements of the Girl Scouts of America’s Finding Your Way badge.


Girl Scout of America badge: Finding Your Way

Requirements
  1. Know Your Maps. Be able to explain 3 diff. types of maps.
  2. North, South, East, West. Show you know how to use a compass.
  3. How Long and How Far. Use map to determine time to specific place.
  4. Walk the Distance. Estimate time to walk distance and try it.
  5. Map Maker. Draw map of a route with a legend or key.
  6. Map of the Place. Draw map to scale of a specific place.
  7. Make a Model. Make 3 dimensional model.
  8. Finding Your Way Without Map or Compass. Use sun, stars and nature.
  9. Trail Signs Traffic. Use trail signs to set up a mini-trail for others to follow.
  10. Bus and Train Maps. Learn to use bus or train maps. Try your route.

See the first Testing Merit Badge on the Techwell blog!

Image source

Guest blogging at Techwell

17 Tuesday Jan 2012

Posted by claire in Experiments, Techwell

≈ Leave a Comment

Guest blogging

I have also begun a series of guest blog posts over at Techwell, a Software Quality Engineering (SQE) site, that I will be continuing throughout 2012. Active Lifestyle: Tester Merit Badges imminent!

Please contact me with any feedback. Thanks!

Image credit

Test and Relaxation

29 Tuesday Nov 2011

Posted by claire in Approaches, Experiments, Weekend Testing

≈ Leave a Comment

hammock testing

When I was growing up, my mother enjoyed including a bit of education in our family vacations. She read to us from many sources about our intended destination, preparing us to be observant and appreciative. As a young girl, I read aloud – at her prompting – from guidebooks, tourist bureau brochures, and travel magazines. These days, my mother still e-mails me travel information from many websites, though reading aloud is now optional. Mom’s creative approach to vacation planning sought out off-the-beaten-path sights where we had a better chance of learning something. This early preparation also required us to think through the items we needed to pack to make our agenda attainable, from extra layers of clothing to special equipment.

She purposefully overloaded every day’s schedule, grouping our options for geographic areas of the destination. With three kids, she knew to expect the unexpected and that you can’t always plan for it, so instead she planned to accommodate the necessary flexibility. Sometimes the need for flexibility arose from external sources, so we always packed a map that we had studied in advance and could reference quickly on site. Likewise, we had already reviewed our transportation options so that we were familiar with the available means and routes to allow for quick on-the-spot adjustments. She raised me to embrace these interruptions, saying “sometimes the times you get lost are when you make the best discoveries.”

We joined docent-led architectural walks in Chicago, climbed the Mayan ruins in Costa Maya (Mexico), attended off-Broadway plays in New York City, attempted our limited French at the Quebec World Music Festival, and learned to play the washboard with spoons in New Orleans, though Washington DC was the mother lode of educational sightseeing. All along the way, Mom encouraged us to ask questions and to explore as we toured, capturing what we experienced and what we drew out of it in our daily journaling.

“The keyword for our vacation wasn’t relaxation, it was adventure.” — my mom

With this personal history, I found the idea of a testing vacation very natural when I participated in Weekend Testing Americas two weeks ago. In my daily work, I am familiar with exploratory testing as a chartered but loosely structured activity. I start with a time box and a list of test ideas to guide my testing in the direction of acceptance criteria for a story, but I never script steps of a test case. However, WTA presented us with this mission, should we choose to accept it:

We want to explore this application and find as many short abbreviated charters as possible.
You have 30 minutes to come up with as many “testing vacations” as you can consider. The goal is that no single vacation should take more than five minutes. Shorter is better.

I paired with Linda Rehme and we tested Google Translate in these ways:

  • testing in Firefox 8 and Chrome
  • prompt to use new feature of reordering text in the result
  • selecting alternate translations of phrases in the result
  • manually editing translations of phrases (or alternate translations) of the result
  • moving result text with capitalization
  • moving result text with punctuation
  • couldn’t reorder words within a phrase of the result text
  • re-translate to revert to the original result text
  • Listen to both source and result text
  • manually editing text of the result to include words in another language and then Listen
  • Listen didn’t work for both of us
  • icons for Listen and Virtual Keyboard displayed in Firefox 8 but not Chrome
  • different drag hardware controls (laptop touchpad, laptop nub)
  • virtual keyboard for German (Deutsch)
  • moving virtual keyboard around in the browser
  • switching virtual keyboard between Deutsch and
  • misspelling words
  • prompted to use suggested spelling
  • prompted to select detected source language
  • Turn on/off instant translation
  • translating a single word with instant turned off displaying a list of results

When time was up, our moderators prompted us, “First, describe your “vacation”. Then describe what you saw while you were on vacation. And finally, what you wished you had done while you were on vacation (because really, there’s never enough time to do everything).”

My pair of testers noticed that different browsers displayed different controls, features worked in some browsers and not in others (e.g. Listen), result phrases could be manipulated as a unit but couldn’t be broken apart, and moving result phrases around did not correct either the capitalization or punctuation. I really wanted to go down the rabbit hole of having instant translation turned off because I immediately saw that result text didn’t clear and then clicking the translate button for a single word produced a different format of results (i.e. list of options below the result text). In fact, I found myself full of other testing vacation ideas and it was hard to keep track of them as I went along, testing rapidly. The best I could do was jot them down as I went while typing up descriptions of the testing we had completed. I enjoyed the rapid pace of the testing vacation exercise with its familiar exploratory testing style.

Weekend Testers Americas: Claire, the idea when you find that you are doing too many things is to step back and try to do just one. It’s like touring the Louvre. You can’t take it all in in one sitting. (Well, you can, but it would be severe information overload. 🙂)
Claire Moss: I liked that this accommodated my “ooh shiny!” impulses, so don’t get me wrong.
Weekend Testers Americas: Yes, testing vacations and “Ooh, shiny!” go *very well together 😀

Fortunately, my mom was always up for indulging those “Ooh, shiny!” impulses on vacations as I was growing up and now I have a new way to enjoy my testing time at work: testing vacations.

[I took the liberty of correcting spelling and formatting of text from the WTA #22 session.]

Image source

Initial Impact

21 Monday Nov 2011

Posted by claire in Experiments, Hackathon, Volunteering

≈ 1 Comment

Hackers Movie

Last week, I had my first experience of my company’s Impact Day “in which team members take a day to work outside of the office to give back to our local community.” We volunteered at the Mobile Hackathon for Good, which the WebVisions Atlanta program described as:

Join mobile app development experts, developers and designers in an all day Mobile Hackathon for social good. The day will begin with short presentations by educators and representatives from non-profit organizations, followed by informational sessions on building apps for Windows Phone and other mobile platforms.

We had two charities proposing app ideas for us, but only one of them had specific tasks with loose requirements. Unfortunately, those oracles were not able to stay with us all day due to their regularly scheduled charitable duties, so we were left with concrete examples of activities that would benefit from a mobile app but no way to discover additional information, though I did get a chance to chat informally with a couple of the representatives before the day’s schedule began. I have volunteered with local charities through Hands On Atlanta before, so I knew from experience how frustrating it can be to be part of a large group of volunteers waiting on a manual, hard-copy check-in to start our volunteer work. That sounded like a good problem to tackle.

The technical informational sessions filled the morning of our “all day” Mobile Hackathon, leaving us with only 4 hours to build apps for the Windows Phone Marketplace, which none of us had done before. Although I do enjoy a good discussion on design and how to execute it well, as you can see from my tweets, I think concentrating on design was a lofty goal for such a compressed timeline. I wanted to incorporate the principles James Ashley advocated, but I first wanted to have some small piece of functionality built up, such as navigating between screens. Also, I got a bit lost in the Expression Blend styles and had to have Shawn sort me out.

I think we had about a dozen folks on the Mobile Hackathon implementation crew, and we ended up informally splitting into two groups. About half of us did some whiteboard sketching and discussion of what we wanted the software to do. We had competing designs that were successively refined through an hour of discussion, leaving us only 3 hours to implement. We had many desired features and modes of entering volunteer data, but none of them fit well within our very limited time box, so we ended up abandoning the goal of adding people on site at the project location. We needed to focus on a very narrow implementation goal first. And as it turned out, we didn’t have very many developers present and not all had their own laptops installed with the Visual Studio 2010 Express for Windows Phone that we were to use as a development environment with either Visual Basic or Visual C#.

I profited the most from Shawn’s practical demonstration of building an app in a short period of time, which encouraged me to open up the software. I started several prototypes to explore the various installed templates, trying to get a feel for how to begin organizing the work. Figuring out where to start coding proved to be more of a hurdle for me, not being a programmer by profession, though once upon a time I was a Visual Basic coder for my employer while a co-operative degree student at Georgia Tech.

Since I had 4 co-workers with me, it might have seemed logical to form a unified team to attack problems like we do at work, but that wasn’t the way it worked out. Attendees Errin Calhoun and Eduardo La Hoz were on my implementation team to talk over some software design and implementation ideas, but I ended up writing the code. I wasn’t completely helpless, but I definitely benefited from collaboration with speakers Shawn Wildermuth and Moses Ngone. Even with their assistance, we ended up with a simple 3 screen app that could navigate through mocked data to a check-in view that displayed data collected as the user tapped through.

Afterward, several of us attended the Day One After Party, where my co-workers and I had an informal retrospective about the Hackathon with one of the organizers. Now, you should know that I am a reform-from-within kind of person and love to focus on opportunities to improve while recognizing what didn’t go well. I am specific in my concerns about problems I perceive in order to have the best chance of making a difference. Here are some things I noticed:

  1. Creating an actual shippable product in 4 hours was not realistic, especially with the paucity of developers.
  2. Part of the “understaffing” was a snafu in the communication surrounding the Hackathon’s location, which was incorrect on the website, printed tickets, and e-mail reminders. I think more devs would have been present without that wrinkle and I wish this had been tested in advance.
  3. However, we wouldn’t necessarily have effectively used more development talent because we didn’t have very strong self-organizing teams. Maybe it would have gone better to have a little more structure, like an event coordinator to help us identify the roles that Hackathon volunteers could fill and what our current talent pool included.
  4. We spent too much time on planning what we would want the app to do without attempting to iterate, too much like Big Up Front Design and creating requirements for stories that would have been far down the road (for sprints of that length).
  5. We could definitely have used more time to develop and less time learning about the Windows Phone Marketplace, which would never have accepted the partially completed apps that we ended up producing.
  6. In order to submit our apps, we needed to test on a Windows Phone device, which was not available. The other testing option was the Marketplace Testing Tool, which I never saw.

My design manager co-worker, Will Sansbury, had these comments:

  • Claire is fearless. [I didn’t have any development reputation to protect so I had no problem admitting a need and asking for help from strangers right away. – Claire]
  • I loved pairing with Dev through the whole process.
  • Expression Blend has a huge learning curve, and I’m not sure it’d be all that helpful once you get over that initial pain.
  • The short time box and no feature constraints necessitated a laser-sharp focus on one thing.
  • I feel bad that at the end of the day the world is no better.

We found out from our informal retrospective that this was the first Hackathon that WebVisions Atlanta has organized, so I am sure that subsequent iterations will take these lessons to heart – and in that unexpected way we have given back to our community.

Image source

ET, Phone Home!

09 Friday Sep 2011

Posted by claire in Approaches, Experiments, Exploratory Testing

≈ 1 Comment

Composition

Although I am no longer the newest recruit on my employer’s Quality team, I am still something of an alien creature to the folks back at the mothership (i.e. home office). However, I have been slowly getting to know them through video conferencing, especially my fellow Quality team members. We have been experimenting with paired exploratory testing, but in my case we cranked it up a notch to *remote* paired exploratory testing. (You know testers don’t like to keep it simple, right?) This added an interesting layer of exploration to an already exploratory experience. (This meta goes out to you, Jace and Will S.) Now, each member of the team has a Skype account, establishing a common medium for communication, and we are learning the basics together. While we contended with screen repaint, we were forced to discuss the products more in depth to make use of the lag time and to give some context for each newly displayed page. This also gave us a chance to discuss the testing process, the collaborative online Quality space, our documentation strategy, and a bit of product history. Oh yeah, and we did some testing.

Since I’m still a newbie, I pretty much expect to feel a bit lost in the woods when it comes to the rest of the company’s product suite. Paired exploratory testing (or ET for the testing aficionados among you) gave me a peek into the Daxko-verse. My fellow testers know the lay of the land and so are better positioned to provide test ideas inspired by the suite’s world as we know it – soon to be rocked by my team’s product! In return, I got to ask the naive questions about what we were looking at, what terminology meant, and how it all fits together. Sometimes, having a second set of eyes isn’t enough. You need someone to ask the dumb questions. Stand back, people, I am a professional at this.

Paired ET fosters the Agile Principles:
1. Continuous Feedback
2. Direct Communication
3. Simplicity
4. Responding to Change
5. Enjoyment

We are still working out how to run the sessions. Does the person on the product team pilot or co-pilot the session? Or do we take this rare opportunity to do some concurrent exploratory testing? How long do we test together? Do we test both products back-to-back or does that just leave us yearning for caffeine and a stretch break? Personally, I am loving this. It’s so much fun to play with the new and novel, and I hope that this livens up the regression routine for my home office folks. If nothing else, it is a great opportunity to geek out about testing methodology and learn a bit about what works in our context.

The best parts:
  • Finding bugs!
  • Communication
  • Knowledge sharing

Can’t wait to get into it again this afternoon.

Addendum: Now that we have completed the initial experiment in the vacuum of ignorance, I am free to research other approaches to paired exploratory testing. I paid particular attention to Agile testing as a new mindset that encourages transferring testing skills to other team members so that the whole team shares responsibility for testing.

Read more from Lisa Crispin, Janet Gregory, Brian Marick, Cem Kaner, and James Bach

Published in Tea-time with Testers

22 Wednesday Jun 2011

Posted by claire in Experiments, Publications, Tea-time With Testers

≈ 1 Comment

Stop the presses

In its June 2011 issue, Tea-time with Testers published an article I wrote as an expansion of my Esprit de Corps post.

Direct link to PDF

Please contact me with any feedback. Thanks!

Image credit

