
aclairefication

~ using my evil powers for good

Category Archives: Context

See me live!

16 Monday Jul 2012

Posted by claire in Agile, CAST 2012, Context, Experiences, Experiments

≈ Leave a Comment

CAST is streaming the keynotes and the Emerging Topics track online again this year.

Last year, I was haunting the interwebs watching, Tweeting, and chatting. This year, I’ll be coming to you live through the magic of technology. (This is the first reason I’ve had to crack open PowerPoint, so it should be entertaining!)

Catch my Emerging Topics talk on agile software testing, Big Visible Testing, at 10 AM PDT today!

Again, here’s the link to watch me:
http://www.ustream.tv/channel/CASTLive

Update: Recording uploaded to YouTube


Creeping CRUD

13 Friday Apr 2012

Posted by claire in Context, Experiences

≈ 1 Comment


I recently tweeted that I was feeling frustrated with the medical billing industry. At that moment, I was particularly bothered by the seemingly endless wait of navigating the phone tree to get to a human being only to be shunted to voicemail. However, miracle of miracles, I did in fact get a return call from a person to discuss a recent explanation of benefits (EOB) I had received from my insurance company.

While my family had benefited from the services of this provider over the course of the last couple of years, we decided to move on and I was looking to settle accounts. I couldn’t help noticing that the EOB included Dates of Service that occurred after we had changed providers. Not wanting to immediately proclaim insurance fraud, I called up the provider to discuss what I imagined to be a data entry mistake.

The helpful billing staff member at the other end of the line fielded my questions calmly and had clearly heard this complaint before. She explained that there had sometimes been data entry mistakes during the intake of a patient. Since recurring billing is scheduled based on this intake date, she had encountered situations where all of the billing they ever submitted for a patient had incorrect Date of Service values. In such a case, she said, she would have to make a note of the error, remember that it had occurred, and continue to submit that note with any problematic records, since she wasn’t permitted to alter the medical record. I don’t think she meant that the software provided the option to enter an explanation:

If possible, explain why the earlier note was incorrect, the reason for the error, and the reason the error was noticed.
— Medical Economics

Rather, I think she had some separate, manual workaround for note-taking related to someone’s file.

Apparently there was an update/edit capability in the software, but she was instructed not to use it due to legal concerns around altering a medical record. Presumably, their software did not differentiate between original and updated values with appropriate timestamps:

With electronic medical records, the computer program must show the dates of the original notes and the dates of any changes or new entries.
— Medical Economics

For the user, the result was effectively having no way to edit data. What a difference that context makes!
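
What would the compliant design look like? Presumably something like an append-only record, where a correction supersedes the original entry instead of overwriting it, and both keep their own timestamps. Here’s a minimal sketch of that idea (entirely hypothetical, not the vendor’s software):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Entry:
    value: str
    recorded_at: datetime
    supersedes: Optional[int] = None  # index of the entry this corrects
    reason: Optional[str] = None      # why the earlier entry was wrong

@dataclass
class AuditedField:
    """Append-only field: corrections supersede; they never overwrite."""
    entries: List[Entry] = field(default_factory=list)

    def record(self, value: str) -> None:
        self.entries.append(Entry(value, datetime.now()))

    def correct(self, value: str, reason: str) -> None:
        # The original entry stays in place with its original timestamp.
        self.entries.append(Entry(value, datetime.now(),
                                  supersedes=len(self.entries) - 1,
                                  reason=reason))

    @property
    def current(self) -> str:
        return self.entries[-1].value
```

With a model like this, "editing" is safe precisely because nothing is ever lost: the original note, the correction, and the dates of both remain visible, which is what the Medical Economics guidance requires.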

However, this was not our problem. She also described a software bug in the system that caused a gradual creep of date ranges. For example, a monthly shipment of supplies would be billed over the course of the 4 weeks after each ship date. After 2 years of shipments, the weekly billing dates had a lag of 11 days, which happened to put them after the date the provider discharged us from their care, making the incorrect billing appear fraudulent.
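
I can’t see the provider’s actual scheduling code, but this class of creep usually comes from computing each billing cycle from the previous computed date instead of from the authoritative anchor (the ship or intake date), so a small per-cycle error compounds. A minimal sketch of the failure mode, with hypothetical dates and a 4-week increment standing in for whatever the real scheduler used (its exact increments, and whether they over- or undershoot a true month, would determine the 11-day figure):

```python
from datetime import date, timedelta

ANCHOR = date(2010, 1, 15)   # hypothetical intake/first ship date
CYCLE = timedelta(weeks=4)   # "a month" approximated as 4 weeks

def chained_schedule(n: int) -> list:
    """Each date computed from the previous one; errors accumulate."""
    dates, d = [], ANCHOR
    for _ in range(n):
        dates.append(d)
        d += CYCLE  # drifts ~2.4 days per cycle vs. real calendar months
    return dates

def anchored_schedule(n: int) -> list:
    """Each date computed from the original anchor; no creep."""
    dates, y, m = [], ANCHOR.year, ANCHOR.month
    for _ in range(n):
        dates.append(date(y, m, ANCHOR.day))
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)
    return dates

drift = anchored_schedule(24)[-1] - chained_schedule(24)[-1]
print(f"Drift after 2 years: {drift.days} days")  # ~55 days in this toy version
```

The fix is conceptually simple (always schedule from the anchor), which makes it all the more frustrating that the staff had to absorb the error by hand instead.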

Despite having full CRUD functionality that would allow correction of bad data entry, the staff would not correct records; on top of that, they had to live with the pain of a creeping inaccuracy. The resulting friction was enough that the billing staff member had a canned answer ready for my concerns, one she had clearly given many times to other patients and their families. What a knot of confusion for the staff to untangle over and over again.

I can only hope that my new medical provider has a different medical record software provider and that the creeping crud doesn’t prove to be highly contagious.


A House Divided

12 Thursday Apr 2012

Posted by claire in Context, Experiences

≈ Leave a Comment

Amenities

The recent release of The Hunger Games brought several fanboy and fangirl friends together to attend a performance on opening weekend. For good or ill, I ended up purchasing the tickets for the group. I made it to the box office days in advance, and the transaction was happily mundane and successful.

The day before the show, one of the friends decided to bring a plus one along for the fun. However, she was very concerned that she couldn’t purchase a ticket online and didn’t want to cancel the date due to technical difficulties. She asked whether the theatre could add a ticket to my transaction and I agreed to look into it.

Give me something I can work with

Visiting the theatre’s website, I confirmed that the online information made the date seem doubtful: it showed only 2 performances at that time, one of them sold out. When I tried the performance that wasn’t sold out, I noticed that the list of available tickets was rather short and included only reserved seating types. I hesitated to buy a single reserved ticket since the rest of us had general admission tickets.

(For those of you outside the U.S., we don’t have many movie theatres with reserved, or assigned, seating here. Almost every showing I’ve ever attended has been general admission. I only recently started patronizing a theatre that provided reserved seating – for a premium. For my American readers, one of the considerations for movie theatre software sold to international chains is the need to provide support for reserved seating as well as intermission.)

May the odds be ever in your favor

I tried calling the theatre to no avail, so I resolved to head over there after work to see about that ticket. When I spoke to the cashier, she explained that what had seemed like 2 separate performances online were really just 1 showtime.

Some programmer’s technical solution to the split house for a single performance came through to the web interface in a confusing form. In my experience, an auditorium, or house, had always been either general admission or reserved seating. And, although I tested movie theatre software for over 5 years, I had not encountered this feature request: splitting a single auditorium into 2 classes of ticketing.

Fortunately for my friend’s date, movie theatre software has a sold-out threshold greater than zero, allowing for eventualities like broken seats, roof leaks, or other unexpected customer service issues. Knowing that, I confidently requested another ticket and easily obtained it. Granted, we ended up sitting in the front row, craning our necks a bit as the pack of tween girls next to us excitedly discussed the movie play-by-play, but for once my testing savvy turned up a solution rather than a problem, averting a star-crossed-lovers situation.
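
That sold-out threshold is easy to picture. Here is a hypothetical check (I don’t know how any particular ticketing vendor implements it, and the numbers are invented):

```python
# Hypothetical: a 250-seat house with a 5-seat holdback.
CAPACITY = 250
HOLDBACK = 5  # cushion for broken seats, leaks, and other service recovery

def can_sell(tickets_sold: int, requested: int = 1) -> bool:
    """Report 'sold out' while a few unsold seats still remain."""
    return tickets_sold + requested <= CAPACITY - HOLDBACK
```

In this version the show reads as sold out at 245 tickets, leaving the box office exactly the slack that made my walk-up request possible.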

Testing Bliss

03 Tuesday Apr 2012

Posted by claire in #testchat, Context, Experiences, Experiments, Soft Skills, Techwell

≈ 5 Comments

It’s no secret: I adore testing software. It’s my weapon of choice, despite having happened upon it by chance many moons ago. (What other career transforms forgetfulness and clumsiness into strengths since they result in unexpected, non-happy path usage? Ultimately, I think it’s the variety that keeps me coming back for more on a daily basis.)

Given my feelings about testing, it came as no surprise to me that others would agree and rate this profession highly, whether on CareerBliss or elsewhere, as reported by Forbes. (I’ll also admit to having been a bit of an I/O Psych nerd back in the day, so this survey appeals to me in various ways.) I can’t seem to leave my curiosity at the door, so I had to go see for myself what questions were used as the basis of this data. (Yes, HR folks, that’s my story and I’m sticking to it.)

With categories like Company Culture, Work-Life Balance, The Place You Work, The People You Work For, The People You Work With, It’s Party Time!, Work Freedom, and Growth Opportunities, it almost felt like attending a company meeting at my current employer. (Did I mention we’re hiring a developer for my team?)

I was curious to see whether other testers had the same reaction to the questions used to generate the data that CareerBliss analyzed, so I culled 5 questions of at most 140 characters each, designed to find out.

  • Q1) Which people at work most affect your happiness: co-workers, boss, CEO?
  • Q2) How does the level of challenge in your work influence your feelings about your testing job?
  • Q3) Is there a job-provided perk/reward/tool that keeps you happy as a tester?
  • Q4) As a tester, do you have a good balance of freedom and growth?
  • Q5) How does the support at work make testing a great career?

Check out the storify-ed version of our #testchat on Twitter.

Not everyone has the same experience of software testing and my experience has certainly changed over time. I wanted to take a moment to consider the various aspects of software testing that the article identified:

  • requirements gathering – been there, done that both before and after implementation
  • documentation – frequent contributor, sometimes sole author
  • source code control – only for my automation code, but I didn’t set it up myself
  • code review – if you consider pairing with a developer on code during a sprint, then I’ve tried it and with some success
  • change management – not so much, though we did have a composition book in the testing lab to log all hardware changes to a system I worked on; sometimes it was more like a log of who I should hunt down to get the hardware back…
  • release management – the closest I get to this is being able to deploy to my cloud test environment and boy am I happy about that
  • actual testing of the software – bread and butter for me

I love having been involved in the entire software development process at various times during my career. (I’ve even prototyped some UI ideas, though I wouldn’t call that an area of strength or concentration. Glad to have those UXers on board these days!) I do feel that I’m an integral part of the job being done at the company. I am quite happy that my job involves frequently working with people.

However, I do take issue with this being presented as a positive aspect of the job:

software quality assurance engineers feel rewarded at work, as they are typically the last stop before software goes live

Doesn’t that smack of Gatekeepers to Quality to you? I don’t ever want to set up an adversarial relationship with my developers that says I need to defend the users against their disregard, and I don’t want to be involved only at the end as a last stop before kicking a product out the door. I know that happens at times but it’s not my preference. Positive personal interactions and preventative measures certainly contribute to my testing bliss.

Take the survey yourself at CareerBliss and let me know how your experience compares!

I’ll be analyzing the tagged responses from Twitter over on Techwell soon!

Here is some related reading that has come up in recent days:

Q3) Is there a job-provided perk/reward/tool that keeps you happy as a tester?

Jon Bach on tools for testing

Ajay Balamurugadas on tools for testing

Q5) How does the support at work make testing a great career?

Horizontal careers: “each of us will need to overcome our personal assumptions about moving up the career ladder, and think more about how we add value across.”

Scott Barber disagrees


Yo dawg, I herd you like ET

19 Monday Mar 2012

Posted by claire in Context, Experiences, Experiments, Hackathon, Retrospective, Testing Humor

≈ 1 Comment

I wrote out my Lab Days experience recently but didn’t get to bring you down the rabbit hole with me to experience the recursive testing goodness.

My project for Lab Days was an enhanced logging tool, but the logging is the heart of the matter, with users putting it through its paces much more stringently than the analysis functionality.

Since I usually do exploratory testing of applications at the day job and the time pressure of Lab Days left little room for formal test cases anyway, I decided to try out a new exploratory testing session logger: Rapid Reporter.

I didn’t have a lot of time to devote to learning Rapid Reporter, so I didn’t bother reading any documentation or preparing myself for how it worked, essentially exploratory testing my exploratory testing tool while exploratory testing my application under test.

It turns out this kind of recursive testing experience was just what I needed to liven things up a bit, all in the spirit of trying something new! I discovered that rapidly learning about a session logger while testing/learning a session logger, pulling log entries from an original session log, and reporting bugs via a session/chat room (HipChat) made for some perilous context-switching. More than once during the day, I had to stop what I was doing just to get my bearings.

I clearly enjoyed the experimentation because I decided to repeat the experience, though with a little less context-switching, when we upgraded our usual ET tool: Bonfire. The funniest thing about using Bonfire after working on my Lab Days project was realizing that tags were available for log entries, but the tagging indicators weren’t the same as those we had chosen for our usability testing tool. I kept trying to use the tagging that I’d been testing all week and had to retrain myself, improving their documentation as a result of my questioning.

Still, seeing how another logging tool uses tags gave me some functionality to consider for our usability logger: how would users want to interact with tagged log entries? Clearly time to circle back with my UX designer to discuss some enhancements!


The status is not quo

09 Friday Mar 2012

Posted by claire in Context, Experiences, Experiments, Hackathon, Retrospective, Tester Merit Badges

≈ 3 Comments

Dr. Horrible http://drhorrible.com/

We tend to run “FedEx” with a fairly open format where you can do whatever you want as long as you can somehow relate it to our products.
– Atlassian

Last week, my company gave us an exciting opportunity: 5 days of work on a project related to our business.

Apparently, they’ve done something like this before, long before my time, so you’d have to ask some of the more tenured folks at Daxko about it.

I worked with the same folks who volunteered with me at the WebVisions Hackathon earlier this year and we kept in mind what my colleague Will said about that experience: “The short time box and no feature constraints necessitated a laser-sharp focus on one thing.”

So we noodled over several viable candidates and finally settled on building a better mousetrap – or, in this case, UsabLog.

A clarification on terminology from my UX colleague:
“Logging” in this context doesn’t mean “system logging of events.” It means human capture of what the user said, what the user did in the app (e.g., where user clicked), and any additional comments to provide context. The point of logging is to provide us with a record of what went down so we have an accurate recollection for later analysis.

I had the good fortune to be a user of the original UsabLog application over the course of many usability sessions as a session logger, so I was rather familiar with its strengths and weaknesses. I was able to contribute some bug reports and feature suggestions for consideration during our lunchtime planning discussions, but my Scrum team’s UX designer was our team’s sponsor. She compiled an experiment plan that identified our purpose and detailed the problems we saw in the pre-existing UsabLog and the opportunities we had to satisfy those needs.

Our usability sessions up to this point involved an interview led by the facilitator (i.e. the UX designer) and logged by another team member (e.g. me) via the free, open-source web application UsabLog, which exported logs to CSV for use in a program such as Excel, which we in turn manually fed into a mindmap program such as FreeMind. While this process did work for us, the export and manual copy-paste steps were rather tedious and laborious; as she put it, eliminating them "would directly contribute to user research process efficiencies." We knew there could be a better way.
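
For a sense of what that manual step involved, the export-and-import could in principle have been scripted. Here is a rough sketch (the CSV column names are my guesses, and the .mm structure is simply FreeMind’s nested-node XML):

```python
import csv
from xml.sax.saxutils import quoteattr

def csv_to_freemind(csv_path: str, mm_path: str) -> None:
    """Group exported log entries by tag into a FreeMind (.mm) mindmap."""
    by_tag = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes 'tag' and 'comment' columns
            by_tag.setdefault(row["tag"], []).append(row["comment"])

    with open(mm_path, "w") as out:
        out.write('<map version="1.0.1">\n<node TEXT="Usability study">\n')
        for tag, comments in by_tag.items():
            out.write(f"<node TEXT={quoteattr(tag)}>\n")
            for comment in comments:
                out.write(f"<node TEXT={quoteattr(comment)}/>\n")
            out.write("</node>\n")
        out.write("</node>\n</map>\n")
```

Even scripted, though, it remains an extra hop between tools, which is why building the analysis into the logger itself was so appealing.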

Goals of the experiment:

  • Rapidly capture rich user feedback during research interviews and usability tests through logging of user events and comments
  • Organize logs from multiple sessions into one study for ease of access and visibility
  • Use log entries to synthesize findings
  • Quickly jump to a spot in the session’s video by clicking on the associated log entry

In particular, we wanted these features:

  • Multi-session logging.
  • Log entries are timestamped when the logger starts typing, for video synchronization (see the sketch after this list).
  • Custom tags.
  • Multi-logger logging.
  • One tool for logging and post-session analysis.
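
The video-synchronization bullet deserves a word: the timestamp has to be captured at the first keystroke rather than on submit, because the logger starts typing at the moment the interesting thing happens and finishes the sentence afterward. A minimal sketch of that idea (the names and structure are mine, not the actual code we wrote):

```python
from datetime import datetime
from typing import List, Optional

class LogEntry:
    """One observation in a usability session log."""

    def __init__(self, session_start: datetime):
        self._session_start = session_start
        self.started_at: Optional[datetime] = None  # set on FIRST keystroke
        self.text = ""
        self.tags: List[str] = []

    def on_keystroke(self, char: str) -> None:
        # Capture the timestamp when typing begins, not when the entry
        # is submitted; the logger reacts as the event happens.
        if self.started_at is None:
            self.started_at = datetime.now()
        self.text += char

    def video_offset_seconds(self) -> float:
        """Where to seek in the session recording for this entry."""
        return (self.started_at - self._session_start).total_seconds()
```

Clicking a log entry then just seeks the recording to video_offset_seconds(), which is the "quickly jump to a spot in the session’s video" goal above.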

We established a definition of done and recognized our dependencies since any impediments would have serious impact on our progress during the limited time of the competition.

I would love to tell you that we were entirely successful in meeting our goals and implementing all of our features, and then going on to take first prize in the competition. Alas, this was not to be. We only accomplished some of our goals and features and awesome projects from other teams placed above us.

However, the experiment was a roaring success in many ways:

  • I had first-hand experience with paired UX design under the tutelage of my UX designer colleague. She suggested that I man the helm and she steered me back on course when I went astray. I won’t claim that my first UI mockups were beauties, but the process and conversation certainly were.
  • I made my first commit to a GitHub open-source repository, thereby qualifying for the Open Source Nerd Merit Badge (which happens to feature the GitHub mascot, Octocat), something I had been hankering to do ever since I discovered its existence. This was also the first time I fixed a bug in the source code, so even though my changes were minor it was thrilling.
  • I did exploratory testing based on GitHub commit notifications in the HipChat chat room we used for the team. Rather than pursuing session-based test management, I tried a looser structure based around the latest and greatest changes instead of setting charters and time-boxing exploration around a stated goal.
  • Real-time bug reporting of issues found during exploratory testing via HipChat messages and screenshot attachments was new and interesting. This is the lowest overhead asynchronous bug management approach I’ve tried and it was effective. Granted, we didn’t come out with a backlog of known issues written down somewhere, but we rectified the most critical problems before they had a chance to fester.
  • We didn’t let a little thing like heading home for the day stop us from collaborating remotely when we got back to business after hours. Being able to work at odd hours put some of my insomnia to good use. I also learned a bit about .NET and model/view/controller architecture, which turned out to be good preparation for the following – and last – day.
  • When one of our programmer teammates fell ill, I paired with our remaining developer to push on toward the goal. Although I think I spent more time asking questions to help think through the implementation than actually contributing code, it was a fruitful day, wrapping up an important feature a mere 30 minutes before the Big Reveal.
  • I used the resulting product to real-time log the presentations during the Big Reveal. Oh so meta, but also hopefully illustrative of the capabilities of the application for future use. If nothing else, it gave our sick friend a way to catch up on the excitement as he recovered over the weekend.
  • We accomplished only some of our goals and features but they were the most essential. Our product is usable as-is, though with some known bugs that do not inhibit happy-path use.
  • Why do they call it FedEx days? Because you have to ship! Our resulting application is ready for use – or enhancement if you’re feeling ambitious!
  • And last, but certainly not least, victory lunch! Nothing so sweet as celebrating effective teamwork.


Of Paths and Cycles

06 Tuesday Mar 2012

Posted by claire in Context, Experiences, Experiments

≈ 1 Comment


I joined the YMCA last summer and have been trying out different ways of being more active. (My active lifestyle resolution wasn’t just aimed at professional development.) Recently, I came to the conclusion that having more structure works for me in learning testing, so it might be helpful for my fitness progress as well.

To that end, I made a Coach Approach appointment and met with my coach to learn how the program works. She and I talked for nearly an hour about what my goals would be (set up along the SMART guidelines) and how I could work my current interests into a structured plan. She suggested that I try out some of the exercise equipment that they have and talked over different ways to cope with the boredom that creeps in and deters people from continuing their progress.

One of the machines she suggested for me was an exercise bike with a computerized screen. Today, I decided to give it a go. I dressed out, filled my water bottle, and found an available bike. Since I’m interested in taking up cycling, this seemed like a nice way to build up my stamina until the weather warms a bit. I plugged in my headphones and turned my attention to the login menu on the screen. Realizing this was not a touchscreen application, I observed that there were a variety of buttons for interacting with the system.

Since I wasn’t sure I was going to stick with this workout method, I selected the guest login just to try out the system. I selected a beginner course, put my hands on the handlebars, and began pumping the pedals. I immediately noticed that the handlebars and foot pedals provided information to the system, as did the buttons on the panel below the screen. I found some good music to keep my ears busy and started observing the software.

Happy Path

Normally, I bemoan working out on exercise equipment as “the race to nowhere,” finding myself immediately unsatisfied with the experience of running in place and staring straight ahead in a gym environment. However, having an application to test while burning calories certainly was a welcome change. I don’t think my coach realized just how easily I could avoid boredom with some software to occupy my attention!

I tried a couple of different paths, or courses as they called them, each with a different scenery motif and points of visual interest. I was amused to discover that steering with the handlebars was entirely unnecessary since the program forced me to stay on the path and stopped displaying any virtual cyclist I ran down. At first, I was a bit disconcerted when virtual cyclists would pass through me from behind and appear to pop out of my chest. Backpedaling served only as an indication that I wasn’t moving forward, as though I had stopped pedaling completely, and so didn’t help me to put more space between my virtual handlebars and the virtual chest-burster cyclists. I thought one of these virtual cyclists represented the “pacer” that appeared on my progress bar, but I eventually figured out that the pacer didn’t have a manifestation on the course, only in the ride-in-progress statistics reporting areas.

Push it real good

However, I noticed some issues during my first ride:

  • Objects in the scenery were drawn with perspective and would update with a jerk when they entered the middle of the field of vision.
  • A bush on the edge of the path happened to overhang the path enough that my virtual handlebars passed through it.
  • A virtual cyclist was stranded on the side of the path oriented sideways rather than in the direction of travel, as all of the other virtual cyclists were.
  • Another representation of a rider (ghost?*) appeared on the path oriented sideways but didn’t seem to be animated.
  • After I completed the ride, the screen showed my ride’s statistics as a modal dialog, but I could see that the heart rate monitor, RPM, speed, and ride timer were still updating on the disabled screen behind it.
  • One of the post-ride statistics was the local facility’s leaderboard for that course, and although my time ranked higher than the last person on the board, it was not displayed.

*I wasn’t clear on what the system meant by a ghost rider who could appear on the course, so this may have been correct software behavior.

Integration, schmintegration

After a trip home and a well-earned shower, I settled in with my laptop to check out the website that interfaces with the on-site system. The site proclaimed that their system engages your mind and I certainly found that to be true, perhaps in a way they hadn’t anticipated.

Although I had created an account through the login screen of the exercise bike, the website prompted me to complete my profile online before I could access the information. Though I usually think of this as an annoyance, the few required fields and a humorous selection of security questions made it a pleasant experience.

The News informed me that I could share my ghost through Facebook or Twitter, though I still had little idea of how that would be used, having not seen it in action. I declined to use the social media hook, deferring it until I have an opportunity for more investigation. I was happy to see that my first workout records and awards were available online, though I didn’t “post a ghost” through email or printable QR code. When I found the Ghost Selection options, I could see that a ghost was something like a pacer but more personalized or specific.

I noticed several issues online:

  • I was hopeful that the online system would show my ranking since the on-site exercise bike had not, but both the global leader boards and boards for my fitness facility omitted me.
  • The first attempt to view leaderboards for my fitness facility showed data from a location in some other state, although a subsequent refresh seemed to correct the problem.
  • I also encountered the server’s technical difficulties page.
  • Some header graphics failed to display, though sometimes page refresh corrected this.
  • Leaderboard page breadcrumbs did not always correspond to the displayed page (e.g. inaccurate, current page omitted).
  • Firebug showed me at least one typo in the JavaScript that caused an error, and one page’s source included comments in the code, which I have read is discouraged.

Game on

Although I was happy to know that my workout data was preserved and available online in some form, the leaderboards could use some work. While the software product team may not have been concerned with real-time updates to leaderboards, as a first-time user I really wanted to see how my performance stacked up against the more seasoned players, which is an important part of the gamification angle that this product leverages to defeat boredom and keep users involved in exercise. I’ll certainly try this system again and hope that I can ride out both the bugs and the boredom.


Composition

25 Thursday Aug 2011

Posted by claire in CAST 2011, Context, Soft Skills

≈ Leave a Comment


Programmers are an obvious choice for members of a software team. However, various points of view attribute different value to the other potential roles.

Matt Heusser’s "How to Speak to an Agilista (if you absolutely must)" Lightning Talk from CAST 2011 referred to Agilista programmers who rejected the notion that testers are necessary. Matt elaborates that extreme programming began back in 1999, when testing as a field was not as mature, so these developers spoke about only two primary roles, customer and programmer, making the allowance that the "team may include testers, who help the Customer define the customer acceptance tests. … The best teams have no specialists, only general contributors with special skills." In general, Agile approaches teams as composed of members who "normally take responsibility for tasks that deliver the functionality an iteration requires. They decide individually how to meet an iteration’s requirements." The Agile variation Scrum suggests teams of "people with cross-functional skills who do the actual work (analyse, design, develop, test, technical communication, document, etc.). It is recommended that the Team be self-organizing and self-led, but often work with some form of project or team management." At the time, this prevailing view that the whole team should own quality diminished the significance of a distinct testing role.

Now that testing has grown, Matt encourages us to revisit this relationship and find ways to engage world-changers and their methodologies to achieve a better result. Privately questioning the value of the activities that testers contribute is more constructive than head-on public confrontation. One source of the problem is that testers need to acknowledge Agile programmer awesomeness and their specific skills. This produces a more favorable environment for discussing the need for testing, which Agilists do see, though they want to automate it.

Matt advocates observing what works in cross-functional project teams to determine patterns that work in a particular context, engaging our critical thinking skills to develop processes that support better testing. As a process naturalist, Matt reminds us that "[software teams in the wild] may not fit a particular ideology well, but they are important, because by noticing them we can design a software system to have less loss and more self-correction."

Having not experienced this rejection by Agile zealots for myself despite working only in Agile-friendly software shops, I have struggled with commenting on his suggestions. I decided to dig a bit deeper to find some classic perspectives on roles within software teams so that I can put the XP and Agile assertions into perspective.

One such analogy for the software team is Harlan Mills’ Chief Programmer Team as “a surgical team … one does the cutting and the others give him every support that will enhance his effectiveness and productivity.” In this model, “Few minds are involved in design and construction, yet many hands are brought to bear. … the system is the product of one mind – or at most two, acting uno animo,” referring to the surgeon and copilot roles, which are respectively the primary and secondary programmer. However, this model recognizes a tester as necessary, stating “the specialization of the remainder of the team is the key to its efficiency, for it permits a radically simpler communication pattern among the members,” referring to centralized communication between the surgeon and all other team members, including the tester, administrator, editor, secretaries, program clerk, toolsmith, and language lawyer for a team size of up to ten. This large support staff presupposes that “the Chief Programmer has to be more productive than everyone else on the team put together … in the rare case in which you have a near genius on your staff–one that is dramatically more productive than the average programmer on your staff.”

Acknowledging the earlier paradigm of the CPT, the Pragmatic Programmers assert another well-known approach to team composition: "The [project’s] technical head sets the development philosophy and style, assigns responsibilities to teams, and arbitrates the inevitable ‘discussions’ between people. The technical head also looks constantly at the bigger picture, trying to find any unnecessary commonality between teams that could reduce the orthogonality of the overall effort. The administrative head, or project manager, schedules the resources that the teams need, monitors and reports on progress, and helps decide priorities in terms of business needs. The administrative head might also act as a team’s ambassador when communicating with the outside world." While these authors do not directly address the role of tester, they state that "Most developers hate testing. They tend to test gently, subconsciously knowing where the code will break and avoiding the weak spots. Pragmatic Programmers are different. We are driven to find our bugs now, so we don’t have to endure the shame of others finding our bugs later." In contrast, traditional waterfall software teams have individuals "assigned roles based on their job function. You’ll find business analysts, architects, designers, programmers, testers, documenters, and the like. There is an implicit hierarchy here – the closer to the user you’re allowed, the more senior you are." This juxtaposition sets up a competitive relationship between the roles rather than seeing them as striving toward the same goal.

A healthier model of cross-functional teams communicates that all of the necessary skills "are not easily found combined in one person: Test Know How, UX/Design CSS, Programming, and Product Development / Management." This view advocates reducing communication overhead by involving all of the relevant perspectives within the team environment rather than segregating them by job function. Here, a tester role "works with product manager to determine acceptance tests, writes automatic acceptance tests, [executes] exploratory testing, helps developers with their tests, and keeps the testing focus."

Finally, we have approached my favorite description of a collaborative software team as found in Peopleware: “the musical ensemble would have been a happier metaphor for what we are trying to do in well-jelled work groups” since “a choir or glee club makes an almost perfect linkage between the success or failure of the individual and that of the group. (You’ll never have people congratulating you on singing your part perfectly while the choir as a whole sings off-key.)” When we think of our team as producing music together, we see that this group composed of disparate parts must together be responsible for the quality of the result, allowing for a separate testing role but not reserving a testing perspective to that one individual. All team members must pursue a quality outcome, rather than only the customer and the programmer, as Agile purists would have it. One aspect of this committed team is willingness to contend respectfully with one another, for we would readily ignore others whose perspectives had no value. Yet, when we see that all of our striving contributes to the good of the whole, the struggle toward understanding and consensus encourages us to embrace even the brief discomfort of disagreement.


Spare the Rod

10 Wednesday Aug 2011

Posted by claire in CAST 2011, Context, Metrics, Training

≈ 4 Comments


Paul Holland’s interstitial Lightning Talk at CAST 2011 was a combination of gripe session, comic relief, and metrics wisdom. The audience in the Emerging Topics track proffered various metrics from their own testing careers for the assembled testers to informally evaluate.

Although I attended CAST remotely via the UStream link, I live-tweeted the Emerging Topics track sessions and was able to contribute my own metric for inclusion in the following list, thanks to the person monitoring Twitter for @AST_News:

  • number of bugs estimated to be found next week
  • ratio of bugs in production vs. number of releases
  • number of test cases onshore vs. offshore
  • percent of automated test cases
  • number of defects not linked to a test case
  • total number of test cases per feature
  • number of bug reports per tester
  • code coverage
  • path coverage
  • requirements coverage
  • time to reproduce bugs found in the field
  • number of people testing
  • equipment usage
  • percentage of pass/fail tests
  • number of open bugs
  • amount of money spent
  • number of test steps
  • number of hours testing
  • number of test cases executed
  • number of bugs found
  • number of important bugs
  • number of bugs found in the field
  • number of showstoppers
  • critical bugs per tester as proportion of time spent testing

“Counting test cases is stupid … in every context I have come across” – Paul Holland

Paul mentioned that per-tester or per-feature metrics create animosity among testers on the same team or within the same organization. When confronted with a metric, I ask myself, "What would I do to optimize this measure?" If the metric motivates behavior that is counter-productive (e.g. intrateam competition) or misleading (i.e. measuring something irrelevant), then it has no value because it does not contribute to the goal of user value. Bad metrics lead to people in positions of power saying, "That’s not the behavior I was looking for!" To be valid, a metric must improve the way you test.

In one salient example, exceeding the number of showstopper bugs permitted in a release invokes stopping or exit criteria, halting the release process. Often, this number is an arbitrary selection that was made long before, perhaps by someone who may no longer be on staff, as Paul pointed out, and yet it prevents the greater goal of shipping the product. Would one critical bug above the limit warrant arresting a rollout months in the making?

Paul’s argument against these metrics resonated with my own experience and with the insight I gathered from attending Pat O’Toole’s Metrics that Motivate Behavior! [pdf] webinar back in June of this year:

“A good measurement system is not just a set of fancy tools that generate spiffy charts and reports. It should motivate a way of thinking and, more importantly, a way of behaving. It is also the basis of predicting and heightening the probability of achieving desired results, often by first predicting undesirable results thereby motivating actions to change predicted outcomes.”

Pat’s example of a metric that had no historical value and that instead focused completely on behavior modification introduced me to a different way of thinking about measurement. Do we care about the historical performance of a metric or do we care more about the behavior that metric motivates?

Another point of departure from today’s discussion is Pat’s prioritizing behavior over thinking. I think the context-driven people who spoke in the keynotes and in the Emerging Topics sessions would take issue with that.

Whoever spares the rod hates the child, / but whoever loves will apply discipline. – Proverbs 13:24, New American Bible, Revised Edition (NABRE)

My experience with metrics tells me that numbers accumulated over time are not necessarily evaluated at a high level but are more likely used as the basis for judging individual performance, becoming a rod of discipline rather than the protective rod of a shepherd defending his flock.

Paul did offer some suggestions for bringing metrics back to their productive role:

  • valid coverage metric that is not counting test cases
  • number of bugs found/open
  • expected coverage = progress vs. coverage

He also reinforced the perspective that the metric “100% of test cases that should be automated are automated” is acceptable as long as the overall percentage automated is low.

Metrics have recently become a particular interest of mine, but I have so much to learn about testing software that I do not expect to specialize in this topic. I welcome any suggestions for sources on the topic of helpful metrics in software testing.


I do not think it means what you think it means

05 Friday Aug 2011

Posted by claire in Context

≈ Leave a Comment

When ubiquitous language isn’t


Definition:
ubiquitous = [Latin ubique everywhere; Latin ubi where] present, appearing, existing or being everywhere, especially at the same time; omnipresent; constantly encountered, widespread

For example, the passage of time is constantly encountered and occurring everywhere. We measure time in different increments, such as a year.

“What day is it? What year?” – Terminator Salvation movie

How do we define the term “year”?

1. Calendar year?
The Gregorian calendar is only one of many that have been used over time.
“There are only 14 different calendars when Easter Sunday is not involved. Each calendar is determined by the day of the week January 1 falls on and whether or not the year is a leap year. However, when Easter Sunday is included, there are 70 different calendars (two for each date of Easter).” – Wikipedia article

2. Fiscal year = financial year = budget year
This is a period used for calculating annual (“yearly”) financial statements in businesses and other organizations that “fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year” – Wikipedia article

3. Astronomical year numbering system = proleptic Gregorian calendar
This standard includes the year “0” and eliminates the need for any prefixes or suffixes by attributing the arithmetic sign to the date. This definition is used by MySQL, NASA, and non-Maya historians.

4. Billing year
Companies often expect you to sign a contract for service that may encompass the period of a year (e.g. signing a 2-year cell phone contract).

5. Year of your life/age
Count starts at zero and increments on your birthday.

6. Years of working for a company
Count starts at zero and increments on the anniversary of your hire date. This definition is often used to award benefits based on longevity (e.g. more vacation after “leveling up” having completed a given number of work years).

7. Religious year
For example, the Roman Catholic Church starts its liturgical year with the four weeks of Advent that precede Christmas. Other religious calendars include Julian, Revised Julian, Hebrew (a.k.a. Jewish), Islamic (a.k.a. Muslim, Hijri), Hindu, Buddhist, Bahá’í

8. National year
Some countries use nation-based calendars for internally organizing time (e.g. Chinese, Indian, Iranian/Persian, Ethiopian, Thai solar)
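
To make the ambiguity concrete, here is a small sketch of four of these "year" computations disagreeing about the same date (the hire and birth dates are invented, and the fiscal-year line assumes the US federal October 1 convention):

```python
from datetime import date

hired = date(2008, 9, 1)
born = date(1980, 3, 10)
today = date(2011, 8, 5)

# Calendar years, by simple subtraction:
calendar_years = today.year - hired.year                                   # 3

# Years of service: increments only once the anniversary has passed:
service_years = (today.year - hired.year
                 - ((today.month, today.day) < (hired.month, hired.day)))  # 2

# Age: the same anniversary rule, different anchor:
age = (today.year - born.year
       - ((today.month, today.day) < (born.month, born.day)))              # 31

# US federal fiscal year: FY2012 begins 2011-10-01, so today is still FY2011:
fiscal_year = today.year + (today.month >= 10)                             # 2011
```

Same calendar date, four defensible answers. That is exactly the trap a shared vocabulary has to guard against.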

When we cannot even speak clearly about a familiar term like “year,” it should be no surprise that we have difficulty communicating on computing projects with cross-functional teams composed of individuals with different professional backgrounds. Each of us masters the jargon of our field as we encounter it, leaving us with gaping holes in our knowledge about domain-specific concepts that are essential when implementing a project.

Ubiquitous Language is “a language structured around the domain model and used by all team members to connect all the activities of the team with the software.”
– Domain-Driven Design by Eric Evans

In order to succeed in designing and implementing good software, we must be willing to revisit our assumptions about terminology, avoiding the “inconceivable” situation in which two team members in a discussion are using the same word to represent different ideas. In practical terms, that means asking dumb questions like “What do you mean by that?” even when the answer appears to be obvious, or we risk replaying the old story of the blind men and an elephant.

Once we have a firm grounding in a context-specific set of words to use when speaking about the work, we can proceed, knowing that we will nonetheless find ourselves in the same position of confusion later in the project as we iterate through modeling parts of the system again and again. Thus, we must remain vigilant for statements with multiple interpretations. In addition, Evans reminds us that a domain expert must understand this ubiquitous language so that it guides us to design a result that ultimately satisfies a business need.

Testers must consciously use the agreed upon expressions in our test ideas, test plans, test cases, and any other record of testing, whether planned or executed. Consistent usage is key in both explaining the testing approach to a new recruit and in maintaining the history of the project, including the testing effort.

