
aclairefication

~ using my evil powers for good

Category Archives: Experiments

Big Visible Testing

07 Tuesday Aug 2012

Posted by claire in Agile, Approaches, CAST 2012, Context, Experiences, Experiments, Publications

≈ 2 Comments

Presented as an Emerging Topic at CAST 2012

This was my talk proposal:

I have always thought of myself as an agile tester. After all, my development teams have always delivered features in 2-week sprints. My testing activities included reviewing requirements or stories before the planning meetings to assemble a list of questions and test ideas that I would use to approach the proposed work. I participated in a review before code completion that allowed for some exploratory testing, brief and informal though it may have been at times. In the past couple of years, I also planned and coded test automation.

However, over the past year, I have been transforming from a pseudo-agile tester to a true agile tester. Rather than sitting apart from the software developers in my own quality engineering department, I am now seated in the same room as the other employees from a mix of disciplines who are on my product delivery team. Rather than testing in a silo, I have been gradually increasing the visibility of testing activities through exploratory test charter management, defect backlog organization, and paired exploratory testing with both testers and non-testers. The feedback loops have shortened, and the abbreviated time between activities has necessitated adjusting how I provide information.

Testers are in the information business. We take the interests and concerns of the business as communicated through the product owner – or in my case the product owner team – and combine those with the needs of the customer as expressed in the story, and further augment those with our experience using and analyzing software for deficiencies, aberrations, and oddities. We draw upon a variety of resources, including the experience and perspectives of fellow testers, heuristics, and product history, to approach the goal of delivering a product the customer values, focusing especially on the quality aspects of that value.

Now that the audience for my testing comprises a mix of disciplines and the work environment has shifted from a heavier process to transparent, quick information access, I have been experimenting with different ways to execute testing and to represent the outcomes of that testing activity so that the information consumers understand it in ways that best suit each of their perspectives.

In my brief presentation, we will examine 3 different agile team member personas and their implications for presenting and maintaining testing information as well as the inherent tensions between their distinct and various needs. I will trace my learning curve of adjusting to their needs through the various experiments I have completed in this context, though these lessons extend beyond a purely cross-functional agile product development team.

Other testers will come away with a fresh perspective on their product team members and a renewed focus on the value testing artifacts provide to a software development team.

Big Visible Testing from Claire Moss

See me live!

16 Monday Jul 2012

Posted by claire in Agile, CAST 2012, Context, Experiences, Experiments

≈ Leave a Comment

CAST is streaming the keynotes and the Emerging Topics track online again this year.

Last year, I was haunting the interwebs watching, Tweeting, and chatting. This year, I’ll be coming to you live through the magic of technology. (This is the first reason I’ve had to crack open PowerPoint so it should be entertaining!)

Catch my agile software testing emerging topic talk Big Visible Testing at 10 AM PDT today!

Again, here’s the link to watch me:
http://www.ustream.tv/channel/CASTLive

Update: Recording uploaded to YouTube


Ruby Anniversary

14 Saturday Jul 2012

Posted by claire in Automation, Experiences, Experiments

≈ Leave a Comment

A year ago this week, I first picked up Brian Marick‘s Everyday Scripting with Ruby and gave it a whirl. This week saw me branching out past automating web application checking to creating my own Ruby application.

I haven’t been a code monkey in many a moon, but my company’s recent innovation week gave me the perfect opportunity to try an alternate role. I anticipated either glorious success or failure – and as a tester I’m seasoned at failing gloriously so I was set to be happy whatever the results.

This time around, anyone in the company could propose an idea, perhaps even customer requests of suitable scope. The projects had to be related to our core business but the Lab Days crew encouraged us to try new teams or even new team roles.

Getting to hear and understand ideas from our team and our customers. Small things that are annoying, new things that will open the door for a myriad of other possibilities and things I never would have thought would complement our existing solutions are all displayed during Lab Days. — Concetta

Since I had only a week of development time available, I chose a project with limited scope: a small utility that would detect information in the data warehouse and email an interested party about it. I planned to develop the simplest thing that could possibly work, which was important when working as a team of one. I did not have any software developers to help me with structuring the program, though I did describe my plan to my regular product team folks, who also estimated that it would be achievable.

Test-driven development was one of my goals for this project and this was my first time trying it as a programmer. Granted, I have been writing RSpec stubs for my web automation for a year, but I didn’t have to build all of that from scratch on my own. This time around, I began as usual for a story in our Scrum sprints by writing out the tests that I knew would indicate success before delving into the code mines.
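To illustrate that test-first starting point, here is a minimal sketch. It uses stdlib Minitest so the example is self-contained (her actual specs used RSpec), and every class, method, and record shape below is a hypothetical stand-in for the real utility:

```ruby
require "minitest/autorun"

# Hypothetical sketch, not the original project's code: a scanner that
# looks for records of interest in a data source and asks a mailer to
# send an alert for each one. All names here are illustrative.
class AlertScanner
  def initialize(source:, mailer:)
    @source = source
    @mailer = mailer
  end

  # Scan the data source and email one alert per flagged record,
  # returning how many alerts were sent.
  def run
    hits = @source.records.select { |r| r[:flagged] }
    hits.each { |r| @mailer.deliver("Flagged record: #{r[:id]}") }
    hits.size
  end
end

# The tests, written first, describe success before any code exists.
class AlertScannerTest < Minitest::Test
  FakeSource = Struct.new(:records)

  class FakeMailer
    attr_reader :sent

    def initialize
      @sent = []
    end

    def deliver(message)
      @sent << message
    end
  end

  def test_emails_only_flagged_records
    source = FakeSource.new([{ id: 1, flagged: true }, { id: 2, flagged: false }])
    mailer = FakeMailer.new
    assert_equal 1, AlertScanner.new(source: source, mailer: mailer).run
    assert_equal ["Flagged record: 1"], mailer.sent
  end
end
```

Swapping the fakes for a real warehouse client and SMTP mailer is then an implementation detail hidden behind the same two method calls.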

As a result of coding solo, I spent quite a bit more time evaluating and troubleshooting Ruby gems than I would have otherwise. While this was frustrating at times, I had a chance to form a better impression of the Ruby community and the problems they were trying to solve, gleaned from the gems tailored to different purposes. As it turned out, there aren't a lot of Rubyists working with Microsoft SQL Server in a Windows environment, as I was trying to do. I even went down the rabbit trail of evaluating Rails development until self-taught MVC under a tight deadline sounded questionable to me.

Undeterred, I dug up some promising leads and ran some experiments, most of which failed. I learned a lot of ways not to solve my problem! However, you only need one way that works, and I found it. Once I was able to communicate with my data and my email, all that remained was the business logic. Granted, by this time I was halfway through my week, but I had my infrastructure and I had my tests as goals.

I began with familiar code structures I’d used formerly in pseudocode, Visual Basic, and Java. However, I knew Ruby had its own interesting expressions, as I had noticed while studying my pickaxe. At this point, I decided to learn a bit more about the Ruby idioms and so cracked open an ebook version of Why’s Poignant Guide. This made for an entertaining intermission and pointed me in the right direction to enhance my code to be Ruby-style. I’m sure my Rubyist friends would still cringe, but I wasn’t expecting to have production-ready results as a Ruby newbie.

One by one, my tests went from red to green. All of this gave me much more confidence that my changes were stable, which turned out to be essential. The morning of the competition presentations, I discovered a fatal flaw in my project: although it worked as designed, the data warehouse SLA for the data I was analyzing had a lag of 24 hours rather than the real-time access I had anticipated. With the help of some coworkers in the know, I went to the source and found the real-time data. Fortunately, this last-minute coding was an alternate implementation of the database interface and did not affect my program's overall structure. Hear, hear for modularity and DRY principles!
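That alternate implementation of the database interface is easy to picture in Ruby, where duck typing does the work. This sketch uses invented names and canned data, but it shows why such a swap can leave a program's structure untouched:

```ruby
# Sketch of a last-minute source swap (class and method names are my
# guesses, not the real project's): two data sources sharing one
# duck-typed interface, so the rest of the program never changes.
class WarehouseSource
  # The original source: lagged up to 24 hours behind the live system.
  def records
    [{ id: 1, flagged: true }]
  end
end

class RealtimeSource
  # The replacement: reads from the source application in real time.
  def records
    [{ id: 1, flagged: true }, { id: 2, flagged: false }]
  end
end

# The business logic only cares that its source responds to #records.
def flagged_ids(source)
  source.records.select { |r| r[:flagged] }.map { |r| r[:id] }
end

flagged_ids(WarehouseSource.new) # => [1]
flagged_ids(RealtimeSource.new)  # => [1]
```

Because the business logic only ever calls #records, pointing it at a different source the morning of a demo requires no other changes.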

Out of time but satisfied with my progress, I was able to live demo the first solution against the data warehouse. However, I knew that what I really wanted was an end-to-end test from the source application all the way through to the email account, so I continued working and completed that while the competition voting period was still open. While I didn’t get a chance to present my enhanced code to the group, I had the satisfaction of knowing that my tracer bullet had found its target. And what’s more, I had a backlog of enhancements needed for production-readiness prepared for the next iteration – if there is one. Anticipating that this project will become a product in the future, I also completed a proof of concept using tests against an API for the source data that’s in the works, applying some of my learning from Alex Kell‘s recent interface testing presentation to the Agile Atlanta group.

While my project did not find a place in the innovation week competition winners circle, I felt like a winner for having the drive and execution to complete even a small application on my own. Since I learned more about my company’s product suite, data warehouses, interface testing, Ruby, and TDD through RSpec in the process, I call the experiment an unquestionable success. I carry forward all these skills to my day-to-day work and can plan my usual testing with much more context.

So while Ruby and I are still in the early stages of our relationship, I think our first anniversary together was a shining moment.


Staying on track

24 Tuesday Apr 2012

Posted by claire in Agile, Automation, Experiences, Experiments, STAREast 2012, Volunteering

≈ 1 Comment


A while back, I was talking to Matt Heusser about my sticker shock when it came to conference attendance and he pointed me to his blog post on creative ways to reduce the cost:

No, you don’t have to be a speaker. That may be the most obvious, easy, usual way in, but there are plenty of ways to serve for people not interested in public speaking: you might serve on the program committee, work the registration desk, introduce speakers, organize lightning talks, or serve as a “runner” in some capacity.

I took one of his suggestions to heart and volunteered to be on staff at STAREAST this year, which gave me the opportunity to look behind the scenes this past Wednesday and Thursday. (Sure, I missed out on the awesome tutorials this time around, but I did get that one key day of floating peacefully in the pool.)

This is a considerable shift from last year’s STAREAST, when I was free to design and execute my own conference schedule around my professional needs. This became particularly apparent as the week progressed: I made new tester friends who turned out to be speakers, and when they invited me to their sessions, I just couldn’t shirk my duties. Bummer. I’ll know to look for their names in the program next time around.

This year, I put in my bids for track selections about a month ahead of time, based on the published schedule, and hoped at least one of my first choices would appear on my list. When my track chair packet arrived, I delightedly perused the list of my tracks and my speakers. On my roster were several notable testers and several newcomers. I loved knowing I would get a glimpse into the processes of both the polished professionals and the fresh first-timers.

I read over the instructions with a highlighter and multicolored pens, calling out the relevant details so that I could support these live performances smoothly. I’m not good with form letters, and I feel that it’s fair warning to let others know that I have a more casual and enthusiastic style of interaction, so I drafted my own email for first contact. From there, all of the advance preparation went off without a hitch and I eagerly anticipated getting settled in on Wednesday morning.

Fortunately for me, my first track was Agile Testing, so the speakers were predisposed to understand iterating to better and better results. It helped that I knew the first few speakers from in-person and online interactions. Speakers aren’t the only ones who get nervous! One of my tester friends from Twitter was in the audience to help me work out the kinks of the tasks set before me and to tune my results to make things easier on all the session participants and to keep me from stressing out.

One thing I quickly realized was that my introduction of a speaker could hardly do him or her justice, especially when we had just met for the first time, so I resolved to keep it short and sweet, trusting everyone could read the program’s bio and knowing that we were all there for the benefit of the speaker’s wisdom, not my scintillating 30-second background recap.

Though troubleshooting the hardware was certainly on my mind, I was particularly concerned with making sure the session feedback was collected and returned to the conference organizers so that they could tabulate the results and provide them to the speakers in a more succinct and organized way. I know from observation and discussion how much work speakers put into their presentations and how open they are to comments. Speakers care for their audience members!

The hardest part of track chairing for me was not being free to type, scribble, or live tweet all the wonderful information flying past me! Normally, I write everything down, but I had to take it in stride and trust that my familiarity with some of the material would carry me through. However, I was also supporting sessions I might not have chosen from the program, since they seemed to have a focus that didn’t match my day-to-day duties or needs – so I snuck a moment here and there to note some new revelation. As it turned out, I gleaned some value from every session, despite my expectations. Sure, I don’t work on specialized medical hardware or ERP systems, but the generalizable lessons will stand me in good stead as I broaden my understanding of the variations of software testing.

When it’s all said and done and the conference attendees have all gone home, it’s the information transmitted – information that can make a difference in people’s work lives, and perhaps even their personal lives – that gave me a warm fuzzy feeling to go along with the sore feet.


Testing Bliss

03 Tuesday Apr 2012

Posted by claire in #testchat, Context, Experiences, Experiments, Soft Skills, Techwell

≈ 5 Comments

It’s no secret: I adore testing software. It’s my weapon of choice, despite having happened upon it by chance many moons ago. (What other career transforms forgetfulness and clumsiness into strengths since they result in unexpected, non-happy path usage? Ultimately, I think it’s the variety that keeps me coming back for more on a daily basis.)

Given my feelings about testing, it came as no surprise to me that others would agree and rate this profession highly, whether on CareerBliss or elsewhere, as reported by Forbes. (I’ll also admit to having been a bit of an I/O Psych nerd back in the day, so this survey appeals to me in various ways.) I can’t seem to leave my curiosity at the door, so I had to go see for myself what questions were used as the basis of this data. (Yes, HR folks, that’s my story and I’m sticking to it.)

With categories like Company Culture, Work-Life Balance, The Place You Work, The People You Work For, The People You Work With, It’s Party Time!, Work Freedom, and Growth Opportunities, it almost felt like attending a company meeting at my current employer. (Did I mention we’re hiring a developer for my team?)

I was curious to see whether other testers had the same reaction to the questions used to generate the data that CareerBliss analyzed, so I culled out five questions, each at most 140 characters, designed to find out.

  • Q1) Which people at work most affect your happiness: co-workers, boss, CEO?
  • Q2) How does the level of challenge in your work influence your feelings about your testing job?
  • Q3) Is there a job-provided perk/reward/tool that keeps you happy as a tester?
  • Q4) As a tester, do you have a good balance of freedom and growth?
  • Q5) How does the support at work make testing a great career?

Check out the storify-ed version of our #testchat on Twitter.

Not everyone has the same experience of software testing and my experience has certainly changed over time. I wanted to take a moment to consider the various aspects of software testing that the article identified:

  • requirements gathering – been there, done that both before and after implementation
  • documentation – frequent contributor, sometimes sole author
  • source code control – only for my automation code, but I didn’t set it up myself
  • code review – if you consider pairing with a developer on code during a sprint, then I’ve tried it and with some success
  • change management – not so much, though we did have a composition book in the testing lab to log all hardware changes to a system I worked on; sometimes it was more like a log of who I should hunt down to get the hardware back…
  • release management – the closest I get to this is being able to deploy to my cloud test environment and boy am I happy about that
  • actual testing of the software – bread and butter for me

I love having been involved in the entire software development process at various times during my career. (I’ve even prototyped some UI ideas, though I wouldn’t call that an area of strength or concentration. Glad to have those UXers on board these days!) I do feel that I’m an integral part of the job being done at the company. I am quite happy that my job involves frequently working with people.

However, I do take issue with this being presented as a positive aspect of the job:

software quality assurance engineers feel rewarded at work, as they are typically the last stop before software goes live

Doesn’t that smack of Gatekeepers to Quality to you? I don’t ever want to set up an adversarial relationship with my developers that says I need to defend the users against their disregard, and I don’t want to be involved only at the end as a last stop before kicking a product out the door. I know that happens at times but it’s not my preference. Positive personal interactions and preventative measures certainly contribute to my testing bliss.

Take the survey yourself at CareerBliss and let me know how your experience compares!

I’ll be analyzing the tagged responses from Twitter over on Techwell soon!

Here is some related reading that has come up in recent days:

Q3) Is there a job-provided perk/reward/tool that keeps you happy as a tester?

Jon Bach on tools for testing

Ajay Balamurugadas on tools for testing

Q5) How does the support at work make testing a great career?

Horizontal careers: “each of us will need to overcome our personal assumptions about moving up the career ladder, and think more about how we add value across.”

Scott Barber disagrees


Yo dawg, I herd you like ET

19 Monday Mar 2012

Posted by claire in Context, Experiences, Experiments, Hackathon, Retrospective, Testing Humor

≈ 1 Comment

I wrote out my Lab Days experience recently but didn’t get to bring you down the rabbit hole with me to experience the recursive testing goodness.

My project for Lab Days was an enhanced logging tool, but the logging is the heart of the matter, with users putting it through its paces much more stringently than the analysis functionality.

Since I usually do exploratory testing of applications at the day job and the time pressure of Lab Days left little room for formal test cases anyway, I decided to try out a new exploratory testing session logger: Rapid Reporter.

I didn’t have a lot of time to devote to learning Rapid Reporter, so I didn’t bother reading any documentation or preparing myself for how it worked, essentially exploratory testing my exploratory testing tool while exploratory testing my application under test.

It turns out this kind of recursive testing experience was just what I needed to liven things up a bit, all in the spirit of trying something new! I discovered that rapidly learning about a session logger while testing/learning a session logger, pulling log entries from an original session log, and reporting bugs via a session/chat room (HipChat) made for some perilous context-switching. More than once during the day, I had to stop what I was doing just to get my bearings.

I clearly enjoyed the experimentation because I decided to repeat the experience, though with a little less context-switching, when we upgraded our usual ET tool: Bonfire. The funniest thing about using Bonfire after working on my Lab Days project was that I realized there were tags available for log entries but the tagging indicators weren’t the same as our choice for our usability testing tool. I kept trying to use the tagging that I’d been testing all week and had to retrain myself, improving their documentation as a result of my questioning.

Still, seeing how another logging tool uses tags gave me some functionality to consider for our usability logger: how would users want to interact with tagged log entries? Clearly time to circle back with my UX designer to discuss some enhancements!


The status is not quo

09 Friday Mar 2012

Posted by claire in Context, Experiences, Experiments, Hackathon, Retrospective, Tester Merit Badges

≈ 3 Comments

Dr. Horrible http://drhorrible.com/

We tend to run “FedEx” with a fairly open format where you can do whatever you want as long as you can somehow relate it to our products.
– Atlassian

Last week, my company gave us an exciting opportunity: 5 days of work on a project related to our business.

Apparently, they’ve done something like this before, long before my time, so you’d have to ask some of the more tenured folks at Daxko about it.

I worked with the same folks who volunteered with me at the WebVisions Hackathon earlier this year and we kept in mind what my colleague Will said about that experience: “The short time box and no feature constraints necessitated a laser-sharp focus on one thing.”

So we noodled over several viable candidates and finally settled on building a better mousetrap – or, in this case, UsabLog.

A clarification on terminology from my UX colleague:
“Logging” in this context doesn’t mean “system logging of events.” It means human capture of what the user said, what the user did in the app (e.g., where user clicked), and any additional comments to provide context. The point of logging is to provide us with a record of what went down so we have an accurate recollection for later analysis.

I had the good fortune to be a user of the original UsabLog application over the course of many usability sessions as a session logger, so I was rather familiar with its strengths and weaknesses. I was able to contribute some bug reports and feature suggestions for consideration during our lunchtime planning discussions, but my Scrum team’s UX designer was our team’s sponsor. She compiled an experiment plan that identified our purpose and detailed the problems we considered in the pre-existing Usablog and the opportunities we had to satisfy those needs.

Our usability sessions up to this point involved an interview led by the facilitator (i.e. the UX designer) and logged by another team member (e.g. me) via the free, open source web application UsabLog, which exported logs to CSV for use in a program such as Excel, which we in turn manually fed into a mindmap program such as FreeMind. While this process did work for us, the export and manual copy-paste were rather tedious and laborious, and as she put it, fixing that “would directly contribute to user research process efficiencies.” We knew there could be a better way.

Goals of the experiment:

  • Rapidly capture rich user feedback during research interviews and usability tests through logging of user events and comments
  • Organize logs from multiple sessions into one study for ease of access and visibility
  • Use log entries to synthesize findings
  • Quickly jump to a spot in the session’s video by clicking on the associated log entry

In particular, we wanted these features:

  • Multi-session logging.
  • Log entries are timestamped when the logger starts typing for video synchronization.
  • Custom tags.
  • Multi-logger logging.
  • One tool for logging and post-session analysis.

We established a definition of done and recognized our dependencies since any impediments would have serious impact on our progress during the limited time of the competition.

I would love to tell you that we were entirely successful in meeting our goals and implementing all of our features, and that we went on to take first prize in the competition. Alas, this was not to be. We accomplished only some of our goals and features, and awesome projects from other teams placed above us.

However, the experiment was a roaring success in many ways:

  • I had first-hand experience with paired UX design under the tutelage of my UX designer colleague. She suggested that I man the helm and she steered me back on course when I went astray. I won’t claim that my first UI mockups were beauties, but the process and conversation certainly were.
  • I made my first commit to a GitHub open-source repository, thereby qualifying for the Open Source Nerd Merit Badge (which happens to feature the GitHub mascot, Octocat) – something I had been hankering to do ever since I discovered its existence. Also, this was the first time I fixed a bug in the source code, so even though my changes were minor, it was thrilling.
  • I did exploratory testing based on GitHub commit notifications in the HipChat chat room we used for the team. Rather than pursuing session-based test management, I tried a looser structure based around the latest and greatest changes instead of setting charters and time-boxing exploration around the stated goal.
  • Real-time bug reporting of issues found during exploratory testing via HipChat messages and screenshot attachments was new and interesting. This is the lowest overhead asynchronous bug management approach I’ve tried and it was effective. Granted, we didn’t come out with a backlog of known issues written down somewhere, but we rectified the most critical problems before they had a chance to fester.
  • We didn’t let a little thing like heading home for the day stop us from collaborating remotely when we got back to business after hours. Being able to work at odd hours put some of my insomnia to good use. I also learned a bit about .NET and model/view/controller architecture, which turned out to be good preparation for the following – and last – day.
  • When one of our programmer teammates fell ill, I paired with our remaining developer to push on toward the goal. Although I think I spent more time asking questions to help think through the implementation than actually contributing code, it was a fruitful day, wrapping up an important feature a mere 30 minutes before the Big Reveal.
  • I used the resulting product to real-time log the presentations during the Big Reveal. Oh so meta, but also hopefully illustrative of the capabilities of the application for future use. If nothing else, it gave our sick friend a way to catch up on the excitement as he recovered over the weekend.
  • We accomplished only some of our goals and features but they were the most essential. Our product is usable as-is, though with some known bugs that do not inhibit happy-path use.
  • Why do they call it FedEx days? Because you have to ship! Our resulting application is ready for use – or enhancement if you’re feeling ambitious!
  • And last, but certainly not least, victory lunch! Nothing so sweet as celebrating effective teamwork.


February results for Tester Merit Badge

09 Friday Mar 2012

Posted by claire in Approaches, Experiments, Techwell, Tester Merit Badges

≈ Leave a Comment

Tester Merit Badge - Explorer

My February was pretty crazy, so I didn’t pursue all of the parts of the Explorer Tester Merit Badge this month. I skipped these parts due to time constraints:

3. How Long and How Far
4. Walk the Distance
8. Bus and Train Maps

No sweat. I’ll certainly be coming back to this one since exploratory testing (ET) is meat-and-potatoes testing for me.

Read the rest on the Techwell blog and tell me about your results!

The sultry sound of testing

07 Wednesday Mar 2012

Posted by claire in Experiences, Experiments, Podcast, Publications, TWiST

≈ Leave a Comment


Follow the sultry sound of my voice – Mike Wazowski

… for some great testing conversation!

I have made an appearance on the TWiST podcast, which you can stream online or download after registering for the Software Test Professionals’ website.

I discuss differentiating yourself from other testing job applicants with Matt Heusser, Michael Larsen, Wade Wachs, and Ben Yaroch.

For your enjoyment, here are the direct links (once you log in):

Getting Hired as a Tester, Part 1

Getting Hired as a Tester, Part 2

Getting Hired as a Tester, Part 3


Of Paths and Cycles

06 Tuesday Mar 2012

Posted by claire in Context, Experiences, Experiments

≈ 1 Comment


I joined the YMCA last summer and have been trying out different ways of being more active. (My active lifestyle resolution wasn’t just aimed at professional development.) Recently, I came to the conclusion that having more structure seems to work for me in learning testing and so might be helpful around my fitness progress as well.

To that end, I made a Coach Approach appointment and met with my coach to learn how the program works. She and I talked for nearly an hour about what my goals would be (set up along the SMART guidelines) and how I could work my current interests into a structured plan. She suggested that I try out some of the exercise equipment that they have and talked over different ways to cope with the boredom that creeps in and deters people from continuing their progress.

One of the machines she suggested for me was an exercise bike with a computerized screen. Today, I decided to give it a go. I dressed out, filled my water bottle, and found an available bike. Since I’m interested in taking up cycling, this seemed like a nice way to build up my stamina until the weather warms a bit. I plugged in my headphones and turned my attention to the login menu on the screen. Realizing this was not a touchscreen application, I observed that there were a variety of buttons for interacting with the system.

Since I wasn’t sure I was going to stick with this workout method, I selected the guest login just to try out the system. I selected a beginner course, put my hands on the handlebars, and began pumping the pedals. I immediately noticed that the handlebars and foot pedals provided information to the system, as did the buttons on the panel below the screen. I found some good music to keep my ears busy and started observing the software.

Happy Path

Normally, I bemoan working out on exercise equipment as “the race to nowhere,” finding myself immediately unsatisfied with the experience of running in place and staring straight ahead in a gym environment. However, having an application to test while burning calories certainly was a welcome change. I don’t think my coach realized just how easily I could avoid boredom with some software to occupy my attention!

I tried a couple of different paths, or courses as they called them, each with a different scenery motif and points of visual interest. I was amused to discover that steering with the handlebars was entirely unnecessary since the program forced me to stay on the path and stopped displaying any virtual cyclist I ran down. At first, I was a bit disconcerted when virtual cyclists would pass through me from behind and appear to pop out of my chest. Backpedaling served only as an indication that I wasn’t moving forward, as though I had stopped pedaling completely, and so didn’t help me to put more space between my virtual handlebars and the virtual chest-burster cyclists. I thought one of these virtual cyclists represented the “pacer” that appeared on my progress bar, but I eventually figured out that the pacer didn’t have a manifestation on the course, only in the ride-in-progress statistics reporting areas.

Push it real good

However, I noticed some issues during my first ride:

  • Objects in the scenery were drawn with perspective and would update with a jerk when they entered the middle of the field of vision.
  • A bush on the edge of the path happened to overhang the path enough that my virtual handlebars passed through it.
  • A virtual cyclist was stranded on the side of the path oriented sideways rather than in the direction of travel, as all of the other virtual cyclists were.
  • Another representation of a rider (ghost?*) appeared on the path oriented sideways but didn’t seem to be animated.
  • After I completed the ride, the screen showed my ride’s statistics in a modal dialog, but I could see that the heart rate monitor, RPM, speed, and ride timer were still updating on the disabled screen behind it.
  • One of the post-ride statistics was the local facility’s leaderboard for that course, and although my time ranked higher than the last person on the board, my time was not displayed.

*I wasn’t clear on what the system meant by a ghost rider who could appear on the course, so this may have been correct software behavior.
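The missing leaderboard entry smells like a classic boundary bug: a qualifying time gets dropped at the edge of a fixed-size board. For what it’s worth, here is a minimal sketch of the insertion behavior I expected, assuming a lowest-time-wins board of ten entries (the function name and data are mine, not the vendor’s):

```python
def update_leaderboard(board, new_time, size=10):
    """Insert new_time (in seconds, lower is better) into a fixed-size
    leaderboard, keeping only the top `size` entries."""
    board = board + [new_time]  # consider the new ride alongside existing entries
    board.sort()                # best (lowest) times first
    return board[:size]         # trim back to the board's capacity

# Ten existing times; my hypothetical 395 beats the last two entries,
# so it should bump 410 off the board and appear in ninth place.
times = [301, 315, 328, 340, 352, 360, 371, 385, 399, 410]
print(update_leaderboard(times, 395))
```

A plausible way to get the bug I saw is to trim the board *before* inserting the new time, or to compare the new time only against the current last entry with the comparison reversed; either way, a time that beats the bottom of the board never shows up.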

Integration, schmintegration

After a trip home and a well-earned shower, I settled in with my laptop to check out the website that interfaces with the on-site system. The site proclaimed that their system engages your mind and I certainly found that to be true, perhaps in a way they hadn’t anticipated.

Although I had created an account through the log-in screen of the exercise bike, the website prompted me to complete my profile online before I could access the information. Though I usually think of this as an annoyance, the few required fields and a humorous selection of security questions made it a pleasant experience.

The News informed me that I could share my ghost through Facebook or Twitter, though I still had little idea of how that would be used, having not seen it in action. I declined to use the social media hook, deferring it until I have an opportunity for more investigation. I was happy to see that my first workout records and awards were available online, though I didn’t “post a ghost” through email or printable QR code. When I found the Ghost Selection options, I could see that a ghost was something like a pacer but more personalized or specific.

I noticed several issues online:

  • I was hopeful that the online system would show my ranking since the on-site exercise bike had not, but both the global leaderboards and the boards for my fitness facility omitted me.
  • The first attempt to view leaderboards for my fitness facility showed data from a location in some other state, although a subsequent refresh seemed to correct the problem.
  • I also encountered the server’s technical difficulties page.
  • Some header graphics failed to display, though sometimes page refresh corrected this.
  • Leaderboard page breadcrumbs did not always correspond to the displayed page (e.g. inaccurate, current page omitted).
  • Firebug showed me at least one typo in the JavaScript that caused an error, and one page’s source included developer comments, which I have read is discouraged in production code.

Game on

Although I was happy to know that my workout data was preserved and available online in some form, the leaderboards could use some work. While the software product team may not have been concerned with real-time updates to leaderboards, as a first-time user I really wanted to see how my performance stacked up against the more seasoned players, which is an important part of the gamification angle that this product leverages to defeat boredom and keep users involved in exercise. I’ll certainly try this system again and hope that I can ride out both the bugs and the boredom.
