
aclairefication

~ using my evil powers for good

Category Archives: Metrics

Minimum Viable Product Manager

Wednesday, 29 Aug 2018

Posted by claire in Agile, Agile2018, Approaches, Context, Experiments, Metrics, Protip, Retrospective, Scrum, Soft Skills, Training, User Stories


At Agile2018, I attended Richard Seroter’s Product Ownership Explained session, where I heard about bad and good product owners. Product ownership/management has many facets, including:

  • advocating processes and tools
  • style of leadership
  • customer interactions
  • relationship with engineers
  • approach to continuous improvement
  • product lifecycle perspective
  • sourcing backlog items
  • decomposing work
  • running through a sprint
  • meeting involvement
  • approach to roadmap
  • outbound communication

Now I’ve worked alongside many customer proxy team members (e.g. business analyst, product owner, product manager) over the years. I’ve learned how to create testable, executable, deliverable user stories in a real-world setting. I wasn’t going into this talk blind. I just hadn’t always focused on the Product role.

This time, I looked at the role with the mindset of what it would take for me to check all the boxes in the “good” list. As each slide appeared, my list of TODOs lengthened. I started to feel overwhelmed by the number of things I wanted to improve…

“How you doin’, honey?” “Do I have to answer?!?”

I walked out of that talk thinking I’m not sure I want to sign up for this epic journey. The vision of the idyllic end state was more daunting than inspiring. How could I possibly succeed at this enormous task? Would I want to sign up for that? My initial reaction was no! How could I take on all the technical debt of stretching into a new role like Product? How long would the roadmap to “good” take?

Analysis

When I evaluate things off the cuff, I often consider good-bad-indifferent. Maybe knowing what “good” and “bad” look like wasn’t helping me. I knew I didn’t want to be merely “indifferent”… maybe what I really wanted to know was this:

What does a minimum viable product manager look like?

One of my big takeaways from Problem Solving Leadership (PSL) with the late, great Jerry Weinberg was that limiting work in process (WIP), or “one thing at a time” (as close to single-piece flow as possible), improves effectiveness. If I applied that approach strictly to a PO/PM role (truly one practice at a time), I’m afraid I would completely fail. So I will reduce the practices to as few as I possibly can without completely losing the value of the role. I want only the *critical* path skills or capabilities! Everything else can be delegated, collectively owned, or done without. So what can I discard?
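Jerry’s WIP point can be made concrete with Little’s Law, the standard queueing result that average lead time = WIP / throughput. A toy sketch in Python with invented numbers, just to show the shape of the relationship:

    # Toy illustration of Little's Law: lead_time = WIP / throughput.
    # All numbers are invented; only the shape of the relationship matters.

    throughput = 2.0  # items finished per day, assumed constant

    for wip in (1, 3, 10):
        lead_time = wip / throughput  # average days from start to done
        print(f"WIP limit {wip:2d} -> average lead time {lead_time:.1f} days")

    # WIP limit  1 -> average lead time 0.5 days
    # WIP limit  3 -> average lead time 1.5 days
    # WIP limit 10 -> average lead time 5.0 days

Holding throughput fixed, every extra item in flight stretches how long each one takes to finish, which is the case for doing less at once.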

In this thought experiment, I’m proposing finding the least possible investment in each essential aspect of the PO/PM role that would move from bad past merely indifferent to viable (but only just!). I needed to reduce my expectations! If I allow minimum viable to rest somewhere in my default scale, then it fits between indifferent and good. That means I deliberately do *not* attempt to inject all of the good practices at once. So let’s revisit the axes of expertise and the lists of behaviors that are good and bad…

What’s the least I could do?

Decomposition

Advocating processes and tools

Good: contextual & explanatory & collaborative (fitting process to team + pragmatic tool choices + only important consistency + explains process value + feedback leading to evolving)
Minimum viable: pragmatic minimalism (choose a simple process & let practices earn their way back in as value is explained + choose an available tool + allow consistency to emerge + request feedback)
Indifferent: status quo (follow existing process/ceremony w/o questioning + let others choose tools + don’t justify)
Bad: dogmatism (one practice fits all + adhere to ceremony + prescribed toolchain + standardization + trust process + don’t justify)

Style of leadership

Minimum viable: leads by example (models behaviors for others without trying to modify their behaviors) + doesn’t worry about respect + consultative decisions + experiments/loosely decides + sometimes available to the team but not constantly + flexible + defaults to already available metrics

Customer interactions

Minimum viable: meets with customers at least once + builds casual relationship with a key customer + gets second-hand reports on Production pain + occasional customer visit + default information sources

For me, this one slides a bit too far toward indifferent… I’m not sure how little I could care about customers and still get away with being acceptable at PO/PM…

Relationship with engineers

Minimum viable: physically co-locates when convenient + T-shaped when it comes to the technical domain (i.e. aware but not trying to develop that skill as an individual contributor) + attends standup + shares business/customer/user information at least at the beginning of every epic + champion for the product & trusts everyone on the team to protect their own time

Approach to continuous improvement

Minimum viable: default timebox + takes on at most 1 action item from retrospective, just like everyone else + plans on an ad hoc/as needed basis (pull system) allowing engineers to manage the flow of work to match their productivity + prioritizes necessary work to deliver value regardless of what it’s called (bug, chore, enhancement, etc)

Product lifecycle perspective

Minimum viable: tweaks customer onboarding in a small way to improve each time + cares about whole cross-functional team (agile, DevOps, etc) + asks questions about impact of changes + allows lack of value in an existing feature to bubble up over time

Sourcing backlog items

Minimum viable: occasionally talks to customers + cares about whole cross-functional team (including Support) + backlog is open to whole team to add items that can be prioritized + intake system emerges + tactical prioritization

I do have twinges about the lack of strategy here, so I guess I’m looking at this part of minimum viable Product *Owner* (i.e. the mid-range focus that Richard points out in his 10th slide).

Decomposing work

Minimum viable: progressive elaboration (i.e. I need to know details when it’s near term work and not before) + thin vertical slices and willing to leave “viable” to the next slice in order to get a tracer bullet sooner + trusts the team to monitor the duration of their work & to self-organize to remove dependencies (including modifying story slicing)

Running through a sprint

Minimum viable: doesn’t worry about timeboxes (kanban/flow/continuous/whatever) + focus on outcome of each piece of work (explores delivered value) + releases after acceptance (maybe this is just continuous delivery instead of continuous deployment, depends on business context)

Meeting involvement

Minimum viable: collaborates with team members to plan as needed (small things more often) + participates in retrospectives + ongoing self-study of PO/PM

Approach to roadmap

Minimum viable: priorities segmented by theme + roadmap includes past delivery/recent accomplishments + adjusts communication as needed/updates for new info + flexible timeline in a living document + published roadmap accessible to all stakeholders on self-serve basis

Outbound communication

Minimum viable: allows org to self-serve info + shares priorities with manager & customers + environment for continuous self-demo/trying features + transparency

What are the minimum viable versions of the tools of a product owner?

  • Backlog – list of ideas not fleshed out until it’s time to run them
  • Sprint planning – ad hoc meetings in a pull system initiated by the need for work definition to execute
  • Roadmap – technical vision of system capabilities + compelling story of the product value proposition
  • Prototyping, wireframing – whiteboard pictures + text-based descriptions
  • Team collaboration – a big TODO list that everyone can access
  • Surveying/user testing – chat program that both team & user can access
  • Analytics – NPS score informally collected from customer conversation (the standard NPS arithmetic is sketched just after this list)
  • Product visioning – I think this goes in with Roadmap for me?
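For the record, the NPS arithmetic itself is standard and dead simple: on a 0–10 scale, 9s and 10s are promoters, 0–6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch in Python, with invented scores standing in for those informal customer conversations:

    # Minimal Net Promoter Score calculation.
    # Scale 0-10: promoters score 9-10, detractors 0-6, passives 7-8.
    # These scores are invented for illustration.
    scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]

    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    nps = 100 * (promoters - detractors) / len(scores)

    print(f"NPS = {nps:.0f}")  # 5 promoters - 2 detractors out of 10 -> 30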

So I’ll agree that the PO/PM role is critical and necessary. I would like for creative problem solvers to fill the role – and to be fulfilled by the role! In order for that to be viable, for people to grow into a Product role, there needs to be education on how to begin – and a new PO/PM can’t spring fully formed from the head of Zeus! Christening someone PO/PM doesn’t endow them with sudden wisdom and insight. Skill takes time to develop.

Set realistic expectations for beginners. Help teams welcome people growing into the role by offering both challenge and support from all team members. As with any team need, the agile team collectively owns the problem rather than relying on the role specialist as a single point of failure. Having a beginner PO/PM is an excellent time to reinforce that!

Don't worry, people. I so got this!

If I were a Product Manager, I would definitely prefer to be a full-featured representative of that specialization! However, I encourage you to revisit Richard’s presentation and do your own decomposition of the Product role. What is absolutely essential? What can you do without?

What is *your* minimum viable Product Manager?

Story Time!

Monday, 16 Sep 2013

Posted by claire in Acceptance Criteria, Agile2013, Approaches, Context, Experiences, Experiments, Metrics, Personas, Publications, Retrospective, Speaking, Training


As Agile2013 considers itself a best-in-class conference “designed to provide all Agile Team Members, Developers, Managers and Executives with proven, practical knowledge”, the track committees select from a large pool of applicants and prefer vetted content that has worked its way up from local meetings to conferences. I have only one talk that fits these criteria, since I presented Big Visible Testing as an emerging topic at CAST 2012. I developed several versions of that talk afterward, and doing so gave me confidence that I could provide valuable information in the time allotted and still leave enough time for attendees to ask questions and to give feedback on what resonated with them.

I worked to carefully craft this proposal for the experience reports track, knowing that if I were selected I would have a formal IEEE-style paper to write. Fortunately, my talk made the cut and I began the writing process with my intrepid “shepherd” Nanette Brown. I wasn’t sure where to begin with writing a formal paper, but Nanette encouraged me to simply begin to tell the story and worry about the formatting later. This proved to be wise advice, since telling a compelling story is the most important task. Harkening back to my high school and early college papers, I found myself wading through different but largely similar drafts of my story. I experimented with a different starting point for the paper, which I ultimately discarded, but it had served its purpose in breaking through my writer’s block. Focusing on how the story would be valuable to my readers helped me home in on sequencing and language selection. Once I had the prose sorted out, I began to shape the layout according to the publication standards and decided to include photographs from my presentation – the story is about big visible charts, after all!

Investing sufficient time in the formal paper meant that preparing the presentation was mostly about strong, simple visuals. I have discovered my own interest in information visualization, so prototyping different slide possibilities and testing them out with colleagues was (mostly) fun. I’m still not quitting my day job to go into slide deck production. Sorry to disappoint!

Performance anxiety

Despite all of this preparation, I couldn’t sit still at dinner the night before my presentation and barely slept that night. I woke before the sunrise and tried to school my mind to be calm, cool, and collected while the butterflies in my stomach were trying to escape. This was definitely the most challenging work of presenting!

As a first time speaker, I didn’t know what to expect, so I set my talk’s acceptance criteria as a rather low bar:

    1. Someone shows up
    2. No one hates it enough to leave a red card as feedback

When I walked into my room in the conference center, a lone Agile2013 attendee was waiting for me. Having him ready to go encouraged me to say hello to each of the people who came to my presentation, which in turn changed the people in the room from a terrifying Audience into many friends, both new and old. I think I managed not to speed through my slides despite my tendency to chatter when I’m nervous. I couldn’t stay trapped behind my podium and walked around to interact with my slides and to involve my audience more in the conversation. Sadly, I can’t share my energy with you since I forgot to record it. Oh well. Next time!

The vanity metrics

  • Ten minutes into the presentation, 50 people had come to hear me speak, and by the 60-minute mark I had somehow gained another 7, ending at 57 people. Thanks so much for your kind attention! I hope I made it worth your while…
  • 43 people stopped to give me the simple good-indifferent-bad feedback of the color-coded cards (which I liked as a simple vote about a presentation) and I received 37 green cards and 6 yellow – with no red cards! Whoo hoo!

Words of Encouragement

Two people kindly wrote out specific feedback for me, and I want to share that with you in detail, hoping to elicit some late feedback from attendees who might like to share at this point. Agree or disagree, I want to hear from you!

Feedback Card #1:
– Best session so far!
– Great presenter – great information – great facilitator
– Would like to see future sessions by this speaker

Feedback Card #2:
Great Talk – speaker very endearing, Her passion for the subject matter is obvious.
A fresh perspective of how Developers and Testers should interact.
Should find ways to engage the audience

Someone else got a kick out of my saying, “I’m serious about my stickies,” and left their notes behind on the table. So thanks for sharing that. 🙂

One friend spoke to me afterward with some helpful feedback about word choice and non-native English speakers. When I was writing my talk, I was trying to focus on people who would be likely audience members, but I had not considered that aspect of the Agile2013 crowd. Since I was simply speaking off the cuff, I ended up using some words that would have fit in at our dinner table growing up but that would make for tougher translation. And yet, I got some wonderful feedback from Hiroyuki Ito about the “kaizen” he said I made. I can’t read it directly, but Google Translate assures me it’s good stuff. 🙂

Uneasy truce

Finally, I discovered that my relationship with a linear slide deck is not a comfortable one. I wanted to be able to reference any slide at any moment, and having to sequence them hampered my ability to respond easily with visuals when discussing questions or improvising during my talk. I haven’t experimented with other presentation options, but I hope there’s an easy solution out there.

Big Visible Testing (Full Length) from Claire Moss

Spare the Rod

Wednesday, 10 Aug 2011

Posted by claire in CAST 2011, Context, Metrics, Training


Ubiquitous

Paul Holland‘s interstitial Lightning Talk at CAST 2011 was a combination of gripe session, comic relief, and metrics wisdom. The audience in the Emerging Topics track proffered various metrics from their own testing careers for the assembled testers to informally evaluate.

Although I attended CAST remotely via the UStream link, I live-tweeted the Emerging Topics track sessions and was able to contribute my own metric for inclusion in the following list, thanks to the person monitoring Twitter for @AST_News:

  • number of bugs estimated to be found next week
  • ratio of bugs in production vs. number of releases
  • number of test cases onshore vs. offshore
  • percent of automated test cases
  • number of defects not linked to a test case
  • total number of test cases per feature
  • number of bug reports per tester
  • code coverage
  • path coverage
  • requirements coverage
  • time to reproduce bugs found in the field
  • number of people testing
  • equipment usage
  • percentage of pass/fail tests
  • number of open bugs
  • amount of money spent
  • number of test steps
  • number of hours testing
  • number of test cases executed
  • number of bugs found
  • number of important bugs
  • number of bugs found in the field
  • number of showstoppers
  • critical bugs per tester as proportion of time spent testing

“Counting test cases is stupid … in every context I have come across” – Paul Holland

Paul mentioned that per-tester or per-feature metrics create animosity among testers on the same team or within the same organization. When confronted with a metric, I ask myself, “What would I do to optimize this measure?” If the metric motivates behavior that is counter-productive (e.g. intra-team competition) or misleading (i.e. measuring something irrelevant), then that metric has no value because it does not contribute to the goal of delivering user value. Bad metrics lead to people in positions of power saying, “That’s not the behavior I was looking for!” To be valid, a metric must improve the way you test.

In one salient example, exceeding the number of showstopper bugs permitted in a release invokes stopping or exit criteria, halting the release process. Often, as Paul pointed out, this number is an arbitrary choice made long ago, perhaps by someone no longer on staff, and yet it blocks the greater goal of shipping the product. Would one critical bug above the limit warrant arresting a rollout months in the making?
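To make that mechanism explicit, here is a toy release gate in Python. The limit of 5 is invented, and its arbitrariness is exactly the problem:

    # A toy release gate with an arbitrary showstopper threshold.
    # The limit of 5 is invented; its arbitrariness is the point.
    SHOWSTOPPER_LIMIT = 5  # chosen long ago, rationale since lost

    def release_blocked(showstopper_count: int) -> bool:
        # Exit criteria: block the release when open showstoppers exceed the limit.
        return showstopper_count > SHOWSTOPPER_LIMIT

    print(release_blocked(5))  # False: the release ships
    print(release_blocked(6))  # True: one extra bug halts months of work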

Paul’s argument against these metrics resonated with my own experience and with the insight I gathered from attending Pat O’Toole’s Metrics that Motivate Behavior! [pdf] webinar back in June of this year:

“A good measurement system is not just a set of fancy tools that generate spiffy charts and reports. It should motivate a way of thinking and, more importantly, a way of behaving. It is also the basis of predicting and heightening the probability of achieving desired results, often by first predicting undesirable results thereby motivating actions to change predicted outcomes.”

Pat’s example of a metric that had no historical value and that instead focused completely on behavior modification introduced me to a different way of thinking about measurement. Do we care about the historical performance of a metric or do we care more about the behavior that metric motivates?

Another point of departure from today’s discussion is Pat’s prioritizing behavior over thinking. I think the context-driven people who spoke in the keynotes and in the Emerging Topics sessions would take issue with that.

Whoever spares the rod hates the child, / but whoever loves will apply discipline. – Proverbs 13:24, New American Bible, Revised Edition (NABRE)

My experience with metrics tells me that numbers accumulated over time are not necessarily evaluated at a high level but are more likely used as the basis for judging individual performance, becoming a rod of discipline rather than the protective rod of a shepherd defending his flock.

Paul did offer some suggestions for bringing metrics back to their productive role:

  • valid coverage metric that is not counting test cases
  • number of bugs found/open
  • expected coverage = progress vs. coverage (one reading of this is sketched after the list)
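Paul didn’t spell out the arithmetic for that last suggestion. One way I read “progress vs. coverage” is comparing the areas actually covered so far against the areas we expected to have covered by this point. A minimal sketch, with an invented coverage checklist:

    # A hedged reading of "expected coverage = progress vs. coverage":
    # compare areas actually covered with the areas expected by now.
    # The area names and expectations are invented for illustration.
    expected_by_now = {"login", "checkout", "search", "reporting"}
    covered_so_far = {"login", "search"}

    progress = len(covered_so_far & expected_by_now) / len(expected_by_now)
    behind = sorted(expected_by_now - covered_so_far)

    print(f"coverage progress: {progress:.0%}")  # 50%
    print(f"not yet covered: {behind}")          # ['checkout', 'reporting']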

He also reinforced the perspective that the metric “100% of test cases that should be automated are automated” is acceptable as long as the overall percentage automated is low.

Metrics have recently become a particular interest of mine, but I have so much to learn about testing software that I do not expect to specialize in this topic. I welcome any suggestions for sources on the topic of helpful metrics in software testing.

