Wednesday 31 March 2010

Carnival of Testers #8

You've probably noticed that I haven't settled on a particular format for these carnivals. That's partly intentional - I'm trying different approaches and seeing which "sits" the best. 


So, if you're getting a bit 'sea-sick' with the constant motion of these formats, it's time to break out those pills for this month's installment. Welcome aboard...


Conferences

  • The EuroStar2010 video competition generated a lot of interest - plenty of tweets to/from esconfs and a couple of blog posts, from Anne-Marie Charrett (link) and Rob Lambert (link) - all entries are different and worth a look to find your preference!
  • The Google gig this year was the focus of Fred Beringer's interesting prediction, here.
  • SIGiST's March conference got good coverage by Stephen Hill, here.

Collaboration and Context

  • A lesson in team work from Lisa Crispin is worth a read.
  • Peter uses input from several sources to explore ideas around context-specific and context-free questions.
  • The STC re-launched crowdsourced testing.
  • Collaboration and teamwork were demonstrated by Parimala Shankaraiah & Ajay in their 30 minute challenge. Some interesting learning experiences!

Chatting

  • The folks at uTest did a two-part interview with Jon Bach; the first part of an interesting read is here.
  • Anne-Marie Charrett highlighted a problem of communication between testers and non-testers.
  • Lanette Creamer wrote about her QASIG talk. The slides are worth checking out.

CSI and Comparisons

  • Zeger Van Hese wrote about crime scene investigations and an analogy to the testing 'observer effect'.
  • The analogies continue with Elizabeth Fiennes' comparison of software development and babies.



Credibility and Confidence

  • Read about Pradeep Soundararajan's lessons in discovering what real credibility is...
  • Customer confidence as a by-product of testing was in focus from Dhanasekar S, here.

Cars, Controversy and Consistency

  • Corporate statements about cars, software testing and computers came under the critical eye of James Bach - first Toyota, here and here, then CNN, here. Read them for a lesson in critical analysis.
  • A compelling case against test case counting was presented by Steve Rowe, here.
  • Chris McMahon makes the case to understand what is being estimated...

Copious Output

  • Markus Gärtner was busy publishing posts this month! He put together a very readable review of the four Quality Software Management volumes, here. There was also a take on some useful attributes of the software craftsperson, here.

Certification

  • A case against pseudo-certification was given by Simon Morley, here. If you can do this then you won't need a certification!

Ok, I'll stop rocking the boat and you can step ashore now! Until next month...

Thursday 25 March 2010

Testing Lessons from a Clown


Last weekend saw a visit to a children’s theatre where a famous local clown was performing.

I sat on the sideline out of the way, or so I thought, and ended up both enjoying the performance and observing some lessons for testers.

Clowns are a perfect source of communication analogies for many tester-related areas, whether it's ambiguity in communication, communicating with non-testers, or looking at a problem from a non-expert (no domain knowledge) angle.

Lessons
One of the early problems was the clown’s hat falling off. 

"It's behind you." 

The clown walks in a semi-circle to face the other direction, no hat. "No, turn around." Ahh... Clown steps forward, bends down to pick up the hat and kicks it away with an oversized boot. Repeat.

"No, you're kicking it!" Clown, "Should I kick it? Ok." Hat is kicked into the front row - one of the children hand it back.

Clown, "Ah, so if I kick it away I get the hat back!" 
Lesson: Not every result/explanation is the only explanation!
Hat falls off again. This time, when it's regained, it's put on upside down.

"It's the wrong way round, turn it around"

Hat is rotated in the horizontal plane.

"No the other way!"

Hat is rotated in the horizontal plane in the opposite direction.
Lesson: Ambiguous input leads to unexpected/undesired output.
Lesson: Sometimes we get tripped-up by language. 

There were lots more of these kinds of interactions with the children, all leading to misunderstandings and following a similar theme. What was interesting to see was that the children began modifying the way they gave their instructions - they became a little more precise as the show went on!

It's great to see them learning these ideas as well as having fun. They're exactly the kind of skills any future tester would be proud of.

I got dragged into the act - used as a prop in various ways. So, for the second time in a week I became the stage master's assistant - maybe I've got the perfect circus face...

Have you learnt anything from an unexpected source recently?

Tuesday 23 March 2010

Testing Balls /= Pseudo Certification

I wrote about how I came away recently from a Rapid Software Testing class with a pair of rubber balls - my so-called "testing balls", here.

In that post I also stated that these are probably worth more than a certificate that says I "can test". But I was also careful to say that it's the story that I can tell related to them that makes them more valuable than a certificate. Why?

Hook
The balls are just a "hook", a way to get me to tell my story to someone.

That story would cover the problem I was given, how I approached it, some of the obstacles I met along the way, what I was thinking about those obstacles, the strategies I used to get the information I needed, and how I reported back the status as I saw it, with my reasoning - with this cycle repeating a few times.

The story would then continue with the de-brief of the activity by James Bach: what I could identify as improvement areas or alternative tactics, and what could be confirmed as "good" or "appropriate" tactics, with a summary of the lesson.

..Line..
So, those balls have just allowed me to demonstrate testing - on the spot! Way cooler than a certificate for me!

When I tell that story to colleagues they can see testing in action; they can ask questions about different aspects, thoughts and ideas. That's testing thinking and feedback on the spot (in your face, so to speak...) It draws people in, it engages and it allows discussion.

..And Sinker!
That beats comparing results on a multiple choice test any day for me.

So, at the end of a long-and-winding road, that's why I was so happy with the testing balls. They're not a "pseudo-certificate" but a hook for me to demonstrate testing "live".

Got any other good hooks, stories out there?

Monday 22 March 2010

Trigger-Happy vs Rapid-Fire Testing


 #qa #softwaretesting

 So, what's that then? First impressions? Is it a desirable or less desirable action?

In General
Let's assume that Trigger-Happy has more "undesirable" associations and Rapid-Fire has more "desirable" associations.

When it's an undesirable activity or approach to testing, it can be associated with a lack of direct questioning, or a lack of processing of the results to determine the next steps. It might be a sign that the tester is "lost" and isn't asking for help, or it could be a sign of a misunderstanding about an action, procedure or step.

Alternatively, it could also be purposeful action to generate data/results without interpreting them, i.e. "I'll only need to interpret them if I see something interesting - if nothing interesting happens then I won't care too much."

So what is it?
Trigger-Happy Testing is an approach to testing where the thought process is put on hold unconsciously. It might be:
  • The pound-the-keyboard approach (sending in several strings of data to an input buffer without any connection between them or having a specific question in mind). [This is trigger-happy when there's no "time to stop" or "stop to change" guideline.]
  • Performing some action because "I saw someone do this and it always works for them."
  • "This step is written in my notebook and I always do this." Without any meaning or understanding behind the activity or its expected result.
Rapid-Fire Testing is the approach to testing where the feedback loop of result-analysis-decision is consciously suspended. Examples might be:
  • Let's try and break it. When we've broken it, then we'll analyse whether the way we did it was "reasonable".
  • "The pound-the-keyboard approach" when used to generate a batch of results which might be analysed as a group. Analysing the results may help determine a "test step" grouping, reveal a pattern in the behaviour of the product or probe for certain responses (see the sketch after this list).
The essence of trigger-happy testing is that it's a warning sign, and it's useful to recognise the signs either in yourself or in someone else.

Indicators
This approach can be a sign of:
  • Running out of ideas and trying something new. This is a positive approach when you're at a dead end - "do something" - but try to understand the results and have an idea about when to change approach.
  • Not interpreting the results so far from the application/platform and not forming a hypothesis about what to do next. If this is done for a purpose ("I'm going to analyse later"), fine; if not, it could be a warning sign ("try anything" rather than "OK, let me get my bearings - what shall I do next?").
  • Receiving a new/unseen response/result from the application/platform and resorting to "this usually works or gets me back to a known state". This could indicate not following defined operating procedures (for certain HW) or not knowing whether what I'm doing is appropriate.
So, why have you named something Trigger-Happy Testing or Rapid-Fire Testing?
Because I could!

But, more importantly, after taking James's Rapid Software Testing class (see previous post) I was triggered to re-look at some of my own assumptions and observations.

I've observed this type of activity on several occasions and asked the testers what their thinking behind it was, what led them down this path.

Many times it's frustration, reaching a dead-end in ideas and not knowing what to do next. From the perspective of "doing something" and not freezing, it's a good idea. But it can be dangerous if it becomes a general approach - i.e. getting stuck in the "pound the keyboard" mode without necessarily evaluating, "why did I do that and what shall I do next?"

Other times it's a "let's just break it" mentality. Also good, as long as you operate within some criteria of "when to stop" and understand the results.

My motivation is to highlight when it's a potential benefit (rapid-fire) and when it's a potential danger (trigger-happy) to a tester and help testers identify some of the warning signs.

Understanding your own approach and when each is applied is fundamental to learning how to be a better tester the next time around.

Oh, I haven't finished exploring this area yet...

Do you recognise a trigger-happy or rapid-fire approach?

Thursday 18 March 2010

Got My Testing Balls!

 #qa #softwaretesting 


 Well, it's been a while since I went on a "testing course". I think the last time was about 95/96 - it was a two week course in Dublin (I have hazy memories of lots of Irish breakfasts & Guinness.)


Today saw the completion of another testing class - Rapid Software Testing given by James Bach here in Stockholm. I feel a bit hazy again - but not in a bloated way - in a very satisfied and mentally stimulated way.


A Thinking Tester's Course?
I sum this course up for me as a practical thinking course for testers - with the emphasis on both practical and thinking! It was tough, challenging, stimulating, energizing (I kept thinking about refreshed applications from the course to my daily work) and fun! 


I'd recommend it to experienced and new testers alike.


It's not easy to say that about so many courses - but I had practical insights for my daily work popping up in the evenings and the mornings on the way into the course. I will do a de-brief and, in the days to come, write about the major lessons for me.


There will also be a follow-on about communication - specifically between testers and non-testers (a big interest of mine!)


Balls!
The very cool thing for me was that I came away with a set of testing balls and some wonderful de-briefing from the mysterious sphere exercise - where James plays a very difficult customer giving you a very difficult task (I won't say any more than that!)


I learnt masses from the de-brief. It wasn't necessarily new stuff, but there's a real pedagogic value when you're coached like that and the insights are put out in the open. It's as if what your subconscious has always known is now stamped onto your conscious mind! I hope the rest of the class got as much out of it as I did!


The exercise was given a great build-up with James recounting the occasions when volunteers had left the course in varying emotional states due to the activity. Well, I lived to tell the tale and now those balls will take pride of place in the office - much more valuable and worthy than any testing certification.


Certification?
Q: "Show me your testing certification!"


A: "Well, I can do one better, I'll show you my testing balls! I may even tell the story and the lessons behind them!"
I do have the dilemma of needing an extra pair for the kids - they love them too!


Hall of Fame?
Anne-Marie suggested a testing balls "hall of fame" on the STC. Cool idea!


Any more potential candidates for the "hall of fame" out there?

Monday 15 March 2010

Time to get writing

 #STC #testingclub #testing 

 The STC announced the call for submissions for the next edition of the magazine.

Did you miss the chance in the first edition?

Are you a budding Shakespeare, Orwell or Pratchett? Got something to say or tell a good story? If so, take the plunge and contribute to the next edition.

The peeps at STC (Rosie, Phil, Joel & Rob) won't bite and will give you good feedback on any proposal - provided you haven't ripped it off, of course :-0

I got involved with the first edition and was very pleased with the result. However, I left everything to the last minute - I had plenty of half-ideas (maybe half-baked) that needed a little more development but didn't give myself the time to do so.

Give yourself time to work on the idea, walk away, think about it and re-work it (if needed.) Let the idea mature (maybe like a good cheese), but remember that what you think is a great cheese (or idea) others might just think it stinks! If that happens, keep at it - maybe you're just an acquired taste :-)

So, if you have something to say, but aren't sure whether anyone will listen or be interested then I've got one piece of advice, "just do it!" 

Time to get writing...

Saturday 13 March 2010

Uncertain Planning Assumptions - A Mental Roller-Coaster

 #qa #softwaretesting #blackswan #testing

 Queuing for the ride (anticipation)
A short post today triggered a lot of thoughts just before going to bed, hence the need to get them off my chest - or at least unburden my "conscious" mind before sleeping...

Paul's post, here, triggered a whole whirl of questions in my head and I started writing a comment on his blog.
Unfortunately, I noticed that my comment was turning into double the length of the post and thought it a bit unfair to unload on his site... (I had @testobsessed's comment about doing the same in my head.)

The thing that caught my attention was the idea of a test plan being judged afterwards as being sound or optimistic.

The ride begins
I've written many formal and informal test plans - some have been very accurate and some have been way off. In most cases I've tried to reflect on what led to them being good, bad or just ugly! I will leave the various reasons for another post (it's not the point of this post!)

With regard to planning assumptions being "sound" or "optimistic" I feel some words of caution are necessary.

Planning assumptions and being able to get guidance on when the test project will finish are very useful tools - or at least the PM likes to know when the testing will finish (potential for mental detour here!) However, the plan is only as good as the feedback into the plan, whether this is an iterative refinement of the plan or waiting until many of the unknowns are resolved before "fixing" the plan...

In both of these cases there is no guarantee that some "must work" feature is not found wanting until all the "high prio/must work" test cases are executed successfully (I'm being simplistic for the sake of argument - there are other potential show-stoppers also.)
Note, I'm not making any allowance for exploratory cases here (for simplicity), which I would normally allow for.
The point is that we can't know enough about the nature of the product, from a planning-certainty perspective, to determine afterwards whether the plan was sound or optimistic (unless the plan is not fixed until we know nearly everything about the product).
If we don't set the plan (the point at which it's no longer updated) until all the unknowns about the product are known (which is after we've concluded testing of the product), then what's the point of the plan?
The plan might be fine but miss a fault found 6 months down the line that should reasonably have been caught within your test scope - because the scope was too restrictive or "defective" in some other way.
When the testing is finished it might look like the plan was "sound" - that you made all the right assumptions...
Or the plan could fail due to some unforeseen issue being found late in the day (for whatever reason) - if it's a late requirement change or a new 3rd-party product in the baseline, then one might consider that the plan should change to reflect this. But it could also be a basic interaction problem that takes longer to fix than your planned execution time (which you or the developers/architects might not be able to predict straight away)...
In any of these cases it would be harsh to consider the plan as being optimistic if it's not met due to conditions that are not under the plan's "control".

What to do?
So, does this mean there's no such thing as a good or bad plan? No - of course bad plans exist that don't take into account vital factors needed to complete test execution.

Bad plans are not all due to bad luck and good plans are equally not all due to good luck.

My point is that a timescale in which the exit/release criteria are not met as set out in the plan does not necessarily make it a bad plan, or mean that the plan was optimistic.

The project stakeholder may change those criteria at any time - or indeed ignore the criteria when a release decision is needed...


Conclusion
Sometimes there are factors outside the control or purview (I've recently seen In The Loop :)) of the test plan; they don't make it a bad test plan, and they don't mean that the assumptions behind the plan were incorrect or optimistic.


Photo finish
Anyway, many thanks to Paul for triggering this latest mental roller-coaster ride for me. It was fun, I'm a bit giddy, am looking forward to the photo as I reached the bottom of the fall, but now I can go to bed, mind unloaded :)

What triggered your last mental roller coaster ride?

Monday 1 March 2010

Testers from the Animal Kingdom

 #softwaretesting #fun

 Warning: Frivolity alert! If you're looking for a serious piece, please move on :-)

During the writing of my last post I wanted to use a group name for testers - a slightly loose way of referring to a group of people. My references from the animal kingdom were a herd (as in cows), a flock (sheep), a gaggle (geese) and a pack (wolves).

However, I immediately noticed that these names trigger a response (a reaction in the reader). I'm currently reading Gladwell's Blink so I'm very aware of conscious & unconscious interpretations (at least just now...)
I mean, a gaggle of testers? 
What impression would that give? 
Two things spring to mind for me: either a bunch of chattery and disorganised folk or a group of Jimmy Savile look-a-likes with cigars, gold jewellery and catchphrases.
So, using the sort of subconscious bias that we have with words, I thought I'd explore some animal groupings and how they might apply to testers. As with all testing, this is not exhaustive :)

  • A mischief of testers (Rat): Testers who just want to break things! (Oh, I might've found a winner already...)
  • An unkindness of testers (Raven): Testers who give feedback without phrasing in the 3rd person or being diplomatic... (Maybe some people transition through this...)
  • A tittering of testers (Magpie): Testers who can't keep a straight face when pointing out faults to a developer (or anyone.)
  • A pride of testers (Lion): Testers who flounce in thinking they own the place (the quality police?)
  • A pudding of testers (Mallard): Slightly clumsy and disorganised...
  • A scourge of testers (Mosquito): Yes, they're the testers with slightly less developed people skills when it comes to feedback (feedback on a piece of paper attached to a brick if you're lucky!)
  • A bloom of testers (Jellyfish): Don't know but it's positive sounding - maybe a tester before they go on maternity leave or testers on their way to a wedding (flower buttonholes.)
  • A colony of testers (Badger): All sorts of connotations here - could it be an off-site grouping that is either faithfully loyal to the mother site or a grouping wanting to overthrow the mother rule and go it alone? (I've seen both in action...)
  • A shiver of testers (Shark): The ruthless efficiency with which the tester circles in to localize the fault (at least seen from the perspective of a 'pudding' dev.)
  • A crash of testers (Hippopotamus): Could be a combination of an unkindness and pudding of testers (maybe not a popular combination!)
  • A risk of testers (Lobster): A risk-based tester?
  • A marmalade of testers (Pony): A grouping of sweet, tart and pithy testers (or the ones located in the Seville office.)
  • A parliament of testers (Owl & Raven): A group that is filled with a lot of talk and occasional hot air, but tries to be democratic.
  • A business of testers (Ferret): They get on with the task at hand in a professional way (or spend a lot of time sleeping and are active around dusk & dawn - a la ferret.)
  • An army of testers (Ant): When the tester to developer ratio is very high!
  • A coffle of testers (Donkey): An easy-kept group with a tough digestive system (will tackle anything!) (I hope Lisa Crispin will correct me if I'm wrong here!)
  • A wisdom of testers (Owl): The gurus or architects that you turn to now and then (or the self-proclaimed group thinking they know it all..)
  • A troop or cartload of testers (Monkey): A group of testers that make the organisation tick - everything so choreographed! (Could this be a dream goal?)
  • A harem of testers (Seal):  No comment...


The interesting thing is that I can relate to most of these groupings via testers I have met, worked with or observed in the past...


Credit to @shrinik & @ElizaFx for setting off this divergent thinking exercise!


Any goodies that I missed?