March 27th, 2013
I was in Boston (US) yesterday to attend the kick-off meeting of the OASIS MQTT Technical Committee (TC).
“As an M2M/Internet of Things (IoT) connectivity protocol, MQTT is designed to support messaging transport from remote locations/devices involving small code footprints (e.g., 8-bit, 256KB ram controllers), low power, low bandwidth, high-cost connections, high latency, variable availability, and negotiated delivery guarantees.”
More information on the OASIS MQTT TC web page and at mqtt.org.
I’ve been using (and testing with) MQTT for some time in various applications. I had meant to blog about my team’s ‘Buildosaurus’ at Christmas time. He was originally a night light until I upgraded him with a BlinkM programmable RGB LED, connected him to an Arduino and subscribed him (over MQTT) to a ‘build event’ topic. He now passes his time sitting on a desk, flashing various colours during builds and then resolving to green or (occasionally) red based on the BVT results. The next model may even track down whoever breaks the build.
March 14th, 2013
Anxiety, curiosity and disappointment are just a handful of emotions that I associate with field escapes. Strangely, I also spend time telling everyone I meet how it’s impractical to prove that even very simple code structures are defect free (see my earlier post on why software testing can be so exhausting). As testers, our secret weapon is well-formed user stories that ensure we target the *right* things and seek to remove the defects that are most likely to impact our stakeholders.
One of my mentors helped adjust my outlook last week. He can see the positive in field escapes: we’ve uncovered something new, we can learn from this and we have an opportunity to put it right. That’s not to say we shouldn’t try harder – far from it!
Ultimately, it’s the consumer view of how good software is that counts. This will probably change over time and is likely to be far more subjective than the traditional software engineer would like it to be.
Field feedback would seem to be healthy then; a vital ingredient and central to successful software. Any thoughts?
March 5th, 2013
Not exactly surprised that I discovered a defect in my Arduino remote control project, although I quite liked this one:
Symptom: remote occasionally wakes, does nothing and goes back to sleep.
Impact: wastes power, reducing battery endurance.
Steps to recreate: make some electrical noise – I used a piezoelectric spark igniter.
I discovered it while testing the remote in my kitchen. I make use of the Arduino platform’s hardware interrupts so that the CPU can be woken from a power-saving ‘sleep’ state. The code to do this is simple: attachInterrupt(0, wakeUpEvent, HIGH). All of my buttons connect via diodes to pin 2 (interrupt 0), forcing this pin HIGH when a button is pressed. At this point processing flips to my wakeUpEvent() routine. The problem is that this pin seems to be quite sensitive. I could fix this with a hardware redesign, but that would be quite expensive. Instead I resolved it with a software solution: I scan the buttons on wake and set an alarm flag accordingly. A false alarm condition causes the remote to go straight back to sleep.
I’m not 100% sure it was worth fixing as the remote will not live in the kitchen. Having said that, I can’t predict electrical noise.
March 4th, 2013
A friend just pointed me at an interesting article on Google’s Test Analytics. It effectively replaces the existing, little-used test plan with something more dynamic and representative of the outstanding risk in a project.
We have been doing some similar work around ‘confidence maps’ to help demonstrate the quality of a deliverable.
Now we get into the discussion of the value of subjective vs. objective data. Our confidence maps use objective data such as test results, defect counts and code coverage as input, but ultimately the decision is made by a skilled professional who sifts the data and uses it for the quality assessment.
I think it helps to understand the difference between precise and accurate in this discussion.
Accuracy & Precision
In the fields of science, engineering, industry, and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity’s actual (true) value. The precision of a measurement system, also called reproducibility or repeatability, is the degree to which repeated measurements under unchanged conditions show the same results.
So things like defect numbers and test results are very precise, but they are not necessarily accurate measures of quality. (Think about a product that has 95% of its tests passing and 10 defects and another that has 50% passing and 1 defect – which has the higher quality?)
This is where we need the subjective analysis to determine quality. By taking the different bits of objective data (and the more of it we can get, the better) we can form a subjective view of the product’s quality.
I like what Google are proposing – in fact I like anything that moves us away from building long-winded test plans and the traditional test metrics of counting tests and defects. Now we need to focus on how we turn precise data into accurate data: this is where the really skilled tester plays their part.
February 26th, 2013
Caught an interview with Daniel Pink on Radio 4’s Today Programme* this morning about his new book ‘To Sell is Human: The Surprising Truth About Moving Others’. It would seem that just asking ourselves this question could help us become more successful testers.
*Obviously, I usually listen to banging tunes in my car – my finger must have slipped
February 24th, 2013
I’ve built a few Arduino-based projects over the last twelve months, including an MQTT-based test throughput counter and a ‘Buildosaurus’ – a glowing RGB model dinosaur that indicates product build health. The most complex so far is an IR remote control platform to solve an accessibility problem. I’ve used 16×2 LCD displays in several of my projects, but for the remote control power is constrained, as I want to run it from a 9V cell.
Driving the display itself is not a big problem; quiescent current is the enemy. In my prototype I used an NPN transistor to manage power for the display circuit, and this worked perfectly well. For the final build I went with a white-on-blue HD44780-compatible display – same interface, therefore a straight swap. However, when I wired it all up I just saw random characters displayed. Nice!
I found that if I ground the R/W pin directly then all is fine, which is weird because the pin is already grounded via my transistor. It’s not a viable solution anyway, since a permanent connection to ground would bypass the power management. So this display shares the same interface, but behaves slightly differently. A while later – having changed the obvious components in case they were faulty – I stopped looking at the circuit and took a look at my code:
//Power up LCD and initialize.
Seems fine to me: turn on the display (via the transistor), initialise it and start using it. Then I started to wonder: what if the display is not quite powered up when I call lcd.begin()? I added a 100ms delay before the begin() call and all works fine. I’ve convinced myself why the fix works, but is the defect a hardware one or a software one?
Here’s a photo of the remote, in case you were wondering what it looks like…
February 22nd, 2013
Just seen a new link to this book “Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing” by Elisabeth Hendrickson.
Haven’t read it myself, but there’s a lot that attracts me to it.
Firstly, reducing risk and increasing confidence – that, to me, is what testing is all about. We get bogged down with the idea that testing’s job is to find defects and ‘own quality’, when the reality is that testing is a tool for reducing risk. I posted here four years ago on this topic and I have seen teams make real progress in this area. We now talk about building confidence in our deliverables and are using ‘confidence maps’ to demonstrate this in a very visual way (we will post on this later).
Secondly, the exploratory bit is exciting. In Lisa Crispin’s ‘Agile Testing Quadrants’ this testing fits into Quadrant 3 – the testing that critiques the deliverable from a business perspective. This testing is hard – not technically hard, but hard because most software engineers and test specialists don’t have the business background to make it really successful.
I’ll need to get reading!
February 15th, 2013
We all know security is important, so it’s no surprise that many middleware products have some sort of hook into a user repository – e.g., LDAP – for user-based authentication and authorisation. I’m currently at the critical point for testing this in a new product: that is, moving from function-level testing in an isolated repository (thrown together with maybe 100 fake users) to IBM’s live internal LDAP-based repository, “Bluepages”. The advantage of the latter is that it comes prepopulated with hundreds of thousands of users and is excellent proof that our products integrate well with existing infrastructures: ideal for customer demos. I just need to tread carefully, as these are real systems I’m working with… so I will be paying very close attention to the behaviour of my product.
January 9th, 2013
Just made my first Arduino-based phone call and sent myself a text message using an Arduino UNO, a ‘Cellular Shield’ (http://www.coolcomponents.co.uk/catalog/cellular-shield-with-sm5100b-p-490.html), a PuTTY session and some wire.
Really surprised how easy this was. The core Arduino application is just 25 lines of code. Once up and running, sending the SMS and placing the phone call took just a couple of ‘AT commands’. No defects to report so far!
January 10th, 2011
XKCD on writing good code: http://xkcd.com/844/