
ITP Techblog

Brought to you by IT Professionals NZ

Price-based tendering to blame for Novopay?

John Rusk. 20 February 2013, 4:18 pm

Is a procurement process weighted too heavily towards the lowest price to blame for major IT project failures in Government?  Can the problems we're now seeing with software projects like Novopay be traced right back to price-based tendering, and if so, is there a better way? This week John Rusk explores an alternative approach to government tendering, where price is an afterthought but huge savings are made.  Sound whacky?  The US federal government doesn't think so - it has been using this approach for engineering since 1972.

In the tender process for payroll in schools, we're all left wondering what prices the losing bidders quoted.  Was Talent2 the lowest bidder, and was that the main reason they won? And if so, does the fact that the project has now cost millions more, along with all the other problems to date, prove that the procurement was a failure?

Until recently I worked as a software architect for various Wellington IT vendors.  I was one of the geeks who came up with the prices; we'd estimate how many person-hours the project would require (using black arts which I can't possibly disclose here). Then we'd debate with Sales and Management to arrive at the final fixed price, propose it to the customer, and maybe win the work.

After 15 years as an insider, I can say one thing with certainty:  awarding contracts to the lowest bidder is optimistic at best, and dangerous at worst.

So why do it?  Why award software contracts based on price?

There are many reasons.  To pick just one: decision makers believe it works in "real" engineering - making bridges, roads and buildings - so it should work for software too. But it doesn't. Although price-based tendering is widely used in engineering, it's not widely successful!

The Truth about Engineering

A friend used to give me her old copies of e.nz, the magazine of New Zealand's Institution of Professional Engineers, due to my strong interest in what was going on in "real" engineering (of the buildings and bridges kind).  In 2005 I was struck by an article which stated that "the indiscriminate urge for lowest price" leads to "late and unsatisfactory completion, disputes and litigation". It sounded almost like software, and I've been thinking differently about this ever since.

The New Zealand Construction Industry Council said that "the lowest-bid approach is compromising design quality and integrity, health and safety, training, the environment… [Furthermore,] the lowest bid approach encourages unsustainable markets".

Discussing solutions to these problems, e.nz explained that: "A substantial culture change is involved in progressing towards best practice in procurement. All stakeholders must accept that they have a duty to collaborate towards a common objective: the full satisfaction of the project's objectives, in exchange for fair rewards for all who contributed."

Why is the Lowest Price Dangerous?

The lowest price is dangerous because you don't know why it's low. In the software industry a company may offer a low quote for any of the following reasons:

  1. They have misunderstood the difficulty of the task.
  2. They understand the task, but the estimate is low simply because software estimation is inherently difficult.
  3. They understand the task and are deliberately bidding low because they're eager (or desperate) for your business.
  4. They understand the task and have outstanding skills and technology which will allow them to complete it quickly.

Only the last reason is a good one. The others, to greater or lesser degrees, may all threaten your project. Yet in terms of price, they look the same.

So How Do You Tell the Difference?

You have to base your decision on assessments of the supplier's capability: the quality of their staff, the quality of their technology, their openness and honesty with customers and their overall track record.

Price tells you nothing about capability. A low price may signal superior capability (reason 4), inferior capability (reason 1), desperation (reason 3), or nothing (reason 2).

A Better "Real" Engineering Practice to Emulate

If we want to emulate "real" engineering, we would do better to take our lead from the US federal government. Since 1972, price-based selection for engineering services has been prohibited for federal government agencies, with selection based on supplier quality instead. It should be pointed out that this only applies to design services in "hard" engineering. Interestingly, until the 90s software was specifically required to be procured on price - in an attempt to break the monopoly IBM was perceived as having.

The non-price approach has proved highly successful and has been widely emulated by state governments in their own procurement of design services for buildings, bridges and other engineering works.

The name of this approach is "Qualifications-Based Selection" (QBS), because it's all about choosing the vendor who is best qualified (most capable) to perform the work.

QBS reflects an understanding of the risks noted above - that price tells you nothing about capability.  It may also reflect an awareness of just how embarrassing price-based selection would be after a major engineering disaster.

Imagine a TV interview shortly after the collapse of a bridge:

Reporter: So how did you award the contract?

Official: We… errrr…chose the cheapest.

Indeed.  It will certainly be interesting to see the answer to that question during the Novopay review.  Not to point the finger at any individual of course, but to question the very process we use to award IT contracts in New Zealand.

Could We Use Qualifications-Based Selection (QBS) in NZ?

QBS is ideally suited to public sector procurement.  The American Institute of Architects, California Council, says:

"[The] process is straightforward and easy to implement. It is objective and fair. It can be well documented, and it is open to public scrutiny.

"QBS meets the public owner's primary concerns to get the best available professional services for the taxpayers' money and to conduct a fair and equitable selection process."

If this can be emulated, it certainly sounds like a good formula.

How does QBS work?

QBS can be summarised as follows.

  • During the tender process the customer can ask any question they like… except price. They must choose the best, most competent supplier based entirely on non-price criteria.
  • Once they have officially selected a preferred supplier, only then are they allowed to begin price negotiation with that supplier.
  • If it proves impossible to reach agreement on price then, and only then, are they allowed to abandon their first choice and start discussing price with their second choice.

Procurement in New Zealand

To New Zealand ears, this all sounds a little odd. We like to think that making suppliers compete with each other on price is a major goal of the tender process.  Make the vendors compete, then lock the winner into their quoted price.  That's our recipe for success.

But it's a lie.  After 15 years on the inside, I know it's a lie.  In the industry, we all know.  But we dare not say, because our sales and livelihoods depend on playing by the rules of the game; no matter how flawed that game might be.

But with Hon Steven Joyce now hinting at a review of how tendering works for software in Government, perhaps it's finally time for this to change.

Once in a while, we're lucky.  We get a stark reminder that nailing a vendor down to a price does not guarantee (or even encourage) project success. Will the latest example of this prove to be Novopay?  Time will tell.

If it does and we're really lucky, it might also be a catalyst for change.

John Rusk is a senior software professional based in Wellington.


Comments


Cain Duell 20 February 2013, 4:43 pm

Hi John,

Good article. It did, however, get me pondering the question of how often software is actually selected on the lowest price.

Having sat on the opposite side of the fence many times, on selection panels, I can say that price is generally just one factor in the decision model.

Selection criteria are generally made up of: fit to requirements, vendor capabilities and experience, architectural fit and, of course, price.

Where software differs from bridges can be summarised by ROI (Return on Investment).

How confident am I that I will achieve my ROI if this vendor is selected? Price is an integral part of this calculation, but not the be-all and end-all. Being able to realise the business benefits outlined in a business case or similar is "the reason for the season".

I am confident that this is a common approach and would be genuinely surprised if Talent2 was selected solely on lowest price.

John Rusk 20 February 2013, 9:35 pm

Hi Cain,

Do you think it matters how big the price difference is? If two bids are within 5% of each other, then almost certainly price won't decide the winner. But what if the difference is 25%? Or what if the higher bid is 50% or even 100% above the low one? With a difference of that magnitude, and the appearance of competence from the low bidder, who would choose the higher one?

In other words, if a price difference is sufficiently large, does it trump the other excellent criteria you mention?

The problem is, the factors I outline in the article can easily lead to differences in the 50% to 100% range. In fact, they often do. (Practically all published material on software cost estimation supports my view on this point.)

So why do we see tenders where the bids cluster in a much narrower range? I believe, and I'm not alone in this belief, that the reason is as follows: during the sales process, sales people typically seek to discover the customer's pre-approved budget. In fact, one estimation researcher found that "knowledge of the customer's budget" was cited by vendors as the most important input to their estimation process (!). Having obtained knowledge of the budget, the sales person then uses their superior skills of persuasion (sometimes consciously and sometimes unconsciously) to influence their employer's estimation and pricing process... such that the quote comes out fairly close to the customer's budget. This factor, perhaps more than any other in my view, hides the magnitude of the pricing problem from those who make decisions and policy.

In summary: if you get a bunch of bids within 10% of your budget, it doesn't mean that expert estimators agree with your budget, but merely that expert sales people have discovered it.

Jan Wijninckx 21 February 2013, 6:38 pm

Hi John,

You raise valid points here, and with what I wrote below re CMMi in mind here is my CMMi take on what you raise:

1. What if the prices vary a lot? Well, as you may have seen from the CMMi pointers I provided, this is quite likely. A level 2 vendor may quote 25% over the level 1 vendor, so the level 2 vendor in NZ sadly would be ruled out (even though the cheaper price is, by the bell curve, likely to come in at +100%!)

2. And what if the prices are the same? Well, that may be accidental, or it may indeed be a sales adjustment as you suggest.

So how can you evaluate? A function point analysis will yield a range in which you'd expect the pricing, timelines, *and* resourcing to fall. Anything outside the upper and lower bounds is suspect. The next question is: what is the capability maturity ("quality")? If vendor 1, at CMMi level 1, has the same price as vendor 3, at CMMi level 3, then select vendor 3: the probability that vendor 3 will deliver as forecast is, by the bell curve, close to certainty, whereas vendor 1 would, by the bell curve, likely deliver at twice the time/cost!
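
That two-step screen - flag bids outside a function-point-derived range, then prefer higher capability maturity among the survivors - can be sketched in a few lines. All the bounds, prices, and CMMi levels below are invented for illustration only, not drawn from any real tender:

```python
# Illustrative sketch of the two-step screen described above:
# 1) flag bids outside the bounds a function point analysis would yield,
# 2) among in-range bids, prefer the highest CMMi level (price breaks ties).
# All figures are made up for illustration.

def screen_bids(bids, low_bound, high_bound):
    """bids: list of (vendor, price, cmmi_level) tuples."""
    in_range = [b for b in bids if low_bound <= b[1] <= high_bound]
    suspect = [b for b in bids if b not in in_range]
    # Higher capability maturity first; lower price breaks ties.
    ranked = sorted(in_range, key=lambda b: (-b[2], b[1]))
    return ranked, suspect

bids = [
    ("Vendor A", 8_000_000, 1),   # cheapest, but only level 1
    ("Vendor B", 10_000_000, 2),  # 25% dearer, level 2
    ("Vendor C", 18_000_000, 3),  # outside the FP-derived range
]
ranked, suspect = screen_bids(bids, 7_000_000, 13_000_000)
print(ranked[0][0])  # Vendor B wins despite not being cheapest
print(suspect)       # Vendor C flagged for scrutiny, not auto-rejected
```

Note the point of the sketch: the cheapest in-range bid does not win automatically, and an out-of-range bid is flagged for investigation rather than silently accepted.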

Ron Segal 20 February 2013, 5:30 pm

John, from many experiences, you are on the money with your reasoning about price-based tendering being at the root of many project failures.

Even without your recommended 'qualifications based selection', which is a great idea and certainly not 'whacky', it's difficult to fathom why, when there is a significant difference in price, the underlying reasons for this are hardly ever explored!

In my observation, Novopay is just the exposed tip of an iceberg of major computer systems that have been delivered with some kind of significant, costly, though less publicly overt failure, which can likely be traced back to price-based tendering.

Jan Wijninckx 20 February 2013, 9:50 pm

Hi John

Here is another take on what you call "qualifications based selection" - it's known as the CMM and has been around since 1991 (now CMMi). The CMMi provides a framework to assess the capability maturity - i.e. "qualification" - of a software development organisation. Organisations at level 1 will, on average, deliver at twice the estimated time/cost (it could also be 3x, or outright failure!). A level 2 organisation will estimate a higher cost than a level 1, but typically deliver to +/- 10% of that. A level 3 organisation will deliver as forecast, and the forecast will be of the same order as a level 1's. If you select on price alone, you rule out level 2 organisations. Because this happens all the time in NZ, there are no level 3 organisations except for EDS, now HP (you get up to level 2 and you lose business, so it is better to fly by the seat of your pants?).

The other way to validate a tender price is to use quantitative software estimation techniques. This is an almost lost art. It's called Function Point Analysis, and it would have been extremely applicable to the Novopay system. Function points are the equivalent of "meters of software", and include a quality factor, resulting in an overall count. The FP count can be used in statistical models such as the Construx estimator to yield low, likely and high estimates; the probability of coming in early, on time or late; likely defect rates; and so on. With function point or ESLOC counting, you'd have a strong reality check on the tendered price.
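
As a toy illustration of that reality check: using the rough $1000-per-function-point figure mentioned later in this thread, a counted system can be turned into a price band, and a tendered price checked against it. The +/-30% band below is purely an assumed illustration, not output from any calibrated estimation model:

```python
# Toy reality-check on a tendered price from a function point count.
# Rule of thumb quoted in this discussion: roughly $1000 per function point.
# The +/-30% uncertainty band is an illustrative assumption only.

def fp_price_range(fp_count, cost_per_fp=1000, band=0.30):
    """Return (low, likely, high) dollar estimates for a given FP count."""
    likely = fp_count * cost_per_fp
    return likely * (1 - band), likely, likely * (1 + band)

def check_tender(price, fp_count):
    """Flag a tendered price that falls outside the estimate band."""
    low, _, high = fp_price_range(fp_count)
    return "plausible" if low <= price <= high else "suspect"

low, likely, high = fp_price_range(8000)
print(low, likely, high)              # roughly 5.6M / 8.0M / 10.4M
print(check_tender(4_000_000, 8000))  # suspect: well below the band
```

A bid far below the low bound is exactly the "you don't know why it's low" case from the article: it could be superior capability, or a misunderstood task.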

Here's a real-life example: in the late 90s, Dutch KPN (Telecom) multiplied the tender value by a factor reflecting the CMMi-assessed capability. Thus CMMi level 1 organisations couldn't compete (as their adjusted price would be higher than a level 2's), and you drive higher capability into the market.
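
That KPN-style adjustment amounts to scaling each tender price by a per-level risk multiplier before comparing bids, so a low-capability vendor's cheap quote is judged at its likely real cost. The multipliers below are illustrative assumptions, loosely echoing the "level 1 delivers at roughly 2x time/cost" figure quoted above, not KPN's actual factors:

```python
# Sketch of a capability-weighted tender evaluation: scale each price by a
# per-level risk multiplier before comparing bids. The multipliers are
# illustrative assumptions (level 1 ~2x overrun, level 2 ~+10%, level 3 ~as
# forecast), not the actual factors any buyer used.

RISK_MULTIPLIER = {1: 2.0, 2: 1.1, 3: 1.0}

def evaluated_price(tender_price, cmmi_level):
    return tender_price * RISK_MULTIPLIER[cmmi_level]

bids = {"Level1Co": (8_000_000, 1), "Level2Co": (10_000_000, 2)}
adjusted = {v: evaluated_price(p, lvl) for v, (p, lvl) in bids.items()}
winner = min(adjusted, key=adjusted.get)
print(adjusted)  # Level1Co evaluates to roughly 16M, Level2Co to roughly 11M
print(winner)    # Level2Co
```

The effect is the one Jan describes: the nominally cheaper level 1 bid loses once its expected overrun is priced in.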

Now to the issues with NZ government tendering. The government doesn't have anyone to do CMMi assessments, and the only two people capable of it in the NZ market are small operators who don't get a look-in, as we don't make the "panel". As for Function Point Analysis and quantitative estimation, there will be a few more in the market, but it is a skill equivalent to that of a financial actuary, and you can't teach a junior to do it. Again, the people who can do it are small operators and don't make the panel.

Last but not least, the government has adopted PRINCE2 and believes that this is all you need to achieve good projects. We can see that that is not true in Novopay's case, and the reason is simple: PRINCE2 covers only a portion of the software engineering capability necessary to operate at level 2 ("qualification"). You can do P3M3 and Gateway reviews till the cows come home, but the issue remains: you don't cover the essentials of requirements, testing and design, plus the organisational processes to ensure delivery as estimated, because those processes and practices are out of scope of the PRINCE2 methodology.

Taking the above into account, let's look at Talent2. Who in their right mind would award them the contract? They are a recruiter, and don't have the processes to manage and control the efficient delivery of software; their CMMi capability maturity would be level 1. Who in the buying organisation has done a Function Point Analysis? The fact that the system costs several tens of millions means it is more than 8000 function points. We know from the estimation tools (which incorporate data from 30,000+ projects) that systems of 8000+ FP have a 2% probability of ever being delivered. At 1 day or $1000 per FP, this tallies with what the Dunedin researchers behind the book "Dangerous Enthusiasms" found: over $10M of people-time to develop a system has a very low success rate, because a high function point count is equivalent to eating a large elephant in one go.

So all the pointers to failure were there, except that the PRINCE2 tick-box people had some dangerous enthusiasms - and some low capability in selection.

So should we do "qualifications based selection" as you suggest? Yes, if that means using the CMMi, which has been validated for 20+ years, time and time again, on tens of thousands of projects by metricated organisations.

For those of you who want to read some more, start here:

bit.ly/XoqaIU

Jan Wijninckx 20 February 2013, 9:53 pm

Sorry about all the symbols - somehow one can't do a copy/paste from MS Word.

Paul Matthews 20 February 2013, 10:12 pm

Sorry about that, Jan. Have fixed the text and raised an issue with the dev.

Jan Wijninckx 20 February 2013, 10:20 pm

Wow, that was a fast turnaround - less than five minutes. Thanks Paul!

John Rusk 20 February 2013, 10:08 pm

Jan,

Do you have links to any publications which have research-based data on the extent to which CMMi can narrow the range of estimates? If so, could you post a few in a comment here? (I checked out the link you posted to your company, but the page didn't have any research-based numbers.)

PS I know what you mean with copy and paste from Word. I had the same problem with my comment above. Got the gist of yours fine tho.

Jan Wijninckx 20 February 2013, 10:29 pm

Hi John,

The whole premise of the CMMi levels is that they correspond with a stepped improvement in time/cost performance. It is a well hidden story, which you can only find in the original work of the CMM v1.1. You find the distilled explanation here:

bit.ly/XuNFlt

There is also a real life story on CMM tracking a real company, here

bit.ly/11UzQSY

To other readers - please ignore any marketing on these pages - forget about my company - just focus on the messages in the resource pages.

Jan Wijninckx 20 February 2013, 10:42 pm

Hi John,

Found it again, the original correlation:

www.sei.cmu.edu/reports/93t...

pages 23 & 24

Jan Wijninckx 20 February 2013, 10:53 pm

Hi John

And here is the precursor report of the analysis of project data, which led to the CMM v1.1 (first publication) in 1993

www.sei.cmu.edu/reports/92t...

Note that the metricated approach was lost when a whole bunch of consultants got on board who interpreted the CMM as a type of body of knowledge, similar to the PMBOK or ITIL (don't get me wrong, those are excellent pieces of work). However, the CMM is more than just a qualitative expression of what a group of professionals believe makes for good software development; rather, it correlates the processes and practices which make a measurable difference (a step up) in time/cost performance. I have a more elaborate interpretation of this at:

bit.ly/UGHBb0

John Rusk 21 February 2013, 8:34 am

Hi Jan, I'm at work now so don't have time to read the new links. I understand however that you're not at all opposed to QBS, just suggesting one way it might be implemented, right?

Jan Wijninckx 21 February 2013, 6:26 pm

Hi John, indeed suggesting an implementation.

