Thursday, April 07, 2016

Benchmark Series IV - Using Benchmarks in Estimating

The third post in the Benchmark series looked at the outputs that can be achieved once robust, quality-assured data is captured in a system designed to consolidate and present the learnings from that information. It is possible, however, to do more than re-purpose benchmark information for sense checking and comparative explanation: it is possible to generate new information. Two game-changing uses of benchmark information are the generation of new project cost estimates and the phasing of those costs over time.

The goal of building parametric models that combine benchmark averages with key design parameters is to provide decision makers with quick-turnaround information that drives investment decisions. Property developers looking at new greenfield sites, or at the re-development or refurbishment of brownfield sites, can get a quick cost comparison between these ideas or opportunities. And by quick we're talking minutes.

Our involvement in the Global Unite project at AECOM has demonstrated that using large data sets to drive robust averages can lead to remarkably accurate early stage estimates (plus or minus 10%). To clarify, large data sets are those with more than 100 projects whose cost data has been verified. These models work by assigning a key driver to each element within the estimate (typically a floor rate, though for some elements it may be an elemental rate derived from benchmark ratios for quantities). By entering the likely floor area of the development and then tweaking certain elements for the design parameters of the site (e.g. substructure complexity, hydraulics and electrical requirements), a knowledgeable cost manager can provide a project cost estimate in minutes instead of weeks.
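To make the mechanics concrete, here is a minimal sketch of a floor-rate-driven parametric estimate. The element names, rates and adjustment factors are invented placeholders for illustration, not actual Global Unite benchmarks.

```python
# Sketch of a floor-rate-driven parametric estimate.
# All rates below are hypothetical placeholders.

# Benchmark averages per element, expressed as cost per m2 of
# gross floor area (GFA), derived from a verified data set.
BENCHMARK_FLOOR_RATES = {
    "substructure": 180.0,
    "superstructure": 620.0,
    "hydraulics": 95.0,
    "electrical": 140.0,
    "finishes": 210.0,
}

def parametric_estimate(gfa_m2, adjustments=None):
    """Estimate project cost from floor area.

    adjustments maps an element name to a multiplier the cost
    manager applies for site-specific design parameters, e.g.
    {"substructure": 1.3} for difficult ground conditions.
    """
    adjustments = adjustments or {}
    total = 0.0
    for element, rate in BENCHMARK_FLOOR_RATES.items():
        factor = adjustments.get(element, 1.0)
        total += gfa_m2 * rate * factor
    return total

# A 12,000 m2 development on a site with complex substructure
# and heavier-than-average electrical requirements.
cost = parametric_estimate(12_000, {"substructure": 1.3, "electrical": 1.1})
print(f"Early stage estimate: ${cost:,.0f}")
```

In practice the rates would be the benchmark averages for the filtered sample of like projects, and the multipliers are where the cost manager's judgement comes in.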

Critical success factors to this parametric model are:

1) A large enough data set to generate a true benchmark average. Many estimators worry about the outliers in a data set, but it is the outliers that make an average a true average: they cancel each other out and provide the standard deviation that assists in tweaking the estimate to the specific vagaries as they are known at the time of estimating.

2) A data set that includes elemental quantities, and hence benchmark ratios such as wall-to-floor or windows-to-floor.

3) Meta data classification that allows the end user to reduce the global data set to a sample of similar projects that fit the current characteristics of the early stage investment idea. These don't have to be too specific but should at least include the sector, project type, asset type, work type, location and estimate date.

4) Solid indexing data to translate cost information relating to one location at a certain point in time to a different location at a future point in time (a simple indexing calculation is sketched below).
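As an illustration of factor 4, location and time indices can be applied as a simple ratio. The index values here are invented for illustration only; a real implementation would draw on published tender price and location indices.

```python
# Sketch of translating a historical cost to a new location and
# date. Index values are invented for illustration only.

# Tender price index by (location, year): higher means costlier.
TENDER_PRICE_INDEX = {
    ("Sydney", 2014): 100.0,
    ("Sydney", 2017): 109.0,
    ("Melbourne", 2014): 94.0,
    ("Melbourne", 2017): 102.0,
}

def rebase_cost(cost, from_loc, from_year, to_loc, to_year):
    """Scale a historical cost by the ratio of the target index
    to the source index."""
    source = TENDER_PRICE_INDEX[(from_loc, from_year)]
    target = TENDER_PRICE_INDEX[(to_loc, to_year)]
    return cost * target / source

# A $42m Sydney project from 2014, re-expressed for a Melbourne
# investment planned for 2017.
print(f"${rebase_cost(42_000_000, 'Sydney', 2014, 'Melbourne', 2017):,.0f}")
```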

These four things are where cost consultancies can differentiate themselves, as their past experience and knowledge drive their capability to provide parametric estimates.

The same requirements exist for the other key new knowledge gained from benchmarking: cash flow phasing. Utilising the cost phasing profiles of projects that have actually happened is key to getting an accurate estimate of how funds will need to be drawn down on a new investment.

A significant advantage of the cash flow phasing capability is that the value of past projects is irrelevant (hence no need to index costs); what we're interested in is the way funds were spent, i.e. what percentage of total project cost was spent in each month from start to finish. This profile will again be accurate for like investments, and again a pool of 100 projects or more evens out the poor performers with the good to give an accurate benchmarked phasing profile. The simplicity and elegance of this solution is so strong that it is surprising no-one (to our knowledge) is utilising it in the market to date.
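A minimal sketch of the idea: normalise each past project's monthly spend to a percentage of its total cost, then average across the pool. The two toy projects below stand in for a pool of 100 or more, and we assume each project's spend has already been resampled onto a common month scale.

```python
# Sketch of deriving a benchmarked phasing profile. Assumes each
# project's monthly spend has been resampled to a common length.
# The figures are toy data, not real project costs.

def phasing_profile(projects):
    """projects: list of monthly spend lists, all the same length.
    Returns the average share of total cost spent per month."""
    normalised = [
        [month / sum(spend) for month in spend]
        for spend in projects
    ]
    months = len(normalised[0])
    return [
        sum(p[m] for p in normalised) / len(normalised)
        for m in range(months)
    ]

projects = [
    [1.0, 3.0, 6.0, 8.0, 6.0, 2.0],   # $m per month, project A
    [0.5, 2.5, 5.0, 7.0, 4.0, 1.0],   # $m per month, project B
]
for month, pct in enumerate(phasing_profile(projects), start=1):
    print(f"Month {month}: {pct:.1%} of total cost")
```

Because everything is expressed as a percentage of total cost, projects of any value can sit in the same pool, which is exactly why no cost indexing is needed.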



More on cash flow phasing using benchmarked data can be found in our recent blog post and in the demonstration of this feature in UniPhi's product suite from our recent webinar on the topic.
