
FF24 - Data Rich Future

A response to a call for discussion on questions to be put at the Australian Contractors Association conference 2024.

John Lowry LFAIQS, CQS, ICECA, Reg. Adjudicator (Qld)

(Courtesy of Martin Paver - Projecting Success)

Key Takeaways

  • Will the commercial model for construction procurement change?


  • Before 1985, the most popular procurement system for construction was lump sum tendering, based on full design documentation.  Tenders were supported with rules-based, shared financial (cost and time) data.  The data provided for ready comparison of bids and served as the basis for orderly, effective contract management.


  • Where construction management contracts were adopted on complex and fast-track projects, similar methodology was applied to subcontracts.  Whilst this method required more management, it provided penalty-free flexibility for clients.


  • After 1990, there was a drive towards packaging risk, driven by risk-averse contracts.  Design, management and construction risk were placed with a single entity, usually a coordinating contractor.  The system moved towards clients buying a finished product (with conditions).  As a result, design and contract management data became corporatised.  Documentation, including critical management (cost and time) documentation, became minimised, decentralised and ad-hoc, often passed to subcontractors, many of whom are not equipped with the skills to produce verifiable, repeatable, shareable data.  Systems are disparate, or non-existent.  In this environment, it is much more difficult for businesses to produce reliable, repeatable results on major projects.  This is evident when governments and major clients decline to proceed with major projects because of cost/time mistrust.  The media amplifies these concerns with every major project cost overrun.  Whilst this phenomenon is not unique to Australia (How Big Things Get Done - Bent Flyvbjerg & Dan Gardner), we cannot ignore its impact.


  • We are entering a new era of the sharing economy.  In the new economy, businesses and profits are built on leveraging and adding value to vast volumes of data.


  • As the marginal cost of the transformation of construction, from which profits are derived, approaches zero, profitability in the risk-trading / whole-product model will become more difficult to find.


  • An alternative, more profitable model for construction companies is possible.  This will involve a shift to providing a high-value construction service, either from assembled teams, or from large full-service corporations.


  • In order to offer reliable, repeatable performance, it will be critical for all businesses to implement robust, repeatable systems throughout the entire process.  This requires verifiable, repeatable, rules-based shared financial (cost & time) data.


  • In order to improve the value of the service, and prepare for an automated, digital future, we must reconsider existing clumsy, slow vertical contract management processes, in favour of “super-flat”, instant-response processes.


  • In order to build trust with clients, continue to add value, improve efficiency, and  improve the accuracy and predictability of our service, we must work towards wide-area sharing of publicly available data at many levels (see attached WBS).


  • Better, more accurate, predictable results, both individually and industry wide will require standards for and implementation of rules-based, shared financial (cost & time) data.


  • It is recognised that better data and contract management is likely, in the short term, to impose a cost on clients.  The advantages are that a) the management cost is disclosed up-front, and therefore able to be included in budgets, and b) outcomes will be much more predictable and certain.


  • Business process automation is available now. It will create the foundation for future technology adoption, whilst creating significant immediate efficiency gains.


Measuring productivity

We know that productivity has declined, because the Australian Bureau of Statistics regularly calculates and publishes labour and multi-factor productivity statistics for a range of industries, including the construction industry. How these figures are calculated is a mystery to all but the most wonkish statisticians and economists.


This paper poses a series of questions framed around the promise of the breathtaking world of AI [described by Alvy Ray Smith (co-founder of Pixar) as a marketing pitch - Smith entered the AI program at Stanford University in 1970, where he received his Ph.D. in computer science for a dissertation on cellular automata].

“One of the valid complaints about some AI systems is that they make stuff up, with confidence, and without sourcing, and then argue when challenged.  Unsurprisingly, this sounds a lot like people.  We often end up with what we are willing to tolerate” (Seth Godin)


This is not to discredit the power or promise of AI, in its many guises.


AI is much more than popular deep-learning Large Language Models (LLMs).  It will soon transform our work in many ways, together with many other process automations, both business and physical.  It’s just that we must understand its power and respect its limitations.


The advances in AI are driven by the exponential growth in computer power (Moore’s law).  Quantum computing promises to further accelerate the power of computers by unimaginable multiples, by enabling many computational paths to be explored simultaneously.  But the basic systemised, stepped, logical process remains the same.



Our challenge is to prepare for this future

# Imagine when we leverage the data we collect to define a new range of measures that everyone agrees are indicators of construction productivity; measures that you don’t need a maths degree to calculate; and measures that can be used to benchmark and track performance improvement.

The measures of productivity are efficiency & effectiveness, though the most important measure must be satisfied, trusting clients - clients who are prepared to pay a premium for our combined service that adds clear value to their business and the community.


However, it is dangerous to equate improved productivity with improved profitability, since profitability will be harder to find as the net cost of transformation moves closer to zero.

For this reason, the “risk-trading” model of construction (offering a completed product), which has become the preferred model over the past forty years, will become more and more difficult to sustain, since profit in that model derives from the diminishing value of inputs to the transformation from an idea to reality.


Future profitability is more likely to derive from a service that clients see as sufficiently valuable to pay a good premium for.  This service is the transformation of an idea (design) into reality, through a trustworthy process that guarantees a high-quality product, delivered on time and within budget. (i.e., we deliver on our promises).


Where to begin this journey?

First, we must identify, and agree on, the blockages inhibiting process efficiency and effectiveness, then create frictionless platforms and processes to prepare for, and take advantage of, new and emerging technologies.


This will be much more difficult than simply adopting new technologies, hoping for improvement, because we must leave the comfort of our long-held beliefs, culture, and processes as we move into the future.


The problem is, we don’t have time to think about it, as knowledge and computer power are exploding exponentially.


Our first and most important challenge is to prepare for change.


Data-driven decision making

Construction projects create immense amounts of data but use very little of it, particularly once a project is complete. Data such as engineering calculations, project costs, equipment usage, hours worked, identified defects, embodied carbon, weather conditions, utility locations and ground conditions currently either end up in a contractual claim, in the hard-to-reach recesses of a contractor SharePoint site, or printed off for a client and filed in a dusty storeroom.

Rarely is this information used to identify process improvement opportunities, improve forecasting or refine designs. Nor is this data used to understand performance, to identify how well or how efficiently a project was delivered or the extent to which outcomes were achieved. This failure to leverage data is one of the reasons (but not the only reason) why the construction industry is stuck in the past and productivity is now worse than it was thirty years ago.

These remarks are a sad commentary on our times, and a clear message that knowledge can be lost without nurturing and valuing.  Like a tree, it can grow and thrive, but without tending it will die.


The siloing and randomising of valuable data in corporate databases is a phenomenon that has occurred over the past forty years, as businesses of all kinds believed that owning data gave them power and competitive advantage.


Prior to that time, construction data was more orderly, rules-based and, as a minimum, shared at project level.  Some data, including cost data, was analysed, aggregated and shared for professional use.


Project costs, plant usage, labour usage and other cost data can be collected in any modern estimating system.  These systems integrate readily with current business process automation, though adopting these technologies will rely on more seamless contractual processes.


Our challenge is not only understanding the value and importance of creating large, shared data libraries, but creating verifiable, shareable data.  Unlike LLMs, with unlimited free access to publicly available data, the structuring, verifying and sharing of project data must be agreed on an industry-wide basis.


Weather data is readily available from BoM, though rarely used, because the answer tends to be inconvenient.  However, with simple automation, together with contractual change, we can demonstrate significant advantages with no new technology.
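As a simple illustration of that kind of automation, the sketch below counts claimable wet-weather delay days from a daily rainfall file of the kind BoM publishes.  The file layout, column names and the 10 mm threshold are assumptions for the example, not a BoM interface.

# Minimal sketch: count claimable wet-weather delay days from a daily
# rainfall CSV (e.g. exported from BoM climate data pages). The file
# layout, column names and threshold are illustrative assumptions.
import csv
from datetime import date

RAIN_THRESHOLD_MM = 10.0  # hypothetical contractual threshold for a lost day

def claimable_delay_days(csv_path, start, end):
    """Return dates within the contract window whose recorded rainfall
    exceeds the agreed threshold."""
    delay_days = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            day = date.fromisoformat(row["date"])      # e.g. "2024-03-17"
            rainfall = float(row["rainfall_mm"] or 0.0)
            if start <= day <= end and rainfall > RAIN_THRESHOLD_MM:
                delay_days.append(day)
    return delay_days

# Usage: claimable_delay_days("site_rainfall.csv", date(2024, 1, 1), date(2024, 6, 30))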


We estimate that simple, achievable process changes to data collection, variation management and payment management, with existing technologies, will generate savings of up to $15 billion per annum, across the industry.




Quantity surveyors are currently leading the industry on measuring embodied carbon. Automation will rely on the continued development of object definition and libraries.


# Imagine when we can use historical data that draws on data from across the sector, to accurately predict when a project will complete and how much it will cost at any stage in the project.

We do not have to imagine this day.  We remember it.  The creation, collection, analysis and use of financial data for prediction and management was a common feature of the process until as late as 1990.  Quantity surveyors created independent, rules-based, shared financial data that provided for fair, orderly procurement; predictable, orderly, efficient contract management; and accurate project completion cost forecasts.


Reliable, rules-based collected data in structured formats provided for accurate cost forecasting from feasibility to completion.


Sharing this data in large, publicly accessible databases, for use by cost management professionals, will improve the accuracy and predictability of business plans and project cost planning.


All pre-contract documentation was produced and provided once, for all contractors, rather than by every contractor and subcontractor invited to bid.  One set of consistent, accurate, reliable, rules-based documentation was replaced by 200 ad-hoc, inaccurate sets of data that cannot be used beyond their initial function.


Those who complain about the high cost of tendering were responsible for this shift in bid costs to contractors, consultants and subcontractors.


Whilst current and emerging technologies will add to the ability to collect, analyse and utilise cost data for multiple purposes, the known structures can form the foundation for future development and automation.


This will include, importantly, agreed standards for object definition, which is the bedrock of automated data creation and analysis.


# Imagine when we can use historical data to quickly compare the impacts of different design options.

We do not have to imagine this day.  We remember it.  Whilst BIM technologies can aid the process, comparing cost and time alternatives requires a move away from the clumsy contract management processes in use.


It also presupposes the ability to move seamlessly from an idea to a “construction-ready” change.  This is not yet automated or integrated through BIM, and may not be the optimal process for the foreseeable future.


We proved the effectiveness and accuracy of alternative methods for comparing design options in 1983.  It relied on a cooperative design/management/construction process, shared financial and time data, and the advice of professional team members, rather than the clumsy vertical processes currently in use.


Changes to a BIM model will, one day, allow us to automatically compare the cost/time effect of change.  That day is still some time off, until complete object assemblies are fully automated.  Even so, most modern quantity surveyors’ estimating systems can already report on cost comparisons and changes to BIM models, through object assemblies created within their systems.  However, the measuring process required to create object assemblies is still relatively manual and project-specific.


# Imagine when we have a complete digital twin of all underground utilities that can be used by all parties in project design and delivery.

This is a BIM / spatial sciences integration challenge.  In my recent experience, the creation of as-constructed electrical services is still at a very early stage of development, where as-constructed surveys are completely separated from engineering design.  A significant reason for the lack of progress in this area is contractual responsibility: electrical contractors, who have no interest in the long-term benefit, are required to undertake as-constructed surveys.


Governments and clients have the most to gain from this integration, if they understand the value of owning, leveraging and sharing this data.


As with all facilities management, the legacy question is a major challenge.  Since underground utilities are mostly a public/government responsibility, it will require database integration and open sharing of digital documents.


Separate strategies will be required to prioritise the collection and collation of legacy data.


# Imagine when we can use data from construction plant to improve haul routes and reduce idle times and diesel usage in real-time.

Mining and logistics industries are well advanced in this area.  It is a business process automation / integration challenge that does not require any new technology.  These technologies are readily shareable with construction.
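A minimal sketch of the kind of aggregation involved, using hypothetical telemetry records - the field names and figures are illustrative, not any vendor’s format:

# Minimal sketch: aggregate hypothetical plant telemetry records into idle
# hours and idle fuel burn per machine. Real IoT feeds would arrive via a
# vendor gateway or message queue; records here are hard-coded examples.
from collections import defaultdict

telemetry = [
    # (machine_id, status "hauling"/"idle", duration_h, fuel_l)
    ("EX-01", "idle",    0.75, 9.0),
    ("EX-01", "hauling", 3.50, 84.0),
    ("DT-07", "idle",    1.25, 15.0),
    ("DT-07", "hauling", 5.00, 160.0),
]

idle = defaultdict(lambda: {"hours": 0.0, "fuel_l": 0.0})
for machine, status, hours, fuel in telemetry:
    if status == "idle":
        idle[machine]["hours"] += hours
        idle[machine]["fuel_l"] += fuel

for machine, totals in sorted(idle.items()):
    print(f"{machine}: {totals['hours']:.2f} idle hours, "
          f"{totals['fuel_l']:.1f} L burned while idle")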


Our company developed a business process/workflow integration platform from 2010 that is very easily programmed to interface with IoT devices, systems and people to collect, distribute, analyse and implement any of these tasks.



Economic theorist Jeremy Rifkin discusses the IoT here >>


Questions asked by FF24

My knowledge area, since 1970, is the creation and use of cost and time data for construction. In 2005 we turned to IT development to automate business processes.  Whilst recognising advancements in BIM, AI and physical processes, I will confine my response to construction cost & time, and business process automation.

What data should we collect/track?

The raw sources of construction cost data are quantity and time.

Construction costs are created from the following components (a data-structure sketch follows the list):

  • Materials,

  • Labour,

  • Manufactured components,

  • Plant & Equipment,

  • On-site overheads,

  • Off-site (business) overheads,

  • Soft costs - design, documentation, contract management.
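
As a rough illustration only, these components can be expressed as a simple, shareable data structure.  The field names below follow the list above; they are illustrative, not an AIQS standard.

# Minimal sketch: the cost components listed above expressed as a
# shareable, rules-based record. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class CostRecord:
    materials: float = 0.0
    labour: float = 0.0
    manufactured_components: float = 0.0
    plant_equipment: float = 0.0
    onsite_overheads: float = 0.0
    offsite_overheads: float = 0.0
    soft_costs: float = 0.0  # design, documentation, contract management

    def total(self) -> float:
        """Roll the component costs up into a single item or project total."""
        return (self.materials + self.labour + self.manufactured_components
                + self.plant_equipment + self.onsite_overheads
                + self.offsite_overheads + self.soft_costs)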


Rules-based, shared financial data

Knowledge about the creation and use of financial data has declined over the past 40 years.


Before that time, quantity surveying was an integral part of every major project team, undertaking cost planning from feasibility to pre-contract using collected, analysed and shared data, and preparing rules-based, shared financial data for comparative bidding, contract and subcontract management.


Contracts were written to facilitate these processes.


Innovations introduced included the integration of specifications and bills of quantities (Rawlinsons WA 1970), and the expansion of traditional BQ trades into "contract ready" supply and subcontract trade packages. (Lowry Thompson Douglas - 1980).


A significant reason for the decline in the quality of documentation is that clients were convinced that independent contract documentation services, including data creation and gathering, were not only an unnecessary expense but a cause of cost increases and uncertainty, even though accuracy levels of 95 to 99.5% were demonstrated.


Contrary to the rationale for a structured procurement process that included structured cost data, key interests believed that they could gain an advantage by owning, and siloing, critical data.


A great deal of critical cost data was not gathered at all, as data creation was pushed onto subcontractors, many of whom do not have the skills to create structured, repeatable, verifiable data suitable for sharing at any level.


This major shift has resulted in a failed, chaotic system of procurement and contract management that has culminated in a loss of capability, poor performance and a loss of client trust on major projects.


Even worse, we almost lost the systemised, structured, logical processes that computers require to operate.


The cost of time

Time is a key driver of the cost of labour, plant & equipment, and on- and off-site overheads.  It takes precedence in “high-plant” civil engineering construction.  Apart from specific project conditions and circumstances, for the purpose of estimating, the cost of time is typically built into labour and plant constants for discrete items of work; for overheads, project forecasts and estimates are based on average and estimated project (or sub-project) times.


This is an important distinction when comparing building estimating using object assemblies with time-based estimating, as the sketch below illustrates.
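
A minimal sketch of that distinction, with all rates and constants invented for illustration: time is embedded in the labour and plant constants for a discrete item, while overheads are priced directly from estimated duration.

# Minimal sketch: time embedded in item constants vs overheads priced
# from duration. All rates and constants are hypothetical.
LABOUR_RATE = 95.0               # $/hour, hypothetical
PLANT_RATE = 210.0               # $/hour, hypothetical
SITE_OVERHEAD_WEEKLY = 18_000.0  # $/week, hypothetical

def unit_rate(material_cost, labour_constant_h, plant_constant_h):
    """Rate for one unit of a discrete work item: time is embedded in the
    labour and plant constants (hours per unit)."""
    return (material_cost
            + labour_constant_h * LABOUR_RATE
            + plant_constant_h * PLANT_RATE)

def overhead_cost(estimated_weeks):
    """On-site overheads are driven directly by estimated project duration."""
    return estimated_weeks * SITE_OVERHEAD_WEEKLY

# e.g. concrete in slabs: $180/m3 material, 0.9 labour h/m3, 0.15 plant h/m3
rate = unit_rate(180.0, 0.9, 0.15)   # $/m3
prelims = overhead_cost(42)          # 42-week programme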


Plant optimisation is well-advanced in the mining world.  The collection of data is relatively simple with a combination of IoT and business process automation.  As with most other construction data, it is held in corporate silos (if at all).


Design & contract management services and costs are more difficult to define and compare, particularly where commissions are based on performance (i.e., whatever it takes).  As with construction, to be comparable, the specification and briefing of services offered must be verifiably consistent to be of any value as a benchmark for clients.


How should the data be collected and aggregated?


Computers require systematic, stepped processes.  They do not operate randomly.


One of the major advantages of computers is that they amplify the process of analysing, comparing and creating data patterns (including numbers), enabling the execution of vastly more steps than the number of instructions (A Biography of the Pixel, Alvy Ray Smith).


Whilst LLMs and other AI advances will facilitate searching and categorising data, the accuracy and usability of shared data depend on consistent, rules-based, verifiable data sources.  LLMs are at the forefront of this development because they utilise the open, shared rules of language.  No such common rules exist for construction design and cost data.


If data is to be collected and shared outside corporate silos, the industry must agree on, and implement, industry-wide rules-based data structures.  Structured data may become more fluid as LLMs grow more sophisticated, but this does not change the fact that, if data is to be shared industry-wide, it must be consistent, reliable and verifiable.
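
A minimal sketch of what rules-based verification might look like at the point of sharing.  The required fields and recognised units below stand in for whatever standard the industry agrees; they are assumptions, not an existing schema.

# Minimal sketch: verify a shared cost record against agreed rules before
# it enters an industry-wide dataset. The rules shown are placeholders.
REQUIRED_FIELDS = {"item_code", "description", "unit", "quantity", "rate"}
RECOGNISED_UNITS = {"m", "m2", "m3", "t", "no", "item", "h"}

def validate(record):
    """Return a list of rule violations; an empty list means the record
    is acceptable for sharing."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if record.get("unit") not in RECOGNISED_UNITS:
        errors.append(f"unrecognised unit: {record.get('unit')!r}")
    if not isinstance(record.get("quantity"), (int, float)) or record["quantity"] < 0:
        errors.append("quantity must be a non-negative number")
    return errors

print(validate({"item_code": "E20.010", "description": "Conc. in slabs",
                "unit": "m3", "quantity": 420, "rate": 310.50}))  # prints []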


Cost data standards were developed in Australia in the 1950s by the Australian Institute of Quantity Surveyors (AIQS), and have been amended and refined over time.  These standards for building works are contained in the Australian Cost Management Manual (Volumes 1-5) and the Australian Standard Method of Measurement, published by the AIQS.

Apart from an ageing, and little-used, Australian Standard Method of Measurement for Civil Engineering Works, the civil engineering sector in Australia has never developed data standards for use on engineering projects.  The ICE in the UK, led by Martin Barnes, the father of modern project management, produced and updated excellent standards for engineering contracts that can be adopted for Australian use.


The civil engineering sector has not developed, or adopted, common yardsticks and measures for forecasting and cost planning engineering construction projects.  Some work was produced, around 20 years ago, in the UK towards developing cost planning elements for civil engineering.


Developing suitable standards is not a difficult task, and could be accomplished through a collaboration led by the AIQS and Engineers Australia.


The base cost data for a building construction contract is developed in bills of quantities (BQ).  Typically, the levels below this - labour, materials, plant & equipment - are embedded in the pricing of BQ items, either as part of cost planning processes or as part of contractors’ bid pricing.


Whilst BIM can provide some raw materials and sub-assembly data directly to estimating systems, the transformation into useable object assemblies, with attendant labour, plant and on-costs, remains a relatively manual process.


Modern estimating systems, since 1985, have provided for materials costs and quantities, labour hours and costs, and plant constants to be aggregated and reported separately.


Once again, engineering bills of quantities are mostly ad-hoc, poorly drafted and not to any recognised standard.  They are not suitable as a foundation for cost data collection in their current state.


These standards are ready and available to form the basis for rules-based, shared financial data.


Object Assemblies - Bills of Quantities

Apart from being the vehicle for financial information flows in construction contracts, and the basis for consistent, accurate, verifiable base data, BQs fulfil several functions through the life of a project, including:

  • Pre-tender budget verification;

  • Comparable tenders (trade and coordinating contracts);

  • Contract management - payment, variations, delays, etc.;

  • Tax management - depreciation;

  • Facilities management; and

  • Prediction of future project costs.


Collecting and collating this data is a cooperative process.  Best practice requires measurement by competent, qualified professionals.  Detailed pricing is carried out by trade contractors and coordinating contractors.  Modern BQs are typically published initially in “Contract Package” format for ease of buy-in and subcontract pricing at the bid or pre-construction stage, and for contract (including subcontract) management during the construction phase.


To be practically useable for comparable bids and subsequent analysis, transformation, and aggregation for a wide range of functions, the data between projects must be collected consistently, using rules-based standards to avoid variations in the data.


BQ pricing is disassembled and re-assembled, or aggregated, into a number of yardsticks for different measurement, cost planning and management functions.  These range from volume, area and other “large component” yardsticks for business and feasibility planning, through construction elements for cost planning (from preliminary to detailed verification estimates), to resource analysis.
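
A minimal sketch of that re-aggregation, rolling priced BQ items up into elemental yardsticks (cost per m2 of gross floor area).  Element codes and figures are invented for illustration.

# Minimal sketch: re-aggregate priced BQ items into elemental cost-plan
# yardsticks. Element codes, amounts and the GFA are hypothetical.
from collections import defaultdict

GFA_M2 = 12_500  # gross floor area, hypothetical

bq_items = [
    # (element_code, description, amount_$)
    ("SB", "Bulk excavation",      420_000),
    ("SB", "Piling",               960_000),
    ("UF", "Concrete in slabs",  1_850_000),
    ("EW", "Curtain wall",       2_400_000),
]

by_element = defaultdict(float)
for element, _desc, amount in bq_items:
    by_element[element] += amount

for element, amount in sorted(by_element.items()):
    print(f"{element}: ${amount:,.0f}  (${amount / GFA_M2:,.2f}/m2 GFA)")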


Further aggregations are readily produced for facilities management, planned maintenance and replacement planning, sinking funds and tax requirements.


I developed the “Australian System of Building Work and Cost Definition” (attached), adopted by the Australian Institute of Quantity Surveyors in 2005, as a guide to current and future data needs.  It includes several Work Groups defined for future automation and integration, including:

  • Object Definition

  • Object Assemblies

  • Work activity

  • Room Data

  • Functional Element



Room Data standards are not developed, or if they are, there have been no attempts to integrate cost and design data.  This will be achieved with cooperation between facilities management professionals and construction professionals.  It can be achieved, by committee, at professional association level.


Functional elements are similar.  Even though there are separate standards or schedules for tax-depreciable items, sinking funds and planned maintenance, or embodied energy and sustainability, I do not know of any integrations or automation with base collected data.

The AIQS has been involved with ATO committees, developing depreciable items schedules for construction.


Work Activity.  There has been no attempt, that I know of, to standardise (as much as possible) work activities for construction planning & programming.  Whilst there is an Australian Standard for terminology, standardisation of definitions has not been addressed.  Whilst it is a simple task to integrate contract management data with programmes (and vice-versa), cross-coding is still a manual process.  Time and cost are so tightly integrated on any construction project that automating this integration will bring immediate and substantial benefit to the industry.  This integration allows for significant automation that will reduce the cost of clumsy vertical contract management processes, as the sketch below suggests.
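
A minimal sketch of the cross-coding step, producing a cost-loaded programme from priced BQ items.  The item codes, activity IDs and mapping are invented; the point is that once the mapping is standardised, the aggregation itself is trivial to automate.

# Minimal sketch: cross-code priced BQ items to programme activities to
# produce a cost-loaded programme. Codes and amounts are hypothetical.
bq_items = {"E20.010": 1_850_000, "F10.050": 730_000, "L20.120": 410_000}

# The manual cross-coding step this paragraph argues should be standardised.
item_to_activity = {"E20.010": "A1040", "F10.050": "A1100", "L20.120": "A1260"}

activity_cost = {}
for item, amount in bq_items.items():
    activity = item_to_activity[item]
    activity_cost[activity] = activity_cost.get(activity, 0.0) + amount

for activity, amount in sorted(activity_cost.items()):
    print(f"{activity}: ${amount:,.0f}")  # cost now attached to time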


Object Definition is a foundational building block for integration and automation of design and cost management processes.  Attempts have been made over the past twenty-five years to establish standards for and libraries of BIM objects, with limited success.

The UK NBS has an extensive BIM library that includes specification data, but no pricing.

NATSPEC has developed standards in the form of a “BIM Properties Generator”, but there is no publicly accessible BIM library, to my knowledge.  Object definition, using agreed standards, will be the responsibility of individual suppliers.


Even with this, automating the cost of materials is a challenge, since suppliers still rely on ad-hoc negotiations for product sales.  This will ultimately be automated with the adoption of share-trading-style bidding systems and structured volume discounts, which have already been trialled.


To automate the production of useable cost data, BIM objects must be consolidated into Object Assemblies that combine objects, and groups of objects, with the addition of labour, plant and other associated costs.
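
A minimal sketch of that consolidation: a BIM object contributes description, unit and material data, and the assembly adds labour and plant constants to produce a priced rate.  Object IDs, constants and rates are invented, not drawn from any published library.

# Minimal sketch: consolidate a BIM object into a priced object assembly
# by adding labour and plant constants. All identifiers and figures are
# hypothetical, not from any published object library.
bim_objects = {
    # object_id: (description, unit, material_$_per_unit)
    "OBJ-BLK-190": ("190 mm concrete block", "m2", 62.0),
}

assembly = {
    "code": "ASM-WALL-190",
    "object_id": "OBJ-BLK-190",
    "labour_h_per_unit": 1.1,   # hypothetical labour constant
    "plant_h_per_unit": 0.05,   # hypothetical plant constant
}

LABOUR_RATE, PLANT_RATE = 95.0, 180.0  # $/h, hypothetical

_desc, unit, material = bim_objects[assembly["object_id"]]
rate = (material
        + assembly["labour_h_per_unit"] * LABOUR_RATE
        + assembly["plant_h_per_unit"] * PLANT_RATE)
print(f'{assembly["code"]}: ${rate:.2f} per {unit}')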


To my knowledge, this work is either not maintained, or siloed in corporate databases.  Consolidation and integration at a publicly accessible level will require the adoption of existing standards for rules-based, shared data.


Who should be responsible for collecting the data?

Much of the data definition required for industry-wide collection within verifiable, rules-based parameters requires cooperation between two or more knowledge areas.

This will include:

  • Suppliers of raw materials, manufactured and assembled products;

  • Plant & equipment manufacturers and suppliers;

  • Designers - Architects, Engineers;

  • Quantity surveyors, planners & programmers, project managers and contract managers;

  • Coordinating Contractors;

  • Subcontractors;

  • IT vendors and developers.


A government-sponsored coordinating authority may be the most effective mechanism to drive the many strands forward, with industry-based interest groups undertaking the standard definition, and industry groups or private business developing the capabilities required.


The Australian Institute of Quantity Surveyors has developed standards for rules-based, shareable object assemblies.  These standards fell out of favour as the work of preparing cost data was pushed onto subcontractors.  The results are ad-hoc, and not useable or shareable for any purpose other than the subcontractors’ bids.


What are the IP considerations in collecting data?

Though this is a legal question, nothing should inhibit starting now to coordinate and develop the standards and processes required to facilitate automation and public data sharing.


Contract should follow process, not the reverse.  Current practice clearly demonstrates that contract processes seriously inhibit progress - which should be the driving priority for every construction project from the moment that physical work on site commences.


Should the data be commercialised?

This is a question that will be answered as systems and products unfold. It ties into the question above on the creation of standards and the collection of data.


What are the indicators of construction productivity?

The key indicator of productivity will be the demonstrated restoration of trust with clients and governments that we, the collective construction industry, can deliver on our promises, offering and delivering a service that can command a value premium.


We must be seen as an industry collective with a single focus - working together to offer clients a high-quality, unbiased, trusted service.


Where would the most value be delivered from better use of data/AI?

The first priority is to prepare for delivering added-value data, process automation, analytics and other AI developments as they emerge.


This must include:

  • Leveraging the value of data, through adopting standards for rules-based shared financial data; and

  • Leveraging the value of created data by creating seamless, frictionless contract processes, instead of the clumsy, ineffective vertical management processes currently in use.

Shared, rules-based data, at project level, is the platform for business process automation.  This alone will build trust with clients and stakeholders, add certainty to project budgets and save up to 15% of construction costs, across the board.


Existing contract mechanisms have developed around processes that inhibit progress. To take advantage of the benefits of business process automation, and the promise of AI, the industry must prepare for change.  This will be achieved by first ensuring that our contracts and contract management processes are seamless, frictionless and fit-for-purpose.


It’s easy to get swept up in the latest sexy technology or promise.  However, as noted above, computers as we know them today still rely on systematic, rules-based processes, exactly as proposed by Alan Turing in 1935 (Alvy Ray Smith).


There are existing IoT/contract process integrations that can be implemented immediately, with no new technology.  All they need is the replacement of existing clumsy, embedded, vertical contractual processes with frictionless processes that:

a) do not inhibit progress, and

b) create a platform for integrations and automations.


More importantly, these process changes are the foundation for, and critical to, contract process automation, through to smart contracts, which will revolutionise contract management.

Some examples of streamlined, business-automated contract management processes using current known systems and technologies are included in the attached DFCRC Use Case, prepared for the RBA’s consideration of CBDC.


The only inhibitor to making this change is cultural: moving from vertical management to much more horizontal management processes.  There is no technological, or any other, reason not to flick this switch now and begin to reap the rewards.


Further thoughts on AI

“Large Language Models (LLMs) are already facilitating the production of documents of all kinds, fast analysis and filtering of large data sets, and facilitating computer programming.

A key characteristic of LLMs is their ability to respond to unpredictable queries. A traditional computer program receives commands in its accepted syntax, or from a certain set of inputs from the user. A video game has a finite set of buttons, an application has a finite set of things a user can click or type, and a programming language is composed of precise if/then statements. By contrast, an LLM can respond to natural human language and use data analysis to answer an unstructured question or prompt in a way that makes sense. Whereas a typical computer program would not recognise a prompt like "What are the four greatest funk bands in history?", an LLM might reply with a list of four such bands, and a reasonably cogent defence of why they are the best.


In terms of the information they provide, however, LLMs can only be as reliable as the data they ingest. If fed false information, they will give false information in response to user queries. LLMs also sometimes "hallucinate": they create fake information when they are unable to produce an accurate answer. For example, in 2022 news outlet Fast Company asked ChatGPT about the company Tesla's previous financial quarter; while ChatGPT provided a coherent news article in response, much of the information within was invented.

In terms of security, user-facing applications based on LLMs are as prone to bugs as any other application. LLMs can also be manipulated via malicious inputs to provide certain types of responses over others — including responses that are dangerous or unethical. Finally, one of the security problems with LLMs is that users may upload secure, confidential data into them in order to increase their own productivity. But LLMs use the inputs they receive to further train their models, and they are not designed to be secure vaults; they may expose confidential data in response to queries from other users”. (Cloudflare)

Optical recognition paired with visual data analytics can identify hazards in real time, reducing potential accidents on-site. A Common Data Environment (CDE) facilitates seamless communication within a business and with clients and contractors. (Autodesk 2024 Industry Report Card)


The Sharing Economy

“Generative AI means that a new project proposal doesn’t need to start from scratch, instead leveraging material and pricing based on projects completed by the company with similar specifications”. (Autodesk 2024 Industry Report Card)


As noted above, AI analytics is locked in corporate silos.  We now know that the most profitable businesses in the world build their business by value-adding publicly shared data.

There are too many examples to list.  That is not to say that businesses don’t compete fiercely within a sharing paradigm.  The most obvious examples are Google, Facebook and Apple, whose success is founded on information shared, freely, by their millions of users.


Equally, LLMs have unleashed, and will continue to build, mountains of readily available data and knowledge.  There is no going back.


We will not be immune from these seismic changes in business practice.


A compelling case for the sharing economy is made by economic theorist Jeremy Rifkin here >> (Long).



His case is that, driven by technology, the marginal cost of transformation of products and services is approaching zero.  Since profits are taken from the marginal cost of transformation, it will become increasingly difficult to profit from competitive bidding at any level.


The construction industry is experiencing this phenomenon now, and it will continue.


Future profits will be made by adding value to client businesses; that is, providing a trusted, reliable service.   Business process automation, and wide-area sharing of all kinds will help to ensure consistent, quality results across the sector.


Can the construction industry see itself as a contained network, working as a group to offer clients the highest standard of product?


Can the construction industry see itself as a contained network, freely sharing data, that may be used at any level, from analytics to subcontract management, together offering a trusted, reliable, valuable service?


In order to achieve this, the construction sector, from top to bottom, must think of itself as a single mega-enterprise, providing a service to its clients.


Competition will come from how businesses leverage the masses of available data, and manage systemised processes, including adopting business process automation.

