AI Applications and Risks in Construction
26th March 2024
In July 2023, the Construction Management Association of America hosted a webinar discussing the merits and use of artificial intelligence (AI) in construction. This article summarizes the panel discussion and contains considerations for AI adoption based on that discussion. The panel was a cross-section of experienced AI users and firms beginning to explore AI’s use in practice. Panel members using AI in practice included Kylan Greenville, assistant virtual design and construction manager with Balfour Beatty, and Mike Giacco, associate vice president responsible for new technology and IT at AI Engineers. New AI adopters exploring AI’s place in their workflow included Sharnette Tucker, program manager with HNTB, and Mark Bloom, partner with ArentFox Schiff, co-leader of the firm’s national construction practice, and member of the firm’s interdisciplinary AI team. HKA Partner John Paolin moderated the panel.[1]
How Is AI Applied in Construction?
The construction market is changing before our eyes. The pace of technology adoption is accelerating, making construction safer and more efficient and improving the quality of results. But there is another side to the AI coin. As AI becomes a more significant component of construction-related technologies, new risks, considerations, and challenges are surfacing. The concept of AI can be ambiguous and is often used interchangeably with automation. For this discussion, AI can be defined as any computer-based system that can understand natural language, recognize patterns, make predictions, draw conclusions, and learn from evolving data sets. At their most advanced, AI systems attempt to simulate human intelligence.
AI applications can be grouped into three categories: proven applications accepted as best practices, newly developed applications awaiting acceptance in the mainstream, and exploratory applications that exist in more of a beta-testing environment. The panel discussed the considerations for each application and addressed potential risks.
Balfour Beatty’s use of OpenSpace.AI is an example of AI in a proven, accepted application. Since 2019, the firm has used OpenSpace.AI to connect building information modeling (BIM) models, plans, and 360° imagery from weekly site walk-throughs, analyzing and comparing as-built conditions with as-planned conditions.
These applications predict discrepancies and allow the construction team to address problems before or as they occur. The AI tool can also monitor schedule progress; however, that portion of the tool is primarily used by self-performing contractors.
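OpenSpace.AI’s comparison logic is proprietary, but the general idea of an as-built versus as-planned check can be illustrated with a minimal sketch. In the hypothetical Python example below, the element names, coordinates, and the 5 cm tolerance are all invented for illustration: planned element positions from a model are compared against positions derived from site captures, and anything outside tolerance is flagged for the project team.

```python
# Minimal sketch of an as-built vs. as-planned comparison.
# Element names, coordinates, and the tolerance are hypothetical;
# OpenSpace.AI's actual algorithms are proprietary and far more sophisticated.
from math import dist

TOLERANCE_M = 0.05  # flag elements that deviate more than 5 cm from plan

planned = {            # as-planned element centroids from the BIM model (x, y, z in meters)
    "column_A1": (0.00, 0.00, 1.50),
    "wall_W3":   (4.00, 2.50, 1.50),
}
captured = {           # as-built centroids derived from 360-degree site imagery
    "column_A1": (0.02, 0.01, 1.50),
    "wall_W3":   (4.00, 2.62, 1.50),
}

def flag_discrepancies(planned, captured, tolerance=TOLERANCE_M):
    """Return elements whose as-built position deviates from plan beyond tolerance."""
    flags = []
    for element, plan_xyz in planned.items():
        if element not in captured:
            flags.append((element, "not yet captured", None))
            continue
        deviation = dist(plan_xyz, captured[element])
        if deviation > tolerance:
            flags.append((element, "out of tolerance", round(deviation, 3)))
    return flags

for element, issue, deviation in flag_discrepancies(planned, captured):
    print(f"{element}: {issue}" + (f" ({deviation} m)" if deviation else ""))
```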
AI Engineers’ use of drone imagery and AI algorithms is a good example of a newly developed AI application layered onto a traditional inspection process. The firm combines drone imagery of complex structures such as bridges with AI algorithms to create 3D models that highlight and classify structural deficiencies.
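AIE has not published the details of its pipeline, but the classification step it describes is commonly handled with a fine-tuned image classifier. The sketch below is a generic illustration of that pattern in Python; the weights file, defect labels, and image path are hypothetical placeholders rather than AIE’s actual tooling.

```python
# Minimal sketch of tagging a drone image with defect classes using a
# fine-tuned image classifier. The weights file, class labels, and image
# path are hypothetical; AIE's actual pipeline is proprietary.
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["no_defect", "crack", "spalling", "corrosion"]  # hypothetical labels

# Standard ImageNet-style preprocessing
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# ResNet-18 with its classifier head resized to the defect classes,
# loading hypothetical fine-tuned weights.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("bridge_defect_resnet18.pt"))  # hypothetical weights file
model.eval()

image = Image.open("drone_capture_0417.jpg").convert("RGB")     # hypothetical drone photo
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

label = CLASSES[int(probs.argmax())]
print(f"Predicted condition: {label} (confidence {probs.max().item():.2f})")
```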
“It is critical that we are ‘in the know’ about AI tools. AI can expedite the data collection process, improve reaction time, and put us in a better position to help our clients deliver successful projects.”
Sharnette Tucker, HNTB
HNTB, an infrastructure solutions firm, works alongside transportation agencies across the United States, offering a broad range of services to meet its clients’ needs. This includes a digital transformation solutions team that helps clients evaluate AI adoption and implementation and learn to use AI-driven predictive analytics and machine learning for asset management and assessment processes. The HNTB team, which supports AI in infrastructure and architecture design processes, is in the early stages of prototyping AI for project control applications such as risk management, schedule development, submittals, data analysis (KPI dashboards), and reporting. Clients are beta testing technologies such as AI bots to improve the speed and efficiency of various construction processes.
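HNTB’s prototypes are not public, but the kind of predictive analytics described above can be illustrated with a small machine learning sketch. The example below trains a regression model on synthetic bridge-deck data to estimate a condition rating from age, traffic, and climate exposure; the features, data, and model choice are assumptions made purely for illustration.

```python
# Minimal sketch of a predictive-analytics prototype for asset management:
# estimate a bridge deck's condition rating from basic attributes.
# All data here is synthetic; HNTB's actual models and features are not public.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Synthetic features: age (years), average daily traffic, freeze-thaw cycles per year
age = rng.uniform(0, 60, n)
traffic = rng.uniform(1_000, 80_000, n)
freeze_thaw = rng.uniform(0, 90, n)
X = np.column_stack([age, traffic, freeze_thaw])

# Synthetic condition rating (0-9 scale), degrading with age, traffic, and climate
y = np.clip(9 - 0.08 * age - traffic / 40_000 - freeze_thaw / 60 + rng.normal(0, 0.4, n), 0, 9)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print(f"R^2 on held-out assets: {model.score(X_test, y_test):.2f}")
print(f"Predicted rating for a 35-year-old, high-traffic deck: "
      f"{model.predict([[35, 60_000, 70]])[0]:.1f}")
```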
ArentFox Schiff, a law firm with a national construction practice and interdisciplinary AI team, is advising clients on AI-related considerations and risks for their businesses. Additionally, ArentFox Schiff is using a sandboxed version of ChatGPT to test the merits of the AI tool for research while keeping its data safe in a secure, confidential environment (the firm calls its tool ChatAFS). The term sandboxed refers to the fact that no data is shared outside ArentFox Schiff, and the firm retains full data ownership and control.
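The architecture behind ChatAFS has not been disclosed, but the sandboxed pattern it describes, keeping prompts and data inside an environment the firm controls, can be sketched in a few lines. In the hypothetical example below, the endpoint URL, environment variables, and model name are assumptions; the point is simply that requests are routed to a private deployment rather than a public service.

```python
# Minimal sketch of the "sandboxed" pattern: route requests to a private
# deployment endpoint inside the firm's own environment so prompts and data
# never leave its control. The endpoint, environment variable names, and model
# name are hypothetical; ChatAFS's actual architecture is not public.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ["PRIVATE_LLM_ENDPOINT"],  # hypothetical in-house gateway
    api_key=os.environ["PRIVATE_LLM_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-4",  # whichever model the private deployment exposes
    messages=[
        {"role": "system", "content": "You are a legal research assistant."},
        {"role": "user", "content": "Summarize the key risk-allocation clauses in this draft."},
    ],
)
print(response.choices[0].message.content)
```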
HKA, an expert witness and litigation consulting firm, is keeping a close eye on AI in practice, noting where AI has factored into potential risks during construction while at the same time working through concerns regarding client confidentiality and other issues that must be considered when implementing AI in a consulting environment.
Exploring the appropriate use of AI in a construction workflow is the first step in adopting it. Companies need to recognize that AI is here to stay and will continue to evolve. Using data to predict the life cycle of assets brings tremendous benefits, such as allowing companies to gather information far more quickly. “When it comes to project controls, it’s all about quality data,” said Sharnette Tucker of HNTB. “It is critical that we are ‘in the know’ about AI tools. AI can expedite the data collection process, improve reaction time, and put us in a better position to help our clients deliver successful projects.” Simply put, AI is a tool that can be leveraged to improve construction, and its impact on the industry will only continue to grow.
Evaluating AI Tools
One of the most daunting tasks for a firm considering AI is evaluating the various tools available and deciding whether they will fit into traditional workflows. According to the contractors participating in this panel, buy-in, transparency, and bias are three big considerations in getting an AI application off the ground. Their comments are not intended as advice per se but can serve as lessons learned for other AI users.
ArentFox Schiff’s Considerations
When engaging with an AI provider, the user agreement is a critical document for protection. Unlike phone applications, whose user agreements are often glossed over and accepted, AI user agreements can present a significant risk if not carefully reviewed. A user agreement is a contract that defines the relationship between an AI provider and a user; it sets the parameters for privacy, use of data, and liability. Until recently, AI applications were treated like traditional software, and user liability was limited. A key parameter is data ownership.
“It is important that you keep control of your data when using AI.”
Mark Bloom, ArentFox Schiff
“We surveyed 30–40 user agreements and found clauses that, among other things, addressed data ownership and intellectual property, imposed varying state laws that would be applicable should there be a dispute, and provided for different levels of user liability and indemnification,” said Mark Bloom of ArentFox Schiff. “You need to thoroughly understand what you are signing up for before you click accept.”
Indeed, in many instances, AI software uses your data to learn, which benefits other users of the platform. Important questions need to be addressed, such as whether the data is proprietary and who can access it. The user agreement will define whether an AI vendor can legally use your data in other ways and who owns the intellectual property. “It is important that you keep control of your data when using AI,” said Mark Bloom. “The sandboxed implementation of ChatAFS allows our firm to reap the benefits of testing a cutting-edge AI platform while keeping control of our data. It also enables us to measure the progress of AI learning against our own input.”
AI Engineers’ Considerations
AI Engineers (AIE) encountered some skepticism from its inspectors when it adopted an additional data collection process for an AI application. AIE pressed ahead despite the skeptics, even though the application added an extra step and more time to its inspection process. After using the application for a while, inspectors found that they spent significantly less time tagging and cataloging inspection photos, making the overall inspection process more efficient and effective.
“At this point in time, AI-driven inspections are not certifiable; however, AI augmentation (tagging and logging) on a traditional inspection process is accepted.”
Michael Giacco, AI Engineers
Fear of change is often a big component of buy-in. “You certainly don’t want to push technology on your team if they deem it as gimmicky,” said Michael Giacco of AI Engineers. “It is up to leadership to sell the benefits and build trust in its use. For AIE, it started with a pilot program and developed internal champions for the application. The second level of buy-in was with our clients – the state agencies who certify our inspections.”
“We don’t get paid until we have buy-in from our clients,” added Michael. “At this point in time, AI-driven inspections are not certifiable; however, AI augmentation (tagging and logging) on a traditional inspection process is accepted.”
Transparency in the use of AI can only be addressed in one way – with early and frequent communication. A lack of transparency, or withholding the fact that AI played some role in a process, could result in legal consequences or a loss of trust (and business) if discovered after the fact. Disclosing AI’s involvement only after a process has begun could cause unnecessary concern, skepticism, or the removal of AI from the process. Stakeholders should know what the AI is, why it is being used, and how it will benefit the process. Anticipated risks or downsides should be identified and addressed. All AI users should be on the same page in how they explain the AI application to colleagues and clients.
Balfour Beatty’s Considerations
With any data-driven application, variations in how data is collected (by humans) can create bias and compromise data quality. For Balfour Beatty, it was critical to develop a data collection process early on to ensure consistency among the personnel collecting the 360 images. “It sounds simplistic, but the most important element of adopting AI for quality control is implementing a solid process for its use early on,” said Kylan Greenville of Balfour Beatty. “We mapped out the course of how our team would capture data on-site, we trained each user in the same fashion, and we monitored the data quality to assure consistency.”
“It sounds simplistic, but the most important element of adopting AI for quality control is implementing a solid process for its use early on.”
Kylan Greenville, Balfour Beatty
When it comes to imagery, machine learning improves as capture volume grows, so it is important to set up capture processes early on to ensure consistency of collection. “Human interaction and foresight are imperative when creating paths for data collection through OpenSpace,” said Kylan. “If our data collectors disregard where pillars or walls will be as the structure is built, it will confuse the AI algorithms. Understanding the model and landmarks is crucial early on while the AI is learning the correct pathways.”
HKA’s Considerations
Verifying that the data is correct has been a challenge for many AI tools, including ChatGPT. “There have been reported instances where documents referenced content generated by ChatGPT, and the references didn’t exist,” said Tyler Donnelly, HKA Director and forensic accounting specialist. “On the other hand, it is because of this vast and evolving repository of information that it is possible to significantly reduce the time required for market research.”
“There have been reported instances where documents referenced content generated by ChatGPT, and the references didn’t exist.”
Tyler Donnelly, Director, HKA
“AI tools could be particularly valuable when preparing estimates, evaluating shifts in the labor market, or tracking bulk material prices, for example,” said Seyran Celik, HKA Partner and claims expert. “AI tools may be particularly adept at quickly identifying hard-to-find information such as the top ten differences between two revisions to codes of federal regulations. The human element value lies in the interpretation and evaluation of those differences.”
“AI tools could be particularly valuable when preparing estimates, evaluating shifts in the labor market, or tracking bulk material prices, for example.”
Seyran Celik, Partner, HKA
Seyran went on to emphasize the importance of further verification to guard against hallucinations. “Hallucination is a term used to describe instances when AI provides false or non-existent references,” she explained.
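One practical safeguard against hallucinated references is to verify every citation an AI tool produces against a trusted index before relying on it. The sketch below illustrates that check with invented document identifiers and an invented AI answer; a real workflow would draw the index from a document management or records system.

```python
# Minimal sketch of one verification step against hallucinated references:
# confirm every citation an AI tool produced actually exists in a trusted
# index before relying on it. The index contents, citation patterns, and
# the generated answer are all hypothetical.
import re

# Trusted index of known documents (e.g., exported from a document management system)
known_documents = {
    "RFI-0042",
    "Change Order 017",
    "Daily Report 2023-05-14",
}

ai_generated_answer = (
    "The delay is documented in RFI-0042 and Change Order 019, "
    "and corroborated by Daily Report 2023-05-14."
)

# Pull candidate citations out of the generated text (pattern is illustrative only)
citations = re.findall(r"(RFI-\d+|Change Order \d+|Daily Report \d{4}-\d{2}-\d{2})",
                       ai_generated_answer)

for citation in citations:
    status = "verified" if citation in known_documents else "NOT FOUND - verify manually"
    print(f"{citation}: {status}")
```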
The construction industry is a large consumer of legal services, an area already being affected by AI. According to a recent article published in Law360, when the law firm Phi Finney McDonald was conducting discovery in shareholder litigation over a decline in a company’s stock price following bad business news, it used a sentiment analysis tool developed by legal technology and e-discovery provider Relativity to identify key documents. “E-discovery software providers may be early adopters of AI because construction disputes often tend to involve a significant volume of records,” said Daniel Kwon, HKA Partner and claims expert. “The key will be matching AI technology with human interaction and verification.”
“E-discovery software providers may be early adopters of AI because construction disputes often tend to involve a significant volume of records.”
Daniel Kwon, Partner, HKA
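Relativity’s sentiment analysis tool is a commercial product, so the sketch below simply illustrates the underlying idea with open-source components: score document excerpts for sentiment and surface the most negative ones for priority human review. The excerpts and file names are invented for illustration.

```python
# Minimal sketch of sentiment-based triage of discovery documents: score each
# excerpt and surface the most negative ones for human review. This is a
# generic open-source approach, not Relativity's product; the excerpts and
# file names below are invented for illustration.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default English sentiment model

excerpts = {
    "email_0147.txt": "The revised forecast is a disaster; we cannot disclose this yet.",
    "email_0291.txt": "Great progress this quarter, the team delivered ahead of schedule.",
    "memo_0032.txt":  "Concerns about the structural test results keep growing.",
}

results = []
for doc_id, text in excerpts.items():
    scored = classifier(text)[0]                 # {'label': 'NEGATIVE'|'POSITIVE', 'score': ...}
    if scored["label"] == "NEGATIVE":
        results.append((scored["score"], doc_id))

# Most strongly negative documents first - candidates for priority human review
for score, doc_id in sorted(results, reverse=True):
    print(f"{doc_id}: negative sentiment {score:.2f}")
```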
Conclusions
Security and privacy are among the key concerns when using an AI platform. Beyond having a sound approach to user agreements, implementing a sophisticated cybersecurity protocol is critical. A firm’s IT leader should be actively involved throughout the AI implementation process, and the application should be tested for weak links and potential data compromise.
Much of the data collected for AI is generated from imagery, and large volumes of imagery increase the risk to privacy. A drone or 360° camera collecting information on-site could capture private information such as trade secrets, and inadvertently collected images of uninvolved individuals could violate privacy laws.
Another key risk for many AI adopters is the lag between regulation and technology. Even drone regulations remain antiquated, and drones have been in the mainstream for five years. As an example, American Association of State Highway and Transportation Officials regulations do not reference or account for AI. As a result, the state agencies that accept AI-assisted work products must be willing to take on the risk so that the federal agencies that may be paying their bills will not balk at the work product.
AI is not just the future of technology; it is the here and now. When legislation and regulations catch up to the progress of technology, we anticipate that the implementation of AI will soar. We asked our panel what the construction workflow will look like in 3–5 years, and all agreed that AI will be part of almost every construction tool and process. But at the end of the day, AI will not replace people or human interaction; rather, it is simply another tool in the toolbox. In terms of job opportunities, some roles and skill sets will become obsolete, but the new roles, skills, and expertise that are created will offer even more employment opportunities. In that vein, the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights is already underway.
[1] HKA does not use AI in performing its expert services, and this article is not intended as an endorsement of its use.
This article presents the views, thoughts, or opinions only of the author and not those of any HKA entity. While we take care at the time of publication to confirm the accuracy of the information presented, the content is not intended to deal with all aspects of the subject referred to, should not be relied upon as the basis for business decisions, and does not constitute legal or professional advice of any kind. This article is protected by copyright
© 2024 HKA Global, LLC.