Published on August 14th, 2018 | by Jake Richardson
AI Could Better Manage Pothole Repair & Prevention (Research)
John Zelek, a professor in the Department of Systems Design Engineering at the University of Waterloo, has been leading a research group studying how AI could be used to better maintain road surfaces. (Their work may also apply to bridges and buildings.) AI has shown promise in its ability to process huge amounts of information in collections of images to detect patterns. Zelek and his colleagues used images from Google Street View to develop their system. He answered some questions for CleanTechnica about the research.
A city such as Ottawa spends $8 million a year on pothole repairs. In addition, collecting data from roads is very expensive if a private firm is used, and if it is done by a city engineering crew, the results can be very subjective. We have not done a formal cost study.
Could the AI system be used to prevent large potholes from developing by detecting them at the first stage?
On the research side, we are interested in investigating using infrared to see distresses forming below the surface before they become visible. Infrared picks up heat, and activity below the surface should generate some heat signature.
Images taken via vehicle-mounted phone cameras would be analyzed by AI, but which vehicles would be used?
Hypothetically, we were considering vehicles that regularly traverse a city, such as garbage trucks, police cars, and other municipal vehicles.
Are the road images sourced from Google Street View in the public domain, or did you need to ask for permission to use them?
This is how we started our research just under 3 years ago. Our original premise was to mine Google Street View data virtually. We spent a couple of months negotiating with Google. Google collects imagery at about 4 times the resolution that it provides to the public sector. The high-resolution imagery it captures is available only at a cost of a fraction of a penny per image. We priced this for a small municipality, let's say with a population of 30,000, and we would end up paying over $100K for this data, which did not make sense from a practical business point of view. Google typically collects imagery every couple of years for larger municipalities, but not at the same frequency for smaller communities.
If I’m not mistaken, using AI requires a training phase where you ‘teach’ the software what to look for by using examples. Did you use the Google Street View images and if so, how many for the training or study phase?
We have worked hard over the last year or so at minimizing the number of training samples and at making the learned system generalizable to various pavement types, for example asphalt, concrete, and composites. We have only trained for 4 distresses: longitudinal cracks, transverse cracks, alligator cracks, and potholes; these 4 distresses constitute over 90% of all distresses. There are about 20 other distresses that civil engineers keep track of. We are currently conducting field trials to determine the generalizability of the system based on a single training, and we are investigating ways for the system to continually learn and generalize upon each use. Current results show that we are able to do our classification as accurately as human analysis. Our studies to date have used only 200-300 human-annotated training samples to train the system, which we are pleased with.
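To illustrate the kind of small-sample, four-class classification described above (this is not the Waterloo team's code; the features, numbers, and nearest-centroid method here are invented for the sketch, whereas the real system learns from street-level imagery), a toy classifier over the four distress classes might look like:

```python
# Toy sketch: nearest-centroid classification of pavement-distress
# feature vectors into the four classes named in the interview.
import math
from collections import defaultdict

CLASSES = ["longitudinal", "transverse", "alligator", "pothole"]

def train(samples):
    """samples: list of (feature_vector, label). Returns per-class centroids."""
    sums, counts = defaultdict(list), defaultdict(int)
    for vec, label in samples:
        if not sums[label]:
            sums[label] = list(vec)
        else:
            sums[label] = [a + b for a, b in zip(sums[label], vec)]
        counts[label] += 1
    return {c: [v / counts[c] for v in sums[c]] for c in sums}

def classify(centroids, vec):
    """Return the label whose centroid is nearest to vec."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda c: dist(centroids[c], vec))

# Hypothetical 2-D features, e.g. (crack-direction score, crack-density score)
training = [
    ([0.9, 0.1], "longitudinal"),
    ([0.8, 0.2], "longitudinal"),
    ([0.1, 0.9], "transverse"),
    ([0.2, 0.8], "transverse"),
    ([0.5, 0.5], "alligator"),
    ([0.6, 0.6], "alligator"),
    ([0.0, 0.0], "pothole"),
]
centroids = train(training)
print(classify(centroids, [0.85, 0.15]))  # → longitudinal
```

A deep-learning system would replace the hand-made features with learned ones, but the small labeled training set and the fixed set of distress classes play the same role.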
How close are you to having a real-world AI software system that could start being used today?
We are currently working with a large civil engineering firm to help them analyze data they collect from their highly instrumented vehicles. We are also working with the Ontario Ministry of Transportation to test our system on Ontario highway data. These are the two field trials we are currently conducting. In the lab, we are focused on generalizability and on devising ways for the system to continually learn from every use. We are also spending some time devising algorithms that make use of fast hardware to process thousands of images in a few hours. We have a working system, and we are talking to various partners to figure out how to commercialize this technology in the next year or so. There are 3 components to a system: the image data collection, the data analysis, and the data visualization with statistics. We have focused on the data analysis side. We are investigating how we can collect data with low-cost hardware that either mounts on a roof rack or is just a cell phone mounted on the dash. There are many GIS visualization packages out there that can do the visualization for us.
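The three components mentioned above can be sketched as a simple pipeline. Everything here is hypothetical and stands in for real subsystems: the collection stub mimics a dash-mounted camera, the analysis stub stands in for the learned model, and the output is GeoJSON, a common format that GIS visualization packages accept.

```python
# Hypothetical three-stage pipeline: collection -> analysis -> visualization.
from dataclasses import dataclass

@dataclass
class Detection:
    image_id: str
    distress_type: str   # e.g. "pothole"
    lat: float
    lon: float

def collect_images(source):
    """Stage 1: image collection (roof rack or dash-mounted phone)."""
    return [{"id": f"{source}-{i}", "lat": 45.42, "lon": -75.69} for i in range(3)]

def analyze(images):
    """Stage 2: distress detection (stand-in for the learned model)."""
    return [Detection(img["id"], "pothole", img["lat"], img["lon"])
            for img in images]

def to_geojson(detections):
    """Stage 3: hand results to a GIS visualization package."""
    return {
        "type": "FeatureCollection",
        "features": [{
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [d.lon, d.lat]},
            "properties": {"image": d.image_id, "distress": d.distress_type},
        } for d in detections],
    }

geo = to_geojson(analyze(collect_images("garbage-truck-7")))
```

Keeping the stages decoupled like this reflects the division of labor described in the interview: the research focus is the middle stage, while collection hardware and visualization can be swapped in from elsewhere.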
How might the technology use drones to examine photos of bridges? Would that be during construction, or could the images be studied to detect problems so they can be fixed before a structural failure?
Currently, a bridge assessment requires a team of engineers and technicians to survey the bridge, gather sensor data, and report back. A drone solution would fly around the bridge and collect video. The video would be used to recreate the 3D shape of the bridge using other automated photogrammetry software we are developing. Similar to the analysis of roads, the surface of the bridge would then be analyzed for defects or anomalies. The collection of video data can be done in 15-20 minutes. A visualization of the 3D bridge, with the defects detected by the algorithm superimposed on it, can then be viewed at the engineer's office. This also minimizes the shutdown of the bridge needed to perform the survey.
The above scenario is for an existing bridge. We are currently investigating how this technology can also be used for QA for new construction as well.
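The drone survey described above is a sequence of well-defined stages. This outline is purely illustrative; each function name and return value is invented here as a stand-in for a real subsystem (flight control, photogrammetry, defect detection):

```python
# Hypothetical outline of the drone-based bridge survey workflow.

def fly_and_record(bridge_id, minutes=20):
    """Circle the bridge and collect 15-20 minutes of video (assume 30 fps)."""
    return {"bridge": bridge_id, "frames": minutes * 60 * 30}

def reconstruct_3d(video):
    """Photogrammetry: recover the bridge's 3D surface from the video."""
    return {"bridge": video["bridge"], "surface_patches": video["frames"] // 10}

def detect_defects(model_3d):
    """Analyze each surface patch for defects or anomalies (placeholder)."""
    return [{"patch": i, "type": "crack"}
            for i in range(model_3d["surface_patches"]) if i % 10_000 == 0]

# The annotated defects would then be superimposed on the 3D model
# for review at the engineer's office.
defects = detect_defects(reconstruct_3d(fly_and_record("bridge-17")))
```

The value of the structure is that the bridge only needs to be closed for the first stage; reconstruction, analysis, and review all happen offline.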
Image Credit: CC BY 3.0