“The future is already here — it's just not very evenly distributed.” - William Gibson, Author
The William Gibson quote above came to mind as I compiled my thoughts on FutureTech, held in San Francisco earlier this month. From case studies on virtual reality to exoskeletons and machine learning, attendees got an end-user-narrated tour of where the future has already arrived. I’ll share some of those examples below.
Overall, the event was well attended, with close to 400 participants, including over 20 vendors (more than I can remember at any previous event). With that many vendors and limited space in the exhibit hall, the event felt busy and active. There was a lot to absorb from vendors and end users alike.
Speaking of vendors, Procore came away as the #1 brand of the event. That was largely due to prominent signage (Diamond sponsorship level), mindshare (hello, Procore goody bags with beer in every attendee room!), and, most importantly, highly visible participation from senior executives, including the CEO and the VPs of strategy, products, and marketing, among others. Procore has committed to expanding its base from mid-market firms to the ENR 400 in the construction management space, and it is executing extremely well. Every large general contractor customer we talk to is either already on Procore or strongly considering a move. That impressive ability to execute was on display at the event.
We had a great presence too, with a booth and a Thursday session supporting our customer Suffolk Construction on our jointly released case study. The success story highlights a hospital project in Boston, where Suffolk found that moving to a video- and “Smart Tag”-based process saved 2.3 person-months in the first six months of use. That’s a 60% decrease in the time required to capture, organize, and find key field imagery, as summarized in the table below.
We were proud to show real data on how our technology helped Suffolk save time and create a better deliverable. Other sessions also focused on real-world results. Here are some of my highlights (for more coverage, see ENR's own write-up here):
- Ekso Bionics’ wearable exoskeleton was a huge hit. Joe Williams of Rogers O’Brien (@vdcjoe) even got to try it on and use it (image below). The suit reduces the load on the upper body by 100%, making holding up your arms feel “weightless.” Joe seemed pretty psyched about that.
- Walt Terry of Skanska presented on a project with Redpoint Positioning focused on jobsite safety. This tag-based system works like individualized “indoor GPS,” showing in real time where workers are on the construction site and sending alerts if they enter safety risk zones (I’ve sketched what that kind of geofence check might look like just after this list).
- Mortenson’s Rick Khan presented on the skills gap in construction and the role of distributed training in bridging it. Rick had some fascinating data on the aging of the workforce, pulled from a 2015 study by FMI. One key point: most companies are now struggling to find and retain skilled labor, as shown in the following graph.
After setting the stage with this market dynamic, Rick brought the conversation back to how technology can help bridge the gap between the older and newer generations: first capture knowledge from the field (e.g., in short “how to” videos), then deliver it as needed to the newer field generation (e.g., via mobile or wearables).
- In the last presentation of the conference, the famous Martin Fischer, Director of the Center for Integrated Facility Engineering (CIFE) at Stanford, introduced the first public presentation by his Ph.D. student, Iro Armeni. She presented research in which she automatically created model objects for spaces (e.g., rooms) and contents (e.g., chairs, desks) from laser scan data. Her method combines point cloud density information with machine learning to recognize objects and place them in space, and the approach was fascinating. To establish spaces like rooms, she interpolated between laser-scanned wall surfaces: by looking at the “void” between those dense slabs, she could recover the shapes of the rooms. For the second part of the exercise, placing objects in the rooms, she created chairs, tables, and other FF&E by applying machine learning to the point cloud data to recognize their shapes.
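The “void” idea is easiest to see in code. Below is my own toy Python sketch of the intuition as I understood it, not Armeni’s actual pipeline: project the scan onto a floor-plan grid, treat dense cells as wall surfaces, and label the connected empty regions between them as candidate rooms. The cell size and density threshold are invented parameters.

```python
import numpy as np
from scipy import ndimage

def segment_rooms(points_xy, cell=0.1, wall_density=50):
    """Label connected 'void' regions between dense wall cells as rooms.

    points_xy: (N, 2) array of laser-scan points projected onto the floor plan.
    cell: grid cell size in meters (made-up default).
    wall_density: point count above which a cell reads as a wall surface.
    """
    mins = points_xy.min(axis=0)
    idx = ((points_xy - mins) / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=int)
    np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)   # per-cell point counts

    walls = grid >= wall_density                 # dense slabs = wall surfaces
    rooms, n_rooms = ndimage.label(~walls)       # connected voids = candidate rooms
    return rooms, n_rooms
```

A real pipeline would, among other things, discard the exterior region and classify furniture from the full 3D points; this only shows the density-and-void intuition.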
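And, as promised in the Skanska item above, here’s a minimal sketch of the kind of geofence check such a system might run on every tag position update. This is purely illustrative: the zone layout, coordinates, and function names are mine, not Redpoint’s API.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    """A rectangular risk zone in site coordinates (meters)."""
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

# Hypothetical zone layout for illustration.
ZONES = [Zone("crane swing radius", 10.0, 0.0, 25.0, 15.0)]

def on_tag_update(worker_id: str, x: float, y: float) -> None:
    """Called whenever a positioning tag reports a new (x, y) fix."""
    for zone in ZONES:
        if zone.contains(x, y):
            print(f"ALERT: {worker_id} entered risk zone '{zone.name}'")

on_tag_update("tag-0417", 12.0, 4.5)  # -> ALERT: tag-0417 entered risk zone ...
```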
Indeed, the use of machine learning by CIFE’s Iro Armeni to recognize objects related nicely to the vision (no pun intended) that Chris Mayer, Chief Innovation Officer of Suffolk, outlined during the results segment of our joint case study presentation. He described a future state where visual data from mobile devices, drones, or even wearables in the field is recognized automatically, flagging everything from unsafe conditions to instances of particular equipment. We’re excited to partner with firms like Suffolk to build out that future by capturing and classifying key imagery from project video and photo streams. Contact us and join in the journey.
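For the technically curious, here’s a deliberately tiny sketch of what that kind of frame-flagging loop could look like. Everything in it is invented for illustration: the label names and the classify() stub (standing in for a trained model) are hypothetical, not our shipping code or Suffolk’s.

```python
from typing import Iterable, List, Tuple

# Hypothetical labels a trained model might emit; names invented for the demo.
WATCHLIST = {"missing_guardrail", "unsecured_ladder", "standing_water"}

def classify(frame: bytes) -> str:
    """Stand-in for a real image classifier (e.g., a trained CNN)."""
    return "missing_guardrail"  # stubbed prediction for illustration

def flag_frames(frames: Iterable[bytes]) -> List[Tuple[int, str]]:
    """Return (frame index, label) pairs for frames that need attention."""
    flagged = []
    for i, frame in enumerate(frames):
        label = classify(frame)
        if label in WATCHLIST:
            flagged.append((i, label))
    return flagged

print(flag_frames([b"frame-0", b"frame-1"]))  # both frames flagged by the stub
```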