With a heavy reliance on code, autonomous and connected vehicles are especially vulnerable to technology risk
The dark side of connected cars is technology risk. A panel of cybersecurity and autonomous driving experts discussed which threats keep them up at night, and how to think about risk mitigation without hindering innovation.
Risk management and other support for AV technology has to be done in synchrony with cities, according to Colin Dhillon, CTO of the Automotive Parts Manufacturers’ Association.
APMA is working with Stratford, Ontario to test a fleet of 20 vehicles and 200 pieces of connected technology. Every one of Stratford’s 26 intersections will be smart by the end of the year.
“You cannot have Level 4 AVs and cities that really aren't smart and aren't connected,” he said. “They have to move at the same pace. If they don't, we're going to be challenged to actually pull back the pace of development on vehicles.”
Michael Westra, who manages vehicle cybersecurity for Ford Motor Company, agreed. “The functional safety engineers say the vehicle has to be able to safely drive itself, but the optimization is really where the city comes in.”
Regarding the AV itself, a primary concern is balancing the desire to incorporate the latest technology with decisions about where to establish connectivity in those embedded technologies, according to Rob Bathurst, whose firm, Cylance, consults with manufacturers on applying new technologies carefully to mitigate risk.
“At some point you have to trust something, and what part do you trust?” he asked. “We run into an interesting position in trying to enable organizations to be safe and secure while also building that innovation.”
As great as the technologies are, it’s the data that makes AVs possible, and it’s the data that needs to be secured, according to Mike Krajecki, who leads IoT risk solutions for KPMG. “We're moving into a transit model where it's not electricity, it's not gas—our cars are powered really by data and by algorithms we put into them. That data is what has the value, and it's what makes the vehicle a threat from a cyber perspective.”
The industry has to view data security over its lifecycle, understanding where data reside at all points in time, across the dozens of companies that come together to make these AV ecosystems possible, he added.
Machine learning, a subset of AI, presents its own unique risks. Westra described a demonstration by a University of Michigan professor who used strategically placed tape to trick machine learning algorithms into reading a stop sign as a 45 mph speed limit sign, and he extrapolated from this finding to argue that similar attacks could work against a whole range of vision-driven machine learning systems.
In order to analyze how such systems are built, trained, and secured, firms like Cylance use what’s called adversarial machine learning or adversarial networks. “The whole purpose of it is to take all those well-meaning models that people have spent a lot of time training, and circumvent them, change them, manipulate them, and cause the output of the answer to be incorrect,” Bathurst said.
This allows manufacturers to better understand which AI-driven subsystems within a vehicle, such as LiDAR technology, are vulnerable, he continued. “Because the machine has trained itself and because you've told it that it can make decisions, attacking that model becomes very important for circumventing the safety system.”
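The stop-sign attack above is an instance of adversarial-example generation: given a trained model, find a small input perturbation that flips its output. The sketch below illustrates the idea on a deliberately toy linear classifier, in the spirit of the fast-gradient-sign method; the model, weights, and data are all hypothetical stand-ins, not anything described by the panelists.

```python
import numpy as np

# Toy adversarial-example sketch (hypothetical model and data).
rng = np.random.default_rng(0)

w = rng.normal(size=64)  # learned weights of a toy linear "stop sign" scorer
x = rng.normal(size=64)  # stand-in for an input image's feature vector

def score(features):
    """Model's confidence that the input is a stop sign (higher = more confident)."""
    return float(w @ features)

# Fast-gradient-sign-style perturbation: step each feature by a small
# epsilon in the direction that lowers the stop-sign score. For a linear
# model, the gradient of the score with respect to the input is just w.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

# The per-feature change is bounded by epsilon (imperceptible in the image
# analogy), yet the model's confidence drops.
print(score(x), score(x_adv))
```

In a real vision system the gradient comes from backpropagation through a deep network rather than a closed form, but the principle is the same: the attacker exploits the model's own decision surface, which is why Bathurst's firm attacks models directly to find weak subsystems.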
The auto industry is known for building the world’s safest products, and yet its cybersecurity practices are lagging in part due to a general naiveté about technology risk, Dhillon said, citing our collective willingness to fire up our laptops on free Wi-Fi at Starbucks. “As an industry, we need to be ahead when it comes to security and privacy, just like we are with safety.”
At the same time, there’s work to do in getting the general public to accept AI and other technologies that make autonomous driving possible, Krajecki said.
“People inherently don't trust technology the same way we trust physical things,” he said. “As an industry, we have a big opportunity to bring trust and transparency to AI, to get the general public on board. This actually can save millions of lives.”
There is a wide breadth of risk mitigation activities the industry can take and is taking. Bathurst stressed the importance of understanding residual risk within the supply chain. “You're inherently trusting somebody that is inherently trusting somebody that is inherently trusting the silicon that was made somewhere else, checked by someone else.”
Westra highlighted issues across the supply base, including one Tier One supplier that had been hacked yet had not corrected many of its vulnerabilities, and another that built a system on a version of Android that Google had stopped supporting.
“It highlights the weakest link problem,” Krajecki said. “And all it takes is one supplier, one Tier One to not want to cooperate or want to cut costs and cut corners [to create] the opening an adversary needs, and all of a sudden the entire system’s taken down.”
On the flip side, added Dhillon, “As a Tier One, it's quite simple. If your cybersecurity is up to scratch, it makes you more competitive.”
Krajecki suggested that the same AI technology we seek to protect can also be used to defend against nefarious activity more quickly and in more depth than humans can manage alone, referring to one client who wrote software to perform security testing on as many as 15 releases a day.
“There's no way we can move forward with traditional, manual SDLC where you have somebody running test scenarios and marking things on a spreadsheet, or somebody pointing a tool at it,” he said. “That will move way too slow. We'll never keep up with the clockspeeds in this industry. It all has to be automated in some way.”