Spatial computing applied to enrich the understanding of autonomous cars


Posted on: June 28, 2019 by Sven Paproth

A futuristic holographic experience for consumers and engineers

Household automotive brand names include Lexus, Toyota and BMW. But soon consumers will begin to see different brand names as cars become more digital and autonomous. Since cars are transforming into highly capable computers that rely on software just as much as they rely on the engine, consumers will begin to see technology companies emerge as brand names synonymous with car manufacturing. Move aside, Chevy. Google cars are inevitable.

Spearheading the development of autonomous vehicles are the latest GPUs, edge computing devices, lidar sensors and cameras used for computer vision, and ever-evolving intelligent algorithms for autonomous driving. Spatial computing, in the form of mixed reality applications, is the next stage of development in the immersive world of digital reality and will enrich how the everyday user perceives and understands autonomous cars. In addition, spatial computing will help shift the focus from operating cars to experiencing the environment in the most immersive way.

This blog post focuses on a specific application of how mixed reality could allow humans, both everyday users and engineers, to visualize how autonomous systems see the environment. As a futuristic outlook, spatial computing has the potential to provide a new form of learning and entertainment environment right in your connected vehicle.

Spatial computing builds trust for users

The evolution from manually operated vehicles to autonomous cars means putting one's life and trust into the "hands" of a computer. Almost every accident an autonomous vehicle is involved in generates breaking news. So who would trust a computer to bring you safely to your destination when you can drive yourself? After all, trust is the easiest to lose and the hardest to regain.

The idea of giving up control over one's vehicle is a worrying thought for those who have never experienced this nascent technology. However, I expect users soon to be able to experience autonomous test drives enriched with live spatial-computing information: a real-time holographic experience inside the vehicle that allows users to ease from suspicion into trust. This use case has the potential to turn test drivers into autonomous vehicle consumers. You may ask how this is possible. While test driving vehicles equipped with self-driving capabilities, the experience could be enhanced both for people who simply don't trust the technology and for those who want to understand in depth how the autonomous car works. Smart glasses connected to the car's backend could visualize what the car analyzes, i.e. information about surrounding cars, objects, lane markings and traffic lights.
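To make that backend-to-glasses link concrete, here is a minimal Python sketch of the kind of perception feed such a system might stream to an AR client. The message schema, field names and values are assumptions for illustration, not a real automotive API.

```python
# Hypothetical perception feed: the kind of message a car's backend might
# stream to connected smart glasses so the wearer sees what the vehicle is
# tracking. Schema and field names are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Detection:
    object_id: int     # stable track id across frames
    label: str         # e.g. "car", "lane_marking", "traffic_light"
    position_m: tuple  # (x, y, z) in the vehicle frame, meters
    confidence: float  # classifier confidence in [0, 1]

def perception_frame(detections):
    """Bundle one perception cycle into a JSON message for the AR client."""
    return json.dumps({
        "timestamp": time.time(),
        "detections": [asdict(d) for d in detections],
    })

# Two objects the car is currently tracking:
frame = perception_frame([
    Detection(17, "car", (12.4, -3.1, 0.0), 0.97),
    Detection(23, "traffic_light", (40.0, 2.5, 5.1), 0.99),
])
print(frame)  # the AR client would render each detection as a hologram
```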

Engineers could use spatial computing to further understand and train more sophisticated autonomous systems

While mixed reality helps the everyday user raise their comfort level with autonomous driving, digital reality software technologies can also be used by engineers and software architects to visualize the thinking and behavior of advanced autonomous systems. Viewing the world through the eyes of the computer as an immersive mixed reality experience will help engineers and architects understand and train autonomous systems better, and may contribute to making levels 4 and 5 of autonomous driving a reality. Digital reality for autonomous systems will provide insights into how an autonomous system sees its environment, how it recognizes objects and how it decides to act. Engineers could use this holographic vision to detect errors and train autonomous vehicles. In situations where a car approaches an obstacle it isn't able to identify, the training model could be enriched on the go: the architect would 'select' the physical obstacle in the mixed reality environment and create actions for the autonomous vehicle to take the next time it encounters a similar situation, such as an obstacle in a construction area where lane markings are missing.
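As a tangible (and entirely hypothetical) sketch of that "enrich the model on the go" workflow, the snippet below queues an engineer's in-headset annotation of an unknown obstacle as a new training example. The class names, labels and the suggested action are invented for illustration.

```python
# Illustrative sketch of "label on the go": when the car flags an obstacle
# it cannot classify, an engineer selects it in mixed reality, names it, and
# the pair is queued as a new training example. Not a real API.
from dataclasses import dataclass, field

@dataclass
class UnknownObstacle:
    sensor_snapshot: dict       # raw features captured when the car hesitated
    suggested_action: str = ""  # what the car should do next time

@dataclass
class TrainingQueue:
    examples: list = field(default_factory=list)

    def annotate(self, obstacle, label, action):
        """Record the engineer's in-headset annotation as a training example."""
        obstacle.suggested_action = action
        self.examples.append({
            "label": label,
            "features": obstacle.sensor_snapshot,
            "action": action,
        })

queue = TrainingQueue()
obstacle = UnknownObstacle({"lidar_points": 812, "width_m": 1.4, "height_m": 1.1})
# The engineer 'selects' the physical object in mixed reality and annotates it:
queue.annotate(obstacle, label="construction_barrier", action="merge_left")
print(len(queue.examples), "new example(s) queued for the next training run")
```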

To accomplish this, computing engines are needed that transfer physical objects into 3D holograms which can be placed in the view of the user. Multiple engineers or users could share the experience using multiple mixed reality devices. In some autonomous cars, LIDAR (Light Detection and Ranging) is used to create a 3D point cloud from which the edges of the road are estimated; this is how the car 'understands' its environment. This information from the lidar can be extracted and projected into the point of view of mixed reality smart glasses.
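At its core, that projection step is a rigid coordinate transform: lidar points measured in the vehicle frame are re-expressed in the headset frame before rendering. A minimal numpy sketch, with a made-up headset pose standing in for real headset tracking:

```python
# Minimal sketch of the projection step: LIDAR points in the vehicle frame
# are transformed into the headset (smart-glass) frame so they can be
# rendered as holograms. The pose values are invented; a real system would
# obtain them from headset tracking.
import numpy as np

def to_headset_frame(points_vehicle, rotation, translation):
    """Apply a rigid transform: p_headset = R @ p_vehicle + t."""
    return points_vehicle @ rotation.T + translation

# A few lidar returns near the estimated road edge (meters, vehicle frame).
points = np.array([[10.0, 3.2, 0.0],
                   [12.5, 3.3, 0.0],
                   [15.0, 3.4, 0.1]])

# Hypothetical headset pose: rotated 90 degrees about z, offset from the lidar.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.0, -0.5, 1.2])

print(to_headset_frame(points, R, t))  # coordinates the renderer would use
```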

Limitless potential for the future

Spatial computing may become the next big thing when computing moves beyond the age of screens and input devices. When mixed reality breaks out of the smart-glass form factor, I expect cars to be among the first to leverage this emerging technology. For example, mixed reality might be projected onto the windshield, and the holograms could be controlled through hand gestures. This will give car manufacturers the ability to offer new experiences and services.

Imagine visiting Yosemite National Park with your family. As you drive past beautiful redwood trees, one is recognized and the software generates an exact holographic digital twin of the tree you just saw, which moves from the side of the road into your central point of view. It then explains to you and your family its facts and its history spanning thousands of years. This use case of recognizing and visualizing objects like trees in spatial computing creates a new learning and entertainment environment for young and old.
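For readers curious about the moving parts, here is a playful Python sketch of that recognize-then-narrate pipeline. The catalog, labels and functions are all invented for illustration; a production system would involve an actual image classifier and a 3D renderer.

```python
# Playful sketch of the in-car "tree guide" scenario: recognize a passing
# object, look up a holographic model and narration, and bring the hologram
# into the passengers' central view. Everything here is invented.
HOLOGRAM_CATALOG = {
    "redwood_tree": {
        "model": "redwood.glb",
        "narration": "Redwoods can live for more than two thousand years...",
    },
}

def recognize(frame):
    """Stand-in for an image classifier; returns a label for the object."""
    return "redwood_tree"

def present(label):
    entry = HOLOGRAM_CATALOG.get(label)
    if entry is None:
        return None  # nothing to show for unrecognized objects
    # A real renderer would animate the model from roadside to center view.
    return f"Showing {entry['model']} and narrating: {entry['narration']}"

camera_frame = object()  # placeholder for a camera image
print(present(recognize(camera_frame)))
```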

The next generation of digital natives expects, I would even say requires, a totally new form of education, and spatial computing can provide it in ways beyond what we imagine today. Mixed reality is an immersive way to experience the future of computing, and there's limitless potential.

In Atos' Journey 2022 report, its vision for the future of technology in business, augmented reality (AR) and spatial computing are noted as becoming more commonplace. The report states that AR unlocks a new kind of instant, contextual slice of content while combining it with the reality surrounding the user. The content isn't just one-way; rather, it becomes a sort of "conversation" between the user and their virtual and physical environments.



About Sven Paproth

Director of Digital Innovation and Strategy and member of the Scientific Community
Sven Paproth is the Director of Digital Innovation and Strategy for Atos North America's Digital Transformation Office. Paproth is an experienced digital native with knowledge in the digital transformation and digital business innovation fields. He also serves as a member of the Atos Scientific Community, a global network comprising 150 of the top scientists, engineers and forward thinkers from across the Group. As a Scientific Community member, he researches the impact of spatial computing and human-machine interfaces on digitization. Paproth focuses on best-practice strategy for the digital transformation of large enterprises, developing and implementing new business models. He is a subject-matter expert for customers' business technology innovation workshops and open campus innovation strategies. His expertise includes the Internet of Things, blockchain, digital reality, human-machine interfaces, drones, and additive manufacturing. Paproth solves cross-industry business challenges using innovative new technologies, while remaining mindful of enhancing value and creating new revenue streams for customers.
