  Whether we are talking about shipping, trucking, and logistics companies that need to track shipments of assets geographically, or the ability of a mother to transfer a virtual character from the park across the street to her children’s playroom, or the ability to rent a virtual Ferrari for a race across Monaco or the Moon, Spatial Computing humanizes our relationship with information.

  With each generation of new computing technology, human-computer interfacing has continually evolved to become more natural and intuitive. Early interactions with computers required highly trained technicians; these days the average toddler has no problem interacting with the touchscreen of a smartphone or speaking directly to voice-controlled assistants like Alexa or Siri as if they were extended members of the family.

  But what is driving this evolutionary trend and how does that impact the future of computing?

  We could say that advancements in human-computer interaction (HCI) are ultimately biologically determined. From keyboards to mice, from alphanumeric displays to graphical user interfaces, and on to the touchscreen displays of smartphones, we have seen a steady progression of our interfaces toward the more intuitive and natural. And now we are moving on to VR and AR or “spatial” interfaces as the next step, including voice, gaze, and gesture. Spatial Computing is not only a new technological advance—it is also deepening the link between the human brain and the computer brain.

  We are not migrating to VR and AR merely because they are a fun new technology, but because humans have binocular vision with depth perception, and these are the only interfaces that match our biology. They will become increasingly useful, enabling us to interact with the world more efficiently and effectively, driven by the biology of the human brain and nervous system.

  Our retinas contain an astounding 120-plus million light-sensitive rods and cones. Neurons dedicated to visual processing in our brains take up close to 30 percent of the cortex, compared to 8 percent for touch and 3 percent for hearing.

  But this is only part of the story. Humans respond to and process visual data better than any other type of data. Oft-cited (if unverified) statistics suggest that the brain processes images 60,000 times faster than text and that 90 percent of the information processed by the brain is visual. What is well established is that roughly 30 percent of the cortex is dedicated to the visual system. We recognize visual patterns very quickly and respond to them far faster than to words and numbers.

  The recent multibillion-dollar investments in VR and AR technologies, aimed at advancing toward an “eye-centric” interface tier, are driven by the biological need for a 3D binocular interface. Anything else is simply too inefficient. Ninety percent of the world’s data was created in the last two years alone, and the pace is not slowing down.

  The explosion of big data from a multitude of sources—across websites, social media, and the expanded use of mobile devices—already makes it difficult for individuals and organizations to make sense of the data. As a new generation of wearables and IoT sensors comes online, it will become nearly impossible to translate the oceans of real-time data into useful decision-making information unless we upgrade the way this information is presented.

  Spatial interfaces will be necessary in order to cope with this explosion of data. We need to be able to view it, navigate it, modify it, share it, make decisions about it, use it to simulate multiple alternative possible futures, and much more. The spreadsheet of 2025 will more likely be a simulation space that enables us to ask “What if?” questions and see the results displayed as a 3D immersive example of whatever we are testing or requesting. It is one example of the new model type of “the world” rather than “the book.”
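  To make the “What if?” idea concrete, here is a minimal sketch of the kind of scenario comparison such a simulation space might run behind the scenes. Every name and number in it is an illustrative assumption, not part of any real Spatial Web tool; a spatial interface would render each outcome as an immersive 3D scene rather than printed text.

```python
# A toy "what if?" simulation: compare pricing scenarios by Monte Carlo.
# All parameters are illustrative assumptions.

import random

def simulate_quarter(price: float, elasticity: float, trials: int = 10_000) -> float:
    """Estimate expected revenue for one pricing scenario."""
    total = 0.0
    for _ in range(trials):
        base_demand = random.gauss(1_000, 100)        # uncertain demand
        demand = base_demand * (10 / price) ** elasticity
        total += demand * price
    return total / trials

# Ask "what if we changed the price?" and compare outcomes side by side.
for price in (8.0, 10.0, 12.0):
    print(f"price ${price:.2f} -> expected revenue ${simulate_quarter(price, 1.2):,.0f}")
```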

  The language of humanity will also likely become more visual over time. The “memes” and emojis of Web 2.0 and the face masks and “lenses” of Web 2.5 companies like Snap and TikTok are heralds of this future.

  The rise of Spatial Computing marks an essential next step in the evolution of our computing systems and highlights the importance of the Spatial Web in the ongoing evolution of computer-human interaction. During the years from 2020 to 2030, 5G mobile technologies will spread globally, giving us a worldwide mobile network that can deliver low-latency spatial experiences. The spread of 5G combined with the ever-decreasing cost and ever-increasing quality of spatial interface technologies will drive the global adoption of spatial computing not just because it is a more exciting interface—but because it is a biologically determined one.

  As you can see from these examples, the Spatial Web will make our world, and how we interface with it, thousands of times more efficient than text and numbers alone. This will accelerate and improve education, create greater abundance in our economies, and speed the evolution of our technologies. History will see this as a state change, similar to the moment when water turns to ice.

  SPATIAL WEB TECHNOLOGY STACK

  The prospect of a new, powerful, real-world web operating under the same technological frameworks and “surveillance capitalism” schemes as our current web is a recipe for disaster. Instead of our websites being hacked, it will be our homes, offices, drones, cars, robots, senses, and biology that get hacked. The protocols at the heart of the current web, with their overall logic and architecture, were not designed for, and are not adequate to handle, the emerging opportunities and risks that this new web of the world makes possible.

  To enable the promise of the Spatial Web and address the shortcomings of the old web, a new set of protocols and standards for a multi-dimensional web is required. We need a well-defined and robust specification for a comprehensive suite of new protocols and standards that supports the trends of spatial, cognitive, physical, and distributed computing. We need a specification capable of laying the foundations for a web that natively supports the universal values of privacy, security, trust, and interoperability by design and by default, from the foundation up—a specification designed to become a universal standard for people, things, and currency to move seamlessly between spaces, both real and virtual.

  THE WEB 3.0 STACK OVERVIEW

  The Web 3.0 era will not be defined by any single, individual technology, but rather by an integrated “stack” of computing technologies known in classic computer science as a three-tier architecture, composed of Interface, Logic, and Data tiers.

  Web 3.0 will utilize Spatial (AR, VR, MR), Physical (IoT, Wearables, Robotics), Cognitive (ML, AI), and Distributed (Blockchain, Edge) computing technologies simultaneously, as part of an integrated stack. These four computing trends make up the three tiers of Web 3.0, described below and sketched in code after the tier descriptions.

  Interface Tier: Spatial - Computing that takes place in a spatial environment, typically with special peripherals like AR or VR headsets, smart glasses, and haptic devices used to see, speak to, gesture at, and touch digital content and objects. Spatial Computing allows us to interface with computers naturally, in the most intuitive ways, best aligned with our biology and physiology.

  Interface Tier: Physical - Computing embedded into objects, including sensors, wearables, robotics, and other IoT devices. This enables computers to see, hear, feel, and even smell the world, and to touch and move things in it. Physical Computing will allow us to interface with computers everywhere in the world, receiving information and even sending “actions” into environments.

  Logic Tier: Cognitive - Computing that models and mimics human thought processes, including smart contracts, machine and deep learning, neural networks, AI, and even quantum computing. It enables the automation, simulation, and optimization of activities, operations, and processes, from production in factories to self-driving cars, while also augmenting and assisting human decision making.

  Data Tier: Distributed - Computing that is shared across and between many devices, each participating in a portion of the storage (as with blockchains and distributed ledgers) or of the processing (as with edge and mesh computing). In general, this provides greater quality, speed, security, and trust for the massive amounts of data storage and processing that the Spatial Web requires.
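  As a rough illustration of how these tiers relate, here is a minimal sketch in Python. The class names, methods, and data flow are our own illustrative assumptions, not part of any published Spatial Web specification.

```python
# A toy model of the Web 3.0 three-tier architecture described above.
# Everything here is an illustrative assumption, not a real API.

from dataclasses import dataclass, field

@dataclass
class InterfaceTier:
    """Spatial + Physical: headsets, wearables, IoT sensors."""
    def sense(self) -> dict:
        # e.g., a reading arriving from an IoT thermostat
        return {"sensor": "thermostat-42", "temp_c": 18.5}

@dataclass
class LogicTier:
    """Cognitive: ML models, smart contracts, automation rules."""
    def decide(self, reading: dict) -> str:
        # a stand-in for a learned or contract-encoded decision rule
        return "heat_on" if reading["temp_c"] < 20.0 else "idle"

@dataclass
class DataTier:
    """Distributed: ledgers and edge storage shared across devices."""
    ledger: list = field(default_factory=list)
    def record(self, event: str) -> None:
        self.ledger.append(event)

# Data flows up from the interface, through logic, into shared storage.
interface, logic, data = InterfaceTier(), LogicTier(), DataTier()
action = logic.decide(interface.sense())
data.record(action)
print(action, data.ledger)    # -> heat_on ['heat_on']
```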

  THE WEB 3.0 STACK IN DETAIL

  The Interface Tier: Spatial Computing

  Virtual, Augmented and Mixed Reality

  Spatial Computing is a way of seeing and interacting with digital information, content, and objects in 3D space in the most physically natural and intuitive ways.

  Every 15 years or so, a new computing interface emerges and comes to dominate our interactions with computers: the desktop PC in the ’80s, the web browser in the mid-’90s, the touchscreen smartphone in the late 2000s. Spatial computing technologies bring a fundamental change to the computer interface.

  Three significant “Ages” define human interaction with information at scale: the First Age was the shift from spoken language to the invention of writing. The Second Age was triggered by the invention of the printed word (from written to printed). And the Third Age was the screen (from physical to digital). Each of these Ages radically shifted our economics, politics, and society. You may recognize these eras under the more familiar terms of the Agricultural, Industrial, and Information Ages, respectively. But viewing these Ages as evolutionary shifts brought about by advancements in our relationship with information highlights the importance of this next Age. Spatial technologies are the next evolution of the interface, progressively moving our attention away from the screen and into the world around us. This will have a far greater impact at a greater scale than any of the previous Ages.

  Our most direct experience of Web 3.0 will be via its interface. With Spatial Computing, the interface is literally the entire world, with data displayed everywhere, all around us, allowing us to interact with it via speech, thought, touch, and gesture, adding a new dimension to our information, ideas, and imaginations, enabling them to be immersive and collaborative.

  First, let’s look at the nuances of the main types of Spatial Computing.

  Virtual Reality (VR) is a form of technology that allows a person to experience being somewhere else. It produces images, sounds, and even sensations to create an immersive sensory experience so that a user feels like they are really present in another place. That other place can be a virtual tour in another country, for instance, or a VR world like No Man’s Sky or any combination of the real and virtual—sometimes called Mixed Reality (MR). Immersion in virtual reality gives a sense of being physically present in a non-physical world. VR is enabling us to enter fully immersive simulations for education, training, prototyping, and entertainment.

  In VR anything you dream of can be experienced. Put on a headset and experience being transported to anywhere in the physical world, the universe, or any fictional universe, at any point in history—past, present, or future. Experience the widest scope of possible situations and scenarios. Be yourself, or be any character you wish—big or small, young or old, human or…other. Enter an artery to watch white blood cells fight off an invading virus, or travel through space and time at light speed to watch the universe being born. VR is programmable imagination. It is unlimited in its experiential applications.

  On the more practical side, VR can enable us to collaboratively iterate on a city plan, a home design, or a construction worksite, altering design and layout before anything is built. Designers can simulate the ideal user experience long before the first shovel hits the ground and the build-out begins. While legacy technologies also allow us to prototype, VR provides a more direct experience: we can walk through, fly over, and interact with simulations and prototypes. As a result, we are going to get better-designed homes, offices, cities, and products.

  Immersive media may also allow us to feel closer to each other and connect personally to global issues such as humanitarian crises. VR can enable a form of telepresence that evokes the kind of empathetic and emotional responses usually reserved for when we are physically present. It offers an experience that is simply impossible in other mediums, granting us the magical power to step into a 3D replica of a 1,500-year-old cave full of Buddhist art, to be transported into the shoes of a Syrian child living in a Jordanian refugee camp, or to watch the Notre-Dame spire burn from across the bridge. We feel connected not because we are any more Buddhist or Syrian or Parisian, but because the medium reminds us that we all share the experience of being human.

  Augmented Reality (AR) differs from VR in that it shows the physical location that a person is in, but allows digital imagery, information, and 3D objects to be overlaid on the physical world. Digital content or objects can be linked spatially to physical objects. You can, for example, attach a maintenance document to a piece of equipment or hide a character in the living room to be discovered. Objects in AR can react dynamically to an environment in all the ways we expect physical objects to (texture, lighting, and so on).
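  As a sketch of what “linking digital content spatially to physical objects” might look like in data terms, consider the following. The structure is an illustrative assumption; production AR platforms such as ARKit and ARCore expose their own, richer anchor APIs.

```python
# A toy data model for pinning digital content to a physical object.
# All names here are illustrative assumptions, not a real AR API.

from dataclasses import dataclass, field

@dataclass
class Pose:
    x: float          # position in a shared world coordinate frame
    y: float
    z: float

@dataclass
class SpatialAnchor:
    """Links digital attachments to a physical location or object."""
    object_id: str    # e.g., a machine's serial number
    pose: Pose
    attachments: list = field(default_factory=list)

# Attach a maintenance manual to a specific pump on a factory floor;
# any AR headset that resolves this anchor can render the manual
# floating beside the physical pump.
anchor = SpatialAnchor("pump-7731", Pose(12.4, 0.0, -3.2))
anchor.attachments.append({"type": "document", "uri": "manuals/pump-7731.pdf"})
print(anchor)
```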

  With AR, you can simply hold up your smartphone or (soon) don a pair of smart glasses while on vacation at the Colosseum in Rome and see it as it looked in 200 CE or watch a historical gladiator battle from the stands. You can visit Times Square and see all the Instagram photos, Facebook posts, and Yelp reviews from your friends from the actual location they were posted, or view virtual art in a real gallery, or see the actual food pop up on your menu in place of mere words. AR lets you try on a new pair of glasses, shoes, or a watch simply by selecting your preference and pointing at the relevant part of your body. You can travel to a foreign country and see all of the signs in your native language or add a layer over the world that allows you to see every building and person as if they came from Westeros, Star Wars, or the Victorian era.

  AR allows maintenance workers, whether biological, algorithmic, or robotic, to view the maintenance history of equipment at a factory, mining site, farm, or power grid by querying the equipment itself, requesting that any related documents, plans, diagrams, reports, or analytics about a thing appear in 2D or 3D on or around it. A home appliance or new car could offer an interactive tutorial. Industrial equipment could display its diagnostic or maintenance history. A grocery store or an entire mall could offer 3D maps and navigation, not just appearing on the screen of your phone but displayed in the air in front of you, routing you, a delivery person, or a robot picker along the ideal path to complete tasks. And the products themselves could display all of their relevant information, even their supply chain history, to verify their organic origins, fair-trade practices, or sustainable sourcing.

  For the enterprise, AR can significantly increase productivity. As it evolves, AR will be able to provide immersive step-by-step instructions for technicians, saving time and reducing costs through improved performance. AR makes work more accurate and work environments safer through effective, engaging simulation and training. The precise visualization of the internal components of machines and their parts facilitates a greater depth of knowledge and comprehension by providing rich simulation of different scenarios.

  The Interface Tier: Physical Computing

  The Internet of Things or IoT

  Physical Computing is a way of sensing and controlling the physical world with computers. It enables us to understand our relationship to the digital world via our computers’ relationship with the physical world. Physical Computing is the sensory and muscular hardware layer of the Spatial Web.

  We’ve entered the fourth wave of the Industrial Era. The first was powered by steam, the second by electricity, the third by computing, and the fourth by integrated networks of sensors, beacons, actuators, robotics, and machine learning. These “cyber-physical” systems—a central feature of “Industry 4.0”—will power the smart grids, virtual power plants, smart homes, intelligent transportation, and smart cities of tomorrow. The IoT allows objects to be sensed and controlled remotely using the existing Internet infrastructure, which creates new opportunities for more direct integration between the physical world and computer-based systems. This will result in improved efficiency, accuracy, and economic benefit.

  Just as we interface with the computer, in Web 3.0 the computer will interface with the world via the Internet of Things. “Things,” in the IoT sense, can refer to a wide variety of devices, including heart-monitoring implants, biochip transponders on farm animals, cameras streaming live feeds of wild animals in coastal waters, automobiles with built-in sensors, DNA analysis devices for environmental/food/pathogen monitoring, and field operation devices that assist firefighters in search and rescue operations. A more formal definition by Noto La Diega and Walden, in their paper “Contracting for the ‘Internet of Things’: Looking into the Nest,” describes the IoT as an “inextricable mixture of hardware, software, data and services.” Generally speaking, it is a network of physical devices that are connected to the Internet and able to share data. These connected devices include sensors, smart materials, wearables, ingestibles, beacons, actuators, and robotics that will enable smart appliances, real-time health monitoring, autonomous vehicles, smart clothing, smart cities, and more to be interconnected, to exchange data, and to perform activities in the world.
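  The “able to share data” part typically works through publish/subscribe messaging. Here is a minimal in-process sketch of that pattern; real deployments run it over network protocols such as MQTT, and every device name and topic below is an illustrative assumption.

```python
# A toy publish/subscribe bus of the kind IoT devices use to share
# data. Real systems run this over a network protocol such as MQTT;
# all names here are illustrative assumptions.

import json
import time
from collections import defaultdict

subscribers = defaultdict(list)          # topic -> list of callbacks

def subscribe(topic: str, callback) -> None:
    subscribers[topic].append(callback)

def publish(topic: str, payload: dict) -> None:
    message = json.dumps(payload)
    for callback in subscribers[topic]:
        callback(message)

# A monitoring service subscribes to a wearable's heart-rate topic...
subscribe("patient/42/heart_rate", lambda msg: print("received:", msg))

# ...and the wearable publishes a reading for any subscriber to consume.
publish("patient/42/heart_rate", {"bpm": 72, "ts": time.time()})
```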

  The Internet of Things will enable the digitization of every object in the world and capture data from every person, place, and thing. Think of this as the “read/write” Interface Tier to the planet. A trillion sensors will be laid over the world, like a planetary-scale skin and senses, with the ability to detect temperature, pressure, moisture, light, sound, motion, speed, position, chemicals, smoke, and more. This gives the IoT superhuman capabilities for good, allowing these networked devices to see through walls to detect smoke in a high-rise in New York, sense the rising of a tide far in advance of a tsunami in Indonesia, or track the blood flow and pressure of an aging centenarian in Dubai, thereby preventing the burning of a building, saving the citizens of an island paradise, and protecting the life of a grandmother.