From Directed To Ambient Compute
The convergence of spatial compute and AI agents will be the final form factor of technology devices

Today, interacting with AI agents is about as basic as an early chat room: structured text responses to simple inputs. AI agents are evolving to accept and return other forms of media, but these interactions are still largely utilitarian, and the dominant form factor remains a chat interface. That won't be the case for much longer. Immersive technologies will converge into an intelligent experience that augments human intelligence by orienting itself to our senses more naturally than any previous wave of innovation. Further, to better interpret and understand human knowledge, AI agents will need to understand the spatial relationships of the data we give them. This may sound like sci-fi, but I believe an ambient computing interface for collaborating with AI agents and other humans is much closer than we might think.
Language alone is not enough to create a world model, that is, artificial intelligence that understands the real world. That is why spatial intelligence is so promising.
The shift from directed computing to ambient computing represents a significant paradigm change in how we interact with technology and in how AI agents will interpret and understand data in both the real world and virtual worlds. This transition toward spatial compute is moving us from self-directed interactions to more goal-based, AI-driven experiences. Let's explore this shift through the lens of our control mechanisms (input) and response mechanisms (output), and how AI agents are reshaping both.

Stripped down to their essence, these are the same sensors and responders we humans use when inspecting and influencing our world. In a similar fashion, researchers are striving to converge technologies that give AI agents these same functions so they can augment human effort.
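To make that framing concrete, here is a minimal sketch, entirely my own illustration and not any vendor's API, of an agent loop wired to pluggable control channels (input) and response channels (output).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Percept:
    """A single observation from a control mechanism (voice, gesture, gaze...)."""
    channel: str   # e.g. "voice", "gesture", "gaze"
    payload: dict  # channel-specific data, e.g. {"text": "open the door"}

class AmbientAgent:
    """Fuses multimodal percepts into an intent, then responds on every output channel."""

    def __init__(self, responders: List[Callable[[str], None]]):
        self.responders = responders          # output channels: audio, visual overlay, haptics...
        self.percepts: List[Percept] = []

    def observe(self, percept: Percept) -> None:
        self.percepts.append(percept)

    def act(self) -> None:
        # Placeholder "fusion": a real agent would combine voice, gesture, and gaze
        # into a single spatially grounded intent before deciding what to do.
        intent = " + ".join(f"{p.channel}:{p.payload}" for p in self.percepts)
        for respond in self.responders:
            respond(f"acting on intent ({intent})")
        self.percepts.clear()

if __name__ == "__main__":
    agent = AmbientAgent(responders=[print])  # stand-in for audio/visual/haptic output
    agent.observe(Percept("voice", {"text": "put that there"}))
    agent.observe(Percept("gesture", {"pointing_at": "table"}))
    agent.act()
```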
Control Mechanisms

Example control mechanisms
1. Voice Control
Voice control is becoming increasingly sophisticated, allowing for more natural and intuitive interactions with AI systems. Voice lets us communicate with one another more naturally and more quickly than traditional text input. Using voice as an input method raises the rate of input and can result in a more natural working relationship with our technology.
OpenAI Voice Mobile Application: Enables voice-based interactions with AI models.
Google's LaMDA-powered conversational AI: Offers more natural voice interactions.
Apple's Siri with on-device processing: Improves privacy and speed of voice commands.
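For illustration, here is a rough sketch of the plumbing a voice loop implies. `transcribe` and `ask_agent` are hypothetical stand-ins for a speech-to-text service and an agent/LLM call, not any of the products above; swap in whichever provider you use.

```python
import time

def transcribe(audio_chunk: bytes) -> str:
    """Hypothetical stand-in for a speech-to-text call (e.g. a hosted ASR model)."""
    return "add a dentist appointment for friday at nine"

def ask_agent(utterance: str) -> str:
    """Hypothetical stand-in for an LLM/agent call that turns text into an action."""
    return f"Scheduled: '{utterance}'"

def voice_loop(mic_chunks) -> None:
    """Push-to-talk style loop: each audio chunk becomes one agent turn."""
    for chunk in mic_chunks:
        start = time.perf_counter()
        text = transcribe(chunk)   # speech in ...
        reply = ask_agent(text)    # ... intent out
        latency_ms = (time.perf_counter() - start) * 1000
        print(f"user said: {text!r}\nagent: {reply}  ({latency_ms:.1f} ms)")

if __name__ == "__main__":
    voice_loop([b"\x00" * 16000])  # one fake 1-second, 16 kHz audio chunk
```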
2. Gesture Control
Gesture control is evolving to provide more intuitive ways to interact with digital environments. Gestures, when paired with voice, are our shorthand for conveying spatial relationships. To interact with us naturally, AI agents will need to understand where we are in the world to truly grasp the context of our instructions.
Apple VisionOS finger gesture control: Allows manipulation of virtual objects.
Microsoft's Project Prague: Enables custom gesture commands for various applications.
Google's Project Soli: Uses radar technology for precise gesture recognition.
Ultraleap's hand tracking technology: Offers touchless interfaces for various devices.
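To show why that spatial grounding matters, here is a toy, entirely hypothetical sketch that resolves a deictic command like "put that there" by casting a ray from a pointing gesture and choosing the nearest known object.

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def ray_to_point_distance(origin: Vec3, direction: Vec3, point: Vec3) -> float:
    """Distance from `point` to the ray starting at `origin` along `direction`."""
    to_point = (point.x - origin.x, point.y - origin.y, point.z - origin.z)
    d_len = math.sqrt(direction.x**2 + direction.y**2 + direction.z**2)
    d = (direction.x / d_len, direction.y / d_len, direction.z / d_len)
    # Project onto the ray, clamped so we only consider space in front of the user.
    t = max(0.0, sum(tp * dc for tp, dc in zip(to_point, d)))
    closest = (origin.x + d[0] * t, origin.y + d[1] * t, origin.z + d[2] * t)
    return math.dist(closest, (point.x, point.y, point.z))

def resolve_deictic_target(origin: Vec3, pointing_dir: Vec3, scene: dict) -> str:
    """Return the scene object the user is most likely pointing at."""
    return min(scene, key=lambda name: ray_to_point_distance(origin, pointing_dir, scene[name]))

if __name__ == "__main__":
    scene = {"lamp": Vec3(1.0, 1.2, 2.0), "mug": Vec3(-0.4, 0.9, 1.1), "door": Vec3(3.0, 1.0, 5.0)}
    # "Move *that* to the shelf": the gesture supplies what language leaves ambiguous.
    target = resolve_deictic_target(Vec3(0, 1.5, 0), Vec3(0.3, -0.1, 1.0), scene)
    print(f"Resolved 'that' -> {target}")
```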
3. Eye Control
Eye tracking is becoming a powerful tool for intuitive control and user intent prediction. If a system can understand where someone is looking, it can intuit the subject of that user's instructions.
VisionOS eye tracking control: Enables selection and interaction through gaze.
Tobii's eye-tracking technology: Used in various applications, including gaming and accessibility.
Google's Project Starline: Utilizes eye tracking for more natural video conferencing.
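Here is the same intent-resolution idea in two dimensions, again a hypothetical sketch rather than any eye tracker's real API: given a gaze point on a display, infer which on-screen element a vague instruction like "summarize this" refers to.

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    x: float        # top-left corner, screen coordinates
    y: float
    width: float
    height: float

    def center_distance(self, gaze_x: float, gaze_y: float) -> float:
        cx, cy = self.x + self.width / 2, self.y + self.height / 2
        return ((cx - gaze_x) ** 2 + (cy - gaze_y) ** 2) ** 0.5

def resolve_gaze_target(elements: list, gaze_x: float, gaze_y: float) -> Element:
    """Pick the element whose center is closest to the current gaze point."""
    return min(elements, key=lambda e: e.center_distance(gaze_x, gaze_y))

if __name__ == "__main__":
    screen = [
        Element("email_draft", 0, 0, 800, 600),
        Element("news_article", 820, 0, 800, 600),
        Element("calendar", 0, 620, 1620, 300),
    ]
    # User says "summarize this" while looking at (1200, 300): gaze disambiguates "this".
    target = resolve_gaze_target(screen, 1200, 300)
    print(f"'this' resolved to: {target.name}")
```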
4. Brain-Computer Interfaces (BCI)
BCI technology is pushing the boundaries of direct neural control. Imagine being able to think an instruction and have an AI agent simply carry it out. This sci-fi scenario could become reality in the near future. Mapping the brain and understanding where control instructions originate is key to this technology.
Neuralink's brain implant technology: Aims to enable direct neural control of devices.
Precision Neuroscience's thin-film microelectrode array: Conforms to the brain's cortex without damaging tissue, delivering thousands of channels anywhere on the brain's surface via a minimally invasive insertion method.
CTRL-labs (acquired by Meta) neural interface armband: Translates neural signals into digital commands.
5. Haptic Feedback
Haptic technology is evolving to provide more immersive and informative tactile responses. Our sense of touch gives us a lot of information about the real world, so to carry that sense into virtual spaces, teams are developing feedback methods that bring physical sensations into virtual experiences.
Apple's Taptic Engine: Provides precise haptic feedback in various devices.
Meta's haptic gloves for VR: Aim to simulate touch in virtual environments.
HaptX's full-body haptic feedback suit: Offers immersive tactile experiences in VR.
Ultraleap's mid-air haptics: Creates touchless tactile sensations.
bHaptics' haptic vest: Enhances immersion in gaming and VR experiences.
Response Mechanisms

Example response mechanisms
1. Audio Response
Audio responses are becoming more natural and context-aware. Above we described voice input; a natural extension is for an AI agent to respond with voice, producing audio output in conversational, natural language. The more that interacting with AI agents feels like speaking to a human on a phone call, the more integral these systems will become to our daily lives for accomplishing basic tasks.
OpenAI mobile application: Provides AI-generated audio responses.
Google's Project Euphonia: Improves speech recognition for diverse speech patterns.
Replica Studios' AI voice generation: Creates realistic voiceovers for various applications.
Descript's Overdub: Allows editing of audio content using AI-generated voices.
Resemble AI's voice cloning: Enables creation of synthetic voices for various uses.
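Sketching the output half under the same caveat: `synthesize_speech` and `play` below are hypothetical stand-ins for a text-to-speech engine and an audio device. The structure is the point, speaking the reply sentence by sentence so the exchange keeps a phone-call rhythm instead of waiting on the full response.

```python
import re
from typing import Iterator

def synthesize_speech(sentence: str) -> bytes:
    """Hypothetical stand-in for a TTS engine returning raw audio for one sentence."""
    return sentence.encode("utf-8")  # pretend these bytes are audio samples

def play(audio: bytes) -> None:
    """Hypothetical stand-in for an audio output device."""
    print(f"[playing {len(audio)} bytes of audio]")

def speak_streaming(agent_reply: str) -> Iterator[str]:
    """Split the agent's text into sentences and speak each one as soon as it is ready."""
    for sentence in re.split(r"(?<=[.!?])\s+", agent_reply.strip()):
        if sentence:
            play(synthesize_speech(sentence))
            yield sentence

if __name__ == "__main__":
    reply = "I booked the dentist for Friday at 9am. I also moved your stand-up to 10. Anything else?"
    for spoken in speak_streaming(reply):
        print(f"spoke: {spoken}")
```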
2. Integrated Visual Response
Visual responses are becoming more seamlessly integrated into our environment. Augmenting the real world with contextual overlays of important data or 3D objects will give AI agents the capacity to show, not just tell, their work. To bring these visual representations to life, the touch-based inputs described above are required so that manipulating virtual objects feels natural to human users.
Meta Orion AR glasses: Project visual information into the user's field of view.
Microsoft HoloLens 2: Provides mixed reality visual responses for various applications.
Magic Leap 2: Offers advanced AR visual experiences for enterprise use.
Snap's AR Spectacles: Integrate AR visuals into everyday eyewear.
3. Holographic Displays
Holographic technology is advancing to provide more immersive 3D visual responses. Shared experiences are a key element of the human condition, so here we explore displays that can represent virtual 3D objects from AI agents without requiring each user to wear glasses. Imagine our televisions, billboards, monitors, and even phone screens displaying data in 3D.
Looking Glass Factory's holographic displays: Offer glasses-free 3D visuals.
Light Field Lab's SolidLight: Aims to create realistic holographic objects.
PORTL's holographic telepresence: Enables life-sized holographic communications.
Voxon Photonics' volumetric display: Creates 3D images viewable from any angle.
Leia Inc.'s Lume Pad: Provides a glasses-free 3D display for mobile devices.
4. Projected Interfaces
Projected interfaces are creating new ways to interact with digital information in physical spaces. This is an evolution of projector technology that will be scaled up into the built environment or scaled down into contact lenses.
Lightform's projection mapping: Turns any surface into an interactive display.
Lumus' waveguide technology: Enables compact AR displays with wide field of view.
Bosch's smartglasses light drive: Projects information directly onto the retina.
Microsoft's IllumiRoom: Extends gaming experiences beyond the TV screen.
5. Tactile Feedback Systems
Advanced tactile feedback is providing more immersive and informative physical responses. Imagine clothing that can give us feedback about a virtual world we are visiting.
Teslasuit's full-body haptic feedback: Offers immersive tactile experiences for training and VR.
Dexta Robotics' force feedback gloves: Provide realistic touch sensations in VR.
Woojer's haptic wearables: Enhance audio experiences with physical sensations.
Razer's HyperSense: Integrates advanced haptic feedback into gaming peripherals.
This shift towards ambient computing, driven by AI agents, is creating more intuitive, goal-oriented interactions with technology. These emerging technologies are paving the way for a future where computing becomes a seamless, integrated part of our environment, responding to our needs with minimal direct input.
It’s not the Matrix, but it is a convergent future around human collaboration with machines
The convergence of advanced control and response mechanisms is paving the way for a more intuitive and seamless interface between humans and AI agents. By integrating voice, gesture, eye tracking, and even brain-computer interfaces with sophisticated audio, visual, and haptic feedback systems, we're moving towards a future where interaction with technology becomes more natural and ambient. This shift is exemplified by the emergence of new control paradigms in robotics, where AI agents handle high-level decision-making while instinctual protocols manage basic tasks, mirroring the human brain-brainstem relationship.
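As a toy illustration of that brain-brainstem split (my own sketch, not any robotics vendor's stack): a slow, deliberate planner sets the goal, while a fast reflex layer can override it the moment a safety rule trips.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    obstacle_distance_m: float
    battery_pct: float

def high_level_planner(goal: str) -> str:
    """'Brain': slow, deliberate decisions (in practice, an AI agent / LLM planner)."""
    return f"navigate_to:{goal}"

def reflex_layer(command: str, sensors: SensorReading) -> str:
    """'Brainstem': fast, instinctual overrides that never wait on the planner."""
    if sensors.obstacle_distance_m < 0.3:
        return "emergency_stop"
    if sensors.battery_pct < 5:
        return "return_to_dock"
    return command

if __name__ == "__main__":
    command = high_level_planner("kitchen")
    for sensors in (SensorReading(2.0, 80), SensorReading(0.1, 80)):
        print(reflex_layer(command, sensors))  # -> navigate_to:kitchen, then emergency_stop
```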
As these technologies mature, we're likely to see a transformation in how humans collaborate with AI systems. The integration of AI and machine learning into control systems is enabling more precise and proactive decision-making, while advancements in human-machine interfaces are making these interactions more intuitive and immersive. This evolution is not just about technological capability, but also about creating a symbiotic relationship where human judgment and AI efficiency complement each other. However, as we progress towards this future, it's crucial to address ethical considerations and ensure that these advancements enhance human capabilities rather than replace them, fostering a responsible and balanced approach to human-AI collaboration.
Human Agentic Interaction Article Sources
Next Big Era of AI Begins — Fei-Fei Li via A16z
LLMs are not enough to have world models — Yann LeCun via Lex
Ambient Intelligence — IEEE Xplore
Human-Computer Interaction — An Empirical Research Perspective
Until next time, keep innovating and stay curious!
💬 Hit that reply button and tell us how we did this week, or ask a question.
♻️ If you’re enjoying Fast Principles, please share it with friends and colleagues.