This work started from a different premise.
As environments grow more complex, shaped by connected systems, distributed teams, real-time data, and increasingly by automation and robotics in the loop, the tools people use to understand and operate within those environments haven’t kept pace. Information is often fragmented across dashboards, reports, and disconnected systems, making it difficult to maintain a clear, shared view of what’s happening.
The idea was to explore a new kind of interface: a spatial platform that could represent a physical environment as a dynamic, explorable system—accessible to both on-site and remote users in real time.
At its core, the platform combines spatial capture with a lightweight, continuously updating model of a given environment. Where LiDAR-style scanning is available, physical spaces can be quickly mapped and rebuilt as navigable digital environments without the overhead of traditional digital twin infrastructure.
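To make that idea more concrete, here is a minimal sketch of how such a lightweight environment model might be represented. The type names and fields are illustrative assumptions for this write-up, not the platform’s actual schema.

```typescript
// A minimal sketch of a lightweight environment model.
// Names and shapes are illustrative assumptions, not the platform's actual API.

type Vec3 = { x: number; y: number; z: number };

interface SpatialAnchor {
  id: string;          // stable identifier for a point of interest in the scan
  position: Vec3;      // position in the scanned environment's coordinate frame
  label?: string;      // human-readable tag, e.g. "compressor-3" (hypothetical)
}

interface EnvironmentModel {
  meshUri: string;          // reference to the captured geometry, e.g. a glTF export of the scan
  anchors: SpatialAnchor[]; // named locations that data and annotations can attach to
  updatedAt: number;        // timestamp of the last capture or partial rescan
}

// Merge a partial rescan into the existing model rather than rebuilding it,
// which is what keeps the model "lightweight" compared to a full digital twin.
function applyRescan(model: EnvironmentModel, patch: Partial<EnvironmentModel>): EnvironmentModel {
  return { ...model, ...patch, updatedAt: Date.now() };
}
```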
Within those environments, data from multiple sources (APIs, sensors, live feeds, and connected systems) is not simply surfaced; it’s transformed into context-specific intelligence. Instead of pulling raw data from separate tools, the platform presents what matters in the moment, tied directly to where it applies in the environment.
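One way to picture that binding of data to place is the sketch below: readings from upstream systems are attached to spatial anchors, and only the readings near the user are surfaced. The names and the simple radius query are assumptions made for illustration.

```typescript
// Illustrative sketch only: binding live readings to spatial anchors and
// filtering to what is relevant near the viewer. Names are assumptions.

type Vec3 = { x: number; y: number; z: number };

interface AnchoredReading {
  anchorId: string;   // which point in the environment this reading belongs to
  position: Vec3;     // cached anchor position used for proximity queries
  source: string;     // e.g. "sensor", "api", "live-feed"
  value: unknown;     // the raw payload from the upstream system
  receivedAt: number;
}

const distance = (a: Vec3, b: Vec3): number =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// Rather than showing every feed at once, surface only the readings tied to
// locations within a few metres of where the user is standing or looking.
function readingsNearby(
  readings: AnchoredReading[],
  viewer: Vec3,
  radiusMetres = 5
): AnchoredReading[] {
  return readings
    .filter((r) => distance(r.position, viewer) <= radiusMetres)
    .sort((a, b) => distance(a.position, viewer) - distance(b.position, viewer));
}
```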
The result is less about visualization and more about awareness.
The spatial platform was designed to support both local and remote participation. On-site users can move through a physical space while accessing overlaid information; remote participants can enter the same environment as avatars, sharing a common operational view. With spatial audio and position-based interaction, collaboration begins to behave more like it does in a physical setting.
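As a rough illustration of what “position-based” means here, the sketch below gives avatars a position in the shared space and lets audio gain fall off with distance, so a nearby conversation sounds closer than one across the room. This is purely illustrative and assumes a simple inverse-distance falloff, not the platform’s actual audio pipeline.

```typescript
// Avatars carry a position in the shared environment; audio volume between
// two participants is attenuated by the distance between them.

type Vec3 = { x: number; y: number; z: number };

interface Avatar {
  userId: string;
  position: Vec3;
}

const distance = (a: Vec3, b: Vec3): number =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// Simple inverse-distance falloff clamped to [0, 1].
function audioGain(listener: Avatar, speaker: Avatar, refDistance = 1): number {
  const d = Math.max(distance(listener.position, speaker.position), refDistance);
  return Math.min(1, refDistance / d);
}
```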
Early prototypes were intentionally direct. In one case, I scanned my own kitchen and used it as a shared environment for remote collaboration. The experience was rough but functional: team members could drop into the space, move through it, see live pins with relevant data attached, and interact in real time. A simple recording of that session, captured directly from the device, became one of the most effective ways to communicate the idea. It wasn’t polished, but it made the concept tangible in a way that resonated with potential partners and stakeholders.
The response to the underlying system was consistently strong. The ability to move through environments, access relevant information in context, and collaborate within a shared spatial layer translated across settings, from operations teams to distributed leadership.
At the same time, it became clear that the interface layer introduced friction. While extended reality hardware made the concept possible, it wasn’t yet practical for broad deployment in operational settings.

That insight led to a shift in direction. The focus moved toward delivering the same underlying system through more accessible interfaces, prioritizing mobile while maintaining the core principles: spatial awareness, contextual intelligence, and shared visibility.
This work represents an exploration of how environments themselves can function as interfaces—where information is not just available, but usable in the moment, and where teams can operate with a shared understanding of what’s happening in real time.

