
Interfaces that Think With Us: From Map Readers to Fellow Travellers

  • Writer: Colm Lally
  • Feb 19
  • 7 min read

Updated: Feb 26


Thinking with Interfaces 

For over four decades, the desktop metaphor has shaped how we think about the computer interface. Introduced with the Xerox Star in the early 1980s, it framed the computer as an office-like environment populated by familiar objects: files, folders and documents. By grounding interaction in everyday physical experience, the desktop metaphor provided users with a stable mental model for navigating an otherwise immaterial system.


The desktop did not simply represent work; it organised thinking around the thing being worked on. It created a structured, spatialised environment in which users could externalise thinking and sequence actions. By using digital ‘objects’ to anchor shared memory, it provided the same kind of spatial coordinates we have always used to navigate and coordinate collective thought.


Today, as generative AI systems and language-based interfaces gain prominence, designers and technologists are questioning whether traditional GUI metaphors are still necessary, or whether they now constrain progress. Language appears to offer a more adaptive interface. It is hugely flexible, expressive, and deeply aligned with the fluid structure of human thought. But this does not mean graphical interface structures become obsolete. The interfaces we’re familiar with remain essential for shared memory and coordination. They provide the stable, spatial structure that allows collective work to persist. The future of interfaces, therefore, is not about choosing between the fluid structure of language and the stable, spatial structure of the GUI, but about understanding how each supports a different cognitive function.

Interface Scaffolding


The desktop metaphor worked because it resonated deeply with how people structure their thoughts. Our thinking, reasoning, and use of language are largely metaphorical and imaginative, shaped by the human body and extending outward into our physical encounter with the world. Cognitive linguistics has shown that abstract reasoning is routinely grounded in bodily experience.


Linguistic studies suggest that even across distant languages, metaphors for thinking are rooted in common human experiences. Professor Ning Yu, in Chinese Metaphors of Thinking (2003), compares what he calls “the central metaphor about the mind and thinking” in English and Chinese. Despite their differences, both languages transfer the logic of spatial movement and vision into the abstract domain of mental activity. In Yu’s account, the metaphor Thinking is Moving involves a starting point, a path, and an endpoint. If one pursues a mistaken line of thought, one must “go back” or “retrace one’s steps,” reflecting a shared conception of the past as something behind us, literally behind the body.


This metaphor is foundational to how people understand the programmatic operations of the computer. Familiar interactions such as opening files, saving work into folders, copying, cutting, pasting, and launching applications all draw on the same underlying spatial and bodily logic. These conventions allowed users to offload and organise information and see relationships spatially, and enabled memory to be retrieved, referenced, and acted upon collectively.


Language as Interface 

Recent advances in artificial intelligence, particularly large language models, have renewed interest in language itself as a primary interface. Understanding why requires looking at how neural networks actually process language. Geoffrey Hinton’s account offers a helpful starting point.


Hinton describes words not as fixed symbols with predefined meanings, but as high-dimensional, deformable structures. These conceptual blocks allow meaning to shift depending on context. Understanding a sentence is not a matter of assembling static components according to strict rules, but of continuously adjusting these conceptual shapes until they fit together coherently.


This description aligns remarkably well with the human experience of meaning. When we read or listen, we do not decode language mechanically; we interpret, revise, and refine our understanding as context accumulates. Meaning emerges through interaction, not through search-and-retrieve.


From this perspective, language appears to be an exceptionally powerful interface to thought. It supports ambiguity, abstraction, and creative recombination. It allows us to model not just physical reality, but intentions, hypotheticals, and social relationships. It’s not surprising, then, that conversational AI systems feel immediately intuitive.

From Map Readers to Fellow Travellers

As interfaces evolve, so too does the user’s orientation toward them. Different interface paradigms do not simply enable different actions; they invite different ways of thinking.


Traditional graphical interfaces position the user as a map reader. The system presents a stable representation of information and the user navigates this landscape by interpreting symbols and acting upon them. Intelligence remains largely with the user; the interface supports memory, organisation, and coordination.


Language-based AI systems introduce a fundamentally different orientation. In conversational interfaces, the user increasingly relates to the system as a fellow traveller. The critical shift is not simply that these systems use language; humans have always used language to interface with thought. The shift is that the language is generative. The interface no longer merely represents information or provides a space to explore; it actively participates in interpretation. It responds, suggests, synthesises, and produces novel framings. Intelligence itself becomes the primary surface of interaction, proposing paths forward, reframing questions, and shaping what appears salient.


This is what distinguishes generative language systems from static language interfaces like documents or search results. A book uses language to communicate thought, but it cannot interpret your context, adjust its explanation mid-sentence, or propose alternatives you hadn’t considered. Generative systems can. They produce new interpretations rather than retrieve existing ones, and this ability to generate meaning in real-time changes the fundamental relationship between user and interface.


Intelligence as Participant 

This shift from tool to participant changes what interfaces must do. When the system contributes interpretations rather than simply executing commands, intelligence stops being something you invoke and becomes something you work alongside. The interface is no longer a passive surface for your thinking; it is thinking with you.

This creates new opportunities but also new problems. In traditional interfaces, structure provided both memory and coordination: files persisted, folders organised, versions tracked changes. These elements were deterministic because they had to support collective work: multiple people needed to see the same thing, reference the same state, and build on the same foundation.


When intelligence becomes participatory and generative, stability becomes harder to maintain. These systems continuously generate summaries, revisions, and alternatives, far more material than traditional interfaces ever produced. Because they operate at this unprecedented scale, the primary design challenge shifts: it is no longer just about the moment of interaction, but the process of accumulation. We must develop frameworks for deciding what should persist, ensuring that thought is carried forward rather than dissolving into the noise of a transient chat log. 
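
To make the persistence question concrete, here is a deliberately small sketch of the kind of layer that might sit between a chat stream and a durable store, promoting only the fragments participants pin or keep returning to. The names (Fragment, DurableStore, promote) and the promotion rule are hypothetical illustrations, not a description of any existing system.

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative only: a toy policy for deciding which generated fragments
    # leave the transient chat log and enter a durable, shared store.

    @dataclass
    class Fragment:
        text: str
        pinned: bool = False   # explicitly marked by a participant
        references: int = 0    # how often later turns point back to it

    @dataclass
    class DurableStore:
        artifacts: List[str] = field(default_factory=list)

        def promote(self, fragment: Fragment, min_references: int = 2) -> bool:
            # Persist a fragment if a human pinned it or the conversation keeps
            # returning to it; otherwise let it remain transient.
            if fragment.pinned or fragment.references >= min_references:
                self.artifacts.append(fragment.text)
                return True
            return False

    store = DurableStore()
    store.promote(Fragment("Working definition of 'participatory intelligence'", pinned=True))
    store.promote(Fragment("A throwaway rephrasing of the previous answer"))  # not kept
    print(store.artifacts)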


If meaning in neural systems is formed through the continuous deformation of high-dimensional representations, then coherence is not stored as a fixed structure but maintained dynamically. Each response reshapes the semantic field slightly in order to accommodate new context. Over extended interaction, these incremental adjustments can accumulate. The interpretive frame may gradually shift, not because the system has failed, but because generative meaning is inherently fluid. Without stable reference points, high-dimensional configurations tend toward diffusion. What appears as drift is the natural behaviour of a probabilistic field operating without structural anchors.
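
A toy numerical illustration of this diffusion, under the simplifying assumption that an interpretive state can be caricatured as a point in a high-dimensional space: a free random walk drifts steadily away from where it started, while the same walk softly pulled back toward a fixed reference point stays near it. The dimensions, step size, and pull strength below are arbitrary choices made only to show the contrast.

    import numpy as np

    # Toy contrast between a free random walk in a high-dimensional space and
    # the same walk softly pulled back toward a fixed reference point each step.
    rng = np.random.default_rng(0)
    dims, steps, step_size, pull = 512, 2000, 0.02, 0.05

    free = np.zeros(dims)        # no structural anchor
    anchored = np.zeros(dims)    # softly tied to a reference configuration
    reference = np.zeros(dims)

    for _ in range(steps):
        noise = rng.normal(scale=step_size, size=dims)
        free += noise
        anchored += noise + pull * (reference - anchored)  # gentle pull back

    print(f"distance from start, free walk:     {np.linalg.norm(free):.2f}")
    print(f"distance from start, anchored walk: {np.linalg.norm(anchored):.2f}")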


This does not diminish the power of such systems. On the contrary, it explains their flexibility. But flexibility alone cannot sustain shared memory or coordinated work. Interpretation expands; structure localises. If interfaces are to host participatory intelligence over time, they must provide the resistance necessary to stabilise evolving meaning into durable forms.


Localising Intelligence

Probabilistic behaviour suits interpretation and sense-making, contexts where meaning is provisional and explored through interaction. It becomes problematic when applied to the layers responsible for coordination: storage, versioning, permissions, and the durable structures that teams rely on to maintain momentum.
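
One hedged way to picture this separation in code: keep the coordination layer as a deterministic, versioned store that always returns the same state for the same reference, and inject interpretation as a swappable function that may answer differently on every call. Everything named here (VersionedStore, workspace_step, the uppercasing stub standing in for a model) is illustrative, not a proposal for a specific product.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class VersionedStore:
        # Deterministic layer: every saved state is kept and stays addressable,
        # so a team can point at "brief, version 3" and all see the same thing.
        history: Dict[str, List[str]] = field(default_factory=dict)

        def save(self, key: str, content: str) -> Tuple[str, int]:
            versions = self.history.setdefault(key, [])
            versions.append(content)
            return key, len(versions) - 1

        def load(self, key: str, version: int) -> str:
            return self.history[key][version]  # same reference, same result, always

    def workspace_step(store: VersionedStore, key: str, draft: str,
                       interpret: Callable[[str], str]) -> Tuple[str, int]:
        # Probabilistic layer proposes; deterministic layer records.
        proposal = interpret(draft)            # may differ on every call
        return store.save(key, proposal)       # what persists is fixed and shared

    # Stand-in for a generative model: here just an uppercasing stub.
    store = VersionedStore()
    ref = workspace_step(store, "brief", "outline the persistence gap", str.upper)
    print(ref, "->", store.load(*ref))

The specific classes matter less than the boundary they draw: anything a team must reference together lives behind deterministic save/load semantics, while anything interpretive remains replaceable.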


In such a system, intelligence is no longer a discrete tool we use; it becomes a distributed quality of the environment itself. The future of interfaces lies in compositions that support both human cognition and generative interpretation, allowing ideas the freedom to evolve without losing the stability required for collective work to endure.


Instead of an interface that simply represents information, we are moving toward an environment that holds thought, a structure that provides the necessary resistance for high-dimensional meaning to take a definitive, usable shape.

–––––


[Logic Object: #TF-007-PARTICIPANT]

Conceptual Primitive 
Participatory Intelligence.

Core Tension 
The opposition between Spatial Stability (the deterministic, shared coordinates required for collective memory and persistence) and Generative Fluidity (the high-dimensional, deformable nature of meaning and interpretation in AI systems).

Logic Constraints
  • Metaphorical Grounding: Abstract reasoning must remain anchored in bodily and spatial metaphors (paths, containers, surfaces) to be cognitively intelligible for human coordination.
  • The Persistence Gap: Generative systems produce transient interpretations at a volume that outpaces traditional storage; a logic object must account for how "meaning" is captured before it dissolves into the chat log.
  • Deformable Meaning: Unlike fixed GUI symbols, the linguistic "Lego blocks" of generative systems change shape based on context (Hinton’s "hands and gloves"); the system is never in a state of rest.
  • Intelligence as Surface: Interaction is no longer a navigation of a pre-mapped landscape but a negotiation with an active interpreter. The "surface" is the intelligence itself.

Open Speculative Parameters
  • How can an interface provide "structural resistance" to generative meaning so that it takes a definitive, usable shape for collective work without losing its fluid power?
  • If we move from "reading maps" to "walking with fellow travellers," what are the new digital artifacts that serve as "shared memory" in a probabilistic environment?
  • Can a cognitive architecture be built that distinguishes between the probabilistic layers (sense-making) and the deterministic layers (coordination/storage)?
  • How do we design for "The Accumulation of Thought" when the primary interface is a transient conversational stream?

Cross-references
  • Speculative Surfaces (The primary pillar for reimagining the site of interaction).
  • Thinking Structures (How generative participation disrupts organisational coordination).
  • Thinking Tools (The use of metaphor as a scaffolding for abstract machine operations).

––––––


Notes on how understanding is formed through language: 

In his Hobart lecture, Professor Geoffrey Hinton uses an analogy of high-dimensional, deformable Lego blocks to explain how understanding is formed through language. Here is a rough summary:

Modelling kit
Hinton calls language a “wonderful modelling kit” that humans have developed. This kit allows us to build mental models of our reality with incredible versatility. While physical Lego blocks are limited to three dimensions, our mental word-features have potentially thousands of dimensions. This high dimensionality allows us to model almost anything, from the physical curves of a Porsche to abstract concepts like “justice”.

Deforming meaning on the fly
We do not look up definitions in a mental dictionary; instead, we instantly adjust the “shape” of a word’s meaning to fit its surroundings. If you hear “The hunter is stalking the prey,” the word stalking takes on a specific, aggressive shape, whereas if you hear “The fan is stalking the celebrity,” it deforms into a different, more obsessive shape.
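
This behaviour is easy to observe in today’s models. As a rough sketch, assuming the Hugging Face transformers library and BERT as a stand-in for the deformable representations Hinton describes, one can extract the contextual vector for “stalking” in each sentence and compare them; the choice of model and the cosine-similarity comparison are illustrative, not part of the lecture.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def word_vector(sentence: str, word: str) -> torch.Tensor:
        # Average the hidden states of whichever sub-word pieces make up `word`.
        enc = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            states = model(**enc).last_hidden_state[0]
        words = sentence.split()
        positions = [i for i, w_id in enumerate(enc.word_ids())
                     if w_id is not None and words[w_id] == word]
        return states[positions].mean(dim=0)

    hunter  = word_vector("the hunter is stalking the prey", "stalking")
    fan     = word_vector("the fan is stalking the celebrity", "stalking")
    hunter2 = word_vector("the hunter is quietly stalking the prey", "stalking")

    cos = torch.nn.functional.cosine_similarity
    print("hunter vs fan:            ", cos(hunter, fan, dim=0).item())
    print("hunter vs similar hunter: ", cos(hunter, hunter2, dim=0).item())

If the model behaves as contextual models typically do, the two hunter sentences yield more similar vectors for the same word than the hunter and fan pair; that gap is the deformation Hinton is pointing at.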

The hands and gloves mechanism
To explain how these features lock together to create a coherent thought, Hinton uses a tactile analogy. Each word has a bunch of hands on the ends of long, flexible arms, and gloves stuck to its body. As words are processed through the layers of a neural net (or the human brain), their shapes deform. The goal is to shift their meanings until the hands of some words fit perfectly into the gloves of others. Once all words have locked hands in a mutually compatible way, the sentence is understood.

Source: Hobart lecture, Professor Geoffrey Hinton.

