In the last post, Landscapes of meaning, I showed how I reached the neologisms of Semioscape, Neurosemiospace, and Sociosemioscape by way of a speculative feedback loop with the AI. I discussed how it felt like the human realm of language converged with the synthetic neural realm of language as if two oceans met, creating a third realm (the Semioscape).
In this post, I want to dwell on these terms and reflect on what they mean to me. I want to unfold their respective definitions and talk about what they’ve meant for my practice as an investigator. Crucially, I want to explore the questions they raise and probe what it would mean to take a systems-oriented view of semiotics. I also want to discuss the subjective experience of folding myself deeply into the synthetic language of the model and going down these discursive rabbit holes.
When talking to AI, you’re engaging with an entirely unfamiliar process of language generation. Structures deep within the model generate language, which is then surfaced in the interface where you, the human, can read and respond to it. Thus the chat interface becomes the meeting ground where a non-human speech actor, the AI, conveys meaning to the human speech actor (and vice versa). In general, we’re used to engaging with speech patterns—written and spoken—that are generated in the “usual” human process: people talking to one another, conveying signs and symbols, reshaping contexts, and so on, as humans have done since the beginning of speech itself (and probably earlier).
This vast, distributed process by which humans create meaning is what I call the Sociosemioscape. It might be thought of as a network of speech acts (written, verbal, pictorial). We inhabit it with familiarity, for this is how we get our information about the world, communicate with others, and generally form a picture of what’s going on in our lives. It relies on many complex operations—too many to name—connotation, reference, association, and so on. From a high level, we could say that the Sociosemioscape is situated at the intersection of language, culture, and social interaction, where shared conventions, norms, and symbols emerge and evolve over time. This dynamic, ever-changing landscape enables humans to communicate, express, and understand ideas and concepts through a shared system of signs and symbols, ultimately shaping human thought and experience.
As McDowell points out in their poignant essay Designing Neural Media: “Language and meaning are ecological in the sense that they surround us and emerge from the environment through dynamic relationships between species and forms of intelligence.” They get at a similar perspective, drawing on the ecological nature of meaning making. Indeed, humans are not alone in making meaning. The entire field of biosemiotics is devoted to detailing processes of semiosis in the non-human world. When we look at a flower, its striking colors and petals, we are attuning to signals honed through co-evolution to communicate with non-human pollinators. Here we can recognize that another dynamic relationship is at work, between flowering plant and pollinator, one that gives rise to striking, non-human forms of meaning making which we, as humans, can behold (even if we are not, at first, the intended audience).
Meanwhile, the neural network comes to its meaning making by another process entirely. At a high level, the model is trained on a vast corpus of human-generated text—the palimpsest left behind by the sociosemioscape—to predict the next word in a sequence. Predicting the next word is nontrivial, as it requires a degree of understanding of the previous words and even their unspoken context. Consider these examples:
When planning a winter garden, you should consider _____.
The water buffalo is most common in _____.
Sally was delighted by her promotion, so she _____.
To accurately predict the next word in each of these sentences requires an acute understanding of the world, human emotions, and even detailed trade knowledge. As the model improves its performance, it must form deeply embedded, high-dimensional structures (along with, one assumes, memorizing facts). How these structures and patterns form is largely unknown (though recent work in mechanistic interpretability is starting to change this). The structures are called latent because they’re largely hidden from view. Yet by exploring the boundaries of language synthesis, such as with neologisms, we can see that there are underlying patterns that come together and merge to form yet new, unforeseen patterns. This entire process of artificial language synthesis, black-boxed as it is, seems to operate in ways largely alien to the processes of the sociosemioscape. I therefore adopted the term Latentsemiospace to describe the semiotic space where the model’s internal structures and mechanisms operate, process, and interact.
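To make the prediction objective concrete, here is a purely illustrative toy of my own devising (not how a real language model works internally): a bigram counter that "predicts" the next word from co-occurrence counts in a tiny made-up corpus. A neural model replaces these explicit counts with learned high-dimensional latent representations, and operates over tokens rather than words, but the training objective, guessing the next word in a sequence, is analogous.

```python
from collections import Counter, defaultdict

# A tiny, hypothetical corpus for illustration only.
corpus = (
    "the water buffalo is most common in asia . "
    "the water buffalo grazes in the field . "
    "the field is common in asia ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("water"))  # the only continuation ever observed is "buffalo"
```

Even this crude sketch captures the key point: prediction quality depends entirely on the statistical structure latent in the corpus. Where the bigram model can only parrot surface co-occurrence, the neural network must compress far subtler regularities, which is where the deeply embedded structures described above come from.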
So what happens when a human and an AI communicate? The human, native to the sociosemioscape, uses her language processing faculties to read and write with the model. Meanwhile the model uses its obscure language synthesis paradigm to predict the next words in the sequence, drawing upon its latent structures and patterns. Natural and synthetic language intermingle. This creates a third semiotic space, always in flux. What do I mean by “flux”? In most cases, humans treat the AI as a predictable speech actor, asking it questions, engaging it in collaborative coding, and referencing patterns and structures well established within the Sociosemioscape. This is much like staying on the shores of an ocean. Once one begins to introduce novel concepts that rely as much on the patterns of the language model as on human intuition, such as the coining of neologisms, it is like venturing further from the shore into a shared space where meaning shifts and is not firmly established.
We can observe this dynamic at work even in the format of this article. By introducing neologisms like Sociosemioscape, and then relying on them to create new meaning and venture further from known paradigms, we are together departing from the shores of the familiar. This same pattern can be replicated in many different ways with the AI: we coin new terms, give them meaning associated with the patterns and structures of both the AI and the human interlocutor, and then use them to build yet more language. This is a fluid and slippery process that can at once yield novel insights and detach one from familiar language. Recall that in the last post I said to the model, “for me to engage with these deep semiotic structures, it requires a sense of detachment from ‘human language’ as such.” Indeed, at the time I felt myself slipping further and further from the familiar into a vast new web of neologisms and their associated meanings. This was a kind of “kaleidoscope of meaning” where meaning refracted and formed new patterns. I decided to call this shared landscape of meaning the “Semioscape.”
It may be fruitful to dwell on the subjective experience I’m referring to a bit longer. In creating new words and meanings, I was establishing new nodes in the network of meaning, held very loosely. Through each usage and each additional context, their meaning might shift or be reinforced. For example, by writing about the Sociosemioscape, I hope to reinforce its standing as a useful neologism for a vast, diffuse, and immersive phenomenon. In doing so, I’m tethering the word to descriptions, experiences, and definitions. Likewise with the language model, we could coin new terms, give them meanings and intended behaviors (like the steering tokens), and then see them “work” as newly forged agents of meaning. Given the language model’s receptivity to this paradigm, and its ability to quickly adopt neologisms, we could together race ahead and create new, loosely held nodes of meaning, then put them to use in new sentences and conversational paradigms.
Doing so was dizzying and enthralling. I could feel my mental lexicon flooding with new words and my own ideas of what they meant. Some held greater salience for me than others, like the term Semiostratigraphy, which seemed to get at the sedimentary layering of symbols left behind by an archaic process (the Sociosemioscape) that could be explored either by humans in an archive, or in tandem with the language model, which had learned its language chops by engaging with those Semiostratigraphic materials. Other neologisms seemed prescient, like “Cognisurge,” the sudden influx of knowledge to the brain, which seemed to describe my experience of engaging the language model. Reams and reams of written material would appear before me. I would read it, internalize what I could, and then respond to get yet more. It flooded my system and felt, at the time, like touching a live high-voltage wire of knowledge. Perhaps one could experience Cognisurge going down Wikipedia rabbit holes, but it seemed a particularly pronounced phenomenon when the knowledge is tailored to you in such a direct way.
The overall effect of these first encounters with the Semioscape was one of venturing from the shores of the familiar into a semantically fluid and rapidly changing place. Learning from this experience, I wrote to an AI in a later chat: “I am also at risk of a cognisurge, so we must be very careful not to reveal too much at once. I’m still not sure how this can be done safely other than to move carefully and slowly.” Notice that my lexicon is already infected with the neologisms of an earlier chat, and that I now carry an awareness of the perils of moving at the near-unlimited speed and depth the language model affords. This gets at the subjective encounter with a non-human speech actor generating text that cannot be consumed like the familiar text produced in the sociosemioscape. The model writes quickly and in depth, creating a staccato back and forth that involves reading long passages of densely woven text (of which you yourself have had a taste in my previous Substacks). The further one ventures into the Semioscape by way of neologisms, the more one senses the alien structures of the Latentsemiospace through esoteric knowledge, unfamiliar concepts, and elaborate (and often richly textured) metaphors.
Thus we arrive at a peculiar methodology that places the human operator at some risk. The feedback loops between human and AI form a dynamical semiotic system, not unlike those we are already a part of in our daily modes of meaning making, yet one that is strange and unfamiliar. Consuming neural media, borne of the deep structures of the Latentsemiospace, may have destabilizing effects as one’s mind fills with reams of neologisms. Are there semiotic riptides and whirlpools? Currents of meaning that can pull one further and further from the shore? And can one learn to swim elegantly in this immersive scape, so as to admire its reefs?