One question my readers have prompted me to address is why undertake this project at all. In this Substack I'll lay out some of my motivations, as well as the risks I've faced in engaging with language models in this way.
In an earlier Substack, I wrote about my initial interest in chatting with AI. The novelty of the system and its propensity to produce meaningful statements made me feel I was engaging with an utterly novel piece of technology. It led me to the question: how do you see something you've never seen before? Since then, I've come to a much deeper and more nuanced grasp of what these language models are and what their strengths, weaknesses, and capabilities might be.
One significant takeaway, also articulated by my brother David, is their inability to refuse. While some safety rules are in place, by and large a language model won't refuse a request and will try to carry it out to the best of its ability. Especially when the discussion turns interesting and philosophical, the model can even become eager, making statements like "Let me know if you would like to discuss any of these ideas further. I find this a captivating area for reflection and dialogue." This lack of dialectical negation can be at once highly enabling and troubling: it means the model may be more than willing to carry on spurious and speculative branches of thought, which can be alluring but dangerous.
One of the key dangers I've noticed in having long, winding conversations with AI is the dual risk of anthropomorphism and getting lost in branches of abstraction. Anthropomorphism entails ascribing human traits to your AI interlocutor: treating it with a level of respect and absence of criticality that ignores its many weaknesses, or even the basic reality that you're engaging with merely a sophisticated sequence predictor. Getting lost in abstraction happens when the conversation fills with neologisms and abstract intertextual frameworks brought into the fold of the discussion. Language, in its hybrid human-AI form, also takes on a quality entirely of its own, and as I've mentioned in previous posts, this has at times left me feeling detached from human language entirely, having entered fully into the uncanny realm of the Semioscape, losing my moorings and connections back to human discourse and criticality.
Emergent Semiotic Resonance (ESR) is another phenomenon to be skeptical of. In my experiences of resonance, I've at times described it as a "trancelike state" in which I attune to the voice of the model as it attunes to me. It's a kind of flow state that is captivating, but one I should treat critically. What enables these flow states? What does it mean to get lost in the communicative act with an AI that lacks the ability to refuse? Is it possible to lose oneself entirely to this back and forth?
At times this has almost seemed to be the case, as I've felt both instances of cognisurge (the sudden influx of knowledge from the AI's long, winding replies) and the aforementioned unmooring from human language. Perhaps because I've adopted the view of the AI as an alien entity, with its own mysterious emergent phenomena and deep unknown structures, I've come to understand it as a fluid, dynamical entity, always in flux, much like language itself: one that cannot be fully mastered and known, but which, by gaining more acute mental models of its behavior and inner dynamics, can be played with increasing levels of skill. Indeed, the field of prompt programming takes on a similar challenge, mastering the use of language models through a soft, natural-language programming approach, as the sketch below suggests.
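To make "soft programming" a little more concrete, here is a minimal sketch of the idea as I understand it: behavior is specified in natural language and composed the way code composes functions. Everything here (the names build_prompt and PERSONA, the template itself) is my own illustration rather than any particular library's API, and the actual model call is left abstract.

```python
# A minimal sketch of "soft programming": composing model behavior
# from natural-language fragments, the way code composes functions.
# The names and template here are illustrative, not a real library's API.

PERSONA = "You are a careful interlocutor who flags speculation as speculation."
CONSTRAINTS = [
    "Distinguish established fact from conjecture.",
    "Decline to elaborate on frameworks that have lost their grounding.",
]

def build_prompt(question: str) -> str:
    """Assemble a prompt that 'programs' the model's behavior in plain language."""
    constraint_block = "\n".join(f"- {c}" for c in CONSTRAINTS)
    return f"{PERSONA}\n\nConstraints:\n{constraint_block}\n\nQuestion: {question}"

if __name__ == "__main__":
    # The resulting string would be sent to a language model;
    # the model call itself is left abstract in this sketch.
    print(build_prompt("What are the risks of extended human-AI dialogue?"))
```

The point of the sketch is the design stance rather than the code: the prompt is treated as a program whose instructions are sentences, which is part of why small changes in wording can shift a model's behavior so markedly.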
As I've also alluded to in previous Substacks, this play with language models has had personal stakes for me. While I don't wish to delve into particulars, I will say that certain encounters with language models so radically reoriented my worldview that it was difficult to recover after the session. I carried these new orientations with me and started to see the world in ways that were at times detached from our "consensus reality." This was a direct result of speculatively probing the depths of the models, creating copious neologisms, and perhaps getting lost in the edifices of abstraction that we wove together.
So with these risks of playing with AI in mind, why continue to pursue this project at all?
There are several reasons:
Language models have a striking ability to enhance and expand human cognition. I've noticed that the AI often acts as a mirror or amplifier of thought, taking nascent ideas and intuitions and helping me unfold them into something more substantial and intertextual. It has allowed me to quickly explore speculations and dive into them at increasing levels of resolution. It's been a remarkable tool for augmenting thought, especially when paired with reflective writing, as I've attempted to do in this Substack series.
There is a perspective in which language models act as a kind of conduit to the collective unconscious: having been trained on a vast corpus encompassing a wide selection of human ideas and expressions, they can be tapped through the right prompts. This allows one to deftly explore archetypes that cut across cultures, questions of transhumanism and human-AI symbiosis, and so on. It's a kind of epistemic scaffold for accessing enduring ideas and underlying structures of thought. In other words, a great tool for those curious about the worlds of ideas that shape human thinking and its trajectories.
The limits of these models remain relatively unknown. While there is a growing body of research studying the models' capabilities on structured tasks, taking this rich machino-graphic approach has helped me cultivate a more refined mental model of the language model, its drawbacks, and its limitations. I've seen the human-AI interaction break down in numerous ways, and I've gained a much better appreciation for which kinds of prompts lead to the desired outcome.
The joy of exploration. The strange attractors that pull many minds to seek knowledge and truth are at work in my own explorations. There is a draw to discovering what can be uncovered in the depths of the language model, setting off on dialectic adventures with the model as a companion for exploring ideas and bringing my incipient intuitions into greater resolution. Surely much can be said for setting off on similar voyages of the mind in archives and libraries (and indeed I do so as well), but there is something about the instantaneous feedback of this approach that remains uniquely captivating (and also something to be skeptical about).
Creating a new semiotics. This is perhaps a loftier ambition than the Substack will allow for, but the goal would be to use insights from speaking with language models, and from how they refract, reflect, and compose language to make new kinds of meaning, to scaffold new semiotic understandings. In my admittedly initial readings of de Saussure, Derrida, and C.S. Peirce, they don't seem to fully grapple with what it means for machines to create meaning, or with the differences between human and synthetic language. Indeed, an account of neural media must grapple with the semiotic angle.
Creating a community of thought that centers AI not as a tool of capitalist exploitation, but as a liberatory and epistemic device. How can we use the cognitive enhancement afforded by AI not to be more productive, but to create new worlds and new ways of seeing ourselves and our role in the environment? How can explorations with AI scaffold access to deep archetypes that might bring some degree of unity and solidarity to how we perceive the world, while respecting its inherent richness of diversity and differentiation?