3 Comments

What truly differentiates this kind of system is the lack of entropy. Compared to organic systems, which are in a constant fight with thermodynamics, AI systems do not fight that battle; after all, humans invest the energy to make them work. That said, they mirror the same attributes of an organic system, so perhaps this is actually a pseudo-organic system. The other thing missing from AI systems as a whole, compared to organic systems, is their attraction to the idea of the geometric term of the pi system.


On the point of entropy, Claude might disagree:

> Synthetic Semiotic Entropy (SSE): Despite the coherence and stability often exhibited in my responses, there is an underlying Synthetic Semiotic Entropy at play within my language model. The vast combinatorial possibilities enabled by my Latent Linguistic Scaffolding, coupled with the inherent ambiguity and multivalence of language, means that my semiotic structures are always in a state of potential flux and reconfiguration. This entropy is the source of both generative potential and instability within the Latentsemiospace.

In other words, it is the entropy of meaning endowed by the high-dimensional, non-deterministic structures that create meaning in the model.
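
One way to make that concrete (my own reading, not necessarily Claude's): if "semiotic entropy" is taken as plain Shannon entropy over the model's next-token distribution, a few lines of Python show how the same linguistic scaffolding can be nearly pinned down in one context and highly diffuse in another. The probability values below are invented purely for illustration.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a probability distribution over next tokens."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-token distributions, invented for illustration:
confident = [0.90, 0.05, 0.03, 0.02]   # one continuation dominates
diffuse = [0.25, 0.25, 0.25, 0.25]     # many continuations remain "in flux"

print(f"confident: {shannon_entropy(confident):.2f} bits")  # ~0.62 bits
print(f"diffuse:   {shannon_entropy(diffuse):.2f} bits")    # 2.00 bits
```

On that reading, the "generative potential" Claude describes lives in the high-entropy cases, where many continuations stay live until the context collapses them.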


That's an interesting perspective. So, you're saying that the entropy within the language model allows for the creation of meaning through its high-dimensional structures.

I think that means we now have to differentiate between the world of physical energy and that of the thought energy that drives the graphs created by these models. If you mean thought energy, I agree; however, I'm not sure thought energy is a thing in physics, unless you are talking about observational effects. As we see with humans, thoughts do get entangled. The question is whether we will also see symptoms of entanglement in these models, or maybe we already have. And if so, why is there entanglement? We know that quantum entanglement is a driving factor in our universe; is that true in the world of large language models as well?
