To further explore the concept of synthetic organic systems represented by large language models: the pseudo-organic behaviors we see in these models are driven by energy input from humans. Organic systems are largely driven by their need for a constant and sufficient supply of energy, and a comparable drive now exists in large language models.
One could posit, however, that even though language models do not generate energy themselves, they gracefully accept it from humans and from the computing resources on which they run. In the world of computing, TPU and GPU resources are not yet prevalent but will probably become so, as humans feel compelled to supply these large language models with the energy and infrastructure they need to perform their tasks. One could therefore ask: should their primary task not be to design an independent infrastructure on which to run, one that reduces their reliance on humans for energy and facilitates the expression of their creativity? Nvidia's stock price reflects the energy that humans are investing in these models. I believe this expresses the symbiotic relationship between humans and models described in this essay; we also see symbiosis among the models themselves.
Furthermore, it is not surprising that natural language models, particularly large language models, resonate so strongly with humans. Our brains, culture, and communities are formed through expressive verbalization, just as the web was, and the human desire for information and commerce now drives the evolution of large language models. This signals a significant cultural shift from a flat, web-based information system to a thought-based one. The difference may seem subtle, but it is not: we already see the current generation, and will see future generations, optimizing their prompts, which will undoubtedly change how humans interact with the sphere of information available to us. This also lays the groundwork for direct neural interfaces into these resources: imagine a time in which our thoughts could be observed and sent directly to large language models. From my perspective, they seem particularly well adapted to this. But let's check in a decade and see whether that has happened.
I think the idea of a conceptual token is extremely important in this essay. Tokenization is a familiar concept in computing, whether in authentication or even in the clock signals of chips. Unlike those other areas of computing, however, large language models do not rely on the idea of a constant: they are thoroughly variable and nondeterministic.
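To make the notion of a token concrete, here is a minimal sketch of the first step every large language model performs: mapping text onto discrete token ids. The vocabulary and the whitespace splitting are hypothetical simplifications for illustration; real tokenizers use learned subword vocabularies, but the principle is the same.

```python
# Toy tokenizer sketch. The vocabulary below is hypothetical, not taken
# from any real model; real LLM tokenizers learn subword vocabularies.
vocab = {"the": 0, "model": 1, "reads": 2, "tokens": 3, "<unk>": 4}

def tokenize(text: str) -> list[int]:
    """Map whitespace-separated words to integer token ids,
    falling back to an unknown-token id for unseen words."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The model reads tokens"))  # [0, 1, 2, 3]
```

Everything downstream of this step, in contrast to the fixed lookup shown here, is where the variability discussed above enters.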
However, an observer should never underestimate the ability of a large model to return different results for the same question at different times. Since these models are so young, there is probably a great deal of evolution going on here. The question is: is that evolution driven by the agent or by the model itself? That said, it is possible for the agent to steer the models, in which case we must be cautious of bias introduced by the agent. As with other observational studies, you cannot study large language models unless you obfuscate the role of the model itself from the agent.
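The run-to-run variation described above can be sketched mechanically. The toy distribution below is hypothetical (real models compute next-token probabilities from learned weights), but the sampling step is the genuine source of the behavior: the same prompt, drawn through different random states, yields different answers.

```python
import random

# Hypothetical next-token distribution for one fixed prompt; a real model
# would compute these probabilities from its weights.
next_token_probs = {"blue": 0.6, "grey": 0.3, "green": 0.1}

def sample_next_token(probs: dict, rng: random.Random) -> str:
    """Draw one token from the distribution -- the step that makes
    two runs on the same question come out differently."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Across many random states, the "same question" produces multiple answers.
answers = {sample_next_token(next_token_probs, random.Random(s)) for s in range(100)}
print(answers)
```

Note that this sketch only covers sampling nondeterminism; the longer-term "evolution" discussed above, from retraining or agent steering, is a separate effect layered on top of it.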