The internet for the mind

Think back to the early days of the internet. It felt like a new world—a seemingly endless, ever-expanding network of information.

Suddenly, you could access knowledge and ideas from around the globe.

It wasn’t just about the facts; it was about making connections that were previously impossible. The internet changed how we interact with information, but it also brought a lot of noise—misinformation, low-quality content, and rabbit holes that lead us further from the truth.

LLM (large language model) chatbots are a lot like that. Only, instead of navigating web pages, we’re navigating our own thoughts through conversation. LLMs offer us the ability to pull from the vast pool of collective human knowledge near-instantly, distilling it into responses that are tailored to our immediate needs. It’s like having the internet in a more intimate form—an internet that talks back, that reflects our ideas and questions in real time.

But just like the internet, there’s more to it than meets the eye. LLMs aren’t just tools for knowledge—they’re reshaping how we work. They bring us closer to the information we want, but they also come with new challenges and subtle trade-offs that we’re only beginning to understand.

Curated intelligence in a sea of junk

The internet revolutionized how we access information, but there’s a lot of junk out there. You have to sift through low-quality content, ads, and unreliable sources to find what you’re looking for. LLM chatbots take that same ocean of data and curate it for you. They’re fast, efficient, and sometimes insightful.

But like the internet, they don’t always filter out the noise. Sometimes, LLMs pull in misinformation or give you a polished answer that sounds right but isn’t quite accurate. For example, ask an LLM for a historical fact, and it may synthesize conflicting sources without telling you how reliable any of them are. Just like with the early internet, we need to approach LLMs with a discerning eye, questioning the information we receive rather than taking it at face value.

Living in personalized bubbles

One of the most helpful aspects of LLMs is how responsive they are to the individual. Every time you ask a question, an LLM chatbot learns a bit more about your interests and preferences. LLMs tailor their responses to fit your specific needs and habits, much like how search engines and social media algorithms learn to show you content you’re likely to engage with.

This kind of personalization is perfect for creating bubbles. Just as we’ve seen with the internet, where algorithms reinforce what we already believe, LLMs can start to reflect back our own ideas, reinforcing our biases. The more we interact with these systems, the more they adapt to our worldview, shaping the responses we receive and reinforcing the ideas we’re already comfortable with.

It’s easy to see how this could lead to a more insulated experience, where our access to new perspectives is limited by the very tools that are supposed to expand our knowledge. While LLMs offer a personalized, conversational interface with knowledge, they also risk isolating us in a bubble of our own making, pulling people’s realities even further apart.

Bridging time, but not people

One of the most exciting things about LLMs is their ability to bring knowledge across time. With a simple prompt, you can access the thoughts of authors, researchers, and thinkers from any era. It’s a kind of time travel, where the insights of the past are instantly available and contextualized for you in real time. LLMs allow you to engage with centuries of knowledge, almost as if you’re having a conversation with the past.

But while LLMs connect us to information, they don’t connect us to people. Instead of engaging with a real person, you’re interacting with a digital version of their thoughts—a representation built from data. It’s efficient, but it lacks the nuance of a true human conversation. The warmth, the subtlety, the back-and-forth that come from engaging with a real person are replaced by an algorithm.

In many ways, this makes interactions more efficient, but it also makes them feel more distant. It’s the same trade-off we see with the internet, where connecting with others became easier but less... human. LLMs reflect back the collective intelligence of humanity, but they lack the human presence that makes communication deeply meaningful.

Democratizing knowledge, amplifying the noise

LLMs hold incredible potential for making knowledge more accessible. In the past, learning complex topics required years of study or access to expensive resources. Now, you can ask a question and get an answer within seconds. It’s a kind of democratization, where anyone with access to these tools can engage with high-level knowledge without the traditional barriers of education or expertise.

But this democratization comes with its own set of challenges. Just as the internet made it possible for anyone to publish content, LLMs can pull from sources that are unverified or misleading. They amplify not just the best knowledge, but also the noise. The key to using LLMs effectively is learning how to navigate this noise, how to critically evaluate the responses we receive and separate valuable insights from misleading information.

An internet for the mind

In many ways, LLMs are changing not just how we access knowledge, but how we think. They’re not just tools for finding information—they’re an internet for the mind, allowing us to interact with knowledge in a fluid, dynamic way. Instead of passively consuming information, we’re engaging with it, questioning it, shaping it in real time.

But how we choose to use these tools will shape not just our understanding of the world, but our place within it. Will we use LLMs to challenge ourselves, to step outside our bubbles, and to engage with new ideas? Or will we allow them to reinforce what we already know, creating a more insular, isolated experience?

The potential of LLMs is immense. They offer us a new way to interact with the collective intelligence of humanity, to ask questions and explore ideas in ways we’ve never been able to before. But, like the internet, they come with complexities and trade-offs that will shape us. Let's make sure we let them shape us for the better.