AI is very powerful. It lets us do many new things and become far more efficient at things we already do. But it is also immensely dangerous.

I originally wrote this when ChatGPT first launched, when I first experienced what felt like the biggest change on the internet this millennium. With the pace of progress in AI / LLMs since then, these concerns are growing faster than expected.

Misinformation and information control

AI makes it cheap to generate convincing text, images, and video at scale. The same capability can help people learn faster, or it can flood every channel with persuasive nonsense.

But there’s also the other side: the models themselves are never free from bias.

Because models are influenced by their training data, safety layers, and whoever pays for distribution, it’s easy for economic or political interests to steer what people see. At scale, that looks like targeted propaganda, astroturfed consensus, and filter bubbles that feel organic but aren’t.

Power shift towards a small elite

Training frontier models takes huge amounts of compute, data, and capital. That naturally concentrates power in a small number of companies and governments. When a few actors control the models, the APIs, and the distribution, they also control who gets access, on what terms, and at what price.

This can result in gatekeeping, regulatory capture, and fewer real alternatives. The tools that shape culture and the economy risk becoming less open and less accountable over time.

Dependence and skill atrophy

Used well, AI can raise the floor. Used mindlessly, it can lower the ceiling.

Many education systems were already outdated 50 years ago, and they aren’t ready for this development either. Many still reward output over process. Tech and “AI” literacy, understanding how technology is changing our social interactions and the dangers that brings, is becoming more and more relevant, yet it is still fundamentally missing in most places.

Update - August 2025

Public agents are here. They can navigate websites and interact with page elements: browse, click, post, and DM. A lot of “social” activity online will soon be bots talking to bots, with humans as the target metric.
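To make the low barrier concrete, here is a minimal sketch of how little it takes to script that kind of interaction, using Playwright’s Python API. The URL, CSS selectors, and reply text are all hypothetical placeholders; a real agent would wrap this loop in an LLM that decides where to go, what to click, and what to say.

```python
# Minimal sketch: an automated "commenter" using Playwright.
# The URL, selectors, and reply text below are hypothetical placeholders.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/thread/123")  # hypothetical thread URL
    # Type a reply into the comment box and submit it.
    page.fill("textarea.reply-box", "Great point! I wrote about this here: ...")
    page.click("button.submit-reply")
    browser.close()
```

The point is not this snippet itself but the scale: the same dozen lines, driven by a model choosing targets and wording, can run across thousands of threads at once.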

That creates endless opportunities for “social” online interaction to really be someone looking to sell something or shift others’ opinions for personal gain, whatever that may be.

It’s now easy to find posts and threads where a well-placed comment can push a product, an agenda, or a narrative. I tested this myself: I asked ChatGPT to surface social posts where I could add a relevant reply pointing to my blog. It did exactly that.

Part of ChatGPT’s answer:

Each entry includes the platform, date, a brief description of the post (based on quoted lines), and ideas for a value‑adding reply that can naturally reference your blog at bryanhogan.com/blog.

It even found a Reddit post I had made the day before that got some traction. I’ve included the response as a PDF.

The problem isn’t that I can promote my blog much faster; in the end that would just be more internet spam. The real problem is what happens when much larger actors run the same playbook continuously, across every platform, with budgets, infrastructure, and patience. At that point, “the conversation” isn’t a conversation. It’s a scripted environment optimized to move you.

I want to hear your feedback! Was something confusing? Did you get stuck? Did you find this post amazingly helpful? Was it all very mid? Let me know via e-mail or on any of my social media spaces!