AI is very powerful. It lets us do many new things and become much more efficient at what we already do, but it is also immensely dangerous.
I originally wrote this when ChatGPT first launched, when I first experienced the biggest change on the internet this millennium. Now, with the insanely fast progress of AI / LLMs, these concerns are growing faster than expected.
Misinformation and information control
You can generate a lot of things with AI: text, images, videos, and more. People can use it to help them learn, or to flood the internet with trash.
Then there are also the models themselves and the data they were trained on. Machine learning always carries some bias, which is impossible to eliminate completely.
But then there is also the bias of the party behind the model, the people who finance its creation. This includes both political and economic interests, which affect the training data used as well as things like the safety layers built into the model.
This critically influences what people see when interacting with such models, yet there is far less transparency about how those results come to be.
Power shift towards a small elite
Training frontier models takes huge amounts of compute, data, and capital. That naturally concentrates power in a small number of companies and governments. When a few actors control the models, the APIs, and the distribution, they also control who gets access, on what terms, and at what price.
This can result in gatekeeping, regulatory capture, and fewer real alternatives. The tools that shape culture and the economy risk becoming less open and less accountable over time.
Dependence and skill atrophy
AI can help us learn faster and produce better content more quickly. On the other hand, it can also serve as a substitute for thinking, used to fill the world with garbage.
Many education systems were already outdated 50 years ago, and they aren't ready for this development either. Many systems still reward output over process. Tech and "AI" literacy, meaning an understanding of how technology is changing our social interactions and the dangers this brings, is becoming more and more relevant, yet it is still fundamentally missing in most places.
Update - August 2025
AI "agents" are easily available now. They can navigate websites, interact with page elements, and use the internet like a human. A lot of "social" activity online will soon be bots talking to bots, with humans as the target metric.
That creates plenty of opportunities for "social" online interaction to really be driven by people looking to sell something or to influence others' opinions for their personal gain, whatever that may be.
It's now easy to find posts and threads where a well-placed comment can push a product, an agenda, or a narrative. I tested this myself: I asked ChatGPT to surface social posts where I could add a relevant reply pointing to my blog. And that's exactly what it did.
Part of ChatGPT’s answer:
Each entry includes the platform, date, a brief description of the post (based on quoted lines), and ideas for a value‑adding reply that can naturally reference your blog at bryanhogan.com/blog.
It even found a Reddit post I had made the day before that got some traction. I’ve included the response as a PDF.
The problem isn't that I can promote my blog much faster; in the end that would just be some internet spam. The real problem is that bigger actors with worse intentions can apply this approach at a much larger scale. Digital spaces for social interaction are becoming a scripted environment, optimized to move you.
I want to hear your feedback! Was something confusing? Did you get stuck? Did you find this post amazingly helpful? Was it all very mid? Let me know via e-mail or on any of my social media!