
Member Reviews

Never thought I would be interested in a book on this topic, but here we are. So many of the issues raised in James Barrat’s The Intelligence Explosion made me question my own use of AI and the use of AI among those around me.
Some of the elements addressed also made me rethink the entire concept of AI, for example the way it cannot be controlled and its level of unpredictability. It honestly makes me compare the world we live in to the dystopian societies in books I read when I was much younger.
Overall, fantastic work!

If you’re looking for ammunition to bolster your belief that AI is bad - or at least dangerous - there’s a lot of solid material in this book. Fundamentally, it comes down to the following:
- AI makes mistakes (call them hallucinations if you like) and we shouldn’t trust it.
- AI has serious ethical issues, and is riddled with biases.
- There are major safety concerns, especially as AI becomes more autonomous.
- While AI can be used for good, it can easily be used by terrorists, hackers, and rogue states.
- AI is almost certainly going to upend our economy.
- All these things have already happened, but it’s getting worse.
- We have absolutely no idea how AI works and we can’t control it.
- When AI reaches a certain point, it will be out of our hands.
- This can literally pose an existential threat to the human race.
- The AI companies and governments know all of this but don’t care - they’re just trying to make money and/or win.
All of which is true, but unless you’ve been living in a remote cave for the last 18 months, you probably already know all of the above. The Internet is awash with people sounding the alarm on a daily basis, from everyday bloggers and disgruntled artists to journalists, tech CEOs, and AI experts. (And, as you’re equally aware, they have managed to do absolutely nothing to slow the inexorable rise of AI.) There’s very little new in the book, even for someone with only a passing interest in AI.
That said, one of the few things I learned was absolutely terrifying: the Israeli autonomous assassination AIs, which sound like something out of a Terminator movie, but are in fact real. They use AI to scour social media and other online sources to guess who might be a Hamas leader, and where they might be at any given time. Then the AI dispatches autonomous drones to kill them, all with no human intervention. Shockingly, the system is called Where’s Daddy? because it extrapolates from children’s locations where their father is likely to be. The AI comes with guardrails: killing 10 civilians for one Hamas member is acceptable, but for a leader, it can be up to 200 collateral deaths. And 70% accuracy in selecting targets is good enough. Regardless of the rights or wrongs of what’s happening in Gaza, the fact that that technology is actually in use should scare every single person on Earth. If the Israelis can do it, so can anyone else.
I also wonder how much of this book will be out of date by the time it’s published, let alone six months later. Predictions about what will happen in 2026 will very soon be irrelevant, and in the world of AI, predicting what will happen in 2028, let alone 2035, is a fool’s game.
Overall, I was left with the feeling that Barrat would have done better to focus his ire on the AI companies and visionaries, rather than the technology. The direct quotes from the likes of Altman and Hassabis show that they know just how dangerous this technology is, and they are acutely aware that they could destroy humanity, but they’re pressing ahead anyway. That’s pretty much the definition of pure evil, on a scale that dwarfs even a Bond supervillain.
“AI will mostly likely lead to the end of the world, but in the meantime, there will be great companies created.”
(Sam Altman)

This was an unbelievably eye-opening read about the unknown unknowns of AGI and ASI. I highly recommend this book for its powerful insight into the real dangers of artificial intelligence. I thought I had a good understanding of the risks, but this book made clear that no one truly knows what is going to happen and the damage could be irreversible.
I recommend this to all readers as our lives will inevitably be affected by The Intelligence Explosion. A wonderfully insightful and urgent warning we should all take extremely seriously.

When I first got this book I was worried that it would be a defense of AI, talking about how awesome it is and how everyone should use it (not gonna lie, I only read the title and not the synopsis, go me). It turns out this is all about AI, yes, but it is also about how dangerous it can be and how it needs to be examined, understood, and regulated.
The book goes into depth about AI systems, trying to explain everything as best it can in terms that are easy to understand. I say trying because it turns out that most of what goes on inside AI systems is completely unknowable. The author does a good job of explaining what can be explained, in my opinion. It is very understandable and readable, and I came out of this with a lot more knowledge than I entered with.
A lot of this book covers how we have no idea what is going on inside AI. Many systems are “black boxes,” meaning we know what the inputs and outputs are but we have no idea how those outputs were produced. It also goes into the different ways people are trying to make AI safe to use.
This has a very negative, doomsayer approach to AI. If a lot of the people involved with AI can be believed, or at least the people mentioned in this book, AI is basically going to cause the apocalypse and kill everything. I don’t like AI personally, but that seems a little extreme to me?
While the book does a very good job of explaining things, it can go a little far at times. It can be very repetitive, explaining the same concept multiple times across multiple chapters.
Overall, despite the book’s bleak doom-and-gloom vision, I really did like it much better than I thought I would. I give this 3.5 stars. Recommended for people looking to know more about AI.