I wrote a book about (and with) AI

At the end of 2023, I was approached by an editor working for Hachette to write a book about artificial intelligence.
This was the first time I’d been asked to write a specific book rather than pitching one myself, but I was intrigued. The book was intended to be part of the relaunch of a series called “50 Ideas You Really Need to Know”, and what Nicole Thomas, the editor in question, didn’t know was that I had edited and published the first four books in the series back in the day – philosophy, physics, mathematics and management. I knew the series inside out, having defined it, so I thought, “how hard can it be?”

Writing books is hard. And now that I look back, I recall my authors telling me how difficult these particular books were to write because of their combination of breadth and depth. They were right, but in the end I am very proud of 50 AI Ideas You Really Need to Know. I even had a helping hand writing it, but more on that later.
Nowadays I’m a full-time publisher at Penguin Press and I’ve also published two excellent AI books by other authors this year: Salman Khan’s Brave New Words and Neil Lawrence’s The Atomic Human.

Sal is the founder of Khan Academy, with a mission to offer every child a free, world-class education; something like 150 million people have signed up to use the service. When OpenAI was preparing to release ChatGPT, Sam Altman and Greg Brockman, the CEO and President of OpenAI respectively, approached Sal to offer him the next iteration, then GPT-4, from which to create a positive use case for generative AI as a personal tutor and teaching assistant. Pandora’s box had been opened, and Sal’s title is about embracing what’s good about it, rather than trying to slam the lid shut and ban schoolkids from using it. Instead, he argues, we should adopt a policy of educated bravery.
I’d long thought Neil Lawrence was the most knowledgeable person I could find to write about AI, and he didn’t disappoint. Neil opens his incredibly readable, story-driven book by revealing a conjuring trick in which Yann LeCun and Mark Zuckerberg came together to rebrand machine learning as artificial intelligence – and the rest is history. Neil went on to be Amazon’s Director of Machine Learning for three years, and is nowadays the DeepMind Professor of Machine Learning at Cambridge. He’s unusual in having been at the coalface in industry and also at the pinnacle of academia. His book is about the multifaceted nature of intelligence and what makes us Human – something the machine can never take away. That, in his mind, is also why it’s misguided to talk about Superintelligence as a concept in which machines are vastly smarter than us, posing what’s called an “existential risk”. Neil believes the nature of Human intelligence (and possibly even our consciousness) comes from our constraints, not our abilities, because our intelligence is embodied. He thinks the biggest risk from AI is how it undermines our decision-making in the here and now – if we let it.
I mention this rebranding because modern AI is the result of a machine-learning technique known as neural networks, which tries to emulate how the Human brain works. It was Alan Turing who first suggested we shouldn’t try to build a very intelligent computer directly, but should instead split the challenge in two: a computer capable of learning, and a means of training it. In the old days, good old-fashioned AI was much more about programming Human intelligence directly into machines. That approach has now been supplanted, but the name has remained, and it’s what we think of as modern AI.
Back in the 1990s I published unfashionable but pioneering books on neural nets and machine learning from the likes of Donald Michie and Chris Bishop, before going on to publish more traditional AI textbooks in the noughties. But something that’s special about some books is how they can change the conversation – sometimes they can change the world.

In 2014, I published Swedish thinker Nick Bostrom’s Superintelligence, about how we were at risk of becoming the agents of our own demise. The basic premise is that we’re not the fastest animals, or the strongest, or the ones with the sharpest teeth or claws, but Humans dominate the world because we have the best brains. So what happens when we invent brains more powerful than ours, and they invent better brains still, and there’s an “intelligence explosion” that leaves us far behind? It’s not that this superintelligence might want to eliminate its creators – rather, our fate would be largely irrelevant to it. No Humans actively want gorillas to go extinct, but their fate is in our hands, and we’re not doing much to prevent it because we have bigger priorities.
Books can change conversations, or start new ones. Elon Musk tweeted about Nick’s book, and he and Bill Gates publicly discussed it – and it became a New York Times bestseller. Overnight, the field of “AI safety” went from being a fringe subject hardly anyone knew existed to the global mainstream. Elon is the person who (along with Nick and perhaps even me) thinks most about the future of Humanity and how to secure it, so he organized a new open-source vision for AI by setting up OpenAI – which ultimately led to the generative AI revolution we see today.
Meanwhile, in 2019 I was approached by an American publisher, BenBella, to see if I would edit a book on AI by Michael Kanaan, then co-lead of artificial intelligence for the US Air Force. Mike’s book T-Minus AI is a great story of what AI is and how it came about, and was published just as the new era was beginning to unfold. OpenAI had been developing a Google invention called the transformer, through which AI seemed to be genuinely creative for the first time. It was such a shocking development that OpenAI felt unable to publicly release the software, which they called GPT-2, in case it fell into the wrong hands.

However, Mike was allowed access and used it to write a final paragraph of his book. On just the second attempt, his 43-word prompt produced an AI-written paragraph of 77 words that blew our minds. I wonder if it was the first time AI had contributed to a book in this way.
The world has shifted, and Humans quickly acclimatise – it’s one of our strengths. With my own 50 AI Ideas You Really Need to Know, I always had in mind that I should give AI a voice and have it write the final chapter – the 50th idea, entitled “The View from AI”. But writing an entire chapter is quite a leap from writing a single paragraph. Over the months of writing the other chapters, I would sometimes have ideas for what I hoped the AI would discuss, so I jotted them down into a growing prompt for when the day finally came. The first sentence of 50 AI Ideas reads:
“By now, probably all of you reading this book will have experienced having a conversation with a computer, and feeling as if you are understood.”
Creative AIs are known as “large language models”, and the particular model I wanted to use for this task was Anthropic’s Claude Sonnet. To begin our conversation, I explained what I wanted it to do, and it asked some basic questions about word count and tone. A fun feature of the book is its standalone text boxes, each discussing a self-contained topic of interest that relates to the main theme but can be read on its own. Claude wondered if I wanted to tell it what to write about in the box, or if it should choose a topic of its own. I was too busy, running up against my publisher’s deadline, so I simply told it to pick something it thought would be interesting.
The first attempt was impressive, but failed in a way AI often still does, by being far too vague and generic. I wanted this final chapter to be useful and to offer specific insights to readers, so I asked it to try again on that basis. The second attempt was far more detailed, but produced far too much text – it would never fit in the space provided. I wondered about editing it down myself, but that would be time-consuming and would turn the AI’s words into my own. Thinking it a tough ask, I requested a third attempt that kept the specifics but stuck to the original word count. And that, entirely unedited on my part, is what appears in the final book. I find it quite mind-blowing, and I know AI will only get better.
I also felt seen. Readers of this site, and my followers on social media generally, will know me as a space geek. But there was no reason for the AI to know that – nothing in my prompt mentioned space. Yet there, in the standalone text box in front of me, the AI had chosen to address the Fermi paradox – the mysterious apparent absence of alien intelligence in the Universe – and how and why AI might assist in the search for it.
Later, when I discussed this with Nick Bostrom, he said, “it used at least to be a rule I noticed that every conversation with an interesting person eventually would bring up the Fermi Paradox”. I think it’s a deep insight that we might now consider AI to be an interesting person.
The book was published in September 2024, and I hope you all enjoy it. There wasn’t room for me to publicly acknowledge the inspiration of all the authors I’ve worked with over the years, so I do that here.
~ by keithmansfield on October 25, 2024.
Posted in Authors, Publishing Industry, Science, Writing
Tags: AI, Artificial Intelligence, book blogs, chatgpt, machine learning, publication, publishing, Quercus, technology, writing