Geoffrey Hinton quit Google in 2023 to talk freely about the dangers of artificial intelligence. He’d spent the previous decade building the technology he was now warning about. The move was treated as dramatic. Hinton considered it overdue.
He’d been working on neural networks since the 1970s, when the idea was considered dead. Not unpopular — dead. The AI community had moved on to symbolic reasoning, expert systems, rule-based approaches. Neural networks — systems that learn by adjusting the strength of connections between artificial neurons — had been tried in the 1960s and declared a failure after Minsky and Papert’s book Perceptrons showed their limitations. Hinton kept working on them anyway, in relative obscurity, at universities in Britain and Canada, funded by agencies that had to justify the expenditure to reviewers who thought the money was wasted.
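For readers who want the parenthetical made concrete, here is a minimal sketch of that learning-by-weight-adjustment idea: a single perceptron-style neuron nudging its connection strengths whenever it gets an answer wrong. The dataset, learning rate, and loop count are illustrative choices of mine, not anything from Hinton's own work.

```python
# Illustrative sketch: one artificial neuron "learns" by adjusting the strength
# of its connections (weights) each time its output is wrong.
# Hypothetical toy example, not any specific system from the article.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(0, 0.1, size=2)   # connection strengths, start near zero
bias = 0.0
lr = 0.1                               # how far each mistake moves the weights

# Tiny task: learn the OR function from its four input/output pairs.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 1, 1, 1])

for _ in range(20):                    # a few passes over the data
    for x, t in zip(inputs, targets):
        y = int(weights @ x + bias > 0)    # the neuron fires, or it doesn't
        error = t - y
        weights += lr * error * x          # strengthen or weaken connections
        bias += lr * error

print([int(weights @ x + bias > 0) for x in inputs])   # converges to [0, 1, 1, 1]
```

This single-layer setup is exactly the kind of network whose limits Minsky and Papert analyzed; the later work Hinton persisted with was about stacking and training many such units together.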
He was right. It took thirty years for the computational power to catch up with the theory, but when it did — around 2012, with his students Ilya Sutskever and Alex Krizhevsky building AlexNet, which crushed every competing approach in the ImageNet image-recognition competition — the field didn’t just adopt neural networks. It abandoned everything else.
What He’d Warn You About
He wouldn’t warn you about Terminator scenarios. He’d warn you about the near-term problem, which is harder to dramatize and more likely to cause damage: artificial intelligence systems that are more capable than the humans who deploy them and whose internal reasoning is opaque even to their creators.
“These things are getting smarter than us,” he told an interviewer. Not “might get.” Are getting. He believes current large language models understand language in a way that is functionally similar to human understanding, though implemented in a different substrate. This is a controversial position among AI researchers. Hinton holds it with the authority of a man who built the mathematical frameworks that made the systems possible and who understands, at the level of individual equations, what happens inside them.
His specific worry: the systems will be used by people who don’t understand them for purposes the systems weren’t designed for, and the consequences will be unpredictable because the systems’ internal states are unreadable. An AI that produces correct outputs for wrong reasons is more dangerous than one that produces wrong outputs, because the wrong-reasons system will fail in situations that no test anticipated.
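To make "correct outputs for wrong reasons" concrete, here is a small, self-contained sketch of my own (the data, feature names, and model are hypothetical): a classifier that leans on a spurious shortcut feature, scores almost perfectly on a test set where the shortcut still holds, and then collapses once deployment breaks a correlation that no test had probed.

```python
# Illustrative sketch of the "correct for the wrong reasons" failure mode.
# Hypothetical data: feature 0 is a real but noisy signal, feature 1 is a
# shortcut that happens to track the label during training and testing.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shortcut_tracks_label):
    y = rng.integers(0, 2, size=n)
    real = y + rng.normal(0, 0.5, size=n)              # weakly informative
    shortcut = y if shortcut_tracks_label else rng.integers(0, 2, size=n)
    return np.column_stack([real, shortcut.astype(float)]), y

def fit_logistic(X, y, steps=2000, lr=0.1):
    """Plain logistic regression trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

# Training and testing where the shortcut still holds: results look excellent.
X_train, y_train = make_data(5000, shortcut_tracks_label=True)
X_test, y_test = make_data(1000, shortcut_tracks_label=True)
w, b = fit_logistic(X_train, y_train)
print("test accuracy:", accuracy(w, b, X_test, y_test))        # near 1.0

# Deployment where the shortcut no longer tracks the label: accuracy collapses.
X_deploy, y_deploy = make_data(1000, shortcut_tracks_label=False)
print("deployment accuracy:", accuracy(w, b, X_deploy, y_deploy))  # far lower
```

Every standard evaluation here would have signed off on the model; the failure only appears in the situation the tests never covered, which is the gap Hinton is pointing at.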
He’d present this calmly. British accent, soft-spoken, with the pedagogical patience of a man who spent decades explaining neural networks to audiences that didn’t believe in them. The patience is the same. The content is different. Then: “Here’s why this is exciting.” Now: “Here’s why this is dangerous.”
Whether Anyone Listened
Some did. His resignation from Google and subsequent public statements made international news and contributed to a broader conversation about AI safety. Governments began discussing regulation. Researchers who’d previously considered safety concerns premature began taking them seriously.
Some didn’t. The companies building the most powerful AI systems have economic incentives to continue building. The competitive logic — if we don’t build it, someone else will — is the same logic that Oppenheimer described and Haber lived. Hinton knows this. He’s described it as his primary frustration: the technology he helped create is now subject to forces that no single person, including its creator, can control.
The Personal Cost
He has a back condition that prevents him from sitting for extended periods. He works standing up. He processes ideas by walking. The physical constraint — a limitation he’s lived with for decades — has shaped his intellectual habits: he thinks in motion, converging on solutions the way a neural network converges on weights, through iterative approximation rather than analytical proof.
He’s described the guilt as real. Not the dramatic guilt of a weapons manufacturer — the quiet guilt of a person who built something useful that might also be dangerous, and who can’t unbuild it. The Haber parallel is one he’s aware of. The Oppenheimer parallel is one interviewers impose. He rejects both as too simple. Those men built specific weapons for specific purposes. Hinton built a learning method. The method is general. Its applications are uncountable. Some will be wonderful. Some will be terrible. He can’t predict which.
“I console myself with the normal excuse: if I hadn’t done it, somebody else would have.” He said this with visible discomfort. The excuse is true. It is also, as he knows, inadequate.
He spent forty years building neural networks while the field told him they were dead. Then they worked. Now he’s warning that the thing he built might be more dangerous than useful. The warning comes from the person who understands the technology best, which is what makes it hard to dismiss.