The AI pessimists
If we don't let ourselves believe that AI will change the world, it definitely won't.
I was at a conference last week on AI in education. One of the speakers made the point that, throughout all of human history, educational attainment has only made piecemeal improvements. Nothing - from the Victorian age to the Internet age - has caused the educational attainment graph to take a jump.
“Education hasn’t fundamentally changed in hundreds of years,” he pointed out, “nothing suggests to me it will, or should, now.”
That moved the debate onto a more profound question: the best way of teaching (with or without AI). He was an advocate for a traditional style of teaching known as ‘sage on the stage’ - where a teacher lectures to a large group of students who are largely passive consumers of the information. The alternative is a ‘guide on the side’: a teacher who interacts more with their class and guides students to find answers themselves. Most of the panel, and I suspect most of society, are more enamoured of the latter.
A ‘sage on the stage’, insist its advocates, enhances equitability: every learner, no matter their background or existing ability, receives the same information in the same manner: a completely level playing field.
The thing that perspective ignores, though, is that the most advantaged learners will benefit from both the sage on the stage and a guide on the side. 31% of the most affluent schools have private tutors on their books in addition to school teachers, compared to just 12% of the most deprived schools. If you offer just one method of teaching and suggest that’s a level playing field, you ignore the fact that the most well-off families will simply pay for access to the other method.
So, if most people favour this ‘guide on the side’ teacher, and we all accept it would at least be beneficial in addition to the sage on the stage, what’s the argument against it?
It comes down to scalability. A sage on the stage can teach 100 learners; a guide on the side would start to struggle around the mid-20s (in fact, their impact diminishes with every learner added, starting from the jump from one to two).
AI’s potential is in scaling the guide on the side, this teaching style that right now is only available to the most affluent, either at the best schools with small class sizes or through private tutoring. Yes, AI has enormous faults that prevent it from being a flawless guide today (hallucinations), but to insist that those faults are enough to write it off completely is doing a disservice to the millions of learners that would benefit from the personalised, guided learning that is currently out of reach.
And this is my view on AI pessimism at the macro level. Ed Zitron, whose newsletter I read every time it arrives in my inbox, is the king of the AI pessimists and insists - fairly convincingly - that AI will never drive enough value and productivity to recoup the enormous investments and energy consumption that it gobbles up.
Perhaps it’s a bit less technical and a bit more philosophical, but pessimism is usually the easier position to hold on just about anything. Taking the pessimistic position, and being proved wrong, is far less embarrassing than promising a revolution when nothing comes. That’s why I’m generally cautious of those in the AI pessimism camp.
But more than that, I think this pessimism does a disservice to those who could potentially benefit the most from AI. When I look at AI, I see two enormous potential applications. The first is scaling things that have traditionally been held back by scarcity and dependent on humans. Education has been held back by the availability of teachers, and the same constraint applies to medicine, therapy, mentorship, fitness, sports training, nutrition. Put bluntly, these are things the rich pay for, and the poorest in society wait for, or never get access to. If AI can scale access to things like this, that will be a huge step towards levelling the playing field in so many walks of life. Even if an AI personal trainer never competes with the human trainer, surely it’s better than the current paradigm of ‘something or nothing’.
The second opportunity is a jobs shake-up. When we think of AI, we’re typically thinking about generative AI: text or image-generating tools like ChatGPT and Dall-E. These are very cool, but they are also just a window into how machine learning can mimic human behaviours very accurately, and rapidly learn and improve. It points to a future where AI can take on other monotonous tasks that humans are currently burdened with. Jobs that are uninteresting, unchallenging, or dangerous can be replaced by meaningful work that only humans can do (as long as we make a concerted effort to ensure those jobs are created and distributed fairly). The mistake is that when people hear ‘AI will take jobs’, many assume that means ChatGPT will take on those jobs. ChatGPT is not AI: ChatGPT is just a demonstration of a technology.
That’s the thing about AI pessimism: it misses the key point, which is that what we see today is only the beginning. If you look beyond the outputs (the tools we have) and towards the patterns (the ability to learn rapidly and exponentially, mimic humans, follow multi-step processes), it’s easy to apply a bit of creativity and see how AI could scale up traditionally scarce resources and create an abundance of high-quality jobs. Failing to have that creativity fails only the people who would benefit most.