The stratospheric rise of artificial intelligence (AI) over the last couple of years has been stunning to watch.
We've gone from barely knowing it existed to being able to log on and use any number of powerful generative AI tools whenever we like, and the genie now seems to be fully out of the bottle.
However, anyone who's had ChatGPT return a mediocre draft, or found a factual error in a response from Google Gemini (formerly known as Bard), will have seen that AI is far from foolproof right now.
Enter Bill Gates, someone who should know a thing or two about big steps forward in computing as the co-founder and former boss of Microsoft.
Gates recently talked about AI during an appearance on the Armchair Expert podcast with Dax Shepard.
It might not be a huge surprise to learn that while he's clearly very impressed by some AI tools' capabilities, Gates also thinks you need to take some claims about AI's immediate potential with a grain of salt.
He basically explains that using AI is currently a process of finding out its strengths and weaknesses: "I'm using it all the time and saying, 'Okay, no, it's not good enough for this - but wow, it is good enough for that.'"
Where Gates does see potential is in jobs that humans do slowly but are fundamentally quite simple, an area where AI could majorly streamline things.
However, where things like creativity or emotional intelligence are part of the equation, Gates is less confident in AI's ability to give useful input - and he even questions whether employing it instead of a human is the right thing to do.
Furthermore, he expresses concern about AI's ability to scrutinize its own work: "It doesn't know to check its answers," he said.
Giving an example, he explained that in "a Sudoku puzzle, you have to do a lot of recursive reasoning and it doesn't know to take extra time". And when it then gets an answer wrong and is told as much, according to Gates, the problem isn't fixed.
"It's so apologetic, and it says it'll try again, but of course it gets it wrong again." This makes it look like Gates thinks there's still a pretty long road ahead for AI.
After all, if we're going to employ it in high-stakes situations like healthcare or any remotely dangerous workplace, that sort of error loop can't be allowed to happen. This sort of hesitancy from a tech figure as big as Gates is interesting, too - a lot of big players are all-in on AI, but it sounds like he thinks a more measured approach could be sensible.