IBM’s Watson gets sentimental
It wasn't quite as iconic a moment as IBM's Deep Blue supercomputer beating world champion Garry Kasparov at chess.
But seeing Watson, IBM's question-answering computer system, take on Harish Natarajan, one of the world's top debaters, last year in San Francisco was a fascinating insight into where natural language processing is heading.
The functionality that powered Project Debater in that demonstration, including sentiment analysis, summarisation of large amounts of textual data and clustering of topics, is now coming to Watson's commercial products. In New Zealand those products are used by the likes of Soul Machines to power "artificial humans", computer-generated characters that are increasingly being deployed to handle customer service queries.
The problem to date with AI systems, as anyone who talks to Alexa, Siri or Google Assistant on a regular basis knows, is that they aren't very good at understanding the sentiment in language. They are likely to take an idiom like "let's get the ball rolling on this" quite literally, and they fall short when it comes to the casual speech that makes up most of our day-to-day conversations.
AI systems are very good at pulling vast amounts of relevant information together, but they don't typically perform well at doing so quickly in response to the arguments of others, or at reading the tone of the language used. Watson services, with a bit of training no doubt, will now be able to take on those challenges.
"It is able to identify and understand complicated words that when put together represent a sentiment shift," says IBM New Zealand's Chief Design and Technology Officer, Isuru Fernando.
"That's really a combination of words that when put together take on a brand new meaning. That's a capability that hasn't really been in the market."
Those features will roll out during the course of the year in existing Watson products, such as Watson Assistant and Watson Discovery.
It comes as the other tech giants develop their natural language processing offerings to be more useful for real-life conversations and information gathering. In 2018 I watched as Google's chief executive Sundar Pichai unveiled Google Duplex at the company's I/O conference.
He played a recording of the Duplex-powered digital assistant calling a restaurant to make a reservation, its natural-sounding voice conducting a convincing, albeit brief, conversation with the maitre d' to book a table. A call to book a hair appointment was equally successful. This is my dream come true: someone to take care of the stuff I'm so bad at, like remembering to make restaurant bookings.
Duplex was used last year by Google to call a number of New Zealand businesses to check what their Labour weekend opening hours were, updating Google Search and Maps results with the information. Duplex has had a low-key launch in the US, so it is early days for the technology. But its potential to automate the low-level admin in our lives is quite compelling.
Ultimately, the audience judged Natarajan the winner of IBM's computer-versus-human debate. Project Debater did well at assembling a logical argument on the topic of why subsidising preschools helps kids get the best start in life, and it was the more informative competitor. But Natarajan had the innate understanding of how to really persuade people.
So the robots still have some learning to do before they truly understand us. But the new tools rolling out through Watson do promise to help us make better sense of the mountain of text we produce and consume, summarising it and coming up with coherent responses to arguments based on analysis of language and content.
You can imagine that in the legal profession, or any business setting, this would be an invaluable skill set to draw on as we start to explore the next wave of AI-powered tools and what they can do for us in our personal lives and in the enterprise.
Peter Griffin is one of New Zealand's leading tech commentators. Follow him on Twitter at @petergnz or contact him on [email protected].