Siri Answers 78.5% Of Questions Correctly In A New AI Assistant Focused Test, While It Still Lags Behind Google

A few months ago, market research firm Loup Ventures ran a test targeting smart speakers from a number of technology companies to see how well they answered questions. Now the firm has conducted a new test focused solely on digital AI assistants. Though Siri still needs work, its rate of correct answers has improved significantly.


To get more objective results, Loup Ventures asked 800 questions of Apple's Siri on iOS 11.4, Google Assistant, Amazon's Alexa, and Microsoft's Cortana, all running on a smartphone. The results show that Apple's voice assistant Siri understood 99% of the questions and answered 78.5% of them correctly.

Compared to a similar AI-focused test from April 2017, in which Siri answered 66.1 percent of 800 questions correctly, this is undoubtedly a big improvement. However, the researchers caution that it's "not worthwhile to compare" these results with the earlier smart speaker test, since "the use cases differ greatly between digital assistants and smart speakers."

Breaking down the results, Siri performed best in the “Command” category, answering 90% of the questions accurately, better than all of its competitors. In the other categories, “Location” queries were 87% correct, “Navigation” 83%, “Information” 70%, and “Commerce” only 60%.

As for the other AI assistants, Google Assistant answered 85.5 percent of the 800 questions correctly and understood all of them. Alexa accurately answered 61.4 percent but misunderstood 13 questions. Meanwhile, Microsoft's Cortana lagged far behind, answering only 52.4 percent of the questions correctly and misunderstanding 19.
