Artificial intelligence has a checkered past. It has gone through multiple waves of huge expectations followed by incredible disappointments. We have seen the rise and fall of expert systems, neural networks, logic (hard and fuzzy) and the use of statistical models for reasoning.
We seem to be, once again, in an era of heightened expectations regarding A.I. We now have Siri, IBM Watson, self-driving cars and the proliferation of machine learning, data mining and predictive systems that promise an unprecedented, even frightening, level of machine intelligence.
But how is this current rise of A.I. different from what we have experienced before? What has changed to make us believe that the technology will make good on its countless promises?
I would argue that, ironically, the core technologies of A.I. have not changed drastically, and today’s A.I. engines are, in most ways, similar to those of years past. The techniques of yesteryear fell short not because of inadequate design, but because the required foundation and environment had not yet been built. In short, the biggest difference between A.I. then and now is that the necessary computational capacity, raw volumes of data, and processing speed are now readily available, so the technology can really shine.
For example, IBM Watson leverages the idea that facts are expressed in multiple forms and that each match against one of those forms counts as evidence for an answer. The technology first analyzes the language input to extract the elements and relationships needed to determine what you might be looking for, and then uses thousands of patterns built from the words in the original query to find matches in massive corpora of text. Each match provides a single piece of evidence, and the pieces of evidence are added up to give Watson a score for each candidate answer. While it is an exceptional system, there is not a lot of new A.I. technology at play in Watson. Watson tends to converge on the right answer because of the sheer volume of different matches against the truth as it is expressed in the text.
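The evidence-accumulation idea described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration, not Watson's actual pipeline; the question, patterns, candidate answers and tiny corpus are all invented for the example:

```python
from collections import Counter

def score_candidates(patterns, candidates, corpus):
    """Toy evidence accumulation: every passage that matches one of the
    query patterns and mentions a candidate answer adds one unit of
    evidence for that candidate. (Illustrative sketch only; this is not
    Watson's real scoring pipeline.)"""
    evidence = Counter()
    for passage in corpus:
        text = passage.lower()
        if any(pattern.lower() in text for pattern in patterns):
            for candidate in candidates:
                if candidate.lower() in text:
                    evidence[candidate] += 1
    return evidence.most_common()

# Hypothetical question: "Which element has atomic number 79?"
patterns = ["atomic number 79", "79 protons"]
candidates = ["gold", "silver", "mercury"]
corpus = [
    "Gold is the chemical element with atomic number 79.",
    "An atom of gold has 79 protons in its nucleus.",
    "Silver has atomic number 47.",
]
print(score_candidates(patterns, candidates, corpus))
# -> [('gold', 2)]: two independent passages supply evidence for 'gold'
```

The point of the sketch is simply that no single match has to be decisive: the answer that the text supports in the most different ways floats to the top.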
If it has only a small number of documents to examine, however, the odds of finding the information it needs, expressed in a way it can understand, are small. At the same time, the odds that faulty information will get in the way of finding the truth are reduced as the corpus grows. So, in much the same way that search results improve as data sets expand, the raw volume of text available to Watson directly correlates with the probability that the system will arrive at the right answer.
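To make that scaling claim concrete, here is a toy simulation under invented assumptions (the per-document probabilities are made up for illustration, not measured from Watson): each document independently contributes evidence for the correct answer or for some wrong one, and the answer with the most accumulated evidence wins.

```python
import random

def right_answer_wins(num_docs, p_correct=0.10, p_noise=0.06, seed=None):
    """Each document independently yields evidence for the right answer
    with probability p_correct, and for a wrong answer with probability
    p_noise. The answer with the most accumulated evidence wins.
    (Both probabilities are invented assumptions for illustration.)"""
    rng = random.Random(seed)
    right = sum(rng.random() < p_correct for _ in range(num_docs))
    wrong = sum(rng.random() < p_noise for _ in range(num_docs))
    return right > wrong

for n in (10, 100, 1_000, 10_000):
    wins = sum(right_answer_wins(n, seed=i) for i in range(500))
    print(f"{n:>6} documents: right answer wins {wins / 5:.0f}% of the time")
```

With a handful of documents the outcome is close to a coin flip; with thousands, the correct answer wins essentially every time, which is the scaling effect the article attributes to today's abundance of data.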
http://www.computerworld.com/article/2982482/emerging-technology/why-artificial-intelligence-is-succeeding-then-and-now.html