The Unreasonable Effectiveness of Data
Looking for more research papers to read, I scanned my Hands-On Machine Learning notes for the many papers referenced there; this is one of them. These papers are mainly on machine learning and deep learning topics.
Reference: The Unreasonable Effectiveness of Data paper
Sciences that involve human beings rather than elementary particles have proven more resistant to elegant mathematics. Our goal in modeling human behavior should not be to devise extremely elegant theories, but to make use of the best ally we have: the unreasonable effectiveness of data. The reason statistical speech recognition and machine translation succeeded earlier than tasks like document classification or sentiment analysis is the size of the data available for them. The first lesson of Web-scale learning is to use the large-scale data that is available rather than hoping for annotated data that isn't. Another important lesson from statistical methods in speech recognition and machine translation is that memorization is a good policy if you have a lot of training data. The statistical language models used in both tasks consist primarily of a huge database of probabilities of short sequences of consecutive words (n-grams). These models are built by counting the occurrences of each n-gram in a corpus of billions or trillions of words. Simple models based on a lot of data invariably trump more elaborate models based on less data.
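To make the counting idea concrete, here is a minimal Python sketch of a bigram model estimated purely by relative frequency. The toy corpus and function names are my own, standing in for the billions or trillions of words the paper describes.

from collections import Counter, defaultdict

def ngrams(tokens, n):
    # Yield every run of n consecutive tokens.
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

def train_bigram_model(corpus):
    # Count unigrams and bigrams, then estimate P(w2 | w1) as
    # count(w1, w2) / count(w1) -- memorization by counting.
    unigrams = Counter()
    bigrams = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        unigrams.update(tokens)
        bigrams.update(ngrams(tokens, 2))
    probs = defaultdict(dict)
    for (w1, w2), count in bigrams.items():
        probs[w1][w2] = count / unigrams[w1]
    return probs

# Toy corpus; real models count over Web-scale text.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]
model = train_bigram_model(corpus)
print(model["sat"])   # {'on': 1.0}
print(model["the"])   # cat, mat, dog, rug each with probability 0.25

Nothing here generalizes; the model simply memorizes which word pairs it has seen, which is exactly why it keeps improving as the corpus grows.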
Simple n-gram models or linear classifiers based on millions of specific features perform better than elaborate models that try to discover general rules. For many tasks, words and word combinations provide all the representational machinery we need from text. The experimental evidence of the last decade (2000s) suggests that throwing away rare events (in an attempt to address the curse of dimensionality) is almost always a bad idea, because much Web data consists of individually rare but collectively frequent events. The Semantic Web is a convention for formal representation languages that lets software services interact with each other “without needing artificial intelligence.” “The Semantic Web will enable machines to comprehend semantic documents and data, not human speech and writings.” The semantic interpretation problem - the problem of interpreting human speech and writing - is quite different from the problem of software service interoperability.
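As an illustration of such a simple feature-based model, here is a sketch (assuming scikit-learn is available) of a linear classifier over raw unigram and bigram counts, with no hand-crafted rules. The tiny dataset and labels are invented for the example.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative dataset; the paper's point is that this same simple
# model keeps improving as you feed it orders of magnitude more examples.
texts = [
    "great movie, loved it",
    "wonderful acting and story",
    "terrible plot, waste of time",
    "awful film, fell asleep",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Words and word pairs as features: millions of specific features at scale,
# each one a rare event individually but collectively frequent.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(texts, labels)
print(model.predict(["loved the acting"]))  # likely [1]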
Because of a huge cognitive and cultural context, linguistic expression can be highly ambiguous and still often be understood correctly. The same meaning can be expressed in many different ways, and the same expression can express many different meanings. Choose a representation that can use unsupervised learning on unlabeled data, which is so much more plentiful than labeled data.