Feature Learning
I want to go through the Wikipedia series on Machine Learning and Data mining. Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.
Notes
In machine learning (ML), feature learning or representation learning is a set of techniques that allow a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
Feature learning is motivated by the fact that ML tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data, such as images, video, and sensor data, has not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
Feature learning can be supervised, unsupervised, or self-supervised (minimal code sketches for several of the techniques named below follow the list):
- In supervised feature learning, features are learned using labeled input data. Labeled data consists of input-label pairs, where the input is given to the model and it must produce the ground-truth label as output.
  - Supervised dictionary learning develops a set of representative elements from the input data such that each data point can be represented as a weighted sum of those representative elements.
  - Neural networks are a family of learning algorithms that use a network consisting of multiple layers of interconnected nodes.
- In unsupervised feature learning, features are learned from unlabeled input data by analyzing the relationships between points in the dataset.
  - K-means clustering is an approach for vector quantization.
  - Principal component analysis is often used for dimension reduction.
  - Locally linear embedding is a nonlinear approach for generating low-dimensional, neighbor-preserving representations from (unlabeled) high-dimensional input.
  - Independent component analysis is a technique for forming a data representation using a weighted sum of independent non-Gaussian components.
  - Unsupervised dictionary learning does not utilize data labels; it exploits the structure underlying the data to optimize the dictionary elements.
- Multilayer/deep architectures
  - The hierarchical architecture of the biological neural system inspires deep learning architectures for feature learning, which stack multiple layers of learning nodes. These architectures are often designed on the assumption of distributed representation: the observed data is generated by the interactions of many different factors on multiple levels. In a deep learning architecture, the output of each intermediate layer can be viewed as a representation of the original input data.
  - Restricted Boltzmann machines are often used as a building block for multilayer learning architectures.
  - An autoencoder, consisting of an encoder and a decoder, is a paradigm for deep learning architectures.
- In self-supervised feature learning, features are learned from unlabeled data, as in unsupervised learning, but input-label pairs are constructed from each data point, which enables learning the structure of the data through supervised methods such as gradient descent.
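As a rough illustration of dictionary learning (the supervised and unsupervised variants above learn the same kind of representation), here is a minimal unsupervised sketch using scikit-learn's DictionaryLearning; the toy data and parameter values are my own assumptions, not from the article.

```python
# Minimal dictionary-learning sketch: toy data, arbitrary parameters.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))          # 200 unlabeled samples, 20 raw features

# Learn 8 representative elements ("atoms"); each sample is then encoded as a
# sparse weighted sum of these atoms.
dico = DictionaryLearning(n_components=8, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, random_state=0)
codes = dico.fit_transform(X)           # shape (200, 8): the learned representation
atoms = dico.components_                # shape (8, 20): the dictionary elements

# Reconstruct a sample from its code to check the "weighted sum" interpretation.
reconstruction = codes[0] @ atoms
print(codes.shape, atoms.shape, np.linalg.norm(X[0] - reconstruction))
```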
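For K-means as a feature learner, a common recipe (an assumption on my part, not stated in the article) is to describe each point either by its nearest centroid or by its distances to all centroids:

```python
# Minimal K-means / vector-quantization sketch with toy data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                 # unlabeled raw data

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(X)

# Vector quantization: each point is summarized by its nearest centroid ...
assignments = kmeans.predict(X)                # shape (300,)

# ... or, as a richer learned representation, by its distance to every centroid.
features = kmeans.transform(X)                 # shape (300, 16)
print(assignments[:5], features.shape)
```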
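A minimal principal component analysis sketch for dimension reduction, with an arbitrary toy dataset and component count:

```python
# Keep only the directions of largest variance as the learned representation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))        # 50 raw features

pca = PCA(n_components=5).fit(X)      # retain the 5 top-variance directions
X_reduced = pca.transform(X)          # learned 5-dimensional representation
print(X_reduced.shape, pca.explained_variance_ratio_)
```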
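A minimal locally linear embedding sketch; the synthetic "swiss roll" dataset and neighbor count are choices made here for illustration:

```python
# Nonlinear, neighbor-preserving reduction of a 3-D manifold to 2-D.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)   # 3-D nonlinear manifold

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
X_embedded = lle.fit_transform(X)    # 2-D, neighbor-preserving representation
print(X_embedded.shape)
```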
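A minimal independent component analysis sketch in the classic blind source separation setting; the source signals and mixing matrix are made up for illustration:

```python
# Recover independent non-Gaussian components from observed mixtures.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                         # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))                # source 2: square wave (non-Gaussian)
S = np.column_stack([s1, s2])

A = np.array([[1.0, 0.5], [0.5, 2.0]])     # "unknown" mixing matrix
X = S @ A.T                                # observed mixtures (weighted sums)

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)               # estimated independent components
print(S_est.shape)
```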
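A minimal autoencoder sketch in PyTorch; the layer sizes, toy data, and training schedule are arbitrary assumptions:

```python
# Train encoder + decoder to reproduce the input; the bottleneck is the feature.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(256, 20)                       # unlabeled raw data

encoder = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 3))
decoder = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 20))
model = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), X)                # reconstruction error
    loss.backward()
    optimizer.step()

codes = encoder(X)                             # learned features, shape (256, 3)
print(codes.shape, float(loss))
```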
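Finally, a minimal self-supervised sketch, assuming a simple pretext task in which the "label" for each unlabeled point is one of its own held-out coordinates; the task and model are illustrative assumptions, not the article's method:

```python
# Construct input-label pairs from unlabeled data, then train supervised.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))        # unlabeled data
X[:, -1] = X[:, :3].sum(axis=1)        # give the last coordinate some structure

# Each data point yields a pair: input = the point with its last coordinate
# removed, label = that removed coordinate.
inputs, targets = X[:, :-1], X[:, -1]

# Ordinary supervised, gradient-based training on the constructed pairs; the
# hidden layer then serves as a learned representation of the data.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(inputs, targets)
print(model.score(inputs, targets))
```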