Meta Learning
I want to work through the Wikipedia series on machine learning and data mining. Data mining is the process of extracting and discovering patterns in large data sets using methods at the intersection of machine learning, statistics, and database systems.
Notes
Meta-learning is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017, the term had not found a standard interpretation; however, the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, and thereby to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn.
Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias. This means that it will only learn well if the bias matches the learning problem.
Definition
A proposed definition for a meta-learning system combines these requirements:
- The system must include a learning subsystem
- Experience is gained by exploiting meta-knowledge extracted:
  - in a previous learning episode on a single dataset, or
  - from different domains
- Learning bias must be chosen dynamically
Bias refers to the assumptions that influence the choice of explanatory hypotheses and not the notion of bias represented in the bias-variance dilemma. Meta-learning is concerned with two aspects of learning bias:
- Declarative bias specifies the representation of the space of hypotheses, and affects the size of the search space
- Procedural bias imposes constraints on the ordering of the inductive hypotheses.
Common Approaches
There are three common approaches:
- using (cyclic) networks with external or internal memory (model-based)
- learning effective distance metrics (metric-based)
- explicitly optimizing model parameters for fast learning (optimization-based)
Model Based
Model-based meta-learning models update their parameters rapidly with a few training steps, which can be achieved by their internal architecture or controlled by another meta-learner model.
Memory-Augmented Neural Networks
A Memory-Augmented Neural Network, or MANN for short, is claimed to be able to encode new information quickly and thus to adapt to new tasks after only a few examples.
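As a rough illustration, here is a minimal sketch of the content-based read operation used by NTM-style external memory, which is what lets a MANN store and retrieve new information quickly: the controller emits a key, and the read vector is an attention-weighted sum of memory slots. The memory size, dimensions, and function names below are illustrative, not from the original paper.

```python
import torch
import torch.nn.functional as F

def content_read(memory: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
    """Content-based read from an external memory (NTM/MANN style).

    memory: (num_slots, slot_dim) matrix of stored vectors
    key:    (slot_dim,) query vector emitted by the controller network
    """
    # Cosine similarity between the key and every memory slot
    sims = F.cosine_similarity(memory, key.unsqueeze(0), dim=1)
    # Softmax turns similarities into attention weights over the slots
    weights = F.softmax(sims, dim=0)
    # The read vector is the weighted sum of memory slots
    return weights @ memory

memory = torch.randn(128, 40)            # 128 slots of width 40 (illustrative)
read_vector = content_read(memory, torch.randn(40))
```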
Meta Networks
Meta Networks learn meta-level knowledge across tasks and shift their inductive biases via fast parameterization for rapid generalization.
Metric-Based
The core idea in metric-based meta-learning is similar to nearest-neighbor algorithms: the prediction is a weighted combination of support-set labels, with the weights generated by a kernel function that measures similarity between samples. The aim is to learn a metric or distance function over objects. The notion of a good metric is problem-dependent: it should represent the relationship between inputs in the task space and facilitate problem solving.
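A minimal sketch of that kernel-weighted idea, assuming a fixed Gaussian kernel over Euclidean distance (in a real metric-based meta-learner the embedding or metric itself is learned); all names here are illustrative:

```python
import numpy as np

def kernel_predict(x_query, X_support, y_support, num_classes, bandwidth=1.0):
    """Predict a label distribution as a kernel-weighted vote over support labels."""
    # The Gaussian kernel plays the role of the (here fixed) similarity metric
    dists = np.linalg.norm(X_support - x_query, axis=1)
    weights = np.exp(-dists ** 2 / (2 * bandwidth ** 2))
    weights /= weights.sum()
    # Accumulate each support sample's weight into its class
    probs = np.zeros(num_classes)
    for w, y in zip(weights, y_support):
        probs[y] += w
    return probs
```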
Convolutional Siamese Networks
A Siamese neural network is composed of two twin networks whose outputs are jointly trained, with a function on top that learns the relationship between pairs of input samples.
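A minimal sketch of the architecture, using fully connected encoders for brevity (the convolutional variant applies the same weight-sharing idea to image pairs); layer sizes and names are illustrative:

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    """Two weight-sharing encoders with a learned similarity head (a sketch)."""

    def __init__(self, in_dim=784, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(          # shared by both "twins"
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )
        self.head = nn.Linear(embed_dim, 1)    # the function trained "on top"

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)
        # Component-wise distance between embeddings -> probability of a match
        return torch.sigmoid(self.head(torch.abs(e1 - e2)))
```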
Matching Network
Matching networks learn a network that maps a small labelled support set and an unlabeled example to its label, obviating the need for fine-tuning to adapt to new class types.
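A sketch of the attention mechanism this implies, assuming the support and query examples have already been embedded by some encoder; all names are illustrative:

```python
import torch
import torch.nn.functional as F

def matching_predict(query_emb, support_embs, support_labels, num_classes):
    """Label a query as an attention-weighted sum of one-hot support labels."""
    # Cosine-similarity attention of the query over the support set
    sims = F.cosine_similarity(support_embs, query_emb.unsqueeze(0), dim=1)
    attn = F.softmax(sims, dim=0)
    one_hot = F.one_hot(support_labels, num_classes).float()
    return attn @ one_hot   # predicted label distribution, no fine-tuning needed
```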
Relation Network
The Relation Network (RN) is trained end-to-end from scratch. During meta-learning, it learns a deep distance metric to compare a small number of images within episodes, which is designed to simulate the few-shot setting.
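A minimal sketch of such a learned comparison module, assuming precomputed embeddings; the layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    """Scores how related a query embedding is to a class embedding (a sketch)."""

    def __init__(self, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),   # relation score in [0, 1]
        )

    def forward(self, query_emb, class_emb):
        # The "distance" is learned: concatenate the pair and score it
        return self.net(torch.cat([query_emb, class_emb], dim=-1))
```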
Prototypical Networks
Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class.
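A minimal sketch, assuming precomputed embeddings and at least one support example per class; names are illustrative:

```python
import torch

def prototypical_predict(query_emb, support_embs, support_labels, num_classes):
    """Classify a query by distance to per-class mean embeddings (prototypes)."""
    prototypes = torch.stack([
        support_embs[support_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])
    # Negative squared Euclidean distance serves as the logit for each class
    logits = -((prototypes - query_emb) ** 2).sum(dim=1)
    return logits.softmax(dim=0)
```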
Optimization Based
Optimization-based meta-learning algorithms adjust the optimization algorithm itself so that the model can learn well from only a few examples.
LSTM Meta-Learner
The LSTM-based meta-learner learns the exact optimization algorithm used to train another learner neural-network classifier in the few-shot regime.
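A minimal sketch of the learning-to-optimize idea: an LSTM consumes each parameter's gradient together with the current loss and proposes an update. The published meta-learner additionally ties the learner's parameters to the LSTM cell state, which is omitted here; all names are illustrative:

```python
import torch
import torch.nn as nn

class LSTMMetaLearner(nn.Module):
    """An LSTM that proposes parameter updates from (gradient, loss) inputs."""

    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(2, hidden)   # input: [gradient, loss] per parameter
        self.out = nn.Linear(hidden, 1)      # output: one proposed update

    def forward(self, grad, loss, state=None):
        # grad: (n_params,); loss: scalar tensor broadcast to every parameter
        inp = torch.stack([grad, loss.expand_as(grad)], dim=1)
        h, c = self.cell(inp, state)
        update = self.out(h).squeeze(1)      # (n_params,)
        return update, (h, c)
```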
Model-Agnostic Meta-Learning (MAML)
Model-Agnostic Meta-Learning (MAML) is a fairly general optimization algorithm, compatible with any model that learns through gradient descent.
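A sketch of the first-order variant (often called FOMAML), which skips backpropagating through the inner loop; `loss_fn` and the `tasks` list of (support, query) batches are assumed interfaces for the sketch, not part of the algorithm itself:

```python
import copy
import torch

def fomaml_step(model, loss_fn, tasks, inner_lr=0.01, meta_lr=0.001):
    """One first-order MAML meta-update (full MAML also differentiates
    through the inner gradient step, omitted here for brevity)."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support, query in tasks:
        learner = copy.deepcopy(model)
        # Inner loop: adapt a copy of the model on the task's support set
        grads = torch.autograd.grad(loss_fn(learner, support), learner.parameters())
        with torch.no_grad():
            for p, g in zip(learner.parameters(), grads):
                p -= inner_lr * g
        # Outer loss: evaluate the adapted parameters on the query set
        q_grads = torch.autograd.grad(loss_fn(learner, query), learner.parameters())
        for mg, g in zip(meta_grads, q_grads):
            mg += g
    # Meta-update: apply the averaged query-set gradients to the initialization
    with torch.no_grad():
        for p, mg in zip(model.parameters(), meta_grads):
            p -= meta_lr * mg / len(tasks)
```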
Reptile
Reptile is a remarkably simple meta-learning optimization algorithm; like MAML, it relies on meta-optimization through gradient descent and is model-agnostic.
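A minimal sketch of one Reptile meta-update: adapt a copy of the model on a sampled task with ordinary SGD, then move the initialization a small step toward the adapted weights; `loss_fn` and `task` are assumed interfaces:

```python
import copy
import torch

def reptile_step(model, loss_fn, task, inner_steps=5, inner_lr=0.01, meta_lr=0.1):
    """One Reptile meta-update: theta <- theta + eps * (phi - theta)."""
    learner = copy.deepcopy(model)
    opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    for _ in range(inner_steps):         # plain SGD on the sampled task
        opt.zero_grad()
        loss_fn(learner, task).backward()
        opt.step()
    with torch.no_grad():                # interpolate toward the adapted weights
        for p, q in zip(model.parameters(), learner.parameters()):
            p += meta_lr * (q - p)
```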