Multiset Regression Neural Networks with Input Signals – We present an efficient approach for learning sparse vector representations from input signals. Unlike traditional sparse representations, which typically rely on a fixed set of labels, our approach requires no labels at all. We show that sparse vectors are flexible representations that allow networks of arbitrary size to be trained, with strong bounds on the true number of labels. We then illustrate that a neural network can accurately predict label accuracy by sampling a sparse vector from a large set of input signals. This study suggests a promising strategy for supervised learning architectures: a model trained in this way can predict the true labels with minimal hand-crafted labeling.
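As a rough illustration of the setting described above (learning sparse vector representations from unlabeled input signals), the minimal sketch below uses off-the-shelf sparse dictionary learning. The library call, signal dimensions, and hyperparameters are illustrative assumptions for this sketch, not the method proposed in the abstract.

# Minimal sketch (assumed setup, not the paper's method): learn sparse
# coefficient vectors for unlabeled signals via dictionary learning.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
signals = rng.standard_normal((500, 64))        # 500 unlabeled input signals, length 64 each

learner = MiniBatchDictionaryLearning(
    n_components=128,   # overcomplete dictionary, so each code is sparse
    alpha=1.0,          # sparsity penalty
    batch_size=32,
    random_state=0,
)
sparse_codes = learner.fit_transform(signals)   # shape (500, 128), mostly zeros
print("nonzero fraction:", float(np.mean(sparse_codes != 0)))

No labels enter at any point; the resulting sparse codes could then feed a downstream predictor, in the spirit of the strategy the abstract outlines.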
Kernel methods have proven effective across a wide range of tasks. In this paper, we present the first implementation of kernel methods for the task of learning to learn.
A Novel Approach for Recognizing Color Transformations from RGB Baseplates
Hierarchical Gaussian Process Models
Structural Matching through Reinforcement Learning
Learning to Learn by Transfer Learning: An Application to Learning Natural Language to Interactions