Learning to Learn, Meta Learning
This project should compete with state-of-the-art projects in the area. For example, the model augmentation via external memory in One-shot Learning with Memory-Augmented Neural Networks amounts to reading from and writing to a plain matrix memory. An obvious objective here is to use a smart memory instead of a plain one: when the memory is replaced by a DNN, the resulting model can compress and generalize information rather than retrieve it verbatim. Augmenting a model with a DNN as a smart external memory aims to overcome the memory's size limitation and to avoid the discrete lookup operations required for similarity matching. A further objective is that the smart memory model must train fast, using the techniques of Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (MAML); it would not be an effective resource if it became an added burden on the learning process.
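To make the fast-adaptation idea concrete, here is a minimal first-order MAML (FOMAML) sketch on toy 1-D linear regression tasks y = a * x, where each task has its own slope a. A scalar linear model with analytic gradients stands in for a DNN; the task distribution, learning rates, and step counts are illustrative assumptions, not values taken from the papers named above.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(a, n=10):
    """Sample n (x, y) pairs from the task with slope a."""
    x = rng.uniform(-1.0, 1.0, n)
    return x, a * x

def loss_grad(w, x, y):
    """Squared-error loss 0.5*mean((w*x - y)^2) and its gradient in w."""
    err = w * x - y
    return 0.5 * np.mean(err ** 2), np.mean(err * x)

w = 0.0                  # meta-parameter
alpha, beta = 0.5, 0.05  # inner (adaptation) / outer (meta) learning rates

for _ in range(2000):
    a = rng.uniform(-2.0, 2.0)                         # sample a task
    xs, ys = task_batch(a)                             # support set
    w_adapted = w - alpha * loss_grad(w, xs, ys)[1]    # inner adaptation step
    xq, yq = task_batch(a)                             # query set
    w -= beta * loss_grad(w_adapted, xq, yq)[1]        # first-order meta-update

# After meta-training, one inner step should lower the loss on an unseen task.
a_new = 1.5
xs, ys = task_batch(a_new)              # support set for the new task
xq, yq = task_batch(a_new)              # held-out query set
loss_before = loss_grad(w, xq, yq)[0]
w_new = w - alpha * loss_grad(w, xs, ys)[1]
loss_after = loss_grad(w_new, xq, yq)[0]
```

The first-order variant drops the second-derivative term of full MAML, which keeps the sketch short; the outer update simply applies the post-adaptation gradient to the meta-parameters.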
The system structure presented in Learning Unsupervised Learning Rules is described in great detail. The design is built from well-defined components, which makes replacing a component or adding a new one a straightforward task whose effort can be estimated. Moreover, although the system in Learning Unsupervised Learning Rules was intended for unsupervised learning, it opens the possibility of adapting it to other tasks such as reinforcement learning.
The target datasets that appear in the literature, ordered by the difficulty of the learning task, are MNIST, MNIST-1K, CelebA, Omniglot, CIFAR-10, glyph, CIFAR-100, and ImageNet. All of these are image-based datasets and are publicly accessible and downloadable.
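One-shot learning on these datasets is usually evaluated with N-way K-shot episodes. The following is a hypothetical sketch of such an episode sampler, assuming integer class labels and feature rows; the function name `sample_episode` and the toy stand-in data are mine, not from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(X, y, n_way=5, k_shot=1, k_query=5):
    """Draw an N-way K-shot episode: a support set of n_way*k_shot examples
    and a query set of n_way*k_query examples, with labels remapped to 0..n_way-1."""
    classes = rng.choice(np.unique(y), size=n_way, replace=False)
    sx, sy, qx, qy = [], [], [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.where(y == c)[0])
        sx.append(X[idx[:k_shot]])
        sy += [new_label] * k_shot
        qx.append(X[idx[k_shot:k_shot + k_query]])
        qy += [new_label] * k_query
    return (np.concatenate(sx), np.array(sy),
            np.concatenate(qx), np.array(qy))

# Toy stand-in data: 20 classes, 30 samples each, 8-dim features.
X = rng.normal(size=(600, 8))
y = np.repeat(np.arange(20), 30)
sx, sy, qx, qy = sample_episode(X, y)
```

With real data, `X` and `y` would come from one of the datasets above (e.g. flattened Omniglot characters); the sampler itself is dataset-agnostic.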
python, machine learning, statistics, data analysis, meta-learning, learning to learn, one-shot learning, transfer learning, categorization, gradient descent methods, learning rules