Recently, I came across an interesting article exploring potential directions for future machine learning research: "From Machine Learning to Machine Reasoning". The main theme of the article is a plausible definition of "reasoning": "algebraically manipulating previously acquired knowledge in order to answer a new question".
It is an interesting perspective, as it explains how representation learning, transfer learning and multi-task learning could help construct practical machine learning systems for computer vision and natural language processing. Below is the paper's example of training a face recognition system:
Figure 1 in the paper "From Machine Learning to Machine Reasoning"
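To make the idea of re-using acquired knowledge concrete, here is a minimal sketch (my own, not from the paper) of transfer learning in PyTorch: a previously trained feature extractor is frozen and only a new task-specific head is learned. The ImageNet-pretrained backbone and the 100-identity output size are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torchvision import models

# Previously acquired knowledge: a feature extractor trained on a large,
# generic dataset (ImageNet here, purely for illustration).
backbone = models.resnet18(pretrained=True)

# Freeze the acquired representation so it is re-used, not re-learned.
for p in backbone.parameters():
    p.requires_grad = False

# Attach a new head for the related task
# (e.g. classifying 100 face identities -- an assumed number).
backbone.fc = nn.Linear(backbone.fc.in_features, 100)

# Only the new head's parameters are updated during training.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```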
If we consider trained models (either for the underlying task or for other related tasks) as previously acquired knowledge, then the article effectively advocates constructing systems sequentially by re-using the representations (per sample or per category) already obtained. In this sense, the recent
Dark Knowledge and
FitNets work shares a similar spirit in the realm of neural networks.
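Dark Knowledge, for example, distills a trained teacher network's softened predictions into a smaller student network. A minimal sketch of such a distillation loss is below; the temperature T and mixing weight alpha are assumed hyper-parameters, not values taken from either paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: the teacher's predictions softened by temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    # KL divergence between the softened student and teacher distributions;
    # the T*T factor keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        soft_targets,
        reduction="batchmean",
    ) * (T * T)
    # Ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

The teacher's logits act as the "previously acquired knowledge" being re-used while the student is trained on the new (or same) task.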