Abstract: Meta-learning, or ‘learning to learn’, aims to acquire knowledge that generalizes to new tasks. Among meta-learning methods, gradient-based algorithms form a sub-class that excels at quick adaptation to new tasks from limited data, demonstrating an ability to acquire transferable knowledge that is central to human learning. However, existing meta-learning approaches rely only on the current task's information during adaptation and do not exploit meta-knowledge of how similar tasks were adapted before. To address this gap, we propose a ‘path-aware’ model-agnostic meta-learning approach. Our approach not only learns a good initialization (meta-parameters) for adaptation; it also learns an optimal way to adapt these parameters into task-specific parameters, with learnable update directions, learning rates and, most importantly, the way updates evolve over time-steps. Our approach is simple to implement and converges faster than competing methods. We report significant performance improvements on a number of few-shot classification and regression benchmarks.
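To make the idea of a learnable adaptation path concrete, the sketch below shows a MAML-style inner loop in which each adaptation step has its own learnable learning rate and elementwise update direction. This is only an illustrative sketch under assumed names (`adapt`, `alphas`, `directions`, `grad_fn` are hypothetical), not the paper's exact algorithm.

```python
import numpy as np

def adapt(theta, grad_fn, alphas, directions, n_steps=3):
    """Inner-loop adaptation with per-step learnable learning rates
    (alphas) and elementwise update directions (directions).

    Illustrative sketch only: in a full path-aware meta-learner,
    alphas and directions would themselves be meta-learned in the
    outer loop, alongside the initialization theta.
    """
    phi = theta.copy()  # task-specific parameters start at the meta-init
    for t in range(n_steps):
        g = grad_fn(phi)  # task loss gradient at the current parameters
        # Each time-step t uses its own learned step size and direction,
        # so the *path* of updates, not just the endpoint, is shaped.
        phi = phi - alphas[t] * directions[t] * g
    return phi

# Usage on a toy quadratic task: loss = ||phi - target||^2
target = np.array([1.0, -1.0])
grad_fn = lambda phi: 2.0 * (phi - target)
theta = np.zeros(2)
alphas = [0.25, 0.25, 0.25]            # hypothetical learned step sizes
directions = [np.ones(2)] * 3          # hypothetical learned directions
phi = adapt(theta, grad_fn, alphas, directions)
# phi has moved toward the target after three shaped updates
```

In a standard MAML inner loop the three steps would share one fixed scalar learning rate; making `alphas[t]` and `directions[t]` depend on the step index `t` is what lets the update trajectory itself be learned.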