Learning Spatial and Sparse Generative Models with an Application to Machine Reading Comprehension – Deep learning now defines the state of the art in many computer vision tasks, yet several applications, including knowledge discovery in image segmentation, remain open problems that existing deep learning technology does not solve. In this paper, we investigate two questions: (1) can deep learning and other architectures solve the problem of knowledge discovery in image segmentation, and (2) what type of architecture is suited to doing so? We propose a simple framework that is capable of addressing both questions, together with a deep learning architecture that improves image segmentation performance by exploiting learned priors. We evaluate the framework on a set of image segmentation tasks, where the proposed architecture achieves a significant improvement in efficiency over existing deep learning architectures.
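The abstract does not describe the architecture, so the following is only a minimal sketch of one way a segmentation network could exploit a learned spatial prior; the module layout, the prior parameterization, and the fusion-by-concatenation step are all assumptions, not the method proposed above.

```python
# Illustrative sketch only: a segmentation network conditioned on a learnable
# spatial prior. All design choices here are assumptions, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorGuidedSegNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=2, prior_channels=8):
        super().__init__()
        # Learnable spatial prior at a fixed base resolution, shared across the batch.
        self.prior = nn.Parameter(torch.zeros(1, prior_channels, 64, 64))
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # Decoder fuses image features with the resized learned prior.
        self.decoder = nn.Sequential(
            nn.Conv2d(32 + prior_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        prior = F.interpolate(self.prior, size=feats.shape[-2:],
                              mode="bilinear", align_corners=False)
        prior = prior.expand(x.size(0), -1, -1, -1)
        return self.decoder(torch.cat([feats, prior], dim=1))

# Example: per-pixel class logits for a batch of RGB images.
logits = PriorGuidedSegNet()(torch.randn(2, 3, 128, 128))  # shape (2, 2, 128, 128)
```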
One of the key challenges in multi-task learning is the lack of a generic structure that can identify temporal dependencies between tasks and learn both the dependencies and the interdependencies in a sequence of tasks. In this work we propose a novel framework for task-dependent multi-task learning. The framework provides an efficient and flexible way to learn dependencies between tasks, and we present an algorithm for learning task interdependencies that combines these dependencies to further improve multi-task performance. The proposed framework is evaluated on synthetic data and on a real-world dataset with multiple-task dependencies. Experiments on real and synthetic data show that our framework achieves performance competitive with state-of-the-art multi-task learning methods.
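The abstract does not specify how task dependencies are represented, so the sketch below shows one common formulation under assumed choices: per-task heads coupled by a learnable task-dependency matrix used as a regularizer. The class name, loss form, and coupling penalty are illustrative assumptions, not the framework described above.

```python
# Illustrative sketch only: per-task heads with a learnable dependency matrix
# that couples related tasks. The parameterization is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoupledMultiTask(nn.Module):
    def __init__(self, in_dim, num_tasks):
        super().__init__()
        self.heads = nn.Parameter(torch.randn(num_tasks, in_dim) * 0.01)
        # Learnable pairwise dependency scores between tasks.
        self.dep = nn.Parameter(torch.zeros(num_tasks, num_tasks))

    def forward(self, x, task_ids):
        # Each example is scored by the head of its own task.
        return (x * self.heads[task_ids]).sum(dim=1)

    def dependency_penalty(self):
        # Tasks with large dependency weight are pushed toward similar heads.
        w = torch.softmax(self.dep + self.dep.t(), dim=1)
        diff = self.heads.unsqueeze(0) - self.heads.unsqueeze(1)  # (T, T, D)
        return (w * diff.pow(2).sum(dim=-1)).mean()

# Example training step on random data with three tasks.
model = CoupledMultiTask(in_dim=16, num_tasks=3)
x, task_ids, y = torch.randn(8, 16), torch.randint(0, 3, (8,)), torch.randn(8)
loss = F.mse_loss(model(x, task_ids), y) + 0.1 * model.dependency_penalty()
loss.backward()
```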
Lasso-Invariant Discrete Energy Minimization
Deep Learning for Precise Spatio-temporal Game Analysis
Learning Spatial and Sparse Generative Models with an Application to Machine Reading Comprehension
Learning for Multi-Label Speech Recognition using Gaussian Processes
On the Effectiveness of Spatiotemporal Support Vector Machines in Saliency Detection