Lasso-Invariant Discrete Energy Minimization – We propose a new method for estimating the mean (norm) of a matrix $g$ from a small number of observations in order to solve a particular optimization problem. We show that the norm itself can be represented as a linear nonnegative matrix, yielding higher accuracy than the standard Euclidean norm. This offers several advantages over existing methods. First, the norm is computed with respect to the underlying matrix $g$, which may be noisy, nonlinear, or even sparse. Second, the norm is computed in a principled way and does not suffer the usual loss from estimation noise, making inference more accurate. The approach also applies to large non-Gaussian distributions, where the expected mean of an unknown quantity can be significantly smaller than its true value, which is useful for general machine-learned regression tasks. Our results confirm that the normalized norm is not highly sensitive to the true mean and is unaffected by additional estimation noise.
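The abstract does not spell out the estimator, but the core claim, recovering a matrix norm from a small number of observed entries, can be illustrated with a minimal sketch. Everything below (the random matrix `g`, the sampling fraction, the rescaling of the sampled sum of squares) is an assumption for illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical underlying matrix g (50x50 = 2500 entries).
g = rng.normal(size=(50, 50))

# Reference value: the full Frobenius (Euclidean) norm.
full_norm = np.linalg.norm(g)

# Estimate the norm from a small random sample of entries by
# rescaling the sampled sum of squares by the inverse sampling fraction.
n_obs = 250  # a tenth of the entries
idx = rng.choice(g.size, size=n_obs, replace=False)
sample = g.flat[idx]
est_norm = np.sqrt(np.sum(sample**2) * g.size / n_obs)

print(full_norm, est_norm)
```

For i.i.d. entries the rescaled sum of squares is an unbiased estimate of the squared norm, so the estimate concentrates around the full norm as the sample grows.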

In this paper we propose, for the first time, a general algorithm for multi-source labeling systems and label extraction through a network of nodes that simultaneously learns features from a few classes and then aggregates them. Such learning is very challenging, and there are few real-world applications in which labeled data is available for the labels of interest. A natural approach is to use a priori label annotations to improve predictive performance. We address the problem by training the network on several labeled data sets and extracting a novel, natural model. We show that the network can capture label and label-pair correlations from a large number of unlabeled data sets, achieving state-of-the-art results.
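The abstract leaves the aggregation step unspecified; as a point of reference, the multi-source labeling setting is often illustrated with a simple majority-vote baseline. The function and data below are hypothetical examples, not the network described above:

```python
from collections import Counter

def aggregate_labels(annotations):
    """Majority-vote aggregation of labels from multiple sources.

    annotations: list of per-source label lists, all the same length.
    Returns one aggregated label per item.
    """
    n_items = len(annotations[0])
    aggregated = []
    for i in range(n_items):
        # Count how many sources assigned each label to item i.
        votes = Counter(source[i] for source in annotations)
        aggregated.append(votes.most_common(1)[0][0])
    return aggregated

# Three hypothetical labeling sources voting on four items.
sources = [
    ["cat", "dog", "dog", "bird"],
    ["cat", "dog", "cat", "bird"],
    ["dog", "dog", "cat", "bird"],
]
print(aggregate_labels(sources))  # → ['cat', 'dog', 'cat', 'bird']
```

A learned aggregator, like the network the abstract describes, would replace the per-item vote count with a model that also exploits correlations between labels.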

Deep Learning for Precise Spatio-temporal Game Analysis

Learning for Multi-Label Speech Recognition using Gaussian Processes


Tensorizing the Loss Weight for Accurate Multi-label Speech Recognition

A Novel Unsupervised Learning Approach for Multiple Attractor Learning on Graphs