Axiomatic gradient for gradient-free non-convex models with an application to graph classification – We present a new class of combinatorial machine learning methods that performs optimization in the presence of nonconvex objectives. We prove that such algorithms can recover the optimal solution of a nonconvex optimization problem by solving a related combinatorial problem with a stationary constant. We also show that this combinatorial problem can be solved efficiently. We apply the result to nonconvex optimization for graph classification, and give an example application to nonconvex decision-making in a dynamic environment.
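The abstract does not specify the algorithm, so the following is only an illustrative sketch of the general idea of a gradient surrogate for a gradient-free setting: a two-point finite-difference estimate stands in for the unavailable gradient, and plain descent on that estimate finds a minimizer of a nonconvex objective. All names (`zeroth_order_grad`, `descend`) and the test function are assumptions for illustration, not the paper's method.

```python
def zeroth_order_grad(f, x, eps=1e-4):
    """Two-point finite-difference estimate of the gradient of f at x.
    Stand-in 'axiomatic gradient' when no analytic gradient exists."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

def descend(f, x0, lr=0.1, steps=200):
    """Gradient descent driven entirely by the zeroth-order estimate."""
    x = list(x0)
    for _ in range(steps):
        g = zeroth_order_grad(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Nonconvex test objective: f(x) = x^4 - 3x^2 + 1, minima at x = ±sqrt(1.5).
f = lambda v: v[0] ** 4 - 3 * v[0] ** 2 + 1
x_star = descend(f, [0.5])  # converges to the nearby minimum at +sqrt(1.5)
```

Starting at 0.5, the iterates settle at the local minimum near 1.2247; a different start would reach the symmetric minimum at −1.2247, which is the usual caveat of local methods on nonconvex objectives.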

We present a framework for ranking objects (in particular, ranking items drawn from an item distribution) over a given distribution, using the same structure as the underlying latent tree. To handle ranking over multiple distributions, we propose two new constraints: (1) the ordering of the objects may be nonlinear in both the distribution and the distribution of the items; and (2) the ordering of the items may be arbitrary. We give a rigorous upper bound on the expected reward of the ranking task by computing the expected reward of the tree-ordering constraint in terms of the posterior distribution. Using sparse learning, the posterior distribution can be computed efficiently. Experiments on a large-scale evaluation dataset demonstrate the superiority of the proposed ranking constraint over sparse-learning baselines in the classification of large distributions.
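The abstract leaves the expected-reward computation unspecified; as a minimal sketch of the generic idea, the snippet below scores an ordering by its posterior expected reward (posterior represented by Monte Carlo samples of item relevances, with decreasing position weights) and ranks items by posterior mean, which maximizes this additive reward. The names (`expected_reward`, `rank_by_posterior_mean`) and the toy data are assumptions for illustration, not the paper's constraint.

```python
def expected_reward(ordering, samples, weights):
    """Mean over posterior samples of sum_k weights[k] * relevance[item at rank k]."""
    total = 0.0
    for s in samples:
        total += sum(w * s[i] for w, i in zip(weights, ordering))
    return total / len(samples)

def rank_by_posterior_mean(samples, weights):
    """For position-wise additive rewards with decreasing weights, sorting by
    posterior mean relevance maximizes the expected reward."""
    n = len(samples[0])
    means = [sum(s[i] for s in samples) / len(samples) for i in range(n)]
    return sorted(range(n), key=lambda i: -means[i])

# Toy posterior: three samples of relevance for three items.
samples = [[0.9, 0.2, 0.5], [0.8, 0.1, 0.6], [1.0, 0.3, 0.4]]
weights = [1.0, 0.5, 0.25]  # reward discounts lower ranks
order = rank_by_posterior_mean(samples, weights)
```

The design point is that the ranking never needs the full posterior, only per-item posterior means, which is what makes a sparse posterior representation computationally attractive.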

Learning 3D Object Proposals from Semantic Labels with Deep Convolutional Neural Networks

Learning from Learned Examples: Using Knowledge Sensitivity to Improve Nonlinear Kernel Learning

Uncertainty Decomposition using Multi-objective Model Estimation

A Generalized Tabulated Latent Graphical Model for Modeling Item Recommendation