Learning with Generalized Negative Dependence: Probabilistic Models of Diversity for Machine Learning

Zelda Mariet

PhD thesis [Defense slides]

This thesis establishes negative dependence as a powerful and computationally efficient framework to analyze machine learning problems that require a theoretical model of diversification. Examples of such problems include experimental design and model compression: subset-selection problems that require carefully balancing the quality of each selected element with the diversity of the subset as a whole. Negative dependence, which models the behavior of “repelling” random variables, provides a rich mathematical framework for the analysis of such problems.

The first part of this thesis develops scalable sampling and learning algorithms for determinantal point processes (DPPs), popular negatively dependent measures with many applications to machine learning. For scalable sampling, we introduce a theoretically-motivated generative deep neural network for DPP-like samples over arbitrary ground sets. To address the learning problem, we show that algorithms for maximum likelihood estimation (MLE) for DPPs are drastically sped up with Kronecker kernels, and that MLE can be further enriched by negative samples.

The second part of this thesis leverages negative dependence for core problems in machine learning. We begin by deriving a generalized form of volume sampling (GVS) based on elementary symmetric polynomials, and prove that the induced measures exhibit strong negative dependence properties. We then show that classical forms of optimal experimental design can be cast as optimization problems based on GVS, for which we derive randomized and greedy algorithms to obtain the associated designs. Finally, we introduce exponentiated strongly Rayleigh measures, which allow for simple tuning of the strength of repulsive forces between similar items while still enjoying fast sampling algorithms. The great flexibility of exponentiated strongly Rayleigh measures makes them an ideal tool for machine learning problems that benefit from negative dependence theory.

@phdthesis{mariet-19d,
  author = {Zelda Mariet},
  school = {Massachusetts Institute of Technology},
  title = {{Learning with Generalized Negative Dependence: Probabilistic Models of Diversity for Machine Learning}},
  year = {2019}
}

DppNet: Approximating Determinantal Point Processes with Deep Networks

Zelda Mariet, Yaniv Ovadia, Jasper Snoek
preprint [arXiv]

Determinantal Point Processes (DPPs) provide an elegant and versatile way to sample sets of items that balance the point-wise quality with the set-wise diversity of selected items. For this reason, they have gained prominence in many machine learning applications that rely on subset selection. However, sampling from a DPP over a ground set of size N is a costly operation, requiring in general an O(N^3) preprocessing cost and an O(Nk^3) sampling cost for subsets of size k. We approach this problem by introducing DppNets: generative deep models that produce DPP-like samples for arbitrary ground sets. We develop an inhibitive attention mechanism based on transformer networks that captures a notion of dissimilarity between feature vectors. We show theoretically that such an approximation is sensible as it maintains the guarantees of inhibition or dissimilarity that make DPPs so powerful and unique. Empirically, we demonstrate that samples from our model receive high likelihood under the more expensive DPP alternative.
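
For context on the evaluation mentioned in the last sentence: the log-likelihood of a generated subset under the exact DPP can be computed directly from the kernel via the identity P_L(S) = det(L_S) / det(L + I). A minimal MATLAB sketch (the helper names are illustrative, not from the paper):

function ll = dpp_loglik(L, S)
% Log-probability of the index set S under a DPP with kernel L:
%   log P_L(S) = log det(L(S,S)) - log det(L + I).
    N = size(L, 1);
    ll = logdet(L(S, S)) - logdet(L + eye(N));

function ld = logdet(A)
% Numerically stable log-determinant of a positive definite matrix.
    ld = 2 * sum(log(diag(chol(A))));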

@misc{mariet-19c,
  author = {Zelda Mariet and Yaniv Ovadia and Jasper Snoek},
  title = {{DppNet: Approximating Determinantal Point Processes with Deep Networks}},
  year = {2019},
  eprint = {arXiv:1901.02051},
}

A Tree-Based Method for Fast Repeated Sampling of Determinantal Point Processes

Jennifer Gillenwater, Alex Kulesza, Zelda Mariet, Sergei Vassilvitskii
ICML 2019 [PDF]
It is often desirable in recommender systems and other information retrieval applications to provide diverse results, and determinantal point processes (DPPs) have become a popular way to capture the trade-off between the quality of individual results and the diversity of the overall set. However, sampling from a DPP is inherently expensive: if the underlying collection contains N items, then generating each DPP sample requires time linear in N following a one-time preprocessing phase. Additionally, results often need to be personalized to a user, but standard approaches to personalization invalidate the preprocessing, making personalized samples especially expensive. In this work we address both of these shortcomings. First, we propose a new algorithm for generating DPP samples in time logarithmic in N, following a slightly more expensive preprocessing phase. We then extend the algorithm to support arbitrary query-time feature weights, allowing us to generate samples customized to individual users while still retaining logarithmic runtime; experiments show our approach runs over 300 times faster than traditional DPP sampling on collections of 100,000 items for samples of size 10.
@InProceedings{gillenwater19,
  title = 	 {A Tree-Based Method for Fast Repeated Sampling of Determinantal Point Processes},
  author = 	 {Gillenwater, Jennifer and Kulesza, Alex and Mariet, Zelda and Vassilvitskii, Sergei},
  booktitle = 	 {Proceedings of the 36th International Conference on Machine Learning},
  pages = 	 {2260--2268},
  year = 	 {2019},
  editor = 	 {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = 	 {97},
  series = 	 {Proceedings of Machine Learning Research},
  address = 	 {Long Beach, California, USA},
  month = 	 {09--15 Jun},
  publisher = 	 {PMLR},
  pdf = 	 {http://proceedings.mlr.press/v97/gillenwater19a/gillenwater19a.pdf},
  url = 	 {http://proceedings.mlr.press/v97/gillenwater19a.html},
}

Foundations of Sequence-to-Sequence Modeling for Time Series

Zelda Mariet, Vitaly Kuznetsov
AISTATS 2019 [arXiv]

The availability of large amounts of time series data, paired with the performance of deep-learning algorithms on a broad class of problems, has recently led to significant interest in the use of sequence-to-sequence models for time series forecasting. We provide the first theoretical analysis of this time series forecasting framework. We include a comparison of sequence-to-sequence modeling to classical time series models, and as such our theory can serve as a quantitative guide for practitioners choosing between different modeling methodologies.

@inproceedings{mariet-19b,
  Author = {Zelda Mariet and Vitaly Kuznetsov},
  Title = {Foundations of Sequence-to-Sequence Modeling for Time Series},
  Year = {2019},
  Booktitle = {International Conference on Artificial Intelligence and Statistics ({AISTATS})}
}

Learning Determinantal Point Processes by Corrective Negative Sampling

Zelda Mariet, Mike Gartrell, Suvrit Sra
AISTATS 2019, NIPS’18 Workshop on Relational Representation Learning [arXiv] [workshop paper]

Determinantal Point Processes (DPPs) have attracted significant interest from the machine-learning community due to their ability to elegantly and tractably model the delicate balance between quality and diversity of sets. We consider learning DPPs from data, a key task for DPPs; for this task, we introduce a novel optimization problem, Contrastive Estimation (CE), which encodes information about “negative” samples into the basic learning model. CE is grounded in the successful use of negative information in machine-vision and language modeling. Depending on the chosen negative distribution (which may be static or evolve during optimization), CE assumes two different forms, which we analyze theoretically and experimentally. We evaluate our new model on real-world datasets; on a challenging dataset, CE learning delivers a considerable improvement in predictive performance over a DPP learned without using contrastive information.
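
The MATLAB sketch below shows one plausible way to instantiate a contrastive objective of this kind; it is an illustration under assumptions of ours (fixed negative sets and a weight lambda on the negative term), not necessarily the exact formulation used in the paper.

function obj = contrastive_objective(L, pos_sets, neg_sets, lambda)
% pos_sets, neg_sets: cell arrays of observed and "negative" index sets.
% Reward the DPP log-likelihood of observed sets and penalize that of the
% negative sets, with lambda controlling the strength of the penalty.
% (For large N, replace log(det(.)) with a Cholesky-based log-determinant.)
    logZ = log(det(L + eye(size(L, 1))));       % DPP normalizer
    obj = 0;
    for j = 1:length(pos_sets)
        Y = pos_sets{j};
        obj = obj + log(det(L(Y, Y))) - logZ;
    end
    for j = 1:length(neg_sets)
        Y = neg_sets{j};
        obj = obj - lambda * (log(det(L(Y, Y))) - logZ);
    end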

@inproceedings{Mariet19,
  author = {Zelda Mariet and Mike Gartrell and Suvrit Sra},
  title = {Learning Determinantal Point Processes by Corrective Negative Sampling},
  year = {2019},
  booktitle = {International Conference on Artificial Intelligence and Statistics ({AISTATS})}
}

Exponentiated Strongly Rayleigh Distributions

Zelda Mariet, Suvrit Sra, Stefanie Jegelka
Advances in Neural Information Processing Systems 31 (NeurIPS 2018) [PDF]

Strongly Rayleigh (SR) measures are discrete probability distributions over the subsets of a ground set. They enjoy strong negative dependence properties, as a result of which they assign higher probability to subsets of diverse elements. We introduce in this paper Exponentiated Strongly Rayleigh (ESR) measures, which sharpen (or smoothen) the negative dependence property of SR measures via a single parameter (the exponent) that can intuitively be understood as an inverse temperature. We develop efficient MCMC procedures for approximate sampling from ESRs, and obtain explicit mixing time bounds for two concrete instances: exponentiated versions of Determinantal Point Processes and Dual Volume Sampling. We illustrate some of the potential of ESRs by applying them to a few machine learning tasks; empirical results confirm that beyond their theoretical appeal, ESR-based models hold significant promise for these tasks.
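
As a concrete instance, exponentiating a DPP raises its unnormalized probabilities to a power p, i.e. P(S) is proportional to det(L_S)^p, with p > 1 sharpening and p < 1 smoothing the repulsion. The MATLAB sketch below samples approximately from such an exponentiated DPP with a simple add/delete Metropolis chain; it is a minimal illustration, not the exact chains analyzed in the paper.

function S = edpp_mcmc(L, p, n_steps)
% Approximate sample from P(S) proportional to det(L(S,S))^p via a
% Metropolis chain that proposes adding or deleting one uniformly chosen item.
    N = size(L, 1);
    in_S = rand(N, 1) < 0.5;                    % random initial subset, as a logical mask
    for t = 1:n_steps
        prop = in_S;
        i = randi(N);
        prop(i) = ~prop(i);                     % propose flipping item i
        ratio = (subdet(L, prop) / subdet(L, in_S))^p;
        if rand() < min(1, ratio)
            in_S = prop;
        end
    end
    S = find(in_S);

function d = subdet(L, mask)
% Determinant of the kernel restricted to the masked items (1 for the empty set).
    if ~any(mask), d = 1; else, d = det(L(mask, mask)); end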

@incollection{Mariet18b,
  title = {Exponentiated Strongly Rayleigh Distributions},
  author = {Mariet, Zelda E. and Sra, Suvrit and Jegelka, Stefanie},
  booktitle = {Advances in Neural Information Processing Systems 31},
  editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
  pages = {4464--4474},
  year = {2018},
  publisher = {Curran Associates, Inc.},
  url = {http://papers.nips.cc/paper/7698-exponentiated-strongly-rayleigh-distributions.pdf}
}

Maximizing Induced Cardinality Under a Determinantal Point Process

Jennifer Gillenwater, Alex Kulesza, Zelda Mariet, Sergei Vassilvitskii
Advances in Neural Information Processing Systems 31 (NeurIPS 2018) [PDF]

Determinantal point processes (DPPs) are well-suited to recommender systems where the goal is to generate collections of diverse, high-quality items. In the existing literature this is usually formulated as finding the mode of the DPP (the so-called MAP set). However, the MAP objective inherently assumes that the DPP models “optimal” recommendation sets, and yet obtaining such a DPP is nontrivial when there is no ready source of example optimal sets. In this paper we advocate an alternative framework for applying DPPs to recommender systems. Our approach assumes that the DPP simply models user engagements with recommended items, which is more consistent with how DPPs for recommender systems are typically trained. With this assumption, we are able to formulate a metric that measures the expected number of items that a user will engage with. We formalize the optimization of this metric as the Maximum Induced Cardinality (MIC) problem. Although the MIC objective is not submodular, we show that it can be approximated by a submodular function, and that empirically it is well-optimized by a greedy algorithm.
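
Concretely, if engagement with a shown set S is modeled by the DPP restricted to S, the induced cardinality is the expected size of a draw from that restricted DPP, f(S) = tr(L_S (L_S + I)^{-1}). The MATLAB sketch below evaluates this quantity and greedily builds a size-k set; it reflects one reading of the objective and the greedy heuristic mentioned above, not the paper's exact implementation.

function f = induced_cardinality(L, S)
% Expected number of engaged items when the user is shown the set S and
% engagement follows the DPP restricted to S: f(S) = tr(L_S (L_S + I)^{-1}).
    Ls = L(S, S);
    f = trace(Ls / (Ls + eye(length(S))));

function S = greedy_mic(L, k)
% Greedily add the item with the largest gain in induced cardinality.
    N = size(L, 1);
    S = [];
    for t = 1:k
        gains = -inf(N, 1);
        for i = setdiff(1:N, S)
            gains(i) = induced_cardinality(L, [S, i]);
        end
        [~, best] = max(gains);
        S = [S, best];
    end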

@incollection{Gillenwater18,
  title = {Maximizing Induced Cardinality Under a Determinantal Point Process},
  author = {Gillenwater, Jennifer A and Kulesza, Alex and Mariet, Zelda and Vassilvitskii, Sergei},
  booktitle = {Advances in Neural Information Processing Systems 31},
  editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
  pages = {6911--6920},
  year = {2018},
  publisher = {Curran Associates, Inc.},
  url = {http://papers.nips.cc/paper/7923-maximizing-induced-cardinality-under-a-determinantal-point-process.pdf}
}

Elementary Symmetric Polynomials for Optimal Experimental Design

Zelda Mariet, Suvrit Sra
Advances in Neural Information Processing Systems 30 (NIPS 2017) [PDF] [arXiv]

We revisit the classical problem of optimal experimental design (OED) under a new mathematical model grounded in a geometric motivation. Specifically, we introduce models based on elementary symmetric polynomials; these polynomials capture “partial volumes” and offer a graded interpolation between the widely used A-optimal and D-optimal design models, obtaining each of them as special cases. We analyze properties of our models, and derive both greedy and convex-relaxation algorithms for computing the associated designs. Our analysis establishes approximation guarantees on these algorithms, while our empirical results substantiate our claims and demonstrate a curious phenomenon concerning our greedy algorithm. Finally, as a byproduct, we obtain new results on the theory of elementary symmetric polynomials that may be of independent interest.
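
To make the interpolation concrete: with M_S = sum_{i in S} x_i x_i', taking e_ell of the eigenvalues of M_S^{-1} recovers the A-optimal criterion at ell = 1 (trace of the inverse) and the D-optimal criterion at ell = d (reciprocal of the determinant). The MATLAB sketch below evaluates this criterion and runs a naive greedy selection; it only illustrates the objective (with a small ridge so early iterates are well-defined), not the greedy or convex-relaxation algorithms analyzed in the paper.

function val = esp_design_objective(X, S, ell)
% X: d x n matrix whose columns are the candidate experiment vectors.
% Value of the ESP criterion e_ell(eig(M_S^{-1})) with M_S = X(:,S) * X(:,S)'.
% The small ridge keeps the inverse defined for illustration purposes.
    d = size(X, 1);
    M = X(:, S) * X(:, S)' + 1e-8 * eye(d);
    c = poly(eig(inv(M)));          % char. poly coefficients: c(k+1) = (-1)^k * e_k
    val = abs(c(ell + 1));

function S = greedy_esp_design(X, k, ell)
% Naive greedy: repeatedly add the experiment that most decreases the criterion.
    n = size(X, 2);
    S = [];
    for t = 1:k
        scores = inf(n, 1);
        for i = setdiff(1:n, S)
            scores(i) = esp_design_objective(X, [S, i], ell);
        end
        [~, best] = min(scores);
        S = [S, best];
    end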

@incollection{MarietSra17,
  title = {Elementary Symmetric Polynomials for Optimal Experimental Design},
  author = {Mariet, Zelda E. and Sra, Suvrit},
  booktitle = {Advances in Neural Information Processing Systems 30},
  editor = {I. Guyon and U. V. Luxburg and S. Bengio and H. Wallach and R. Fergus and S. Vishwanathan and R. Garnett},
  pages = {2136--2145},
  year = {2017},
  publisher = {Curran Associates, Inc.},
  url = {http://papers.nips.cc/paper/6809-elementary-symmetric-polynomials-for-optimal-experimental-design.pdf}
}

Kronecker Determinantal Point Processes

Zelda Mariet, Suvrit Sra
Advances in Neural Information Processing Systems 29 (NIPS 2016) [PDF] [arXiv]

Determinantal Point Processes (DPPs) are probabilistic models over all subsets of a ground set of N items. They have recently gained prominence in several applications that rely on diverse subsets. However, their applicability to large problems is still limited due to the O(N^3) complexity of core tasks such as sampling and learning. We enable efficient sampling and learning for DPPs by introducing KronDPP, a DPP model whose kernel matrix decomposes as a tensor product of multiple smaller kernel matrices. This decomposition immediately enables fast exact sampling. But contrary to what one may expect, leveraging the Kronecker product structure for speeding up DPP learning turns out to be more difficult. We overcome this challenge, and derive batch and stochastic optimization algorithms for efficiently learning the parameters of a KronDPP.
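
To see why the decomposition helps for sampling: exact DPP sampling starts from an eigendecomposition of L, and for L = L1 ⊗ L2 that decomposition factorizes into the much smaller decompositions of the factors. A short MATLAB illustration:

N1 = 100; N2 = 100;                       % ground set size N = N1 * N2 = 10,000
A1 = randn(N1); L1 = (A1 * A1') / N1;     % two small positive semidefinite factors
A2 = randn(N2); L2 = (A2 * A2') / N2;

% Eigendecomposition of L = kron(L1, L2) without ever forming the N x N matrix:
[V1, D1] = eig(L1);                       % O(N1^3)
[V2, D2] = eig(L2);                       % O(N2^3)
eigvals = kron(diag(D1), diag(D2));       % the N eigenvalues of L
% The eigenvector paired with eigvals((i-1)*N2 + j) is kron(V1(:, i), V2(:, j)),
% so the O(N^3) preprocessing step collapses to two small eigendecompositions.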

@incollection{mariet16b,
  title     = {Kronecker Determinantal Point Processes},
  author    = {Zelda Mariet and Suvrit Sra},
  booktitle = {Advances in Neural Information Processing Systems 29},
  editor    = {D. D. Lee and M. Sugiyama and U. V. Luxburg and I. Guyon and R. Garnett},
  pages     = {2694--2702},
  year      = {2016},
  publisher = {Curran Associates, Inc.},
  url       = {http://papers.nips.cc/paper/6296-kronecker-determinantal-point-processes.pdf}
}

Learning and enforcing diversity with Determinantal Point Processes

Zelda Mariet
Master’s thesis [PDF]

As machine-learning techniques continue to require more data and become increasingly memory-heavy, being able to choose a subset of relevant, high-quality and diverse elements among large amounts of redundant or noisy data and parameters has become an important concern. Here, we approach this problem using Determinantal Point Processes (DPPs), probabilistic models that provide an intuitive and powerful way of balancing quality and diversity in sets of items. We introduce a novel, fixed-point algorithm for estimating the maximum likelihood parameters of a DPP, provide a proof of convergence, and discuss generalizations of this technique. We then apply DPPs to the difficult problem of detecting and eliminating redundancy in fully-connected layers of neural networks. By placing a DPP over a layer, we are able to sample a subset of neurons that perform non-overlapping computations and merge all other neurons of the layer into the previously selected diverse subset. This allows us to significantly reduce the size of the neural network while maintaining good performance.

@mastersthesis{mariet2016dpp,
  title = 	{Learning and enforcing diversity with Determinantal Point Processes},
  author = 	{Zelda Mariet},
  school = 	{Massachusetts Institute of Technology},
  year = 	{2016},
}

Diversity Networks: Neural Network Compression Using Determinantal Point Processes

Zelda Mariet, Suvrit Sra
International Conference on Learning Representations (ICLR 2016) [PDF] [arXiv]

We introduce Divnet, a flexible technique for learning networks with diverse neurons. Divnet models neuronal diversity by placing a Determinantal Point Process (DPP) over neurons in a given layer. It uses this DPP to select a subset of diverse neurons and subsequently fuses the redundant neurons into the selected ones. Compared with previous approaches, Divnet offers a more principled, flexible technique for capturing neuronal diversity and thus implicitly enforcing regularization. This enables effective auto-tuning of network architecture and leads to smaller network sizes without hurting performance. Moreover, through its focus on diversity and neuron fusing, Divnet remains compatible with other procedures that seek to reduce memory footprints of networks. We present experimental results to corroborate our claims: for pruning neural networks, Divnet is seen to be notably superior to competing approaches.
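
The MATLAB sketch below illustrates the overall recipe under simplifications of ours (the kernel is the Gram matrix of activations, the diverse subset is a stand-in for an actual DPP sample, and fusion is done by least-squares reconstruction of the pruned neurons' activations); it is meant to convey the idea rather than reproduce the paper's exact procedure.

T = 512; n = 100; m = 50; k = 60;           % batch size, layer width, next-layer width, kept neurons
A = randn(T, n);                            % activations of the n neurons over T inputs
W_next = randn(m, n);                       % outgoing weights of the layer

L = A' * A;                                 % DPP kernel over neurons (Gram of activations)
keep = randperm(n, k);                      % stand-in for a diverse DPP sample from L
pruned = setdiff(1:n, keep);

% Fuse: express each pruned neuron's activations as a combination of kept ones,
% then fold those coefficients into the next layer's weights.
C = A(:, keep) \ A(:, pruned);              % least-squares coefficients, |keep| x |pruned|
W_small = W_next(:, keep) + W_next(:, pruned) * C';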

@InProceedings{mariet16,
  author    = {Zelda Mariet and Suvrit Sra},
  title     = {Diversity Networks: Neural Network Compression Using Determinantal Point Processes},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2016}
}

Outlier Detection in Heterogeneous Datasets using Automatic Tuple Expansion

Clément Pit–Claudel, Zelda Mariet, Rachael Harding, Sam Madden
Tech report (2016) [PDF]

Rapidly developing areas of information technology are generating massive amounts of data. Human errors, sensor failures, and other unforeseen circumstances unfortunately tend to undermine the quality and consistency of these datasets by introducing outliers – data points that exhibit surprising behavior when compared to the rest of the data. Characterizing, locating, and in some cases eliminating these outliers offers interesting insight about the data under scrutiny and reinforces the confidence that one may have in conclusions drawn from otherwise noisy datasets. In this paper, we describe a tuple expansion procedure which reconstructs rich information from semantically poor SQL data types such as strings, integers, and floating point numbers. We then use this procedure as the foundation of a new user-guided outlier detection framework, dBoost, which relies on inference and statistical modeling of heterogeneous data to flag suspicious fields in database tuples. We show that this novel approach achieves good classification performance, both in traditional numerical datasets and in highly non-numerical contexts such as mostly textual datasets. Our implementation is publicly available, under version 3 of the GNU General Public License.
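
As a toy illustration of tuple expansion (with feature choices that are not dBoost's actual expansion rules), a single semantically poor value can be blown up into several correlated sub-fields that simple per-field models can then score:

function feats = expand_value(x)
% Toy tuple expansion: numbers become (value, fractional part, sign, digit count);
% date-like strings become (year, month, day, weekday).
    if isnumeric(x)
        feats = [x, abs(x - round(x)), sign(x), numel(num2str(abs(round(x))))];
    else
        d = datevec(x);                     % e.g. '2016-03-14' -> [2016 3 14 0 0 0]
        feats = [d(1:3), weekday(datenum(x))];
    end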

@article{dBoost,
  title  = {Outlier Detection in Heterogeneous Datasets using Automatic Tuple Expansion},
  author = {Pit--Claudel, Clément and Mariet, Zelda and Harding, Rachael and Madden, Sam},
  note   = {Technical report},
  year   = {2016}
}

Fixed-point algorithms for learning determinantal point processes

Zelda Mariet, Suvrit Sra
Proceedings of the 32nd International Conference on Machine Learning (ICML 2015) [PDF] [arXiv]

Determinantal point processes (DPPs) offer an elegant tool for encoding probabilities over subsets of a ground set. Discrete DPPs are parametrized by a positive semidefinite matrix (called the DPP kernel), and estimating this kernel is key to learning DPPs from observed data. We consider the task of learning the DPP kernel, and develop for it a surprisingly simple yet effective new algorithm. Our algorithm offers the following benefits over previous approaches: (a) it is much simpler; (b) it yields equally good and sometimes even better local maxima; and (c) it runs an order of magnitude faster on large problems. We present experimental results on both real and simulated data to illustrate the numerical performance of our technique.

@InProceedings{mariet15,
  title = {Fixed-point algorithms for learning determinantal point processes},
  author = {Zelda Mariet and Suvrit Sra},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages = {2389--2397},
  year = {2015},
  editor = {Francis Bach and David Blei},
  volume = {37},
  series = {Proceedings of Machine Learning Research},
  address = {Lille, France},
  month = {07--09 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v37/mariet15.pdf},
  url = {http://proceedings.mlr.press/v37/mariet15.html},
}
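
For reference, the MATLAB snippet below implements one step of this fixed-point iteration: L is the current kernel estimate, a a step size, and sets a struct describing the observed training subsets and their empirical frequencies.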

function res = picard(L, sets, a)
% One step of the fixed-point (Picard) iteration for DPP kernel learning:
%   L <- L + a * L * (Delta - (I + L)^{-1}) * L,
% where Delta is the weighted sum of inverses of the kernel restricted to each
% observed training subset, and a is a step size (a = 1 gives the basic iteration).
    N = size(L, 1);
    I = eye(N, N);
    res = a * L * (pseudoinv(L, sets) - inv(I + L)) * L + L;

function theta = pseudoinv(L, T)
% Builds the Delta matrix from the training data:
%   T.Ys         is the training set (a cell array of observed subsets)
%   T.Y_fracs(j) is how often the training sample T.Ys{j} appears in the training set
    theta = zeros(size(L, 1));
    for j = 1:length(T.Ys)
        set = T.Ys{j};
        theta(set, set) = theta(set, set) + T.Y_fracs(j) * inv(L(set, set));
    end