On two ways to use determinantal point processes for Monte Carlo integration
2026-04-21 • Machine Learning
AI summary
The authors look at ways to improve how we estimate integrals using samples. Normally, Monte Carlo uses random, independent samples with error shrinking at a rate of 1 divided by the number of samples. The authors discuss using a special kind of sampling called determinantal point processes (DPPs), which spread samples out more evenly. They study two previous methods: one by Bardenet and Hardy that works better for smooth functions but uses a fixed DPP, and another by Ermakov and Zolotukhin that adapts the DPP to the function and has similar error rates to Monte Carlo. The authors update these methods for continuous cases and provide new ways to sample from these DPPs.
Monte Carlo estimator, integral estimation, determinantal point process, variance rate, repulsive distribution, sampling algorithms, unbiased estimator, smooth functions, continuous setting
Authors
Guillaume Gautier, Rémi Bardenet, Michal Valko
Abstract
The standard Monte Carlo estimator $\widehat{I}_N^{\mathrm{MC}}$ of $\int f \,\mathrm{d}\omega$ relies on independent samples from $\omega$ and has variance of order $1/N$. Replacing the independent samples with a determinantal point process (DPP), a repulsive distribution, keeps the estimator consistent, with variance rates that depend on how the DPP is adapted to $f$ and $\omega$. We examine two existing DPP-based estimators: one by Bardenet & Hardy (2020), which achieves a rate of $\mathcal{O}(N^{-(1+1/d)})$ for smooth $f$ but relies on a fixed DPP; the other, by Ermakov & Zolotukhin (1960), is unbiased with variance of order $1/N$, like Monte Carlo, but uses a DPP tailored to $f$. We revisit these estimators, generalize them to continuous settings, and provide sampling algorithms.
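The $1/N$ baseline that both DPP estimators are compared against is easy to verify empirically. The sketch below (a generic illustration with a hypothetical integrand $f(x) = x^2$ on $[0,1]$, not the paper's DPP-based method) checks that increasing the number of i.i.d. samples tenfold shrinks the variance of the standard Monte Carlo estimator roughly tenfold:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(f, n, rng):
    """Standard Monte Carlo estimate of ∫_0^1 f(x) dx from n i.i.d. uniforms."""
    x = rng.uniform(0.0, 1.0, size=n)
    return f(x).mean()

# Toy integrand, chosen only for illustration: ∫_0^1 x^2 dx = 1/3.
f = lambda x: x**2

# Empirical variance of the estimator over many independent repetitions,
# at two sample sizes N = 100 and N = 1000.
reps = 2000
var_small = np.var([mc_estimate(f, 100, rng) for _ in range(reps)])
var_large = np.var([mc_estimate(f, 1000, rng) for _ in range(reps)])

# Variance is sigma^2 / N, so the ratio should be close to 10.
print(var_small / var_large)
```

The DPP-based estimators in the paper modify the sampling distribution, not this averaging step: repulsion between samples is what lets Bardenet & Hardy's estimator beat the $1/N$ rate for smooth $f$.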