Anthony Praino, Lloyd Treinish, et al.
AGU 2024
We introduce k-variance, a generalization of variance built on the machinery of random bipartite matchings. k-variance measures the expected cost of matching two sets of k samples from a distribution to each other, capturing local rather than global information about a measure as k increases; it is easily approximated stochastically using sampling and linear programming. In addition to defining k-variance and proving its basic properties, we provide in-depth analysis of this quantity in several key cases, including one-dimensional measures, clustered measures, and measures concentrated on low-dimensional subsets of ℝⁿ. We conclude with experiments and open problems motivated by this new way to summarize distributional shape.
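The abstract notes that k-variance is easily approximated stochastically via sampling and linear programming. Below is a minimal Monte Carlo sketch of that idea, not the paper's reference implementation: the helper name k_variance_mc, the sampler interface, and the normalization (half the mean matched squared distance, chosen so that k = 1 roughly recovers the ordinary variance) are all assumptions; the paper's exact convention may differ. The bipartite matching is solved with SciPy's assignment solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def k_variance_mc(sampler, k, n_trials=200, rng=None):
    """Monte Carlo sketch of a k-variance estimate (hypothetical helper).

    `sampler(m, rng)` is assumed to return an (m, d) array of i.i.d. draws.
    """
    rng = np.random.default_rng(rng)
    costs = []
    for _ in range(n_trials):
        x = sampler(k, rng)  # first batch of k samples
        y = sampler(k, rng)  # second, independent batch of k samples
        # pairwise squared Euclidean distances between the two batches
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        # optimal bipartite matching (assignment problem)
        row, col = linear_sum_assignment(d2)
        costs.append(d2[row, col].mean())
    # normalization is an assumption: 1/2 so that k = 1 gives ~ ordinary variance
    return 0.5 * float(np.mean(costs))

# Example: for a 1-D standard normal, k = 1 should return roughly 1.0,
# while larger k probes increasingly local structure of the distribution.
gauss = lambda m, rng: rng.standard_normal((m, 1))
print(k_variance_mc(gauss, k=1), k_variance_mc(gauss, k=50))
```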
Anthony Praino, Lloyd Treinish, et al.
AGU 2024
Paul Soulos, Aleksandar Terzic, et al.
NeurIPS 2024
Felicia Jing, Sara Berger, et al.
CSCW 2024
Sara Capponi
ACS Fall 2023