Juliann Opitz, Robert D. Allen, et al.
Microlithography 1998
This paper introduces a class of probabilistic counting algorithms with which one can estimate the number of distinct elements in a large collection of data (typically a large file stored on disk) in a single pass, using only a small amount of additional storage (typically less than a hundred binary words) and only a few operations per element scanned. The algorithms are based on statistical observations made on the bits of hashed values of records. They are by construction totally insensitive to the replicative structure of elements in the file; they can be used in the context of distributed systems without any degradation of performance and prove especially useful for database query optimisation. © 1985.
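The abstract above describes estimating the number of distinct elements from bit patterns of hashed records in a single pass with a few words of storage. The following is a minimal Python sketch of that idea under stated assumptions, not code from the paper: it records which low-order 1-bit positions occur among hashed values and converts the first unseen position into an estimate. The function names, the blake2b hash, the 32-bit bitmap, and the single-bitmap variant are illustrative choices.

    import hashlib
    import random

    PHI = 0.77351  # correction constant from the probabilistic-counting analysis

    def rho(x):
        # Position (0-based) of the least significant 1-bit of x (x > 0).
        return (x & -x).bit_length() - 1

    def estimate_distinct(items, bits=32):
        # Single pass; the only extra storage is one small bitmap of `bits` bits.
        bitmap = 0
        for item in items:
            h = int.from_bytes(
                hashlib.blake2b(str(item).encode(), digest_size=8).digest(), "big"
            )
            h &= (1 << bits) - 1
            if h:
                bitmap |= 1 << rho(h)
        # R = index of the lowest zero bit of the bitmap; duplicates set the same
        # bits, so the estimate is insensitive to how often each element repeats.
        r = 0
        while bitmap & (1 << r):
            r += 1
        return (2 ** r) / PHI

    if __name__ == "__main__":
        stream = [random.randrange(10_000) for _ in range(100_000)]
        print("exact:", len(set(stream)), "estimate:", round(estimate_distinct(stream)))

A single bitmap only gives a rough estimate; in practice many bitmaps are maintained and averaged (stochastic averaging) to reduce the variance, still within the small-storage, single-pass budget the abstract describes.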
Jonathan Ashley, Brian Marcus, et al.
Ergodic Theory and Dynamical Systems
J. LaRue, C. Ting
Proceedings of SPIE 1989
Harpreet S. Sawhney
IS&T/SPIE Electronic Imaging 1994