Given a set of matches with success probabilities:

$$\left\{ \left(i, j, p_{ij}\right) \right\}$$

one might want to estimate the number of elements in $A$, $B$, or $A \cup B$ that are involved in a successful match. Done naïvely, computing this expectation is as expensive as computing the PMF of the Poisson binomial distribution. We can sum $\text{ops}(n,k)$, from the previous post on the Poisson binomial distribution, over $k=0,1,\dots ,n$ to get:

where we add in the multiplication operations necessary to calculate the expectation. This shows that, when calculated naïvely, computing the expected number of entities with a success takes an exponential number of operations.

## Example 1

Let’s say that we are doing bipartite matching and we’ve produced the following matches:

$$\left\{ \left(a, c, p_{ac}\right), \left(b, d, p_{bd}\right) \right\}$$

Then we can determine the number of entities expected to see at least one success with the following formula:

$$
\begin{aligned}
\mathbb{E} ={} & 0 \cdot \left(1 - p_{ac}\right)\left(1 - p_{bd}\right) \\
 &+ 2 \cdot \left(1 - p_{ac}\right) p_{bd} \\
 &+ 2 \cdot p_{ac} \left(1 - p_{bd}\right) \\
 &+ 4 \cdot p_{ac}\, p_{bd}
\end{aligned}
$$
The probabilities (on the right) should be familiar to those who’ve seen the Poisson binomial distribution. The coefficients $\left\{ 0, 2, 2, 4 \right\}$ need some explaining. In this example, we’re ascribing the successful events, when they occur, to both entities involved in the match. The coefficient is the number of unique entities that appear in success probability subscripts of each term or addend. For instance, in the first line, there are no success probabilities—only failure probabilities—so the coefficient is $0$.
$b$ and $d$ occur in the second term, so the coefficient is 2; $a$ and $c$ occur in the third term, so the coefficient is 2; $a$, $b$, $c$, and $d$ occur in the fourth term, so the coefficient is 4. The expectation, once simplified, is $2 p_{ac} + 2 p_{bd}$.
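The coefficients can be sanity-checked by brute force. The sketch below (a Scala illustration; the function name is ours, not from the post) enumerates the four joint outcomes of the two matches and weights the number of matched entities by each outcome's probability:

```scala
// Brute-force check of Example 1: enumerate the four outcomes of the
// matches (a, c, p_ac) and (b, d, p_bd), and weight the number of
// entities involved in a successful match by each outcome's probability.
def bruteForceExpectation(pac: Double, pbd: Double): Double = {
  val contributions = for {
    acSucceeds <- Seq(true, false)
    bdSucceeds <- Seq(true, false)
  } yield {
    val prob = (if (acSucceeds) pac else 1 - pac) *
               (if (bdSucceeds) pbd else 1 - pbd)
    // Each successful match contributes its two entities.
    val entities = (if (acSucceeds) 2 else 0) + (if (bdSucceeds) 2 else 0)
    prob * entities
  }
  contributions.sum // equals 2 * pac + 2 * pbd after simplification
}
```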

## Example 2

This example has the following matches:

and the expectation calculation, carried out as in the previous example, becomes:

which simplifies to:

This calculation, while accurate, requires exponential runtime, and it also requires an understanding of the topology of the graph to determine the coefficients.

## A Better Way

When thinking about what we’re trying to accomplish, it should be pretty clear that we are trying to determine the expected number of entities in at least one successful match. The probability of an entity being involved in at least one successful match is one minus the probability of it being involved in zero successful matches. So, for an entity $a$, if we aggregate the matches involving $a$, we can write the probability of $a$ being in at least one successful match as:

$$P_{a} = 1 - \prod_{m \in M_{a}} \left(1 - p_{m}\right) \tag{1}$$
where $M_{a}$ is the set of matches involving $a$. Then, if we want the distribution of the number of entities involved in at least one successful match, the appropriate distribution is the Poisson binomial distribution. The expectation of the Poisson binomial distribution is just the sum of the success probabilities that parameterize it; see the Wikipedia page for details. So, to calculate the expectation, we take the sum, over entities, of the probability of being in at least one successful match. This is just:

$$\mathbb{E} = \sum_{a} \left( 1 - \prod_{m \in M_{a}} \left(1 - p_{m}\right) \right)$$
It may not be immediately obvious but this algorithm is $O(N)$ where $N$ is the number of matches. Here’s the algorithm:

### Algorithm 1

1. Create a hash map with entity IDs as keys and, for each key, a linked list of the probabilities of the matches involving that entity.
2. For each match: $\left(i, j, p_{ij} \right)$, place $p_{ij}$ in the list associated with key $i$ (Optionally, place $p_{ij}$ in the list associated with key $j$, if the match is ascribed to both entities).
3. For each key-value pair in the map, calculate the probability of at least one successful match using equation 1.
4. Sum the resultant probabilities.

Since we only ever iterate over the match probabilities twice, and hash map and linked list insertion and iteration are $O(1)$ per operation, the algorithm is $O(N)$.
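A sketch of Algorithm 1 in Scala (the function and variable names here are illustrative assumptions, not from the post):

```scala
import scala.collection.mutable

// Sketch of Algorithm 1.
// Steps 1-2: group each match probability under both entity IDs.
// Steps 3-4: apply equation 1 per entity, then sum the results.
def expectationAlg1(matches: Seq[(Long, Long, Double)]): Double = {
  val byEntity = mutable.Map.empty[Long, List[Double]]
  for ((i, j, p) <- matches) {
    byEntity(i) = p :: byEntity.getOrElse(i, Nil)
    byEntity(j) = p :: byEntity.getOrElse(j, Nil) // match ascribed to both
  }
  byEntity.values.map { ps =>
    1.0 - ps.map(1.0 - _).product // P(at least one success) for this entity
  }.sum
}
```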

### Equivalence

For Example 1, summing equation 1 over the entities $a$, $b$, $c$, and $d$ gives

$$\left[1 - \left(1 - p_{ac}\right)\right] + \left[1 - \left(1 - p_{bd}\right)\right] + \left[1 - \left(1 - p_{ac}\right)\right] + \left[1 - \left(1 - p_{bd}\right)\right],$$

which simplifies to $2 p_{ac} + 2 p_{bd}$, and the corresponding sum for Example 2

simplifies to

These are the same as the expectations from the naïve calculation.

### Algorithm 2

One might notice that the linked lists in Algorithm 1 are unnecessary: we never need the individual probabilities, only the running product of their failure probabilities. Instead, we can do the following:

1. Create a hash map, $M$, with entity IDs in $\mathbb{N}$ as keys and values in $\mathbb{R}$. The starting value for each key should be 1.
2. For each match: $\left(i, j, p_{ij} \right)$
3. $\quad \left(i, v\right) \leftarrow \left(i, v \left(1 - p_{ij}\right) \right)$
4. $\quad \left(j, v\right) \leftarrow \left(j, v \left(1 - p_{ij}\right) \right)$ (Optionally, if match is ascribed to both entities).
5. Compute the sum of $1 - v$ over all values $v$ in $M$.
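The steps above can be sketched in Scala as a function `entitiesWithASuccessfulMatch` over a `TraversableOnce` of matches (the implementation details here are our assumptions):

```scala
import scala.collection.mutable

// Sketch of Algorithm 2. For each entity we keep v = P(all of its
// matches fail); the final answer is the sum of 1 - v over all
// entities seen.
def entitiesWithASuccessfulMatch(
    matches: TraversableOnce[(Long, Long, Double)]): Double = {
  val v = mutable.Map.empty[Long, Double]
  matches.foreach { case (i, j, p) =>
    v(i) = v.getOrElse(i, 1.0) * (1 - p) // step 3
    v(j) = v.getOrElse(j, 1.0) * (1 - p) // step 4: ascribed to both
  }
  v.values.map(1.0 - _).sum              // step 5
}
```

On Example 1, `entitiesWithASuccessfulMatch(Seq((1L, 3L, pac), (2L, 4L, pbd)))` yields $2 p_{ac} + 2 p_{bd}$, matching the naïve calculation.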

#### Algorithm 2 Remarks

One very special property to note here is that the input to entitiesWithASuccessfulMatch is a TraversableOnce, which, as the name suggests, can only be traversed once. Since we don’t copy the matches into a data structure that can be traversed multiple times, this is an online algorithm. If we think of the matches as a graph, the algorithm takes $O(E)$ time and $O(V)$ auxiliary space.

What’s even cooler is that with a very small amount of work, we can turn this algorithm into a monoid. This allows the algorithm to be trivially parallelized.
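As a sketch of what that might look like (the `FailureProbs` name and representation are our assumptions): the per-entity failure-probability maps form a monoid whose identity is the empty map and whose combine operation multiplies values pointwise, so partial maps built from separate shards of matches can be merged in any order:

```scala
// Hypothetical monoid over per-entity failure probabilities.
// Identity: the empty map (every entity implicitly has v = 1.0).
// Combine: pointwise product, since the failure probabilities of
// independent shards of matches multiply.
case class FailureProbs(v: Map[Long, Double]) {
  def combine(that: FailureProbs): FailureProbs =
    FailureProbs((v.keySet ++ that.v.keySet).map { k =>
      k -> v.getOrElse(k, 1.0) * that.v.getOrElse(k, 1.0)
    }.toMap)
  def expectation: Double = v.values.map(1.0 - _).sum
}

object FailureProbs {
  val empty = FailureProbs(Map.empty)
  // Build the map for a single match ascribed to both entities.
  def ofMatch(i: Long, j: Long, p: Double): FailureProbs =
    FailureProbs(Map(i -> (1 - p), j -> (1 - p)))
}
```

Folding `FailureProbs.ofMatch` over the matches with `combine` and then taking `expectation` reproduces Algorithm 2's result, shard by shard.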

## Remarks

There’s something really satisfying about this algorithm: not only is the runtime linear rather than exponential, but it also never needs to track the graph topology, which makes it both much faster and much simpler conceptually.

We’ve shown that this algorithm is both an online algorithm, meaning we can update the results as data becomes available, and a monoid, so we can easily parallelize it.