There are situations where we can expect ahead of time that the output of a sandwich product (X.T @ diag(d) @ X) will be sparse:
If the data is dominated by one very high-dimensional categorical. For example, say there are D dense columns and M indicator columns from a single categorical. The indicator columns are mutually orthogonal, so the categorical-categorical block of the sandwich product is diagonal, and the fraction of nonzeros in the result is at most (D^2 + 2 D M + M) / (D + M)^2. If M >> D, this matrix will be quite sparse. This could happen in an e-commerce pricing context, where the features are a very high-dimensional categorical product ID plus a small number of scalar columns.
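The bound above can be written as a one-line helper. This is just an illustrative sketch (the function name is hypothetical, not part of any library): the dense-dense block has D^2 potentially nonzero entries, the two dense-categorical blocks have 2 D M, and the categorical-categorical block is diagonal with M.

```python
def sandwich_nnz_fraction_bound(D, M):
    """Upper bound on the fraction of nonzeros in the (D + M) x (D + M)
    sandwich product X.T @ diag(d) @ X, where X has D dense columns and
    one categorical expanded into M indicator columns.

    Blocks: dense-dense (D^2), dense-categorical (2 * D * M),
    categorical-categorical (diagonal only, M entries).
    """
    return (D**2 + 2 * D * M + M) / (D + M) ** 2
```

For D = 5 dense columns and a product ID with M = 10,000 levels, the bound is roughly 0.001, i.e. about 99.9% of the sandwich product is structurally zero.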
Perhaps we have computed x.sandwich(d1) and now we want to know x.sandwich(d2). The latter will have the same sparsity pattern as the former, since the nonzero positions are determined by x alone, not by the weight vector.
We can analytically upper-bound the number of nonzero elements in a sandwich product of categoricals by the number of rows: each row contributes at most one nonzero to each cross-categorical block. If the data is made up mainly of categoricals and is very wide relative to its length, its sandwich product will be fairly sparse. This would be true in, for example, the German employer-employee data set used in the AKM papers. Since a typical worker will usually not have worked for a randomly-chosen firm, the sandwich product would be very sparse.
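The employer-employee case can be sketched directly with scipy.sparse (synthetic data, not the actual German data set): the worker-firm block of the sandwich product has at most one nonzero per row of data, so its density is bounded by n_rows / (n_workers * n_firms).

```python
import numpy as np
import scipy.sparse as sps

rng = np.random.default_rng(1)
n_rows, n_workers, n_firms = 1000, 400, 300

# One-hot matrices for the worker and firm categoricals.
worker = rng.integers(0, n_workers, size=n_rows)
firm = rng.integers(0, n_firms, size=n_rows)
ones, rows = np.ones(n_rows), np.arange(n_rows)
W = sps.csr_matrix((ones, (rows, worker)), shape=(n_rows, n_workers))
F = sps.csr_matrix((ones, (rows, firm)), shape=(n_rows, n_firms))

# Worker-firm cross block of the sandwich product, with positive weights.
d = rng.uniform(0.5, 1.5, size=n_rows)
cross = (W.T @ sps.diags(d) @ F).tocsr()

# Entry (a, b) is nonzero only if some row pairs worker a with firm b,
# so the block has at most n_rows nonzeros out of n_workers * n_firms entries.
assert cross.nnz <= n_rows
```

Here the density is at most 1000 / 120000, under one percent, and it only falls further as the number of workers and firms grows relative to the number of observations.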