Model counting methods that cache the number of satisfying assignments of connected components of the residual formula were recently proposed by Bacchus, Dalmao, and Pitassi as solutions for #SAT and Bayes-net evaluation. These component-caching methods were shown theoretically to be at least competitive with previous methods, and strictly better on certain inputs. However, it was unclear whether the overhead of component caching would outweigh its benefits in practice.
We show how to combine component caching with the clause learning of zchaff, a state-of-the-art SAT solver, to build a #SAT solver that is orders of magnitude faster on many problems, both random and structured, than previous #SAT solvers. We also resolve some subtle interactions between component caching and clause learning that are far from obvious and, if handled incorrectly, can compromise the correctness of the counts returned.
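To make the component-caching idea concrete, the following is a minimal sketch (not the paper's implementation, and without clause learning) of #SAT by recursive decomposition: the residual formula is split into variable-disjoint connected components, each component is counted independently, and counts are memoized in a cache keyed on the component. All function names here (`simplify`, `components`, `mc`) are illustrative, not from the paper.

```python
# Formula = frozenset of clauses; clause = frozenset of int literals
# (positive int = variable, negative = its negation).

def simplify(clauses, lit):
    """Assign lit True: drop satisfied clauses, shrink the rest.
    Returns None if an empty clause (conflict) is produced."""
    out = set()
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied
        reduced = c - {-lit}
        if not reduced:                   # conflict
            return None
        out.add(frozenset(reduced))
    return frozenset(out)

def varset(clauses):
    return {abs(l) for c in clauses for l in c}

def components(clauses):
    """Split clauses into connected components (shared-variable graph),
    using union-find over clause indices."""
    clauses = list(clauses)
    parent = list(range(len(clauses)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    seen = {}                             # variable -> representative clause index
    for i, c in enumerate(clauses):
        for l in c:
            v = abs(l)
            if v in seen:
                parent[find(i)] = find(seen[v])
            else:
                seen[v] = i
    groups = {}
    for i, c in enumerate(clauses):
        groups.setdefault(find(i), set()).add(c)
    return [frozenset(g) for g in groups.values()]

cache = {}

def mc(clauses):
    """Number of assignments to varset(clauses) satisfying all clauses."""
    if not clauses:
        return 1
    if clauses in cache:                  # component cache hit
        return cache[clauses]
    comps = components(clauses)
    if len(comps) > 1:
        result = 1
        for comp in comps:                # disjoint components multiply
            result *= mc(comp)
    else:
        v = next(iter(varset(clauses)))   # branch on some variable
        result = 0
        for lit in (v, -v):
            r = simplify(clauses, lit)
            if r is None:
                continue
            # variables eliminated without being constrained stay free
            free = varset(clauses) - {v} - varset(r)
            result += mc(r) * (2 ** len(free))
    cache[clauses] = result
    return result
```

For example, `mc` on the formula (x1 ∨ x2) ∧ (x3 ∨ x4) decomposes it into two independent components, each with 3 satisfying assignments, giving 9; the total count over n variables is `mc(F) * 2**(n - len(varset(F)))`. The subtleties the paper addresses arise precisely because learned clauses span components and can invalidate naively cached counts, which this sketch sidesteps by doing no learning at all.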