I graduated with a PhD from the Department of Computer Science at Rice University in August 2021, advised by Prof. Moshe Vardi. I obtained a BS in Computer Science and a BA in Mathematics from Rice University as well.
My research interests are in automated reasoning: model counting, discrete integration, and Boolean satisfiability.
Planning and Execution for Discrete Integration
PhD Thesis, Rice University, August 2021.
Discrete integration is a fundamental problem in artificial intelligence with numerous applications, including probabilistic reasoning, planning, inexact computing, engineering reliability, and statistical physics. The task is to count the total weight, subject to a given weight function, of all solutions to given constraints. Developing tools to compute the total weight for applied problems is an area of active research.
Over the last ten years, hundreds of thousands of research hours have been poured into low-level computational tools and compilers for neural-network training and inference. Simultaneously, there has been a surge in high-level reasoning tools based on graph decompositions, spurred by several competitions. While some existing tools for discrete integration (counters) tightly integrate with these low-level computational or high-level reasoning tools, no existing counter is able to leverage both together. In this dissertation, we demonstrate that a clean separation of high-level reasoning (planning) and low-level computation (execution) leads to scalable and more flexible counters. Instead of building tightly on any particular tool, we target APIs that can be fulfilled by multiple implementations. This requires novel theoretical and algorithmic techniques to use existing high-level reasoning tools in a way consistent with the options available in popular low-level computational libraries. The resulting counters perform well in many hardware settings (single-core, multi-core, GPU).
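To make the task concrete: a literal-weighted model counter sums, over all satisfying assignments, the product of the weights of the literals set true. A minimal brute-force sketch in Python (the DIMACS-style encoding and the example formula and weights are illustrative, not from the dissertation; real counters avoid this exponential enumeration):

```python
from itertools import product

def weighted_count(clauses, weights, n_vars):
    """Brute-force literal-weighted model count of a CNF formula.
    clauses: list of clauses, each a list of nonzero ints (DIMACS style:
    v means variable v is true, -v means it is false).
    weights: dict from literals to nonnegative weights."""
    total = 0.0
    for bits in product((False, True), repeat=n_vars):
        assignment = {v: bits[v - 1] for v in range(1, n_vars + 1)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            w = 1.0
            for v, val in assignment.items():
                w *= weights[v if val else -v]
            total += w
    return total

# (x1 OR x2): weight = 0.3*0.9 + 0.3*0.1 + 0.7*0.9 = 0.93
print(weighted_count([[1, 2]], {1: 0.3, -1: 0.7, 2: 0.9, -2: 0.1}, 2))
```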
ProCount: Weighted Projected Model Counting with Graded Project-Join Trees
Proceedings of International Conference on Theory and Applications of Satisfiability Testing (SAT), 2021.
Recent work in weighted model counting proposed a unifying framework for dynamic-programming algorithms. The core of this framework is a project-join tree: an execution plan that specifies how Boolean variables are eliminated. We adapt this framework to compute exact literal-weighted projected model counts of propositional formulas in conjunctive normal form. Our key conceptual contribution is to define gradedness on project-join trees, a novel condition requiring irrelevant variables to be eliminated before relevant variables. We prove that building graded project-join trees can be reduced to building standard project-join trees and that graded project-join trees can be used to compute projected model counts. The resulting tool ProCount is competitive with the state-of-the-art tools D4P, projMC, and reSSAT, achieving the shortest solving time on 131 of the 390 benchmarks solved by at least one tool, out of 849 benchmarks in total.
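What ProCount computes can be stated by brute force in a few lines: sum the weights of assignments to the relevant variables that can be extended, via the existentially quantified irrelevant variables, to a satisfying assignment. A naive Python sketch of the semantics (the example formula and weights are illustrative; ProCount itself uses graded project-join trees, not enumeration):

```python
from itertools import product

def projected_count(clauses, weights, relevant, n_vars):
    """Weighted projected model count: sum, over assignments to the
    relevant variables, of the weight of each assignment that extends,
    via the irrelevant variables, to a full satisfying assignment."""
    irrelevant = [v for v in range(1, n_vars + 1) if v not in relevant]
    total = 0.0
    for rbits in product((False, True), repeat=len(relevant)):
        partial = dict(zip(relevant, rbits))
        extendable = False
        for ibits in product((False, True), repeat=len(irrelevant)):
            full = {**partial, **dict(zip(irrelevant, ibits))}
            if all(any(full[abs(l)] == (l > 0) for l in c) for c in clauses):
                extendable = True
                break
        if extendable:
            w = 1.0
            for v, val in partial.items():
                w *= weights[v if val else -v]
            total += w
    return total

# x1 AND y2, projecting onto {x1}: only x1 = true extends, weight 0.4
print(projected_count([[1], [2]], {1: 0.4, -1: 0.6}, [1], 2))
```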

Taming Discrete Integration via the Boon of Dimensionality
Proceedings of Neural Information Processing Systems (NeurIPS), 2020.
Discrete integration is a fundamental problem in computer science that concerns the computation of discrete sums over exponentially large sets. Despite intense interest from researchers for over three decades, the design of scalable techniques for computing estimates with rigorous guarantees for discrete integration remains the holy grail. The key contribution of this work addresses this scalability challenge via an efficient reduction of discrete integration to model counting. The proposed reduction is achieved via a significant increase in the dimensionality that, contrary to conventional wisdom, leads to solving an instance of the relatively simpler problem of model counting.
Building on the promising approach proposed by Chakraborty et al., our work overcomes the key weakness of their approach: a restriction to dyadic weights. We augment our proposed reduction, called DeWeight, with a state-of-the-art efficient approximate model counter and perform a detailed empirical analysis over benchmarks arising from neural-network verification domains, an emerging application area of critical importance. DeWeight, to the best of our knowledge, is the first technique to compute estimates with provable guarantees for this class of benchmarks.
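The dyadic case that this line of work builds on can be illustrated directly: to give variable v weight k/2^m, add m fresh bits and make v true exactly when those bits encode a number below k; the unweighted count of the augmented space, scaled by 2^(-m) per variable, recovers the weighted count. A toy Python demonstration (enumeration stands in for emitting CNF to a model counter; the formula and weights are made up):

```python
from itertools import product

def count_with_dyadic_weights(sat, k, m, n_vars):
    """Reduce literal-weighted counting to unweighted counting.
    sat: predicate on assignments {var: bool}; k[v]/2**m is the weight
    of v being true (so 1 - k[v]/2**m is the weight of v being false).
    Each variable v is replaced by m fresh bits b_v, with v true exactly
    when b_v < k[v]; dividing the unweighted count of the augmented
    space by 2**(m * n_vars) recovers the weighted count."""
    count = 0
    for blocks in product(range(2 ** m), repeat=n_vars):
        assignment = {v: blocks[v - 1] < k[v] for v in range(1, n_vars + 1)}
        if sat(assignment):
            count += 1
    return count / 2 ** (m * n_vars)

# (x1 OR x2) with weights 3/8 and 5/8: 1 - (5/8)*(3/8) = 49/64
print(count_with_dyadic_weights(lambda a: a[1] or a[2], {1: 3, 2: 5}, 3, 2))
```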
DPMC: Weighted Model Counting by Dynamic Programming on Project-Join Trees
Proceedings of International Conference on Principles and Practice of Constraint Programming (CP), 2020.
We propose a unifying dynamic-programming framework to compute exact literal-weighted model counts of formulas in conjunctive normal form. At the center of our framework are project-join trees, which specify efficient project-join orders to apply additive projections (variable eliminations) and joins (clause multiplications). In this framework, model counting is performed in two phases. First, the planning phase constructs a project-join tree from a formula. Second, the execution phase computes the model count of the formula, employing dynamic programming as guided by the project-join tree. We empirically evaluate various methods for the planning phase and compare constraint-satisfaction heuristics with tree-decomposition tools. We also investigate the performance of different data structures for the execution phase and compare algebraic decision diagrams with tensors. We show that our dynamic-programming model-counting framework DPMC is competitive with the state-of-the-art exact weighted model counters cachet, c2d, d4, and miniC2D.
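The two-phase design can be sketched end to end in Python; here the "plan" is just a variable elimination order and factors are dense dictionaries, standing in for the project-join trees and ADD/tensor data structures of the paper (all names and the example formula are illustrative):

```python
from itertools import product

# A factor is (vars, table): vars is a sorted tuple of variable names,
# and table maps each assignment (a tuple of 0/1 values) to a weight.

def clause_factor(clause):
    vs = tuple(sorted({abs(l) for l in clause}))
    tab = {b: float(any(b[vs.index(abs(l))] == (l > 0) for l in clause))
           for b in product((0, 1), repeat=len(vs))}
    return vs, tab

def join(f, g):
    """Clause multiplication: pointwise product over the union of vars."""
    (fv, ft), (gv, gt) = f, g
    vs = tuple(sorted(set(fv) | set(gv)))
    tab = {}
    for b in product((0, 1), repeat=len(vs)):
        a = dict(zip(vs, b))
        tab[b] = ft[tuple(a[v] for v in fv)] * gt[tuple(a[v] for v in gv)]
    return vs, tab

def project(f, x, w):
    """Additive projection: sum out x, applying literal weights w[x], w[-x]."""
    fv, ft = f
    i = fv.index(x)
    vs = fv[:i] + fv[i + 1:]
    tab = {}
    for b, val in ft.items():
        key = b[:i] + b[i + 1:]
        tab[key] = tab.get(key, 0.0) + (w[x] if b[i] else w[-x]) * val
    return vs, tab

def dpmc(clauses, weights, elim_order):
    """Execution phase: eliminate variables in the order given by the plan."""
    factors = [clause_factor(c) for c in clauses]
    for x in elim_order:
        with_x = [f for f in factors if x in f[0]]
        factors = [f for f in factors if x not in f[0]]
        if not with_x:  # variable occurs in no clause
            factors.append(((), {(): weights[x] + weights[-x]}))
            continue
        acc = with_x[0]
        for f in with_x[1:]:
            acc = join(acc, f)
        factors.append(project(acc, x, weights))
    result = 1.0
    for _, tab in factors:
        result *= tab[()]
    return result

# (x1 OR x2), eliminating x1 then x2
print(dpmc([[1, 2]], {1: 0.3, -1: 0.7, 2: 0.9, -2: 0.1}, [1, 2]))
```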

Parallel Weighted Model Counting with Tensor Networks
Proceedings of SAT Workshop on Model Counting (MCW), 2020.
A promising new algebraic approach to weighted model counting makes use of tensor networks, following a reduction from weighted model counting to tensor-network contraction. Prior work has focused on analyzing the single-core performance of this approach and demonstrated that it is an effective addition to the current portfolio of weighted-model-counting algorithms.
In this work, we explore the impact of multi-core and GPU use on tensor-network contraction for weighted model counting. To leverage multiple cores, we implement a parallel portfolio of tree-decomposition solvers to find an order in which to contract tensors. To leverage a GPU, we use TensorFlow to perform the contractions. We compare the resulting weighted model counter on 1914 standard weighted model counting benchmarks and show that it significantly improves the virtual best solver.
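The underlying reduction is easy to sketch with numpy's einsum: each variable becomes a shared index carrying a length-2 weight vector, each clause becomes a 0/1 tensor, and contracting everything to a scalar sums over all assignments. An illustrative sketch (assumes numpy; limited to 26 variables by the single-letter index names; the paper's implementation uses TensorFlow with planned contraction orders instead):

```python
import numpy as np

def wmc_einsum(clauses, weights, n_vars):
    """Weighted model count as one tensor-network contraction: a rank-1
    weight tensor per variable, a 0/1 tensor per clause, and a shared
    einsum index per variable. Contracting to a scalar sums over all
    assignments."""
    operands, subscripts = [], []
    for v in range(1, n_vars + 1):
        operands.append(np.array([weights[-v], weights[v]]))
        subscripts.append(chr(ord('a') + v - 1))
    for c in clauses:
        vs = sorted({abs(l) for l in c})
        t = np.zeros([2] * len(vs))
        for idx in np.ndindex(*t.shape):
            a = dict(zip(vs, idx))
            t[idx] = float(any(a[abs(l)] == (l > 0) for l in c))
        operands.append(t)
        subscripts.append(''.join(chr(ord('a') + v - 1) for v in vs))
    return float(np.einsum(','.join(subscripts) + '->', *operands))

print(wmc_einsum([[1, 2]], {1: 0.3, -1: 0.7, 2: 0.9, -2: 0.1}, 2))
```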
ADDMC: Weighted Model Counting with Algebraic Decision Diagrams
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2020.
We present an algorithm to compute exact literal-weighted model counts of Boolean formulas in Conjunctive Normal Form. Our algorithm employs dynamic programming and uses Algebraic Decision Diagrams as the primary data structure. We implement this technique in ADDMC, a new model counter. We empirically evaluate various heuristics that can be used with ADDMC. We then compare ADDMC to state-of-the-art exact weighted model counters (Cachet, c2d, d4, and miniC2D) on 1914 standard model counting benchmarks and show that ADDMC significantly improves the virtual best solver.
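The ADD recipe can be miniaturized: a node is either a float terminal or a (var, low, high) triple with variables tested in increasing order, join is a pointwise product, and projection is a weighted sum of the two cofactors. A toy Python sketch (production ADDs use hash-consed nodes, e.g. via CUDD; all names and the example are illustrative):

```python
def mk(var, low, high):
    """Build a node, collapsing redundant tests (reduction)."""
    return low if low == high else (var, low, high)

def apply_op(f, g, op):
    """Combine two ADDs pointwise, e.g. product for join."""
    if not isinstance(f, tuple) and not isinstance(g, tuple):
        return op(f, g)
    fv = f[0] if isinstance(f, tuple) else float('inf')
    gv = g[0] if isinstance(g, tuple) else float('inf')
    v = min(fv, gv)
    f0, f1 = (f[1], f[2]) if fv == v else (f, f)
    g0, g1 = (g[1], g[2]) if gv == v else (g, g)
    return mk(v, apply_op(f0, g0, op), apply_op(f1, g1, op))

def scale(f, c):
    return apply_op(f, c, lambda a, b: a * b)

def project(f, x, w):
    """Eliminate variable x as a weighted sum of its two cofactors."""
    if not isinstance(f, tuple) or f[0] > x:
        return scale(f, w[-x] + w[x])  # x does not occur in f
    if f[0] == x:
        return apply_op(scale(f[1], w[-x]), scale(f[2], w[x]),
                        lambda a, b: a + b)
    return mk(f[0], project(f[1], x, w), project(f[2], x, w))

def clause_add(clause):
    """ADD of a clause: 1.0 on satisfying assignments, 0.0 elsewhere."""
    f = 0.0
    for l in sorted(clause, key=abs, reverse=True):
        f = (abs(l), 1.0, f) if l < 0 else (abs(l), f, 1.0)
    return f

def addmc(clauses, weights, n_vars):
    f = 1.0
    for c in clauses:
        f = apply_op(f, clause_add(c), lambda a, b: a * b)
    for x in range(1, n_vars + 1):
        f = project(f, x, weights)
    return f

print(addmc([[1, 2]], {1: 0.3, -1: 0.7, 2: 0.9, -2: 0.1}, 2))
```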

Transformations of Boolean Functions
Proceedings of Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 2019.
Boolean functions are characterized by the unique structure of their solution space. Some properties of the solution space, such as the possible existence of a solution, are well sought after but difficult to obtain. To better reason about such properties, we define transformations as functions that change one Boolean function to another while maintaining some properties of the solution space. We explore transformations of Boolean functions, compactly described as Boolean formulas, where the property to maintain is the number of solutions in the solution spaces. We first discuss general characteristics of such transformations. Next, we reason about the computational complexity of transforming one Boolean formula to another. Finally, we demonstrate the versatility of transformations by extensively discussing transformations of Boolean formulas to "blocks," which are solution spaces in which the set of solutions forms a prefix of the solution space under a lexicographic order of the variables.
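One of the simplest count-preserving transformations is renaming a variable to its negation, which permutes the solution space without changing its size. A quick Python check on a toy formula (the formula is illustrative, not from the paper):

```python
from itertools import product

def count(clauses, n_vars):
    """Brute-force model count of a CNF formula (DIMACS-style literals)."""
    return sum(
        all(any((bits[abs(l) - 1] == 1) == (l > 0) for l in c) for c in clauses)
        for bits in product((0, 1), repeat=n_vars))

def flip(clauses, x):
    """Negate every occurrence of variable x: composing a formula with
    the bijection x -> NOT x permutes the solution space, so the number
    of solutions is preserved."""
    return [[-l if abs(l) == x else l for l in c] for c in clauses]

f = [[1, 2], [-1, 3]]  # (x1 OR x2) AND (NOT x1 OR x3)
print(count(f, 3), count(flip(f, 1), 3))
```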

Efficient Contraction of Large Tensor Networks for Weighted Model Counting through Graph Decompositions
(Preprint) Submitted to Artificial Intelligence Journal, 2019.
Constrained counting is a fundamental problem in artificial intelligence. A promising new algebraic approach to constrained counting makes use of tensor networks, following a reduction from constrained counting to the problem of tensor-network contraction. Contracting a tensor network efficiently requires determining an efficient order in which to contract the tensors inside the network, which is itself a difficult problem.
In this work, we apply graph decompositions to find contraction orders for tensor networks. We prove that finding an efficient contraction order for a tensor network is equivalent to the well-known problem of finding an optimal carving decomposition. Thus, memory-optimal contraction orders for planar tensor networks can be found in cubic time. We show that tree decompositions can be used both to find carving decompositions and to factor tensor networks with high-rank, structured tensors. We implement these algorithms on top of state-of-the-art solvers for tree decompositions and show empirically that the resulting weighted model counter is quite effective and useful as part of a portfolio of counters.
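The quantity being optimized can be seen in miniature: the memory cost of a contraction order is governed by the largest intermediate tensor it creates, and carving width bounds the best achievable rank. A hypothetical helper (toy bookkeeping, not the paper's algorithm) that scores two orders on a small chain-shaped network:

```python
def max_intermediate_rank(tensors, order):
    """tensors: name -> set of index labels; order: list of
    (left, right, new_name) pairwise contractions. Returns the largest
    rank (number of free indices) among the intermediate tensors, the
    quantity that a good contraction order keeps small."""
    nets = {name: set(ix) for name, ix in tensors.items()}
    worst = 0
    for a, b, new in order:
        merged = nets.pop(a) | nets.pop(b)
        outside = set().union(*nets.values()) if nets else set()
        nets[new] = merged & outside  # indices seen only by a, b are summed out
        worst = max(worst, len(nets[new]))
    return worst

# A chain T1(a) - T2(a,b) - T3(b,c) - T4(c):
chain = {'t1': {'a'}, 't2': {'a', 'b'}, 't3': {'b', 'c'}, 't4': {'c'}}
good = [('t1', 't2', 'u1'), ('u1', 't3', 'u2'), ('u2', 't4', 'u3')]
bad = [('t2', 't3', 'u1'), ('u1', 't1', 'u2'), ('u2', 't4', 'u3')]
print(max_intermediate_rank(chain, good), max_intermediate_rank(chain, bad))
```

Contracting from one end of the chain keeps every intermediate tensor at rank 1, while starting in the middle creates a rank-2 intermediate.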
Random CNF-XOR Formulas
Master's Thesis, Rice University, April 2017.
Boolean Satisfiability (SAT) is fundamental in many diverse areas such as artificial intelligence, formal verification, and biology. Recent universal-hashing-based approaches to the problems of sampling and counting crucially depend on the runtime performance of specialized SAT solvers on formulas expressed as the conjunction of both k-CNF constraints and variable-width XOR constraints (known as CNF-XOR formulas), but random CNF-XOR formulas are unexplored in prior work. In this work, we present the first study of random CNF-XOR formulas. We prove that a phase-transition in the satisfiability of random CNF-XOR formulas exists for k = 2 and (when the number of k-CNF constraints is small) for k > 2. We empirically demonstrate that a state-of-the-art SAT solver scales exponentially on random CNF-XOR formulas across many clause densities, peaking around the empirical phase-transition location. Finally, we prove that the solution space of random CNF-XOR formulas 'shatters' at all nonzero XOR-clause densities into well-separated components.
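The objects under study are easy to construct: fixed-width random clauses conjoined with variable-width random XOR constraints. A small Python sketch of a generator and a brute-force satisfiability check (the generator's parameterization is illustrative; the thesis studies how satisfiability varies with the clause and XOR densities):

```python
import random
from itertools import product

def random_cnfxor(n, num_cnf, num_xor, k=3, rng=random):
    """Draw a random k-CNF-XOR formula on n variables: num_cnf clauses
    of k distinct literals each, plus num_xor variable-width XOR
    constraints in which each variable (and the target parity) is
    included independently with probability 1/2."""
    cnf = [[v if rng.random() < 0.5 else -v
            for v in rng.sample(range(1, n + 1), k)]
           for _ in range(num_cnf)]
    xors = [([v for v in range(1, n + 1) if rng.random() < 0.5],
             rng.random() < 0.5)
            for _ in range(num_xor)]
    return cnf, xors

def satisfiable(cnf, xors, n):
    """Brute-force satisfiability of a CNF-XOR formula."""
    for bits in product((0, 1), repeat=n):
        a = dict(enumerate(bits, start=1))
        if all(any((a[abs(l)] == 1) == (l > 0) for l in c) for c in cnf) \
                and all(sum(a[v] for v in vs) % 2 == int(p) for vs, p in xors):
            return True
    return False
```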

The Hard Problems Are Almost Everywhere For Random CNF-XOR Formulas
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2017.
Recent universal-hashing-based approaches to sampling and counting crucially depend on the runtime performance of SAT solvers on formulas expressed as the conjunction of both CNF constraints and variable-width XOR constraints (known as CNF-XOR formulas). In this paper, we present the first study of the runtime behavior of SAT solvers equipped with XOR-reasoning techniques on random CNF-XOR formulas. We empirically demonstrate that a state-of-the-art SAT solver scales exponentially on random CNF-XOR formulas across a wide range of XOR-clause densities, peaking around the empirical phase-transition location. On the theoretical front, we prove that the solution space of a random CNF-XOR formula 'shatters' at all nonzero XOR-clause densities into well-separated components, similar to the behavior seen in random CNF formulas known to be difficult for many SAT algorithms.

Combining the k-CNF and XOR Phase-Transitions
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2016.
The runtime performance of modern SAT solvers on random k-CNF formulas is deeply connected with the 'phase-transition' phenomenon seen empirically in the satisfiability of random k-CNF formulas. Recent universal-hashing-based approaches to sampling and counting crucially depend on the runtime performance of SAT solvers on formulas expressed as the conjunction of both k-CNF and XOR constraints (known as k-CNF-XOR formulas), but the behavior of random k-CNF-XOR formulas is unexplored in prior work. In this paper, we present the first study of the satisfiability of random k-CNF-XOR formulas. We show empirical evidence of a surprising phase-transition that follows a linear trade-off between k-CNF and XOR constraints. Furthermore, we prove that a phase-transition for k-CNF-XOR formulas exists for k = 2 and (when the number of k-CNF constraints is small) for k > 2.
