Deep Amortized Inference for Probabilistic Programs using Adversarial Compilation
We propose an amortized inference strategy for probabilistic programs that learns from past inferences to speed up future ones. The strategy trains neural guidance programs via a minimax game in which the probabilistic program serves as a correlation device. From a game-theoretic vantage point, the role of a correlation device is to enforce better outcomes by sharing information between players. In our case, the shared information is the execution trace, which is used to compute the payoffs in the minimax game.
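To give a concrete flavor of the minimax setup described above, the toy sketch below trains a guide against a discriminator whose classification of execution traces supplies the payoffs. Everything here is a simplifying assumption, not the paper's algorithm: the "probabilistic program" is collapsed to a single Gaussian latent, the guide learns only a mean parameter, and the discriminator is a one-feature logistic classifier updated with hand-derived gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Toy "probabilistic program": its execution trace is a single latent
# z ~ N(3, 1). (A stand-in; the paper's setting involves full traces.)
def run_program(n):
    return rng.normal(3.0, 1.0, size=n)

# Guide: proposes traces z = mu + eps, eps ~ N(0, 1); only mu is learned.
mu = 0.0

# Discriminator D(z) = sigmoid(a*z + b): tells program traces from guide
# traces; its output defines both players' payoffs in the minimax game.
a, b = 0.0, 0.0

lr_d, lr_g, batch = 0.05, 0.05, 128
for step in range(2000):
    real = run_program(batch)
    fake = mu + rng.normal(size=batch)

    # Discriminator ascent on  E[log D(real)] + E[log(1 - D(fake))].
    pr, pf = sigmoid(a * real + b), sigmoid(a * fake + b)
    a += lr_d * (np.mean((1 - pr) * real) - np.mean(pf * fake))
    b += lr_d * (np.mean(1 - pr) - np.mean(pf))

    # Guide ascent on  E[log D(fake)]  (non-saturating generator payoff).
    pf = sigmoid(a * fake + b)
    mu += lr_g * np.mean((1 - pf) * a)

# The guide's mean drifts toward the program's mean of 3.
print(round(mu, 2))
```

At equilibrium the guide's trace distribution is indistinguishable from the program's, which is what lets the trained guide stand in for inference on new observations; the hand-derived gradients are only viable because of the one-parameter setup.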