Tue 9 Jan 2018 10:12 - 10:15 at Bradbury - POSTER SESSION (14 posters - not talks)

The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation. Building on two basic abstractions, it offers flexible building blocks for probabilistic computation. Distributions provide fast, numerically stable methods for generating samples and computing statistics, e.g., log density. Bijectors provide composable volume-tracking transformations with automatic caching. Together these enable modular construction of high dimensional distributions and transformations not possible with previous libraries (e.g., pixelCNNs, autoregressive flows, and reversible residual networks). They are the workhorse behind deep probabilistic programming systems like Edward and empower fast black-box inference in probabilistic models built on deep-network components. TensorFlow Distributions has proven an important part of the TensorFlow toolkit within Google and in the broader deep learning community.
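
A minimal sketch of the two abstractions the abstract describes, assuming the tensorflow_probability package (the current home of the library); the aliases tfd and tfb are illustrative only, not part of this program listing:

    import tensorflow_probability as tfp

    tfd = tfp.distributions   # Distribution objects: sampling + statistics
    tfb = tfp.bijectors       # Bijector objects: volume-tracking transforms

    # Distribution: fast sampling and numerically stable log density.
    normal = tfd.Normal(loc=0., scale=1.)
    x = normal.sample(5)            # draw 5 samples
    log_px = normal.log_prob(x)     # log density at those samples

    # Bijector: pushing the base Normal through Exp yields a log-normal-style
    # distribution; log_prob automatically includes the log-det-Jacobian term.
    log_normal = tfd.TransformedDistribution(distribution=normal,
                                             bijector=tfb.Exp())
    y = log_normal.sample(5)
    log_py = log_normal.log_prob(y)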

Tue 9 Jan

Displayed time zone: Tijuana, Baja California

10:00 - 10:30
POSTER SESSION (14 posters - not talks) PPS at Bradbury
10:00
2m
Talk
Probabilistic Programming for Robotics
PPS
Nils Napp SUNY at Buffalo, Marco Gaboardi University at Buffalo, SUNY
10:02
2m
Talk
Game Semantics for Probabilistic Programs
PPS
C.-H. Luke Ong University of Oxford, Matthijs Vákár University of Oxford
10:04
2m
Talk
Interactive Writing and Debugging of Bayesian Probabilistic Programs
PPS
Javier Burroni, Arjun Guha University of Massachusetts Amherst, David Jensen University of Massachusetts Amherst
Pre-print
10:06
2m
Talk
Deep Amortized Inference for Probabilistic Programs using Adversarial Compilation
PPS
10:08
2m
Talk
Comparing the speed of probabilistic processes
PPS
Mathias Ruggaard Pedersen Aalborg University, Nathanaël Fijalkow Alan Turing Institute, Giorgio Bacci Aalborg University, Kim Larsen Aalborg University, Radu Mardare Aalborg University
10:10
2m
Talk
Using Reinforcement Learning for Probabilistic Program Inference
PPS
Avi Pfeffer Charles River Analytics
10:12
2m
Talk
TensorFlow Distributions
PPS
Link to publication Pre-print
10:15
2m
Talk
Constructive probabilistic semantics with non-spatial locales
PPS
Benjamin Sherman Massachusetts Institute of Technology, USA, Jared Tramontano Massachusetts Institute of Technology, Michael Carbin MIT
Pre-print
10:17
2m
Talk
Using probabilistic programs as proposals
PPS
Marco Cusumano-Towner MIT-CSAIL, Vikash Mansinghka Massachusetts Institute of Technology
10:19
2m
Talk
Probabilistic Program Equivalence for NetKAT
PPS
Steffen Smolka Cornell University, David Kahn Cornell University, Praveen Kumar Cornell University, Nate Foster Cornell University, Dexter Kozen, Alexandra Silva University College London
Link to publication File Attached
10:21
2m
Talk
Reasoning about Divergences via Span-liftings
PPS
Tetsuya Sato University at Buffalo, SUNY, USA
10:23
2m
Talk
Probabilistic Models for Assured Position, Navigation and Timing
PPS
Andres Molina-Markham The MITRE Corporation
10:25
2m
Talk
The Support Method of Computing Expectations
PPS
Avi Pfeffer Charles River Analytics
10:27
2m
Talk
Combining static and dynamic optimizations using closed-form solutions
PPS
Daniel Lundén KTH Royal Institute of Technology, David Broman KTH Royal Institute of Technology, Lawrence M. Murray Uppsala University