Dan Rosenbaum

I am an assistant professor in the Department of Computer Science at the University of Haifa, working on machine learning and computer vision.

Before joining the University of Haifa I was a research scientist at DeepMind (2016-2021). I completed my PhD at the Hebrew University of Jerusalem in 2016, advised by Yair Weiss, studying generative models for low-level vision problems. [thesis]


I am interested in computational models of vision and 3D scene understanding, and in using generative approaches that model vision as an inverse problem.

I co-organised a workshop at NeurIPS 2019 titled “Perception as Generative Reasoning: Structure, Causality, Probability”.
See the workshop website for all papers, invited talks and videos.


Research

Functa: data as neural fields - In these two papers (arXiv, arXiv) we explore the representation of data points such as images, manifolds, 3D shapes and scenes using neural fields (a.k.a. implicit neural representations). Many standard data representations are a discretization of an underlying continuous signal, and can be more efficiently modeled as functions. We develop a method to map samples of datasets to a functional representation, and demonstrate the benefits of training generative models or classifiers on this representation.
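
As a toy illustration of the data-as-functions idea (a minimal sketch, not code from the papers; the names NeuralField and fit_image and all hyperparameters are illustrative assumptions), a small coordinate MLP can be fit so that a single image is stored as a function from pixel locations to colours:

```python
# Minimal sketch: represent one image as a neural field, i.e. an MLP that
# maps 2D pixel coordinates in [0, 1]^2 to RGB values.
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords):
        return self.net(coords)

def fit_image(image, steps=1000, lr=1e-3):
    """image: tensor of shape (H, W, 3) with values in [0, 1]."""
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(0, 1, h),
                            torch.linspace(0, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (H*W, 2)
    targets = image.reshape(-1, 3)                          # (H*W, 3)

    field = NeuralField()
    opt = torch.optim.Adam(field.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((field(coords) - targets) ** 2).mean()      # pixel reconstruction loss
        loss.backward()
        opt.step()
    return field  # the image is now stored in the MLP's weights
```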


Dynamic Protein Structure - Understanding the 3D structure of proteins is a fundamental problem in biology, with the potential to unlock a better understanding of the various functions of proteins in biological mechanisms and to accelerate drug discovery. I am studying models of protein structure that explicitly reason in 3D space, predicting structure using probabilistic inference methods. In this work (arXiv) we propose an inverse graphics approach based on VAEs to model the distribution of protein 3D structure in atom space, using cryo-EM image data.
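
A heavily simplified sketch of the inverse-graphics idea (illustrative only, not the paper's model: it ignores pose, noise and imaging physics, and the names AtomDecoder and render are hypothetical): a decoder maps a latent vector to 3D atom positions, and a toy differentiable projection turns them into a 2D image that can be compared to an observed cryo-EM image.

```python
# Minimal sketch: latent vector -> 3D atom positions -> toy 2D projection.
import torch
import torch.nn as nn

class AtomDecoder(nn.Module):
    """Decode a latent vector into 3D coordinates for a fixed set of atoms."""
    def __init__(self, latent_dim=32, num_atoms=64):
        super().__init__()
        self.num_atoms = num_atoms
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_atoms * 3),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.num_atoms, 3)

def render(atoms, size=32, sigma=0.05):
    """Toy orthographic projection: drop the z axis and splat Gaussians onto a grid."""
    grid = torch.linspace(-1, 1, size)
    gy, gx = torch.meshgrid(grid, grid, indexing="ij")
    px = gx.reshape(1, 1, -1)          # (1, 1, size*size)
    py = gy.reshape(1, 1, -1)
    ax = atoms[..., 0:1]               # (B, num_atoms, 1)
    ay = atoms[..., 1:2]
    d2 = (px - ax) ** 2 + (py - ay) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2)).sum(dim=1).view(-1, size, size)

# A VAE-style loss would combine ((render(decoder(z)) - image) ** 2).mean()
# with a KL term on z; the encoder is omitted here for brevity.
```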


3D scene understanding with the Generative Query Network (GQN) - In this paper (Science) we show how implicit scene understanding can emerge from training a model to predict novel views of random 3D scenes (video). In a follow-up paper (arXiv) we extend the model to use attention over image patches, improving its capacity to model rich environments like Minecraft. We also study the camera pose estimation problem, comparing inference with a generative model to a direct discriminative approach (video, datasets).
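
A bare-bones sketch of the prediction setup (illustrative only, far simpler than the actual GQN generator; SceneEncoder, ViewDecoder, the 7-dimensional pose and the 16x16 output are assumptions): context views and their camera poses are encoded, summed into a scene representation, and decoded together with a query pose into a predicted view.

```python
# Minimal sketch: aggregate context (image, pose) pairs, predict the query view.
import torch
import torch.nn as nn

class SceneEncoder(nn.Module):
    """Encode one (image, camera pose) pair into a representation vector."""
    def __init__(self, pose_dim=7, rep_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64 + pose_dim, rep_dim)

    def forward(self, image, pose):
        feat = self.conv(image).flatten(1)                   # (B, 64)
        return self.fc(torch.cat([feat, pose], dim=-1))

class ViewDecoder(nn.Module):
    """Predict a 16x16 view from the scene representation and a query pose."""
    def __init__(self, pose_dim=7, rep_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rep_dim + pose_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 16 * 16),
        )

    def forward(self, rep, query_pose):
        out = self.net(torch.cat([rep, query_pose], dim=-1))
        return out.view(-1, 3, 16, 16)

def predict_view(context_images, context_poses, query_pose, enc, dec):
    # Summing per-view representations makes the aggregate permutation invariant.
    rep = sum(enc(img, pose) for img, pose in zip(context_images, context_poses))
    return dec(rep, query_pose)
```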


Neural processes - We introduce conditional neural processes (arXiv) and neural processes (arXiv), models trained to predict values of functions given a context of observed function evaluations. These models provide a general framework for dealing with uncertainty, demonstrating fast adaptation and allowing a smooth transition between a prior model that is not conditioned on any data and flexible posterior models that can be conditioned on more and more data. In follow-up work we extend the model with an attention mechanism over context points (arXiv) and study different training objectives (pdf).
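
A minimal sketch of a conditional neural process (an illustration under an assumed architecture and dimensions, not the published implementation): each context pair is encoded, the encodings are aggregated with a permutation-invariant mean, and a decoder maps the aggregate plus each target input to a predictive Gaussian.

```python
# Minimal sketch of a conditional neural process (CNP).
import torch
import torch.nn as nn

class CNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, hidden=128):
        super().__init__()
        # Encoder maps each (x, y) context pair to a representation vector.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        # Decoder maps (aggregated representation, target x) to the mean and
        # log standard deviation of a Gaussian over y.
        self.decoder = nn.Sequential(
            nn.Linear(hidden + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * y_dim),
        )

    def forward(self, x_context, y_context, x_target):
        # x_context: (B, C, x_dim), y_context: (B, C, y_dim), x_target: (B, T, x_dim)
        rep = self.encoder(torch.cat([x_context, y_context], dim=-1))
        rep = rep.mean(dim=1, keepdim=True)            # permutation-invariant aggregate
        rep = rep.expand(-1, x_target.shape[1], -1)    # one copy per target point
        out = self.decoder(torch.cat([rep, x_target], dim=-1))
        mean, log_std = out.chunk(2, dim=-1)
        return mean, log_std.exp()

# Training maximizes the log likelihood of target y values under the predicted
# Gaussians, with random context/target splits drawn for each function.
```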


Contact

Dan Rosenbaum danro@cs.haifa.ac.il

twitter.com/danrsm