This website provides materials, exercises and further readings for the lectures "Probabilistic Graphical Models" and "Deep Learning".


## Probabilistic Graphical Models

**Lecture Slides**

- Probabilistic Graphical Models: graphical_models.pdf (5 MB)

**Python Code**

Jupyter Python notebooks for the vehicle localization example from the lecture:

- Belief Propagation: bp.ipynb
- Sum-Product Belief Propagation: bp_sum_product.ipynb | results
- Max-Product Belief Propagation: bp_max_product.ipynb | results

Note: You can either look at the results of the Jupyter notebooks directly in your browser or execute and modify them yourself. To run a notebook, download bp.ipynb, install Jupyter (see the Jupyter documentation) and execute the code. If you don't want to install Jupyter, you can also visit https://try.jupyter.org/, upload the notebook and run the Python interpreter directly in your browser.
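The notebooks themselves carry the full implementation; as a rough, self-contained sketch of the sum-product updates on a chain-structured factor graph (the simplest shape of the vehicle localization model), something like the following NumPy snippet could be used. All names and numbers here are illustrative, not taken from the notebooks.

```python
import numpy as np

def chain_marginals(unary, pairwise):
    """Sum-product belief propagation on a chain of n discrete variables.

    unary:    (n, K) array of non-negative unary factor values
    pairwise: (K, K) array, shared pairwise factor between neighbours
    Returns the (n, K) normalized marginals.
    """
    n, K = unary.shape
    fwd = np.ones((n, K))   # messages passed left-to-right
    bwd = np.ones((n, K))   # messages passed right-to-left
    for i in range(1, n):
        fwd[i] = (fwd[i - 1] * unary[i - 1]) @ pairwise
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise @ (bwd[i + 1] * unary[i + 1])
    beliefs = fwd * unary * bwd
    return beliefs / beliefs.sum(axis=1, keepdims=True)
```

For the localization example, `unary` would hold the per-time-step lane evidence and `pairwise` a transition factor favouring staying in the same lane; replacing the sums with maxima in the message updates yields the max-product variant.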

**Exercise**

- Modify the Python program bp.ipynb for a vehicle localization scenario with 4 lanes.
- Modify the Python program bp.ipynb to localize two vehicles simultaneously. Introduce a new set of random variables representing the second vehicle and change the factor graph accordingly. Add pairwise factors that penalize collisions between the two vehicles.
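For the second exercise, the collision penalty can be expressed as an extra pairwise factor connecting the lane variables of the two vehicles at each time step. One possible shape for such a factor (the function name and the penalty value are illustrative assumptions, not part of the exercise material):

```python
import numpy as np

def collision_factor(num_lanes, penalty=0.01):
    """Pairwise factor over the lanes of two vehicles at one time step.

    Combinations where the vehicles occupy different lanes get weight 1.0;
    the diagonal (both vehicles in the same lane, i.e. a collision) is
    down-weighted by `penalty`.
    """
    f = np.ones((num_lanes, num_lanes))
    np.fill_diagonal(f, penalty)
    return f
```

Attaching `collision_factor(4)` between the lane variables of the two vehicles at the same time step makes same-lane configurations strongly disfavoured while leaving all other lane combinations untouched.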

**Literature**

- J. Pearl: Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach. AAAI, 1982.
- D. Barber: Bayesian Reasoning and Machine Learning. Cambridge University Press, 2012.
- S. Nowozin and C. Lampert: Structured Learning and Prediction in Computer Vision. Foundations and Trends in Computer Graphics and Vision, 2011.

**Further Readings**

- Graphical Models in Computer Vision class (SS 2016): http://cv.is.tue.mpg.de/
- Structured Prediction Tutorial by Sebastian Nowozin and Christoph Lampert: http://www.nowozin.net/sebastian/cvpr2011tutorial/
- Video lecture by Sam Roweis on probabilistic graphical models: http://videolectures.net/mlss05au_roweis_pgm/

## Deep Learning

**Lecture Slides**

- Deep Learning: deep_learning.pdf (5 MB)

**Exercise**

- Consider the logistic regression setup: a neural network with 2 input variables, 1 output variable and no hidden layers. Calculate the derivative required for implementing gradient descent with respect to the model parameters. Assume a sigmoid activation function.
- Implement a simple gradient descent algorithm for learning the model parameters. Train your model on the dataset below. Visualize the decision boundary.

  | x1     | x2     | y |
  |--------|--------|---|
  | 2.7810 | 2.5505 | 0 |
  | 1.4654 | 2.3621 | 0 |
  | 3.3965 | 4.4002 | 0 |
  | 1.3880 | 1.8502 | 0 |
  | 3.0640 | 3.0053 | 0 |
  | 7.6275 | 2.7592 | 1 |
  | 5.3324 | 2.0886 | 1 |
  | 6.9225 | 1.7710 | 1 |
  | 8.6754 | 0.2420 | 1 |
  | 7.6737 | 3.5085 | 1 |

- Add two hidden layers with 10 neurons, calculate the gradient update equations and implement the gradient descent algorithm. Visualize the decision boundary. What do you observe?
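One possible solution sketch for the first two items: with a sigmoid output ŷ = σ(w·x + b) and the cross-entropy loss, the gradient with respect to the weights simplifies to (ŷ − y)·x, so batch gradient descent on the dataset above looks roughly as follows (the learning rate and iteration count are arbitrary choices, not prescribed by the exercise):

```python
import numpy as np

# Dataset from the exercise: columns x1, x2, label y.
data = np.array([
    [2.7810, 2.5505, 0],
    [1.4654, 2.3621, 0],
    [3.3965, 4.4002, 0],
    [1.3880, 1.8502, 0],
    [3.0640, 3.0053, 0],
    [7.6275, 2.7592, 1],
    [5.3324, 2.0886, 1],
    [6.9225, 1.7710, 1],
    [8.6754, 0.2420, 1],
    [7.6737, 3.5085, 1],
])
X, y = data[:, :2], data[:, 2]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# With cross-entropy loss and a sigmoid output, the gradient w.r.t. the
# weights reduces to (y_hat - y) * x, which is what the update below uses.
w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(1000):
    y_hat = sigmoid(X @ w + b)
    err = y_hat - y
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
```

The learned decision boundary is the line w1·x1 + w2·x2 + b = 0; plotting it over the ten points (e.g. with matplotlib) completes the visualization part of the exercise.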

**Literature**

- D. Rumelhart, G. Hinton and R. Williams: Learning representations by back-propagating errors. Nature, 1986.
- Y. LeCun, L. Bottou, Y. Bengio and P. Haffner: Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
- M. Zeiler and R. Fergus: Visualizing and Understanding Convolutional Networks. ECCV, 2014.
- K. He, X. Zhang, S. Ren and J. Sun: Deep Residual Learning for Image Recognition. CVPR, 2016.
- O. Vinyals, A. Toshev, S. Bengio and D. Erhan: Show and Tell: A Neural Image Caption Generator. CVPR, 2015.

**Further Readings**

- Fortune article about AI and deep learning: http://fortune.com/ai-artificial-intelligence-deep-machine-learning/
- Very good tutorial on deep learning from Stanford, which also covers the basics (e.g., logistic regression): http://ufldl.stanford.edu/tutorial/
- Great video lectures on deep learning by Hugo Larochelle on YouTube: https://www.youtube.com/playlist?list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH
- Train your own CNN on MNIST in your web browser (by Andrej Karpathy): http://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html
- Nice visualization of CNNs in MATLAB by MIT: http://vision03.csail.mit.edu/cnn_art/
- drawCNN: online visualization of CNNs: http://people.csail.mit.edu/torralba/research/drawCNN/drawNet.html

## Graphical Models & Deep Learning

**Literature**

- G. Hinton and R. Salakhutdinov: Reducing the Dimensionality of Data with Neural Networks. Science, 2006, Vol. 313, no. 5786, pp. 504--507.
- D. Kingma and M. Welling: Auto-Encoding Variational Bayes. ICLR, 2014.
- L. Chen, A. Schwing, A. Yuille and R. Urtasun: Learning Deep Structured Models. ICML, 2015.
- J. Domke: Learning graphical model parameters with approximate marginal inference. PAMI, 2013, Vol. 35, no. 10, pp. 2454--2467.

**Further Readings**

- Raquel Urtasun's ICLR 2016 tutorial on incorporating structure into deep learning: http://videolectures.net/iclr2016_urtasun_incoporating_structure/
- Raquel Urtasun's Vision and Sports Summer School 2015 slides on deep structured models: http://www.cs.toronto.edu/~urtasun/deep_structured_sports_small.pdf