Current Topics in Natural Language Processing (WS 2016)


Deep learning is a branch of machine learning in which neural networks consisting of multiple layers have shown strong new generalization capabilities. The seminar will look at advances both in general deep learning approaches and in the specific case of Neural Machine Translation (NMT). NMT is a new paradigm in data-driven machine translation: the entire translation process is posed as an end-to-end supervised classification problem, where the training data consists of pairs of sentences and the full sequence-to-sequence task is handled by a single model.
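To make the sequence-to-sequence framing concrete, here is a minimal, hypothetical sketch of an encoder-decoder forward pass in pure Python. All names, sizes, and weights below are illustrative assumptions, not any system from the seminar papers: real NMT models train these parameters end-to-end and add attention, large vocabularies, and beam search.

```python
import math
import random

random.seed(0)
V, H = 6, 8  # toy target-vocabulary size and hidden-state size (assumptions)

def mat(rows, cols):
    """A small randomly initialised weight matrix (stand-in for trained weights)."""
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

E = mat(V, H)      # source-token embeddings
W_enc = mat(H, H)  # encoder recurrence weights
W_dec = mat(H, H)  # decoder recurrence weights
W_out = mat(H, V)  # projection from hidden state to target vocabulary

def matvec(W, v):
    """Multiply a (rows x len(v)) matrix by a vector."""
    return [sum(W[i][j] * v[j] for j in range(len(v))) for i in range(len(W))]

def encode(src_ids):
    """Fold the whole source sentence into one fixed-size hidden vector."""
    h = [0.0] * H
    for tok in src_ids:
        h = [math.tanh(E[tok][k] + x) for k, x in enumerate(matvec(W_enc, h))]
    return h

def decode(h, max_len=5):
    """Greedily emit target-token ids, one classification step at a time."""
    out = []
    for _ in range(max_len):
        h = [math.tanh(x) for x in matvec(W_dec, h)]
        logits = [sum(h[k] * W_out[k][j] for k in range(H)) for j in range(V)]
        out.append(max(range(V), key=lambda j: logits[j]))
    return out

translation = decode(encode([1, 4, 2]))  # toy "translation" of a 3-token source
```

With untrained weights the output ids are meaningless; the point is only the shape of the computation: encode a variable-length source into a vector, then classify one target token per step.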

Here is a link to last semester's seminar.

There is a Munich interest group for Deep Learning, which has an associated mailing list (initially organized by David Kaumanns).


Alexander Fraser

CIS, LMU Munich

Hinrich Schütze

CIS, LMU Munich


Thursdays 14:30 s.t., location is C105 (CIS Besprechungsraum, the CIS meeting room).

Click here for directions to CIS.


Date Paper Links Discussion Leader
Thursday, October 6th Dan Gillick, Cliff Brunk, Oriol Vinyals, Amarnag Subramanya (2016). Multilingual Language Processing From Bytes. HLT-NAACL 2016. paper Hinrich Schütze
Thursday, October 13th Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, Hang Li (2016). Modeling Coverage for Neural Machine Translation. ACL 2016 paper Tsuyoshi Okita
Thursday, October 20th Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, Pavel Kuksa (2011). Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research 2011. paper, focus on sections 1,3,7,8 Hinrich Schütze
Thursday, October 27th Yonghui Wu (and many others). Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv. paper Matthias Huck
Thursday, November 10th Ivan Vulić and Anna Korhonen (2016). On the Role of Seed Lexicons in Learning Bilingual Word Embeddings. ACL 2016 paper Viktor Hangya
Thursday, November 17th Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Knowledge Discovery and Data Mining (KDD) 2016. paper Hinrich Schütze
Thursday, November 24th Melvin Johnson, Mike Schuster et al. (2016). Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. arXiv. paper Matthias Huck
Thursday, December 1st Yftah Ziser and Roi Reichart (2016). Neural Structural Correspondence Learning for Domain Adaptation. arXiv. *AND* Ira Leviant and Roi Reichart (2015). Separated by an Un-common Language: Towards Judgment Language Informed Vector Space Modeling. arXiv. paper paper2 Roi Reichart
Thursday, December 8th Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu (2016). Neural Machine Translation in Linear Time. arXiv. paper Valentin Deyringer
Thursday, December 15th Kazuma Hashimoto and Yoshimasa Tsuruoka (2016). Adaptive Joint Learning of Compositional and Non-Compositional Phrase Embeddings. ACL 2016. paper Eva Maria Vecchi
Thursday, December 22nd Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, Kilian Weinberger (2016). Adversarial Deep Averaging Networks for Cross-Lingual Sentiment Classification. arXiv. paper Fabienne Braune
Thursday, January 12th Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, Michael Collins (2016). Globally Normalized Transition-Based Neural Networks. ACL 2016. paper Heike Adel
Thursday, January 19th Jason Lee, Kyunghyun Cho, Thomas Hofmann (2016). Fully Character-Level Neural Machine Translation without Explicit Segmentation. arXiv. paper Alex Fraser
Thursday, January 26th NO MEETING
Thursday, February 2nd Junyoung Chung, Sungjin Ahn, Yoshua Bengio (2016). Hierarchical Multiscale Recurrent Neural Networks. arXiv. paper Also: ICLR 2017 reviews Helmut Schmid
Thursday, February 9th Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier (2016). Language Modeling with Gated Convolutional Networks. arXiv. paper Ben Roth
Thursday, February 16th Maria Nadejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, Alexandra Birch (2017). Syntax-aware Neural Machine Translation Using CCG. arXiv. paper Matthias Huck
Thursday, February 23rd W. James Murdoch, Arthur Szlam (2017). Automatic Rule Extraction from Long Short Term Memory Networks. arXiv. paper Heike Adel
Thursday, March 2nd Angeliki Lazaridou, Georgiana Dinu, Marco Baroni (2015). Hubness and Pollution: Delving into Cross-Space Mapping for Zero-Shot Learning. ACL 2015. paper Fabienne Braune
Thursday, March 16th Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, Lukasz Kaiser (2016). Multi-task Sequence to Sequence Learning. ICLR 2016. paper Matthias Huck
Thursday, March 30th Shonosuke Ishiwatari, Nobuhiro Kaji, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa (2015). Accurate Cross-lingual Projection between Count-based Word Vectors by Exploiting Translatable Context Pairs. CoNLL 2015. *AND* Shonosuke Ishiwatari, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa (2016). Instant Translation Model Adaptation by Translating Unseen Words in Continuous Vector Space. CICLing 2016. Fabienne Braune
Thursday, April 20th Katharina Kann and Yadollah Yaghoobzadeh will present ongoing work (for the paper, contact Alex Fraser)

Further literature:

Please click here for an NMT reading list; also see the more general RNN reading list here (scroll down).