University of Massachusetts Amherst

Deep Learning Whiteboard Talks

DS Tea
October 4, 4:00pm

This week's event will focus on deep learning, in the format of whiteboard presentations. Presenters will be at the whiteboards describing their latest ideas and recent work in deep learning. Multiple presentations will run at the same time, similar to a poster session. It will be a fun, discussion-based Data Science Tea where students exchange ideas: a great forum for presenters to get feedback and for the audience to ask questions about deep learning. Students can bring their own work to the event and present on the spot! Alternatively, let me know by email if you are interested in presenting.

What: tea, refreshments, presentations and conversations about topics in data science
Event: Deep Learning Whiteboard Talks
When: 4-5 pm, October 4
Where: Computer Science Building, Rooms 150 & 151
Who: You!  Especially MS & PhD students and faculty interested in data science.

The presenters include:

James Atwood (PhD Student advised by Prof. Don Towsley)
Diffusion-Convolutional Neural Networks 

Abstract: We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data.  Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on the GPU.  Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks. 
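The diffusion-convolution operation described in the abstract can be expressed as a few tensor operations. The sketch below is an illustrative reading of that idea (a degree-normalized transition matrix, its power series applied to node features, and an elementwise weighting followed by a nonlinearity); the function name, the choice of tanh, and the 1..H hop range are assumptions, not the authors' exact formulation.

```python
import numpy as np

def diffusion_convolution(A, X, n_hops, W):
    """Sketch of a diffusion-convolution activation.

    A: (n, n) adjacency matrix of the graph.
    X: (n, f) node feature matrix.
    W: (n_hops, f) learned weights, one row per diffusion hop.
    Returns (n, n_hops, f) diffusion-based node representations.
    """
    # Degree-normalized transition matrix: row i gives the
    # one-step diffusion probabilities from node i.
    P = A / A.sum(axis=1, keepdims=True)
    # Power series P^1 .. P^n_hops applied to the features,
    # stacked along a new "hop" axis -> shape (n, n_hops, f).
    Pt = np.stack(
        [np.linalg.matrix_power(P, j + 1) @ X for j in range(n_hops)],
        axis=1,
    )
    # Elementwise weighting by W (broadcast over nodes) and nonlinearity.
    return np.tanh(W[None, :, :] * Pt)
```

The per-node output could then feed a dense layer for node classification; because the representation depends only on the diffusion structure, relabeling the nodes (an isomorphism) permutes the rows without changing them.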

Daniel Cohen (MS/PhD Student advised by Prof. Bruce Croft)
Memory Networks for First Sentence Detection 

Abstract: In passage retrieval, detecting the beginning and end of a passage at sentence-level granularity is a challenging problem. Because each sentence has a varying level of relevance, we employ a deep memory network to capture long-term dependencies that a variety of RNNs fail to recognize.

Huaizu Jiang (PhD Student advised by Prof. Erik Learned-Miller)
Learning to predict object occlusions in videos

Abstract: Given the first two frames of a video containing a set of moving objects, the goal is to predict where they are going (locations) and their visibility (fully visible, partially visible, or fully invisible) in the next frame. Due to the perspective effect of the camera, an object closer to the camera (and thus more likely to be visible) tends to be larger and to move faster. In this project, we are interested in training Convolutional Neural Networks (CNNs) to learn such common-sense knowledge. Additional information, such as appearance and shading, may also provide cues for predicting object occlusions.