Below is the final schedule.

Main Workshop on September 9, 2018

Room: N1090 ZG, Building N1

Note: All ECCV workshops will take place at the Technische Universität München (TUM). Instructions to reach the venue can be found here.

1:00 - 1:20 pm   Lunch bags in the poster area
1:20 - 1:30 pm   Introduction
1:30 - 2:00 pm   Keynote
            Vision & Language, by Tamara Berg (UNC Chapel Hill, Shopagon)
2:00 - 2:15 pm   Oral Session 1
           Deep Video Color Propagation
             by Simone Meyer (ETH Zurich)
           Fashion is Taking Shape: Understanding Clothing Preference based on Body Shape from Online Sources
             by Hosnieh Sattar (Max Planck Institute for Informatics and Saarland University)
           Unsupervised Learning and Segmentation of Complex Activities from Video
             by Fadime Sener (University of Bonn)
2:15 - 2:45 pm   Keynote
           Adapting Neural Networks to New Tasks, by Svetlana Lazebnik (University of Illinois at Urbana-Champaign)
2:45 - 4:20 pm   Poster session and coffee break
4:20 - 4:50 pm   Keynote
           Explainable AI Models and Why We Need Them, by Kate Saenko (Boston University)
4:50 - 5:00 pm   Oral Session 2
           Tracking Extreme Climate Events
             by Sookyung Kim (Lawrence Livermore National Laboratory)
           A Deep Look into Hand Segmentation
             by Aisha Urooj (University of Central Florida)
5:00 - 5:50 pm   Panel session: Submit your anonymous questions here
           Tamara Berg (UNC Chapel Hill, Shopagon)
           Andrew Fitzgibbon (Microsoft HoloLens)
           Svetlana Lazebnik (University of Illinois at Urbana-Champaign)
           Kate Saenko (Boston University)
           Jitendra Malik (UC Berkeley)
5:50 - 6:00 pm   Closing remarks and prizes
7:00 pm      Apple WiCV Banquet (by invitation)

Keynote Talks

Keynote speakers will give technical talks about their research in computer vision.

Tamara Berg (UNC Chapel Hill, Shopagon)

Title: Vision & Language

Abstract: TBA

Bio: Tamara Berg received her B.S. in Mathematics and Computer Science from the University of Wisconsin, Madison in 2001. She then completed a PhD in Computer Science at the University of California, Berkeley in 2007 under the supervision of Professor David Forsyth, as a member of the Berkeley Computer Vision Group. Afterward, she spent one year as a research scientist at Yahoo! Research. From 2008 to 2013 she was an Assistant Professor in the Computer Science department at Stony Brook University and a core member of the consortium for Digital Art, Culture, and Technology (cDACT). She joined the Computer Science department at the University of North Carolina at Chapel Hill (UNC) in Fall 2013 and is currently a tenured Associate Professor. She is the recipient of an NSF CAREER Award, two Google Faculty Awards, the 2013 Marr Prize, and the 2016 UNC Hettleman Award.

Svetlana Lazebnik (University of Illinois at Urbana-Champaign)

Title: Adapting Neural Networks to New Tasks

Abstract: TBA

Bio: Svetlana Lazebnik received her Ph.D. from UIUC in May 2006 under the supervision of Prof. Jean Ponce. From August 2007 to December 2011 she was an assistant professor at the University of North Carolina at Chapel Hill, and in January 2012 she returned to UIUC as faculty. Her research specialty is computer vision. The main themes of her research include scene understanding, joint modeling of images and text, large-scale photo collections, and machine learning techniques for visual recognition problems. Current and former sources of support for her research include the National Science Foundation (under grants IIS 1718221, IIS 1563727, IIS 1228082, CIF 1302438, and IIS 0916829), a Microsoft Research Faculty Fellowship, Xerox University Affairs Committee Grants, the DARPA Computer Science Study Group, a Sloan Foundation Fellowship, a Google Research Award, ARO, and Adobe.

Kate Saenko (Boston University)

Title: Explainable AI Models and Why We Need Them

Abstract: TBA

Bio: Kate Saenko is an Associate Professor of Computer Science at Boston University and director of the Computer Vision and Learning Group. She is also a member of the IVC research group. Her past academic positions include Assistant Professor in the Computer Science Department at UMass Lowell, Postdoctoral Researcher at the International Computer Science Institute, Visiting Scholar at UC Berkeley EECS, and Visiting Postdoctoral Fellow in the School of Engineering and Applied Science at Harvard University. Her research interests are in the broad area of Artificial Intelligence, with a focus on Adaptive Machine Learning, Learning for Vision and Language Understanding, and Deep Learning.

Panel

Panelists will answer questions and discuss increasing diversity in computer vision.

Feel free to ask your anonymous questions here.

Tamara Berg (UNC Chapel Hill, Shopagon)

Bio: Tamara Berg received her B.S. in Mathematics and Computer Science from the University of Wisconsin, Madison in 2001. She then completed a PhD in Computer Science at the University of California, Berkeley in 2007 under the supervision of Professor David Forsyth, as a member of the Berkeley Computer Vision Group. Afterward, she spent one year as a research scientist at Yahoo! Research. From 2008 to 2013 she was an Assistant Professor in the Computer Science department at Stony Brook University and a core member of the consortium for Digital Art, Culture, and Technology (cDACT). She joined the Computer Science department at the University of North Carolina at Chapel Hill (UNC) in Fall 2013 and is currently a tenured Associate Professor. She is the recipient of an NSF CAREER Award, two Google Faculty Awards, the 2013 Marr Prize, and the 2016 UNC Hettleman Award.

Andrew Fitzgibbon (Microsoft HoloLens)

Bio: Andrew Fitzgibbon is a scientist with HoloLens at Microsoft, Cambridge, UK. He is best known for his work on 3D vision, having been a core contributor to the Emmy-award-winning 3D camera tracker “boujou” and Kinect for Xbox 360, but his interests are broad, spanning computer vision, graphics, machine learning, and occasionally a little neuroscience. He has published numerous highly-cited papers, and received many awards for his work, including ten “best paper” prizes at various venues, the Silver medal of the Royal Academy of Engineering, and the BCS Roger Needham award. He is a fellow of the Royal Academy of Engineering, the British Computer Society, and the International Association for Pattern Recognition. Before joining Microsoft in 2005, he was a Royal Society University Research Fellow at Oxford University, having previously studied at Edinburgh University, Heriot-Watt University, and University College, Cork.

Svetlana Lazebnik (University of Illinois at Urbana-Champaign)

Bio: Svetlana Lazebnik received her Ph.D. from UIUC in May 2006 under the supervision of Prof. Jean Ponce. From August 2007 to December 2011 she was an assistant professor at the University of North Carolina at Chapel Hill, and in January 2012 she returned to UIUC as faculty. Her research specialty is computer vision. The main themes of her research include scene understanding, joint modeling of images and text, large-scale photo collections, and machine learning techniques for visual recognition problems. Current and former sources of support for her research include the National Science Foundation (under grants IIS 1718221, IIS 1563727, IIS 1228082, CIF 1302438, and IIS 0916829), a Microsoft Research Faculty Fellowship, Xerox University Affairs Committee Grants, the DARPA Computer Science Study Group, a Sloan Foundation Fellowship, a Google Research Award, ARO, and Adobe.

Jitendra Malik (UC Berkeley)

Bio: Jitendra Malik was born in Mathura, India in 1960. He received the B.Tech degree in Electrical Engineering from the Indian Institute of Technology, Kanpur in 1980 and the PhD degree in Computer Science from Stanford University in 1985. In January 1986, he joined the University of California at Berkeley, where he is currently the Arthur J. Chick Professor in the Department of Electrical Engineering and Computer Sciences. He is also on the faculty of the Department of Bioengineering, and the Cognitive Science and Vision Science groups. During 2002-2004 he served as the Chair of the Computer Science Division, and as the Department Chair of EECS during 2004-2006 as well as 2016-2017. Since January 2018, he has also been Research Director and Site Lead of Facebook AI Research in Menlo Park.
Prof. Malik's research group has worked on many different topics in computer vision, computational modeling of human vision, computer graphics and the analysis of biological images. Several well-known concepts and algorithms arose in this research, such as anisotropic diffusion, normalized cuts, high dynamic range imaging, shape contexts and R-CNN. He has mentored more than 60 PhD students and postdoctoral fellows.
He received the gold medal for the best graduating student in Electrical Engineering from IIT Kanpur in 1980 and a Presidential Young Investigator Award in 1989. At UC Berkeley, he was selected for the Diane S. McEntyre Award for Excellence in Teaching in 2000 and a Miller Research Professorship in 2001. He received the Distinguished Alumnus Award from IIT Kanpur in 2008. His publications have received numerous best paper awards, including five test-of-time awards: the Longuet-Higgins Prize for papers published at CVPR (twice) and the Helmholtz Prize for papers published at ICCV (three times). He received the 2013 IEEE PAMI-TC Distinguished Researcher in Computer Vision Award, the 2014 K.S. Fu Prize from the International Association for Pattern Recognition, the 2016 ACM-AAAI Allen Newell Award, and the 2018 IJCAI Award for Research Excellence in AI. He is a fellow of the IEEE and the ACM. He is a member of the National Academy of Engineering and the National Academy of Sciences, and a fellow of the American Academy of Arts and Sciences.

Kate Saenko (Boston University)

Bio: Kate Saenko is an Associate Professor of Computer Science at Boston University and director of the Computer Vision and Learning Group. She is also a member of the IVC research group. Her past academic positions include Assistant Professor in the Computer Science Department at UMass Lowell, Postdoctoral Researcher at the International Computer Science Institute, Visiting Scholar at UC Berkeley EECS, and Visiting Postdoctoral Fellow in the School of Engineering and Applied Science at Harvard University. Her research interests are in the broad area of Artificial Intelligence, with a focus on Adaptive Machine Learning, Learning for Vision and Language Understanding, and Deep Learning.

Oral Presentations

The authors of a few accepted abstracts are invited to give oral presentations.

Presenter instructions: Each presentation should be a 4-minute talk followed by 1 minute of Q&A.


Accepted Orals

Simone Meyer (ETH Zurich) – Deep Video Color Propagation
Hosnieh Sattar (Max Planck Institute for Informatics and Saarland University) – Fashion is Taking Shape: Understanding Clothing Preference based on Body Shape from Online Sources
Fadime Sener (University of Bonn) – Unsupervised Learning and Segmentation of Complex Activities from Video
Sookyung Kim (Lawrence Livermore National Laboratory) – Tracking Extreme Climate Events
Aisha Urooj (University of Central Florida) – A Deep Look into Hand Segmentation

Poster Presentations

Authors of all accepted abstracts will present their work in a poster session.

Presenter instructions: Please note your poster number below to find your board. Posters should be set up within the first 10 minutes of the poster session. Posters must be in portrait orientation: the poster boards are 1.20 x 1.00 meters and cannot hold landscape posters.


Accepted Posters

1. Maryam Babaee (Chair of Human-Machine Communication, TU Munich, Germany) – Gait Energy Image Reconstruction from Degraded Gait Cycle Using Deep Learning
2. Noelia Vallez (UCLM) – Deep learning for the Eyes of Things platform
3. Clara Fernandez Labrador (University of Zaragoza) – Full 3D Layout Reconstruction from One Single 360º Image
4. Melani Sanchez (University of Zaragoza) – Smart Representation of Indoor Scenes under Simulated Prosthetic Vision
5. Rania Briq (University of Bonn) – Convolutional Simplex Projection Network for Weakly Supervised Semantic Segmentation
6. Farzaneh Mahdisoltani (University of Toronto) – Hierarchical Video Understanding
7. Alina Roitberg (KIT) – Towards Human Activity Recognition in Autonomous Vehicles
8. Míriam Bellver (Barcelona Supercomputing Center) – From Pixels to Object Sequences: Recurrent Semantic Instance Segmentation
9. Sara Elkerdawy (University of Alberta) – Fine-Grained Vehicle Classification with Unsupervised Parts Features Learning
10. Aina Ferrà Marcús (Universitat de Barcelona) – Multiple Wavelet Pooling for CNNs
11. Jhan Alarifi (Manchester Metropolitan University) – Automated Facial Wrinkles Annotator
12. Nazanin Mehrasa (Simon Fraser University) – Learning Trajectory Representations for Human Activity Analysis
13. Mengyao Zhai (Simon Fraser University) – Deep Learning of Appearance Models for Online Object Tracking
14. Nazre Batool (Scania CV AB) – Real-time Recognition of Turn and Brake Signals for Autonomous Urban Buses
15. Jingyuan Liu (Fudan University) – Deep Fashion Analysis with Feature Map Upsampling and Landmark-driven Attention
16. Marcella Cornia (University of Modena and Reggio Emilia) – Towards Cycle-Consistent Models for Text and Image Retrieval
17. Antitza Dantcheva (INRIA) – From attribute-labels to faces: text based face generation using a conditional generative adversarial network
18. Bingbin Liu (Stanford University) – Temporal Modular Networks for Retrieving Complex Compositional Activities in Video
19. Kanami Yamagishi (Waseda University) – How Do Computers See Makeup?: Investigating the Effects of Makeup on 3D Facial Reconstruction
20. Simone Meyer (ETH Zurich) – Deep Video Color Propagation
21. Hosnieh Sattar (Max Planck Institute for Informatics and Saarland University) – Fashion is Taking Shape: Understanding Clothing Preference based on Body Shape from Online Sources
22. Jadisha Ramirez Cornejo (University of Campinas) – Dynamic Facial Expression Recognition by means of Visual Rhythm and Motion History Image
23. Sandra Aigner (Technical University of Munich) – FakeFutureGAN: Generating the Future using Spatio-Temporal 3d Convolutions in Progressively Growing GANs
24. Obioma Pelka (University of Applied Sciences and Arts Dortmund) – Optimizing Body Region Classification With Deep Convolutional Activation Features
25. Uldanay Bairam (NTNU) – Highlight removal from fruit images for improved quality control and digital phenotyping
26. Fadime Sener (University of Bonn) – Unsupervised Learning and Segmentation of Complex Activities from Video
27. Leissi Margarita Castaneda Leon (University of São Paulo) – Efficient Interactive Multi-Object Segmentation in Medical Images
28. Michelle Guo (Stanford University) – Neural Graph Matching Networks for Fewshot 3D Action Recognition
29. Deniz Engin (Istanbul Technical University) – Face Frontalization for Cross-Pose Facial Expression Recognition
30. Michelle Guo (Stanford University) – Focus on the Hard Things: Dynamic Task Prioritization for Multitask Learning
31. Amanda Duarte (Universitat Politecnica de Catalunya) – Cross-modal Embeddings for Video and Audio Retrieval
32. Sookyung Kim (Lawrence Livermore National Laboratory) – Tracking Extreme Climate Events
33. Pallabi Ghosh (University of Maryland) – Understanding Center Loss Based Network for Image Retrieval with Few Training Data
34. Yasemin Timar (Bogazici University) – Stylistic Features in Affective Movie Analysis
35. Aisha Urooj (University of Central Florida) – A Deep Look into Hand Segmentation
36. Rafia Rahim (National University of Computer and Emerging Sciences, Islamabad) – End-to-End Trained CNN Encoder-Decoder Networks for Image Steganography
37. Shreya Hasmukh Patel (IIT Jodhpur) – Cancellable knuckle template generation based on LBP-CNN
38. Ruba Alkadi (Khalifa University) – A 2.5D Deep Learning-based Approach For Prostate Cancer Detection on T2-weighted Magnetic Resonance Imaging
39. Geethu Jacob (Indian Institute of Technology, Chennai) – GreenWarps: A Two-Stage Warping Model for Stitching Images using Diffeomorphic Meshes and Green Coordinates
40. Anjali Chadha (Texas A&M University) – Enabling Pedestrian Safety using Computer Vision Techniques: A Case Study of the 2018 Uber Inc. Self-driving Car Crash

Mentoring Dinner on September 9, 2018

by invitation only

Dinner sponsored by Apple

The dinner event is an opportunity to meet other female computer vision researchers. Poster presenters will be matched with senior computer vision researchers to share experience and career advice. Invitees will receive an e-mail and be asked to confirm attendance.


Dinner speakers

Megan Maher (Apple)

Bio: Megan Maher is an AI researcher at Apple. Her research interests include machine learning, computer vision, and robotics. She received her Bachelor’s degree from Bowdoin College in Brunswick, Maine, where she majored in computer science and mathematics, achieving the highest distinction in the computer science program. Before joining Apple, she was a co-captain of Bowdoin’s RoboCup team, the Northern Bites, which placed internationally as one of the only all-undergraduate teams in the Standard Platform League. Pursuing her passion to support underrepresented groups in STEM, she mentors in the local community and serves on a leadership team to support women at Apple. She is a recipient of various research awards including the Clare Booth Luce Fellowship, the Alan B. Tucker Computer Science Research Award, the Maine Space Grant Consortium Summer Fellowship, and the National Science Foundation Summer Fellowship.

Tinne Tuytelaars (KU Leuven)

Bio: Tinne Tuytelaars received a Master of Electrical Engineering from the K.U. Leuven, Belgium in 1996. Since her graduation, she has been working at the VISICS lab within ESAT-PSI of the Katholieke Universiteit Leuven, where she defended her PhD, entitled "Local Invariant Features for Registration and Recognition", on December 19, 2000. During most of 2006 and early 2007, she was also a part-time (20%) visiting scientist at the LEAR group of INRIA in Grenoble. In the summer of 2008, she visited the Making Sense from Data group at NICTA in Canberra, Australia, and in the summer of 2010 she visited Trevor Darrell's group at ICSI/EECS UC Berkeley. Since October 2008, she has been appointed research professor (BOF-ZAP) at the KU Leuven.

Raquel Urtasun (University of Toronto, Uber ATG)

Bio: Raquel Urtasun is the Head of Uber ATG Toronto. She is also an Associate Professor in the Department of Computer Science at the University of Toronto, a Canada Research Chair in Machine Learning and Computer Vision, and a co-founder of the Vector Institute for AI. She received her Ph.D. degree from the Computer Science department at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in 2006 and did her postdoc at MIT and UC Berkeley. She is a world-leading expert in machine perception for self-driving cars. Her research interests include machine learning, computer vision, robotics and remote sensing. Her lab was selected as an NVIDIA NVAIL lab. She is a recipient of an NSERC EWR Steacie Award, an NVIDIA Pioneers of AI Award, a Ministry of Education and Innovation Early Researcher Award, three Google Faculty Research Awards, an Amazon Faculty Research Award, a Connaught New Researcher Award, and two Best Paper Runner-Up Prizes awarded at CVPR in 2013 and 2017, respectively.