

Kenneth Kin-Man Lam

The Hong Kong Polytechnic University

Title: Current and Future Research on Face Recognition: From Low-Resolution to High-Resolution


A great deal of research on face recognition has been conducted over the past two decades or more. Various face recognition methods have been proposed, but investigations are still underway to tackle the different problems and challenges of face recognition. Existing algorithms can solve only some of these problems, and their performance degrades in real-world applications. In this talk, we will first discuss the performance of face recognition techniques on face images at different resolutions. Then, we will discuss issues and methods for low-resolution and high-resolution face recognition. For low-resolution face recognition, we will present different approaches, focusing on the use of feature super-resolution and fusion. To perform face recognition, image features are first extracted from a query image and then matched to the features in a gallery set. The amount of information and the effectiveness of the features used determine the recognition performance. To improve performance, information about face images at both the original low resolution and a higher resolution is considered. As the features from different resolutions should correlate closely with each other, we introduce cascaded generalized canonical correlation analysis (GCCA) to fuse the information into a single feature vector for face recognition.

For the recognition of high-resolution (HR) face images, we will show that pore-scale facial features can be exploited when the resolution of the faces is greater than 700 × 600 pixels. We will describe the use of these features for recognition under different facial expressions, lighting conditions, poses, and capture times. We will also present the minimum area of a face image that can retain a high recognition level. Experimental results indicate that facial pores can be used as a new biometric for recognition, and can even distinguish easily between identical twins. Furthermore, the use of deep learning for face recognition will also be presented and discussed.
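To give a flavour of the correlation-based fusion described above, the sketch below fuses two feature views with a classical two-view canonical correlation analysis; GCCA generalizes this idea to more than two views. This is a minimal illustration only: the feature dimensions, the random data, and the `cca_fuse` helper are assumptions for demonstration, not the speaker's actual pipeline.

```python
import numpy as np

def cca_fuse(X, Y, k, reg=1e-6):
    """Project two feature views into a shared correlated subspace and
    concatenate the projections (two-view CCA; GCCA extends this to
    more than two views). Hypothetical helper for illustration."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    # Regularized covariance and cross-covariance matrices
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # Whitening transforms from Cholesky factors of the covariances
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    # SVD of the whitened cross-covariance gives the canonical directions
    U, _, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    A, B = Wx @ U[:, :k], Wy @ Vt[:k].T
    # Concatenate the two projected views into one fused feature vector
    return np.hstack([Xc @ A, Yc @ B])

# Toy data: 200 faces with correlated low-res and high-res features
rng = np.random.default_rng(0)
lr_feats = rng.standard_normal((200, 64))
hr_feats = lr_feats @ rng.standard_normal((64, 128)) \
    + 0.1 * rng.standard_normal((200, 128))

fused = cca_fuse(lr_feats, hr_feats, 32)
print(fused.shape)  # (200, 64)
```

Each face is then represented by the single fused vector, which can be matched against a gallery with any standard distance or classifier.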


Seishi Takamura

NTT Media Intelligence Labs., NTT Corporation

Title: Image/Video Coding: Past, Future and Further Beyond


We enjoy digital image/video content at every moment and in various situations: watching TV, chatting via mobile devices, sharing pictures and videos on social networks, browsing news on the web, presenting work with presentation applications, and so on. The image/video content in these situations is compressed to between 1/10 and 1/1000 of its original size. Since it would be unimaginable to store such a tremendous amount of image/video content without compression, there is no doubt that image/video coding technology has made, and continues to make, a significant impact on our daily lives and business. It has been reported that internet traffic is increasing by 31% per year, and that video will account for 82% of this traffic by 2020. Further development of better compression technology is therefore eagerly demanded.

In this talk, we will first give an overview of the advances in video coding technology over the last several decades, recent topics, and the growth expected in the next few years. Some new functional and technical trends beyond conventional 2D video coding will also be discussed. Then, two of our new attempts in image/video coding, which further enhance video compression, will be presented: 1) Evolutive video coding, which creates content-specific compression algorithms by machine, and 2) Real-entity mining video coding, which aims to offer better decoded quality than the original.


Important Dates

Special session proposal

July 15, 2017

Extended abstract submission

August 11, 2017

September 10, 2017

Notification of acceptance

October 13, 2017

Camera-ready full paper submission

November 1, 2017

November 24, 2017

Prospective authors are invited to submit their original papers of 2–4 pages, including figures and tables, in English. All papers must be submitted electronically in PDF format. Please refer to the Submission Instructions for more details.