Grand Challenge of 106-Point Facial Landmark Localization

ICME2019
    Introduction
  • Deep learning methods have advanced the facial landmark localization task substantially, and the demands of practical applications are growing fast. However, localization accuracy still needs improvement for large poses and occlusions. JD AI Research and NLPR, CASIA sincerely invite researchers and developers from academia and industry to participate in this competition and to further the discussion on technical and application issues.

  • Official email: facial_lmgc_icme@163.com

    News

  • 2019/7/22 - The JD-landmark dataset has been released and can be downloaded from https://sites.google.com/view/hailin-shi.     If you use this dataset, please cite the paper "Yinglu Liu, Hao Shen, Yue Si, Xiaobo Wang, Xiangyu Zhu, Hailin Shi, et al. "Grand Challenge of 106-Point Facial Landmark Localization." In 2019 IEEE International Conference on Multimedia and Expo (ICME) Workshop. IEEE, 2019."

    2019/4/18 - Thank you for participating in the Grand Challenge of 106-Point Facial Landmark Localization! The scores of the final evaluation have been announced. Congratulations to the top three teams: Baidu VIS, USTC-NELSLIP and VIC iron man. You can check the leaderboard to view your ranking.

    2019/4/1 - We have released Test Dataset 1 and corrected a few landmark files that were not accurate enough. Please click this link to download: Google Drive or BaiduDisk

    2019/2/20 - Please note that all participants should submit their binary or model, along with a technical report, to the organizers (facial_lmgc_icme@163.com) before April 8th, 2019. The purpose of the technical report is to validate that the performance of the proposed algorithms was obtained without outside training data. Participants should clearly describe their algorithms and experimental configurations in the report. The format of the technical report is not restricted, but we highly recommend adopting the template offered by ICME (http://www.icme2018.org/author_info). All submitted binaries will be evaluated on Test Dataset 2, and the top three teams will be awarded. We will write a final report for this challenge, in which the algorithms of the top three teams will be described and the corresponding participants will be included as co-authors. This paper will be published in the ICME workshop proceedings.

    2018/12/29 - At the request of some participants, we have cropped each test image based on the detection bounding box generated by our face detector, the same detector used for the training set. (Note: our detector is trained on WIDER FACE, and we expand the width and height of each generated detection box outward by 1/8.) The test interface remains unchanged: your model must accept two parameters as input and be executed as "./Binary_filename parameter1 parameter2", where parameter1 is the absolute path of the cropped image (.jpg) and parameter2 is the absolute path of the output file (.txt). You may of course use any other face detection method you prefer; in that case, parameter1 refers to the absolute path of the original image (.jpg), and your submission must include your detector. Please see the "Details" document for specifics.
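The 1/8 box expansion described above can be sketched as follows. This is only an illustration: whether the 1/8 is applied per dimension in total or per side, and how boxes are rounded or clipped to the image, is defined by the organizers in the "Details" document, so the even split used here is an assumption.

```python
def expand_box(x1, y1, x2, y2, ratio=1.0 / 8):
    """Expand a detection box outward by `ratio` of its width and height.

    Assumption: the expansion is applied per dimension in total, split
    evenly between both sides; clipping to image bounds is omitted.
    """
    w, h = x2 - x1, y2 - y1
    dx, dy = w * ratio / 2, h * ratio / 2
    return x1 - dx, y1 - dy, x2 + dx, y2 + dy
```

For example, an 8x8 box at the origin grows by half a pixel on every side.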

    2018/12/18 - We have released baseline results evaluated on Test Dataset 1; you can check them by clicking "Leaderboard" on the homepage. Please refer to the "Details" page on the homepage for the evaluation criteria.

    2018/12/4 - You can now submit your model or binary, with the corresponding runtime environment and a brief method description, to facial_lmgc_icme@163.com; the organizers will evaluate it on the validation set and send the performance back to you. The input and output formats are given in the document. Please visit the following page for details: Details

    2018/12/4 - As a reference, we have provided face detection bounding boxes for the training set, generated by our face detector. Click this link to download the bounding box files: Download

    2018/11/26 - We are very sorry that a few landmark .txt files in the dataset are incorrect. You can click this link to download the corrected landmark files: Download

    2018/11/26 - The key rule: You can only use the data we provide for the competition. DO NOT use external data.

    2018/11/16 - The competition has officially started and the training data has been released! Now, you can register for the competition to get training data.

  • Facial landmark localization serves as a key step for many face applications, such as face recognition, emotion estimation and face reconstruction. The objective of facial landmark localization is to predict the coordinates of a set of pre-defined key points on the human face. A 106-point landmark set provides abundant geometric information for face analysis tasks. The purpose of this competition is to promote research on facial landmark localization, especially in complex situations, e.g., large face poses, extreme expressions and occlusions.
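The exact evaluation criterion is given in the "Details" document. As a reference point, landmark benchmarks of this kind typically report a normalized mean error (NME); a minimal sketch follows, where the choice of normalization term (e.g. the square root of the face bounding-box area) is an assumption, not the challenge's confirmed definition.

```python
import numpy as np

def nme(pred, gt, norm):
    """Mean per-point Euclidean error, normalized by `norm`.

    pred, gt: arrays of shape (106, 2) with (x, y) coordinates.
    norm: normalization factor, e.g. sqrt(W * H) of the face box
    (an assumption; the challenge's actual normalization is defined
    in its Details document).
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return np.linalg.norm(pred - gt, axis=1).mean() / norm
```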

According to the final results, first, second and third prizes will be awarded.

First prize

Cash: 1000 USD & JD cloud voucher: 10,000 RMB

Second prize

Cash: 500 USD & JD cloud voucher: 5000 RMB

Third prize

Cash: 500 USD

Competition registration is now open and the training data has been released.

2018/11/16 Challenge registration start
2018/12/1 - 2019/4/1 Test 1 (validation phase)
2019/4/8 Test 2 (final evaluation phase): Model & paper submission deadline
2019/4/22 Paper acceptance notification
2019/4/22 Final evaluation results announcement
2019/4/29 Camera-ready paper submission deadline
Tips: You must register for the competition to obtain the training data, and you are not allowed to use external data. The submitted model must be implemented in TensorFlow, PyTorch, Caffe, Caffe2 or MXNet, and must be accompanied by a detailed description (including preprocessing, etc.).

Dr. Hailin Shi

JD AI Platform and Research

Dr. Xiaobo Wang

JD AI Platform and Research

Dr. Yinglu Liu

JD AI Platform and Research

Dr. Xiangyu Zhu

Chinese Academy of Sciences

Please send model or binary with a brief method description to facial_lmgc_icme@163.com.

We acknowledge the authors of these datasets: 300W [1][2], LFPW [3], AFW [4], HELEN [5] and IBUG [6].

[1] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. 300 Faces in-the-Wild Challenge: The first facial landmark localization challenge. In International Conference on Computer Vision - Workshops (ICCVW), pages 397–403, 2013.

[2] C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. 300 faces in-the-wild challenge: Database and results. Image and Vision Computing, 47:3–18, 2016.

[3] P. Belhumeur, D. Jacobs, D. Kriegman, and N. Kumar. Localizing parts of faces using a consensus of exemplars. In CVPR, 2011.

[4] X. Zhu and D. Ramanan. Face detection, pose estimation and landmark localization in the wild. In CVPR, Providence, Rhode Island, June 2012.

[5] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang. Interactive facial feature localization. In ECCV, 2012.

[6] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. A semi-automatic methodology for facial landmark annotation. In CVPR, 2013.