Compared to other view angles in gait recognition, frontal-view walking is a more challenging problem since it contains minimal gait cues. To facilitate this research, we collected the Front-View Gait (FVG) database over the course of two years, 2017 and 2018. FVG includes significant variations from the frontal view angle, e.g., walking speed, carrying, and clothing.

Overview Description

FVG provides frontal-view walking videos of 226 subjects. In addition, 12 of them were recorded twice, once in 2017 and once in 2018, for a total of 2,856 videos. The videos were captured by a Logitech C920 Pro Webcam or a GoPro Hero 5 mounted on a tripod at a height of 1.5 meters, at 1,080 × 1,920 resolution, with an average length of 10 seconds. More details can be found in Section 4 of our paper.

Overview Example

Figure 1: Examples of the FVG dataset. (a) Samples of the near-frontal middle, left, and right walking view angles in session 1 (SE1) of the first subject (S1). SE3-S1 is the same subject in session 3. (b) Samples of slow and fast walking speed for another subject in session 1. Frames in the top red box show slow walking and frames in the bottom red box show fast walking; a carrying-bag sample is shown below. (c) Samples of changed clothes and a cluttered background from one subject in session 2.

Session Details

FVG was collected in three sessions. In session 1, in 2017, videos of 147 subjects (#1 to #147) were collected with four variations: normal walking, slow walking, fast walking, and carrying status. In session 2, in 2018, videos of an additional 79 subjects (#148 to #226) were collected, with variations in walking speed (normal, slow, or fast), clothes or shoes, and twilight or cluttered backgrounds. Finally, in session 3, we re-collected 12 repeated subjects (#1, 2, 4, 7, 8, 12, 13, 17, 31, 40, 48, 77) in 2018, with the same setup as session 1, as an extremely challenging test. The purpose is to test how time gaps affect gait, along with changes in clothes/shoes or walking speed.
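The session layout above can be sketched as a small lookup table. This is a minimal illustration, not part of the released dataset; the dictionary structure and the helper name `sessions_for_subject` are our own assumptions.

```python
# Hypothetical summary of the FVG session layout described above.
# Subject ID ranges follow the text; variation labels are informal shorthand.
FVG_SESSIONS = {
    "session1": {"year": 2017, "subjects": range(1, 148),
                 "variations": ["normal", "slow", "fast", "carrying"]},
    "session2": {"year": 2018, "subjects": range(148, 227),
                 "variations": ["normal/slow/fast", "clothes/shoes change",
                                "twilight or cluttered background"]},
    # Session 3 repeats 12 subjects with the same setup as session 1.
    "session3": {"year": 2018,
                 "subjects": [1, 2, 4, 7, 8, 12, 13, 17, 31, 40, 48, 77],
                 "variations": ["normal", "slow", "fast", "carrying"]},
}

def sessions_for_subject(subject_id):
    """Return the sessions in which a given subject ID appears."""
    return [name for name, info in FVG_SESSIONS.items()
            if subject_id in info["subjects"]]
```

For example, subject #1 appears in both session 1 and session 3, while subject #200 appears only in session 2.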


Figure 2: Session arrangement details of FVG.

Files and Naming

All the videos are released as PNG frames, named by frame index, e.g., 00010.png is frame number 10. Each video folder is named SubjectID_VideoIndex, where SubjectID ranges from 001 to 226 and VideoIndex ranges from 01 to 12. There are three folders: session1, session2, and session3. Session 1 contains subjects 1 to 147; session 2 contains subjects 148 to 226; and session 3 contains the 12 repeated subjects.

                                        Dataset structure of FVG:

                                            session1
                                                -001_01
                                                    -00001.png
                                                    ...
                                                    -0000x.png
                                                -001_02
                                                ...
                                                -147_12

                                            session2
                                                -148_01
                                                -148_02
                                                ...
                                                -226_12

                                            session3
                                                -001_01
                                                -001_02
                                                ...
                                                -077_12

Evaluation Protocols

Evaluation protocols can be found in Section 4 of our paper.

Download

Requests for access to FVG can be submitted via this form.

Publications

  • Gait Recognition via Disentangled Representation Learning
    Ziyuan Zhang, Luan Tran, Xi Yin, Yousef Atoum, Jian Wan, Nanxin Wang, Xiaoming Liu
    In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019), Long Beach, CA, Jun. 2019 (Oral presentation)
    Bibtex | PDF | arXiv
  • @inproceedings{ gait-recognition-via-disentangled-representation-learning,
      author = { Ziyuan Zhang and Luan Tran and Xi Yin and Yousef Atoum and Jian Wan and Nanxin Wang and Xiaoming Liu },
      title = { Gait Recognition via Disentangled Representation Learning },
      booktitle = { Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) },
      address = { Long Beach, CA },
      month = { June },
      year = { 2019 },
    }
  • On Learning Disentangled Representations for Gait Recognition
    Ziyuan Zhang, Luan Tran, Feng Liu, Xiaoming Liu
    IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2019 (Under Review)
    Bibtex | PDF | arXiv
  • @article{ on-learning-disentangled-representations-for-gait-recognition,
      author = { Ziyuan Zhang and Luan Tran and Feng Liu and Xiaoming Liu },
      title = { On Learning Disentangled Representations for Gait Recognition },
      journal = { IEEE Transactions on Pattern Analysis and Machine Intelligence },
      note = { Under Review },
      month = { September },
      year = { 2019 },
    }