Identifying Gender From Facial Features
Previous research has shown that the brain contains specialized nerve cells that respond to specific local features of a scene, such as lines, edges, angles, or movement. The visual cortex combines these scattered pieces of information into useful patterns. Automatic face recognition aims to extract such meaningful pieces of information and assemble them into a useful representation in order to perform a classification or identification task. In attempting to identify gender from facial features, a natural question is which features of the face matter most: are localized features such as the eyes, nose, and ears more important, or are holistic features such as head shape, hairline, and face contour? There is a plethora of successful and robust face recognition algorithms available. Instead of using the built-in tools they provide, we built our algorithms from scratch to gain a richer learning experience. In this project, the following methods were used for classification:

- Eigenface method
- K-means
- GDA performing supervised learning on the reduced space produced by PCA
- SVM performing supervised learning on the reduced space produced by PCA
- Fisherfaces method
- SVM performing supervised learning on features extracted by the Histogram of Oriented Gradients (HOG) method

We examine how these methods perform on our data, discuss their relative advantages and disadvantages, and investigate the limitations on accuracy posed by the dataset itself.
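To make the eigenface idea concrete, here is a minimal from-scratch sketch of the PCA step that several of the methods above build on. It is not the project's actual implementation; the array shapes and random stand-in data are assumptions, with each row of `X` playing the role of one flattened grayscale face image.

```python
import numpy as np

# Stand-in for a real face dataset: each row is a flattened 64x64 image.
# (Shapes and data are hypothetical, for illustration only.)
rng = np.random.default_rng(0)
n_faces, n_pixels = 40, 64 * 64
X = rng.normal(size=(n_faces, n_pixels))

# Center the data around the mean face.
mean_face = X.mean(axis=0)
X_centered = X - mean_face

# The SVD of the centered data yields the principal components
# (the "eigenfaces") as the rows of Vt, ordered by variance explained.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 10                     # keep the top-k eigenfaces
eigenfaces = Vt[:k]        # shape (k, n_pixels)

# Project each face into the reduced k-dimensional space; a supervised
# classifier (GDA, SVM, ...) would then be trained on these coordinates.
coords = X_centered @ eigenfaces.T   # shape (n_faces, k)
print(coords.shape)
```

In this reduced space, each face is summarized by just `k` coefficients instead of thousands of pixel values, which is what makes training GDA or an SVM on top of PCA tractable.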
Research Paper Link: Download Paper