As technology evolves, new devices are equipped with embedded cameras. An important application of this technology is face classification, commonly used in surveillance, biometric verification, and gesture recognition for user-friendly interfaces. Traditionally, images are treated as high-dimensional vectors of pixel values. Feature extraction is used to reduce this dimensionality, learning invariant, discriminant characteristics that improve subsequent classification in the face subspace. The first part of this book introduces classifier combination methods to derive a new family of feature extraction techniques that make no specific statistical assumptions about the data to be classified. Psychological studies suggest that humans rely heavily on external features (hair, forehead, and the lateral zone). In the second part of this book we introduce a top-down, fragment-based framework to model the external information of face images, addressing the lack of alignment of the external regions and the extreme diversity among subjects. We conclude with methods for combining internal and external features, which further improve face classification results.
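The pixel-vector representation and dimensionality reduction mentioned above can be sketched with PCA, the classic "eigenfaces" subspace. This is only an illustrative stand-in on synthetic data; the book derives its own feature extraction techniques rather than plain PCA.

```python
import numpy as np

# Synthetic grayscale "face" images: 100 samples of 32x32 pixels
# (stand-ins for a real face dataset).
rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))

# Treat each image as a high-dimensional vector of its pixel values.
X = images.reshape(len(images), -1)      # shape (100, 1024)

# PCA via SVD: project the centered data onto the top-k principal
# components, yielding a low-dimensional face subspace in which a
# classifier can then operate.
k = 20
Xc = X - X.mean(axis=0)                  # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:k].T                # shape (100, 20)
```

Each face is thus described by 20 subspace coordinates instead of 1024 raw pixels, which is the kind of reduction a subsequent classifier benefits from.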