While humans generally excel at recognizing faces, interpreting subjective cues from faces remains ambiguous and challenging, yet we do it subconsciously. Humans form quick, subjective first impressions of non-physical attributes when seeing someone’s face. These cues are highly complex and carry different meanings for different observers. Understanding which facial variations shape these impressions could help improve interpersonal interactions. To study what variations in a face lead to different subjective impressions, our work uses generative models to find semantically meaningful edits to a face image that change its perceived attributes. We evaluate the edited samples through a human study to verify their alignment with aggregated human perception. The ultimate goal is to better understand subjective visual tasks and to develop learning algorithms that account for this variability.
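As an illustrative sketch only (not the specific method used in this work), semantically meaningful edits can be obtained by shifting a latent code along a direction assumed to correlate with a perceived attribute and decoding each shifted code with a generative model. The generator and attribute direction below are placeholders standing in for a pretrained face model and a direction estimated from human ratings.

```python
import numpy as np

def generate_face(latent: np.ndarray) -> np.ndarray:
    """Placeholder for a pretrained face generator; returns a dummy 64x64 RGB image."""
    rng = np.random.default_rng(abs(int(latent.sum() * 1e3)) % (2**32))
    return rng.random((64, 64, 3))

def edit_face(latent: np.ndarray,
              attribute_direction: np.ndarray,
              strengths=(-2.0, 0.0, 2.0)):
    """Shift the latent code along a (hypothetical) attribute direction and
    decode each shifted code, yielding a sequence of edited face variants."""
    direction = attribute_direction / np.linalg.norm(attribute_direction)
    return [generate_face(latent + s * direction) for s in strengths]

if __name__ == "__main__":
    latent_dim = 512
    rng = np.random.default_rng(0)
    z = rng.standard_normal(latent_dim)   # random face latent code
    d = rng.standard_normal(latent_dim)   # placeholder attribute direction
    edited = edit_face(z, d)
    print([img.shape for img in edited])  # three variants along the edit direction
```

In practice, the edited variants produced this way would be the stimuli shown to human raters when verifying alignment with aggregated perception.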