Abstract: We propose an approach for generating photorealistic facial expressions for multiple virtual identities in dyadic interactions. To this end, we study human-human interactions to model one individual's facial expressions in reaction to those of the other. We introduce a two-stage optimization of generative adversarial networks, where the first stage generates one's face shapes conditioned on facial action features derived from the dyadic interaction partner, and the second stage synthesizes high-quality face images from sketches. A 'layer features' L1 regularization is employed to enhance generation quality, and an identity constraint is utilized to ensure appearance distinction between different identities. We demonstrate that our model is effective at generating visually compelling facial expressions. Moreover, we quantitatively show that the generated agent facial expressions reflect valid emotional reactions to the behavior of the human partner.