For tasks such as face recognition, facial landmark detection, or gaze estimation, the state of the art is often achieved using deep neural networks. One major drawback of this technology is that it requires a huge amount of training data. Fortunately, large RGB image datasets are publicly available, so when an application is based on RGB imagery there is no difficulty in gathering data. However, for applying these networks to infrared (IR) imagery, there is often little or no annotated data available. To tackle this issue, I study how to translate an RGB face image into a physically realistic IR face image using Generative Adversarial Networks (GANs). This makes it possible to translate annotated RGB datasets to IR and thus build a large training set, enabling us to train state-of-the-art networks on IR data.
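To make the idea concrete, the sketch below shows one possible way to set up such an RGB-to-IR translation with a paired, pix2pix-style conditional GAN in PyTorch. It is only an illustrative assumption about the training setup: the network architectures, image size, loss weights (e.g. the L1 weight of 100), and the use of paired data are placeholders, not the exact method studied here.

```python
# Hypothetical sketch of a pix2pix-style conditional GAN for RGB -> IR face translation.
# All architectural and hyperparameter choices here are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping a 3-channel RGB face image to a 1-channel IR image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, rgb):
        return self.net(rgb)

class Discriminator(nn.Module):
    """PatchGAN-style critic on the concatenated (RGB, IR) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),
        )
    def forward(self, rgb, ir):
        return self.net(torch.cat([rgb, ir], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# One training step on a dummy paired batch (real data would be aligned RGB/IR face pairs).
rgb = torch.randn(4, 3, 64, 64)
ir_real = torch.randn(4, 1, 64, 64)

# Discriminator step: real pairs should score 1, generated pairs 0.
ir_fake = G(rgb)
pred_real = D(rgb, ir_real)
pred_fake = D(rgb, ir_fake.detach())
loss_d = bce(pred_real, torch.ones_like(pred_real)) + bce(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the true IR image (L1 term).
pred_fake = D(rgb, ir_fake)
loss_g = bce(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(ir_fake, ir_real)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Once such a generator is trained, it can be run over an annotated RGB dataset to produce synthetic IR images that inherit the original annotations, which is the mechanism by which a large IR training set would be obtained.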