
Deep Fake Video Detection Using Recurrent Neural Networks

Abdul Jamsheed V., Janet B.

Section: Research Paper, Product Type: Journal Paper
Vol.9, Issue.2, pp.22-26, Apr-2021


Published online on Apr 30, 2021


Copyright © Abdul Jamsheed V., Janet B. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
 




How to Cite this Paper


IEEE Style Citation: Abdul Jamsheed V., Janet B., “Deep Fake Video Detection Using Recurrent Neural Networks,” International Journal of Scientific Research in Computer Science and Engineering, Vol.9, Issue.2, pp.22-26, 2021.

MLA Style Citation: Abdul Jamsheed V., Janet B. "Deep Fake Video Detection Using Recurrent Neural Networks." International Journal of Scientific Research in Computer Science and Engineering 9.2 (2021): 22-26.

APA Style Citation: Abdul Jamsheed V., Janet B. (2021). Deep Fake Video Detection Using Recurrent Neural Networks. International Journal of Scientific Research in Computer Science and Engineering, 9(2), 22-26.

BibTeX Style Citation:
@article{V._2021,
author = {Abdul Jamsheed V. and Janet B.},
title = {Deep Fake Video Detection Using Recurrent Neural Networks},
journal = {International Journal of Scientific Research in Computer Science and Engineering},
volume = {9},
number = {2},
month = apr,
year = {2021},
issn = {2347-2693},
pages = {22-26},
url = {https://www.isroset.org/journal/IJSRCSE/full_paper_view.php?paper_id=2342},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
UR - https://www.isroset.org/journal/IJSRCSE/full_paper_view.php?paper_id=2342
TI - Deep Fake Video Detection Using Recurrent Neural Networks
T2 - International Journal of Scientific Research in Computer Science and Engineering
AU - Abdul Jamsheed V.
AU - Janet B.
PY - 2021
DA - 2021/04/30
PB - IJCSE, Indore, INDIA
SP - 22
EP - 26
IS - 2
VL - 9
SN - 2347-2693
ER -

  
  

Abstract :
Generative adversarial networks have progressed to the point where it is very difficult to distinguish real content from fake. In recent times, various face-manipulation tools have been used to generate credible face swaps in videos that leave very little trace of manipulation, commonly referred to as "AI-based deep fake videos". These realistic fake videos are now used in pornography, blackmail, political distress, etc. Creating deep fake videos is a simple task, but detecting them is a major challenge, and advances in the creation of AI-based deep fake videos have made older detection systems less accurate. In this work, we describe a new deep learning based method to effectively distinguish manipulated/fake videos from real videos. Our system uses a combination of CNN and LSTM to detect these manipulations: a Convolutional Neural Network (CNN) extracts frame-level features, which are then fed to a Long Short Term Memory (LSTM) Recurrent Neural Network that classifies videos as real or fake. We also compared the results with existing methodologies and found them to be competitive. Datasets for training the deep fake detection model were drawn from several sources, including FaceForensics, Celeb-DF and the Deepfake Detection Challenge (DFDC) dataset. We obtain a competitive accuracy of 92 percent while using a simple architecture.
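
As a rough illustration of the convolutional-LSTM pipeline outlined in the abstract (a frame-level CNN feature extractor followed by an LSTM sequence classifier), a minimal Keras sketch could look as follows. This is not the authors' implementation: the InceptionV3 backbone, 20-frame sequence length, 224x224 face crops and layer widths are assumptions made for illustration only.

# Illustrative sketch only (not the paper's code): a convolutional-LSTM
# classifier of the kind described in the abstract. Backbone, sequence
# length, crop size and layer sizes are assumed, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 20                      # frames sampled per video (assumed)
FRAME_SHAPE = (224, 224, 3)       # face crop size fed to the CNN (assumed)

# Frame-level feature extractor: a pretrained CNN applied to every frame.
cnn = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=FRAME_SHAPE)
cnn.trainable = False             # use the CNN as a fixed feature extractor

model = models.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,) + FRAME_SHAPE),
    layers.TimeDistributed(cnn),           # one feature vector per frame
    layers.LSTM(256),                      # temporal modelling across frames
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid")  # 0 = real, 1 = fake
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(frame_sequences, labels, ...) where frame_sequences has shape
# (num_videos, SEQ_LEN, 224, 224, 3) built from per-frame face crops.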

Key-Words / Index Term :
Deep Fake, Deep Learning, Face Manipulation, Convolutional-LSTM, Fake Videos

