Can We Trust Them? A Critical Evaluation of AI-Generated Content Detection Tools
Sarthak Gupta
Section: Research Paper, Product Type: Journal Paper
Vol.12, Issue.3, pp.56-65, Jun-2024
Online published on Jun 30, 2024
Copyright © Sarthak Gupta. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
How to Cite this Paper
IEEE Style Citation: Sarthak Gupta, “Can We Trust Them? A Critical Evaluation of AI-Generated Content Detection Tools,” International Journal of Scientific Research in Computer Science and Engineering, Vol.12, Issue.3, pp.56-65, 2024.
MLA Style Citation: Sarthak Gupta. "Can We Trust Them? A Critical Evaluation of AI-Generated Content Detection Tools." International Journal of Scientific Research in Computer Science and Engineering 12.3 (2024): 56-65.
APA Style Citation: Sarthak Gupta. (2024). Can We Trust Them? A Critical Evaluation of AI-Generated Content Detection Tools. International Journal of Scientific Research in Computer Science and Engineering, 12(3), 56-65.
BibTex Style Citation:
@article{Gupta_2024,
author = {Sarthak Gupta},
title = {Can We Trust Them? A Critical Evaluation of AI-Generated Content Detection Tools},
journal = {International Journal of Scientific Research in Computer Science and Engineering},
issue_date = {6 2024},
volume = {12},
number = {3},
month = {6},
year = {2024},
issn = {2347-2693},
pages = {56-65},
url = {https://www.isroset.org/journal/IJSRCSE/full_paper_view.php?paper_id=3518},
publisher = {IJCSE, Indore, INDIA},
}
RIS Style Citation:
TY - JOUR
UR - https://www.isroset.org/journal/IJSRCSE/full_paper_view.php?paper_id=3518
TI - Can We Trust Them? A Critical Evaluation of AI-Generated Content Detection Tools
T2 - International Journal of Scientific Research in Computer Science and Engineering
AU - Sarthak Gupta
PY - 2024
DA - 2024/06/30
PB - IJCSE, Indore, INDIA
SP - 56-65
IS - 3
VL - 12
SN - 2347-2693
ER -
Abstract :
The rapid growth of AI-generated text raises concerns about credibility and academic integrity. AI content detectors are often proposed as a solution, but their effectiveness remains an open question. This research performs an in-depth analysis of the accuracy and limitations of AI content detectors. We test an array of detectors against human-written and AI-generated samples drawn from different text genres and modified with various obfuscation methods. The findings reveal notable weaknesses in the detectors when faced with specific obfuscation techniques and stylistic diversity. We also discuss the ethical aspects of detector use, including potential biases and the risk of false accusations. Finally, we highlight the need for continuous critical evaluation of these tools, alongside complementary approaches such as promoting critical thinking and process-level assessment.
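The full text is not accessible from this page, so the exact evaluation protocol is not shown here; the sketch below illustrates one plausible way such a study could score detectors across genres and obfuscation conditions. The `Sample` fields, the `naive_detector`, and the choice of metrics (per-condition accuracy and false-positive rate on human-written text) are illustrative assumptions, not the paper's actual method or tools.

```python
from dataclasses import dataclass
from collections import defaultdict
from typing import Callable, Dict, List

# A labelled text sample: `is_ai` marks AI-generated text; `genre` and
# `obfuscation` record the conditions under which it was written or modified.
@dataclass
class Sample:
    text: str
    is_ai: bool
    genre: str        # e.g. "essay", "news", "abstract" (hypothetical labels)
    obfuscation: str  # e.g. "none", "paraphrase", "synonym-swap"

def evaluate(detector: Callable[[str], bool],
             samples: List[Sample]) -> Dict[str, Dict[str, float]]:
    """Compute accuracy and false-positive rate per obfuscation condition."""
    stats = defaultdict(lambda: {"correct": 0, "total": 0, "fp": 0, "human": 0})
    for s in samples:
        predicted_ai = detector(s.text)        # True means "flagged as AI"
        bucket = stats[s.obfuscation]
        bucket["total"] += 1
        bucket["correct"] += int(predicted_ai == s.is_ai)
        if not s.is_ai:                        # human-written sample
            bucket["human"] += 1
            bucket["fp"] += int(predicted_ai)  # human text wrongly flagged
    return {
        cond: {
            "accuracy": b["correct"] / b["total"],
            "false_positive_rate": b["fp"] / b["human"] if b["human"] else 0.0,
        }
        for cond, b in stats.items()
    }

if __name__ == "__main__":
    # Toy detector and corpus, purely for illustration.
    def naive_detector(text: str) -> bool:
        return "as an ai language model" in text.lower()

    corpus = [
        Sample("I wrote this essay myself.", False, "essay", "none"),
        Sample("As an AI language model, I can summarise...", True, "essay", "none"),
        Sample("Paraphrased machine output without the telltale phrase.", True, "essay", "paraphrase"),
    ]
    for condition, metrics in evaluate(naive_detector, corpus).items():
        print(condition, metrics)
```

In a study of this kind, `detector` would wrap each commercial or open-source tool under test, and the false-positive rate on human-written text is the quantity most directly tied to the risk of incorrect allegations discussed in the abstract.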
Key-Words / Index Term :
AI-generated text, Content detection, Accuracy, Limitations, Academic integrity, Bias.
References :
[1] D. Weber-Wulff, A. Anohina-Naumeca, S. Bjelobaba, T. Foltýnek, J. Guerrero-Dib, O. Popoola, P. Šigut, and L. Waddington, “Testing of detection tools for AI-generated text,” International Journal for Educational Integrity, Vol.19, Issue.26, pp.1-39, 2023. https://doi.org/10.1007/s40979-023-00146-z
[2] A. M. Elkhatat, K. Elsaid, and S. Almeer, “Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text,” International Journal for Educational Integrity, Vol.19, No.17, 2023. https://doi.org/10.1007/s40979-023-00140-5
[3] J. Otterbacher, “Why technical solutions for detecting AI-generated content in research and education are insufficient,” Patterns, Vol.4, No.7, pp.1-9, 2023. https://doi.org/10.1016/j.patter.2023.100796
[4] R. Sable, V. Baviskar, S. Gupta, D. Pagare, E. Kasliwal, D. Bhosale, and P. Jade, “AI Content Detection,” International Advanced Computing Conference, IACC, pp.267-283, 2024. https://doi.org/10.1007/978-3-031-56700-1_22
[5] A. Harada, D. Bollegala, and N. P. Chandrasiri, “Discrimination of human-written and human and machine written sentences using text consistency,” International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp.41-47, 2021. https://doi.org/10.1109/ICCCIS51004.2021.9397237
[6] C. Bail, L. Pinheiro, and J. Royer, “Difficulty Of Detecting AI Content Poses Legal Challenges,” Law 360, pp.1-4, 2023.
[7] F. Ufuk, H. Peker, E. Sagtas, and A. B. Yagci, “Distinguishing GPT-4-generated Radiology Abstracts from Original Abstracts: Performance of Blinded Human Observers and AI Content Detector,” medRxiv preprint, pp.1-7, 2023. https://doi.org/10.1101/2023.04.28.23289283
[8] A. Bhattacharjee and H. Liu, “Fighting Fire with Fire: Can ChatGPT Detect AI-generated Text?,” ACM SIGKDD Explorations Newsletter, Vol.25, No.2, pp.14-21, 2024. https://doi.org/10.1145/3655103.3655106
[9] V. Bellini, F. Semeraro, J. Montomoli, M. Cascella, and E. Bignami, “Between human and AI: assessing the reliability of AI text detection tools,” Current Medical Research and Opinion, Vol.40, No.3, pp.353-358, 2024. https://doi.org/10.1080/03007995.2024.2310086
[10] C. Chaka, “Detecting AI content in responses generated by ChatGPT, YouChat, and Chatsonic: The case of five AI content detection tools,” Journal of Applied Learning & Teaching, Vol.6, No.2, 2023.
[11] H. Pandiyakumari S. and J. R., “A Survey on the Applications of Machine Learning in Identifying Predominant Network Attacks,” International Journal of Computer Science and Engineering, Vol.11, Issue.5, pp.16-22, Oct. 2023.
[12] K. A. Okewale, I. R. Idowu, B. S. Alobalorun, and F. A. Alabi, “Effective Machine Learning Classifiers for Intrusion Detection in Computer Network,” International Journal of Computer Science and Engineering, Vol.11, Issue.2, pp.14-22, Apr. 2023.