Export Publication
The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.
Perdigão, P. A., Coelho, N. M., & Bras, J. C. (2025). AI-Driven Threats in Social Learning Environments: A Multivocal Literature Review. ARIS2 - Advanced Research on Information Systems Security. https://doi.org/10.56394/aris2.v5i1.60
P. A. Perdigão et al., "AI-Driven Threats in Social Learning Environments: A Multivocal Literature Review," in ARIS2 - Advanced Research on Information Systems Security, 2025.
@article{perdigão2025_1776098004044,
  author  = "Perdigão, P. Almeida and Coelho, N. Mateus and Bras, J. Cascais",
  title   = "AI-Driven Threats in Social Learning Environments: A Multivocal Literature Review",
  journal = "ARIS2 - Advanced Research on Information Systems Security",
  year    = "2025",
  issn    = "2795-4609",
  doi     = "10.56394/aris2.v5i1.60",
  url     = "https://aris-journal.com/aris/index.php/journal"
}
TY - GEN
TI - AI-Driven Threats in Social Learning Environments: A Multivocal Literature Review
T2 - ARIS2 - Advanced Research on Information Systems Security
AU - Perdigão, P Almeida
AU - Coelho, N. Mateus
AU - Bras, J. Cascais
PY - 2025
SN - 2795-4609
DO - 10.56394/aris2.v5i1.60
UR - https://aris-journal.com/aris/index.php/journal
AB - In recent years, artificial intelligence (AI) has become important in improving educational processes by facilitating personalized learning and enhancing collaborative platforms. However, the same technologies that offer these advantages can also enable sophisticated cyber threats. This multivocal literature review (MLR) explores four major areas of concern in social learning environments: (1) phishing and social engineering, (2) AI-generated misinformation, (3) deepfake media, and (4) AI-driven detection systems. Gathering insights from recent academic articles, industry reports, and news/blog analyses, the study demonstrates AI's dual function as both a channel for educational innovation and a tool for malicious exploitation. Findings indicate that AI-powered attacks not only erode trust and academic integrity but also target the inherent vulnerability of collaborative platforms, including Massive Open Online Courses (MOOCs). Additionally, while academic literature focuses on theoretical solutions such as explainable AI (XAI) and advanced machine learning detection, gray literature highlights practical challenges like regulatory gaps, limited funding, and insufficient user training. Blockchain-based audit trails and robust user-awareness campaigns also emerge as critical strategies for enhancing security. This review highlights the importance of interdisciplinary collaboration among policymakers, researchers, educators, and technology developers to ensure that AI's benefits are not dominated by its misuse. By adopting adaptive security policies, fostering digital literacy, and integrating transparent detection tools, stakeholders can strengthen the resilience of social learning environments against rapidly evolving AI-driven threats.
ER -
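The RIS record is a line-oriented tag format: each line starts with a two-letter tag, repeatable tags such as AU appear once per author, and ER terminates the record. As a rough sketch (assuming simple `TAG - value` lines as in this export, not the full RIS specification), such a record can be read into a dictionary like this:

```python
# Minimal sketch: parse a RIS record like the one above into a dict.
# Repeatable tags (e.g. AU) accumulate into lists. Simplified for
# illustration only; not a complete RIS parser.

def parse_ris(text: str) -> dict[str, list[str]]:
    record: dict[str, list[str]] = {}
    for line in text.splitlines():
        tag, sep, value = line.partition(" - ")
        # Skip anything that is not a "TAG - value" line, including the
        # bare "ER -" record terminator.
        if not sep or len(tag.strip()) != 2:
            continue
        record.setdefault(tag.strip(), []).append(value.strip())
    return record

sample = """TY - GEN
TI - AI-Driven Threats in Social Learning Environments: A Multivocal Literature Review
AU - Perdigão, P Almeida
AU - Coelho, N. Mateus
AU - Bras, J. Cascais
PY - 2025
DO - 10.56394/aris2.v5i1.60
ER -
"""

record = parse_ris(sample)
print(record["AU"])  # three author strings
```

Splitting on the first " - " keeps dashes inside values (e.g. "AI-Driven") intact.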