Export Publication
The publication can be exported in the following formats: APA (American Psychological Association) reference format, IEEE (Institute of Electrical and Electronics Engineers) reference format, BibTeX and RIS.
Nobari, B. Z., & Nobari, B. Z. (2026). Unmasking, Classifying and Managing the Hidden Risks of Artificial Intelligence (AI) in the Future Workforce: A Soft Systems Methodology (SSM) Approach. Challenges of the Future.
B. Z. Nobari and B. Z. Nobari, "Unmasking, Classifying and Managing the Hidden Risks of Artificial Intelligence (AI) in the Future Workforce: A Soft Systems Methodology (SSM) Approach," in Challenges of the Future, 2026.
TY - GEN
TI - Unmasking, Classifying and Managing the Hidden Risks of Artificial Intelligence (AI) in the Future Workforce: A Soft Systems Methodology (SSM) Approach
T2 - Challenges of the Future
AU - Nobari, Behnam Zendehdel
AU - Nobari, Babak Zendehdel
PY - 2026
UR - https://www.fos-unm.si/en/
AB - Research Question (RQ): What hidden and multidimensional risks emerge from the integration of artificial intelligence (AI) into organizational workplaces, and how can these risks be systematically identified and managed from a stakeholder-centered perspective? Purpose: This study aims to unmask, classify, and manage the often-overlooked risks associated with AI adoption in the future workforce. Moving beyond predominantly technical risk narratives, it seeks to provide a holistic and human-centered understanding of ethical, organizational, legal, and societal challenges arising from AI-enabled work environments. Method: The research adopts Soft Systems Methodology (SSM) as its primary analytical framework. Through participatory focus groups involving 15 cross-functional stakeholders at the National Library and Archives of Iran (NLAI), the study applies CATWOE analysis and rich picture modeling to capture diverse stakeholder worldviews and uncover latent risks not fully addressed in existing literature. Results: The findings reveal several practical and non-technical risks, including inadvertent organizational data exposure during the optimization of generative AI tools, generational inequities in workload distribution, and erosion of professional expertise due to over-reliance on AI systems. These risks are classified into five interrelated domains: (1) Ethical and Social, (2) Organizational, (3) Technical, (4) Legal, and (5) Societal risks. Organization: At the organizational level, the study demonstrates how SSM enables the reconciliation of competing priorities—such as managerial efficiency objectives and employees' job security concerns—into actionable AI risk management strategies. It highlights the importance of AI literacy, accountability mechanisms, and human oversight in organizational decision-making. Society: From a broader societal perspective, the research underscores the implications of AI-related risks for trust in public institutions, workforce equity, and democratic values, emphasizing the need for transparent and responsible AI governance. Originality: The article contributes an original, stakeholder-driven application of SSM to AI risk management, offering an integrated typology of hidden workforce-related AI risks that complements existing technical and regulatory frameworks. Limitations / further research: The study is based on a qualitative case study with a limited sample size. Future research should extend this approach to other organizational contexts and empirically examine the alignment of SSM-based insights with formal AI governance standards such as ISO/IEC 42001 and the OECD AI Principles.
ER -