Paper in press
Unmasking, Classifying and Managing the Hidden Risks of Artificial Intelligence (AI) in the Future Workforce: A Soft Systems Methodology (SSM) Approach
Behnam Zendehdel Nobari; Babak Zendehdel Nobari
Journal Title
Challenges of the Future
Language
English
Country
Slovenia
More Information
  • Web of Science®: not indexed
  • Scopus: not indexed
  • Google Scholar: not indexed
  • Overton: not indexed
Abstract
Research Question (RQ): What hidden and multidimensional risks emerge from the integration of artificial intelligence (AI) into organizational workplaces, and how can these risks be systematically identified and managed from a stakeholder-centered perspective?

Purpose: This study aims to unmask, classify, and manage the often-overlooked risks associated with AI adoption in the future workforce. Moving beyond predominantly technical risk narratives, it seeks to provide a holistic and human-centered understanding of the ethical, organizational, legal, and societal challenges arising from AI-enabled work environments.

Method: The research adopts Soft Systems Methodology (SSM) as its primary analytical framework. Through participatory focus groups involving 15 cross-functional stakeholders at the National Library and Archives of Iran (NLAI), the study applies CATWOE analysis and rich picture modeling to capture diverse stakeholder worldviews and uncover latent risks not fully addressed in existing literature.

Results: The findings reveal several practical, non-technical risks, including inadvertent organizational data exposure during the optimization of generative AI tools, generational inequities in workload distribution, and erosion of professional expertise due to over-reliance on AI systems. These risks are classified into five interrelated domains: (1) Ethical and Social, (2) Organizational, (3) Technical, (4) Legal, and (5) Societal risks.

Organization: At the organizational level, the study demonstrates how SSM enables the reconciliation of competing priorities, such as managerial efficiency objectives and employees' job security concerns, into actionable AI risk management strategies. It highlights the importance of AI literacy, accountability mechanisms, and human oversight in organizational decision-making.

Society: From a broader societal perspective, the research underscores the implications of AI-related risks for trust in public institutions, workforce equity, and democratic values, emphasizing the need for transparent and responsible AI governance.

Originality: The article contributes an original, stakeholder-driven application of SSM to AI risk management, offering an integrated typology of hidden workforce-related AI risks that complements existing technical and regulatory frameworks.

Limitations / further research: The study is based on a qualitative case study with a limited sample size. Future research should extend this approach to other organizational contexts and empirically examine the alignment of SSM-based insights with formal AI governance standards such as ISO/IEC 42001 and the OECD AI Principles.
Acknowledgements
--
Keywords
  • Economics and Business - Social Sciences
  • Other Social Sciences - Social Sciences
