Export Publication
The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.
Nobari, B. Z., & Nobari, B. Z. (2025). Book of abstracts. In N. Molek, A. van Biezen, & M. J. Velez (Eds.), International Interdisciplinary Conference Transform «The Future of Human Workforce». FOŠ, UCLL, ISCTE, Head of Research Department, Vienna Social Fund Education Centre, FFI, Innovation Hive.
B. Z. Nobari and B. Z. Nobari, "Book of abstracts," in Int. Interdisciplinary Conf. Transform «The Future of Human Workforce», N. Molek, A. van Biezen, and M. J. Velez, Eds., FOŠ, UCLL, ISCTE, Head of Research Department, Vienna Social Fund Education Centre, FFI, Innovation Hive, 2025.
@incollection{nobari2025_1770116581470,
author = "Nobari, Behnam Zendehdel and Babak Zendehdel Nobari",
title = "Book of abstracts",
chapter = "",
booktitle = "International Interdisciplinary Conference Transform «The Future of Human Workforce»",
year = "2025",
volume = "",
series = "",
edition = "",
publisher = "FOŠ, UCLL, ISCTE, IHead of Research Department, Vienna Social Fund Education CentreFFI, nnovation Hive, ",
address = ""
}
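As an illustration of how the exported entry can be consumed programmatically, the following Python sketch extracts the non-empty fields from a BibTeX record written in the double-quoted style shown above. It is a minimal example, not part of the export tool; the parse_bibtex_fields name and the export.bib file name are assumptions for demonstration, and the simple regular expression does not handle nested braces or escaped quotes.

import re

def parse_bibtex_fields(entry_text: str) -> dict:
    # Match key = "value" pairs in the double-quoted style used by the export above.
    fields = dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', entry_text))
    # Drop fields the exporter left empty (chapter, volume, series, edition, address).
    return {key: value for key, value in fields.items() if value.strip()}

with open("export.bib", encoding="utf-8") as handle:  # hypothetical file name
    record = parse_bibtex_fields(handle.read())
print(record.get("title"), record.get("year"))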
TY - CHAP
TI - Book of abstracts
T2 - International Interdisciplinary Conference Transform «The Future of Human Workforce»
AU - Nobari, Behnam Zendehdel
AU - Nobari, Babak Zendehdel
PY - 2025
AB - The rapid integration of AI into organizational operations offers numerous benefits but also presents significant challenges. With the advent of Generative AI tools, such as ChatGPT, Copilot, and DeepSeek, AI applications have become increasingly prevalent, generating enthusiasm among experts and managers. However, the unconsidered use of these tools can lead to hidden risks that must be addressed and managed. This research employs Soft Systems Methodology (SSM)—a stakeholder-centric approach—to identify and categorize risks through CATWOE (Customers, Actors, Transformation process, Worldview, Owners, and Environmental constraints) analysis. SSM was selected for its iterative process, which integrates conflicting stakeholder priorities (e.g., executives’ efficiency goals vs. employees’ job security concerns) into a unified risk framework. Through SSM focus groups with 15 cross-functional stakeholders, we identified practical risks not present in the literature, such as employees inadvertently exposing organizational data while over-optimizing AI models. This is in stark contrast to technical risks such as adversarial attacks that have been highlighted in previous studies. The research findings indicate that the hidden risks of AI can be categorized into five areas: (a) Ethical and Social (e.g., dehumanization); (b) Organizational (e.g., over-reliance on AI decision-making); (c) Technical (e.g., poor data quality); (d) Legal (e.g., lack of clear regulations); and (e) Societal (e.g., erosion of trust). The study advocates a paradigm shift toward active AI governance through four strategies: (1) International collaboration (e.g., UN AI advisory bodies); (2) Government mandates (e.g., transparency requirements for high-risk AI systems); (3) Industry standards (e.g., FINRA’s AI compliance guidelines); and (4) Organizational reforms (e.g., establishing Chief AI Officer roles and prioritizing AI literacy programs).
ER -
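The RIS record above is line-oriented: each field starts with a two-letter tag followed by a hyphen and the value, and repeated tags such as AU accumulate one line per author. A minimal Python sketch of this kind of parsing is shown below; the parse_ris function and the export.ris file name are illustrative assumptions, and the code tolerates the single-space "TAG - value" spacing used in this export rather than the stricter two-space form of the RIS specification.

def parse_ris(text: str) -> dict[str, list[str]]:
    # Collect every "TAG - value" line; repeated tags (e.g. AU) become lists.
    record: dict[str, list[str]] = {}
    for line in text.splitlines():
        tag, sep, value = line.partition(" - ")
        tag = tag.strip()
        if not sep or len(tag) != 2:
            continue  # skip the ER terminator and any non-field line
        record.setdefault(tag, []).append(value.strip())
    return record

with open("export.ris", encoding="utf-8") as handle:  # hypothetical file name
    record = parse_ris(handle.read())
print(record["TI"][0], record["PY"][0], "; ".join(record["AU"]))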