The increasing availability of data, growing computing power, and advances in algorithms have driven the development of Artificial Intelligence (AI) in recent years. However, despite recognizing the value of AI, many industries and societies remain skeptical about adopting it, especially as several controversial incidents have drawn public attention to the challenges AI poses. This has raised growing concern about trust in AI, which has become a major impediment to its adoption. Stakeholders and potential users routinely put their concerns to the developers of the technology and to management, and these concerns converge on one main question: can AI be trusted? Addressing these concerns and ensuring that the AI solutions developed are trustworthy and responsible has become a top priority and challenge for many technology companies. In the scientific literature, despite the growing attention paid to the importance of trust in AI, little substantial research has been done on the factors that influence it; in particular, trust in AI has received little study from the management and socio-technical perspectives.

This research focuses on improving trust in AI by identifying the essential trust factors of data, expressed as data quality (DQ) dimensions, and of the AI model. Its prime objective is to develop a trusted AI model incorporating these trust factors that helps management and developers assess them and improve trust in AI. The research employs a qualitative study with an inductive approach, supported by a literature review, desktop research, interviews, and a case study, in order to generate valuable theories. More precisely, the research was divided into two phases: the first identified potential factors influencing trust in data and in the AI model, derived primarily from an extensive literature review and desktop research; the second identified the important trust factors from the perspective of the actors involved in AI development. Based on the interview findings combined with the initial analysis of the literature, an initial version of the model was developed. Since the model was relatively new and comprehensive, it was further evaluated with experts, and based on those reflections combined with the previous analysis (the literature review and the findings from the initial interviews), a final version of the model was developed. To improve the utility of the proposed model and of the overall research, the model was compared with core themes laid out by AI research institutions and leading technology firms, both to ensure that it covers those themes and to distinguish its added value.

The final version of the trusted AI model comprises nine main phases of AI development; each phase is tagged with the trust factors that are crucial to consider, along with detailed indicators for that phase. The trusted AI model is intended to help management and developers (technology creators) establish robust trust in the AI models and solutions they create, and to provide a seal of trust to investors, clients, and other stakeholders.
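To illustrate how trust factors of data might be operationalised in practice, the sketch below scores a dataset against three commonly cited DQ dimensions (completeness, uniqueness, validity). The dimension choice, thresholds, column names, and function name are illustrative assumptions for this abstract, not the dimensions or indicators prescribed by the proposed model.

```python
# Minimal sketch: scoring a dataset on a few common DQ dimensions.
# Dimensions, thresholds, and column names are illustrative assumptions,
# not the indicators defined by the trusted AI model described above.
import pandas as pd

def dq_scores(df: pd.DataFrame, key: str) -> dict:
    """Return 0..1 scores for three example DQ dimensions."""
    completeness = 1.0 - df.isna().to_numpy().mean()   # share of non-missing cells
    uniqueness = df[key].nunique() / len(df)           # share of distinct key values
    validity = df["age"].between(0, 120).mean()        # in-range values; NaN counts as invalid
    return {
        "completeness": round(float(completeness), 3),
        "uniqueness": round(float(uniqueness), 3),
        "validity": round(float(validity), 3),
    }

if __name__ == "__main__":
    df = pd.DataFrame({
        "id": [1, 2, 2, 4],
        "age": [34, None, 150, 41],  # one missing and one out-of-range value
    })
    print(dq_scores(df, key="id"))
    # {'completeness': 0.875, 'uniqueness': 0.75, 'validity': 0.5}
```

In the proposed model, quantitative indicators of this kind could complement the qualitative trust factors attached to each of the nine development phases.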
Apart from the proposed trusted AI model itself, the identification of the essential trust factors of the resulting AI model and of the data, in the form of DQ dimensions, is considered one of this study's prime contributions to scientific research.