Auditing Artificial Intelligence
Abstract
Recent technological advancements have enabled the development of increasingly impactful and complex Artificial Intelligence (AI) systems. This complexity comes at the cost of system opacity. The resulting lack of understanding, combined with reported algorithm scandals, has decreased public trust in AI systems. Meanwhile, the AI risk mitigation field is maturing. One of the proposed mechanisms to incentivize the verifiable development of trustworthy AI systems is the AI audit: the external assessment of AI systems.
The AI audit is an emerging subdomain of the Information Technology (IT) audit, a standardized practice carried out by accountants. Contrary to the IT audit, there are currently no defined AI-specific rules or regulations to adhere to. At the same time, some organizations are already seeking external assurance from accountancy firms on their AI systems. AI auditors have indicated that this has led to challenges in their current audit approach, mainly due to a lack of structure. Therefore, this thesis proposes an AI audit workflow consisting of a general AI auditing framework combined with a structured scoping approach.
Interviews with AI auditors at one accountancy firm in the Netherlands revealed that the demand for AI audits is increasing and expected to keep growing. Clients mainly seek assurance for stakeholder and reputation management. Furthermore, the challenges the auditors currently experience stem from having to aggregate auditing questions from a range of auditing frameworks, causing issues both in recombining the questions and in determining their relevance. Subsequently, design criteria for a general auditing framework, as well as feedback on a proposed scoping approach, were obtained.
Fourteen AI auditing frameworks were identified through a literature search. Based on their provenance, these could be subdivided into three source categories: academic, industry, and auditing/regulatory. Academic frameworks typically focused on specific aspects of trustworthy AI, while industry frameworks emphasized the need for public trust to drive AI progress. Frameworks developed by auditing and regulatory organizations tended to be the most extensive.
Comparison to four common IT audit frameworks and standards showed that AI audit frameworks need to cover a broader range of topics than the traditional IT audit themes. This is a result of the complex socio-technical context, involving multiple stakeholders, in which AI systems operate. Additionally, it was shown that AI performance monitoring dashboards could cover the technical parts of the audit, but that they fall short on context-dependent topics such as human oversight or societal well-being.
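To illustrate this distinction, the following Python sketch (an illustration, not taken from the thesis) computes a fairness metric of the kind a monitoring dashboard could derive automatically from logged predictions, next to context-dependent audit themes that resist such automation; the function name and toy data are hypothetical.

    # Illustrative sketch only: a metric a dashboard could compute automatically,
    # versus audit topics that require human judgement by the auditor.
    import numpy as np

    def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Absolute difference in positive-prediction rates between two groups."""
        rate_a = y_pred[group == 0].mean()
        rate_b = y_pred[group == 1].mean()
        return abs(rate_a - rate_b)

    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # logged model decisions (toy data)
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected-group membership (toy data)

    # Technical theme: computable from logged predictions, dashboard-friendly.
    print(f"fairness gap: {demographic_parity_gap(y_pred, group):.2f}")

    # Context-dependent themes: no metric to compute; these require interviews,
    # documentation review, and stakeholder analysis by the auditor.
    context_dependent = ["human oversight", "societal well-being", "accountability"]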
Following an analysis of the similarities between the corporate Environmental, Social and Governance (ESG) reporting materiality assessment and the AI audit scoping problem, an ESG materiality assessment approach was translated into a scoping approach for the AI audit, incorporating feedback from the AI auditors. Combined with a general auditing framework, built by combining the fourteen identified frameworks according to the obtained design criteria, this formed the basis for the proposed AI audit workflow. The workflow was demonstrated to be executable through a mock case study: an investigation from the data subject perspective of the Public Eye crowd monitoring AI system of the Municipality of Amsterdam resulted in a scoped list of auditing questions relating to privacy, transparency, and fairness.
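As a rough illustration of what such a materiality-style scoping step could look like in code, the following Python sketch (hypothetical, not part of the thesis) tags auditing questions with trustworthy-AI themes, assigns each a relevance score from a single stakeholder perspective (here the data subject), and filters on a materiality threshold to produce a scoped question list; all question texts, scores, and the threshold are invented for illustration.

    # Minimal sketch of a materiality-style scoping step, assuming (hypothetically)
    # that each auditing question from the general framework is tagged with a
    # trustworthy-AI theme and scored for relevance during scoping interviews.
    from dataclasses import dataclass

    @dataclass
    class AuditQuestion:
        text: str
        theme: str       # e.g. "privacy", "transparency", "fairness"
        relevance: int   # 1 (low) .. 5 (high), from the chosen stakeholder perspective

    questions = [
        AuditQuestion("Is personal data minimised and anonymised?", "privacy", 5),
        AuditQuestion("Can affected individuals learn how decisions are made?", "transparency", 4),
        AuditQuestion("Are error rates comparable across demographic groups?", "fairness", 4),
        AuditQuestion("Is the system's energy use monitored?", "societal well-being", 1),
    ]

    THRESHOLD = 3  # hypothetical materiality cut-off

    # Scoped question list: only questions material to the chosen perspective remain.
    scoped = [q for q in questions if q.relevance >= THRESHOLD]
    for q in scoped:
        print(f"[{q.theme}] {q.text}")

Under these assumptions, the scoping step reduces the aggregated question set to the themes that matter most to the selected stakeholder, mirroring how the case study arrived at questions on privacy, transparency, and fairness.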
Recommendations for future AI audit workflow designs include exploring the incorporation of subthemes into the general framework, co-developing the workflow more closely with AI auditors, obtaining insights from auditors at multiple accountancy firms, and automating parts of the audit.