Current literature on the ethics of AI suggests that trust in AI is a multifaceted phenomenon, owing to the inherent properties of big data applications, which include opacity (black-boxing), problems of optimization (risk of bias), and complexity (issues of accountability). Accordingly, the discussion of trust in AI is closely linked to accompanying desiderata, such as interpretability, privacy, transparency, fairness, and reliability, which appear to be constitutive factors in establishing trust in AI systems.
The trust debate certainly benefits from these ethical and philosophical contributions, but it partly neglects the fact that establishing trust relies not only on a technical dimension but equally on social and political ones. The question of which variables constitute trust in AI must therefore be accompanied by the question of who or what upholds and guarantees the establishment and permanence of these factors.
This panel welcomes paper contributions that address one or more of the following dimensions:
- Who or what establishes, guarantees, or warrants trust in AI: States? Companies? Providers? Engineers? Networks?
- What measures, instruments, or tools establish, uphold, or warrant trust in AI systems: Technical standards, regulations, laws, certifications, guidelines, discourses, etc.?
- What is the relationship between these entities? Do they complement or contradict each other? Should they be understood as assemblages, networks, or power relations?
- Case studies of successful or unsuccessful attempts to establish trust in AI applications.
The full version of the call for abstracts for this session can be found here.
Abstracts can be submitted here.
The deadline for the submission of abstracts is 22 January 2022.
General information on the conference can be found here.