Digitalization in general and artificial intelligence (AI) in particular, e.g. applications of big data analytics and robotics, are radically changing society. This applies not only to industry and politics but also, to an increasing extent, to social services such as education and healthcare, where vulnerable groups like children, the elderly, or people with disabilities are targeted. In this context, societal challenges such as demographic change serve as powerful narratives for a technology push that is supposed to foster the self-determination, participation, and equality of these groups. For instance, smart home applications are meant to allow the elderly to stay in their familiar environment longer (Wessling, 2013), while social robots are supposed to foster the participation of children with special needs in educational settings (Dautenhahn et al., 2009; Kim et al., 2013). Through the assessment of big data, unemployed people are to receive adequate offers concerning their job opportunities (Fanta, 2018), and refugees are to receive sufficient healthcare (Baeck, 2017). Furthermore, dangers to the welfare of children are to be identified at an early stage (e.g. Gillingham & Graham, 2016).

At the same time, the question arises whether technology might transfer social disparities into the digital world. For instance, algorithms for predictive policing seem to replicate inequality because they are based on biased data, which leads to ethnic and religious minorities being accused more often than the white majority (e.g. Tayebi & Glässer, 2018; Datta et al., 2015). Living in a socially deprived neighbourhood in the analogue world results in a poor digital score, which might in turn lead to punishments executed in the analogue world.
Although AI is already being used in highly sensitive areas such as kindergartens, welfare state institutions, and public authorities, the effects of this technology on these areas have hardly been researched, if at all. The assessment of the advantages and disadvantages of AI in these areas is still in its infancy. Therefore, this session seeks to discuss the chances and challenges of applying AI to vulnerable target groups, which can serve as a “burning glass” for the current state and future trends of opportunities to experience self-determination, participation, and equality in a digital society. These groups include children, the elderly, people with disabilities, unemployed people, and refugees.
By taking different disciplines into account, the session follows the concept of integrated research (Stubbe, 2018), which may enable a broader view of the impact of technology on individuals (micro level) and institutions (macro level) and help answer the following questions systematically (Manzeschke et al., 2013): In what ways is the application of artificially intelligent technologies ethically questionable with respect to a certain target group? Which ethical challenges emerge from the application of these technologies? How can these challenges be mitigated or even resolved? To answer these questions, we would like to focus on conceptual and theoretical work. However, empirical findings that report on challenges or solutions concerning the application of artificially intelligent technologies to vulnerable target groups are welcome as well.

KEYWORDS: digital society, artificial intelligence, self-determination, participation, integrated research
SCHNEIDER, Diana (FH Bielefeld – University of Applied Sciences) & SIEBERT, Scarlet (TH Köln – University of Applied Sciences), Germany
This is a call for abstracts for the session “Applying artificial intelligence on vulnerable target groups: chances and challenges” at the 18th Annual STS Conference Graz 2019. The deadline for submissions is January 21, 2019, via the online form.