“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

—Eliezer Yudkowsky

Introduction

Social work is deeply rooted in a longstanding tradition worldwide (Ehrenreich, 1985; Healy, 2008; Stuart, 2013): “For well over a century, social workers have played a powerful role in lifting the nation out of the distress that accompanied industrial and social transformation (...).” (Uehara et al., 2013, p. 165). Social work is primarily a profession aimed at driving and analyzing (social) change for individuals and their communities (IFSW, 2014). One of the biggest and perhaps most significant disruptions today for organizations and society is digitalization (Dufva & Dufva, 2019; Majumdar et al., 2018). In this context, the UN Special Rapporteur on extreme poverty and human rights points out in his report on “Extreme poverty and human rights” (United Nations General Assembly, 2019) that “[t]he digital welfare state is either already a reality or emerging in many countries across the globe.” (United Nations General Assembly, 2019, p. 2). At the same time, and in contrast: “The social sector does not benefit fully from the digital revolution. If this trend continues, the constituencies the sector serves could end up on the wrong side of the digital divide.” (Coulton et al., 2015, p. 4). The purpose of this paper is therefore to explore the digital changes that artificial intelligence brings to social work, particularly its potential possibilities and risks.

In social work practice, digital transformation is rising in prominence (Goldkind, 2018; Rafferty & Waldman, 2006; Steiner, 2021) and is sometimes referred to as digital social work (Goldkind, 2018) or e-social work (López Peláez & Marcuello-Servós, 2018; López Peláez et al., 2018). Technologies utilized in social work include social media, virtual reality, and chats (Boddy & Dominelli, 2017; Campbell & McColgan, 2016; Garkisch, 2020; Trahan et al., 2019). Artificial intelligence, a specific constellation of algorithmic and computing strategies, has gained significant attention across industries, often being touted as a game changer (Castellanos, 2020; Davenport & Kalakota, 2019; Ditsche et al., 2023; Lee et al., 2018). Social work has been more conservative in its approach: artificial intelligence and machine learning technologies have entered social work and social services only over the last few decades (Gamble, 2020; Gillingham, 2006; Goldkind, 2018; Schwartz et al., 2017). These developments are intensified by perennial challenges in social work such as limited resources, increasing caseloads, and shortages of skilled social work labor (Berg-Weger & Schroepfer, 2020; Grimwood, 2023; Mänttäri‐van der Kuip, 2016), as well as by overarching trends like sustainability (Jayasooria, 2016; Matthies et al., 2020), globalization (Dominelli, 2010), and growing inequality and injustice (Dorling, 2015). Artificial Intelligence Enhanced Social Work can be a new approach to these challenges, offering advantages such as predictive thinking (Gillingham, 2016b; Grządzielewska, 2021), enhanced quality control (Kum et al., 2015; Pan et al., 2017; Santiago & Smith, 2019), tailored services for better client/user orientation (Bako et al., 2021; Fink & Brito, 2021), and transparency (Coulthard et al., 2020). 
Nevertheless, the integration of technology into social work must be considered carefully to ensure adherence to ethical principles and respect for human dimensions and rights (Rodriguez & Storer, 2020; Schneider & Seelmeyer, 2019).

While artificial intelligence (AI) transformations promise to “revolutionize” or “evolve” social work practice, as demonstrated by the development of large language models like ChatGPT (Marquart & Goldkind, 2023), it is crucial to critically assess the implications of AI technologies for the protection of human rights (Livingston & Risse, 2019; Raso et al., 2018; United Nations General Assembly, 2019). The significance of human rights in the context of technological advancements such as AI is particularly pronounced in social work, because the profession is recognized by the United Nations as one that strives to enable a humane life (IFSW, IASSW and UN Centre for Human Rights, 1994). Human rights remain the systematic basis and standard for professional action today (IFSW, 2018; Mapp et al., 2019; Staub-Bernasconi, 2016), with principles such as human dignity, nondiscrimination, participation, transparency, and accountability playing a central role (Mapp et al., 2019). It is therefore imperative to ensure that the adoption of AI technologies aligns with the core human rights values that guide the social work profession.

The intersection of artificial intelligence and social work represents a critical area of study, underscoring the necessity for innovative methods and models to improve the efficacy of social services (Goldkind, 2021; Uehara et al., 2013). This convergence demands the establishment of comprehensive frameworks and regulations to guide ethical and human rights–centered AI integration into social work practices (O'Sullivan et al., 2019; Winfield, 2019). It emphasizes the importance of fostering interdisciplinary and inter-organizational collaboration among researchers, practitioners, and technologists. Such collaboration is essential for exploring and developing potential solutions that can have a positive impact on society, thereby enhancing the effectiveness of social work through the thoughtful application of AI technologies (Perron et al., 2010; Sabah & Cook-Craig, 2010).

We address these gaps in the literature with this systematic review by considering the following: While some research has examined the use of AI in social work, most studies have focused solely on specific sectors, such as child welfare (Gillingham, 2006; Schwartz et al., 2017), youth work (Rice et al., 2018; Ting et al., 2018), or mental health services (Gamble, 2020). However, there are still significant gaps in our understanding of how AI can be best leveraged to enhance social services and improve results for clients (Frey et al., 2020). A comprehensive and unified model, which can serve as a blueprint for an Artificial Intelligence Enhanced Social Work, is currently lacking in the field. Furthermore, there is a dearth of evidence concerning the possibilities and risks of applying artificial intelligence to social work in comparison to other sectors (Lee & Yoon, 2021; Susar & Aquaro, 2019).

The aim of this research is to provide a comprehensive review of the current state of AI and social work research, focusing on its possibilities and risks. Existing findings will be synthesized, and research gaps will be identified. Moreover, this study aims to develop and examine a cohesive model for AI’s application in human rights–based social work: the Unified Model for Artificial Intelligence Enhanced Social Work. Having already explained the importance of human rights in the context of digitalization, the framework aligns with the human rights–based mandate, also known as the “triple mandate” of social work (Staub-Bernasconi, 2007, 2019).

This article is structured as follows: the “Method” section outlines the methodology employed and explains how studies were selected. The “Results” section provides an objective overview of the possibilities and risks associated with Artificial Intelligence Enhanced Social Work. The “Discussion” section features the Unified Model for Artificial Intelligence Enhanced Social Work. The article concludes with a discussion of the limitations and suggestions for further research.

Method

Research Questions

As yet, there is no single, unified body of literature focusing on social work and AI. This systematic review first sought to address the question of how social work researchers are conceptualizing studies on social work and AI. Such a mapping of the literature can help orient future research by addressing the following Research Questions (RQ):

  • RQ 1: What are the possibilities and risks in the social work and AI literature?

    Under this broad guiding question, this review interrogated the following sub-questions:

    • RQ1a) Which practice areas is the social work AI literature focused on?

    • RQ1b) What are the possibilities and risks in the literature for the social work profession?

    • RQ1c) What are the possibilities and risks in the literature for social agencies?

    • RQ1d) What are the possibilities and risks in the literature for clients?

    Based on the possibilities and risks, we also want to answer the following questions.

  • RQ 2: What might a Unified Model of Artificial Intelligence Enhanced Social Work look like?

  • RQ 3: What are the gaps in the existing social work and AI literature?

Research Method: Systematic Literature Review

Aim

Our aim is to understand the intersection of social work and AI via the published scholarship on the subject. For this purpose, we have conducted a systematic literature review. This research method can be used to gather knowledge via a formal, evidence-based mechanism and to provide recommendations for policy, practice, and research (Tranfield et al., 2003). Our method is guided by review frameworks (Denyer & Tranfield, 2011; Tranfield et al., 2003) and existing literature analyses from the field of social work (Guo et al., 2009; McFadden et al., 2015; Weiss‐Gal, 2016; Zechner & Sihto, 2023). Figure 1 summarizes the structure of the research, based on Garkisch et al. (2017).

Fig. 1
figure 1

Summary of systematic review process

Keywords and Search Strings

Following the approach of Maier et al. (2016) to keywords, we distinguish between topic-related (AI context) and sector-related (social work context) keywords. To obtain a holistic picture, we drew on systematic literature reviews (SLRs) in social work (Ahuja et al., 2022; Sánchez-Sandoval et al., 2020) as well as in the AI literature (Jiang et al., 2017; Yu et al., 2018; Zawacki-Richter et al., 2019) in our search for keywords. The keywords are presented in Table 1. In total, we identified eight topic-related keywords and four sector-related keywords.

Table 1 Topic and sector related keywords for database search

Used Databases

In selecting our eight databases, we were likewise guided by the publications already mentioned: PsycINFO, Business Source Premier EBSCO, SCOPUS, ProQuest’s ABI/INFORM Collection, Social Work Abstracts, and Social Service Abstracts. For cross-searching, we used Google Scholar.

Inclusion Criteria in the SLR

The main criteria for inclusion in the review were that studies had to (a) be published between 2009 and 2023, (b) match the mentioned keywords on AI and social work, (c) be published in English, (d) be peer-reviewed, and (e) have an empirical or theoretical grounding (we excluded, for example, editorials).

Overview of the Search Process

We used a multiple stage process for our systematic review and were supported by research colleagues and their valuable peer feedback. We documented our results step by step in an Excel spreadsheet (search protocol — available to the readers upon request to the authors) and at the end in a PRISMA diagram (Moher et al., 2009; Page et al., 2021), see Figure 2.

Fig. 2
figure 2

PRISMA flow diagram of search strategy and results

At the second stage, to analyze the articles in depth, we used qualitative content analysis (Macpherson & Holt, 2007; Pittaway & Cope, 2007). The content analysis was undertaken independently by two researchers (Lipsey & Wilson, 2001; Ritz et al., 2014). Finally, the results were discussed and refined as a team (Cinar et al., 2019; Garkisch et al., 2017).

Results

Article Characteristics (Quantitative Results)

Table 2 summarizes the articles in the sample. A total of 67 publications are included in the SLR’s sample (see Figure 2, PRISMA diagram). Relatively few articles (n = 9) of the 67 in the sample were published between 2005 and 2015. A significant annual increase can be observed between 2019 and 2022, with, for example, 12 publications in 2020 alone. This suggests that the topic of AI and social work is gaining interest in the academic social work literature. We categorized the articles by type and found six different methodological approaches: case study, conceptual, literature review, mixed methods, qualitative, and quantitative research. The topic is discussed most often on a conceptual level (25 articles), while quantitative approaches (20 articles), mainly evaluating the implementation of AI systems, are a close second. Regarding the geographical origin of the first authors, Australia is strongly represented (14 articles). Notably, however, a single author, writing prolifically on the datafication of social work, accounts for 11 of the articles in the sample.

Table 2 Summary of the articles in the sample

Key Findings (Qualitative Results)

  • RQ1a) Which practice areas is the social work AI literature focused on?

    Of the articles in this sample, the largest share focus on the practice domains of child protection and child welfare: 21 of the coded articles either examined the use of administrative data in child welfare organizations to feed algorithms or offered perspectives, advice, and suggestions on the potential ethical challenges of using administrative data and algorithms in child welfare agencies. Far fewer articles were dispersed across the fields of mental health (n = 5), health (n = 6), and organizational practice (n = 4).

  • RQ1b) What are the possibilities and risks in the literature for the social work profession?

  • RQ1c) What are the possibilities and risks in the literature for social agencies?

  • RQ1d) What are the possibilities and risks in the literature for clients?

  • Taken together, sub-research questions 1b through 1d focus on the possibilities and risks outlined in the social work and AI literature at three levels of consideration, based on the triple mandate (Staub-Bernasconi, 2016, 2019):

    • Profession level: the implications for professional social workers’ professional judgment and ethics.

    • Social agency level: the implications and organizational considerations for social work agencies, including cost/benefit, risk assessment, and data quality.

    • Client level: the direct impacts and implications for clients receiving services, including access to resources, quality of service, and overall client satisfaction.

    Figure 3 summarizes the themes (possibilities and risks) we identified.

    Fig. 3
    figure 3

    Artificial Intelligence Enhanced Social Work — possibilities and risks

Possibilities at the Profession Level

Profession Development and Standardization

Artificial intelligence can help develop and shape the social work profession (Schneider & Seelmeyer, 2019; Victor et al., 2021). It can also help address the variability in how line workers make assessments (Ting et al., 2018). AI is likewise relevant to social work research: unstructured data can be harnessed and made available for research through text mining and machine learning (Victor et al., 2021). In difficult decisions, AI can help reduce the influence of emotion (Tan, 2022). Furthermore, classification models, standards, and AI-supported processes ensure the quality of AI-supported services and the protection of clients’ rights and dignity in this evolving field (Devlieghere & Roose, 2018; Meilvang & Dahler, 2022; Rodriguez et al., 2019). In addition, AI may help to create social impact (Frey et al., 2020; Santiago & Smith, 2019), e.g., better health outcomes (Zetino & Mendoza, 2019).

Predictive Thinking and Risk Assessment

Through the existence and analysis of data, it is possible to look not only at past and present situations but also at future ones (Fink, 2018; Frey et al., 2020; Schwartz et al., 2017; Zetino & Mendoza, 2019): “Predictive analytics help develop models that explain what might happen. Once analysts have described what is happening, they can start to answer what is likely to happen next. This type of analysis is performed by extracting information from historical data to populate a model of what is likely to happen, i.e., a predictive model.” (Zetino & Mendoza, 2019, p. 414). The nomenclature varies: predictive analytics (Schwartz et al., 2017; Zetino & Mendoza, 2019), predictive risk modeling (Gillingham, 2016b, 2017), or automated identification (Victor et al., 2021).
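The quoted mechanism — extracting information from historical data to populate a model that scores a new situation — can be illustrated with a deliberately minimal sketch. This toy example is not drawn from any system in the reviewed literature: the feature names, case records, and rate-averaging scheme are all hypothetical, and real predictive risk models rely on far more sophisticated statistical methods.

```python
# Toy sketch of a "predictive model populated from historical data".
# All features, cases, and outcomes are invented for illustration only.
from collections import defaultdict

# Historical cases: (features, outcome), where outcome = 1 means the event occurred.
history = [
    ({"prior_referrals": True,  "unstable_housing": True},  1),
    ({"prior_referrals": True,  "unstable_housing": False}, 1),
    ({"prior_referrals": False, "unstable_housing": True},  0),
    ({"prior_referrals": False, "unstable_housing": False}, 0),
    ({"prior_referrals": True,  "unstable_housing": True},  1),
    ({"prior_referrals": False, "unstable_housing": False}, 0),
    ({"prior_referrals": True,  "unstable_housing": False}, 0),
]

def fit_rates(cases):
    """Historical event rate among cases in which each feature is present."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [events, total]
    for features, outcome in cases:
        for name, present in features.items():
            if present:
                counts[name][0] += outcome
                counts[name][1] += 1
    return {name: ev / tot for name, (ev, tot) in counts.items()}

def risk_score(rates, features):
    """Average historical event rate over the features present in a new case."""
    active = [rates[n] for n, present in features.items() if present and n in rates]
    return sum(active) / len(active) if active else 0.0

rates = fit_rates(history)
new_case = {"prior_referrals": True, "unstable_housing": False}
print(round(risk_score(rates, new_case), 2))  # prints 0.75
```

Even this caricature exposes the core risk discussed later in this review: the score is only as good as the historical data, so any under- or over-representation of a group in those records is reproduced directly in the prediction.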

Decision Support

As various authors point out, AI and machine learning can support social work in the decision-making process: “(…) dataism promises to improve decision-making (…)” (Devlieghere et al., 2022, p. 328), or: “There is a widespread belief that machine learning tools can be used to improve decision-making in health and social care.” (Keen et al., 2021, p. 57). On the whole, the advantages outweigh the disadvantages (Pan et al., 2017). Employees receive digital support in their decision-making process (Gillingham, 2019b), and management decisions can also be supported digitally (Bako et al., 2021; Zetino & Mendoza, 2019), enabling better strategic decision-making (Zetino & Mendoza, 2019). The consequences are improved and simplified decision-making (Alfandari et al., 2023; Devlieghere et al., 2022), autonomous decisions (Liedgren et al., 2016), better justification for decisions (Gillingham, 2013), and reduced susceptibility to error (Gillingham, 2019a). In sum: “(…) the benefits (…) gained through using this new technology should enable professionals to spend more time working with their clients.” (Schneider & Seelmeyer, 2019, p. 118). In principle, however, AI should only be used to support decisions; it should not replace human decision-making (Victor et al., 2021).

Possibilities at the Social Agency Level

Information and Data Exchange

By using AI, data can be shared (more easily) between authorities and service providers (Devlieghere & Roose, 2018; Fink & Brito, 2021). As a consequence, the care process can be improved (Devlieghere & Roose, 2018) and better channels for communication and information exchange can be established (Fink & Brito, 2021). As noted above, the exchange of data also supports decision-making (Gillingham, 2011b). Furthermore, stakeholders’ views can be discussed together more effectively (Devlieghere & Roose, 2018) in order to better coordinate the various interventions (Zetino & Mendoza, 2019). This can be very helpful, for example, in probationary services (Ting et al., 2018) or in nursing and health contexts (Zetino & Mendoza, 2019).

Resource Allocation

AI enables organizations to maximize the use of existing data sets, so that data no longer need to be collected in a work-intensive way (Schofield, 2017). This reduces workload (Schofield, 2017) and costs (Schofield, 2017; Victor et al., 2021; Zetino & Mendoza, 2019), yielding “cost-effective solution(s)” (Victor et al., 2021, p. 651) or even “cost savings” (Zetino & Mendoza, 2019, p. 409). In addition, services can be designed more efficiently, allowing scarce resources to be used in a more targeted and demand-driven manner (Rice et al., 2018).

Quality Control and Improvement

AI can be used to monitor the quality of social work services and ensure that organizations are using best practices. Using AI can thus promote the optimization of quality within the social work sector and its agencies (Rodriguez et al., 2019; Schwartz et al., 2017). Quality improvements can take many forms; for example, AI can enable evidence-based decision-making (Kum et al., 2015; Liedgren et al., 2016; Lushin et al., 2019; Zetino & Mendoza, 2019). The accuracy of risk assessments can also be improved (Schwartz et al., 2017): “(…) enable decisions to be more objective and impartial.” (Schneider & Seelmeyer, 2019, p. 117).

Possibilities at the Client Level

Transparency

Standardization and a (data-protected) objective basis for decision-making provide the client with transparent insights (Devlieghere et al., 2018; Meilvang & Dahler, 2022). Ideally, this approach can increase the acceptance of decisions made by social workers and even judicial authorities (Coulthard et al., 2020). From the perspective of transparency, AI tools or necessary questionnaires are therefore filled out together with the clients (Devlieghere et al., 2018). Transparent AI systems decrease misunderstandings and miscommunications since transparent decision-making procedures are more traceable (Meilvang & Dahler, 2022; Ting et al., 2018).

Individualized and User-centered Offers

The (social) needs and requirements of clients should be given special attention (Bako et al., 2021). Better tailored offers can be made using data (Grządzielewska, 2021; Rice et al., 2018; Rodriguez et al., 2019; Schwartz et al., 2017): Bako et al. (2021) point out that it is possible to “(…) gain better insight into the most needed social interventions in the patient population (…)” (op. cit. p. 2), for example, to “enhance HIV Prevention Interventions for Youth Experiencing Homelessness” (Rice et al., 2018, p. 551). In sum, services can be designed to be more closely tailored (Schwartz et al., 2017).

Early Interventions

In this way, social work can proactively identify those who need services and offer early interventions, with no action required on the part of the potential client (Fink, 2018; Schwartz et al., 2017). It is also possible to make predictions (Gillingham, 2016b). In sum: “Data science has the potential to predict an event before it happens.” (Santiago & Smith, 2019, p. 352).

Risks at the Profession Level

Insufficient Research, Professionalization, and Standards

Social work AI research is still in its infancy (Zetino & Mendoza, 2019), and there is a particular lack of research focusing on data science (Cariceo et al., 2018). This is partly because AI has not been widely used in social work and there is a paucity of case studies (Liedgren et al., 2016). As a result, there is a lack of evidence-based or evidence-informed recommendations for the use of AI (Cresswell et al., 2020; Landau et al., 2022), and there is a need for new research approaches to the use and testing of AI (Gillingham, 2011a). In addition, social workers are not prepared in higher education for the use of technology, and especially AI (Santiago & Smith, 2019; Sicora et al., 2021). Implications for education and training are missing (Sicora et al., 2021), for example in the context of data science (Cariceo et al., 2018) or statistical methods (Coulthard et al., 2020). The result is ignorance and uncertainty about the use of AI (Gillingham, 2006). There is often a lack of knowledge about how to handle and use data in social work (Gillingham, 2020), even as organizations are under intense pressure to improve efficiency (Tan, 2022). Standards and regulations are still missing (Cresswell et al., 2020; Gamble, 2020), and methodological and reporting standards for AI and big data research and interventions are necessary (Cresswell et al., 2020).

Resistance and Knowledge Gaps

As indicated earlier, social work is primarily a relational profession, focused on interpersonal and individual-level decision-making (Devlieghere et al., 2022; Fink & Brito, 2021). It is not surprising, therefore, that there is a lack of acceptance of AI among social workers. Some results suggest that social workers try to undermine the use of AI (Devlieghere et al., 2018), display “confusion and frustration” (Gillingham, 2013, p. 440) and skepticism (Schneider & Seelmeyer, 2019), or strike during rollout (Ting et al., 2018). There is also a lack of knowledge about the possible use of AI in social work at the organizational level (Devlieghere et al., 2018; Pan et al., 2017), as well as a lack of acceptance among clients, who are frequently not involved in the AI process (Devlieghere et al., 2018).

Ethical Issues

Closely related to dehumanization are the ethical risks of using AI in social work (Gillingham, 2016c; Landau et al., 2022). Especially in the field of child welfare, there is an immense danger of developing unfair and unjust models (Rodriguez et al., 2019). Standards for social agencies are missing (Cresswell et al., 2020; Gillingham, 2016a). Here, ethical, legal, and social implications (ELSI) criteria should be developed for use. These criteria are already being applied in social work (Schneider & Seelmeyer, 2019).

Risks at the Social Agency Level

Data Availability and Quality

Data are needed for developing AI in social work, yet such data are often unavailable or insufficient (Schneider & Seelmeyer, 2019). Further problems include the lack of a strategic data collection process (Gillingham, 2016c), an absence of longitudinal data (Kum et al., 2015), poor data quality (Meilvang & Dahler, 2022), and unstructured data (Victor et al., 2021). On the one hand, it is valuable to maximize data (Ting et al., 2018); on the other hand, too much data can become cluttered (Schofield, 2017). There is a risk that data are too fragmented to capture the most important parts (Devlieghere et al., 2018). An open data infrastructure for sharing data is missing (Walter et al., 2021), and there are also inadequate data support systems to manage and interpret data (Gillingham, 2019a). In addition, not all data can be used for analysis due to legal restrictions (Schneider & Seelmeyer, 2019). In sum, data can be both an enabler and a hindrance (Alfandari et al., 2023).

Safety and Security

Program participant data can be sensitive and confidential (Schneider & Seelmeyer, 2019). At the same time, discretion (with a focus on data) is a key concern in social work (Ranerup & Henriksen, 2022), and there is a risk that digital discretion will not be respected (Ranerup & Henriksen, 2022). Data protection (Kum et al., 2015; Schneider & Seelmeyer, 2019) and data security (Zetino & Mendoza, 2019) play an important role in social work. The consequences of non-compliance cannot be assessed directly, but there could be threats to privacy (Fink, 2018; Fink & Brito, 2021), particularly if data are used without permission (Fink, 2018).

Missing Resources and Infrastructure

The resource challenge for AI-enhanced social work is multifaceted. First, adequate financial resources may be lacking initially (Pan et al., 2017). Furthermore, there is a lack of (open) data infrastructure (Walter et al., 2021). Not only is the technological infrastructure inadequate; education and training programs for Artificial Intelligence Enhanced Social Work are likewise missing or insufficient (Sicora et al., 2021).

Risks at the Client Level

Dehumanization

Numerous authors identify dehumanization as a problem of the use of AI in social work (Devlieghere et al., 2022; Fink & Brito, 2021) and warn of a move towards automation bias, the belief that computers should be believed over humans (Devlieghere et al., 2022). Social work is a relational discipline (Devlieghere et al., 2022; Fink & Brito, 2021). When making decisions, data alone are not enough; they must always be placed in an individual and situational context (Schneider & Seelmeyer, 2019). AI could eliminate individual casework decisions as it increasingly takes over decision-making (Schneider & Seelmeyer, 2019). It is therefore important “(…) that the resources and focus on analytic big data never distract from the importance of these relationships.” (Fink & Brito, 2021, p. 172). Overuse of artificial intelligence technologies can hinder the interpersonal bond and relationship between social workers and clients (Devlieghere et al., 2022).

Misinterpretation and Bias

There are still insufficient possibilities to prevent bias (Meilvang & Dahler, 2022). The causes of bias are many and varied, e.g., “incorrect recommendations” (Gillingham, 2019b, p. 277), data bias (Landau et al., 2022), subjective assessment (Cresswell et al., 2020), and further AI failures (James & Whelan, 2021): “Predictive modeling will never be 100% accurate.” (Ting et al., 2018, p. 643).

Data and Algorithmic Injustice

Some population groups tend to be under- or over-represented in the data (Walter et al., 2021), such as indigenous peoples (Walter et al., 2021), younger people (Fink & Brito, 2021), or the “poor” (Whelan, 2020): “Some of our most vulnerable populations are in danger of becoming even more invisible by the algorithms (...)” (Zetino & Mendoza, 2019, p. 415). Moreover, often only the digital data are interpreted, without consideration of the offline context (Frey et al., 2020). The consequences can be varied: exclusion (Walter et al., 2021), injustice (Fink & Brito, 2021), prejudice (Gillingham, 2019b), or stigmatization and discrimination (Goldkind, 2021; Keddell, 2019). False interventions and decisions (Fink & Brito, 2021) or dangerous assumptions (Frey et al., 2020) are also possible.

Discussion

Summary

This systematic review sought to document the universe of social work research on artificial intelligence. Based on our analysis, three key observations emerged.

  • First, to date, the academic literature on social work and AI is primarily conceptual, and the discourse these articles engage in is insular, being situated mainly in social work journals.

  • Next, where scholars are attempting to use machine learning or predictive analytics as a research method, the content area or population under investigation is most frequently child welfare.

  • Lastly, documenting the social work and AI research ecosystem is challenging, as AI can be considered both a method (for example, using topic modeling to analyze text data) and a subject (for example, using chat technologies in mental health services). This conflation of content and method is not a uniquely social work challenge, but it adds complexity to how we, as a field, can shape a distinctly social work perspective on AI.

The study of computer intelligence, or the field of artificial intelligence research, is now roughly 75 years old; however, recent interest has accelerated dramatically with the increase in computing power and the exponential expansion of data production and collection, due largely to internet-based capitalism (Russell & Norvig, 2016). While social work as a practice has existed for roughly double the duration of AI as a field, the social work scholarly literature on AI remains considerably more limited despite the longer timeline. The lack of a singular, focused social work literature has been noted for the last 100 years and was famously captured by Abraham Flexner in his 1915 address to the national social work organization (Flexner, 1915). Indeed, Longhofer and Floersch (2012) describe how, as a field, social work seems to have no center of knowledge production. And, as outlined in the Grand Challenges for Social Work, it is also our task to shape our own work: “(…) social work can and must play a more central, transformative, and collaborative role in society, if the future is to be a bright one for all.” (Uehara et al., 2013, p. 165).

Taken together, these conditions have created a vacuum in the social work AI research literature. At the same time, the digital age is leading to a reassessment of human rights, with a focus on their enforcement and guarantee by the state (Berlyavskiy et al., 2020). Furthermore, this age has highlighted the potential of digital technologies to influence the political and social environment, which has implications for human rights (Monshipouri, 2017). Mathiesen (2014) argues for a declaration of digital rights to ensure that human rights are respected in digital contexts: “In the current digital age, human rights are increasingly being either fulfilled or violated in the online environment.” (Mathiesen, 2014, p. 2). The intersection of artificial intelligence and human rights presents possibilities and challenges. AI has the potential to improve economic and social well-being and human rights (Cath, 2018), but it also raises concerns about privacy, discrimination, and societal impacts (Raso et al., 2018). In the results section of this article, we detailed the possibilities and risks in relation to the social work profession. However, respecting and upholding human rights requires more than this: it necessitates a framework (Donahoe & Metzger, 2019; Mathiesen, 2014). Such a framework should be utilized to evaluate and address the human rights impacts of AI, particularly in areas such as data protection (Raso et al., 2018), and both the framework and the design of AI systems should be guided by human rights principles (Aizenberg & van den Hoven, 2020). The “Report of the Special Rapporteur on extreme poverty and human rights” can also provide orientation for developing the content of such frameworks and has been incorporated into these developments (United Nations General Assembly, 2019). These considerations necessitate the development of a framework for the social work profession, too; we present a proposed framework in the following section.

Recommendations for Practice: Introducing a Conceptual Model for Artificial Intelligence Enhanced Social Work

Envisioning a practice in which social work and computing power collaborate to solve clients’ and communities’ challenges and to identify possibilities for improving the conditions of both, and based on this review of the literature, we propose below a model for an Artificial Intelligence Enhanced Social Work practice that merges the empathy, interpersonal skills, and social justice power of social work with the technological fortitude of artificial intelligence (see Figure 4). This framework, called the Unified Model for Artificial Intelligence Enhanced Social Work, draws on the Staub-Bernasconi triad, or triple mandate, focusing on clients, social agencies, and the profession (Staub-Bernasconi, 2007). Its aim: the framework provides guidance for professionals to fulfill their roles and responsibilities ethically, promoting social justice and client well-being. It recognizes the significance of their contributions to social work organizations and the wider professional community, and it advocates for a holistic approach that integrates ethical and human rights considerations beyond the client-professional relationship.

Fig. 4

Unified model of artificial intelligence enhanced social work

The Unified Model of Artificial Intelligence Enhanced Social Work hews closely to Staub-Bernasconi’s notion of the triple mandate, balancing the inter-related priorities of the profession, clients, and the social agency; these mandates parallel themes in the social work and AI literature. Below we describe the core ideas and concepts of Artificial Intelligence Enhanced Social Work.

The Client Mandate

For Staub-Bernasconi, this aspect of the framework underscores the primary responsibility of professionals to their clients or service users. It emphasizes that professionals should prioritize the well-being and best interests of the individuals, families, or communities they serve. This includes ensuring that services provided are of high quality, culturally sensitive, and responsive to the specific needs and goals of the clients. Three elements of the client mandate, echoed by both Staub-Bernasconi and Taylor’s data justice framework, are critical to the Artificial Intelligence Enhanced Social Work Model: accountability, participation, and nondiscrimination.

Accountability

Human rights scholars and philosophers of data justice suggest that the consumers of algorithmic and AI products have the right to hold the creators of such tools accountable (Katyal, 2019; Novelli et al., 2023). This is particularly relevant in the human service sector, and it is already being discussed in the health care sector (Habli et al., 2020; Murphy et al., 2021). Furthermore, we have to take a closer look at “economic and social rights in the digital welfare state” (United Nations General Assembly, 2019, p. 16).

 → Social work clients and consumers have a right to advocate on their own behalf to demand increased representation in data sets as well as greater transparency with regard to how an AI tool functions, what the tool is doing, and a mechanism for correction and repair, should such a tool cause harm.

Participation and User-centricity

Gatenio Gabel (2016) writes that the participation of all people in decision-making, especially those affected by such decisions, is a key aspect of rights-based approaches to social work practice. In the context of participatory design, a human rights approach emphasizes the primacy of human experience and well-being in evaluating artificial intelligence, positioning the effects on human lives as the central consideration in governance frameworks (Donahoe & Metzger, 2019). As Aizenberg and van den Hoven (2020) outline in the context of human rights and AI, it is therefore crucial to involve relevant social groups in the development of technology-enabled AI solutions “(...) to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.” (Aizenberg & van den Hoven, 2020, p. 1). This participation is gaining importance, particularly in the context of digital social work (Lyon & Koerner, 2016; Mois & Fortuna, 2020), and therefore in human rights–based Artificial Intelligence Enhanced Social Work, too. It leads to “(...) a higher level of acceptance and trust.” (Aizenberg & van den Hoven, 2020, p. 11).

 → Prioritizing clients’ rights to participation and participatory design would move from a corporate-driven AI development mechanism to a user-centered perspective, requiring private-sector actors and social welfare agencies to engage stakeholders and communities in the development of AI tools. In this context, it is recommended to use agile methods, such as design thinking or service design, to integrate the user into the process.

Nondiscrimination and Inclusion

As a human rights profession, social work has a long tradition of nondiscrimination in practice (Androff, 2018; Mapp et al., 2019; Pelton, 2001). Especially in the datafied society, discrimination can be amplified and codified in automated systems (Gerards & Borgesius, 2022; Heinrichs, 2022). Although artificial intelligence is supposed to help make fact-based and evidence-based decisions, algorithms can also suggest unjustified or discriminatory recommendations (Aizenberg & van den Hoven, 2020) and, as Donahoe and Metzger (2019) describe, include “embedded bias” (p. 115). On this point, the UN report of the Special Rapporteur on extreme poverty and human rights notes that “(...) predictive analytics, algorithms and other forms of artificial intelligence are highly likely to reproduce and exacerbate biases reflected in existing data and policies.” (United Nations General Assembly, 2019, p. 22).

→ Nondiscriminatory, rights-based social work practice ensures access to professionals, services, and resources for all people, including marginalized and under-represented populations. Applying nondiscrimination in an Artificial Intelligence Enhanced Social Work context calls for inclusive data sets, inclusion, and representation across decision makers and AI creators as well as inviting a range of affected population members in the review and refinement process of AI tools.

The Profession Mandate

This component of the triple mandate focuses on professionals’ obligations to the social work profession. Professionals are encouraged to engage in continuous learning, professional development, and ethical reflection. This involves upholding the ethical standards, values, and principles of the profession and promoting its positive image within society.

Interdisciplinarity and Collaboration

Interdisciplinary collaboration and discourse with experts across diverse fields, including computer science, hold paramount importance in the realm of digital and Artificial Intelligence Enhanced Social Work (Chui & Ko, 2020; Perron et al., 2010). In this context, it is also important to build partnerships (Berzin et al., 2016; Uehara et al., 2013), which can help professionals work together across state lines (Berzin et al., 2016). Social workers should share and learn from best practices and leverage external support in the form of consultants and partners (Berzin et al., 2016; King et al., 2012; Mackrill & Ebsen, 2018; Nesmith, 2018). Aizenberg and van den Hoven (2020) point out that this is not a straightforward process, but “learning to communicate across disciplines, resolving value tensions between stakeholders (…)” (p. 11) is important. When cooperating with private-sector companies, for example from the technology industry, it is important to assert the human rights view (Donahoe & Metzger, 2019; United Nations General Assembly, 2019), because such companies often “(...) operate in an almost human rights-free zone (...)” (United Nations General Assembly, 2019, p. 2). Thus, adherence to, for example, the UN Guiding Principles on Business and Human Rights (UNGPs) should be ensured (Donahoe & Metzger, 2019).

 → To effectively use Artificial Intelligence Enhanced Social Work, organizations need to establish interdisciplinary teams that include social work, computer science, ethics, and privacy experts. This approach promotes full understanding and facilitates assessment of ethical and privacy issues. Forums and platforms also offer significant possibilities for fostering knowledge sharing among professionals from different backgrounds and identifying best practices. The fundamental values and roots of social work should be taken into account when cooperating with economic players, for example.

AI-Future Skills and Continuing Education

As early as the 1980s, there were concerns about the need for human knowledge and skills to use AI in a meaningful and human-centered way (Gill, 1988). It is extremely important that professionals are educated and sensitized in data and AI literacy (Fiok et al., 2022), and there should be regular opportunities for training (Berzin et al., 2015). In this context, the so-called future skills are important for digital and technological transformation (Ehlers & Kellermann, 2019; Meyer-Guckel et al., 2019), also with regard to digital social work (Garkisch, 2023). The focus on human–machine (AI) interaction (Livingston & Risse, 2019) and legal affairs (United Nations General Assembly, 2019) should be taken into account here, too.

 → It is imperative to enhance computational thinking, AI, and data literacy among social workers and to develop their skills for evaluating automated systems. Social workers must receive thorough training and education concerning AI technologies, with a specific emphasis on ethical and privacy concerns, to support the ethical and responsible utilization of AI in their field. Not only is knowledge about AI and its applications relevant, but the selection of appropriate tools and the integration of AI into social work education are also crucial.

Ethical and Responsible Issues

Social work is and must continue to be perceived as a face-to-face profession (Baker et al., 2014; Chang et al., 2015). At the same time, social work is ethically grounded, focused on human rights (Boddy & Dominelli, 2017; Sanders & Scanlon, 2021) and social justice (Boddy & Dominelli, 2017; Chow et al., 2021; Edwards & Hoefer, 2010). Ethics holds a crucial role in the implementation of AI (Bostrom & Yudkowsky, 2018; Etzioni & Etzioni, 2017), and ethical and political guidelines can help to overcome the associated ethical challenges (Larsson, 2020; Lo Piano, 2020; Smuha, 2019). Above all, in designing human rights–based AI systems, dignity, freedom, equality, solidarity, and the right to life are the overriding ethical values (Aizenberg & van den Hoven, 2020). (Local) core principles of social work should also be taken into account. With regard to the USA, for example, the “Guiding Principles” of the Grand Challenges for Social Work are relevant here: social justice, inclusiveness, diversity, and equity (Barth et al., 2019). Furthermore, it is important to ensure that the human scope for decision-making is not eliminated.

 → It is critical to ensure that technological applications uphold the fundamental principles of the profession, including human rights and justice, when using AI in social work. Establishing ethical and policy guidelines to tackle the ethical risks related to AI underscores the importance of explicit standards and protocols for promoting the responsible use of AI applications in social work. Such guidelines serve to clarify ethical concerns and facilitate decision-making, and they might require partnerships with national and international professional associations and the International Organization for Standardization. In addition, informed consent by the client and hybrid counseling services should also be considered.

The Social Agency Mandate

In addition to their responsibilities to clients, professionals are also accountable to the social agency or organization that employs them. This mandate acknowledges that professionals play a crucial role in supporting the mission, goals, and effectiveness of their employing agency. They are expected to collaborate with colleagues, follow organizational policies and procedures, and contribute to the overall functioning and success of the agency. This includes managing resources efficiently, meeting performance targets, and upholding the reputation and integrity of the agency.

Governance, Guiding, and Monitoring

Artificial intelligence has implications for organization management (Hilb, 2020) and will require new or different governance structures (Hilb, 2020; Janssen et al., 2020; Razzaque, 2021). This applies in particular to the management and preparation of data and the IT infrastructure required for this (Janssen et al., 2020). Even though governance and social work have already been discussed in many ways and are not entirely new (Dominelli & Holloway, 2006, 2008; Frahm & Martin, 2009; Webb, 2023), a rethink is needed in the AI environment (Dafoe, 2018; Taeihagh, 2021). With regard to respect for human rights, AI governance models are needed in particular (Cath, 2018).

 → AI governance in social work involves the development and implementation of policies and procedures to ensure that the use of AI technologies meets ethical standards and privacy regulations. Oversight ensures that AI has guardrails, helping to improve the quality of social work services and to protect the rights and privacy of clients. A governance structure includes, for example, ethical guidelines, transparency and reporting procedures, participation processes (for clients and workers), and evaluation instruments.

Resources, Infrastructure, and Data

Budget cuts are on the agenda of many organizations (Schoech & Bolton, 2015), resulting in limited and contested resources (López Peláez & Marcuello-Servós, 2018). In addition, working with clients in the digital space is time intensive (Aasback, 2022), and there is a lack of basic IT resources and skills in social work (Brownlee et al., 2010; Dellor et al., 2015). Furthermore, the availability of data is the basis for AI applications (Schneider & Seelmeyer, 2019), as in many cases data is simply stored in “silos” without being processed (Coulton et al., 2015). However, data alone is not enough; it must be of high quality, structured, and available over a certain period of time (Meilvang & Dahler, 2022; Victor et al., 2021).

 → Organizations facing resource constraints can prioritize resource allocation and reduce unnecessary expenditures. It is essential to invest in employee training to enhance their digital expertise, given the time-consuming nature of digital work. Enhancing efficiency in the use of digital resources can be accomplished through automation and streamlining work processes. Additionally, collaborations with other institutions can allow for resource-sharing and the development of collective approaches to address resource deficiencies.

Leadership, Change, and Transparency

Digital transformation, including the implementation and use of AI technology in organizations, is a process of change (Li, 2020; Mugge et al., 2020). Social work is a caring discipline, characterized by classical structures and traditions (Aasback & Røkkum, 2022). As a result, decision-making and change implementation pathways are time consuming (Aasback & Røkkum, 2022; Sullivan-Tibbs et al., 2022). To counteract these traditional structures and the challenges mentioned, agility plays a role within social work (Jeyasingham, 2016, 2020). Furthermore, there is a certain “aversion” to technology within social work (Baker et al., 2014; Barrera-Algarín et al., 2021). A transparent and structured change management process is therefore of central importance (Hanelt et al., 2021; Schwarzmüller et al., 2018), albeit one attentive to the specific characteristics of social agency organizations (Schiffhauer & Seelmeyer, 2021).

 → Social work organizations should recognize the change process as a comprehensive transformation and set realistic expectations for AI adoption. It is critical to provide training and resources to overcome technology reservations and strengthen the digital literacy of the team. Transparent, structured leadership and change management is essential to ensure an efficient transition to digital technologies.

Limitations and Conclusion

In conducting our systematic literature review, we acknowledge the limitations outlined here. The primary constraints pertain to the definition of keywords and the selection of the appropriate time frame for our review. These challenges are in line with issues often encountered in scoping/systematic reviews (Colquhoun et al., 2014). Regarding keyword definition, we took a proactive approach to this limitation by conducting a pilot study to comprehensively derive keywords from existing AI and social work publications, as well as those from other sectors. Additionally, we focused on English peer-reviewed articles for this analysis, a common practice that comes with its own limitations, as both social work and AI discussions are global in nature. Consequently, sources such as books, book chapters, conference papers, dissertations, and gray literature are not included in this analysis. It is worth noting that AI in social work is predominantly discussed in social work journals, but we made an effort to encompass interdisciplinary databases. Lastly, our systematic literature review covers research from the past 15 years, and it is important to recognize that our findings primarily represent the current empirical landscape, as we relied solely on existing literature.

In conclusion, this review has provided a comprehensive overview of the current state of research on artificial intelligence and social work. Through our systematic literature review, we have identified key themes and trends in the scholarship, as well as gaps and limitations that suggest areas for future research. We also explored the potential for AI to enhance social work practice and improve outcomes for clients and organizations. Overall, our findings suggest that there is growing interest in the intersection of these two fields and that there is great potential for collaboration and innovation. However, we also recognize the need for caution and critical reflection, as AI technologies can have unintended consequences and reproduce existing biases and inequalities. Moving forward, we recommend that social work researchers and practitioners engage in ongoing dialogue and collaboration with AI experts and stakeholders and prioritize ethical considerations and social justice principles in their work. We also encourage further research on the specific applications and impacts of AI in social work practice, as well as on the perspectives and experiences of professionals, social agencies, and clients with our Artificial Intelligence Enhanced Social Work (or “Artificial Social Work”) framework. In doing so, we can ensure that AI is used in ways that are consistent with the values and goals of social work and that it contributes to a more just and equitable society.

Endnotes

The ELSI criteria originally come from the care/health sector, in the context of the implementation and use of technologies (Greenhalgh et al., 2017; Kunze, 2020).