Abstract
Social work, as a human rights–based profession, is globally recognized as a profession committed to enhancing human well-being and helping meet the basic needs of all people, with a particular focus on those who are marginalized, vulnerable, oppressed, or living in poverty. Artificial intelligence (AI), a sub-discipline of computer science, focuses on developing computers with decision-making capacity. The impacts of these two disciplines on each other, and on the ecosystems that social work is most concerned with, have considerable unrealized potential. This systematic review aims to map the research landscape of social work AI scholarship. The authors analyzed the contents of 67 articles and used a qualitative analytic approach to code the literature, exploring how social work researchers investigate AI. We identified themes consistent with Staub-Bernasconi’s triple mandate, covering the profession level, social agencies (organizations), and clients. The literature has a striking lack of empirical research about AI implementations or the use of AI strategies as a research method. We present the emergent themes (possibilities and risks) from the analysis as well as recommendations for future social work researchers. We propose an integrated model of Artificial Intelligence Enhanced Social Work (or “Artificial Social Work”), a marriage of social work practice and artificial intelligence tools. This model is based on our findings and informed by the triple mandate and the human rights framework.
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
—Eliezer Yudkowsky
Introduction
Social work is deeply rooted in a longstanding tradition worldwide (Ehrenreich, 1985; Healy, 2008; Stuart, 2013): “For well over a century, social workers have played a powerful role in lifting the nation out of the distress that accompanied industrial and social transformation (...).” (Uehara et al., 2013, p. 165). Social work is primarily a profession aimed at driving and analyzing (social) changes for individuals and their communities (IFSW, 2014). One of the biggest and perhaps most significant disruptions today for organizations and society is digitalization (Dufva & Dufva, 2019; Majumdar et al., 2018). In this context, the UN Special Rapporteur on extreme poverty and human rights points out in his report on “Extreme poverty and human rights” (United Nations General Assembly, 2019) that “[t]he digital welfare state is either already a reality or emerging in many countries across the globe.” (United Nations General Assembly, 2019, p. 2). But at the same time, and in contrast: “The social sector does not benefit fully from the digital revolution. If this trend continues, the constituencies the sector serves could end up on the wrong side of the digital divide.” (Coulton et al., 2015, p. 4). Thus, the purpose of this paper is to explore the impact of digital change, and artificial intelligence in particular, on social work, especially in terms of its potential possibilities and risks.
In social work practice, digital transformation is rising in prominence (Goldkind, 2018; Rafferty & Waldman, 2006; Steiner, 2021) and is sometimes referred to as digital social work (Goldkind, 2018) or e-social work (López Peláez & Marcuello-Servós, 2018; López Peláez et al., 2018). Technologies utilized in social work include social media, virtual reality, and chats (Boddy & Dominelli, 2017; Campbell & McColgan, 2016; Garkisch, 2020; Trahan et al., 2019). Artificial intelligence, a specific constellation of algorithmic and computing strategies, has gained significant attention across industries, often being touted as a game changer (Castellanos, 2020; Davenport & Kalakota, 2019; Ditsche et al., 2023; Lee et al., 2018). Social work has been more conservative in its approach to artificial intelligence, although artificial intelligence and machine learning technology have entered social work and social services over the last few decades (Gamble, 2020; Gillingham, 2006; Goldkind, 2018; Schwartz et al., 2017). These developments are intensified by perennial challenges in social work such as limited resources, increasing caseloads, and shortages of skilled social work labor (Berg-Weger & Schroepfer, 2020; Grimwood, 2023; Mänttäri‐van der Kuip, 2016), as well as by overarching trends like sustainability (Jayasooria, 2016; Matthies et al., 2020), globalization (Dominelli, 2010), and growing inequality and injustice (Dorling, 2015). Artificial Intelligence Enhanced Social Work can be a new approach to these challenges, offering, for example, the following advantages: predictive thinking (Gillingham, 2016b; Grządzielewska, 2021), enhancements in quality control (Kum et al., 2015; Pan et al., 2017; Santiago & Smith, 2019), tailored services for better client/user orientation (Bako et al., 2021; Fink & Brito, 2021), and transparency (Coulthard et al., 2020).
Nevertheless, careful consideration must be given to the integration of technology in social work to ensure adherence to ethical principles and respect for human dimensions and rights (Rodriguez & Storer, 2020; Schneider & Seelmeyer, 2019).
While artificial intelligence (AI) transformations promise to “revolutionize” or “evolve” social work practice, as demonstrated by the development of large language models like ChatGPT (Marquart & Goldkind, 2023), it is crucial to critically assess the implications of AI technology for the protection of human rights (Livingston & Risse, 2019; Raso et al., 2018; United Nations General Assembly, 2019). The significance of human rights in the context of technological advancements such as AI is particularly crucial in social work, because this profession is recognized by the United Nations as a professional group that strives to enable a humane life (IFSW, IASSW und UN Centre for Human Rights, 1994). Even today, human rights remain the systematic basis and standard for professional action (IFSW, 2018; Mapp et al., 2019; Staub-Bernasconi, 2016). In this context, principles such as human dignity, nondiscrimination, participation, transparency, and accountability are central (Mapp et al., 2019). Therefore, it is imperative to ensure that the adoption of AI technologies aligns with the core values of human rights that guide the social work profession.
The intersection of artificial intelligence and social work represents a critical area of study, underscoring the necessity for innovative methods and models to improve the efficacy of social services (Goldkind, 2021; Uehara et al., 2013). This convergence demands the establishment of comprehensive frameworks and regulations to guide ethical and human rights–centered AI integration into social work practices (O'Sullivan et al., 2019; Winfield, 2019). It emphasizes the importance of fostering interdisciplinary and inter-organizational collaboration among researchers, practitioners, and technologists. Such collaboration is essential for exploring and developing potential solutions that can have a positive impact on society, thereby enhancing the effectiveness of social work through the thoughtful application of AI technologies (Perron et al., 2010; Sabah & Cook-Craig, 2010).
We address these gaps in the literature with this systematic review by considering the following: While some research has examined the use of AI in social work, most studies have focused solely on specific sectors, such as child welfare (Gillingham, 2006; Schwartz et al., 2017), youth work (Rice et al., 2018; Ting et al., 2018), or mental health services (Gamble, 2020). However, there are still significant gaps in our understanding of how AI can be best leveraged to enhance social services and improve results for clients (Frey et al., 2020). A comprehensive and unified model, which can serve as a blueprint for an Artificial Intelligence Enhanced Social Work, is currently lacking in the field. Furthermore, there is a dearth of evidence concerning the possibilities and risks of applying artificial intelligence to social work in comparison to other sectors (Lee & Yoon, 2021; Susar & Aquaro, 2019).
The aim of this research is to provide a comprehensive review of the current state of AI and social work research, focusing on its possibilities and risks. Existing findings will be synthesized, and research gaps will be identified. Moreover, this study aims to develop and examine a cohesive model for AI’s application in human rights–based social work: the Unified Model for Artificial Intelligence Enhanced Social Work. Having already explained the importance of human rights in the context of digitalization, the framework aligns with the human rights–based mandate, also known as the “triple mandate” of social work (Staub-Bernasconi, 2007, 2019).
This article is structured as follows: the “Method” section outlines the methodology employed and explains how studies were selected. The “Results” section provides an objective overview of the possibilities and risks associated with Artificial Intelligence Enhanced Social Work. The “Discussion” section features the Unified Model for Artificial Intelligence Enhanced Social Work. The article concludes with a discussion of the limitations and suggestions for further research.
Method
Research Questions
As yet, there is no single, unified body of literature focusing on social work and AI. This systematic review first sought to address the question of how social work researchers are conceptualizing studies on social work and AI. Such a mapping of the literature can help to orient future research by addressing the following research questions (RQ):
- RQ 1: What are the possibilities and risks in the social work and AI literature?
Under this broad guiding question, this review interrogated the following sub-questions:
- RQ1a) Which practice areas is the social work AI literature focused on?
- RQ1b) What are the possibilities and risks in the literature for the social work profession?
- RQ1c) What are the possibilities and risks in the literature for social agencies?
- RQ1d) What are the possibilities and risks in the literature for clients?
Based on the possibilities and risks, we also want to answer the following questions:
- RQ 2: What might a Unified Model of Artificial Intelligence Enhanced Social Work look like?
- RQ 3: What are the gaps in the existing social work and AI literature?
Research Method: Systematic Literature Review
Aim
Our aim is to understand the intersection of social work and AI via the published scholarship on the subject. For this purpose, we have conducted a systematic literature review. This research method can be used to gather knowledge via a formal, evidence-based mechanism and to provide recommendations for policy, practice, and research (Tranfield et al., 2003). Our method is guided by review frameworks (Denyer & Tranfield, 2011; Tranfield et al., 2003) and existing literature analyses from the field of social work (Guo et al., 2009; McFadden et al., 2015; Weiss‐Gal, 2016; Zechner & Sihto, 2023). Figure 1 summarizes the structure of the research, based on Garkisch et al. (2017).
Keywords and Search Strings
Following the approach of Maier et al. (2016) in the context of keywords, we distinguish between topic-related (context of AI) and sector-related (context of social work) keywords. To obtain a holistic picture, our search for keywords was oriented to systematic literature reviews (SLRs) in social work (Ahuja et al., 2022; Sánchez-Sandoval et al., 2020) as well as in the AI literature (Jiang et al., 2017; Yu et al., 2018; Zawacki-Richter et al., 2019). The keywords are presented in Table 1. In total, we identified eight topic-related keywords and four sector-related keywords.
Used Databases
In selecting our databases, we were also guided by the publications already mentioned: PsycINFO, Business Source Premier (EBSCO), SCOPUS, ProQuest’s ABI/INFORM Collection, Social Work Abstracts, and Social Services Abstracts. For cross-searching, we used Google Scholar.
Inclusion Criteria in the SLR
The main criteria for inclusion in the review were that studies had to (a) be published between 2009 and 2023, (b) match the mentioned keywords, such as AI and social work, (c) be published in English, (d) be peer-reviewed, and (e) have an empirical or theoretical grounding (for example, we excluded editorials).
Overview of the Search Process
We used a multi-stage process for our systematic review, supported by research colleagues and their valuable peer feedback. We documented our results step by step in an Excel spreadsheet (search protocol, available to readers upon request to the authors) and, at the end, in a PRISMA diagram (Moher et al., 2009; Page et al., 2021); see Figure 2.
In the second stage, to analyze our articles in depth, we used a qualitative content analysis (Macpherson & Holt, 2007; Pittaway & Cope, 2007). The content analysis was undertaken independently by two researchers (Lipsey & Wilson, 2001; Ritz et al., 2014). Finally, the results were discussed and refined as a team (Cinar et al., 2019; Garkisch et al., 2017).
Results
Article Characteristics (Quantitative Results)
Table 2 summarizes the articles in the sample. A total of 67 publications are included in the SLR’s sample (see Figure 2, PRISMA diagram). Relatively few of these articles (n = 9) were published between 2005 and 2015. A significant annual increase in the number of articles can be observed between 2019 and 2022, e.g., 12 publications in the single year 2020. This suggests that the topic of AI and social work is gaining interest in the academic social work literature. We categorized the articles by type and found six different methodological approaches: case study, conceptual, literature review, mixed methods, qualitative research, and quantitative research. It is clear that the topic is discussed more on a conceptual level (25 articles), while quantitative approaches (20 articles), mainly evaluating the implementation of AI systems, are a close second. Regarding the geographical origin of the first authors, it is noteworthy that Australia is strongly represented (14 articles). Notably, however, a single author has written prolifically on the datafication of social work and authored 11 of the articles in the sample.
Key Findings (Qualitative Results)
- RQ1a) Which practice areas is the social work AI literature focused on?
Of the articles in this sample, the majority focus on the practice domains of child protection and child welfare. Twenty-one of the coded articles either focused on the use of administrative data in child welfare organizations to feed algorithms or offered perspectives, advice, and suggestions on the potential ethical challenges of using administrative data and algorithms in child welfare agencies. Far fewer articles were dispersed across the fields of mental health (n = 5), health (n = 6), and organizational practice (n = 4).
- RQ1b) What are the possibilities and risks in the literature for the social work profession?
- RQ1c) What are the possibilities and risks in the literature for social agencies?
- RQ1d) What are the possibilities and risks in the literature for clients?
Taken together, sub-research questions 1b through 1d focus on the possibilities and risks outlined in the social work and AI literature at three levels of consideration, based on the triple mandate (Staub-Bernasconi, 2016, 2019):
- Profession level: the implications for professional social workers, including professional judgment and ethics.
- Social agency level: the implications and organizational considerations for social work agencies, including cost/benefit, risk assessment, and data quality.
- Client level: the direct impacts and implications for clients receiving services, including access to resources, quality of service, and overall client satisfaction.
Figure 3 summarizes the themes (possibilities and risks) we identified.
Possibilities at the Profession Level
Profession Development and Standardization
Artificial intelligence can help develop and shape the social work profession (Schneider & Seelmeyer, 2019; Victor et al., 2021). It can also help address the variability in how line workers make assessments (Ting et al., 2018). AI is also relevant to social work research: unstructured data can be harnessed and made available for research through text mining and machine learning (Victor et al., 2021). In difficult decision-making, AI can help reduce the influence of emotion (Tan, 2022). Furthermore, classification models, standards, and AI-supported processes ensure the quality of AI-supported services and the protection of clients’ rights and dignity in this evolving field (Devlieghere & Roose, 2018; Meilvang & Dahler, 2022; Rodriguez et al., 2019). In addition, AI may help to create social impact (Frey et al., 2020; Santiago & Smith, 2019), e.g., better health outcomes (Zetino & Mendoza, 2019).
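To make the text-mining claim concrete, the following is a purely illustrative sketch, not drawn from any of the reviewed studies, of how unstructured case notes might be reduced to researchable term frequencies. The case notes, stop-word list, and function names are hypothetical; a real pipeline would rely on dedicated NLP tooling.

```python
# Illustrative sketch only: turning unstructured case-note text into
# simple term-frequency data for research. All notes are invented.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "and", "to", "of", "in", "with", "was", "for", "on"}

def top_terms(notes, n=3):
    """Return the n most frequent content words across a set of notes."""
    tokens = []
    for note in notes:
        tokens += [w for w in re.findall(r"[a-z']+", note.lower())
                   if w not in STOP_WORDS]
    return [term for term, _ in Counter(tokens).most_common(n)]

case_notes = [
    "Client reported unstable housing and missed appointment.",
    "Housing situation unchanged; client missed second appointment.",
    "Referral made for housing support services.",
]
print(top_terms(case_notes))
```

Even this minimal frequency count surfaces a recurring theme ("housing") across notes, which is the basic idea behind making unstructured records available for research.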
Predictive Thinking and Risk Assessment
Through the existence and analysis of data, it is possible to look not only at past and present situations but also at future ones (Fink, 2018; Frey et al., 2020; Schwartz et al., 2017; Zetino & Mendoza, 2019): “Predictive analytics help develop models that explain what might happen. Once analysts have described what is happening, they can start to answer what is likely to happen next. This type of analysis is performed by extracting information from historical data to populate a model of what is likely to happen, i.e., a predictive model.” (Zetino & Mendoza, 2019, p. 414). The nomenclature varies: predictive analytics (Schwartz et al., 2017; Zetino & Mendoza, 2019), predictive risk modeling (Gillingham, 2016b, 2017), or automated identification (Victor et al., 2021).
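As a purely illustrative sketch of the idea in the quotation above, populating a model from historical data to estimate what is likely to happen next, the following assumes invented historical records and estimates outcome rates per feature bucket. It is not an implementation from the reviewed literature; real predictive risk modeling uses far richer features and validated statistical methods.

```python
# Illustrative sketch only: a toy "predictive model" populated from
# hypothetical historical records (prior referrals, adverse outcome?).
from collections import defaultdict

def fit_risk_model(records):
    """Estimate P(adverse outcome | prior-referral bucket) from history."""
    counts = defaultdict(lambda: [0, 0])  # bucket -> [adverse, total]
    for prior_referrals, adverse in records:
        bucket = min(prior_referrals, 3)  # pool sparse high counts
        counts[bucket][0] += int(adverse)
        counts[bucket][1] += 1
    # Laplace smoothing keeps estimates away from 0 and 1
    return {b: (a + 1) / (t + 2) for b, (a, t) in counts.items()}

def predict_risk(model, prior_referrals, default=0.5):
    return model.get(min(prior_referrals, 3), default)

# Hypothetical history: (number of prior referrals, adverse outcome?)
history = [(0, False), (0, False), (0, True),
           (2, True), (2, True), (2, False),
           (3, True), (3, True)]

model = fit_risk_model(history)
low, high = predict_risk(model, 0), predict_risk(model, 3)
print(f"risk at 0 prior referrals: {low:.2f}, at 3+: {high:.2f}")
```

The point of the sketch is only the structure of the approach: historical data in, a fitted model out, and a probability estimate for a new case, exactly the "what is likely to happen next" question described by Zetino and Mendoza (2019).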
Decision Support
As different authors point out, AI and machine learning can support the decision-making process in social work: “(…) dataism promises to improve decision-making (…)” (Devlieghere et al., 2022, p. 328), or “There is a widespread belief that machine learning tools can be used to improve decision-making in health and social care.” (Keen et al., 2021, p. 57). On the whole, the advantages outweigh the disadvantages (Pan et al., 2017). Employees thus receive digital support in their decision-making (Gillingham, 2019b), and management decisions can likewise be supported digitally (Bako et al., 2021; Zetino & Mendoza, 2019), enabling better strategic decision-making (Zetino & Mendoza, 2019). The consequences are improved and simplified decision-making (Alfandari et al., 2023; Devlieghere et al., 2022), autonomous decisions (Liedgren et al., 2016), better justification for decisions (Gillingham, 2013), and reduced susceptibility to error (Gillingham, 2019a). To sum it up: “(…) the benefits (…) gained through using this new technology should enable professionals to spend more time working with their clients.” (Schneider & Seelmeyer, 2019, p. 118). In principle, however, AI should only be used to support decisions; it should not replace human decision-making (Victor et al., 2021).
Possibilities at the Social Agency Level
Information and Data Exchange
By using AI, data can also be shared (more easily) between authorities and service providers (Devlieghere & Roose, 2018; Fink & Brito, 2021). As a consequence, the care process can be improved (Devlieghere & Roose, 2018), and better channels for communication and information exchange can be established (Fink & Brito, 2021). As noted above, the exchange of data also aids decision-making (Gillingham, 2011b). Furthermore, stakeholders’ views can be better discussed together (Devlieghere & Roose, 2018) in order to coordinate the various interventions (Zetino & Mendoza, 2019). This can be very helpful, for example, in the field of probationary services (Ting et al., 2018) or in the context of nursing and health (Zetino & Mendoza, 2019).
Resource Allocation
With AI, organizations can maximize the use of existing data sets, which no longer need to be collected in a labor-intensive way (Schofield, 2017). This helps to reduce workload (Schofield, 2017) and costs (Schofield, 2017; Victor et al., 2021; Zetino & Mendoza, 2019). The consequences are “cost-effective solution(s)” (Victor et al., 2021, p. 651) or even “cost savings” (Zetino & Mendoza, 2019, p. 409). In addition, services can be designed more efficiently, allowing scarce resources to be used in a more targeted and demand-driven manner (Rice et al., 2018).
Quality Control and Improvement
AI can be used to monitor the quality of social work services and ensure that organizations are using best practices. Thus, using AI can promote the optimization of quality within the social work sector and agencies (Rodriguez et al., 2019; Schwartz et al., 2017). The quality improvements can be manifold; for example, AI in social work makes evidence-based decision-making possible (Kum et al., 2015; Liedgren et al., 2016; Lushin et al., 2019; Zetino & Mendoza, 2019). The accuracy of risk assessments can also be improved (Schwartz et al., 2017), and AI can “(…) enable decisions to be more objective and impartial.” (Schneider & Seelmeyer, 2019, p. 117).
Possibilities at the Client Level
Transparency
Standardization and a (data-protected) objective basis for decision-making provide the client with transparent insights (Devlieghere et al., 2018; Meilvang & Dahler, 2022). Ideally, this approach can increase the acceptance of decisions made by social workers and even judicial authorities (Coulthard et al., 2020). From the perspective of transparency, AI tools or necessary questionnaires are therefore filled out together with the clients (Devlieghere et al., 2018). Transparent AI systems decrease misunderstandings and miscommunications since transparent decision-making procedures are more traceable (Meilvang & Dahler, 2022; Ting et al., 2018).
Individualized and User-centered Offers
The (social) needs and requirements of clients should be given special attention (Bako et al., 2021). Better-tailored offers can also be made using data (Grządzielewska, 2021; Rice et al., 2018; Rodriguez et al., 2019; Schwartz et al., 2017): Bako et al. (2021) point out that it is possible to “(…) gain better insight into the most needed social interventions in the patient population (..)” (op. cit., p. 2), for example, to “enhance HIV Prevention Interventions for Youth Experiencing Homelessness” (Rice et al., 2018, p. 551). In sum, services can be designed to be more closely tailored (Schwartz et al., 2017).
Early Interventions
In this way, social work can proactively identify those who need services and offer early interventions, with no action required on the part of the potential client (Fink, 2018; Schwartz et al., 2017). It is also possible to make predictions (Gillingham, 2016b). In sum: “Data science has the potential to predict an event before it happens.” (Santiago & Smith, 2019, p. 352).
Risks at the Profession Level
Insufficient Research, Professionalization, and Standards
Social work AI research is still in its infancy (Zetino & Mendoza, 2019), and there is a particular lack of research focusing on data science (Cariceo et al., 2018). This is partly because AI has not been widely used in social work and there is a paucity of case studies (Liedgren et al., 2016). As a result, there is a lack of evidence-based or evidence-informed recommendations for the use of AI (Cresswell et al., 2020; Landau et al., 2022). There is also a need for new research approaches to the use and testing of AI (Gillingham, 2011a). In addition, higher education does not prepare social workers for the use of technology, and especially AI (Santiago & Smith, 2019; Sicora et al., 2021). Implications for education and training are missing (Sicora et al., 2021), for example in the context of data science (Cariceo et al., 2018) or statistical methods (Coulthard et al., 2020). The result is ignorance of, and uncertainty about, the use of AI (Gillingham, 2006). There is often a lack of knowledge about how to handle and use data in social work (Gillingham, 2020), even as organizations are under intense pressure to improve efficiency (Tan, 2022). Standards and regulations are still missing (Cresswell et al., 2020; Gamble, 2020). Furthermore, methodological and reporting standards for AI and big data research and interventions are necessary (Cresswell et al., 2020).
Resistance and Knowledge Gaps
As indicated earlier, social work is primarily a relational profession, focused on interpersonal and individual-level decision-making (Devlieghere et al., 2022; Fink & Brito, 2021). It is not surprising, therefore, that there is a lack of acceptance of the use of AI among social workers. Some results suggest that social workers try to undermine the use of AI (Devlieghere et al., 2018), display “confusion and frustration” (Gillingham, 2013, p. 440) and skepticism (Schneider & Seelmeyer, 2019), or strike during rollout (Ting et al., 2018). There is also a lack of knowledge about the possible uses of AI in social work at the organizational level (Devlieghere et al., 2018; Pan et al., 2017), and, furthermore, a lack of acceptance among clients, who are frequently not involved in the AI process (Devlieghere et al., 2018).
Ethical Issues
Closely related to dehumanization are the ethical risks of using AI in social work (Gillingham, 2016c; Landau et al., 2022). Especially in the field of child welfare, there is an immense danger of developing unfair and unjust models (Rodriguez et al., 2019). Standards for social agencies are missing (Cresswell et al., 2020; Gillingham, 2016a). Here, ethical, legal, and social implications (ELSI) criteria should be developed for use. These criteria are already being applied in social work (Schneider & Seelmeyer, 2019).
Risks at the Social Agency Level
Data Availability and Quality
Data are needed for developing AI in social work, but such data are often unavailable or insufficient (Schneider & Seelmeyer, 2019); other problems include the lack of a strategic data collection process (Gillingham, 2016c), an absence of longitudinal data (Kum et al., 2015), poor data quality (Meilvang & Dahler, 2022), and unstructured data (Victor et al., 2021). On the one hand, it is valuable to maximize data (Ting et al., 2018); on the other hand, too much data becomes too cluttered (Schofield, 2017). There is a risk that data is too fragmented to capture the most important parts (Devlieghere et al., 2018). An open data infrastructure for sharing data is missing (Walter et al., 2021), and there are also inadequate data support systems to manage and interpret data (Gillingham, 2019a). In addition, not all data can be used for analysis, due to legal restrictions (Schneider & Seelmeyer, 2019). In sum, data can be both an enabler and a hindrance (Alfandari et al., 2023).
Safety and Security
Program participant data can be sensitive and confidential (Schneider & Seelmeyer, 2019). At the same time, discretion (with a focus on data) is a key concern in social work, and there is a risk that digital discretion will not be respected (Ranerup & Henriksen, 2022). Data protection (Kum et al., 2015; Schneider & Seelmeyer, 2019) and data security (Zetino & Mendoza, 2019) play an important role in the context of social work. The consequences of non-compliance cannot be assessed directly; however, there could be threats to privacy (Fink, 2018; Fink & Brito, 2021), for example, if data is used without permission (Fink, 2018).
Missing Resources and Infrastructure
The resource challenge for AI-enhanced social work is multifaceted. Adequate financial resources may be absent from the outset (Pan et al., 2017). Furthermore, there is a lack of (open) data infrastructure (Walter et al., 2021). And not only is the technological infrastructure inadequate; education and training programs for Artificial Intelligence Enhanced Social Work are likewise missing or insufficient (Sicora et al., 2021).
Risks at the Client Level
Dehumanization
Numerous authors identify dehumanization as a problem of the use of AI in social work (Devlieghere et al., 2022; Fink & Brito, 2021), pointing to the move towards automation bias, the belief that computers should be trusted over humans (Devlieghere et al., 2022). Social work is a relational discipline (Devlieghere et al., 2022; Fink & Brito, 2021). When making decisions, data alone are not enough; they must always be placed in an individual and situational context (Schneider & Seelmeyer, 2019). AI could eliminate individual casework decisions as it increasingly takes over decision-making (Schneider & Seelmeyer, 2019). It is therefore important “(…) that the resources and focus on analytic big data never distract from the importance of these relationships.” (Fink & Brito, 2021, p. 172). Overuse of artificial intelligence technologies can hinder the interpersonal bond and relationship between social workers and clients (Devlieghere et al., 2022).
Misinterpretation and Bias
There are still insufficient means of preventing bias (Meilvang & Dahler, 2022). The causes of bias are many and varied, e.g., “incorrect recommendations” (Gillingham, 2019b, p. 277), data bias (Landau et al., 2022), subjective assessment (Cresswell et al., 2020), and further AI failures (James & Whelan, 2021): “Predictive modeling will never be 100% accurate.” (Ting et al., 2018, p. 643).
Data and Algorithmic Injustice
Some population groups tend to be under- or over-represented in the data (Walter et al., 2021), such as indigenous peoples (Walter et al., 2021), younger people (Fink & Brito, 2021), or the “poor” (Whelan, 2020): “Some of our most vulnerable populations are in danger of becoming even more invisible by the algorithms (...)” (Zetino & Mendoza, 2019, p. 415). Also, often only the digital data are interpreted, without consideration of the offline context (Frey et al., 2020). The consequences can be varied: exclusion (Walter et al., 2021), injustice (Fink & Brito, 2021), prejudice (Gillingham, 2019b), or stigmatization and discrimination (Goldkind, 2021; Keddell, 2019). Furthermore, false interventions and decisions (Fink & Brito, 2021) or dangerous assumptions (Frey et al., 2020) are possible, too.
Discussion
Summary
This systematic review sought to document the universe of social work research on artificial intelligence. Based on our analysis, three key observations emerged.
- First, to date, the academic literature on social work and AI is primarily conceptual, and the discourse these articles engage in is insular, being primarily situated in social work journals.
- Next, where scholars attempt to use machine learning or predictive analytics as a research method, the content area or population under investigation is most frequently child welfare.
- Lastly, documenting the social work and AI research ecosystem is challenging, as AI can be considered both a method (for example, using topic modeling to analyze text data) and a subject (for example, using chat technologies in mental health services). This conflation of content and method is not a uniquely social work challenge, but it adds complexity to how, as a field, we can shape a uniquely social work perspective on AI.
The study of computer intelligence, or the field of artificial intelligence research, is now roughly 75 years old; however, recent interest has accelerated dramatically with the increase in computing power and the exponential expansion of data production and collection due largely to internet-based capitalism (Russell & Norvig, 2016). While social work as a practice has existed for roughly double the duration of AI as a field, the social work scholarly literature on AI is considerably more limited than this longer timeline might suggest. The lack of a singular, focused social work literature has been noted for the last 100 years and was famously captured by Abraham Flexner in his 1915 address to the national social work organization (Flexner, 1915). Indeed, Longhofer and Floersch (2012) describe how, as a field, there seems to be no center of knowledge production in social work. And, as outlined in the Grand Challenges for Social Work, it is also our task to shape our own work: “(…) social work can and must play a more central, transformative, and collaborative role in society, if the future is to be a bright one for all.” (Uehara et al., 2013, p. 165).
Taken together, these conditions have created a vacuum in the social work AI research literature. At the same time, the digital age is leading to a reassessment of human rights, with a focus on their enforcement and guarantee by the state (Berlyavskiy et al., 2020). Furthermore, this age has highlighted the potential of digital technologies to influence the political and social environment, which has implications for human rights (Monshipouri, 2017). Mathiesen (2014) argues for a declaration of digital rights to ensure that human rights are respected in digital contexts: “In the current digital age, human rights are increasingly being either fulfilled or violated in the online environment.” (Mathiesen, 2014, p. 2). The intersection of artificial intelligence and human rights presents possibilities and challenges. AI has the potential to improve economic and social well-being and human rights (Cath, 2018), but it also raises concerns about privacy, discrimination, and societal impacts (Raso et al., 2018). In the results section of the article, we discuss these possibilities and risks in detail in relation to the profession of social work. However, respecting and upholding human rights requires more than this: it necessitates a framework (Donahoe & Metzger, 2019; Mathiesen, 2014). Such a framework should be utilized to evaluate and address the human rights impacts of AI, particularly in areas such as data protection (Raso et al., 2018), and both the framework and the design of AI systems should be guided by human rights principles (Aizenberg & van den Hoven, 2020). The “Report of the Special Rapporteur on extreme poverty and human rights” (United Nations General Assembly, 2019) can also provide orientation for the content of such frameworks and has informed the developments presented here. These considerations necessitate the development of a framework for the social work profession, too. A proposed framework is presented in the following section of the article.
Recommendations for Practice: Introducing a Conceptual Model for Artificial Intelligence Enhanced Social Work
Envisioning a practice in which social work and computing power collaborate to solve clients’ and communities’ challenges and to identify possibilities for improving the conditions of both, we propose below, based on this review of the literature, a model for an Artificial Intelligence Enhanced Social Work practice: one that merges the empathy, interpersonal skills, and social justice power of social work with the technological fortitude of artificial intelligence. This framework, called the Unified Model for Artificial Intelligence Enhanced Social Work (see Figure 4), draws on the Staub-Bernasconi triad or triple mandate, focusing on clients, social agencies, and the profession (Staub-Bernasconi, 2007). Its aim is to provide guidance for professionals to fulfill their roles and responsibilities ethically, promoting social justice and client well-being. It recognizes the significance of their contributions to social work organizations and the wider professional community, and it advocates for a holistic approach that integrates ethical and human rights considerations beyond the client-professional relationship.
The Unified Model of Artificial Intelligence Enhanced Social Work hews closely to Staub-Bernasconi’s notion of the triple mandate of balancing the inter-related priorities of the profession, clients, and the social agency; these mandates parallel themes in the social work and AI literature. Below we describe the core ideas and concepts of Artificial Intelligence Enhanced Social Work.
The Client Mandate
For Staub-Bernasconi, this aspect of the framework underscores the primary responsibility of professionals to their clients or service users. It emphasizes that professionals should prioritize the well-being and best interests of the individuals, families, or communities they serve. This includes ensuring that services provided are of high quality, culturally sensitive, and responsive to the specific needs and goals of clients. Three elements of the client mandate, echoed by both Staub-Bernasconi and Taylor’s data justice framework, are critical elements of the Artificial Intelligence Enhanced Social Work Model.
Accountability
Human rights scholars and philosophers of data justice suggest that the consumers of algorithmic and AI products have the right to hold the creators of such tools accountable (Katyal, 2019; Novelli et al., 2023). This is particularly relevant in the human service sector, and it is already being discussed in the health care sector (Habli et al., 2020; Murphy et al., 2021). Furthermore, we have to take a closer look at “economic and social rights in the digital welfare state” (United Nations General Assembly, 2019, p. 16).
→ Social work clients and consumers have a right to advocate on their own behalf to demand increased representation in data sets as well as greater transparency with regard to how an AI tool functions, what the tool is doing, and a mechanism for correction and repair, should such a tool cause harm.
Participation and User-centricity
Gatenio Gabel (2016) writes that the participation of all people in decision-making, especially those affected by such decisions, is a key aspect of rights-based approaches to social work practice. In the context of participatory design, a human rights approach emphasizes the primacy of human experience and well-being in evaluating artificial intelligence, positioning the effects on human lives as the central consideration in governance frameworks (Donahoe & Metzger, 2019). As Aizenberg and van den Hoven (2020) outline in the context of human rights and AI, it is therefore crucial to involve relevant social groups in the development of technology-enabled AI solutions “(...) to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.” (Aizenberg & van den Hoven, 2020, p. 1). This participation is gaining importance, particularly in the context of digital social work (Lyon & Koerner, 2016; Mois & Fortuna, 2020), and therefore in human rights–based Artificial Intelligence Enhanced Social Work, too. It leads to “(...) a higher level of acceptance and trust.” (Aizenberg & van den Hoven, 2020, p. 11).
→ Prioritizing clients’ rights to participation and participatory design would move AI development from a corporate-driven mechanism to a user-centered perspective, requiring private-sector actors and social welfare agencies to engage stakeholders and communities in the development of AI tools. In this context, it is recommended to use agile methods, such as design thinking or service design, to integrate users into the process.
Nondiscrimination and Inclusion
Nondiscrimination has a long tradition in social work practice as a human rights profession (Androff, 2018; Mapp et al., 2019; Pelton, 2001). Especially in the datafied society, discrimination can be amplified and codified in automated systems (Gerards & Borgesius, 2022; Heinrichs, 2022). Although artificial intelligence is supposed to help make fact-based and evidence-based decisions, algorithms can also produce unjustified or discriminatory recommendations (Aizenberg & van den Hoven, 2020) and, as Donahoe and Metzger (2019) describe, include “embedded bias” (p. 115). Here, the UN report of the Special Rapporteur on extreme poverty and human rights points out that “(...) predictive analytics, algorithms and other forms of artificial intelligence are highly likely to reproduce and exacerbate biases reflected in existing data and policies.” (United Nations General Assembly, 2019, p. 22).
→ Nondiscriminatory, rights-based social work practice ensures access to professionals, services, and resources for all people, including marginalized and under-represented populations. Applying nondiscrimination in an Artificial Intelligence Enhanced Social Work context calls for inclusive data sets, inclusion and representation among decision makers and AI creators, and the involvement of a range of affected population members in the review and refinement of AI tools.
The Profession Mandate
This component of the triple mandate focuses on professionals’ obligations to the social work profession. Professionals are encouraged to engage in continuous learning, professional development, and ethical reflection. This involves upholding the ethical standards, values, and principles of the profession and promoting its positive image within society.
Interdisciplinarity and Collaboration
Interdisciplinary collaboration and discourse with experts across diverse fields, including computer science, hold paramount importance in the realm of digital and Artificial Intelligence Enhanced Social Work (Chui & Ko, 2020; Perron et al., 2010). In this context, it is also important to build partnerships (Berzin et al., 2016; Uehara et al., 2013), which can enable collaboration across state lines (Berzin et al., 2016). Social workers should share and learn from best practices and leverage external support in the form of consultants and partners (Berzin et al., 2016; King et al., 2012; Mackrill & Ebsen, 2018; Nesmith, 2018). Aizenberg and van den Hoven (2020) point out that this is not a straightforward process, but “learning to communicate across disciplines, resolving value tensions between stakeholders (…)” (p. 11) is important. When cooperating with private-sector companies, for example from the technology industry, it is important to emphasize the human rights view (Donahoe & Metzger, 2019; United Nations General Assembly, 2019), because such companies often “(...) operate in an almost human rights-free zone (...)” (United Nations General Assembly, 2019, p. 2). Thus, adherence to, for example, the UN Guiding Principles on Business and Human Rights (UNGPs) should be ensured (Donahoe & Metzger, 2019).
→ To use Artificial Intelligence Enhanced Social Work effectively, organizations need to establish interdisciplinary teams that include social work, computer science, ethics, and privacy experts. This approach promotes a full understanding of ethical and privacy issues and facilitates their assessment. Forums and platforms also offer significant opportunities for fostering knowledge sharing among professionals from different backgrounds and identifying best practices. The fundamental values and roots of social work should be taken into account when cooperating with economic players, for example.
AI-Future Skills and Continuing Education
As early as the 1980s, concerns were raised about the need for human knowledge and skills to use AI in a meaningful and human-centered way (Gill, 1988). It is extremely important that professionals are educated and sensitized with regard to data and AI literacy (Fiok et al., 2022), and there should be regular opportunities for training (Berzin et al., 2015). In this context, the so-called future skills are important for digital and technological transformation (Ehlers & Kellermann, 2019; Meyer-Guckel et al., 2019), also with regard to digital social work (Garkisch, 2023). The focus on human–machine (AI) interaction (Livingston & Risse, 2019) and legal affairs (United Nations General Assembly, 2019) should be taken into account here, too.
→ It is imperative to enhance social workers’ computational thinking, AI, and data literacy and to develop their skills for evaluating automated systems. Social workers must receive thorough training and education concerning AI technologies, with a specific emphasis on ethical and privacy concerns, to support the ethical and responsible utilization of AI in their field. Not only is knowledge about AI and its applications relevant; the selection of appropriate tools and the integration of AI into social work education are also crucial.
Ethical and Responsible Issues
Social work is and must continue to be perceived as a face-to-face profession (Baker et al., 2014; Chang et al., 2015). At the same time, social work is ethically grounded, focused on human rights (Boddy & Dominelli, 2017; Sanders & Scanlon, 2021) and social justice (Boddy & Dominelli, 2017; Chow et al., 2021; Edwards & Hoefer, 2010). Ethics holds a crucial role in the implementation of AI (Bostrom & Yudkowsky, 2018; Etzioni & Etzioni, 2017). Ethical and political guidelines can help to overcome these ethical challenges (Larsson, 2020; Lo Piano, 2020; Smuha, 2019). Above all, in designing human rights–based AI systems, dignity, freedom, equality, solidarity, and the right to life are the overriding ethical values (Aizenberg & van den Hoven, 2020). (Local) core principles of social work should also be taken into account. With regard to the USA, for example, the “Guiding Principles” of the Grand Challenges for Social Work are relevant here: social justice, inclusiveness, diversity, and equity (Barth et al., 2019). Furthermore, it is important to ensure that the human scope for decision-making is not eliminated.
→ It is critical to ensure that technological applications uphold the fundamental principles of the profession, including human rights and justice, when using AI in social work. Establishing ethical and policy guidelines to tackle AI-related risks underscores the importance of explicit standards and protocols for promoting the responsible use of AI applications in social work. Such guidelines serve to clarify ethical concerns and facilitate decision-making, and they might require partnerships with national and international professional associations and the International Organization for Standardization. In addition, informed consent by the client and hybrid counseling services should also be considered.
The Social Agency Mandate
In addition to their responsibilities to clients, professionals are also accountable to the social agency or organization that employs them. This mandate acknowledges that professionals play a crucial role in supporting the mission, goals, and effectiveness of their employing agency. They are expected to collaborate with colleagues, follow organizational policies and procedures, and contribute to the overall functioning and success of the agency. This includes managing resources efficiently, meeting performance targets, and upholding the reputation and integrity of the agency.
Governance, Guiding, and Monitoring
Artificial intelligence has implications for organization management (Hilb, 2020). This will require new or different governance structures (Hilb, 2020; Janssen et al., 2020; Razzaque, 2021). This applies in particular to the management and preparation of data and the IT infrastructure required for this (Janssen et al., 2020). Even though governance and social work have already been discussed in many ways and are not entirely new (Dominelli & Holloway, 2008, 2006; Frahm & Martin, 2009; Webb, 2023), a rethink is needed in the AI environment (Dafoe, 2018; Taeihagh, 2021). With regard to respect for human rights, AI governance models are needed in particular (Cath, 2018).
→ AI governance in social work involves the development and implementation of policies and procedures to ensure that the use of AI technologies meets ethical standards and privacy regulations. Oversight ensures that AI has guardrails, helping to improve the quality of social work services and to protect the rights and privacy of clients. Governance structures include ethical guidelines, transparency and reporting procedures, participation processes (for clients and workers), and evaluation instruments.
Resources, Infrastructure, and Data
Budget cuts are on the agenda of many organizations (Schoech & Bolton, 2015), resulting in limited and contested resources (López Peláez & Marcuello-Servós, 2018). In addition, working with service users in the digital space is time intensive (Aasback, 2022), and there is a lack of basic IT resources and skills in social work (Brownlee et al., 2010; Dellor et al., 2015). Furthermore, the availability of data is the basis for AI applications (Schneider & Seelmeyer, 2019), yet in many cases data are simply stored in “silos” without being processed (Coulton et al., 2015). Raw data alone are not enough, however; data must be of high quality, structured, and available over a certain period of time (Meilvang & Dahler, 2022; Victor et al., 2021).
→ Organizations facing resource constraints can prioritize resource allocation and reduce unnecessary expenditures. It is essential to invest in employee training to enhance their digital expertise, given the time-consuming nature of digital work. Enhancing efficiency in the use of digital resources can be accomplished through automation and streamlining work processes. Additionally, collaborations with other institutions can allow for resource-sharing and the development of collective approaches to address resource deficiencies.
Leadership, Change and Transparency
Digital transformation, including the implementation and use of AI technology in organizations, is a process of change (Li, 2020; Mugge et al., 2020). Social work is a caring discipline characterized by classical structures and traditions (Aasback & Røkkum, 2022); as a result, decision-making and change implementation pathways are time consuming (Aasback & Røkkum, 2022; Sullivan-Tibbs et al., 2022). To counteract these traditional structures and the challenges mentioned, agility plays a role within social work (Jeyasingham, 2016, 2020). Furthermore, there is a certain “aversion” to technology within social work (Baker et al., 2014; Barrera-Algarín et al., 2021). A transparent and structured change management process is therefore of central importance (Hanelt et al., 2021; Schwarzmüller et al., 2018), albeit one tailored to the specific characteristics of social agency organizations (Schiffhauer & Seelmeyer, 2021).
→ Social work organizations should recognize the change process as a comprehensive transformation and set realistic expectations for AI adoption. It is critical to provide training and resources to overcome technology reservations and strengthen the digital literacy of the team. Transparent, structured leadership and change management is essential to ensure an efficient transition to digital technologies.
Limitations and Conclusion
Our systematic literature review is subject to the limitations outlined here. The primary constraints pertain to the definition of keywords and the selection of an appropriate time frame for the review. These challenges are in line with issues often encountered in scoping and systematic reviews (Colquhoun et al., 2014). Regarding keyword definition, we took a proactive approach to this limitation by conducting a pilot study to derive keywords comprehensively from existing AI and social work publications, as well as publications from other sectors. Additionally, we focused on English-language peer-reviewed articles, a common practice that comes with its own limitations, as both social work and AI discussions are global in nature. Consequently, sources such as books, book chapters, conference papers, dissertations, and gray literature are not included in this analysis. It is worth noting that AI in social work is predominantly discussed in social work journals, but we made an effort to encompass interdisciplinary databases. Lastly, our systematic literature review covers research from the past 15 years, and it is important to recognize that our findings primarily represent the current empirical landscape, as we relied solely on existing literature.
In conclusion, this review has provided a comprehensive overview of the current state of research on artificial intelligence and social work. Through our systematic literature review, we have identified key themes and trends in the scholarship, as well as gaps and limitations that suggest areas for future research. We also explored the potential for AI to enhance social work practice and improve outcomes for clients and organizations. Overall, our findings suggest that there is growing interest in the intersection of these two fields and that there is great potential for collaboration and innovation. However, we also recognize the need for caution and critical reflection, as AI technologies can have unintended consequences and reproduce existing biases and inequalities. Moving forward, we recommend that social work researchers and practitioners engage in ongoing dialogue and collaboration with AI experts and stakeholders and prioritize ethical considerations and social justice principles in their work. We also encourage further research on the specific applications and impacts of AI in social work practice, as well as the perspectives and experiences of professionals, social agencies, and clients with our Artificial Intelligence Enhanced Social Work (or “Artificial Social Work”) framework. In doing so, we can ensure that AI is used in ways that are consistent with the values and goals of social work and that it contributes to a more just and equitable society.
References
Aasback, A. W. (2022). Platform social work - A case study of a digital activity plan in the Norwegian Welfare and Labor Administration. Nordic Social Work Research, 12(3), 350–363. https://doi.org/10.1080/2156857X.2022.2045212
Aasback, A. W., & Røkkum, N. H. A. (2022). Domesticating technology in pandemic social work. Journal of Comparative Social Work, 16(2), 172–196. https://doi.org/10.31265/JCSW.V16I2.387
Ahuja, L., Price, A., Bramwell, C., Briscoe, S., Shaw, L., Nunns, M., O’Rourke, G., Baron, S., & Anderson, R. (2022). Implementation of the making safeguarding personal approach to strengths-based adult social care: Systematic review of qualitative research evidence. The British Journal of Social Work, 52(8), 4640–4663. https://doi.org/10.1093/bjsw/bcac076
Aizenberg, E., & van den Hoven, J. (2020). Designing for human rights in AI. Big Data and Society, 7, 1–14. https://doi.org/10.1177/2053951720949566
Alfandari, R., Taylor, B. J., Enosh, G., Killick, C., McCafferty, P., Mullineux, J., Przeperski, J., Rölver, M., & Whittaker, A. (2023). Group decision-making theories for child and family social work. European Journal of Social Work, 26, 204–217. https://doi.org/10.1080/13691457.2021.2016651
Androff, D. (2018). Practicing human rights in social work: Reflections and rights-based approaches. Journal of Human Rights and Social Work, 3(4), 179–182. https://doi.org/10.1007/s41134-018-0056-5
Baker, S., Warburton, J., Hodgkin, S., & Pascal, J. (2014). Reimagining the relationship between social work and information communication technology in the network society. Australian Social Work, 67(4), 467–478. https://doi.org/10.1080/0312407X.2014.928336
Bako, A. T., Taylor, H. L., Wiley, K., Zheng, J., Walter-McCabe, H., Kasthurirathne, S. N., & Vest, J. R. (2021). Using natural language processing to classify social work interventions. The American Journal of Managed Care, 27(1), e24–e31. https://doi.org/10.37765/ajmc.2021.88580
Barrera-Algarín, E., Sarasola-Sánchez-Serrano, J. L., & Sarasola-Fernández, A. (2021). Social work in the face of emerging technologies: A technological acceptance study in 13 countries. International Social Work, 66, 1149–1166. https://doi.org/10.1177/00208728211041672
Barth, R. P., Gehlert, S., Joe, S., Lewis, C. E., Jr., McClain, A., Shanks, T. R., Sherraden, M., Uehara, E., & Walters, K. L. (2019). Grand challenges for social work: Vision, mission, domain, guiding principles, and guideposts to action. Grand Challenges for Social Work. https://grandchallengesforsocialwork.org/wp-content/uploads/2019/09/GCSW-Principles-2-5-19.pdf. Accessed 24 Jul 2024.
Berg-Weger, M., & Schroepfer, T. (2020). Covid-19 pandemic: Workforce implications for gerontological social work. Journal of Gerontological Social Work, 63(6–7), 524–529. https://doi.org/10.1080/01634372.2020.1772934
Berlyavskiy, L. G., Kolushkina, L. Y., Nepranov, R. G., & Pozdnishov, A. N. (2020). Human rights in the digital age. In E. G. Popkova & B. S. Sergi (Eds.), Lecture Notes in Networks and Systems. Digital Economy: Complexity and Variety vs. Rationality (pp. 916–924). Springer International Publishing. https://doi.org/10.1007/978-3-030-29586-8_104
Berzin, S. C., Coulton, C. J., Goerge, R., Hitchcock, L., Putnam-Hornstein, E., Sage, M., & Singer, J. (2016). Policy recommendations for meeting the grand challenge to harness technology for social good. American Academy of Social Work and Social Welfare. https://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=1791&context=csd_research. Accessed 24 Jul 2024.
Berzin, S. C., Singer, J., & Chan, C. (2015). Practice innovation through technology in the digital age: A grand challenge for social work. https://grandchallengesforsocialwork.org/wp-content/uploads/2015/12/WP12-with-cover.pdf. Accessed 28 Jul 2024
Boddy, J., & Dominelli, L. (2017). Social media and social work: The challenges of a new ethical space. Australian Social Work, 70(2), 172–184. https://doi.org/10.1080/0312407X.2016.1224907
Bostrom, N., & Yudkowsky, E. (2018). The ethics of artificial intelligence. In R. V. Yampolskiy (Ed.), Artificial intelligence safety and security (Chapman & Hall/CRC Artificial Intelligence and Robotics Series, pp. 57–69). Chapman and Hall/CRC.
Brownlee, K., Graham, J. R., Doucette, E., Hotson, N., & Halverson, G. (2010). Have communication technologies influenced rural social work practice? British Journal of Social Work, 40(2), 622–637. https://doi.org/10.1093/bjsw/bcp010
Campbell, A., & McColgan, M. (2016). Making social work education app’ier: The process of developing information-based apps for social work education and practice. Social Work Education, 35(3), 297–309. https://doi.org/10.1080/02615479.2015.1130805
Cariceo, O., Nair, M., & Lytton, J. (2018). Data science for social work practice. Methodological Innovations, 11, 1–8. https://doi.org/10.1177/2059799118814392
Castellanos, S. (2020). The Changes AI Will Bring. The Wall Street Journal. https://www.wsj.com/articles/the-changes-ai-will-bring-11597072070. Accessed 24 Jul 2024.
Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A, Mathematical, Physical, and Engineering Sciences, 376, 20180080. https://doi.org/10.1098/rsta.2018.0080
Chang, J., McAllister, C., & McCaslin, R. (2015). Correlates of, and barriers to, Internet use among older adults. Journal of Gerontological Social Work, 58(1), 66–85. https://doi.org/10.1080/01634372.2014.913754
Chow, J.C.-C., Elizabeth Pathak, L., & Yeh, S. T. (2021). Using mobile apps in social work behavioral health care service: The case for China. International Social Work, 64(5), 689–701. https://doi.org/10.1177/00208728211031953
Chui, C.H.-K., & Ko, A. (2020). Converging humanitarian technology and social work in a public health crisis: A social innovation response to COVID-19 in Hong Kong. Asia Pacific Journal of Social Work and Development, 31, 59–66. https://doi.org/10.1080/02185385.2020.1790412
Cinar, E., Trott, P., & Simms, C. (2019). A systematic review of barriers to public sector innovation process. Public Management Review, 21(2), 264–290. https://doi.org/10.1080/14719037.2018.1473477
Colquhoun, H. L., Levac, D., O’Brien, K. K., Straus, S., Tricco, A. C., Perrier, L., Kastner, M., & Moher, D. (2014). Scoping reviews: Time for clarity in definition, methods, and reporting. Journal of Clinical Epidemiology, 67(12), 1291–1294. https://doi.org/10.1016/j.jclinepi.2014.03.013
Coulthard, B., Mallett, J., & Taylor, B. (2020). Better decisions for children with “big data”: Can algorithms promote fairness, transparency and parental engagement? Societies, 10(4), 97. https://doi.org/10.3390/soc10040097
Coulton, C. J., Goerge, R., Putnam-Hornstein, E., & de Haan, B. (2015). Harnessing big data for social good: A grand challenge for social work.
Cresswell, K., Callaghan, M., Khan, S., Sheikh, Z., Mozaffar, H., & Sheikh, A. (2020). Investigating the use of data-driven artificial intelligence in computerised decision support systems for health and social care: A systematic review. Health Informatics Journal, 26(3), 2138–2147. https://doi.org/10.1177/1460458219900452
Dafoe, A. (2018). AI governance: A research agenda. University of Oxford. http://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf. Accessed 24 Jul 2024.
Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98. https://doi.org/10.7861/futurehosp.6-2-94
Dellor, E., Lovato-Hermann, K., Wolf, J. P., Curry, S. R., & Freisthler, B. (2015). Introducing technology in child welfare referrals: A case study. Journal of Technology in Human Services, 33(4), 330–344. https://doi.org/10.1080/15228835.2015.1107520
Denyer, D., & Tranfield, D. (2011). Producing a systematic review. In D. A. Buchanan & A. Bryman (Eds.), The SAGE handbook of organizational research methods (Paperback ed., pp. 671–689). Sage Publications Inc.
Devlieghere, J., Bradt, L., & Roose, R. (2018). Creating transparency through electronic information systems: Opportunities and pitfalls. The British Journal of Social Work, 48(3), 734–750. https://doi.org/10.1093/bjsw/bcx052
Devlieghere, J., Gillingham, P., & Roose, R. (2022). Dataism versus relationshipism: A social work perspective. Nordic Social Work Research, 12, 328–338. https://doi.org/10.1080/2156857X.2022.2052942
Devlieghere, J., & Roose, R. (2018). Electronic information systems: In search of responsive social work. Journal of Social Work, 18(6), 650–665. https://doi.org/10.1177/1468017318757296
Ditsche, J., Schieler, M., & Steffan, A. (2023). ChatGPT’s strategic implications for industry, and how companies can harness its full potential. Roland Berger. https://www.rolandberger.com/en/Insights/Publications/ChatGPT-A-game-changer-for-artificial-intelligence.html. Accessed 24 Jul 2024.
Dominelli, L. (2010). Globalization, contemporary challenges and social work practice. International Social Work, 53(5), 599–612. https://doi.org/10.1177/0020872810371201
Dominelli, L., & Holloway, M. (2008). Ethics and governance in social work research in the UK. The British Journal of Social Work, 38, 1009–1024. https://doi.org/10.1093/bjsw/bcm123
Donahoe, E., & Metzger, M. M. (2019). Artificial intelligence and human rights. Journal of Democracy, 30(2), 115–126. https://doi.org/10.1353/JOD.2019.0029
Dorling, D. (2015). Injustice: Why social inequality still persists (Revised). Policy Press.
Dufva, T., & Dufva, M. (2019). Grasping the future of the digital society. Futures, 107, 17–28. https://doi.org/10.1016/j.futures.2018.11.001
Edwards, H. R., & Hoefer, R. (2010). Are social work advocacy groups using Web 2.0 effectively? Journal of Policy Practice, 9(3–4), 220–239. https://doi.org/10.1080/15588742.2010.489037
Ehlers, U.-D., & Kellermann, S. A. (2019). Future skills: The future of learning and higher education. Karlsruhe. https://www.learntechlib.org/p/208249/. Accessed 24 Jul 2024.
Ehrenreich, J. H. (1985). The altruistic imagination: A history of social work and social policy in the United States. Cornell Univ.
Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into artificial intelligence. The Journal of Ethics, 21(4), 403–418. https://doi.org/10.1007/s10892-017-9252-2
Fink, A. (2018). Bigger data, less wisdom: The need for more inclusive collective intelligence in social service provision. AI & Society, 33(1), 61–70. https://doi.org/10.1007/s00146-017-0719-2
Fink, A., & Brito, M. (2021). Real big data: How we know who we know in youth work. Child and Youth Services, 42, 150–178. https://doi.org/10.1080/0145935X.2020.1832888
Fiok, K., Farahani, F. V., Karwowski, W., & Ahram, T. (2022). Explainable artificial intelligence for education and training. The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology, 19(2), 133–144. https://doi.org/10.1177/15485129211028651
Flexner, A. (1915). Is social work a profession? The New York School of Philanthropy.
Frahm, K. A., & Martin, L. L. (2009). From government to governance: Implications for social work administration. Administration in Social Work, 33(4), 407–422. https://doi.org/10.1080/03643100903173016
Frey, W. R., Patton, D. U., Gaskell, M. B., & McGregor, K. A. (2020). Artificial intelligence and inclusion: Formerly gang-involved youth as domain experts for analyzing unstructured Twitter data. Social Science Computer Review, 38(1), 42–56. https://doi.org/10.1177/0894439318788314
Gamble, A. (2020). Artificial intelligence and mobile apps for mental healthcare: A social informatics perspective. Aslib Journal of Information Management, 72(4), 509–523. https://doi.org/10.1108/AJIM-11-2019-0316
Garkisch, M. (2020). Zwischen zwei Welten: Virtual Reality in der Sozialen Arbeit [Between two worlds: Virtual reality in social work]. In P. Brandl & T. Prinz (Eds.), Blaue Reihe: Innovationen bei sozialen Dienstleistungen, Band 1: Theoretische Ansätze für eine innovative Zukunft (1st ed.). WALHALLA Fachverlag.
Garkisch, M. (2023). Digital future skills for social workers. ConSozial, Nuremberg.
Garkisch, M., Heidingsfelder, J., & Beckmann, M. (2017). Third sector organizations and migration: A systematic literature review on the contribution of third sector organizations in view of flight, migration and refugee crises. VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations, 28(5), 1839–1880. https://doi.org/10.1007/s11266-017-9895-4
Gatenio Gabel, S. (2016). A rights-based approach to social policy analysis. Springer-Verlag. https://doi.org/10.1007/978-3-319-24412-9
Gerards, J., & Borgesius, F. Z. (2022). Protected grounds and the system of non-discrimination law in the context of algorithmic decision-making and artificial intelligence. Colorado Technology Law Journal, 20, 1–56. https://doi.org/10.2139/ssrn.3723873
Gill, K. S. (1988). Artificial intelligence and social action: Education and training. In B. Göranzon & I. Josefson (Eds.), Knowledge, skill and artificial intelligence (pp. 77–91). Springer-Verlag. https://doi.org/10.1007/978-1-4471-1632-5_8
Gillingham, P. (2006). Risk assessment in child protection: Problem rather than solution? Australian Social Work, 59(1), 86–98. https://doi.org/10.1080/03124070500449804
Gillingham, P. (2011a). Computer-based information systems and human service organisations: Emerging problems and future possibilities. Australian Social Work, 64(3), 299–312. https://doi.org/10.1080/0312407X.2010.524705
Gillingham, P. (2011b). Decision-making tools and the development of expertise in child protection practitioners: Are we ‘just breeding workers who are good at ticking boxes’? Child & Family Social Work, 16(4), 412–421. https://doi.org/10.1111/j.1365-2206.2011.00756.x
Gillingham, P. (2013). The development of electronic information systems for the future: Practitioners, ‘embodied structures’ and ‘technologies-in-practice.’ The British Journal of Social Work, 43(3), 430–445. https://doi.org/10.1093/bjsw/bcr202
Gillingham, P. (2016a). Electronic information systems to guide social work practice: The perspectives of practitioners as end users. Practice, 28(5), 357–372. https://doi.org/10.1080/09503153.2015.1135895
Gillingham, P. (2016b). Predictive risk modelling to prevent child maltreatment and other adverse outcomes for service users: Inside the ‘black box’ of machine learning. The British Journal of Social Work, 46(4), 1044–1058. https://doi.org/10.1093/bjsw/bcv031
Gillingham, P. (2016c). Technology configuring the user: Implications for the redesign of electronic information systems in social work. British Journal of Social Work, 46(2), 323–338. https://doi.org/10.1093/bjsw/bcu141
Gillingham, P. (2017). Predictive risk modelling to prevent child maltreatment: Insights and implications from Aotearoa/New Zealand. Journal of Public Child Welfare, 11(2), 150–165. https://doi.org/10.1080/15548732.2016.1255697
Gillingham, P. (2019a). Can predictive algorithms assist decision-making in social work with children and families? Child Abuse Review, 28(2), 114–126. https://doi.org/10.1002/car.2547
Gillingham, P. (2019b). Decision support systems, social justice and algorithmic accountability in social work: A new challenge. Practice, 31(4), 277–290. https://doi.org/10.1080/09503153.2019.1575954
Gillingham, P. (2020). The development of algorithmically based decision-making systems in children’s protective services: Is administrative data good enough? The British Journal of Social Work, 50(2), 565–580. https://doi.org/10.1093/bjsw/bcz157
Goldkind, L. (2018). Digital social work: Tools for practice with individuals, organizations, and communities. Oxford University Press Incorporated.
Goldkind, L. (2021). Social work and artificial intelligence: Into the matrix. Social Work, 66(4), 372–374. https://doi.org/10.1093/sw/swab028
Greenhalgh, T., Wherton, J., Papoutsi, C., Lynch, J., Hughes, G., A’Court, C., Hinder, S., Fahy, N., Procter, R., & Shaw, S. (2017). Beyond adoption: A new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. Journal of Medical Internet Research, 19(11), e367. https://doi.org/10.2196/jmir.8775
Grimwood, T. (2023). Post-critical social work? Social Work and Society, 21(1), 1–16.
Grządzielewska, M. (2021). Using machine learning in burnout prediction: A survey. Child and Adolescent Social Work Journal, 38(2), 175–180. https://doi.org/10.1007/s10560-020-00733-w
Guo, B., Perron, B. E., & Gillespie, D. F. (2009). A systematic review of structural equation modelling in social work research. British Journal of Social Work, 39(8), 1556–1574. https://doi.org/10.1093/bjsw/bcn101
Habli, I., Lawton, T., & Porter, Z. (2020). Artificial intelligence in health care: Accountability and safety. Bulletin of the World Health Organization, 98(4), 251–256. https://doi.org/10.2471/BLT.19.237487
Hanelt, A., Bohnsack, R., Marz, D., & Antunes Marante, C. (2021). A systematic review of the literature on digital transformation: Insights and implications for strategy and organizational change. Journal of Management Studies, 58(5), 1159–1197. https://doi.org/10.1111/joms.12639
Healy, L. M. (2008). Exploring the history of social work as a human rights profession. International Social Work, 51(6), 735–748. https://doi.org/10.1177/0020872808095247
Heinrichs, B. (2022). Discrimination in the age of artificial intelligence. AI & Society, 37(1), 143–154. https://doi.org/10.1007/s00146-021-01192-2
Hilb, M. (2020). Toward artificial governance? The role of artificial intelligence in shaping the future of corporate governance. Journal of Management and Governance, 24(4), 851–870. https://doi.org/10.1007/s10997-020-09519-9
IFSW. (2014). Global definition of social work. https://www.ifsw.org/what-is-social-work/global-definition-of-social-work/. Accessed 24 Jul 2024.
IFSW. (2018). Global social work statement of ethical principles. https://www.ifsw.org/global-social-work-statement-of-ethical-principles/. Accessed 24 Jul 2024.
IFSW, IASSW, & UN Centre for Human Rights. (1994). Human rights and social work: A manual for schools of social work and the social work profession. United Nations. https://digitallibrary.un.org/record/209246/files/training1en.pdf?ln=en. Accessed 24 Jul 2024.
James, A., & Whelan, A. (2021). ‘Ethical’ artificial intelligence in the welfare state: Discourse and discrepancy in Australian social services. Critical Social Policy, 42, 22–42. https://doi.org/10.1177/0261018320985463
Janssen, M., Brous, P., Estevez, E., Barbosa, L. S., & Janowski, T. (2020). Data governance: Organizing data for trustworthy artificial intelligence. Government Information Quarterly, 37, 1–8. https://doi.org/10.1016/j.giq.2020.101493
Jayasooria, D. (2016). Sustainable development goals and social work: Opportunities and challenges for social work practice in Malaysia. Journal of Human Rights and Social Work, 1(1), 19–29. https://doi.org/10.1007/s41134-016-0007-y
Jeyasingham, D. (2016). Open spaces, supple bodies? Considering the impact of agile working on social work office practices. Child & Family Social Work, 21(2), 209–217. https://doi.org/10.1111/cfs.12130
Jeyasingham, D. (2020). Entanglements with offices, information systems, laptops and phones: How agile working is influencing social workers’ interactions with each other and with families. Qualitative Social Work, 19(3), 337–358. https://doi.org/10.1177/1473325020911697
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H., & Wang, Y. (2017). Artificial intelligence in healthcare: Past, present and future. Stroke and Vascular Neurology, 2(4), 230–243. https://doi.org/10.1136/svn-2017-000101
Katyal, S. K. (2019). Private accountability in the age of artificial intelligence. UCLA Law Review, 66, 54–141. https://doi.org/10.1017/9781108680844.004
Keddell, E. (2019). Algorithmic justice in child protection: Statistical fairness, social justice and the implications for practice. Social Sciences, 8(10), 281. https://doi.org/10.3390/socsci8100281
Keen, J., Ruddle, R., Palczewski, J., Aivaliotis, G., Palczewska, A., Megone, C., & Macnish, K. (2021). Machine learning, materiality and governance: A health and social care case study. Information Polity, 26(1), 57–69. https://doi.org/10.3233/IP-200264
King, G., O’Donnell, C., Boddy, D., Smith, F., Heaney, D., & Mair, F. S. (2012). Boundaries and e-health implementation in health and social care. BMC Medical Informatics and Decision Making, 12(1), 100. https://doi.org/10.1186/1472-6947-12-100
Kum, H.-C., Stewart, C. J., Rose, R. A., & Duncan, D. F. (2015). Using big data for evidence based governance in child welfare. Children and Youth Services Review, 58, 127–136. https://doi.org/10.1016/j.childyouth.2015.09.014
Kunze, C. (2020). (Nicht-)Nutzung, Transfer, Verbreitung und Nachhaltigkeit von Gesundheitstechnologien: Deutsche Version des NASSS-Frameworks [(Non-)use, transfer, dissemination and sustainability of health technologies: German version of the NASSS framework]. https://doi.org/10.13140/RG.2.2.21875.89123
Landau, A. Y., Ferrarello, S., Blanchard, A., Cato, K., Atkins, N., Salazar, S., Patton, D. U., & Topaz, M. (2022). Developing machine learning-based models to help identify child abuse and neglect: Key ethical challenges and recommended solutions. Journal of the American Medical Informatics Association, 29(3), 576–580. https://doi.org/10.1093/jamia/ocab286
Larsson, S. (2020). On the governance of artificial intelligence through ethics guidelines. Asian Journal of Law and Society, 7(3), 437–451. https://doi.org/10.1017/als.2020.19
Lee, D., & Yoon, S. N. (2021). Application of artificial intelligence-based technologies in the healthcare industry: Opportunities and challenges. International Journal of Environmental Research and Public Health, 18, 1–18. https://doi.org/10.3390/ijerph18010271
Lee, J., Davari, H., Singh, J., & Pandhare, V. (2018). Industrial artificial intelligence for industry 4.0-based manufacturing systems. Manufacturing Letters, 18, 20–23. https://doi.org/10.1016/j.mfglet.2018.09.002
Li, F. (2020). Leading digital transformation: Three emerging approaches for managing the transition. International Journal of Operations & Production Management, 40(6), 809–817. https://doi.org/10.1108/IJOPM-04-2020-0202
Liedgren, P., Elvhage, G., Ehrenberg, A., & Kullberg, C. (2016). The use of decision support systems in social work: A scoping study literature review. Journal of Evidence-Informed Social Work, 13(1), 1–20. https://doi.org/10.1080/15433714.2014.914992
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Sage.
Livingston, S., & Risse, M. (2019). The future impact of artificial intelligence on humans and human rights. Ethics & International Affairs, 33(02), 141–158. https://doi.org/10.1017/S089267941900011X
Lo Piano, S. (2020). Ethical principles in machine learning and artificial intelligence: Cases from the field and possible ways forward. Humanities and Social Sciences Communications, 7(1), 1–7. https://doi.org/10.1057/s41599-020-0501-9
Longhofer, J., & Floersch, J. (2012). The coming crisis in social work. Research on Social Work Practice, 22(5), 499–519. https://doi.org/10.1177/1049731512445509
López Peláez, A., & Marcuello-Servós, C. (2018). e-Social work and digital society: Re-conceptualizing approaches, practices and technologies. European Journal of Social Work, 21(6), 801–803. https://doi.org/10.1080/13691457.2018.1520475
López Peláez, A., Pérez García, R., & Aguilar-Tablada Massó, M. V. (2018). e-Social work: Building a new field of specialization in social work? European Journal of Social Work, 21(6), 804–823. https://doi.org/10.1080/13691457.2017.1399256
Lushin, V., Becker-Haimes, E. M., Mandell, D., Conrad, J., Kaploun, V., Bailey, S., Bo, A., & Beidas, R. S. (2019). What motivates mental health clinicians-in-training to implement evidence-based assessment? A survey of social work trainees. Administration and Policy in Mental Health, 46(3), 411–424. https://doi.org/10.1007/s10488-019-00923-4
Lyon, A. R., & Koerner, K. (2016). User-centered design for psychosocial intervention development and implementation. Clinical Psychology, 23(2), 180–200. https://doi.org/10.1111/cpsp.12154
Mackrill, T., & Ebsen, F. (2018). Key misconceptions when assessing digital technology for municipal youth social work. European Journal of Social Work, 21(6), 942–953. https://doi.org/10.1080/13691457.2017.1326878
Macpherson, A., & Holt, R. (2007). Knowledge, learning and small firm growth: A systematic review of the evidence. Research Policy, 36(2), 172–192. https://doi.org/10.1016/j.respol.2006.10.001
Maier, F., Meyer, M., & Steinbereithner, M. (2016). Nonprofit organizations becoming business-like. Nonprofit and Voluntary Sector Quarterly, 45(1), 64–86. https://doi.org/10.1177/0899764014561796
Majumdar, D., Banerji, P. K., & Chakrabarti, S. (2018). Disruptive technology and disruptive innovation: Ignore at your peril! Technology Analysis & Strategic Management, 30(11), 1247–1255. https://doi.org/10.1080/09537325.2018.1523384
Mänttäri-van der Kuip, M. (2016). Moral distress among social workers: The role of insufficient resources. International Journal of Social Welfare, 25(1), 86–97. https://doi.org/10.1111/ijsw.12163
Mapp, S., McPherson, J., Androff, D., & Gatenio Gabel, S. (2019). Social work is a human rights profession. Social Work, 64(3), 259–269. https://doi.org/10.1093/sw/swz023
Marquart, M. S., & Goldkind, L. (2023). ChatGPT: Implications for social work education and practice [Virtual session]. 2023 NASW-NYC Social Work Month Series, online via Zoom. https://doi.org/10.7916/axhj-x577
Mathiesen, K. (2014). Human rights for the digital age. Journal of Mass Media Ethics, 29(1), 2–18. https://doi.org/10.1080/08900523.2014.863124
Matthies, A.-L., Peeters, J., Hirvilammi, T., & Stamm, I. (2020). Ecosocial innovations enabling social work to promote new forms of sustainable economy. International Journal of Social Welfare, 29(4), 378–389. https://doi.org/10.1111/ijsw.12423
McFadden, P., Campbell, A., & Taylor, B. (2015). Resilience and burnout in child protection social work: Individual and organisational themes from a systematic literature review. The British Journal of Social Work, 45(5), 1546–1563. https://doi.org/10.1093/bjsw/bct210
Meilvang, M. L., & Dahler, A. M. (2022). Decision support and algorithmic support: The construction of algorithms and professional discretion in social work. European Journal of Social Work, 27, 30–42. https://doi.org/10.1080/13691457.2022.2063806
Meyer-Guckel, V., Klier, J., Kirchherr, J., & Winde, M. (2019). Future skills: Strategische Potenziale für Hochschulen [Future skills: Strategic potentials for higher education institutions]. Stifterverband. https://www.stifterverband.org/download/file/fid/7213. Accessed 24 Jul 2024.
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ Clinical Research Edition, 339, b2535. https://doi.org/10.1136/bmj.b2535
Mois, G., & Fortuna, K. L. (2020). Visioning the future of gerontological digital social work. Journal of Gerontological Social Work, 63(5), 412–427. https://doi.org/10.1080/01634372.2020.1772436
Monshipouri, M. (2017). Human rights in the digital age: Opportunities and constraints. Public Integrity, 19(2), 123–135. https://doi.org/10.1080/10999922.2016.1230690
Mugge, P., Abbu, H., Michaelis, T. L., Kwiatkowski, A., & Gudergan, G. (2020). Patterns of digitization. Research-Technology Management, 63(2), 27–35. https://doi.org/10.1080/08956308.2020.1707003
Murphy, K., Di Ruggiero, E., Upshur, R., Willison, D. J., Malhotra, N., Cai, J. C., Malhotra, N., Lui, V., & Gibson, J. (2021). Artificial intelligence for good health: A scoping review of the ethics literature. BMC Medical Ethics, 22, 1–14. https://doi.org/10.1186/s12910-021-00577-8
Nesmith, A. (2018). Reaching young people through texting-based crisis counseling. Advances in Social Work, 18(4), 1147–1164. https://doi.org/10.18060/21590
Novelli, C., Taddeo, M., & Floridi, L. (2023). Accountability in artificial intelligence: What it is and how it works. AI and Society. https://doi.org/10.1007/s00146-023-01635-y
O’Sullivan, S., Nevejans, N., Allen, C., Blyth, A., Leonard, S., Pagallo, U., Holzinger, K., Holzinger, A., Sajid, M. I., & Ashrafian, H. (2019). Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 15(1), e1968. https://doi.org/10.1002/rcs.1968
Pan, I., Nolan, L. B., Brown, R. R., Khan, R., van der Boor, P., Harris, D. G., & Ghani, R. (2017). Machine learning for social services: A study of prenatal case management in Illinois. American Journal of Public Health, 107(6), 938–944. https://doi.org/10.2105/AJPH.2017.303711
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., et al. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Systematic Reviews, 10, 89. https://doi.org/10.1186/s13643-021-01626-4
Pelton, L. H. (2001). Social justice and social work. Journal of Social Work Education, 37(3), 433–439. https://doi.org/10.1080/10437797.2001.10779065
Perron, B. E., Taylor, H. O., Glass, J. E., & Margerum-Leys, J. (2010). Information and communication technologies in social work. Advances in Social Work, 11(2), 67–81. https://doi.org/10.1177/1468017307084739
Pittaway, L., & Cope, J. (2007). Entrepreneurship education. International Small Business Journal: Researching Entrepreneurship, 25(5), 479–510. https://doi.org/10.1177/0266242607080656
Rafferty, J., & Waldman, J. (2006). Fit for virtual social work practice? Journal of Technology in Human Services, 24(2–3), 1–22. https://doi.org/10.1300/J017v24n02_01
Ranerup, A., & Henriksen, H. Z. (2022). Digital discretion: Unpacking human and technological agency in automated decision making in Sweden’s social services. Social Science Computer Review, 40(2), 445–461. https://doi.org/10.1177/0894439320980434
Raso, F., Hilligoss, H., Krishnamurthy, V., Bavitz, C., & Levin, K. (2018). Artificial intelligence and human rights: Opportunities and risks. Berkman Klein Center Research, 1–62. https://doi.org/10.2139/ssrn.3259344
Razzaque, A. (2021). Artificial intelligence and IT governance: A literature review. In A. M. A. M. Al-Sartawi (Ed.), The big-data driven digital economy: Artificial and computational intelligence (pp. 85–97). Springer Nature Switzerland. https://doi.org/10.1007/978-3-030-73057-4_7
Rice, E., Yoshioka-Maxwell, A., Petering, R., Onasch-Vera, L., Craddock, J., Tambe, M., Yadav, A., Wilder, B., Woo, D., Winetrobe, H., & Wilson, N. (2018). Piloting the use of artificial intelligence to enhance HIV prevention interventions for youth experiencing homelessness. Journal of the Society for Social Work and Research, 9(4), 551–573. https://doi.org/10.1086/701439
Ritz, A., Giauque, D., Varone, F., & Anderfuhren-Biget, S. (2014). From leadership to citizenship behavior in public organizations. Review of Public Personnel Administration, 34(2), 128–152. https://doi.org/10.1177/0734371X14521456
Rodriguez, M. Y., DePanfilis, D., & Lanier, P. (2019). Bridging the gap: Social work insights for ethical algorithmic decision-making in human services. IBM Journal of Research and Development, 63(4/5), 8:1-8:8. https://doi.org/10.1147/JRD.2019.2934047
Rodriguez, M. Y., & Storer, H. (2020). A computational social science perspective on qualitative data exploration: Using topic models for the descriptive analysis of social media data. Journal of Technology in Human Services, 38(1), 54–86. https://doi.org/10.1080/15228835.2019.1616350
Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson.
Sabah, Y., & Cook-Craig, P. (2010). Learning teams and virtual communities of practice: Managing evidence and expertise beyond the stable state. Research on Social Work Practice, 20(4), 435–446. https://doi.org/10.1177/1049731509339031
Sánchez-Sandoval, Y., Jiménez-Luque, N., Melero, S., Luque, V., & Verdugo, L. (2020). Support needs and post-adoption resources for adopted adults: A systematic review. The British Journal of Social Work, 50(6), 1775–1795. https://doi.org/10.1093/bjsw/bcz109
Sanders, C. K., & Scanlon, E. (2021). The digital divide is a human rights issue: Advancing social inclusion through social work advocacy. Journal of Human Rights and Social Work, 6, 130–143. https://doi.org/10.1007/s41134-020-00147-9
Santiago, A. M., & Smith, R. J. (2019). What can “big data” methods offer human services research on organizations and communities? Human Service Organizations: Management, Leadership & Governance, 43(4), 344–356. https://doi.org/10.1080/23303131.2019.1674756
Schiffhauer, B., & Seelmeyer, U. (2021). Responsible digital transformation of social welfare organizations. In D. Ifenthaler, S. Hofhues, M. Egloffstein, & C. Helbig (Eds.), Digital Transformation of Learning Organizations (pp. 131–144). Springer International Publishing.
Schneider, D., & Seelmeyer, U. (2019). Challenges in using big data to develop decision support systems for social work in Germany. Journal of Technology in Human Services, 37(2–3), 113–128. https://doi.org/10.1080/15228835.2019.1614513
Schoech, D., & Bolton, K. W. (2015). Automating and supporting care management using web-phone technology: Results of the 5-year Teleherence Project. Journal of Technology in Human Services, 33(1), 16–37. https://doi.org/10.1080/15228835.2014.998575
Schofield, P. (2017). Big data in mental health research - Do the ns justify the means? Using large data-sets of electronic health records for mental health research. Bjpsych Bulletin, 41(3), 129–132. https://doi.org/10.1192/pb.bp.116.055053
Schwartz, I. M., York, P., Nowakowski-Sims, E., & Ramos-Hernandez, A. (2017). Predictive and prescriptive analytics, machine learning and child welfare risk assessment: The Broward County experience. Children and Youth Services Review, 81, 309–320. https://doi.org/10.1016/j.childyouth.2017.08.020
Schwarzmüller, T., Brosi, P., Duman, D., & Welpe, I. M. (2018). How does the digital transformation affect organizations? Key themes of change in work design and leadership. Management Revue, 29, 114–138. https://doi.org/10.5771/0935-9915-2018-2-114
Sicora, A., Taylor, B. J., Alfandari, R., Enosh, G., Helm, D., Killick, C., Lyons, O., Mullineux, J., Przeperski, J., Rölver, M., & Whittaker, A. (2021). Using intuition in social work decision making. European Journal of Social Work, 24(5), 772–787. https://doi.org/10.1080/13691457.2021.1918066
Smuha, N. A. (2019). The EU approach to ethics guidelines for trustworthy artificial intelligence. Computer Law Review International, 20(4), 97–106. https://doi.org/10.9785/cri-2019-200402
Staub-Bernasconi, S. (2007). Dienstleistung oder Menschenrechtsprofession? Zum Selbstverständnis Sozialer Arbeit in Deutschland mit einem Seitenblick auf die internationale Diskussionslandschaft [Service or human rights profession? On the self-conception of social work in Germany, with a side glance at the international debate]. In A. Lob-Hüdepohl & W. Lesch (Eds.), Ethik Sozialer Arbeit: Ein Handbuch [Ethics of social work: A handbook] (UTB Soziale Arbeit, Vol. 8366). Ferdinand Schöningh.
Staub-Bernasconi, S. (2016). Social work and human rights—Linking two traditions of human rights in social work. Journal of Human Rights and Social Work, 1(1), 40–49. https://doi.org/10.1007/s41134-016-0005-0
Staub-Bernasconi, S. (2019). Menschenwürde - Menschenrechte - Soziale Arbeit: Die Menschenrechte vom Kopf auf die Füße stellen [Human dignity, human rights, social work: Setting human rights back on their feet] (Soziale Arbeit und Menschenrechte, Vol. 1). Verlag Barbara Budrich.
Steiner, O. (2021). Social work in the digital era: Theoretical, ethical and practical considerations. British Journal of Social Work, 51(8), 3358–3374. https://doi.org/10.1093/bjsw/bcaa160
Stuart, P. H. (2013). Social work profession: History. In Encyclopedia of Social Work. https://doi.org/10.1093/acrefore/9780199975839.013.623
Sullivan-Tibbs, M. A., Rayner, C. B., Crouch, D. L., Peck, L. A., Bell, M. A., Hasting, A. D., Nativo, A. J., & Mallinger, K. M. (2022). Social work’s response during the COVID-19 pandemic: A systematic literature review-balancing telemedicine with social work self-care during a pandemic. Social Work in Public Health, 37(6), 499–509. https://doi.org/10.1080/19371918.2022.2032904
Susar, D., & Aquaro, V. (2019). Artificial intelligence. In S. Ben Dhaou (Ed.), ACM Digital Library, Proceedings of the 12th International Conference on Theory and Practice of Electronic Governance (pp. 418–426). Association for Computing Machinery. https://doi.org/10.1145/3326365.3326420
Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40(2), 137–157. https://doi.org/10.1080/14494035.2021.1928377
Tan, W. (2022). Innovation of social work model based on big data analysis of the Internet of Things. Scientific Programming, 2022, 1–5. https://doi.org/10.1155/2022/6904190
Ting, M. H., Chu, C. M., Zeng, G., Li, D., & Chng, G. S. (2018). Predicting recidivism among youth offenders: Augmenting professional judgement with machine learning algorithms. Journal of Social Work, 18(6), 631–649. https://doi.org/10.1177/1468017317743137
Trahan, M. H., Maynard, B. R., Smith, K. S., Farina, A. S. J., & Khoo, Y. M. (2019). Virtual reality exposure therapy on alcohol and nicotine: A systematic review. Research on Social Work Practice, 29(8), 876–891. https://doi.org/10.1177/1049731518823073
Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), 207–222. https://doi.org/10.1111/1467-8551.00375
Uehara, E., Flynn, M., Fong, R., Brekke, J., Barth, R. P., Coulton, C., Davis, K., DiNitto, D., Hawkins, J. D., Lubben, J., Manderscheid, R., Padilla, Y., Sherraden, M., & Walters, K. (2013). Grand challenges for social work. Journal of the Society for Social Work and Research, 4(3), 165–170. https://doi.org/10.5243/jsswr.2013.11
United Nations General Assembly. (2019). Digital welfare states and human rights - Report of the Special Rapporteur on extreme poverty and human rights. https://digitallibrary.un.org/record/3834146?v=pdf. Accessed 24 Jul 2024.
Victor, B. G., Perron, B. E., Sokol, R. L., Fedina, L., & Ryan, J. P. (2021). Automated identification of domestic violence in written child welfare records: Leveraging text mining and machine learning to enhance social work research and evaluation. Journal of the Society for Social Work and Research, 12(4), 631–655. https://doi.org/10.1086/712734
Walter, M., Lovett, R., Maher, B., Williamson, B., Prehn, J., Bodkin-Andrews, G., & Lee, V. (2021). Indigenous data sovereignty in the era of big data and open data. Australian Journal of Social Issues, 56(2), 143–156. https://doi.org/10.1002/ajs4.141
Webb, S. A. (2023). Social work in a risk society. In V. E. Cree & T. McCulloch (Eds.), Social work: A reader (2nd ed., pp. 74–78). Routledge. https://doi.org/10.4324/9781003178699-12
Weiss-Gal, I. (2016). Policy practice in social work education: A literature review. International Journal of Social Welfare, 25(3), 290–303. https://doi.org/10.1111/ijsw.12203
Whelan, A. (2020). “Ask for more time”: Big data chronopolitics in the Australian welfare bureaucracy. Critical Sociology, 46(6), 867–880. https://doi.org/10.1177/0896920519866004
Winfield, A. (2019). Ethical standards in robotics and AI. Nature Electronics, 2(2), 46–48. https://doi.org/10.1038/s41928-019-0213-6
Yu, K.-H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719–731. https://doi.org/10.1038/s41551-018-0305-z
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education. https://doi.org/10.1186/s41239-019-0171-0
Zechner, M., & Sihto, T. (2023). The concept of generational contract: A systematic literature review. International Journal of Social Welfare, 33, 710–723. https://doi.org/10.1111/ijsw.12636
Zetino, J., & Mendoza, N. (2019). Big data and its utility in social work: Learning from the big data revolution in business and healthcare. Social Work in Public Health, 34(5), 409–417. https://doi.org/10.1080/19371918.2019.1614508
Funding
Open Access funding enabled and organized by Projekt DEAL.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Garkisch, M., Goldkind, L. Considering a Unified Model of Artificial Intelligence Enhanced Social Work: A Systematic Review. J. Hum. Rights Soc. Work (2024). https://doi.org/10.1007/s41134-024-00326-y