Children’s right to participation in AI: Exploring transnational co-creative approaches to foster child-inclusive AI policy and practice
Abstract
The right to participation in matters affecting them is a fundamental right of every child. AI systems are emerging in all contexts of children’s lives, both in the US and the EU, yet children’s voices are often ignored in AI policy and practice, particularly those of children from historically marginalised communities. This article explores the policy and practice gaps in Europe and the US and the lack of children’s participation in AI at the global level. Current AI policies and practices discount the transnational implications of AI for children’s lives and their rights under the United Nations Convention on the Rights of the Child (UNCRC). The article calls for co-creative approaches, implemented transnationally, to realise the benefits of including children in AI policy and practice – not only as users but also as contributors and innovators. This approach yields AI systems and policies that are more inclusive and child-friendly. By gaining agency as contributors and innovators, children would hold more power than they do as users alone; we propose that the balance of transnational power dynamics in AI policy and practice could thereby shift. Ultimately, children’s increased agency as innovators in shaping AI practices can offer mutual benefits for children’s individual and social development, inclusive AI policy, and innovation practice.
1. Introduction
Children’s right to participation is a fundamental right of every child globally (Article 12 of the United Nations Convention on the Rights of the Child – UNCRC, 1989). Since February 2021, General Comment 25 of the UNCRC specifies that children’s fundamental rights apply to their online interactions as much as to their offline ones (Committee on the Rights of the Child, 2021). The US is the major provider of AI products and services, yet it is the only country in the world that has not ratified the UNCRC. Children across the globe actively use AI-driven emerging technology products for their education, health, and entertainment. However, children scarcely participate in shaping AI policy and practice beyond being users (Dignum et al., 2021).
The World Economic Forum (WEF) forecasts that Internet of Things (IoT) devices with AI systems will only proliferate in citizens’ lives (WEF, 2021), yet safety, security, privacy, and trust remain significant issues for improving their governance. The perceived benefits of AI tools, such as personalised services, recommendations, and fast and comprehensive prognoses of individual and societal needs, are vast. Nonetheless, AI systems have proven to cause social inequalities (European Commission – EC, 2020), unjustifiable biases, unjust risk scoring, and addictive algorithms aggravating mental health harm among young netizens (Karim et al., 2020). These implications can be even more profound for young users than for adults across the world, a phenomenon which has elsewhere been coined “generational unfairness” (McStay & Rosner, 2021). This reflects the child’s developmental stages in life: children’s earliest experiences, including their AI-mediated ones, can affect them throughout adulthood (Epps-Darling, 2020). Moreover, the number of AI systems in children’s everyday education, healthcare, entertainment, and socialisation (from basic web searches to critical health assessments) is increasing exponentially both in the US and in the Netherlands (Statista, 2022).
Child participation in AI design in the US is limited and not inclusive (Psihogios et al., 2022) – an obstacle perpetuated in part by the lack of federal policy and practice guidelines. California is the only state in the US to have introduced relevant policies: the California Consumer Privacy Act (CCPA, 2018) and, for an age-appropriate AI design for children, the California Age-Appropriate Design Code Act (Assembly Bill No. 2273). The Netherlands, while a signatory of the UNCRC, also lags in child participation in decision-making and in AI system design practices and policies (WRR, 2021). Tech policies made without young people’s voices have proven ineffective and incomplete due to the lack of a framework for inclusive policies and practices at the different levels of AI design (Gasser, 2019). Introducing AI to children is a sound strategy to prepare an AI-capable generation (Kahn et al., 2018). Including children from diverse backgrounds and lived experiences in AI design-oriented, co-creational processes is even more vital now, given the increased access to AI products and services and the multiple effects of AI systems on children’s daily lives and personal development.
In this article, we highlight the challenges of child participation in AI policy and practice. As part of these challenges, we explore how US and European AI policy gaps and related AI practice gaps impede children’s participation. To address these gaps, we argue for co-creative approaches with children that make explicit the power dynamics between adults and children in AI systems. We propose a three-step child-centred participation model in AI policy and practice, pursued through co-creative approaches, to ensure inclusiveness. Children’s participation and increased agency in AI development are not only about protecting children; we call for children’s empowerment in AI towards a just digital future.
2. Challenges in child participation in AI policy and practice
Children’s safe and inclusive participation in Artificial Intelligence (AI) has become essential, yet it is challenged by gaps in theory, policy, and practice. These gaps pertain to how to increase children’s agency and what that would mean for AI policy and practice. From a theoretical perspective, Roger Hart (1992) introduced a ladder model of children’s participation, depicting the gradual increase of children’s agency. While this model paves a path towards children’s agency and citizenship in the AI-driven digital society, its challenges could be addressed by defining an AI-specific model of how children can gradually exercise more agency and reach their fullest agency as innovators of AI policy and AI practice. To address such challenges, models for children’s meaningful participation in policy-related decision-making processes could be inspirational: Bouma et al. (2018) demonstrate how children’s growing agency in decision-making processes within the Dutch welfare system can have beneficial effects. The challenges to child participation in AI emerge from gaps in national policies and the lack of inclusive AI practice frameworks.
3. Transnational AI policy gaps
3.1. United States
In 2021, the United States accounted for 34.7% of global tech revenues, the largest portion of which stemmed from AI-driven systems (Sava, 2022). The US has adopted neither a global policy on children’s rights and participation nor one on AI: it is not a signatory of the UNCRC and did not sign on to the recent framework for the ethics of AI from the United Nations Educational, Scientific and Cultural Organisation (UNESCO, 2020). Likewise, the US has no federal laws relating to children’s participation in AI. While the recently proposed Algorithmic Accountability Act (117th US Congress, 2022) briefly discusses the participation of impacted groups, it does not specifically discuss children’s participation in the AI design, development, and deployment process. The Children’s Online Privacy Protection Act (COPPA) of 1998 provides data privacy and protection up to the age of 13, but there are no policy regulations to safeguard children aged 14–18 and no opportunities for them to participate in AI systems. Moreover, this policy gap further expands the social exclusions of the digital divide across the US and aggravates the effects of children’s non-participation in AI practices. Nearly half of Americans without at-home internet live in Black and Hispanic households (Chakravorti, 2021), and the digital divide is known to diminish opportunities for children from lower-income families, people of colour (Benjamin, 2019), and foreign-born migrants, further exacerbating their economic, social, and political marginalisation. AI has been framed as a human rights issue (AccessNow, 2018); we argue that addressing it requires including the most vulnerable impacted groups, such as children from historically marginalised communities (Kalluri, 2020). Digital rights are human rights, and human rights are children’s rights. A child-centred approach would pave the way for child-friendly AI policy and practice and humanise AI systems.
3.2. Europe
The Council of Europe’s Strategy for the Rights of the Child outlines the problem of misalignment between children’s developmental needs and current AI design practices (CoE, 2022). Whereas the strategy provides recommendations for child participation in relation to digital systems, a policy gap remains in how AI-related regulations are developed within the European Union. A promising European development came in May 2022 with the adoption of the European Strategy for a Better Internet for Kids (BIK+) (EC, 2022c).
One of the main objectives behind a series of upcoming EU regulations, such as the Digital Services Act (EC, 2022a) and the Digital Markets Act (EC, 2022b), is to curb the monopolistic power of platforms by categorising such companies as “gatekeepers”. The goal is to frame them as major accountability nodes in the network between global (including European) advertisers and other entities on platforms. These regulations aim to strengthen small companies’ ability to compete against big tech companies and to foster interoperability between large and smaller AI technology innovators. Similar to the General Data Protection Regulation (GDPR), they will also have extraterritorial effects on US companies, such as Apple, Microsoft, Google, Meta, and IBM, while fostering AI innovation. These companies’ AI products are fuelled by personal data and algorithmic assessments which have a severe impact on children (La Fors & Larsen, 2022). Therefore, although these regulations are focused on rebalancing power dynamics toward more influence for smaller AI companies and more autonomy for the data subject, their effectiveness is yet to be seen. Bringing more control back into the hands of data subjects in Europe already provided the impetus for the GDPR; yet the proliferation of AI systems can still have significant implications for vulnerable groups such as children. Civil society organisations are also concerned about how the proposed EU regulations will respect human rights while promoting responsible AI innovation (APC, 2021). For example, the 5Rights Foundation’s report (2022) brought to light a wide diversity of online harms to children. Another main objective behind the upcoming European regulations, particularly the proposed European AI Act (EAIA), is to prevent the development of AI systems which would perpetuate “reasonably foreseeable misuse” (EC, 2021/0106). The EAIA establishes a novel assessment for detecting AI system-mediated harms, separating systems according to their harmfulness into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. Given that children’s data also constitutes “raw material” for AI systems (Zuboff, 2019), and although the relevant GDPR prescriptions for children remain applicable when AI systems are used, the AI Act and its risk-based assessment scheme would benefit from children’s perspectives. This seems urgent, as the proposed AI Act’s focus remains centred mainly on the safety of AI systems and less on the safety of the humans subjected to these systems (Alan Turing Institute, 2021). Consequently, child participation in closing transnational policy gaps regarding AI systems, and in shaping more inclusive and child-friendly US and European AI policies and practices, remains urgent to address.
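To make the Act’s tiered structure more concrete, the sketch below encodes the four risk categories and a toy triage of system descriptions. This is a minimal illustration in Python: the keyword rules and the `triage` function are our hypothetical simplification, not the Act’s actual legal assessment procedure.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories of the proposed EU AI Act (EC, 2021/0106)."""
    UNACCEPTABLE = "unacceptable risk"  # prohibited practices
    HIGH = "high risk"                  # strict obligations before deployment
    LIMITED = "limited risk"            # transparency obligations
    MINIMAL = "minimal risk"            # no additional obligations

def triage(system_description: str) -> RiskTier:
    """Toy keyword-based triage; the real classification follows the
    Act's annexes and legal tests, not string matching."""
    text = system_description.lower()
    if "social scoring" in text or "exploits children" in text:
        return RiskTier.UNACCEPTABLE
    if any(k in text for k in ("education", "welfare", "biometric")):
        return RiskTier.HIGH
    if "chatbot" in text or "deepfake" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("adaptive learning platform used in education"))  # RiskTier.HIGH
```

A child-inclusive assessment would, for instance, let children’s perspectives inform which child-facing uses belong in the higher tiers.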
4. AI practice gaps
Child participation-oriented tools and guidance mechanisms lay bare the AI practice gaps. Nine basic requirements for meaningful child participation were set out as international standards in 2009 by General Comment 12 of the UNCRC (UNCRC, 2009). These principles hold that child participation needs to be: “1) transparent and informative; 2) voluntary; 3) respectful; 4) relevant; 5) child-friendly; 6) inclusive; 7) supported by training; 8) safe and sensitive to risks, and 9) accountable.” UNICEF’s Policy Guidance on AI for Children stresses the need for child participation in AI system design but neglects to offer models of what the increased agency of involved children could mean for AI systems and policies (Dignum et al., 2021). The World Economic Forum’s AI for Children Toolkit offers FIRST (Fair, Inclusive, Responsible, Safe, and Transparent) principles for ethical AI product development that fosters children’s healthy development (WEF, 2022). The Child Online Safety Toolkit of the 5Rights Foundation explains a set of online harms that children can come across (5Rights Foundation, 2022). Yet neither UNICEF’s guidance nor the WEF’s or the 5Rights Foundation’s toolkit advocates for meaningful child participation in AI policy and practice. There is a clear practice gap and no child participation framework to promote inclusive children’s participation in AI policy and practice. We therefore explore co-creative approaches with children through a design justice lens to address these child participation challenges in AI.
5. Co-creative approaches with children
Children are often controlled by adults, and it is challenging to establish a flat, circular power structure. Co-creative processes are structured to be insightful learning experiences: they offer opportunities to empower and equip children to hold opinions and make decisions on all matters related to them (Hansen, 2017). Children’s inventive capacity, and even life-saving societal impact, has already been demonstrated by a variety of technological inventions (Vaden, 2017). These skills and competencies could be expanded further into AI policy and practice to address pressing issues in children’s respective communities (Hansen, 2017). Co-creation with children (Borum et al., 2015) helps adult stakeholders to understand and interact with children on a deeper, more equal, and respectful level; to be sensitive and ethically responsible in balancing diverse interests with the children’s perspectives; and at the same time to ensure that no harm comes to children (CocPlayfulMinds, 2022). Co-creative processes recognise children’s competence and offer methods of self-expression that promote comfort and creativity in AI design (Hansen, 2017). Applying participatory and design justice approaches could thus benefit AI policy and practice with children. Co-creative design experts have already identified four roles for children in the design process – user, tester, informant, and design partner – to ensure their participation (Borum et al., 2015). Iversen et al. (2017) engaged with the power dynamics of the co-creative design process and introduced the child’s role as protagonist in participatory design to ensure equal power with adult designers. These roles concern the distribution of power between children and adult stakeholders, which can change the objectives, process, and outcome measures of the design process.
Co-creative design experiments with children have shown how children’s specific perspectives can offer refreshing insights that differ from adult perspectives. The City of Children project acknowledges that co-design with children adds fresh perspectives and competencies for technology developers and administrators (Biosca, 2017). Similarly, the Estonian Academy of Arts explored co-creation with children in the AI design process, which enabled shared decision-making and engaged children as equal design partners (Kubinyi et al., 2020). The utopian agenda for Child-Computer Interaction (CCI) positioned democracy, skilfulness, and emancipation as values for the already established co-creative participatory design approach (Iversen & Dindler, 2013). These past experiments suggest how differing degrees of agency for children could help implement children’s right to participation in more just AI policy and practice. In addition, inclusive approaches have supported the adoption of design justice models to promote social inclusion and equity in the design process, achieved through the meaningful participation of historically marginalised children grounded in local, social, and cultural contexts (Costanza-Chock, 2020). Moreover, AI co-design experiments would serve to prevent the broadening of AI-mediated divides and prejudices between marginalised and privileged communities through their meaningful and inclusive participation in AI design. Each of these experiments demonstrates that the co-creative process raises critical questions about who participates, who holds power, and who controls the process. Allowing the roles of participants and controllers to change by involving a broader diversity of children would bring more balance into the power dynamics by democratising the process.
6. Meaningful child participation in AI
Children’s meaningful participation in shaping AI is important for closing the policy and practice gaps for two major reasons: 1) enabling children’s healthy development and 2) preventing harm to children. Not involving children meaningfully in AI practices and policies would impede both the prevention of AI-mediated harms and children’s healthy (biological and civic) development when interacting with AI. We argue for children’s meaningful participation at all levels of AI policy and practice. By focusing on three participatory roles of children, we offer perspectives on how children’s participatory influence and the impact of their agency could be scaled up in AI system-related policies and practices. These three roles are: 1) users as consumers of AI; 2) contributors to AI development and deployment; and, in the most positive scenario, 3) active innovators capable of influencing AI system design. We discuss these roles in a co-creative approach through a six-step AI development process: 1) planning, 2) data collection, 3) data access, 4) use of algorithms, 5) deployment, and 6) reporting and dissemination. Describing how these steps relate to the three participatory roles can shed light on the increasing agency needed to transform the power imbalances that expose children to online harms, effectively moving toward more child-friendly AI system development and policy-making practices in the US and the Netherlands.
6.1. Reducing and preventing AI harms
1) Reduce algorithmic bias and prevent harm: Inclusive participation enables children to question AI-mediated decisions about themselves. If AI systems distribute unfair judgments about children, this can be perceived as punishment or “unforgiveness toward children” (La Fors, 2020). Digitised welfare services increasingly use AI tools; however, due to biased algorithms, less privileged groups of society have limited access to housing, jobs, and other welfare programs in the US (Sisson, 2019; Eubanks, 2019). The Federal Trade Commission’s report also underlined that addressing online harms such as algorithmic bias requires a holistic approach and cannot be achieved by AI system design alone (FTC, 2022). Children can experience long-term detrimental implications of AI-mediated biases; the inclusive and holistic participation of children would allow children from diverse backgrounds to explain and liberate themselves from unfair judgments and to be perceived in their full humanity (La Fors, 2020). Their participation could further enable AI systems to reflect more nuanced socio-cultural contexts, hyper-local languages, and equal representation. All of this can improve the mitigation and prevention of biased predictions and recommendations, contributing to fairer AI-mediated decision-making processes.
2) Avoid addictive algorithms: Currently, there are no age-verification or age-appropriate social media policy regulations to counter existing addictive algorithms. Children’s participation in policy formation and design may strengthen their agency to make more informed decisions and can make AI tools more appropriate to age and cultural contexts. For now, addictive algorithms and the lack of comprehensive policies and practices continue to keep children’s participation in AI development and deployment unattainable.
3) AI-mediated cybercrime and child abuse prevention: Children’s participation in AI policy and practice can help victims cope better with online crimes against children. Crimes such as child pornography, unauthorised live streaming, and online grooming are often mediated by AI systems. Child participation in this sense would contribute to dismantling harmful practices of online victimisation (Turton, 2022). It would also raise awareness among parents and guardians of the mediating effects of AI systems in offline environments that lead to cybercrime and the online exploitation of children, such as cyberbullying and child sex trafficking (ChildHub, 2022). Beyond addressing harms, broad and structural child participation could facilitate AI innovations that serve children’s healthy development.
6.2. Healthy development through child-AI interactions
1) Child development matters: Contemporary AI tools are “exploiting the behavioural surplus” of children as if they were adult consumers (Zuboff, 2019). Despite stricter data protection policies for children in Europe and certain US states, children remain AI market users. Currently, children can only be users of AI systems chosen by the adults in their lives. This can cause developmental issues across the whole spectrum of children’s identity development (Livingstone & Blum-Ross, 2020).
2) Understand AI systems: AI systems are already difficult to explain to adults, and even more so to children. However, children are digital natives and spend a significant amount of their time interacting with AI systems in diverse contexts of their lives, so their understanding of AI systems is vital. Addressing this knowledge gap will provide long-term benefits in ensuring a just digital future. Diverse children’s participation in AI design, development, and deployment would increase their understanding of AI systems and help them make informed choices about their digital footprints.
3) Child-friendliness: Child participation in AI will promote child-friendly AI products and services. If AI tools are child-friendly, they become more accessible to every social group, which helps expand AI services to a larger audience.
4) Be diversity-aware and inclusive: The participation of diverse children, particularly from historically marginalised communities, alongside socially privileged children in the end-to-end process of AI policy and practice would sensitise them to each other’s differences. Facilitating interactions between children of different backgrounds around AI systems would render both AI policies and systems more inclusive and equitable. Inclusive AI will lead to just tools for pressing social needs.
7. Child participation models in AI
In this section, we present our arguments for implementing a three-step, role-based child participation model in the AI design, development, and deployment process, not only to address AI-mediated harms but also to foster children’s healthy development. A broader set of online harms has also been raised in the Age-Appropriate Design Code breaches report of the 5Rights Foundation (2022), which demonstrates that children are experiencing a vast amount of online harm as users, with the use of their data creating exploitative “content, conduct and contact risks” toward them. To prevent such probable harm to children, and to promote AI for children’s good, their participation in AI policy and design is essential. Digital Access to Scholarship at Harvard (DASH) has raised critical questions about ensuring young people’s participation in AI systems in health, education, entertainment, and beyond (Cortesi et al., 2021). Child-centred AI case studies from UNICEF (SomeBuddy, 2021) offer examples of child participation in the AI development process.
The three-step, role-based child participation model in AI that we developed considers children’s engagement, role, and power to make decisions in AI policies and practices. We introduce three roles: User, Contributor, and Innovator of AI.
7.1. User
We acknowledge that children as users participate with the least agency in AI and have the least power to prevent and mitigate AI-mediated risks. Children as users are often unaware of the consequences of their interactions with AI and continue to consume AI-mediated content, directly and indirectly, without this knowledge. They are often victimised and criminalised while engaging with AI systems. Conventional tech companies such as Google, Meta, TikTok, IBM, and Microsoft follow user research models (Chakravorti, 2021) to gather children’s input in the product development process; however, the participants may not be aware of the product development cycle, the deployment strategies, or how their data will be used. Children primarily use products as market-driven consumers and have no further opportunities to contribute to product development and deployment, which often leads to bias and harm. Current STEM education is preparing children to become more skilled users of AI tools, but this learning opportunity is not accessible and affordable for most historically marginalised groups.
7.2. Contributor
Children’s agency as contributors is greater than as users. It can manifest in children participating in the AI design process with content and input to policy and practice, developing products with stakeholders. However, children have limited knowledge of where and how the product will be deployed in social settings and no power to make decisions in any of the processes. UNICEF’s Policy Guidance on AI for Children seeks children’s contributions to the AI development process. The World Economic Forum’s Generation AI toolkit (WEF, 2021) perceives children as AI consumers and offers AI system developers principles in line with children’s developmental needs. These policy and practice discussions highlight AI and children but do not address how children’s contribution to AI, with their social inclusion in all contexts, would make a difference. Without diverse children’s participation in AI policy and practice, there is a risk of misrepresentation and tokenism in reflecting upon AI systems.
7.3. Innovator
Children’s agency as innovators would grow further compared to being contributors. As innovators, children shape the end-to-end AI design process as designers, developers, and deployers. The ethical use of data would foster more informed decisions and more child-friendly AI products and services. Children are already coming up with innovative solutions to pressing social and environmental challenges (Ramnani, 2021). The 2020 winner of the International Children’s Peace Prize aims to address cyberbullying through AI-driven applications (Rahman, 2020); this invention shows that children can both innovate AI technology and prevent AI-mediated harm. Children can serve as innovators in AI practice and transform their knowledge into child-friendly AI policy (Webster, 2022). The fact that children’s AI innovations have been taken up by society demonstrates that children’s agency can bring about different forms of societal change – change that cannot be achieved if children remain only users of or contributors to AI practices. Table 1 summarises the child participation models in the AI development process, showcasing children’s roles and the power dynamics attached to them.
8. Benefits of child participation models in addressing AI-mediated harms
Algorithmic bias and harm can most effectively be prevented if children become innovators. As innovators, children could contribute to the planning of data collection, the distribution of data, and how algorithms would need to access data. Becoming part of this process informs children about the effects of data and of their use by algorithms. By participating in designing algorithms, children would learn what kinds of AI implications need to be anticipated when planning a design (such as human rights implications, technical aspects, and whether the purpose of the algorithm is achieved). Children would also gain more skills to understand and distinguish what bias is, how bias emerges through data collection, and what could potentially be interpreted as bias when providing access to certain information. This understanding would include an increased power of control regarding the deployment of algorithms and the reporting and dissemination of data.

Table 1. Child participation models in the AI development process

| Children’s role | Planning | Data collection | Data access | Use of algorithms | Deployment | Reporting and dissemination |
|---|---|---|---|---|---|---|
| User | No power | No other commercially viable option to use services but by agreeing to and sharing data | Free access for big tech and no power for the user | No control, no knowledge, and no power | No understanding of targeted audience and consequences | No control of representation |
| Contributor | Limited engagement of stakeholders and power | Tokenism/limited power to share opinions regarding data collection | Engagement is embraced to the extent to which it provides economic surplus | Children’s coding is embraced to the extent to which it results in economic surplus | No power but more knowledge of the consequences of data processing and algorithms | Possible misrepresentation/tokenism |
| Innovator (designer, coder, UX developer) | Children gain power through meaningful engagement | Understanding and more assertiveness to shape how/whose data is collected | More assertiveness to shape how/whose data is accessed and under what conditions shared | More understanding of how the algorithm works | More power to control where and when to be applied | Offer a better representation of social diversity |
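To illustrate the kind of bias check a child innovator could learn to run, the following minimal Python sketch computes a simple demographic-parity gap over a toy log of algorithmic decisions. The data, group labels, and single-metric framing are our invented illustration, not a complete fairness audit.

```python
from collections import defaultdict

# Toy decision log: (group, favourable_outcome). Invented data for illustration.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def positive_rates(log):
    """Share of favourable outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in log:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")
# A large gap flags a disparity worth questioning, though a single
# metric never settles whether a system is fair.
```

Learning to compute and question such a gap is precisely the skill the innovator role would cultivate.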
Addictive algorithms could potentially be minimised if children became actively involved as innovators, not only in learning what makes addictive algorithms addictive but also in being able to shape the code behind such algorithms to become less so. As the 5Rights Foundation (2022) also underlined, attention-grabbing algorithms are often ingrained in the business model of the big technology platforms that children frequently use. Child innovators could therefore reshape addictive algorithms to become less addictive.
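As a minimal sketch of what “shaping the code to be less addictive” could mean in practice, consider a toy feed ranker whose objective trades engagement against a wellbeing penalty. The scoring terms and weights below are hypothetical illustrations, not any platform’s actual ranking function.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # 0..1, e.g. click/watch probability
    addictiveness_proxy: float   # 0..1, e.g. late-night binge correlation

def rank(items, wellbeing_weight=0.0):
    """Pure engagement ranking when the weight is 0; penalising the
    addictiveness proxy re-orders the feed toward healthier content."""
    return sorted(
        items,
        key=lambda it: it.predicted_engagement
        - wellbeing_weight * it.addictiveness_proxy,
        reverse=True,
    )

feed = [
    Item("autoplay challenge clip", 0.9, 0.8),
    Item("science explainer", 0.6, 0.1),
]
print([it.title for it in rank(feed)])                        # engagement-only
print([it.title for it in rank(feed, wellbeing_weight=0.5)])  # penalised
```

Letting child innovators set a non-zero `wellbeing_weight` literally changes which content surfaces first.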
AI-mediated cybercrime could also be effectively addressed if children became aware of how their data is collected and shared. Children would be better able to avoid AI-mediated cybercrime if they could shape how data can be accessed and learn how algorithms can be misused in ways that compromise cyber security. Furthermore, child innovators would have more control over how algorithms are deployed and over where soft spots for cybercrime could be found in the network. To some degree, children as innovators would be enabled to shape how to build in or use cyber security functions, such as firewalls, in AI-mediated systems. The inclusion of diverse children would also offer a better representation of social diversity in shaping cyber security features. The latter requires access to cyber security education for the most marginalised groups, and potentially access to AI ethics and children’s rights courses, so that they can innovate in line with children’s rights impact assessment requirements such as those laid down by the EU High-Level Expert Group on Trustworthy AI (EC, 2019).
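A toy example of what “shaping how data can be accessed” might look like in code is sketched below: a purpose-bound, consent-gated, logged access check for children’s data. The policy, function name, and purposes are hypothetical; real systems would enforce such rules in infrastructure rather than in a single function.

```python
from datetime import datetime, timezone

# Hypothetical access policy a child co-designer might specify: requests
# for children's data must be purpose-bound, consented, and logged.
ALLOWED_PURPOSES = {"education", "safety"}

def access_child_data(requester: str, purpose: str, has_consent: bool) -> bool:
    granted = purpose in ALLOWED_PURPOSES and has_consent
    # Audit log: every request leaves a trace that can later be questioned.
    print(f"{datetime.now(timezone.utc).isoformat()} {requester} "
          f"purpose={purpose} consent={has_consent} "
          f"-> {'GRANT' if granted else 'DENY'}")
    return granted

access_child_data("ad_network", "advertising", has_consent=False)  # DENY
access_child_data("school_app", "education", has_consent=True)     # GRANT
```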
Child abuse prevention could gain multiple benefits from involving children as innovators in shaping algorithmically mediated processes (Turton, 2022). Offering children the room and skills to shape such algorithms could build multiple skill sets: a) learning what counts as abusive behaviour online and offline; b) learning what to share and not to share online regarding abusive content; and c) understanding how algorithms can amplify the spread of abusive online or offline content. The example of the Children’s Peace Prize winner who developed the cyberbullying-prevention app shows how a form of online child abuse can be prevented through young people’s active involvement in the innovation process.
9. Conclusion
Children’s right to participation in AI at all levels has become cardinal for present and future generations and for ethical AI innovation. We have explored the theoretical frameworks, policy gaps, and closely related practical challenges that impede children’s participation, and we have argued for child participation models that could address them. Co-creation with children has proven to be an effective technique to bridge the gap between theory, policy, and practice (Bouma et al., 2018). There has been a growth in research, toolkits, and policy briefs focusing on online child safety (5Rights Foundation, 2022), but children’s meaningful participation in AI policy and practice remains underexplored. Such participation needs to be reviewed from the perspective of transnational communities and in line with the guiding principles for child participation of General Comment 12 of the UNCRC (UNCRC, 2009). Until now, however, no clear reflections were available on what children’s meaningful participation in AI policy and practice would look like if children could step out of their role as mere users and become contributors and innovators. With our exploration, we have shown that children in such roles could thrive to achieve their fullest potential, foster their healthy development, reduce harm, and promote safety in AI products and services.
We have explored how co-creative approaches built on child participation models could support inclusive AI policy and practice. We understand that successful co-creation with, and meaningful participation of, children in AI policy and practice requires the aligned multi-stakeholder engagement of AI developers, policymakers, parents, and children. Participatory, co-creative AI design justice approaches have a high potential to provide equal access for children from all levels of the social pyramid. The meaningful participation of children as contributors to shaping AI policy and practice is currently very limited both in the US and in the Netherlands; the chances for children from historically marginalised communities to emerge as innovators are even lower. We have shown that different degrees of child participation can help prevent and mitigate the harmful effects of algorithmic bias, addictive and exploitative algorithms, AI-mediated cybercrime, and online child abuse. Our models make traceable what children’s more active roles as contributors or innovators would mean from the perspectives of data processing and algorithm design, development, and deployment. We acknowledge that this process could be more time- and resource-consuming, but it leads to a safe and sustainable future for children and AI. We call on transnational policymakers in the respective governments and tech companies to incorporate meaningful child participation models and appropriate roles for children in their AI policy and practice. We conclude that the engagement of children at all levels of the AI system process would help to achieve more child-friendly AI policy and practice. Our child participation models in the AI development process can inspire more meaningful and practical participation of children in the end-to-end process of AI design, development, and deployment. In future research, co-creative design justice frameworks could benefit from implementing child participation models in AI policy, practice, tools, products, and services. In such frameworks, children’s power in co-creative approaches would become concrete and more visible at the level of design, data, algorithms, development, and deployment of AI systems.
References
[1] 5Rights Foundation. (2022, May). Child Online Safety Toolkit. https://childonlinesafetytoolkit.org.
[2] AccessNow. (2018). Human Rights in the Age of Artificial Intelligence. https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf.
[3] Alan Turing Institute. (2021). Architecting our Future: Insights from the Inaugural Trustworthy Digital Identity Conference. https://www.turing.ac.uk/sites/default/files/2021-12/trustworthy_digital_identity_conference_insights_2021.pdf.
[4] Algorithmic Accountability Act of 2022, H.R. 6580, 117th Congress, 2d Session. (2022). https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf.
[5] Baroness Kidron. (2022). Systemic breaches of the Age Appropriate Design Code. 5Rights Foundation. https://5rightsfoundation.com/uploads/Letter_5RightsFoundation-BreachesoftheAgeAppropriateDesignCode.pdf.
[6] Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity.
[7] Biosca, O. (2017, July 25). Co-creation with children for designing tomorrow’s cities and mobility. https://mobilitybehaviour.eu/2017/07/25/co-creation-with-children-for-designing-tomorrows-cities-and-mobility/.
[8] Bouma, H., López, M., Knorth, E.J., & Grietens, H. (2018). Meaningful participation for children in the Dutch child protection system: A critical analysis of relevant provisions in policy documents. Child Abuse & Neglect, 82. doi: 10.1016/j.chiabu.2018.02.016.
[9] California Legislature, Regular Session. (2022, April). AB-2273: The California Age-Appropriate Design Code Act. California Legislative Information. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202120220AB2273.
[10] California Consumer Privacy Act (CCPA). (2018). State of California Department of Justice. https://oag.ca.gov/privacy/ccpa.
[11] Chakravorti, B. (2021, July 20). How to close the digital divide in the U.S. Harvard Business Review. https://hbr.org/2021/07/how-to-close-the-digital-divide-in-the-u-s.
[12] ChildHub. (2022). https://childhub.org/en/child-protection-multimedia-resources/cyberbullying-there-way-out.
[13] Coc Playful Minds. (2022). Retrieved May 13, 2022, from https://www.cocplayfulminds.org/en/our-world/programmes/playful-spaces/sense-billund/.
[14] Costanza-Chock, S. (2022, February 27). Design Practices: “Nothing about Us without Us.” https://design-justice.pubpub.org/pub/cfohnud7/release/2.
[15] Council of Europe. (2022). Council of Europe Strategy for the Rights of the Child (2022–2027). https://rm.coe.int/council-of-europe-strategy-for-the-rights-of-the-child-2022-2027-child/1680a5ef27.
[16] Cortesi, S., Hasse, A., & Gasser, U. (2021). Youth participation in a digital world: Designing and implementing spaces, programs, and methodologies. Youth and Media, Berkman Klein Center for Internet & Society. https://cyber.harvard.edu/publication/2021/youth-participation-in-adigital-world.
[17] Dignum, V., Penagos, M., Pigmans, K., & Vosloo, S. (2021, November). UNICEF Policy Guidance on AI for Children: Version 2.0. UNICEF.
[18] DSA Alliance. (2022, April). Digital Services Act Human Rights Alliance: Don’t compromise on the protection of fundamental rights in the ongoing negotiations. APC. https://www.apc.org/en/pubs/digital-services-act-human-rights-alliance-dont-compromise-protection-fundamental-rights.
[19] Epps-Darling, A. (2020, October 24). How the racism baked into technology hurts teens. The Atlantic. https://www.theatlantic.com/family/archive/2020/10/algorithmic-bias-especially-dangerous-teens/616793/.
[20] European Commission. (2020). White Paper on Artificial Intelligence. https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.
[21] Eubanks, V. (2019). Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. Macmillan.
[22] European Commission. (2021, April). Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (2021/0106(COD)). https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN.
[23] European Commission. (2022a). Digital Services Act: Commission welcomes political agreement on rules ensuring a safe and accountable online environment. https://ec.europa.eu/commission/presscorner/detail/en/ip_22_2545.
[24] European Commission. (2022b, April). Digital Markets Act: Commission welcomes political agreement on rules to ensure fair and open digital markets. https://ec.europa.eu/commission/presscorner/detail/en/ip_22_1978.
[25] European Commission. (2022c, May). A European Strategy for a Better Internet for Kids (BIK+).
[26] Statista. (2022, March). Global market share of the information and communication technology (ICT) market from 2013 to 2022, by selected country. https://www.statista.com/statistics/263801/global-market-share-held-by-selected-countries-in-the-ict-market/.
[27] FTC – Federal Trade Commission. (2022, June). Combating Online Harms Through Innovation. https://www.ftc.gov/system/files/ftc_gov/pdf/Combatting%20Online%20Harms%20Through%20Innovation%3B%20Federal%20Trade%20Commission%20Report%20to%20Congress.pdf.
[28] Hansen, A.S. (2017). Co-design with children: How to best communicate with and encourage children during a design process. NTNU. https://www.ntnu.edu/documents/139799/1279149990/13+Article+Final_anjash_fors%C3%B8k_2017-12-07-20-11-11_Co-Design+with+Children+-+Final.pdf/b8dd19c4-d2b1-4322-a042-718e06663e13.
[29] Iversen, O.S., & Dindler, C. (2013). A utopian agenda in child-computer interaction. International Journal of Child-Computer Interaction, 1(1), 24-29.
[30] Iversen, O.S., Smith, R.C., & Dindler, C. (2017). Child as Protagonist: Expanding the Role of Children in Participatory Design. In IDC ’17: Proceedings of the 2017 Conference on Interaction Design and Children. doi: 10.1145/3078072.
[31] Kahn, K., Megasari, R., Piantari, E., & Junaeti, E. (2018). AI programming by children using Snap! block programming in a developing country. In Thirteenth European Conference on Technology Enhanced Learning, 11082.
[32] Kalluri, P. (2020, July 7). Don’t ask if artificial intelligence is good or fair, ask how it shifts power. Nature. https://www.nature.com/articles/d41586-020-02003-2.
[33] Karim, F., Oyewande, A., Abdalla, L.F., Ehsanullah, R.C., & Khan, S. (2020). Social Media Use and Its Connection to Mental Health: A Systematic Review. Cureus. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7364393/.
[34] Kubinyi, E.L., Naydenova, V., & Kuusk, K. (2020). Children and design students practicing playful co-creation in a youth creativity lab. https://wiki.aalto.fi/download/attachments/191500264/Kubinyi_Naydenova_Kuusk.pdf?version=1&modificationDate=1621264242055&api=v2.
[35] La Fors, K., & Larsen, B. (2022). Why AI companies should develop child-friendly toys and how to incentivize them. World Economic Forum. https://www.weforum.org/agenda/2022/07/ai-children-friendly-smart-toys/.
[36] Lee, N.T. (2022). Closing the digital and economic divides in rural America. Brookings. https://www.brookings.edu/longform/closing-the-digital-and-economic-divides-in-rural-america/.
[37] Livingstone, S., & Blum-Ross, A. (2020). Parenting for a Digital Future: How Hopes and Fears About Technology Shape Children’s Lives. Oxford University Press.
[38] McStay, A., & Rosner, G. (2021). Emotional artificial intelligence in children’s toys and devices: Ethics, governance and practical remedies. Big Data & Society, 8(1).
[39] Psihogios, A.M. (2022, April 11). Adolescents are still waiting on a digital health revolution: Accelerating research-to-practice translation through design for implementation. JAMA Pediatrics. https://jamanetwork.com/journals/jamapediatrics/article-abstract/2790964.
[40] Ramnani, M. (2021, November 26). Young innovators & whizz-kids that made a mark in 2021. Analytics India Magazine. https://analyticsindiamag.com/young-innovators-whizz-kids-that-made-a-mark-in-2021/.
[41] Rahman, S. (2020). Sadat Rahman (17) from Bangladesh wins International Children’s Peace Prize 2020. KidsRights. https://www.kidsrights.org/news/sadat-rahman-17-from-bangladesh-wins-international-childrens-peace-prize-2020/.
[42] Hart, R.A. (1992). Children’s Participation: From Tokenism to Citizenship. Innocenti Essays No. 4. UNICEF.
[43] Sisson, P. (2019, December 17). Housing algorithm discrimination goes high-tech. Curbed. https://archive.curbed.com/2019/12/17/21026311/mortgage-apartment-housing-algorithm-discrimination.
[44] SomeBuddy. (2021, August). SomeBuddy. UNICEF. https://www.unicef.org/globalinsight/media/2086/file.
[45] Statista. (2022, July). Global toy market: Total revenue 2007–2020. https://www.statista.com/statistics/194395/revenue-of-the-global-toy-market-since-2007/.
[46] Turton, W. (2022, April 26). Tech giants duped into giving up data used to sexually extort minors. Bloomberg. https://www.bloomberg.com/news/articles/2022-04-26/tech-giants-duped-by-forged-requests-in-sexual-extortion-scheme.
[47] UNESCO. (2020). Outcome document: First draft of the Recommendation on the Ethics of Artificial Intelligence (SHS/BIO/AHEG-AI/2020/4 REV.2). https://unesdoc.unesco.org/ark:/48223/pf0000373434.
[48] UNCRC – United Nations Committee on the Rights of the Child. (2009, July). General comment No. 12 (2009): The right of the child to be heard (CRC/C/GC/12). https://www.refworld.org/docid/4ae562c52.html.
[49] Vaden, A. (2017, July 12). 5 kid inventors saving lives with their famous inventions. Sphero. https://sphero.com/blogs/news/5-kid-inventors-who-are-saving-lives.
[50] Webster, N. (2022, March 30). World’s youngest computer programmer wants to help children tap job market. The National News. https://www.thenationalnews.com/uae/2022/03/30/worlds-youngest-computer-programmer-wants-to-help-children-tap-job-market/.
[51] WEF – World Economic Forum. (2021). Future of the Connected World Report. https://www.weforum.org/connectedworld/report.
[52] WEF – World Economic Forum. (2022, March). Artificial Intelligence for Children Toolkit. https://www.weforum.org/reports/artificial-intelligence-for-children.
[53] WRR – Wetenschappelijke Raad voor het Regeringsbeleid. (2021, November). Opgave AI: De nieuwe systeemtechnologie [Mission AI: The new system technology] (No. 105). https://www.wrr.nl/publicaties/rapporten/2021/11/11/opgave-ai-de-nieuwe-systeemtechnologie.
[54] Yang, B., Wei, L., & Pu, Z. (2020). Measuring and improving user experience through artificial intelligence-aided design. Frontiers in Psychology, 11. doi: 10.3389/fpsyg.2020.595374.
[55] Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.