Guiding AI’s Responsible Use in Schools

Type: Article
Topics: School Administrator Magazine, Technology & AI

November 01, 2024

Districts can develop effective guidelines that are comprehensive yet recognize the fluid nature of emerging technology

The evolving landscape of artificial intelligence requires school and district leaders to be responsive and proactive, cautious but bold, and knowledgeable about a topic where much uncertainty still exists.

While a handful of states have adopted guidance on the integration of AI and generative artificial intelligence tools in education, many school districts are left to develop their own responsible use guidelines.

Effective responsible use policies must balance multiple requirements. They should meet the needs of the unique communities that each district serves and apply to all emerging technologies, not just AI. They must attend to biases and protect data and privacy, while also allowing learners and teachers to learn about and benefit from AI in the classroom.

A Rapid Evolution

So how are district leaders wrestling with these challenges, and what support do they need?

That’s what our team at Digital Promise is seeking to uncover. Through our project, “Responsible, Ethical and Effective Acceptable Use Policies for the Integration of Generative AI in U.S. School Districts and Beyond,” funded by the National Science Foundation, we have been collaborating closely with 10 school districts across the United States to help leaders develop AI policies and prepare their schools and staff to engage with AI responsibly and effectively.

As our work evolves, we have identified guiding principles to help school district leaders develop effective emerging technology policies.


Pati Ruiz and KellyAnn Tsai

Senior director of the edtech and emerging technologies team; Research communicator

Digital Promise, Redwood City, Calif.

Considerations for Evaluating AI

Artificial intelligence literacy — the knowledge and skills that enable humans to critically understand, use and evaluate AI systems — has emerged as a crucial skill set for everyone in education.

Evaluating AI is the most critical element of AI literacy. Too often, people use AI passively without considering the privacy, safety or societal implications of doing so. To be truly AI literate, users must take a more active approach to understand not only what data the tool is using, but also how these data are being used and shared.

When evaluating AI tools for your district, we recommend four considerations.

Transparency: Support users in understanding what data and methods were used to train the AI system or tool. Ask: What AI model and methods were used to develop this tool? What datasets were used to train this AI model?

Safety: Understand data privacy, security and ownership. Ask: How is information being collected, used and shared? How do we prevent tools from collecting data, or ensure that data already collected can be deleted?

Ethics: Consider how datasets, including their accessibility and representation, reproduce bias in our society. Ask: How is AI perpetuating issues of access and equity? Who is harmed, who benefits, and how?

Impact: Examine the credibility of outputs and the efficacy of algorithms and question the biases inherent in the use of AI systems and tools. Ask: Is this AI algorithm the right tool for impact? Is this AI output credible? How do we center human judgment in decision making?

Evaluating AI is an essential component of a broader AI literacy framework, which also includes understanding AI and using AI. Digital Promise’s AI Literacy Framework helps educational leaders design and implement a clear approach to AI literacy for their learners, teachers and community.

—  Pati Ruiz and KellyAnn Tsai
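
The four questions above lend themselves to a simple written checklist. The sketch below is purely illustrative and is not part of the Digital Promise framework; the names (AIToolReview, EvaluationItem, unanswered) are hypothetical and show only one way a district review team might record which questions still need answers before adopting a tool.

# Illustrative sketch (hypothetical names): track a district's answers to the
# four evaluation questions for a single AI tool under review.
from dataclasses import dataclass, field

@dataclass
class EvaluationItem:
    consideration: str   # e.g., "Transparency", "Safety", "Ethics" or "Impact"
    question: str        # the guiding question to answer
    answer: str = ""     # the review team's written response

@dataclass
class AIToolReview:
    tool_name: str
    items: list[EvaluationItem] = field(default_factory=list)

    def unanswered(self) -> list[str]:
        # Return the questions that still lack a written answer.
        return [item.question for item in self.items if not item.answer.strip()]

review = AIToolReview(
    tool_name="Example classroom chatbot",
    items=[
        EvaluationItem("Transparency", "What datasets were used to train this AI model?"),
        EvaluationItem("Safety", "How is information being collected, used and shared?"),
        EvaluationItem("Ethics", "Who is harmed, who benefits, and how?"),
        EvaluationItem("Impact", "Is this AI output credible?"),
    ],
)

print(review.unanswered())  # lists the questions the team has not yet addressed

A shared spreadsheet or document serves the same purpose; the point is that every question gets a recorded answer before a tool is approved.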

Writing Clear and Accessible Policies

While policies touch on complex and evolving topics, they need to be clear and understandable to all school community members. A few tips:

Avoid techno-centric language when possible. Direct educators and families who may be less familiar with technology terms to glossaries such as the Glossary of Artificial Intelligence Terms for Educators.

Consider easy-to-consume ways to disseminate guidelines. The Iowa City Community School District drafted short teacher and student guidelines, formatted as one-page infographics, to be posted in classrooms as reminders about responsible use. Using the acronyms TEACH and LEARN, they summarize what students ought to consider when using AI for learning and what teachers must consider when leveraging AI for instruction.

Avoid human terminology, such as “hallucinate” or “think,” when writing about AI. Such language reinforces the idea that AI is a humanlike robot that will replace humans. Instead, refer to AI as a tool or technology that can (and does) make mistakes.

Resource Leads

These informational resources can help school districts devise responsible use guidelines:

Digital Promise report, “AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology”

Sample empathy interview protocol

Sample language for emerging technology acceptable use policies

AI Pedagogy Project 

—  Pati Ruiz and KellyAnn Tsai
