Ethical Considerations for Using Artificial Intelligence

Type: Article
Topics: Ethics, School Administrator Magazine, Technology & AI

October 02, 2023

Your school community’s stakeholders serve an important role in identifying the dilemmas at play
Teachers at Kent Place School in Summit, N.J., attend training on the ethics of using artificial intelligence in school. PHOTO COURTESY OF THE ETHICS INSTITUTE AT KENT PLACE SCHOOL

Since the release of ChatGPT last winter, conversations about the ways in which artificial intelligence will disrupt the educational sector have loomed large. Even though AI emerged in the 1950s and is now part of many people’s daily routines, educators did not frame it as an existential threat until recently, when its potential use by students became a reality.

The concerns typically raised regarding jobs, privacy, safety and misuse of AI, however, aren’t unique to its latest tools. We need only look at the use of AI in security surveillance, drone-based warfare and internet bots to anticipate the possible benefits and harms of such technology. As such, the stakes feel high for school administrators who are looking out for the needs of teachers, students and families.

The Ethics Institute at Kent Place School can provide a vehicle for thinking through the culture of fear surrounding the use of artificial intelligence in education. A central component of what we call our school’s Ethical Decision-Making Method© is to engage multiple stakeholders in dialogue to understand their needs and wants before arriving at an institutional policy. By following our five-step process, we believe other school leaders can develop an AI policy that is people-based, nuanced and contextual enough to be helpful, ethical and mission aligned.

Understanding First

The first part of the Ethical Decision-Making Method is to study and understand the situation. This initial step can feel overwhelming. Many of us don’t consider ourselves equipped to understand the technological functions of AI tools, which leads to fear and dismissal: “How might this threaten my job security?” and “I can’t worry about this right now” and “I don’t want to learn how to use AI; I do my job well enough without it.”

Karen Rezach

For these reasons, we find most teachers and staff need time to play around with AI tools before trying to answer how educators should use or not use AI.

It’s important to understand the perspectives of the various stakeholder groups in a school community. Most high school students use ChatGPT and DALL-E, and middle school students use QuillBot and various translation platforms. We created a sandbox of AI tools for the adults in our school to explore. After gathering feedback about the AI tools used across the school community, it’s smart to engage in stakeholder discussions to understand the concerns and hopes around AI use. The guiding questions for such discussions are: “What ethical issues does ChatGPT raise for you at school or in your job at school?” and “What, if anything, should we do to address these concerns and possibilities?”

The feedback provided our technology department with relevant information to develop training around AI tools, anchored in three guiding questions:

What is important for us to understand about this AI tool that will have an impact on our decisions on how it is used or not used?

Which AI tools will we endorse as a school, and what further training for which constituents will be required?

What privacy or safety concerns does each tool present, and should this lead us to restricted use or non-use of a particular tool?

Equity, Safety and Authenticity

The second step of our Ethical Decision-Making Method is to identify the values involved on all sides. We find educators are most concerned about integrity, efficiency, equity and authenticity when it comes to artificial intelligence in schools. Because students can use AI tools to complete assignments, concerns arise about integrity and honor code violations.

Ariel Sykes

For instance, educators will need to grapple with whether there is a meaningful difference between using AI at different points in the writing process: brainstorming, outlining, drafting and editing. For teachers who value efficiency, AI can develop questions and lesson plans aligned with state standards and help with the menial, time-consuming tasks that cut into teachers’ direct time with students. Supervisors must determine which tasks AI may acceptably support.

There also is the open question of AI’s capability to level the learning landscape to meet the needs of all students and help advance equity in all educational settings. If AI can provide all students with access to out-of-school support that currently exists only for families with financial means, schools will need to determine what AI tools they want to train teachers and students on.

The value of authenticity arises when teachers express concern about the impact AI tools will have on the process of student learning and the development of a student’s sense of self. Many students already focus on their grades and view learning as the creation of a product — an essay, presentation or exam. Fear exists among educators that AI tools only exacerbate this focus, when the hope is to shift student awareness toward embracing the messy but fulfilling journey of learning.

If a student, for example, uses an AI tool whenever she gets stuck on a learning task, then she misses out on working through the challenge of making a mistake. In addition, it is through getting to know how a student is thinking when she veers off course that a teacher is able to identify areas for growth and re-instruction. Students may not have the chance to develop their own voice if they rely on AI tools to create content or to edit their work. Schools will have to be mindful of the ways AI can diminish authentic learning.

You want to hear from others beyond teachers. In meetings with families, you may find concerns about safety and success. Although many AI tools are open access, some do have age-of-use restrictions. Parents want assurance that if their children are using AI in classrooms, they are doing so in an age-appropriate manner and that any data being collected are not going to be misused.

Accountability and fairness also are central concerns among families, who want to balance the advantage of students learning an emerging technology that will prepare them for the future against the difficulty of monitoring students’ misuse of that technology to circumvent learning.

Administrators and staff often identify the values of safety, privacy, success and accountability as central concerns around AI in schools. Questions such as “Do we need to be careful about what we put into ChatGPT because of the collation of data?” and “If we use an AI tool to create content or to make a decision, what types of checks must we have in place to make sure human safety is ensured?” will need to be addressed to balance the ways in which AI tools can help people do their job more effectively and efficiently.

Middle school and high school students worry about fairness and success. If there are no clear parameters and accountability protocols about AI use in place, then students will likely misuse it. Students need to understand if they misuse AI to complete an assignment, they will not receive the same grade as those who followed the rules. Students also want to learn about how AI works so they can be informed users of these emerging technologies, which they recognize as necessary for their future success.

Nuanced Questioning

Step three of our Ethical Decision-Making Method is to identify the ethical dilemma. When it comes to AI use in schools, there are many competing commitments, and the issue can be polarizing. Some people want to ban AI outright while others want to embrace it fully. We find that the ethical dilemma that consistently emerges is not “Should we allow AI?” but rather “Which AI tools should we embrace, and how?”

This nuanced look at the issue raises the following related questions:

How will we determine which AI tools are acceptable for teaching and learning, as opposed to data management, logistics and administrative support?

Should our acceptable-use policies look the same for all members of our school community?

How should we monitor AI use in our school community?

How should we communicate our policies and procedures to the members of our school community?

Aligning With School Mission

Step four of our Ethical Decision-Making Method is to identify the values that will influence your decisions. Looking back at your stakeholder interviews, what were the values that needed balancing and how did they align with the school’s mission?

Use these values as a checkpoint to develop policy and procedures for acceptable AI use by answering these questions: “Are we centering the value(s) of ____?” and “Does this align with our school’s mission and vision?”

Step five is to arrive at a decision and then to communicate it effectively and respectfully. At this stage, it is important to lead with the values that informed your decision and to include in an explanation explicit references to the stakeholders’ perspectives that were considered. This will provide transparency of process and encourage buy-in, even among those who do not fully agree with the final decision.

For effective communication, we recommend providing clear benchmarks and avenues for continued feedback. Using focus group data from step one can help you provide the training and support for all stakeholders regarding their concerns and desires about AI use in their schools. This alignment process helps in rolling out and enforcing any policy on artificial intelligence.

Ongoing Challenges

The use of AI in K–12 education is multifaceted and ever-evolving. We expect to see more tools emerge as the AI field shifts toward commercializing programs specifically for the education sector. For these reasons, it is helpful to develop a culture of ethical decision-making now.

As you build buy-in to your institutional values, the policy and procedure process can become less adversarial and more collective. Keep your channels of communication open to all stakeholders, check in with them about how policies and procedures are working, and rely on the Ethical Decision-Making Method as new dilemmas arise.

Ariel Sykes is the assistant director of the Ethics Institute at Kent Place School in Summit, N.J. Karen Rezach is the founding director of the Ethics Institute.


The Right Questions to Ask

Knowing the right questions to ask during faculty and staff meetings on hot topics can be hard. The following list provides focused questions to ensure productive conversations about AI use in a school or school district that will allow you to gather information to guide your institutional decision-making process.

Questions for AI in teaching and learning

Do we need to fundamentally rethink the ways we prepare students for a society that embraces AI?

Are there skills we no longer need to teach because of the AI tools available?

In what ways can AI tools get in the way of learning and skill development?

Questions for AI in task management and completion

Should the criteria for using ChatGPT and other AI tools be “I could have produced this if given enough time”?

Should adults be asked to cite their use of AI tools when producing content?

What should we spend time doing as part of our jobs and what are we comfortable outsourcing to AI tools?

—  Ariel Sykes and Karen Rezach
