Before introducing an AI tool, schools should carefully review its terms and conditions. They should fully understand who has legal authority and control over the data entered into the system, which uses are permitted, what obligations the vendor assumes, and how data is shared with third parties. Administrators must also understand what liability the school may be taking on and what happens in the event of misuse, inaccurate output, or a data breach.
All student data is sensitive, and AI systems must comply with the federal and state laws that protect it, including the Family Educational Rights and Privacy Act (FERPA) and, where applicable, the Children’s Online Privacy Protection Act (COPPA). As a rule of thumb, an AI tool should be permitted to collect, process, and retain only the minimum amount of data necessary to deliver its service.
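One way a technology team might operationalize this rule of thumb is to check a vendor's data request against a district-approved allow-list before a tool is adopted. The sketch below is illustrative only: the field names, allow-list, and retention cap are hypothetical, and a real review would draw these from district policy and the vendor contract.

```python
# Minimal sketch of a data-minimization check for a proposed AI tool.
# All field names and limits here are hypothetical placeholders.

ALLOWED_FIELDS = {"student_id", "grade_level", "assignment_text"}  # minimum needed
MAX_RETENTION_DAYS = 180  # hypothetical district retention cap

def review_data_request(requested_fields: set[str], retention_days: int) -> list[str]:
    """Return a list of findings that must be resolved before approval."""
    findings = []
    excess = requested_fields - ALLOWED_FIELDS
    if excess:
        findings.append(f"Requests data beyond the minimum necessary: {sorted(excess)}")
    if retention_days > MAX_RETENTION_DAYS:
        findings.append(
            f"Retention of {retention_days} days exceeds the {MAX_RETENTION_DAYS}-day cap"
        )
    return findings

# Example: a tool asking for more data than it needs fails the check.
print(review_data_request({"student_id", "home_address", "assignment_text"}, 365))
```

An empty findings list would not by itself justify approval; it simply means the request clears the data-minimization screen and can move on to legal and accessibility review.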
Schools must also ensure that AI tools meet accessibility standards. They should evaluate whether all learners, including students with disabilities, can access the tool and what safeguards are needed to give every student a safe, productive experience.
Schools should always require human review and keep a “human in the loop.” A person should review everything an AI tool generates, especially in high-impact uses such as grading, placement, and discipline-related decisions. Human judgment should always be the final step in the decision-making process.
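In software terms, a human-in-the-loop requirement means an AI output stays in a pending state until a named reviewer signs off. The sketch below illustrates such a gate; the Recommendation class and its fields are hypothetical, and a real system would integrate with the district's student information system and keep an audit log.

```python
# Minimal sketch of a human-in-the-loop gate for AI-generated results.
# The class and workflow are illustrative only, not a vendor API.

from dataclasses import dataclass

@dataclass
class Recommendation:
    student: str
    ai_output: str            # e.g., a suggested grade or placement
    high_impact: bool         # grading, placement, discipline, etc.
    approved_by: str | None = None

    def finalize(self) -> str:
        # High-impact outputs never take effect without a named human reviewer.
        if self.high_impact and self.approved_by is None:
            raise PermissionError("Human review required before this takes effect")
        return self.ai_output

rec = Recommendation(student="S-1042", ai_output="Placement: Algebra II", high_impact=True)
# rec.finalize()             # would raise: no human has signed off yet
rec.approved_by = "counselor_jdoe"
print(rec.finalize())        # now allowed: a human made the final call
```

The design choice here is that approval is recorded as an attribute of the decision itself, so every finalized outcome carries the name of the person who exercised final judgment.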
Schools should examine whether use of the AI tool is associated with improvements in student learning, including gains on relevant assessments and the development of targeted skills, such as writing and math.
Schools should assess whether use of the AI tool influences student engagement and participation in the classroom.
Schools should evaluate whether the AI tool meaningfully supports instruction, enables more personalized learning, and improves classroom practices.