Many educators are grappling with questions about AI detection. Yet AI detection tools are unreliable, biased, and a source of real distress. False positives can harm students’ academic standing and well-being, with marginalized groups often disproportionately affected, while detectors still miss significant portions of AI-generated text (Lurye, 2025; Encouraging Academic Integrity, 2025; Hirsch, 2024). Detection tools also assume students simply copy and paste AI outputs, when in reality many use these tools more fluidly, taking suggestions, rewriting, or iterating through prompts, so the resulting work is indistinguishable from original writing. As one student noted, “it’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own…” (Terry, 2023).
What Students Want
Most students believe institutions should address academic integrity concerns related to generative AI, but they largely prefer proactive and educational approaches over punitive measures. A significant number of students want clear rules about when and how AI tools can be used, as well as a voice in shaping them (Flaherty, 2025).
From Policing to Partnership
Given the inherent risks of detection and bans — tools that can unfairly penalize students and policies that do little to promote ethical use — the better path forward is not more surveillance, but more collaboration. Faculty-written policies risk missing the mark if they ignore how students actually engage with AI. Instead of policing AI through punitive measures, faculty can create space for students to help define appropriate guidelines. Policies crafted together shift the dynamic from rules imposed to standards co-owned, building trust and relevance.
Why Co-Creation Works
Self-Determination Theory suggests that students are more likely to internalize and adhere to guidelines when they have a hand in creating them. Involving students in developing AI usage policies communicates that their perspectives are valued and supports their need for autonomy, turning compliance into genuine commitment. Co-created rules feel less like authoritarian decrees and more like shared standards, which in turn fosters ownership, clarity, and consistency in how those policies are understood and followed (Guay, 2022; Kuo et al., 2025).
Practical Approaches to Co-Create AI Policies
Research makes it clear: students are more likely to respect and follow policies they help shape. But theory alone won’t change your syllabus. The real shift happens when faculty move from principle to practice. The good news? There are straightforward, adaptable activities you can use right now to bring students into the conversation and co-create meaningful AI usage policies. For best results, implement these activities within the first week or two of the term, or before your first major assignment.
Document-Based Collaboration
1. Shared Policy Google Doc
A structured Google Doc provides policy headings (Assessment, Collaboration, Academic Integrity). Students co-edit the text under each section, adding suggestions in comments. As comments are resolved, the document evolves into a finalized class-wide AI usage policy.
Tool: Google Docs
2. Scenario Response Wiki
Students use a wiki page to respond to realistic AI-use scenarios (e.g., “AI writing feedback on essays”). Small groups draft responses for each scenario, and peers edit for consistency. Over time, pages become a collective guide to what counts as acceptable AI use, directly forming a policy.
Tool: Canvas Wiki Pages (or equivalent LMS wiki feature)
3. Crowd-Sourced Glossary
Students collaboratively define AI-related terms and practices in a shared glossary tool (wiki or Google Doc). Each entry includes “permitted uses” and “restricted uses.” The glossary doubles as both a vocabulary aid and a concrete class AI policy.
Tools: Canvas Wiki Pages, Google Docs
4. Policy Charter Pad
Using a template, students co-author a charter with structured sections: Purpose, Guidelines, Responsibilities, and Consequences. Each section is drafted collaboratively, with rotating editors refining language. The final product is a polished class AI usage charter.
Tool: Google Docs
Discussion & Forum-Based Activities
5. Canvas Discussion Draft
The instructor seeds a discussion with prompts for different policy areas. Students propose clauses and debate wording in threads. A moderator (instructor or rotating student role) synthesizes the top ideas into a consensus policy posted back to the group.
Tool: Canvas Discussions
6. Draft & Vote Forum
The instructor posts draft clauses in a forum. Students propose alternatives as replies. A class-wide vote (via Canvas poll or Google Form) determines the preferred wording. The winning clauses are compiled into the final AI policy.
Tools: Canvas Discussions, Google Forms
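If you run the vote through a Google Form, responses export as a CSV, and a short script can tally the winning wording for each clause. The sketch below is a hypothetical Python example: the file name and column layout (one column per clause, one row per student) are assumptions, not a fixed Forms format.

```python
# Hypothetical sketch: tally a policy vote exported from Google Forms as CSV.
# Assumes one column per clause, where each cell holds the wording option a
# student selected; the file and column names are placeholders.
import csv
from collections import Counter

with open("policy_vote_responses.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Treat every column except the export's timestamp as a clause.
clauses = [name for name in rows[0] if name != "Timestamp"]
for clause in clauses:
    tally = Counter(row[clause] for row in rows if row[clause])
    winner, votes = tally.most_common(1)[0]
    print(f"{clause}: '{winner}' wins with {votes} of {len(rows)} votes")
```

Posting the tallied results back to the discussion thread keeps the process transparent before the winning clauses are compiled.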
7. Policy Blog Chain
Students write sequential short blog posts on a shared course blog. Each post revises or critiques the prior entry, building momentum toward consensus. The chain of posts is later synthesized into a cohesive AI usage policy.
Tool: Shared course blog
Visual & Interactive Tools
8. Miro Collaborative Map
On a Miro board, students build a shared mind map with branches like “Learning Support,” “Integrity,” and “Assessment.” They attach notes or examples under each branch. The class then translates the map’s structure into a written, shared policy document.
Tool: Miro
9. Perusall Policy Annotation
An instructor uploads an external AI policy (e.g., from a university or journal) into Perusall. Students highlight passages and comment on what they agree with or want to adapt. Annotations are collected and distilled into a tailored class policy.
Tool: Perusall
Media & Feedback Tools
10. Media Roundtable & Podcast
Students record short reflections (video or audio) on what should or shouldn’t be in the AI policy. Using Kaltura in Canvas, Microsoft Teams, or Canvas Discussions with media replies, they share contributions and respond to peers. The instructor (or a student group) compiles clips and/or transcripts into a single artifact. This collective media product is then distilled into draft clauses for the shared AI usage policy.
Tools: Kaltura, Microsoft Teams, Canvas Discussions
11. AI Policy Survey & Summary
The instructor creates a Qualtrics survey with items such as “Is it acceptable to use AI to generate code for an assignment?” Students select Acceptable, Unacceptable, or Conditional and provide a brief rationale. Qualtrics automatically aggregates results into tables and charts, making consensus and disagreements easy to spot. The class then uses these visual summaries to draft clear, evidence-based clauses for the shared AI usage policy.
Tool: Qualtrics
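Qualtrics generates these summaries on its own, but if you would rather work from the raw response export, a few lines of pandas produce the same breakdown. This is a minimal sketch under stated assumptions: the CSV has been cleaned to a single header row, each item is a column, and the column names below are placeholders.

```python
# Hypothetical sketch: summarize survey responses exported as CSV with pandas.
# Assumes a cleaned export (one header row) where each item column contains
# "Acceptable", "Unacceptable", or "Conditional"; names are placeholders.
import pandas as pd

df = pd.read_csv("ai_policy_survey.csv")
items = ["Generate code", "Draft an outline", "Polish grammar"]  # placeholder item columns

# Share of each response option per item, as percentages.
summary = (
    df[items]
    .apply(lambda col: col.value_counts(normalize=True) * 100)
    .round(1)
    .fillna(0)
)
print(summary)  # rows: response options; columns: survey items
```

A table like this makes it easy to flag the items where the class splits, which are exactly the clauses worth discussing before drafting.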
12. Peer-Reviewed Policy Exchange
Each student drafts a mini-policy document and submits it to a shared folder or assignment space. Using a structured rubric, peers review at least two classmates’ drafts, either through Peerceptiv or LMS assignment tools. The strongest and most frequently endorsed ideas are integrated into a composite class policy authored by the group.
Tools: Peerceptiv, Google Drive, Canvas Assignments
Bringing It All Together
Once students have contributed through these activities, the instructor’s role is to bring the pieces together. Compiling the results, highlighting areas of consensus, and drafting a clear, shareable policy ensures that the final guidelines reflect the class’s input. Sharing this draft back with students not only closes the loop but also reinforces that their voices shaped the outcome.
Before you drop a boilerplate AI statement into your syllabus, try one of these toolkit activities. Start small, perhaps with a survey or a media roundtable, and see how co-writing changes the game.
References
Encouraging academic integrity – University Center for Teaching and Learning. (2025). University of Pittsburgh. https://teaching.pitt.edu/resources/encouraging-academic-integrity/
Flaherty, C. (2025, August 29). How AI is changing—not ‘killing’—college. Inside Higher Ed. https://www.insidehighered.com/news/students/academics/2025/08/29/survey-college-students-views-ai
Guay, F. (2022). Applying self-determination theory to education: Regulations types, psychological needs, and autonomy supporting behaviors. Canadian Journal of School Psychology, 37(1), 75–92. https://doi.org/10.1177/08295735211055355
Hirsch, A. (2024, December 12). AI detectors: An ethical minefield. Center for Innovative Teaching and Learning. https://citl.news.niu.edu/2024/12/12/ai-detectors-an-ethical-minefield/
Kuo, T.-S., Chen, Q. Z., Zhang, A. X., Hsieh, J., Zhu, H., & Holstein, K. (2025). PolicyCraft: Supporting collaborative and participatory policy design through case-grounded deliberation. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1–24). Association for Computing Machinery. https://doi.org/10.1145/3706598.3713865
Lurye, S. (2025, August 7). Students have been called to the office — and even arrested — for AI surveillance false alarms. AP News. https://apnews.com/article/ai-school-surveillance-gaggle-goguardian-bark-8c531cde8f9aee0b1ef06cfce109724a
Terry, O. K. (2023, May 12). Opinion | I’m a student. You have no idea how much we’re using ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt