Weighted assessment - Integrity and use of AI

When we talk about integrity and academic misconduct, we are looking at something different here. We know that we should not cheat in exams. Misconduct, in simple terms, means copying others' work and claiming it as your own, asking others to do your work for you, or adapting others' work without acknowledgment. With AI, what once seemed clear has become blurred. After attending the course, I had a dialogue with GPT, and these are the insights it generated, which I have extracted below.

1. Clarify AI’s Role in Coursework

Students may use generative AI tools (e.g., ChatGPT, DALL·E, Canva AI, Copilot) as learning assistants, not as substitutes for original thought.
We will use three guiding principles to communicate expectations schoolwide:
• Declare – Be transparent about how AI was used (tool, stage, purpose, extent).
• Attribute – Credit AI-generated ideas, images, or phrasing that influenced the work.
• Improve – Use AI as a starting point for refinement, verification, and testing through human judgment.

These principles move students from “using AI to complete work” to “using AI to deepen thinking.”


2. Require Process Evidence and Disclosure

All coursework involving AI should show how students developed their work and the extent of AI involvement.
Evidence may include:

  • Screenshots or logs of AI prompts and outputs (with reflection).

  • Annotated drafts showing revision and decision-making.

  • A short AI Use Declaration stating tools used, purpose, and level of influence (Minimal / Moderate / Extensive).

Departments are encouraged to adapt templates that fit their subjects while maintaining the same core expectation of transparency and accountability.


3. Mixed Mode Assessment: Supervised + Open

To maintain integrity and authenticity while supporting innovation, coursework can blend:

  • Supervised sessions (AI-free): Students demonstrate independent thinking or technical competence under controlled conditions.

  • Open sessions (AI-allowed): Students use AI ethically for brainstorming, organization, or analysis—with clear documentation.

  • Reflective component or viva: Students explain how AI influenced their work and what they learned from the process.

This mixed approach ensures we assess both capability without AI and responsible co-thinking with AI.


4. Reinforce Our Local Ethical Code

Each department should develop or adapt a short code of conduct stating:

“We value human judgment, creativity, and integrity. AI may support thinking, but it cannot replace understanding or accountability.”

Teachers can model this transparency by sharing how they use AI responsibly for lesson design or resource preparation.


5. Leverage AI-Powered Learning Analytics Thoughtfully

As we explore AI-supported learning analytics (e.g., analysing reflections, tracking design iterations, or mapping self-directed learning behaviours), data should be used formatively — to guide reflection and feedback, not for punitive ranking.
Analytics should illuminate how students learn, reinforcing growth and ethical awareness.
