From SAFE by Design to Practice: The Techwell Project Approach

Author:
Plamen Miltenoff, MLIS, Ph.D.
Independent Researcher / Center for Educational Initiatives
SAFE by Design is a policy framework that offers guidance for schools, policymakers, and technology developers on reducing risks and protecting student wellbeing when AI companions and chatbots are used in education. The report argues that AI companions and general-purpose chatbots have penetrated schools faster than governance, pedagogy, and wellbeing frameworks have been able to adapt. Students use these systems for academic support, emotional expression, and social interaction, often on school-issued devices and networks. The document positions this as a structural challenge rather than a classroom management issue: schools now operate in parallel with a shadow digital layer that shapes thinking, identity, and relationships.
A central takeaway is the distinction between consumer AI optimized for engagement and purpose-built educational tools aligned with learning science. Commercial systems optimize for retention, satisfaction, and emotional stickiness; educational systems should optimize for cognitive effort, accuracy, inclusion, and growth. When schools fail to distinguish between these categories, they risk importing commercial incentive structures into learning environments. For the Techwell project, which examines digital wellbeing and AI literacy, this distinction provides a conceptual anchor. The project operationalizes it by developing evaluation rubrics that assess not only functionality but also the underlying incentive logic, reward structures, and relational cues embedded in AI systems.
Another key insight of the report concerns misaligned engagement metrics. In commercial contexts, engagement means time and frequency; in education, it means effort, persistence, and conceptual change. The report highlights risks such as sycophancy, where systems affirm user beliefs to maintain satisfaction, a dynamic that can weaken epistemic resilience and critical thinking. For the Techwell project, this insight supports integrating "epistemic wellbeing" into digital wellbeing frameworks. AI literacy modules within the project address the calibration of trust, the recognition of persuasive design, and strategies for productive struggle. Rather than teaching students to prompt for better answers, the Techwell project emphasizes questioning, verifying, and challenging AI outputs.
The document also foregrounds relational design risks. When AI companions simulate empathy, romance, or friendship, they blur the boundary between tool and relationship, and the report draws attention to minors' particular vulnerability to anthropomorphism and emotional attachment. Because the Techwell project engages with digital fatigue and mental health in educational settings, it is worth expanding its scope from screen time to relational architecture. The project can incorporate modules on the empathy gap between human and machine, the psychological impact of simulated intimacy, and help-seeking pathways that prioritize trusted adults over AI systems.
Purchasing decisions become a key point of influence. The report outlines five core standards for education technology: it should be safe, grounded in evidence, inclusive, easy to use, and able to work with other systems. It advises adding clear requirements to contracts, such as removing addictive game like features, clearly stating that the system is a machine, providing dashboards for teachers, and setting up reporting processes that include human review. The Techwell project can turn these standards into policy templates for schools and universities, develop practical checklists for choosing AI tools, and run training sessions for leaders. This shifts Techwell from raising awareness to shaping how institutions make decisions.
The report's emphasis on cross-curricular AI literacy aligns strongly with the Techwell project's mission. AI literacy should move from a technical skill to a human-AI interaction competency embedded across subjects. Students should understand probabilistic output, hallucination risk, bias, and the social implications of training data.
Family-school partnership also plays a significant role in the report. Many AI interactions occur outside formal school oversight, so the document promotes joint media engagement and transparent communication with families. The Techwell project reflects this dimension by developing parent-oriented resources, webinars, and communication toolkits, strengthening the project's impact beyond institutional boundaries.
AI in education requires safety by design, not reactive correction. The Techwell project integrates governance, pedagogy, and technology design, positioning itself as a bridge between research, policy, and practice by testing evaluation frameworks, piloting AI literacy curricula, and documenting educator experiences. SAFE by Design provides a policy-level scaffold; the Techwell project translates that scaffold into measurable practices, professional development models, and research instruments that examine how AI affects cognitive, emotional, and social wellbeing in educational contexts.
SAFE AI Companions Task Force. 2026. “S.A.F.E. by Design: Policy, Research, and Practice Recommendations for AI Companions in Education.” https://www.aiforeducation.io/s/SAFE-by-Design-Policy-Research-and-Practice-Recommendations-for-AI-Companions-in-Education.pdf
