AI is now at the center of how we talk about teaching, learning, and employability, at KTH in general and very concretely at the school level within SCI. The question isn’t whether generative AI will be used; it already is. The real question is how we integrate it so it becomes a resource rather than a shortcut that quietly erodes learning.
Our SCI faculty breakfast on AI in education highlighted both strong engagement and a clear need. Faculty explicitly asked the school to facilitate structured sharing of experiences, teaching materials, and course experiments so we can learn faster together. That kind of open exchange is a powerful tool, one we are in the process of building. It also reinforces something I feel strongly about: we will not navigate this transformation by writing policies and documents alone. Policies can set boundaries and provide a shared vocabulary, but practice is built in the everyday, in assignments, seminars, feedback, and assessment design.
“AI readiness” is a term we hear often. But AI readiness is not about becoming a prompt expert; it is about judgment: the ability to evaluate outputs, verify claims, recognize uncertainty, and decide when AI supports learning and when it replaces it. That kind of judgment rests on deep disciplinary knowledge and critical thinking, and it is part of a university’s responsibility to teach it explicitly. Research comparing ChatGPT with human tutors points in this direction: AI can be helpful as a structured dialogue partner, but human guidance remains essential for nuance and tailored feedback (“Socratic wisdom in the age of AI: a comparative study of ChatGPT and human tutors in enhancing critical thinking skills,” Frontiers).
At the same time, we should be honest about risks. If AI becomes an “answer engine,” students may produce polished outputs while thinking less, a concern raised clearly in STEM education discussions about (Gen)AI and learning design. And this is also an equity issue: students differ in access, confidence, and literacy, and a “GenAI divide” can widen if universities don’t provide structured support and shared expectations (“The GenAI divide among university students: A call for action,” ScienceDirect). KTH’s decision to provide Copilot access to all students and staff is an important step. The next question is how we align tool choices, whether Copilot or other platforms (free or paid), with our learning goals and with students’ development of judgment.
This is why, at KTH (SCI), as at many other universities, we are treating AI integration as ongoing academic development rather than a one-off compliance exercise: raising a shared baseline of AI competence among teachers, and building on-site platforms, workshops, and recurring arenas where we can share materials, compare course designs, and learn from each other as practice evolves. If we keep the focus there, on judgment, critical learning, and collegial exchange, I genuinely believe we can navigate this transformation in a way that serves our students best.

