Arafeh Karimi, PhD
I am an independent judgement and sensemaking practitioner working with foundations, public-interest organisations, and evaluation teams when decisions about AI and digital systems carry real consequences for learners, workers, and communities.
What I bring
I have spent the past fifteen years working in the space between research, systems, and lived practice. That includes:
- A PhD in Human-Computer Interaction and co-design from the University of Queensland, focused on how people and technologies meet in real contexts.
- Leadership roles across national digital learning programmes, commissioned evaluations, and reporting-phase syntheses for public and philanthropic funders.
- Work spanning formal and informal education, social and neurodivergence contexts, and child-rights and public-interest technology initiatives across multiple countries.
I rarely look at a decision in isolation. I have seen how apparently sound strategies unfold, and sometimes unravel, once they meet classrooms, staffrooms, families, and frontline work.
How I think about evidence and risk
My work is grounded in socio-technical systems research and participatory design. I pay close attention to:
- How knowledge, power, and responsibility are distributed when new tools are introduced.
- Who quietly carries the verification and interpretive load: often teachers, junior staff, families, or frontline workers.
- Where ethics, policy, and practice fall out of alignment under real institutional conditions.
On this site, that work appears in two main lanes:
- Evidence & Learning: syntheses and briefs that turn scattered studies, reports, and field accounts into one decision-ready artefact.
- Judgement & Governance: shadow-risk and verification-load artefacts that surface what existing metrics and risk registers often miss.
Selected work
Recent public and policy-facing work includes a UNESCO-affiliated publication and book chapters on relational risk, care, and governance in AI-enabled learning systems.
Compassion by Design: Building AI With and For Caring Educators
A contribution on how care, dignity, and human judgement can be designed into educational AI systems without reducing them to sentiment or abstraction.
Used by: public institutions and international organisations aligning AI adoption with pedagogical purpose, educator agency, and institutional legitimacy.
From Guardrails to Covenants: Relational Bias in GenAI Education
A judgement-led analysis of how bias in generative AI for education emerges relationally over time through design choices, institutional incentives, and pedagogical assumptions.
Used by: policy leads, researchers, and designers assessing relational and temporal risk in AI adoption.
From Guardrails to Attunement: AI Pedagogies of Care, Consent, and Co-Emergence
An exploration of why compliance-driven ethics and static guardrails often struggle to hold in lived educational contexts, and what more attuned approaches to care, consent, and responsibility require.
Used by: institutions and research groups developing ethical and pedagogical AI frameworks under real-world complexity.
Lived grounding
Alongside formal institutional work, I maintain ongoing participatory engagement with neurodivergence and non-institutional learning communities.
Much of my current work is grounded in close involvement with neurodivergent family life, teacher workload, and the infrastructures that support or undermine care.
That grounding keeps my judgement accountable to realities that rarely appear in dashboards or slide decks.
How I work with you
I work through finite, artefact-based commissions rather than open-ended advisory roles. Typical engagements are:
- Scoped and time-bound, with a clear decision or governance moment in view.
- Primarily asynchronous and low-meeting, to protect depth of thinking on both sides.
- Designed to produce written outputs you can circulate to boards, funders, or leadership teams, including briefs, syntheses, memos, or governance notes.
I take on a limited number of commissions at a time so the work remains rigorous, independent, and usable at moments of real consequence.