AI Education and Education about AI
Reflections on machine intelligence and human learning — two trajectories hand-in-hand
You know how sometimes you’re sitting in your room late at night, thinking about seemingly unrelated things, and suddenly they connect in your mind? That’s what happened when I was reviewing the history of AI in education. There’s this fascinating pattern that emerges when you look at how we’ve been using machines to teach, and how we’ve been teaching about machines themselves.
It’s actually quite funny when you think about it — we’re using AI to teach humans while simultaneously teaching humans about AI. Makes you wonder who’s really learning from whom, ya?
When Machines Became Teachers
Let me share a timeline about teaching machines that reveals something fascinating about how we’ve understood — and misunderstood, maybe? — the nature of learning itself. Not the neural networks of today, with their vast matrices of floating-point numbers approximating knowledge, but their mechanical ancestors — devices of brass and spring that embodied our earliest theories about how minds acquire understanding.
Sidney Pressey and his invention. Image from Hack Education and Slate.
In 1924, Sidney Pressey created what he called a “teaching machine.” Picture a mechanical device, not unlike a typewriter, but one that could engage in a primitive form of dialogue with its user. It would present a question, await a response, and provide immediate feedback. The mechanism’s simplicity belied its philosophical implications: here was our first attempt to externalize the process of teaching, to capture in gears and levers what we thought was the essence of learning.
What makes this historical moment particularly revealing isn’t just the mechanical ingenuity, but how the machine embodied the dominant psychological theory of its time — behaviorism. This was an era when psychologists like Watson and Skinner viewed learning as fundamentally a matter of stimulus and response, rather like our contemporary reinforcement learning algorithms. The parallel is striking: just as we now train neural networks through reward signals, behaviorists believed human learning could be reduced to a series of carefully crafted reinforcements. There’s an elegance to this view, though perhaps also a certain poverty of imagination.
By the 1960s, with the advent of Computer-Assisted Instruction (CAI), our metaphors for learning had shifted dramatically. The emergence of cognitive psychology brought with it a new model: the mind as information processor. We began thinking of human cognition in terms of input, processing, and output — terms borrowed from the very machines we were building to assist in education. The irony is delicious: we created computers in our image, then began seeing ourselves in theirs. It’s a kind of technological mirror stage, reflecting back our theories about ourselves through the devices we build.
Minicomputers from 1960. Image from Satari.
The 1980s brought us Intelligent Tutoring Systems (ITS), marking another shift in our understanding. These systems attempted something more ambitious: adapting to individual learning patterns. The difference is rather like that between an ustadz who merely transmits information and one who observes, adjusts, and guides — who understands that knowledge transfer isn’t merely about content but about context, timing, and the unique characteristics of each student’s mind.
PLATO IV Terminal, ca. 1972–74. Credit: University of Illinois Archives. Referenced through Ars Technica.
This evolution from mechanical quiz-givers to adaptive learning systems traces not just a history of technological advancement, but a deepening understanding of what it means to learn and teach. Each generation of teaching machines serves as a kind of technological historiography, preserving in their design and operation the educational theories of their time.
Meanwhile, in Computer Science Departments…
But here’s the other side of the story: while we were building these teaching machines, we were also trying to figure out how to teach people about AI itself. At first, it was all technical — algorithms, data structures, that sort of thing. Very dry, very “here’s how you make the computer do things.” The typical AI curriculum evolved through distinct phases: starting with symbolic AI and expert systems in the 80s, moving through statistical learning in the 90s, and eventually arriving at today’s deep learning paradigms. Each phase brought its own set of assumptions about what students needed to know — from LISP programming and logical inference to probability theory and calculus, and now to the excruciating details of neural architectures and gradient optimization.
Yet throughout this evolution, something curious happened. As our technical capabilities grew more sophisticated, our educational approach became increasingly compartmentalized. We developed specialized tracks: machine learning engineers focusing purely on model architecture, data scientists consumed by statistical methods, AI ethicists pondering societal implications — each in their own silo, rarely engaging in meaningful dialogue with the others. Rather like building different parts of a spacecraft without ever discussing where we’re trying to go. (though I suspect that in the Indonesian industry context, pure compartmentalization in job descriptions is still a rare luxury 😅)
The funny thing is, we kind of missed something important along the way. In our Islamic tradition, when you teach someone astronomy, you’re not just teaching them about stars — you’re teaching them about the signs of Allah’s power to create the universe, hence His name al-Khaliq (the Creator). When you teach mathematics, you’re also teaching the wisdom of applying the proper method to the proper time and problem. But somehow, when it comes to AI, we got caught up in the technical bits and forgot about the wisdom part, or at best, relegated it to a footnote. It’s almost as if in our rush to build artificial intelligence, we forgot to apply our natural intelligence to understanding its deeper implications.
In our Islamic intellectual tradition, we have this beautiful concept of adab — it’s not just about knowing what’s right, but developing the wisdom to understand why it’s right and when to apply it.
A little elaboration about “adab”
In Islamic thought, adab carries a depth that our English translations often struggle to capture fully. While commonly translated as “etiquette” or “good manners,” these renderings barely scratch the surface of its metaphysical implications. Syed Muhammad Naquib al-Attas, in his penetrating analysis, defines adab as “the recognition and acknowledgment of the right and proper place of things in the order of creation, such that it leads to the recognition and acknowledgment of God in the order of being and existence.”
Consider how this definition transforms our understanding: adab isn’t merely about following prescribed rules or maintaining social graces. It’s about developing an almost intuitive comprehension of the proper order of reality — a kind of spiritual-intellectual GPS that helps us navigate the complex landscape of existence. In the context of knowledge and learning, it manifests as what we might call “intellectual courtesy” toward the nature of things: understanding not just what they are, but their proper place in the grand scheme of creation.
This understanding of adab reveals interesting parallels with what contemporary philosophy of technology calls “technological appropriateness” — though I’d argue the Islamic concept offers richer metaphysical grounding. It’s about maintaining proper relationships: between the knower and the known, between capability and responsibility, between what we can do and what we ought to do. One might say it’s the difference between knowing how to build a neural network and understanding its proper place in the ecology of human knowledge and activity (though I suspect many of us are still struggling with both aspects).
The concept becomes particularly poignant when we look at its root meaning in Arabic, which connects to invitation or gathering for a feast: “banquet” (maʾduba). There’s something beautifully apt about this etymology — proper adab invites us to partake in knowledge not as mere consumers, but as guests who understand both the privilege and responsibility of their position. Rather like being invited to a fancy dinner party: knowing which fork to use is helpful, but understanding how to be a good guest goes far beyond mere mechanical rule-following.
Take machine learning algorithms, for instance. When we teach gradient descent, we often present it as this elegant mathematical dance — adjusting weights and biases in tiny steps until our loss function reaches its minimum. It’s beautiful in its own way, rather like watching a master calligrapher perfect their strokes. But there’s a deeper parallel here with the Islamic concept of ihsan (excellence and beauty in action) that we rarely discuss. Just as the calligrapher’s art isn’t merely about getting the letters right but about embodying spiritual meaning through physical form, our algorithms shouldn’t merely optimize for mathematical accuracy but for meaningful engagement with reality.
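To make the “tiny steps until the loss reaches its minimum” concrete, here is a minimal sketch of gradient descent on a one-variable loss. The quadratic loss, learning rate, and step count are illustrative assumptions, not anything from a particular framework — real training does the same dance over millions of parameters at once.

```python
# A minimal sketch of gradient descent: repeatedly nudge a parameter
# in the direction that lowers the loss. The loss L(w) = (w - 3)^2
# and the learning rate are illustrative choices.

def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Take `steps` small steps against the gradient of the loss."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)  # each step moves w slightly downhill
    return w

# L(w) = (w - 3)^2 has its minimum at w = 3; its gradient is 2(w - 3).
w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_min, 3))  # converges near 3.0
```

The calligraphy analogy holds even here: each stroke (step) is simple, but the choice of stroke size (learning rate) and when to stop is where judgment enters.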
I remember sitting in a machine learning class where we spent three weeks on backpropagation mathematics (those partial derivatives that haunted my dreams), but not a single session on what I’d call the “adab of algorithmic intervention.” This is where the deeper meaning of adab becomes crucial. When we deploy a model that will affect people’s lives, we’re not just implementing a technical solution — we’re participating in what Islamic philosophers would call tasarruf (dispensation or governance) over a domain of reality.
Consider the process of data collection. From a purely technical standpoint, it’s about gathering sufficient, clean, representative data. But through the lens of adab, we must ask: Are we maintaining proper relationships with the sources of our data? Are we honoring the trust (amanah) placed in us by those whose information we process? The Prophet ﷺ said, “None of you truly believes until he loves for his brother what he loves for himself.” How might this hadith inform our approach to data privacy and algorithmic fairness?
Even the concept of model bias takes on new dimensions when viewed through this lens. It’s no longer just about statistical skew or sampling error — though these technical aspects remain important. It becomes a question of ‘adl (justice) and mizan (balance) in our technological interventions. Are our models maintaining the proper order of things, or are they subtly distorting the relationships between people, knowledge, and reality?
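One simple way to see “statistical skew” as a question of ‘adl is to compare how often a model grants a favorable outcome to different groups — what the fairness literature calls the demographic parity difference. The sketch below uses invented predictions purely for illustration; the threshold for what counts as an unjust gap is precisely the kind of judgment technique alone cannot supply.

```python
# A toy illustration of outcome skew across two groups:
# the gap between their positive-prediction rates
# (demographic parity difference). All data is invented.

def positive_rate(preds):
    """Fraction of predictions that are favorable (1)."""
    return sum(preds) / len(preds)

# Hypothetical binary decisions (1 = approved) for two groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2/8 approved

parity_gap = positive_rate(group_a) - positive_rate(group_b)
print(parity_gap)  # 0.5 — a large gap that demands scrutiny
```

The metric is trivial to compute; deciding whether the gap reflects injustice or a legitimate difference is where mizan, not mathematics, does the work.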
Beyond Technical-Ethical Dualism
When I think about our ulama of the past, I’m struck by their deep understanding of knowledge integration. Take Ibn al-Haytham, for instance — here was a man who could write detailed treatises on optics while simultaneously reflecting on the spiritual implications of light as both a physical phenomenon and a metaphor for divine illumination. His work wasn’t just technically precise; it was existentially meaningful.
These scholars understood something we seem to have forgotten in our rush toward technological advancement: that knowledge isn’t merely about accumulation but about integration. When they taught astronomy, they weren’t just mapping celestial bodies — they were revealing the ayat (signs) of Allah in the cosmos. Their mathematics wasn’t divorced from metaphysics; their medicine wasn’t separated from moral philosophy. Each technical insight opened a window to deeper understanding of existence itself.
The brilliant irony here is that while we pride ourselves on building “neural networks” and “deep learning” systems, we’ve somehow managed to make our approach to teaching them remarkably shallow. We’ve created artificial neural networks inspired by human cognition, yet our education about them often lacks the very depth and interconnectedness that characterizes human understanding.
This is why I propose “integrated AI literacy” — not as another checkbox in our curriculum, but as a fundamental reimagining of how we approach AI education. Think of it as applying the wisdom of our intellectual tradition to the challenges of our technological present. Just as your mother teaching you to cook isn’t merely about measurements and techniques but about understanding the barakah (blessing) in food and the adab of sharing it, our AI education should encompass both technical mastery and ethical wisdom.
This integration manifests in several interconnected ways:
- When teaching neural networks, we explore not just their architecture but their epistemological implications. What does it mean that a system can “learn”? How does statistical pattern recognition relate to human understanding? The technical details of backpropagation become more meaningful when contextualized within larger questions about knowledge and consciousness.
- In discussing data ethics, we draw upon Islamic principles of amanah and mas’uliyyah not as abstract concepts but as lived realities. The Prophet ﷺ said, “Every one of you is a shepherd and is responsible for his flock.” How does this principle transform our understanding of data stewardship? What does responsible AI development look like when we truly internalize this perspective?
- Our exploration of AI bias becomes richer when viewed through Indonesia’s unique cultural lens. Our archipelago’s diversity isn’t just a demographic fact — it’s a living laboratory for understanding how different worldviews interact. When we discuss bias in AI systems, we’re not just talking about statistical skew but about the challenge of building systems that respect and reflect our cultural complexity.
More Questions, Deeper Answers
As we advance in AI technology, our questions seem to evolve from the technical to the existential. We’ve gone from “Will robots take our jobs?” to “Will AI systems preserve our values?” — though I suppose being replaced by a robot and losing our values could be considered equally unsettling, just on different metaphysical planes.
The pattern in AI education is telling: we’ve progressed from teaching simple if-else statements to exploring complex neural architectures, from supervised learning to reinforcement learning, from narrow AI to contemplating artificial general intelligence. Yet our ethical frameworks haven’t quite kept pace. It’s as if we’re building a spaceship while still using the moral compass of a bicycle — technically impressive, but perhaps not quite suited to the journey ahead.
This brings me to a realization that feels both obvious and essential: the way we educate about AI inevitably shapes the AI we create. When we teach AI primarily as a technical discipline, we shouldn’t be surprised when we get AI systems that excel at technical tasks but stumble on basic human values. It’s rather like teaching someone to read without teaching them to understand — technically proficient, but missing the essence.
For Indonesia: At the Crossroads of Wisdom and Innovation
For Indonesia, this moment in technological history presents a unique vantage point — not merely as observers of the global AI revolution, but as inheritors of an intellectual tradition that offers insights into the very nature of intelligence and learning. Our positioning in the Global South isn’t a disadvantage to overcome, but rather a perspective to be embraced. We stand not at the periphery of technological innovation, but at the confluence of multiple epistemic traditions that might just hold the key to developing more holistic approaches to AI.
Consider our intellectual heritage: centuries before the term “integrated learning” became fashionable in Western academia, our scholars were already practicing what we might call “epistemological tawhid” — the unity of knowledge. When scholars like Al-Ghazali wrote about the classification of knowledge, they weren’t creating artificial silos but revealing the inherent connections between different domains of understanding. This isn’t just historical trivia; it’s a living intellectual tradition that offers lessons for our contemporary challenges in AI education.
What makes our position particularly significant is our experience in navigating multiple knowledge systems. In Indonesia, we’ve long practiced what anthropologists might call “epistemic flexibility” — the ability to move between different ways of knowing without losing our grounding. We understand, viscerally, that there are multiple valid ways of approaching truth, whether through the rigorous logic of computer science, the spiritual insights of tasawwuf, or the practical wisdom of local traditions.
This positioning offers us what Imam Al-Ghazali might have called a furqan — a criterion for distinguishing truth from mere utility. But it’s more than that. Our tradition has always emphasized the integration of the right knowledge (‘ilm) with wisdom (hikmah), understanding that they are as inseparable as the body and soul. In the context of AI education, this traditional understanding takes on renewed relevance, offering a framework for developing AI systems that are not just technically sophisticated but ethically grounded and culturally aware.
Perhaps most importantly, our perspective from the Global South allows us to ask questions that might not occur to those working within the dominant technological paradigms. When we think about AI development, we’re naturally inclined to ask not just “How can we make this more efficient?” but “How can we make this more just?” Not merely “How can we scale this technology?” but “How can we ensure this technology serves all of humanity, not just its most privileged segments?”
I’ve often heard software engineers say, “In tech, we often solve the wrong problem perfectly.”
The statement carries echoes of Al-Ghazali’s critique of the philosophers of his time — technical excellence without wisdom is like having a powerful engine with no steering wheel. Or as we say in Indonesian, “seperti kapal tanpa kompas” — like a ship without a compass. Though in our case, the ship might be powered by neural networks and running on Nvidia processors, but still fundamentally lost.
This brings us to a crucial question: how do we navigate these waters? The answer, I believe, lies not in choosing between technical expertise and ethical wisdom — a false dichotomy that itself reveals our modern tendency to fragment knowledge. Instead, we must recognize their fundamental inseparability. Like the dual nature of light — both wave and particle — true AI education must embrace both technical rigor and ethical wisdom. This duality isn’t a compromise; it’s a completion.
The Prophet ﷺ said, “Wisdom is the lost property of the believer.” There’s something profound in applying this hadith to our current technological moment. In our quest to advance AI education, we might just rediscover some “ancient” wisdom that proves surprisingly relevant to our most modern challenges. Though I suspect the debug console errors will remain as cryptic as ever — perhaps some mysteries, like certain ayat al-mutashabihat, are meant to keep us in a state of perpetual contemplation.
Insya Allah, in the next post, we’ll explore practical strategies for implementing this integrated approach to AI education in Indonesia. Though given the pace of AI advancement, we might need to update our strategies before I finish writing — a challenge our traditional scholars, with their timeless insights, never had to face. Their books typically remained relevant for centuries; we’re lucky if our technical documentation survives six months without deprecation notices.
Note: As with any good machine learning model, this exploration of AI education’s dual genealogy raises more questions than it answers. But then again, if your understanding is giving you perfect answers, it’s probably overfit to your existing beliefs. And in matters of wisdom and technology, a little underfitting might be exactly what we need.
References and Further Reading
On Educational Technology
Wong, G. K. W., Ma, X., Dillenbourg, P., & Huan, J. (2020). “Broadening artificial intelligence education in K-12: Where to start?” ACM Inroads, 11(1), 20–29. Particularly relevant for its discussion of AI education for primary and secondary school levels.
Holmes, W., Bialik, M., & Fadel, C. (2019). “Artificial Intelligence in Education: Promises and Implications for Teaching and Learning.” The authors provide an excellent overview, though their optimism occasionally outpaces their evidence — a common affliction in our field, I’ve noticed.
On Islamic Educational Philosophy
Al-Ghazali. (2010). “The Book of Knowledge” (Kitab al-’Ilm from Ihya ‘Ulum al-Din). Trans. Kenneth Honerkamp. One might say Al-Ghazali was discussing the importance of explainable AI centuries before we invented it — though he was more concerned with explaining human intelligence to humans.
Al-Attas, S. M. N. (1980). “The Concept of Education in Islam.” Still remarkably relevant four decades later.
Patrizi, L. (2020). “Chapter 22: The Metaphor of the Divine Banquet and the Origin of the Notion of Adab.” In Knowledge and Education in Classical Islam. Leiden, The Netherlands: Brill. https://doi.org/10.1163/9789004413214_024. More on the concept of adab from a historical perspective.
On How Classical Islamic Scholars Were Also Philosophers
Ishaq, U. M. (2020). Filsafat sains: Menurut Ibn al-Haytham. Prenada Media.
Personal Reflections and Field Notes
I’ve also drawn from my experiences working in AI development and Islamic philosophy, though as any good scientist knows, personal experience makes for interesting hypotheses but unreliable conclusions. Still, these experiences have shaped my understanding of how theoretical frameworks manifest in practice — or sometimes, more instructively, how they don’t.