In late 2022, decades' worth of AI progress suddenly entered the public space in the form of ChatGPT. Those who had been tracking AI trends didn't find the technology itself that surprising, since shock had become routine in the emerging generative AI landscape – new art capabilities, even claims of AI agents with feelings. ChatGPT moved the conversation from "something we hear about" to "something we can play with". The result is a new hype landscape that will dominate 2023. For those in higher education, it has all the vibes of the web 2.0 and MOOC hype cycles rolled into one. Please update your buzzword bingo cards.
I’m writing a series on the threat higher education faces and why I think its future is one of substantive, dramatic, and systemic change. However, it’s not AI. There is a range of factors related to the core of what educators do: discover, create, and share knowledge. AI is part of it – and in the future may be the entirety of it. Short term, however, there are other urgent factors.
A bit of background
Over the last 20+ years, like many of my colleagues, I’ve been involved in the digitization of higher education. My engagement has been at four stages:
- Networked learning and connectivism, which then led to
- Massive open online courses (MOOCs), which then led to
- Learning analytics, and over the last five years,
- Artificial intelligence in education.
On point 4, we’ve now hosted three annual conferences on AI and learning, started an AI leadership and institutional capacity building organization, and a weekly(ish) newsletter on sensemaking, AI, and learning (SAIL). Recently, I’ve been interviewing leaders in universities, big tech, startups, and non-profits regarding what’s happening in education and the future implications of those patterns of change. These posts will share the nature of these discussions.
My perspective is that AI is an absolute systems-level threat to education – in terms of creation, teaching, assessment, and research. ChatGPT, however, is not that threat. More progress needs to be made, and Google is the most likely company to drive things forward, as they have a far more sophisticated AI portfolio than any other technology company. They have been slow to reveal it to the general public, but they seem to be applying responsible AI principles and practices to the release of LaMDA and related tools. While rumours swirl that Google is “freaking out”, responsibility is the stated undercurrent in their logic. Even the usually non-humble CEO of Google acquisition DeepMind now urges caution, warning that AI is “on the cusp” of being able to make tools that could be deeply damaging to human civilization. AI is a threat to society and to higher education. Before we dive into that, however, a framework for how we consider those and other threats is needed.
As goes information, so go universities
I’ve long been fascinated with the question of “what is the core change?” that we are experiencing as a society as we digitize. If we can understand this core change, we can move from discussing what is changing to planning for what we are becoming. McNeely and Wolverton’s Reinventing Knowledge remains a key guide in my thinking (a dash of McLuhan never hurts, of course). My core assertion is that what we do with information is the most important aspect in determining the shape of knowledge institutions like schools and universities.
What are we doing with information? Well, we’re co-creating it. We’re building it in digital networks. We’re politicizing it. We’re sharing it in real time. Information today is digital, networked, and (often) transparent. Increasingly, it will be AI-generated. Universities of the future will mirror the affordances of these digital information opportunities.
What does this mean? Many researchers and innovators have been playing in this space for decades, driving universities to see the internet as a space for shared creation and engagement – a global network of learners – rather than focusing primarily on local classroom instruction. In this view, knowledge is networked, and learning is a process of growing and pruning those networks. Nowhere is this more pronounced than in the current angst to find a way to either ban or incorporate AI tools into daily classroom learning.
The key question for university leaders to answer is: “If we were to create a university with today’s tools, technologies, and cultural values, what would it look like?” The answer would look very little like the system we have now, and few leaders are prepared to seriously confront the implications of this question.
About a decade ago, I delivered a talk to a group of senior university leaders about the structure of tomorrow’s universities. At the end of the talk, the president stated “I have no doubt that you are right. I think future universities will look like what you’ve shared. But if I implemented those changes today, I’d be fired”. Being early is as bad as being wrong.
That introduces the need to consider how systems change and the implications of slow systemic responses.
How do systems change?
My core thesis is that the future of universities will be determined by what we as a society are able to do with information. Technology creates new ways of being and acting that existing systems are slow to incorporate. Consider the pedagogical model prevalent in most university courses: centralized expert instruction, learning activities shaped by the instructor, and assessment to determine the degree to which learners have mastered course concepts. The entire model is antithetical to digital networks. While information changes rapidly and requires constant updating, education still teaches to and assesses individual memory rather than equipping learners to navigate nuanced, ambiguous, and complex landscapes. Most instructors and learners, unfortunately, do not have the skills and abilities to create a different system – one that reflects these global information networks.
Why are instructors and students not able to incorporate new technologies to make fundamental changes? Largely it’s due to a unique attribute of higher education and schools in general: they are networks of networks and systems of systems. Changing only one small part fails to produce change across the entire system. Shane Dawson addresses these challenges under the framework of complexity leadership. Innovation needs intentional support and protection. When an institution innovates systemically, the results are clear and impactful, as shown by Paul LeBlanc’s SNHU.
We’re left with a challenging position, similar to Kuhn’s argument in The Structure of Scientific Revolutions: anomalies accumulate in systems until a complete shift in thinking is required to incorporate those anomalies into new frameworks. Or similarly, slightly torturing Gould’s biological framework of punctuated equilibrium – long periods of stability interrupted by short bursts of rapid change – systems can change dramatically even when stability feels permanent.
Theorists to guide
Four theorists are important for unpacking systems change and the resistance of systems to it. The first (Perez) describes factors driving change, the next two (Schumpeter and Christensen) describe how change occurs at a systems and social level, and the last (David) describes how technology, at first, fails to produce change. Universities are not forced to fully confront market forces due to a combination of regulation, social status (the social value of a degree), and broader integrated systems (degree-to-employment pathways, student loan funding).
- Perez’s framework of economic-socio-technical innovation doesn’t directly speak to the dynamics of innovation and systems change so much as to society-level transformations. In her model, economic and social dynamics drive technological revolutions that in turn transform life.
- Schumpeter’s creative destruction describes how new creations destroy the old in a cycle of perpetual renewal, with economic models “incessantly destroying the old one, incessantly creating a new one”.
- Christensen argued that disruptions occur in markets that established organizations often ignore until it’s too late. As a result, disruptive innovation results in market entrants “displacing established competitors”.
- David argues that systems may incorporate new technology, but in the first instance, the adoption “does the work of the old” and doesn’t produce dramatic systems change.
The core assertions:
My core assertion, to be unpacked in future posts, relates to the inability of universities to respond to and absorb broad systemic change pressures because they are part of a network of networks and a system of systems. This positioning means that technologies that enable new ways of interacting with information (Perez, Schumpeter, Christensen), even good ideas that could disrupt teaching, learning, and research, are not adopted in such a way as to produce systems change (David).
The system of education itself is unable to meet the challenge of the moment, which is to recreate itself to align with the creation, production, and sharing affordances of digital information. Models that suggest change comes from outside don’t apply in education, due to regulatory oversight and the protection the system enjoys through social value and integrated pipelines to employment. As I’ll detail in the next post, however, the change threats marshaling outside the gates of higher education suggest that the system's defenses are weakening and being breached.