September 25, 2024
Shockingly (to myself), I’ve fallen into a research project over the past few months. In the process, I learned a small fact that I just can’t stop thinking about, and it connects to my dissatisfaction with the takes I’ve seen from educators in response to AI. The connection is oblique, but bear with me.
In earlier versions of the Western interstate system, the seas were managed by the freedom-of-the-seas doctrine, which tried to balance nations’ coastal sovereignty against the use of the oceans for trade. In the 18th century, this doctrine was upheld by a principle referred to as the cannon-shot rule, under which the territory of a state could be said to extend only one coastal cannon-shot into its adjacent oceans. This crude technique provided an available technical measurement for “slicing” up the Earth within the freedom-of-the-seas doctrine of the time.
A fascinatingly human detail is that nobody apparently considered that cannons would eventually be made to shoot farther, until they did. As Tirza Meyer writes:
As technology evolved – including even the simple fact that cannon range increased – the necessity to establish exactly how far territorial waters reached had become a pressing legal issue by the early twentieth century. (43)
The cannon-shot rule as an interstate agreement, then, required a fictive reliability: a sense that the cannon’s range was a stable, fixed measure that would hold into the future.
The story relates to Paul Virilio’s theory of the accident. The accident here refers to the inherent unreliability of technological progress, the reality that technical development can only be managed up to a point. As Virilio explained in a 1998 interview:
You cannot separate the accident from reality. The accident is merely the other face of substance, and Aristotle defined it already as such. According to Aristotle, reality is a mixture of “substans” (i.e. what is well established, from the Latin “substare”), and of “accidens” (what “falls into,” from “accidere”). He characterized “substans” as absolute and necessary, and “accidens” as relative and fortuitous. Consequently, reality is made up of these two dimensions. As soon as something is well established (a substance), it is necessarily accompanied by something unreliable, which can trigger off forces difficult to contain at any moment. Technology can only progress in a struggle against the accident.
For Virilio, these technological accidents used to be more localized. But, given the interconnected nature of the modern world-system, technical accidents have become ever more “integral,” that is, globally implicating.
What’s interesting to me, here, is that the technical accident relies on our limited knowledge of the future and our inability to predict technological development. Moreover, the questions we do ask about technological development are often the wrong ones, because we can’t extrapolate present conditions into the future. This was clearly the case for the establishment of the cannon-shot rule. It offered an available measurement but couldn’t predict its integral accident: the unknown technical progress of the cannon and its effects on legal interstate relations.
This inability to reckon with and ultimately face unknowingness is, in my opinion, the central problem of current conversations about AI in higher education. We simply don’t have a roadmap here. Some would like to retread the well-worn territory of how we responded to earlier technologies, which is endearingly human but wrong. Others would prefer not to engage with the issue at all.
But more and more I feel like higher ed isn’t asking the right questions, either about generative AI’s substance or its possible accidents. Environmental concerns are certainly an obvious integral accident, not only of AI but of networked technologies in general. But what other accidents can we try (to the extent that we can) to imagine?
It’s worth noting that Virilio was explicitly considering the possibility of AI in thinking through these questions. In the same interview, he says:
Well, it is true that the fifth generation computers will not only be able to learn but also to bring forth other computers. What bothers me most in this idea of self-learning computers is the closed circuit character of these systems. The world of computing generally is plagued by this closed loop problem, which is what makes it so dangerous in the hands of a totalitarian system. In order to avoid this “Gleichschaltung,” as the Nazis called it, it is necessary to structure new computer systems as open systems.
Even this comment, from 1998, suggests something that academic institutions are, predictably, ignoring: the proprietary and closed nature of the AI systems that we seem very likely to build, in one way or another, into the foundations of learning in the years to come. This may not pose a clear problem now, but there is always an accident around the corner.
Are we treating AI as the evolving technology that it is? Or are we spending our time building “cannon-shot rules” around ultimately unstable and undetermined assumptions? I’m starting to think of “cannon-shot epistemology” as a name for the ways we build knowledge and infrastructure on a human but incorrect assumption of stable technology.
So, what does this look like? Here’s an example. When folks in higher ed conversations do try to imagine the future of AI, they demonstrate cannon-shot epistemology in observable, specific ways.
A recent opinion piece by Ray Schroeder, published by Inside Higher Ed, is a case in point. It starts well enough, acknowledging that AI is outpacing higher education but that we still need to find ways to imagine its future in order to respond to it effectively. Good. I agree. This, so far, is exactly right.
But where does Schroeder’s imagination take him?
I see us replacing midlevel administrators with intelligent agents that can efficiently and effectively make decisions that are thoroughly documented and adaptive to changing goals and outcomes.
One paragraph later:
Startling as it may seem to some, I can see these advanced models, such as those with Ph.D. reasoning, filling adjunct faculty posts while overseen by human professors. The long-running OpenAI-funded Khanmigo project demonstrates that key teaching, tutoring and personalization skills can be delivered by generative AI.
Putting aside the obvious and offensive groundwork being laid against non-tenure-track faculty and administrative staff, why is it that everyone’s imagination seems to exhaust itself just before AI could possibly affect them?! Elsewhere in this exact same article, Schroeder discusses how “Multiple societies have been formed and fascinating communities have been built by intelligent agents.” He’s talking about Minecraft, which is embarrassing, but the point is this: he can imagine AI creating societies, but he can’t imagine AI developing one half-inch past firing contingent labor across the university and (gasp) affecting tenured faculty. But why? Why would AI and its integration into higher education stop there?
This is cannon-shot epistemology. Schroeder (and the many people who produce similar self-serving arguments) relies on AI staying exactly within the boundaries he assigns it, in the same article in which he admits that the technology is outpacing us. To be clear, I am not arguing that one specific outcome will occur. Rather, I am just asking us all to admit that AI and its integration into our lives will develop in unpredictable ways. With this in mind, maybe we shouldn’t normalize the possibility of educators losing their jobs to it. As a thought.