Are we asking the right questions about AI?

This article is the fifth and final in a series of articles on helenair.com discussing Carroll College faculty perspectives and experiences with AI. 


When I was a first-year student living in a college dorm, word got around about Danny, a guy down the hall who would write your Sociology or Economics essay for a fair price. I vaguely remember wondering if Danny was making good money. Nearly 40 years later, a generative artificial intelligence platform (pick your favorite; there are dozens) plays the role of Danny, and its efficiency, quality, and low cost have driven modern Dannys out of business. My point in beginning with Danny’s side hustle is, very simply: We’ve been here before.

Scottish essayist Thomas Carlyle and American media scholar Marshall McLuhan both expressed concern that the tools we use transform not just how we work and how we relate, but more fundamentally who and what we are. The thing is, Carlyle said this in 1829, and McLuhan said something very much like it in 1964. British economist John Maynard Keynes lamented the inevitable economic disruption caused by new technologies. He said this in 1930. Stephen Hawking told the BBC that the development of “full artificial intelligence” could mean the end of the human race as we know it. He said this much more recently, in 2014.

So we’ve (maybe) been here before. Sometimes we search for answers before we have the right questions to ask. With AI seemingly just the most recent in a long line of technological developments that occasioned sightings of the falling sky, what are the questions we need to ask, this time around? One question is: “How do we perceive generative artificial intelligence?” So much of how we think about AI, and the extent to which we think we’ve been here before, depends on how, after all, we perceive it.

If we see AI as a tool, then we might see it as we do the chainsaw, the calculator, and the dental drill: instruments that make our lives easier by replacing skills that (given the new tool) we don’t really need anymore. We might see AI as something more akin to the printing press, the cotton gin, or the microprocessor: tools, yes, but tools that also occasioned changes so fundamental in the way we humans relate to one another that we (in retrospect) call those moments “revolutions.” We might (as so many do) see AI as a collaborator of some sort, which evokes images of colleagues, personal assistants, Danny-types, and, as a prominent tech leader put it recently, hungover interns.

In the prior articles in this series, my colleagues have pointed to just a few of the consequences of AI. In this, the last article of the series, I have no such aspiration. I just want to know which questions to ask.

If I’m pessimistic about AI, it’s because the cart of technological advancement in AI has been put before the horse of identifying and proactively addressing its environmental, societal, economic, educational, relational, and mental – that is to say human – impacts. (We’ve been there before, too.)

If I’m hopeful about AI, it’s because I believe that the term “full” is doing a lot of work in Stephen Hawking’s warning. And it’s because we’ve faced technological advancements before, and we’ve learned (or we’re still learning) how to reshape our tools even as they’re reshaping us. If I’m hopeful about what will be known in future history books as the AI revolution, it’s because we’re finally getting around to figuring out which questions we need to be asking.

I collaborated with ChatGPT (unpaid version) as I wrote this article. I asked ChatGPT to list instances of and gather quotations about technological revolutions, which I then verified. I also asked ChatGPT to provide other information, which I ended up not really using, and I resisted its repeated, enthusiastic offers to do more for me than I wanted. A copy of my full conversation with ChatGPT is available upon request at ahansen@carroll.edu.


Alan Hansen, Ph.D., is a professor of Communication Studies and Director of the Communication Center at Carroll College.

Helena IR: Are we asking the right questions about AI?