The phrase “change is inevitable, growth is optional,” attributed to John C. Maxwell, reflects my experience over the past six decades. Advanced technologies have relieved humans of tedious tasks, both physical and mental, but change is often met with resistance. For more than 200 years, slide rules were used for quick computation. Hand-held electronic calculators became available in the early 1970s. The engineering college where I was then an assistant professor of chemical engineering decided to ban their use during exams. These calculators were thought to provide an unfair advantage for the few students who owned them. Three years later, electronic calculators became a common computational tool for students and faculty and the ban was dropped. Today I display my slide rule and book of logarithm tables in my office as reminders of the “good ole days” of computation.
The responsibility of engineers and scientists is to try to anticipate the unintended consequences of any innovation. But the fear of potential negative consequences sometimes slows innovation. The introduction of bicycles in the late 19th and early 20th centuries afforded women (and men, of course) unprecedented mobility and freedom, which some feared could lead to promiscuity and moral decay. Bicycles ultimately contributed positively to the liberation of women and their equal rights, but adopting this new technology required overcoming the fear of social disruption.
I recently experimented with GPT-4 and was pleasantly surprised in most cases. Asking the right question—the “prompt”—appears key to achieving the desired result. GPT-4 usually produced a good first draft in response, but required human editing and fact-checking to finalize the work. (Of course, this is also true when interacting with a human expert.) Importantly, the technology is able to synthesize and integrate text and numerics, and then summarize the information requested. It is also an excellent editor. I asked GPT-4 to edit an earlier draft of this essay and the recommended changes were generally about word choice.
As a former engineering educator, I see value in generative AI as an aid to student learning. It could:
- Edit drafts of papers and reports. This application would be especially useful to international students and researchers writing in a non-native language.
- Create a first draft of a report that integrates several topics, including a mix of technical and non-technical areas.
- Test for inadvertent omissions of information.
- Integrate concepts of ethics and social sciences into problem-solving, design, and innovation.
- Perform technical calculations.
To probe the final point, I prompted GPT-4 to solve two problems a teacher might present to an undergraduate student in chemical engineering. The first was a straightforward determination of the heat capacity of a material using thermal data, and the second was a more complicated problem involving the flow of liquid through a pipe under turbulent conditions. In both cases, I gave GPT-4 an “A” grade, not only for getting the right answer, but also for clearly laying out the solution with the correct equations and converting the input data to consistent units.
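To give a sense of the first type of problem, here is a minimal sketch in Python of a textbook heat-capacity calculation based on Q = mcΔT. The numbers and function name are illustrative assumptions of my own, not the actual data from the prompt given to GPT-4; note the unit conversion from kilojoules to joules, the kind of consistency check mentioned above.

```python
def heat_capacity(q_joules: float, mass_g: float, dt_kelvin: float) -> float:
    """Return specific heat capacity in J/(g*K), given heat input Q in J,
    sample mass in g, and temperature rise dT in K (Q = m * c * dT)."""
    if mass_g <= 0 or dt_kelvin == 0:
        raise ValueError("mass must be positive and dT must be nonzero")
    return q_joules / (mass_g * dt_kelvin)

# Illustrative data: 2.09 kJ of heat raises a 100 g sample by 5.0 K.
q = 2.09e3  # J, converted from 2.09 kJ to keep units consistent
c = heat_capacity(q, mass_g=100.0, dt_kelvin=5.0)
print(round(c, 2))  # 4.18 J/(g*K), close to liquid water
```

A student could verify the result by hand in seconds, which is precisely the teacher's dilemma discussed below: the tool produces the answer whether or not the student understands the solution strategy.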
The above example also leads to a dilemma for the teacher: A student could answer these homework problems correctly without developing the solution strategy, missing the value of the assignment. The creativity of the teacher must come into play to ensure the student learns the basics behind the calculations. At the same time, generative AI could free up time to focus more on principles and concepts and less on routine exercises, thus liberating students and faculty to be more creative—the essence of engineering.
Another potential unintended consequence of generative AI is further inequity in education if it is only available to those students who can afford the license to use it. The potential benefits of this technology are perhaps greatest for underprivileged students. Accessibility should be a key goal.
I am excited about the future of generative AI and how it can improve engineering education and practice. As this technology evolves and we learn more about it, greater benefits to society will result. Guardrails must be in place to limit the spread of misinformation, bias, propaganda, and other negative byproducts of mass connectivity. Generative AI has the promise to enhance human creativity and effectiveness if used wisely and distributed equitably. We should embrace this change—it is inevitable—and use it to grow as individuals while guarding against its misuse. This is the challenge.
John L. Anderson
John L. Anderson is the president of the National Academy of Engineering. He previously served as president of the Illinois Institute of Technology, provost at Case Western Reserve University, and dean of engineering and chair of chemical engineering at Carnegie Mellon University. From 2014 to 2020, he served on the National Science Board.