Back in 2009, the renowned science fiction author Ursula K. Le Guin appeared at an event in Portland to discuss her short story “The Ones Who Walk Away from Omelas.” In the story, the utopia of Omelas exists only because of the abject misery of one child kept in a dank basement closet.
Decades after its 1973 publication, Le Guin told the audience, readers continued to ask her about it. Each generation, the author said, found the story a timely vehicle to consider whether the happiness of the many can justify the suffering of a few—often through the lens of the technology or the moral crisis of the moment.
One earnest attendee jumped up. She asked for advice on how to act in response to the story’s meditation on utilitarian philosophy. Le Guin shrugged, abdicating follow-up responsibility to her readers. I don’t know, she said: “I just wrote the story.”
The audience laughed. Le Guin went on to suggest that there may not be one answer, but that knowing enough to ask questions is the start of establishing a personal framework of ethics.
Today, the Omelas story remains a powerful gateway for teaching ethics, particularly around the use of technology and whom it benefits. That’s why Calvin Deutschbein, an assistant professor of computer science at Willamette, puts the story on the syllabus of their “Ethics, Teamwork and Communication” class. The course is mandatory for undergraduates pursuing a degree in data or computer science at Willamette’s new School of Computing & Information Sciences. Graduate students in the school have their own requirement, a course called “Data Ethics, Policy, and Human Beings.” For Deutschbein, whose research focuses on computer security in hardware design, the goal is to help students develop systems of ethics that will guide their future careers.
This is essential work, especially in light of the tech industry’s poor track record of ethical decision-making. Consider how the political consulting firm Cambridge Analytica acquired the private Facebook data of tens of millions of users and used it to send microtargeted ads to voters during the 2016 US presidential election. Or how faulty data storage again and again puts consumers at risk for identity theft. Or how search engines can amplify hate and extremism, depending on how their algorithms and design features are tuned to exacerbate or curb such content. Or how, in 2023, forty-one states and Washington, D.C., sued Meta Platforms, alleging that it intentionally builds its products, such as Instagram, with addictive features that harm young users.
Ethical decision-making feels particularly urgent as generative artificial intelligence continues to take hold in our daily lives and power our technology, at a pace too fast for any emerging legislation or international protocol to govern its use. AI will change how people live and work—arguably more rapidly than at any other moment of technological advancement in human history. This will lead to transformative social change. Or upheaval, depending on your perspective.
So, what exactly is artificial intelligence? It’s an all-encompassing term for a field that combines computer science and large datasets to build machines programmed by humans to do smart things. Among the smart things AI is now capable of is teaching itself to learn and adapt. It does this by using algorithms and statistical models to analyze large sets of data and draw inferences from them—a practice known as machine learning. ChatGPT, for example, is a form of artificial intelligence that answers people’s questions and assembles information from their prompts by predicting, word by word, the most likely text to come next.
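To make “predictive text” concrete, here is a toy sketch in Python: a tiny next-word predictor that simply counts which word tends to follow which in a handful of made-up sentences. Real systems such as ChatGPT rest on the same predict-what-comes-next idea, but they learn their statistics with neural networks trained on vast amounts of text; the miniature corpus and function names below are invented purely for illustration.

```python
from collections import Counter, defaultdict

# A handful of made-up sentences standing in for training data.
corpus = (
    "the students study ethics . "
    "the students study data science . "
    "the professors teach ethics . "
    "the professors teach data science ."
).split()

# Count how often each word follows each other word (a "bigram" model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("students"))    # -> "study"
print(predict_next("professors"))  # -> "teach"
print(predict_next("data"))        # -> "science"
```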
“AI is a tool, like a saw or a hammer,” says Deutschbein, who teaches students how the tools work and how to think about the ethical minefields they represent. To give one example: Many AI systems use large language models trained in part on a dataset of nearly 200,000 pirated books; some authors have sued tech giants for the unauthorized use of the material. Among the pirated works used to train AI are dozens of Le Guin’s books and stories, including the Omelas story.
Students discuss who controls the tools and who benefits—or fails to thrive—from AI. They discuss how to be responsible stewards of the immense trove of data gathered by governments, health care companies, cell phone providers, and even grocery store loyalty programs. They consider accessibility and affordability. And they study what happens when AI programs make up answers with false information, a quirk of their predictive design known as “hallucinating.” There’s also plenty to discuss around the mining of lithium and cobalt—the raw materials that go into the hardware—and the carbon footprint of data centers.
The School of Computing & Information Sciences came into being in 2023 to house Willamette’s existing undergraduate programs in computing and data science and its master’s program in data science, as well as a new master’s in computer science. Its prevailing teaching philosophy is that big tech needs people who are comfortable asking and answering questions around ethics, says Jameson Watts, its dean. Watts—who studies the intersection of marketing and computer science—champions the mandatory ethics courses.
For undergraduates, he says, the course is an introduction to ethical dilemmas likely to arise in their professional lives. As an example, Watts cites facial recognition applications that don’t recognize darker-skinned faces as well as they recognize lighter-skinned faces. Students learn to understand the origins of such bias and how developers can combat it. For graduate students, studying ethics allows them to explore issues they may already have encountered in their careers. Even if students don’t become experts on AI ethics, Watts says, they’ll be able to “elevate its importance within an organization.”
The school aims to produce computer and data science graduates who can not only write code, design software, and analyze complex data, but can take a human-centered approach to such work. The school is positioning itself as a national leader in teaching ethics, a competitive advantage that Watts believes will set its graduates apart. And the curriculum is designed to make students highly employable at graduation, especially first-generation college students and others from backgrounds underrepresented in computer and data science. (Inclusion in the workforce is yet another ethical issue facing the industry.)
“If you want folks in the industry to be making decisions that are grounded in values of elevating diverse voices or addressing the issues of disadvantaged populations, then you have to make sure that diverse voices and disadvantaged populations get a seat at the table,” Watts says. “And the way they get a seat at the table is by getting this education and then getting into the job.”
There’s a breadth-versus-depth tradeoff to a mandatory ethics class: it leaves students with room in their schedules for one fewer advanced technical course. But “we have consciously made this choice, and we think it’s the right one,” Watts says. “There are myriad opportunities for further technical development after you leave our classroom, but almost no one out in the industry is asking you to think about or focus on ethics once you’re on the job.”
Discussions of ethics are central across the school’s curriculum, not only in the mandatory classes. When Professor of Computer Science Haiyan Cheng teaches algorithms, she talks about how the equations that undergird artificial intelligence are often opaque. To ordinary people without a background in computer science, they exist inside a black box. She asks her students: Can you justify what’s in the box?
She wants them to think about what biases might be a part of the algorithms they’re writing, and how their products will work in countries with different laws or cultures, including the European Union, which has stiff privacy constraints. Among other topics, her students also explore how an ethical approach may add to the cost of doing business. Cheng has designed a new course called “Computing for Social Good” and has embedded the lessons of that class in her other courses. “When I teach my machine-learning algorithm, it’s not about just getting the result, it’s about giving it justification.”
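To give a flavor of what “justifying what’s in the box” can mean, here is a minimal, entirely hypothetical sketch: a hand-built scoring model whose every weight is written out and can be questioned. The contrast Cheng draws is with large AI models, whose millions or billions of learned parameters make that kind of line-by-line justification far harder. The features, weights, and scenario below are invented for illustration, not taken from any real system.

```python
# A hypothetical loan-scoring model, used only to illustrate interpretability.
# Because the weights are written out explicitly, anyone can inspect them and
# ask whether each one is justified -- the opposite of a black box.
WEIGHTS = {
    "income_thousands": 0.4,   # each $1,000 of income adds 0.4 points
    "years_at_job": 1.5,       # job stability adds points
    "missed_payments": -6.0,   # each missed payment costs 6 points
}

def score(applicant):
    """Weighted sum of the applicant's features."""
    return sum(weight * applicant.get(feature, 0.0)
               for feature, weight in WEIGHTS.items())

def explain(applicant):
    """Show how much each feature contributed to the final score."""
    for feature, weight in WEIGHTS.items():
        contribution = weight * applicant.get(feature, 0.0)
        print(f"{feature:>18}: {contribution:+.1f}")
    print(f"{'total score':>18}: {score(applicant):+.1f}")

explain({"income_thousands": 55, "years_at_job": 3, "missed_payments": 2})
```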
Other professors, such as Hank Ibser, a statistician and data scientist, focus on ethics through a historical lens. He works with graduate students to look at how power structures evolve during the data life cycle—from how information is gathered, to the source of the information, to the mathematical models at the heart of generative AI.
“Every phase of that is subject to human bias,” Ibser says. “We think of the mathematical process as being objective, and we feel like the methods are trying to uncover some truth. And that’s problematic, because there’s a human being deciding what question to ask and how the data is gathered.”
This is particularly important as ChatGPT becomes a household name. One ChatGPT rival, Anthropic’s Claude, shares a disclaimer with users that “it may occasionally generate incorrect or misleading information, or produce offensive or biased content.”
If artificial intelligence is biased, Ibser says, that’s because it’s based on fallible human inputs at almost every iteration of the technology.
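Ibser’s point about bias creeping in at the data-gathering stage can be made concrete with a small, made-up simulation: in the imaginary population below, two groups hold views in roughly offsetting proportions, but a survey that happens to reach one group far more easily than the other comes back with a lopsided answer. The scenario and numbers are invented purely to illustrate the idea.

```python
import random

random.seed(0)  # make the made-up example reproducible

# An imaginary population: two equal-sized groups with different rates of
# support for some proposal (about 60% in group A, about 40% in group B).
population = (
    [("A", random.random() < 0.6) for _ in range(5000)]
    + [("B", random.random() < 0.4) for _ in range(5000)]
)

true_support = sum(supports for _, supports in population) / len(population)

# A biased collection method: group A is three times as likely to be reached.
def reached(group):
    return random.random() < (0.9 if group == "A" else 0.3)

sample = [supports for group, supports in population if reached(group)]
survey_estimate = sum(sample) / len(sample)

print(f"True support:    {true_support:.1%}")     # close to 50%
print(f"Survey estimate: {survey_estimate:.1%}")  # skewed toward group A's view
```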
Full disclosure: AI helped write this article. As a journalist, I record most interviews and upload them to an automated transcription service I’ve used for about six years. The software has gotten much speedier in recent months, because of advances in predictive AI; it now transcribes my interviews within minutes. The transcripts aren’t perfect—I must check the accuracy of each quote I use against the recording. For example, Watts told me that the university wants to cultivate “flexible thinkers,” which the software interpreted as “flexible fingers.” But the software is very good, especially when the audio quality is high, and not having to transcribe a whole interview saves me valuable time.
Recently, the transcription software began offering to summarize my interviews. At first, I refused to click on the summaries, an AI feature made possible by rapid advances in predictive language models. The summaries felt like cheating, although I couldn’t quite explain why. And I worried that AI might prove better than I am at summarizing my own interviews. Finally, though, I clicked, with the reasoning that if I was writing an article about AI, I should at least understand how it might affect my own career. The good news for me? The summary was mediocre. AI is not about to take my job—not yet.
I found the software far more skilled at generating a list of topics covered during the interview. This, I reasoned, could be useful as a reminder of topics to include in a story, or details to follow up on with my sources as I pieced it together.
I shared the summaries with the computer and data scientists I interviewed for this piece, something I wouldn’t normally do as part of my workflow. But I wanted to talk through the ethical implications of using AI in my own work, to better understand how students would study the issue. I also thought it would be interesting for professors to see my process and consider how it might apply to other knowledge-based or creative professions once viewed as relatively immune to automation, such as law, screenwriting, architecture, art, and software development.
I told them that the summaries of the interviews felt incomplete, a flickering facsimile of a human conversation with little bits and pieces floating away uncaptured, like digital motes of dust. The computer program, I realized, failed to understand what was going unsaid in each interview. The human brain naturally fills in all sorts of gaps in conversation, grasping the connections in the unsaid in a way that a predictive text machine cannot—at least not yet.
When I say this to Cheng, she nods over our Zoom call. “The machine literally is generating,” Cheng says. “It’s not writing with passion.”
In fact, Sam Altman, the CEO of OpenAI, which developed ChatGPT, has said he has “deep misgivings” about a vision of the future where “everyone is super close to AI friends.” It is among the reasons OpenAI made a deliberate choice not to assign a human name to its machine.
Cheng points out that AI can help us to replace or automate repetitive, mundane work so that humans can focus on more creative and more complicated matters. “I want to bring out the good side, how much technology has made our life easier and helped us to improve productivity and efficiency,” she says. “The challenge is in balancing benefit and potential risk. I want my students to know not just, ‘Oh, be very careful,’ but also not, ‘Oh, let’s use it without any consideration.’ ”
As a science fiction writer, Le Guin predicted the consequences of technical advances that would change humankind. In an oft-quoted 2014 speech at the National Book Awards, she warned of hard times to come as big technology companies further commodified artistic aims: “We’ll be wanting the voices of writers who can see alternatives to how we live now, can see through our fear-stricken society and its obsessive technologies to other ways of being, and even imagine real grounds for hope.”
The same can be said for all of us, not just writers. We all benefit when Facebook hires data scientists who understand how to raise ethical concerns on the job, when TikTok employs designers who’ve studied the consequences of the addictive nature of modern technology, when Amazon has software developers who think about the carbon footprint of their goods and how they’re delivered, when YouTube executives consider how and why their recommendation algorithms direct people toward specific content, and when prompt engineers with Microsoft weigh the inherent bias within AI systems. Because if we humans are fallible, the technology we produce is, too.
But that technology also has the capacity to make the world a better, more ethical place—one that, to paraphrase Le Guin, envisions hopeful alternatives to how we live now.
Digital Ethics Conversation Starters
We asked Willamette professors in the School of Computing & Information Sciences to share ethical questions they might pose to students. How would you answer them?
1. How can AI make us happy?
2. What ethical considerations should be taken into account when creating an AI system for decision-making?
3. Who should own/have access to information about you (address, political party, search history, purchasing history, medical records)? What rules should exist around data storage for this information, and what should happen if there is a breach? Whose fault is it?
4. How can we promote positive behavior and discourage negative behavior in the AI age?
5. Who does and who should make ethical decisions about the use and proliferation of AI?
6. If we slow down/regulate AI in the US in response to ethical issues, will other countries progress faster?
••
Erika Bolstad is a journalist in Portland. Her book, Windfall: The Prairie Woman Who Lost Her Way and the Great-Granddaughter Who Found Her, is a finalist for a 2024 Oregon Book Award. For the prior issue of Willamette, she wrote about the bar exam and Senator Lisa Murkowski JD’85.