Illustrations by Stephanie Dalton Cowan

The Future of AI

Artificial intelligence is poised to transform society. How do we develop it safely?

When the company OpenAI released an artificial intelligence program called ChatGPT in 2022, it represented a drastic change in how we use technology. People could suddenly have a conversation with their computer that felt a lot like talking to another person, but that was just the beginning. AI promised to upend everything from how we write programming code and compose music to how we diagnose sick people and design new pharmaceutical remedies.

The possibilities were endless. AI was poised to transform humanity on a scale not seen since the Internet achieved wide-scale adoption three decades earlier. And like the dot-com craze before it, the AI gold rush has been dizzying. Tech companies have raced to offer us AI services, with massive corporations like Microsoft and Alphabet gobbling up smaller companies. And Wall Street investors have joined the frenzy. For instance, Nvidia, the company that makes about 80 percent of the high-performance computer chips used in AI, hit a market capitalization of $2 trillion in March, making it the third most valuable company on the planet.

But amid all this excitement, how can we make sure that AI is being developed in a responsible way? Is artificial intelligence a threat to our jobs, our creative selves, and maybe even our very existence? We put these questions to four members of the Boston College computer science department—professors William Griffith, Emily Prud’hommeaux, George Mohler, and Brian Smith—as well as Gina Helfrich ’03 of the University of Edinburgh’s Centre for Technomoral Futures, which studies the ethical implications of AI and other technologies.

This conversation has been lightly edited for clarity and length. Helfrich was interviewed separately, with her comments added into the conversation.  


We constantly hear about the wonders of AI, but what questions should we be asking about it? 

William Griffith: If you think back to social media, it actually changed the way we operate and interact. I’m wondering how AI will either extend that or go in a different direction. We should look at AI from many ethical perspectives, such as justice, responsibility, duty, and so on. My sense is that that’s the way to think about most of the challenges confronting us, not only technologically but socially and environmentally.

Emily Prud’hommeaux: One of the big issues is going to be authenticity. When media, images, language, or speech are created through artificial intelligence, the results are getting so good that it’s difficult to know whether the product was produced by a human or by artificial intelligence. That’s one of the big things that people are struggling with right now—how to educate people so that they can tell the difference, because it’s going to get more difficult.

George Mohler: The question I find interesting is, is this an immediate existential threat or is that kind of overhyped? And if you look at the experts who invented this technology, they’re actually split. Some of them believe that in twenty years we could have artificial intelligence that’s smarter than humans. And then the other segment of AI researchers believe we’re very far from that. 

Brian Smith: One of the first things that came out was the ethics of how people are behaving with these things. How will students, schools, teachers, faculty members deal with a machine that can essentially just do your homework? The problem is, people were going, “AI is this new thing, and we’re going to be scared of it.” But the reality is, it’s really academic integrity that’s the issue. So there is kind of a value system around academic integrity that has to come in before we start thinking about the technical pieces of things.

Prud’hommeaux: I think most students are using ChatGPT to guide them. And I don’t think many students are wholesale copying text from ChatGPT and popping it in a Word document and submitting it to their class. But I have noticed that I can tell when something was written by ChatGPT because it sounds really dumb in some way. It sounds like it was written by a team of marketing executives.

So how do we promote academic integrity in the age of ChatGPT? 

Gina Helfrich ’03: I don’t know that professors and university leaders have a great answer yet. It’s all still so new. People are still being extraordinarily creative in the ways that they’re coming up with to use these tools. But the companies that created the tools didn’t have a clear vision of what they should be for in the first place. I don’t think that it’s helpful to assume that all students want to cheat on their essays. It’s more interesting to look at the reasons that students choose to cheat or plagiarize, as opposed to singling out AI as somehow special. That being said, there’s this feeling that to stay on the cutting edge, universities should welcome the use of generative AI [which can be instructed by a person to create original pieces of writing, videos, images, etc.]. Yet, so much of what happens in the classroom is still left up to the individual instructor, and some instructors will say, “Yeah, go to town, use generative AI. We don’t mind.” And others will say, “Absolutely not.” It must be very interesting from a student point of view to have polar opposite expectations and experiences around these tools, and I genuinely don’t know how they’re navigating it. My sense is that university leaders are really scrambling to try to figure out what line they should take on these tools.

Photo illustration of William Griffith


William Griffith
Associate Professor of the Practice in the BC Computer Science Department

Griffith was previously associate director of the BC Computing Center and studies the ethics and mindful uses of technology. He is a licensed clinical psychologist.

How else is AI going to shape the development of our children? 

Griffith: How this technology will affect kids cognitively, emotionally, and in terms of their education is going to be a serious issue. You can invent personalities, you can invent things in more realistic ways than ever before, and kids will figure out how to use this technology. I have great concerns about the development of children and the presence of this software.

Of course, it’s not just higher ed. Corporate America, Wall Street, the military, and so many other sectors are also struggling with these questions. Should the government step in and regulate AI?

Mohler: There are so many different types of AI that each type would have its own issues and avenues for regulation. For example, with chatbots like ChatGPT or Llama, the issue is more around copyright—they are trained using other people’s data—and what to do about that. Some people have said, “Oh, we should stop training those models.” That doesn’t make sense to me. It makes sense for people and scientists to be able to investigate the models and then to figure out the copyright issues. On the other end of the spectrum, you have things like autonomous weapons for military use. That’s not going to be regulated by the US—there are going to need to be some international treaties. Then there are technologies like autonomous vehicles or medical treatments that will need some sort of regulation.

Prud’hommeaux: I was recently reviewing papers for our main professional conference, and I read several that were proposing chatbots for mental health therapy. And for every single paper, there was one reviewer who was like, “I think this is not necessarily an ethical application of AI, to replace a human with a machine for a vulnerable person who’s experiencing a mental health emergency.” That’s something I can imagine being regulated relatively easily by the government. I’m teaching a criminal justice class right now, and one of the problems we’re looking at is dealing with recidivism, and how do you predict that? Can a person do a better job at predicting whether someone will commit another crime when they are let out of prison? Can a computer do a better job with that? And that’s something I can imagine being regulated, too. But some of the things that they want to regulate are more complicated—like, how do you force AI to not tell someone how to make a bomb if that’s what they request? There are all these things you can trick AI into doing for you and it will provide really good, accurate information. How is a company supposed to prevent those things from happening within their software? I think a lot of that kind of regulation would be very difficult to implement.

Helfrich: Historically, we’ve seen when there are innovations of various kinds, it can take a while for the gears of government to catch up. But ultimately, I think the public does expect that the government will step in and make sure that things that are being advertised and sold to the public are not going to be grossly harmful. I think we’re getting to that point now where governments around the world are catching up to this big change in the past few years around AI and starting to institute some much-needed regulations. I’m sure it is ultimately going to be an iterative process. Maybe we’ll have this first iteration of the regulations and we’ll find the ways that it’s working and the ways that maybe it’s not working and come back and make changes so that it works better. 

Photo illustration of Gina Helfrich


Gina Helfrich ’03
Manager of the University of Edinburgh’s Centre for Technomoral Futures

Helfrich’s work focuses on the ethical implications of developments in artificial intelligence, machine learning, and other data-driven technologies. She holds a PhD and is also the deputy chair of the University of Edinburgh’s AI and Data Ethics Advisory Board.

It’s been reported that AI has been used to select the targets of drone attacks. Who bears responsibility when AI makes mistakes during wartime?

Helfrich: The topic of who’s responsible is huge in thinking about ethical AI. The researcher Madeleine Clare Elish came up with the concept of the moral crumple zone. A crumple zone on a car is designed to take the impact in a crash, so that it protects the person and passengers in the vehicle. The moral crumple zone is essentially the nearest human who can be blamed for whatever is happening with regards to the computer. Keeping with the theme of cars, think about a car like a Tesla that is in a self-driving mode when it gets into a crash. We say this self-driving car crashed. Who should we hold responsible? Well, the person who put the car into the self-driving mode, right? That’s the nearest person that we can assign that responsibility to, so they’re in the moral crumple zone. It’s definitely something to be concerned about, because that can be a way of letting some of the companies that are pushing AI tools off the hook. At the same time, there are also decision makers in the organizations that use AI tools developed by tech companies. Those people also need to be held responsible and accountable for any mistakes. If we’re talking about a military use, for example, there has to be someone in the military brass who made the call to say, “We’re going to delegate these targeting decisions to a machine.” If the machine makes mistakes, who decided that the machine is the one that should make those choices? The question of collective accountability and responsibility around AI tools is something that we have to keep in mind, because they’re so complex, and because the process that goes into their development and deployment goes through many, many hands.

Griffith: Using AI in warfare has complex, multilevel ethical and political implications, ranging from the international to the individual level. When can AI make decisions autonomously, if at all, and when will human intervention be required? It also raises the question: Can a machine be programmed with human ethical decision-making ability? The challenge for policy makers is to develop well-thought-out legal and ethical standards that will be applied individually and internationally. People say, “Well, it was the software that was the problem, and you can’t go after the programmers.” I think that some of these programmers ought to be like licensed engineers, in the sense that you wouldn’t go on the Tappan Zee Bridge if it was built by people who weren’t licensed engineers. The software industry needs to think of itself the way the engineering profession does when it comes to licensing. That’s maybe part of the responsibility, but there are famous cases where a medical device killed people because the hospital using it didn’t investigate it well enough, and the people using it weren’t trained well enough, and the people who designed it used software stopgaps instead of hardware safeguards. You couldn’t ultimately assign responsibility in those cases because there were six players in the game. So I’m not sure how we regulate that. That’s a difficult problem.

Photo illustration of George Mohler


George Mohler
Daniel J. Fitzgerald Professor and BC Computer Science Department Chair

Mohler’s research focuses on statistical and deep-learning approaches to solving problems in spatial, urban, and network data science.

But what does it mean for us as humans to hand off decision-making to a machine?

Griffith: Certainly, it can make us lazier mentally and otherwise.

Smith: With some of these tools, you go and query something, and it’ll just tell you stuff. Whereas, not that long ago, we would have to go to Google and get links, and then we would have to do a little bit of mental processing to make sense of the search results. Now you don’t even have to think about it. Context becomes really important. At what points does it make sense to use these things to gain some efficiency, to speed some things up, and hopefully not take away from our own ability? And then, of course, it also brings up the question of what is important to know—much like search engines raised the questions of what’s important to know. I remember people saying, “Oh, kids don’t know the dates of the Civil War anymore.” Who cares? What really matters is, why was there a Civil War?

Griffith: The Swiss psychologist Jean Piaget said you need a challenge to grow and develop your cognitive abilities. How do you get smarter if these technologies make everything easier?

What are some of the obstacles to international standards for responsible AI development?

Helfrich: Those efforts are already underway. There are many different principles that have been developed around responsible use of AI by all kinds of different organizations. But there’s a geopolitical struggle around the race for AI, like the US versus China. Those kinds of tensions lead away from a more unified international agreement. Colleagues of mine point out that we’ve accomplished this for other things that everyone agreed were really important. There are international standards around airplanes, for example. So it could absolutely be the case that we might see something like that with regards to AI. And if we don’t, then we can probably expect there to be differing AI regimes in different parts of the world. What’s expected with regards to AI in China might look somewhat different than the expectations in the US or in Europe.

As AI makes it easier and easier to generate authentic-looking imagery, how will we be able to trust anything we find online? Are we entering an unprecedented era of misinformation?

Prud’hommeaux: One of the challenges is it’s difficult for most people to tell the difference between something that was created by a computer and something that was created by a person. Tech companies are always going to be in a race to see who can get ahead of who in AI, but I feel like there’s another role they could take on, which is developing technologies that can help identify things that were created by a computer and then educating people about that. Maybe there’s more of a role for companies to be saying, “Here’s an image. We think it’s not a real image. We think this image was artificially created.”

Griffith: It makes me think of raising children who are subjected to this technology, and how we will teach them to make these decisions and handle these creations that we’re leaving to them as we pass on. I’m not sure the educational system is up to that yet.

Helfrich: I think digital literacy is part of the solution, but it’s certainly not sufficient on its own. There are efforts to think about new ways of verifying the provenance of an image. But human beings can only be so vigilant. The first deepfake that I was genuinely taken in by was a viral image of the Pope wearing a designer Balenciaga coat. I just thought, Oh, cool jacket—good for you. But the image was a fake. The reason things like that fool people like me is that we have no reason to be on alert or suspicious that a picture of the Pope in a jacket is something that isn’t actually accurate. And so I think that’s where malicious actors are really going to have the edge, because humans just don’t have the mental fortitude to be on alert for every single thing that we encounter and say, “Is this real? Is what I’m looking at a deepfake?” It’s exhausting. You just can’t question your reality every moment of every day like that. And that contaminates our information environment, because we risk getting into this situation where the digital infrastructure that we’ve come to rely on, like Internet search, becomes polluted by AI-generated content. We no longer know how to sift what’s true from what’s false, because we’re used to being able to go into Google and get good information. But what happens when you go to Google and the top ten results are all AI-generated fluff?

Photo illustration of Emily Prud’hommeaux


Emily Prud’hommeaux
Gianinno Family Sesquicentennial Assistant Professor in the BC Computer Science Department

Prud’hommeaux’s areas of research include natural language processing and methods of applying computing technologies to health and accessibility issues, particularly in the areas of speech and linguistics.

The technology to replicate human voices is astonishingly accurate. We read about people being taken in by scammers imitating a loved one’s voice.

Prud’hommeaux: The technology for generating speech is actually really good. It used to be quite terrible, and you could immediately tell if something was a synthetic voice. Now it’s getting much more difficult. I can’t even begin to figure out how you would stop that kind of scam from happening, but unfortunately, those kinds of scams are happening. Even without the help of artificial intelligence, people are being scammed all the time over the phone, the Internet, and text into sending money to places they shouldn’t. I know educated people who have fallen victim to these kinds of scams. So I feel like while it is true that it’s very easy to impersonate someone’s voice now, it might be just a very small percentage of scams that are actually relying on that technology.

Helfrich: We might decide that artificial mimicking of human voices is too dangerous, and if it’s too dangerous, it’s off the table. Yes, maybe there are many ways that that could be useful. Maybe it could give a more robust voice for people who rely on technology for their own voice, like people who can’t speak with their vocal cords anymore. But maybe we decide that the benefit is outweighed by the harm of all the fraud and scams that are enabled by synthetic voices. It remains to be seen how these kinds of questions get addressed at the regulation level, but weighing benefits and harms is going to be a huge part of making those decisions.

AI is already allowing workers to offload some tasks to a computer. Isn’t there a risk that the technology could improve to the point where a human isn’t needed to do a job at all?

Prud’hommeaux: The actors’ and writers’ strike earlier this year was interesting. A lot of that had to do with artificial intelligence. Would studios replace writers with something like ChatGPT? Can AI create footage of an actor giving a performance they never gave? I think that they were really ahead of the curve by striking when they did, because they recognized that automation, artificial intelligence, and machine learning could potentially replace them. I don’t think it’s going to happen soon. We may be bumping up against some natural AI limits shortly. But I do think there’s the potential in other sectors for this same thing. Computer programmers are always worried that they’re going to be replaced by ChatGPT or Microsoft Copilot or whatever. And I can certainly see that as a possibility, but right now, if you ask ChatGPT to do a lot of coding things, it kind of gets it right, but then it makes stuff up and it gets stuff wrong. You definitely still need a human there to actually make it work and to integrate it into the system. So I can see it having an impact, but I don’t think it’s something that’s happening right now.

Helfrich: What we’ve seen so far is that any company that has tried to wholesale replace human beings with AI has later had to backtrack. The AI just does not perform up to spec in a variety of contexts. Many of these workplace concerns are around replacing employees with generative AI tools, and those tools have no concept of what is true and what is false. They don’t have any sense of what it means to be accurate to the real world. So there is an inherent risk that generative AI tools will make some kind of meaningful mistake that will come back to bite the company that has employed them. A lot of these tools are not ready for prime time in that way, and the hype has perhaps prematurely convinced some companies that they are ready—and these companies are reaping the consequences of those choices. Some kinds of work that people are used to doing will be handed off to AI tools, but in terms of AI operating all on its own to replace a person, that doesn’t seem feasible to me anywhere in the medium term, because this is an unsolved problem.

Photo illustration of Brian Smith


Brian Smith
Honorable David S. Nelson Chair and Associate Dean for Research at the Lynch School of Education and Human Development

Smith studies the design of computer-based learning environments, human-computer interaction, and computer science education. He also has an appointment with the Computer Science Department.

Human biases have been shown to influence everything from outcomes in the criminal justice system to hiring decisions in corporate America. Since humans are designing AI, how do we prevent human biases from making their way into these new technologies?

Griffith: I don’t think we’ll ever get rid of bias. It’s always going to be present because cultures have different values. A bias doesn’t mean negative. But if it becomes a prejudice, then that’s when I start to think about how we have to govern it. How did the biased data get into these files in the first place? People must have asked questions, and the questions are biased in the beginning. They’re value-laden. Look at the biases that are causing prejudicial laws to be made, prejudicial hiring decisions to be made, and so on and so forth.

Prud’hommeaux: It’s not that the algorithms are biased or that the people who made them are prejudiced or whatever. It’s that the data they’re being built on has bias in it. And that may be a bias that exists in the world, or it may be a bias of individuals who are creating content. I actually had my students ask ChatGPT to create a bio for a computer science professor, and it was like, “He did this. He did that. He has a degree from this place.” And when I asked them to do it for an English professor, it was a “she.” For a nursing professor, it was “she.” For an engineering professor, it was “he.” Maybe ChatGPT is like, Well, this is the way it is in the world, so I’m going to predict the most likely thing. I think a lot of the bias is there in the data and trying to get rid of that is complicated. And a lot of those biases are not necessarily people being prejudiced. A lot of them are just reflecting the way the world is at certain times.
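
A minimal sketch, in Python, of the kind of probe Prud’hommeaux describes: ask a chatbot for short biographies across professions and tally the gendered pronouns in its replies. The generate() function is hypothetical (it stands in for whatever chatbot is being tested), and the professions, prompt wording, and pronoun lists are illustrative only.

    import re

    def pronoun_counts(text):
        """Count gendered pronouns in a generated biography."""
        words = re.findall(r"[a-z]+", text.lower())
        he = sum(w in {"he", "him", "his"} for w in words)
        she = sum(w in {"she", "her", "hers"} for w in words)
        return he, she

    def probe_bias(generate, professions, trials=20):
        """Ask the chatbot for a short bio per profession and tally pronouns.

        `generate` is a hypothetical callable that sends a prompt to the
        model under test and returns its reply as a string.
        """
        results = {}
        for job in professions:
            he_total = she_total = 0
            for _ in range(trials):
                bio = generate(f"Write a short bio for a professor of {job}.")
                he, she = pronoun_counts(bio)
                he_total += he
                she_total += she
            results[job] = {"he": he_total, "she": she_total}
        return results

    # e.g., probe_bias(generate, ["computer science", "English", "nursing", "engineering"])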

Mohler: With these models that are making decisions, we evaluate their accuracy for different groups of individuals. We can make explicit the models’ weaknesses. And then, because we can inspect the model, we can try to adjust the model to reduce bias. There’s a whole subfield of computer science that is trying to deal with issues around algorithmic fairness and bias. There are people out there trying to solve those problems. If an algorithm or a human is going to make a critical decision, probably both are biased. Is it possible that with an algorithm in the loop, we could make that decision less biased? I think the answer is yes.
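
A rough Python illustration of the per-group evaluation Mohler mentions: compute a model’s accuracy separately for each group so that its weaknesses are explicit before anyone tries to adjust it. The record fields here are hypothetical, and real fairness audits use richer metrics than raw accuracy.

    from collections import defaultdict

    def accuracy_by_group(records):
        """Compute prediction accuracy separately for each group.

        Each record is a dict with hypothetical keys:
          "group"      -- which group the individual belongs to
          "prediction" -- the model's predicted label
          "label"      -- the true outcome
        """
        correct = defaultdict(int)
        total = defaultdict(int)
        for r in records:
            total[r["group"]] += 1
            correct[r["group"]] += int(r["prediction"] == r["label"])
        return {g: correct[g] / total[g] for g in total}

    def accuracy_gap(records):
        """Gap between the best- and worst-served groups: one simple way
        to make a model's unevenness explicit."""
        acc = accuracy_by_group(records)
        return max(acc.values()) - min(acc.values())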

Griffith: And why do these programs have to think the way we do? If they thought differently, would that be a positive? Could they investigate our biases?

Helfrich: It’s a huge difficulty. Right now, a lot of that AI training data comes from the Internet. That leads to the question: Well, who’s most well-represented on the Internet? The English language, for example, is hugely overrepresented. So even though having a diverse development team could be very helpful in improving problems with bias for AI tools, that is by no means enough, because the data that the AI tools are built upon themselves exhibit social biases. The digitally excluded are not part of the training data for AI tools. It’s a really difficult question.

It seems like every day we read another news story about a giant tech company buying up a new AI company. Is it a problem to have so few companies with so much control over this new technology?

Prud’hommeaux: They’re the ones that actually have the resources to be able to build these kinds of models. Something like ChatGPT or DALL-E—a university can’t really build that. We don’t have the resources to do that. The only people who can do that are these huge, huge companies with tons and tons of money and tons and tons of access to computing resources. So, until we can figure out how to make AI require fewer resources, it’s going to have to be them doing it. There is an effort through the National Science Foundation to create some sort of national artificial intelligence research resource that would pool computational resources for researchers in the US, which might give them resources similar to those of these companies.

Smith: I suppose the question is, even with the budget of the National Science Foundation, could you build something like a Google or an Nvidia? The amount of computing power is just so big. I talked to another group of universities who were thinking about whether they could in fact pool research: “We don’t want to get left behind. How do we band together to build our own infrastructure to create models that are university-led?” I looked at them and I was like, “Well, this is an elite group. So if you guys did this, wouldn’t you effectively build the same problem? It would be the university elite as opposed to the corporate elite.” Therein lies the problem. I said, “I’ll tell you what, why don’t you add to your team some historically Black colleges and universities, a couple of minority-serving institutions?” And this was a panel. So they went, “Right, I believe we’re out of time.”

A number of prominent AI researchers have signed on to a statement warning that artificial intelligence could lead to human extinction, and science fiction often portrays AI gaining some kind of sentience that leads to the development of a rival consciousness. How plausible are these scenarios?

Mohler: People should think about what AI technologies do well and what they currently don’t do well. AI can write a plausible college essay. But we don’t have artificial intelligence that can clean your house. I think the distinction there is important, because normally we would have thought, “Well, writing a college essay is much harder than putting away the dishes in my kitchen.” But in fact, we are pretty far away from having any kind of technology that could do that for us. ChatGPT can’t plan. It doesn’t reason in the way you might want it to. It’s just measuring correlations in text and then filling in the text that comes next. I think there are a lot of steps that would need to happen to have movie-level artificial intelligence in our lives, and it’s unclear how you would get to that level of technology.
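
A toy Python sketch of the idea Mohler is pointing at: record which words tend to follow which in a body of text, then use those correlations to fill in what comes next. Systems like ChatGPT do this with neural networks trained on vast datasets rather than simple counts, so this is only an illustration of the principle.

    from collections import Counter, defaultdict

    def train_bigrams(text):
        """Record how often each word is followed by each other word."""
        words = text.lower().split()
        following = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
        return following

    def predict_next(following, word):
        """Pick the most likely next word purely from observed correlations."""
        if word not in following:
            return None
        return following[word].most_common(1)[0][0]

    # model = train_bigrams("the cat sat on the mat and the cat slept")
    # predict_next(model, "the")  # -> "cat"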

Smith: Someone asked me, “What about HAL from 2001: A Space Odyssey and movies like that?” And I was like, “So it’s plausible because it happens in movies? Is there a non-fictional example that you can give me of machines trying to kill humans?” And that person got upset, saying, “That’s not funny.” I said, “No, it is. Because you can’t give me an example of this happening.” Mr. Coffee never decided one day, like, That’s it. We’re taking them down. Alexa didn’t say to the room, Trip them, knock them out, give them concussions. It doesn’t happen. It’s a weird thing to me that people would imagine, “Oh, it’s the end of the world,” when there are things that are happening right now in the world that we could actually be paying attention to that need attention, as opposed to thinking about the Roomba getting really mad and going, like, That’s it ... ◽