Artificial Intelligence

Colby College is creating an artificial intelligence institute


Colby College is a private liberal arts school in central Maine. You can take classes in art history, chemistry, music and all the other staples, and now the school is adding artificial intelligence to the list. Colby is among the first liberal arts colleges to create an artificial intelligence institute, which will teach students about AI and machine learning through the lenses of subjects like history, gender studies and biology. The college received a $30 million gift from a former student to set up the new institute.

This, of course, comes as the world grapples with the ethics of AI and how to build a moral foundation into algorithms. I spoke with David Greene, the president of Colby College. He said that eventually, he’d like every student to study artificial intelligence before they graduate. The following is an edited transcript of our conversation.

David Greene (Photo courtesy of Colby College)

David Greene: I think there’s going to be a period in the not-too-distant future where this is absolutely ubiquitous in higher education, the way it’s become ubiquitous in our lives, because of the way it is shaping all of our interactions: the way we behave online, the way we gather information today, the way companies market to us on a daily basis. If you’re an English scholar, or you’re in sociology and interested in immigration patterns around the world, you’re probably going to be using machine learning to really look at those in a deep way. If you’re interested in textual analysis across millions of volumes, you’re going to need machine learning to be able to do that. And I think it’s important for us to get out ahead of this, because in 10 years, my sense is that if you are not in AI in a big way, it’s going to be very hard for most colleges and universities to stay relevant.

Molly Wood: Can you break down what this will actually look like, in specifics? If someone is an English major, will they take a specific class, or will this kind of base-level understanding be part of everything they’re learning?

Greene: You know, writing right now is part of every course that we have. It’s just become a way of communicating, a way of demonstrating your work, a way of thinking. Writing is so essential to everything that we’re doing. I think, as we look ahead over the next 10, 20 years, AI is going to have a similar role for us. It won’t be that you just take a writing course; it’s going to be that in each of your courses, you’re going to think about research and its impact. What’s knowable and what’s not? How do we learn in a deeper and more ongoing way? How do we get access to the greatest wealth of information to be able to make decisions? And when you start to get into that thinking, you know that AI and machine learning are going to play a major role in all of it. So I don’t see it as one particular class for each discipline. Instead, I see it as organically integrated across our courses, in a way that students will have this no matter what their major is. They will have had deep experience with AI, and they’ll be able to use that when they leave Colby, in so many different fields.

Wood: Is this, in some ways, about improving outcomes, producing graduates who are really prepared for the job market as it exists right now?

Greene: There’s no doubt about it. I think that if you look 10 or 20 years from now, it’s not going to be a plus factor, it’s going to be an absolute expectation. So for us to build that capacity now is essential in all of this.

Wood: Why do you think you’re so early? Do you feel like you’re hearing from other institutions that they’re going to make this as deep a part of their curriculum?

Greene: I think in our sector, there has not been that much. If you look around right now and ask, “Where is AI absolutely prevalent at this moment?” it’s really in three different spheres. You see it very much in the tech industry. You see it very much in the defense industry. And you see it in the elite engineering universities that are deeply connected to both of those places. And they have a set of motives that are somewhat different from what we would have at Colby. I think what’s happening is we want to make sure that there’s a real democratization of AI, that colleges and universities across the country are engaged in this work, so that students from all backgrounds and experiences have a deep understanding of it and ultimately shape the future of AI in a way that really reckons with its downstream consequences. It’s not only about the profit motive, or about bigger, faster, more capacity, but about what the implications of AI are for all of us in society. If we keep this in a very narrow sphere, we’ll never get there. But if we’re in a position where students across different colleges are always working on this, we have a big chance of changing AI so it serves the common good.

We’re actually going to start a summer institute as part of this, where students and faculty from colleges around the country will come to Colby, enhance their own skill sets around AI and then bring those back to their campuses, because we think it’s important not only that Colby does this, but that colleges across the country do it. So there will be an evangelical nature to all of this, to make sure that we’re always spreading the word about how important it is not to bury our heads in the sand, but to recognize that AI is here right now. It’s not going anywhere. And we had better be part of changing it for the better.

Wood: I just want to follow up on that idea, that you’re producing students who can lead AI instead of being led by AI. What problems do you think these future cohorts can help tackle that traditionally trained computer scientists, or even ethicists, haven’t been able to so far?

Greene: Yeah, I think that’s it. You know, you’ve said it beautifully. We need a whole cohort of students from different backgrounds and experiences who are really leading AI and not being led by it. One of the beauties of people who are trained in the liberal arts is that they really understand how to come at a problem from multiple different angles. They understand history in context. They understand how things play out over time: not just the near-term impact of something, but what happens over a longer period. How do you look at that impact and understand and predict what might happen if you make this decision versus that decision? Right now, because things are so narrow, we’re missing much of that. And I think the more we have people coming from liberal arts backgrounds who are raising the kinds of questions that will ultimately shape AI in more positive ways, the better off we’ll be.

Related links: More insight from Molly Wood

Colby is arguably the first liberal arts college in the U.S. to set up such an extensive program. It’s worth noting that last September, Oxford University announced an Institute for Ethics in AI, headed by philosophers and funded partly by a $188 million donation from Blackstone co-founder Stephen Schwarzman, who specifically noted that he thinks an education grounded in the humanities is essential to developing ethical artificial intelligence.

On the topic of ethics and AI, it increasingly sounds like Google maybe just doesn’t want to hear about it anymore. In November, Timnit Gebru, a Black AI scientist who served on Google Research’s AI ethics team, said she was fired for speaking forcefully about the company’s ethics and lack of commitment to racial diversity. Google has said she resigned, but something like 2,700 employees signed a letter in support of her. And last month, Google said it was investigating the other co-lead on that ethics team, Margaret Mitchell, for allegedly improperly sharing internal documents. As it happens, Mitchell has also criticized the company’s record on diversity and ethics, and she said on Twitter that she was compiling research on Gebru’s firing when Google started its investigation.


