From the President: AI, faith and the human connection

Posted By: Kerry Weber, Catholic Media Blog, The Catholic Journalist

The opening keynote of this year’s Catholic Media Conference June 17, titled “The Mirror and the Window: AI and the Future of Catholic Communication,” will be presented by Taylor Black, director of AI and Venture Ecosystems within the Strategic Programs and Communities team at Microsoft’s Office of the Chief Technology Officer, and the inaugural director of the Leonum Institute for AI & Emerging Technologies at the Catholic University of America in Washington.

Black will also host a master class June 17 titled “Hands on the Tools: From Friction to Flow Without Losing Your Soul.”

Black is a Seattle native with degrees from Gonzaga University and Boston College. He is in diaconal formation for the Holy Protection of Mary Byzantine Catholic Eparchy of Phoenix, and he and his wife are foster and adoptive parents, with three adoptions and six foster placements.

I spoke with Black on a video call to get a preview of his insights. Our conversation has been condensed and edited. 

What is it about this moment that most excites you, from the perspective of working in AI?

I think there are two things. One is there’s just a tremendous amount of innovation and possibility that working well with a tool like this presents and can unlock in terms of discovery and scientific innovation, and in terms of reshaping workforces and the economy in a vein oriented toward more human flourishing. Like any technology, of course, that remains ours to guide and champion, rather than these tools being used in ways that don’t lead to all of those happy outcomes. But I think that, by and large, a lot of people who are very concerned about this also have that sort of hope, standing shoulder to shoulder to drive this forward in really interesting ways.

You mentioned working well with these technologies. What does working well with them look like, particularly from your perspective of your background in philosophy?

Your own personal use of these tools ends up being very epistemologically oriented. Now, what I mean by that is we’ve never, in the history of humanity, encountered something on the other side of a text thread that so closely simulates what we’ve always encountered as another human on the other side. And as a result of that similarity or that simulation, we can delude ourselves into thinking that the technology on the other side of that conversation is actually coming to insights or coming to knowledge or to truth, because we assume that of other humans when we’re interacting with them. So, it’s important to be able to pay attention to your own cognitional activities when you’re coming to these tools, because they can lead you astray, purely by the nature of the way in which we assume they’re human in a certain sort of fashion. When you come to these tools, you have to approach them with a strong sense of your own cognitive activity, a strong sense of your own human anthropology. And if you don’t have those things, it can be easy to slide into handing off things that are yours, your own cognitive acts, your own pursuit of truth, to the tool where they should not belong.

I’ve read a lot about the atrophy of critical thinking skills as a potential outcome of overuse of AI. How do we avoid that? 

You can use these tools to make your thinking harder as well, in good ways. So, rather than asking the thing for the answer, ask the thing whether you have thought through all of the pertinent questions, or whether your questions are good questions. Using the tool in a true tool sense. In that fashion, you’re able to sharpen your own thinking. You’re not handing off your cognitive tasks but using it as a true thought partner. That can be learned like any other kind of use of a tool, in the same way that we learned how to drive a car, in the same way that we learn how to read a book. You have to be careful with it, like all powerful things, but I think it can be done well and make people smarter and more productive as a result.

How has your work at the Catholic University informed your work at Microsoft?

In many ways. In fact, I recently presented at our faith-based customers summit that was part of our nonprofit customer summit. Microsoft, like many tech companies, has a tremendous number of customers who are faith-based organizations, and being able to represent their concerns, their customer experience of the tools themselves, when they’re coming explicitly from a particular worldview, ends up being a tremendously important thing for all tech companies to learn from.

I’m seeing a lot of humanity — secular, religious — rising to the occasion of saying: Hey, these tools seem to impact our worldviews, our way of thinking, our understanding of what it means to be a human person. We have some very strong thoughts about that, and we don’t think that sort of shaping should be in the hands of a select few. Let’s all kind of lean in on this. Let’s all express the ways in which we think humans should interact with this technology and put in place the things that need to be put in place, not only regarding policy, of course, but also voting with our feet, telling the creators of these tools how we want to interact with these tools.

And do you think there is an openness to hearing those concerns in the tech world? Can people on the ground actually have an impact?

Yep, very much so. For example, last summer, we had a research project at Microsoft called Tech for Religious Empowerment, and we hired researchers to look at how we can shape our tools to better suit our customers of faith. Google DeepMind recently hired a philosopher. There’s already a philosopher on staff at Anthropic. And so, I think that everybody creating these tools has similar concerns and wants that anthropological view to be developed more robustly, so that they themselves — we, the makers of these things — can have a voice in how all of this works together as well. I feel very bullish on our collective ability to shape these tools and the direction in which they go.

For journalists in particular, there is a big concern that writing — the trade that people have trained for throughout their lives — is going to disappear because of the ability of the large language model to produce similar and, frankly, very good content in a second, versus hours and hours of human labor. Do you have any advice for journalists with this concern?

This one’s complicated, right? Because there’s a way in which the nature of the work is going to shift.

There’s an interesting parallel, though, in the space of code production and development: The best people using these tools to do a lot of the code writing, who are experts in the craft of the code itself, are able to manipulate them and get the kinds of outputs that they want precisely because they’re so good at the underlying thing that the machine is doing. So, there’s a certain sense in which journalists are already in a really good place in terms of being able to utilize AI tools, because as masters of natural language, they now have a powerful ally on their side that requires natural language in order to do what it does.

This AI tool is coming in as a helper, but only if I treat it as a helper. It can also be a replacement, if I haven’t been thinking about my craft to the level that I could, if I haven’t been thinking about my audience to the level that I could in terms of being able to convey truthful, helpful messages. 

We’ve seen this with other technological revolutions: there was something lost when we shifted from human weavers to the mechanical loom. There’s always going to be a human weaver element in society, because there’s a love of the thing itself. There’s a love of doing the work in that particular kind of way. That, I think, should always be there.

But there’s also a way in which you can use these tools to do things that you weren’t able to do before, not just from a generation standpoint but from a driving-your-own-excellence standpoint. It can help you do some of the research. It can help you iterate through the five dozen words you had been thinking about in order to get that precise nuance you’re trying to convey in that paragraph, or to reach a particular kind of audience. Used in that sort of adversarial way, to refine one’s own thinking in the craft of journalism or poetry, that’s a great use of the tool, one that elevates the craft itself.

What are you telling your own children about AI and how do they use it?

They don’t; they also don’t have devices. This is common for a lot of people in tech. Part of the reason for that is we know how powerful these tools are with vulnerable populations. Good, human-flourishing-oriented use of AI, at the moment, requires a tremendous amount of judgment, not only because of the ways in which the tools themselves are constructed, but because you have to know a lot about your own modes of cognition and your own personhood. Now, as I like to remind people, until you’re 24 or 25 years old, not only do you have software problems, but you have hardware problems, in the sense that you don’t have a fully developed prefrontal cortex. And so, my word of caution, generally, unless it’s in very discrete circumstances, is to teach young people how to make good decisions and how to develop their own voice before exposing them to these kinds of tools, waiting until roughly that age, because they won’t have a voice otherwise.

Kerry Weber is Executive Editor at America Media and President of the Catholic Media Association.