Embrace change, take risks, and disrupt yourself
Hosted by top 5 banking and fintech influencer, Jim Marous, Banking Transformed highlights the challenges facing the banking industry. Featuring some of the top minds in business, this podcast explores how financial institutions can prepare for the future of banking.
What is the Potential of ChatGPT in Banking?
ChatGPT has taken over the news cycle since launching at the end of November of 2022. Its AI version of human conversation has created optimism and debate around the opportunities and risks associated with this technology.
The question is, how can conversational AI impact banking as we know it, from an operational and customer experience perspective? What is the scope of this change and what are the challenges?
I am excited to have Charles Morris, Chief Data Scientist for Financial Services at Microsoft on the Banking Transformed podcast. We will be discussing how banks and credit unions can leverage the power of ChatGPT and conversational AI in the future.
This Episode of Banking Transformed is sponsored by FIS
How do you find your feet on ground that’s constantly shifting?
You have to read The Global Innovation Report from our partners at FIS. From embedded finance and ESG to crypto, decentralized finance and the metaverse, FIS pinpoints the trends you need to watch – and explains how innovation can give you an advantage, in good times and bad.
Discover how the latest innovations could affect your business. Explore the research today by visiting www.fisglobal.com/global-innovation-report
FIS. Advancing the way the world pays, banks and invests.
Where to Listen
Find us in your favorite podcast app.
Jim Marous:
Hello and welcome to Banking Transformed, the top podcast in retail banking. I'm your host, Jim Marous, owner and CEO of the Digital Banking Report and co-publisher of The Financial Brand. ChatGPT has taken over the news cycle since launching at the end of November of 2022. Its AI version of human conversation has created optimism and debate around the opportunities and risks associated with this new technology. The question is, how can conversational AI impact banking as we know it from an operational and customer experience perspective? What is the scope of this change and what are the challenges? I'm excited to have Charles Morris, Chief Data Scientist for Financial Services at Microsoft, on the Banking Transformed podcast. We'll be discussing how banks and credit unions can leverage the power of ChatGPT and conversational AI in the future. ChatGPT has made significant waves in the technology industry for its ability to accurately answer questions and complete a wide range of tasks, from creating content to developing software and formulating business ideas.
Launched in November of 2022 by OpenAI, the AI program has impressed users and technologists with its ability to mimic human language and thought patterns, all while providing coherent and topical information. The ability to leverage AI in this way means that productivity can be increased in a variety of industries. So welcome to the show, Charles. So where do we start? It's amazing how much has been said about ChatGPT while still leaving so many questions unanswered. We were talking before the podcast that you can't turn on a news program or open a business magazine without this being referenced. And on the consumer level, which is kind of interesting when you talk about how advanced the technology is, how many people are already using it or at least thinking about it on a daily basis. So as a starting point, for those few people who may not be aware of what ChatGPT actually is, can you provide a bit of a primer around the platform?
Charles Morris:
Yeah, sure. So ChatGPT is a model that's coming out of OpenAI, and for those of you who don't know, Microsoft has partnered with OpenAI, which is a sort of research institution that also has a for-profit component that builds these AI models. And the partnership initially was like, let's see if we can push the boundaries of AI, because we're seeing that we're really close to being able to do some really cool things. So we partnered with them saying, okay, well, Azure has really good AI infrastructure, so let's help you build that out. And it kind of took off from there. So as we were starting to build these models, these are based off models called GPT-3, sometimes you hear 3.5. GPT stands for generative pre-trained transformer. And what this means is it's been trained on a massive quantity of text and code data, basically compiling that down into a learned representation over many billions of parameters.
What this means is that it kind of understands how language is constructed, by reading everything and seeing how language historically has been constructed. Now, with ChatGPT, they took those models and trained them specifically for conversational situations, so it wouldn't just behave as a generic language model; it would understand that you're trying to have a conversation. And obviously, when OpenAI released this into the public, I don't think anybody was fully expecting just how viral this technology would go, because the actual base technology behind it had been around for a little while. But once they released it, it just took off, and now everybody's talking about it. And now we're starting to see some really cool applications already, both from Microsoft but also coming down the pike from some of our customers.
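To make the "probability of how language is constructed" idea concrete, here is a toy sketch in Python. This is illustrative only: GPT learns over billions of parameters, not bigram counts, and the tiny corpus below is made up.

```python
from collections import Counter, defaultdict

# A generative language model assigns probabilities to the next token given
# what came before. This toy bigram model learns those probabilities by
# counting word pairs in a (made-up) corpus.
corpus = "the bank approved the loan and the bank closed the account".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_probs(prev):
    """Probability distribution over the next token after `prev`."""
    c = counts[prev]
    total = sum(c.values())
    return {tok: n / total for tok, n in c.items()}

print(next_token_probs("the"))  # 'bank' is the most likely follower here
```

The real models learn vastly richer statistics, but the core task is the same: given everything so far, score which token is likely to come next.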
Jim Marous:
So where do you see the use of conversational AI in the short term as well as further down the road? Is ChatGPT even ready for prime time yet?
Charles Morris:
Yeah, so with ChatGPT, you should think about it as one in a family of related generative models. And the way that we're sort of thinking about it is all these models are essentially becoming a co-pilot for a human to be more productive. They can help you understand more content, they can help you write and produce content, and the human ultimately is still responsible for the content they produce. So historically we've thought a lot about automation and straight-through processing, and those things are still important, but this is a slightly different flavor. We're really moving towards augmentation and using humans plus AI to move a lot faster.
And to that extent, we already have a number of use cases that are wildly successful that we could talk about. Everything from GitHub Copilot, which developers are already using to write more and more of their lines of code, to, as you probably saw yesterday, the new Bing, which is powered by some of these OpenAI GPT models, where we have these Prometheus models that are specifically for search. And now you can interact and search in more intuitive ways, and it remembers what you were asking about. It knows how to synthesize answers across different sources while citing those answers. So I think it's safe to say we have real scenarios going out right now, and this is going to be totally transformational technology.
[06:12] Jim Marous:
So there's a lot of discussion around ChatGPT being used for customer service and even marketing. Can this technology be customized for each, let's say, financial institution in a way that can point customers to specific products or services based on the questions asked?
Charles Morris:
It can, yeah. So I think some of that is still in development, but if you think about it, we really want to think about these models as platforms. Remember, the P in GPT is pre-trained. This pre-training process is quite expensive and requires supercomputing infrastructure to both train and actually host these models at scale. So we're actually creating these models as platforms so that customers can access these models and use them for their own scenarios. They can fine-tune them for their own use cases so they can understand the context of their business. And then of course that's going to show up in a couple different ways. It's going to show up with our customers directly fine-tuning these models for their scenarios, but it's also going to show up in Microsoft first-party development as well. We're baking these into our products in ways that help customers do this so they don't necessarily need to build everything from scratch themselves. So we're really going to see a lot of innovation both from Microsoft as well as from our customers in this space, and it's very doable today.
Jim Marous:
So a lot of financial institutions really struggle sometimes with building content, content that the consumer can access internally. Can this technology be used to assist in the development of content or the replacement of content that can actually personalize the response to a consumer's question?
Charles Morris:
Yeah, so when we talk about these models today, there is still a sense that you want to have a human in the loop. There's a responsible AI component: these models are incredibly powerful and incredibly flexible, so they can do a lot, and we want to make sure that when we're producing content with these models, we're not absolving responsibility by saying, "Hey, an AI did it." So there is still an element where we need to understand how it's producing things. However, we have some great examples of content generation where we're able to dramatically increase the speed at which people are able to create personalized content. One public example we have of this, which is not banking but still relevant, is CarMax. CarMax has all these reviews across all their cars on their site from customers that say, oh, I like this about the car, I don't like that about the car.
And they used the OpenAI models to basically summarize all of those reviews for every car and write a review that says, "Hey, here's what people like about this car, here's what people don't like about this car." And then they sent those written reviews to their content team for approval. So they took something that they estimated would've taken years and years to do, and they got it down to a few months. They had that content team who was ultimately approving and taking responsibility for what they were producing, but they did it in a way that let them move much faster. And I think we're going to see the same thing in banking scenarios, where you're able to get a much faster, more synthesized answer. You still have to make sure that it's correct, but I think the value's going to be there for sure. [09:23]
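The workflow described above, draft with a model and then route to a human for approval, can be sketched like this. The model call is stubbed out, and all names and data are hypothetical; in production the summarize step would call a hosted generative model.

```python
def summarize(reviews):
    # Stub standing in for a model call: a real implementation would send
    # the reviews plus an instruction prompt to a generative model.
    return f"Draft summary of {len(reviews)} customer reviews."

def build_drafts(reviews_by_car):
    """Produce one draft summary per car, flagged as awaiting human review."""
    return {
        car: {"draft": summarize(reviews), "approved": False}
        for car, reviews in reviews_by_car.items()
    }

def approve(drafts, car, editor):
    """A human editor signs off and takes responsibility for the text."""
    drafts[car]["approved"] = True
    drafts[car]["approved_by"] = editor

reviews_by_car = {
    "sedan-123": ["Great mileage", "Back seat is cramped"],
    "suv-456": ["Roomy", "Thirsty engine", "Smooth ride"],
}
drafts = build_drafts(reviews_by_car)
approve(drafts, "sedan-123", "content-team")
print(drafts["sedan-123"])
```

The design point is that the AI only produces drafts; nothing is published until a named human approves it, which is the human-in-the-loop pattern Charles keeps returning to.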
Jim Marous:
So as you referenced early in the conversation, we sometimes use ChatGPT as an overall concept that includes all of GPT, but those are actually different things. So can GPT be used to assist in programming, innovation, and even back-office operational improvements?
Charles Morris:
Yeah, of course. So inside of the OpenAI family of models, one of the models is called Codex, and this is basically GPT-3 trained on code and code documentation as well as other language. And the biggest production use case of this today is GitHub Copilot. For those of you who may not have heard of Copilot, it's a co-pilot that sits inside of your text editor and helps you write code; it will generate code. So you'll write a comment or the beginning of a function, and it'll actually synthesize suggestions based on what you're trying to do. So this is not replacing the programmer, but it's enabling them to write more code faster. We're seeing that that use case in Copilot is leading to very high acceptance rates of code by developers and just supercharging their development speeds. That same process of making internal APIs for specific banking systems easier to use is going to be a very doable use case.
Jim Marous:
So you're involved quite a bit in the financial services industry. What are some case studies of how you've seen either ChatGPT or GPT by itself used in financial services so far, or at least where it's being tested, that you can talk about?
Charles Morris:
Yeah, so that was the first thing I was going to say: I think a lot of people are still very early. So even the places that are very serious, they're kind of keeping it hush-hush on what they're doing, because they're not sure yet how exactly it's going to work out. So I can talk in very broad strokes; obviously keep your ear out, as I'm sure more and more customer case studies will become available. What I can say, in broad strokes, is we're seeing exactly some of the things you referenced: we have so much content, how do we sift through it? How do we summarize it? How do I even know if I want to read this thing?
So generating content summaries to be able to understand what's in this, do I care about it, before I go and read it. And then combining that with scenarios like semantic search, where rather than just doing keyword search and searching for specific words, you're actually using the embeddings from these powerful language models to find things that are similar in an abstract way and get more similar matches. We're seeing that people are able to parse through content much faster. So there's the summarization, and the search that's a more natural search. And then on the other end is the generation side, which is getting people to write that first draft much more quickly with the help of AI.
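The semantic-search idea can be illustrated with a small sketch. In practice the vectors come from a language-model embedding service; the hand-made three-dimensional vectors and document names below are stand-ins for illustration only.

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two embedding vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embeddings: in reality these would be high-dimensional vectors
# produced by a language model for each document.
docs = {
    "mortgage-faq": [0.9, 0.1, 0.0],   # roughly "home loans"
    "card-rewards": [0.1, 0.9, 0.1],   # roughly "credit cards"
    "branch-hours": [0.0, 0.1, 0.9],   # roughly "locations"
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "buying a house"

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # the mortgage document ranks first with no shared keywords
```

The point of the sketch: "buying a house" never mentions the word "mortgage," but because the query and document embeddings point in similar directions, the right document still surfaces, which is exactly the advantage over keyword search described above.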
[12:11] Jim Marous:
So there's a discussion already about the next iteration of ChatGPT and of GPT-4 by OpenAI and the potential seems just simply amazing. Where do you see this technology going? I would've said it in the next three to five years, but even in the next one to three years, where do you see this technology actually evolving?
Charles Morris:
That is a really interesting question, because I personally have not been part of one of these major inflection points before, to the point where I think it's really hard to make predictions. I think we're opening up a brand new set of capabilities that are going to be used in totally unpredictable combinations of ways, and we're just starting to see the very beginning of it with things like GitHub Copilot and the new Bing search being really intuitive. But I'm telling you, this is a foundational shift in how people work with technology. This idea of an AI co-pilot that helps you understand and helps you create faster, this is just something we haven't had before. And so I think it's really hard to make predictions about where it goes, but I'm really excited to see, because, I mean, just myself using Copilot, right.
When I'm writing code, it's not like it's writing exactly what I'm producing, but for me, when I'm staring at a blank code editor, it is extremely intimidating to get started. As you start to learn how to work with GitHub Copilot, you start to understand how to prompt it so that it gives you what you want. It's the same thing when I'm writing a document. I can start out with a couple of bullet points, and now I can generate a first draft of the summary I'm trying to do and get way farther. And then when it says something I don't want to say, that's actually helpful too, because I know that's not what I want to say. So rather than sitting there banging my head against my desk, I'm so much more productive, and it's because I have this AI co-pilot in the mix. [14:14]
Jim Marous:
What's interesting is, as I was telling you before the podcast, this is how I'm using it myself. When I'm trying to write an article on a Sunday or some other day of the week and I'm trying to come up with some concepts, I'll send a question, or my thought of what the question's going to be, to ChatGPT and say, what's your answer to this? Not to help develop the copy, but to make sure I haven't left something out. In addition, as you said, when it comes back with an answer, it tends to sometimes get into a bullet point format. I go, I don't want this in a bullet point format, and it completely rethinks what it's talking about and positions it in a different way. It can give references, it can do a lot of things, and it seems like every time I'm playing with it I come up with some other way of saying what I want to say. It really becomes very powerful to realize that it can only go so far if you don't give it the right direction.
But it can go really far if you give it some unique directions to go. And I talked to you before the podcast about how sometimes the naysayers say, well, it's all rudimentary. Well, sometimes the questions are. And it's just very interesting to have the dialogue back and forth and say, "How can we go from here to there in a way that's transformational?" I mean, just every time you open a door, there's more there. So when you talk about the expanded use of AI as it relates to something like, let's say, financial reporting or compliance, these are areas that could work too. They're still developmental, but there's a lot of data involved. It can help in those areas as well, can't it?
Charles Morris:
It can and I think one thing that I want to keep coming back to is really the way to think about it is not as an automation tool. There are some places where it can be used as an automation tool, but really it is about that co-pilot experience. So if you think about flying a plane, if the pilot walks away and the co-pilot crashes the plane, it's a problem. The pilot's ultimately still responsible. So in this case, we're seeing these tools as the human is still responsible, the human is still the person doing it, but we want this co-pilot to be able to make them do it faster. So in the case of compliance and all these advanced documentation scenarios, how much time do you spend reading stuff that you don't really need to read?
Jim Marous:
Right.
Charles Morris:
How much time do you spend thinking about, well, what is a way to word this or summarize this? Well, if I could get you that first draft, or if I could get you that TL;DR, the "too long; didn't read" of this particular section, would that be helpful to you? Would it help you move that much faster? I think we're going to see people changing the way they work, because as you learn how to prompt these tools to get the outputs you want, and you've probably noticed this yourself just using ChatGPT, if you just ask it stupid things, it's not very helpful. But if you kind of understand, here's what I want you to give me, it becomes a really powerful productivity tool, and I just think we're going to see some really phenomenal ways that plays out.
Jim Marous:
Yeah, it's very interesting because of the dynamics of what it can do. One of the things, I won't say it's a drawback, but a limitation right now, is that it's using data through, I think, 2021. Do you see this becoming a real-time tool?
Charles Morris:
So I think that's a really interesting area, because keep in mind that these things are generative, meaning they don't actually have understanding themselves. It's based on just the probability of how language works. So based on what you're asking it, how you're prompting it, you're kind of creating a probability of what potential options might come next, and it's spitting those out. I think the real power, and what we saw with the Bing announcement, is combining these powerful generative models with knowledge systems that actually have the facts, the truth, the information. And you can use those two things together, like we saw with the new Bing. If people haven't watched the new Bing demos, you absolutely should.
But basically what Bing is doing there is it's using Bing search results, and we've improved our search results using these technologies as well, but it's basically getting all that information back and then having the language agent summarize and cite, with citations, where it's actually getting those answers from. So I think that, to me, is the thing I'm most excited about: the marrying of expert knowledge systems that have that truthfulness with the language generation that handles all of the semantic and structural details that you have to write. As we start to figure out those two things together more and more, I think we're going to see entirely new categories of products of [inaudible 00:18:23].
Jim Marous:
So let's take a short break here and recognize the sponsor of this podcast. So welcome back. I'm joined today by Charles Morris, Chief Data Scientist for Financial Services at Microsoft. We've been discussing the potential of ChatGPT technology in banking. So Charles, we've talked a lot about the benefits and the advantages of this technology as it relates to banking as well as other industries, but when you talk about ChatGPT, there are limitations and risks. What do you see, still early in the life cycle, with regard to limitations or risks for this technology right now?
Charles Morris:
I think there are new risks that emerge when you get a new technology like this, especially a powerful technology like this that's really just unlike anything that came before it. And I think Microsoft is uniquely positioned to be a leader in this space, in this responsible AI space. In 2012, we published our responsible AI principles, and since then we've published updates and [inaudible 00:19:28] that our customers can use for their own responsible AI. We have an office of responsible AI, and OpenAI cares very deeply about the responsible AI principles as well. So we are partnering with them, and we have teams of lawyers and ethicists and responsible AI experts who are trying to figure out, okay, what are the potentials for harm? What are the potential mitigations you can do at the model level and at the product level to be able to get through this?
So trying to understand where those risks are and how to mitigate them, that is a big part of why that human-in-the-loop component is still so vital. We want to make sure that people are using this responsibly, which is why, even in the Azure OpenAI service, it's generally available, but you still need to apply use case by use case, because we want to make sure that people aren't accidentally spinning something up that's going to cause harm to individuals or reputational damage to our customers. So we're still treating that very seriously, and that's a big part of the design considerations here. We're going to enter new territory, and we don't know everything that's going to pop up, but we are very diligently working to solve these issues and mitigate them to the best of our abilities. It's something we take very seriously and have for a long time.
Jim Marous:
It's interesting, because we humans have challenges even consuming the news nowadays, because news has its biases, things of this nature. How do you avoid biases when you're taking in all the information that's available? Or do ChatGPT and technologies like this simply avoid those situations that may open the door for biases?
Charles Morris:
Well, when we build the Azure OpenAI service and when we build the new Bing, this is something that we're trying to mitigate. Now, I don't want to speak on behalf of teams too much or get into the specifics of what we're doing, but one example is that in the Azure OpenAI service we have safety filters and content moderation filters built in, and we have red teams that go in and try to break it; they try to make it do bad things in this sandbox and see how they can mitigate that. So going through that process, we're still learning all the limitations, but trying to figure out, again, at every level, from the model all the way up to the application, what are the risks that this creates and how do you mitigate them?
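The layering described here, an automated content filter in front of a human reviewer, can be sketched as follows. This is illustrative only: production safety systems such as the filters in the Azure OpenAI service use trained classifiers, not word lists, and the policy terms below are hypothetical.

```python
# Hypothetical policy list; a real system would use trained classifiers.
BLOCKED_TERMS = {"guaranteed returns", "insider"}

def content_filter(text):
    """First layer: automated check. Returns (allowed, reason)."""
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return False, f"blocked term: {term!r}"
    return True, "passed automated filter"

def route(model_output):
    """Second layer: even filtered output waits for human sign-off."""
    allowed, reason = content_filter(model_output)
    if not allowed:
        return {"status": "rejected", "reason": reason}
    return {"status": "pending_human_review", "reason": reason}

print(route("Open a savings account to earn interest."))
print(route("This fund has guaranteed returns!"))
```

Note that nothing ever goes straight from the model to the customer: output is either rejected by the filter or parked for a human, mirroring the "human is still responsible" framing used throughout the episode.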
And right now, one of the best ways we have to mitigate risks is by having a professional human being in the middle of that loop to be able to understand what's happening. So if it does generate something erroneous, if it generates something harmful it's probably going to get caught by a content filter. But if it generates something erroneous, a human being is going to be able to catch that right before it goes out. So that's where there's all sorts of new ways that we're thinking about risk. We're going to learn a lot in this process, but it's definitely something that should be at the forefront of any design process, but it is something that is being actively worked on and mitigated as much as possible.
Jim Marous:
Can this tool also help in the area of identity and fraud, from the standpoint of the kind of questions that are asked or the interaction that it has with a human? Can this be a future tool down the road, potentially?
Charles Morris:
I think that's an area that's being explored. I don't know definitively whether the answer to that will be yes or no. There are definitely places where I could envision that being extremely useful, but there's also risks that that could potentially present as well. So I'm going to hold off speculating too much and go and see where we end up with this. I'd be surprised if no one ended up going down that road with it.
Jim Marous:
And finally, when we look at a tool like this, when we look at AI in general, the question always comes up: does this replace humans? You've touched upon it a little bit, but what do you see as the impact on the talent pool in the future? Is this an expansion of that talent pool? Is it going to change the dynamics of employment, things of this nature?
Charles Morris:
My view on that is that this creates a lot of opportunity. I think it lowers the bar for people using AI. Look at ChatGPT as an example. Obviously, right now you might go on it and see, oh, we're at capacity, come back later. But we do have tens of millions of users using this technology, using this AI in very powerful ways. Once the new Bing becomes more broadly available, that's going to directly impact users on a day-to-day basis and make things much more flexible, more powerful, more intuitive. So I think what's kind of being revealed here is what parts of knowledge work are actually much easier than they seem.
They're much more repetitive and structured. And then what parts of knowledge work are actually hard, just sort of the creativity and the dealing with people and all these things. And I think we're going to see that jobs change, but I don't necessarily see it as being a zero sum like machine versus humans being replaced. I think people are going to do their jobs differently. I think we're going to have new categories of work, we're going to have new applications that come out. And I think on the whole, I think more people are going to benefit from this technology.
Jim Marous:
It's interesting, we talked about it before the podcast, that it seems like every time you open up a search on a term such as ChatGPT, there's something else you learn. You mentioned the fact that just yesterday, in real time, you announced the integration of ChatGPT with Bing, and the opportunities are immense. I think this is one of those things where we could probably have a podcast once a week for a while, because the marketplace is changing so fast, but it is exciting and it does open the door. We had a conversation recently with an advisory company, and they talked about how it's being used for financial statements to run tests and things of this nature, not on the chat basis, but on the GPT basis. And I think it's going to be exciting, something to pay attention to. It's certainly an area for us all to learn as we go along. So thank you so much for being on the podcast today. I really appreciate your time.
Charles Morris:
I had a lot of fun.
Jim Marous:
Thanks for listening to Banking Transformed, the winner of three international awards for podcast excellence. If you enjoy what we're doing, please take some time to give us a review on your favorite podcast app. It helps us to continue to get great guests. Finally, be sure to catch my recent articles on The Financial Brand and the research we're doing for the Digital Banking Report. This has been a production of Evergreen Podcasts. A special thank you to our senior producer, Leah Hasid, audio engineer, Sean Rule-Hoffman, and video producer, Will Pritz. I'm your host, Jim Marous. Until next time, remember: new technologies are coming at us faster than ever before. The question becomes, will your organization be ready when they do?