Embrace change, take risks, and disrupt yourself
Hosted by top 5 banking and fintech influencer, Jim Marous, Banking Transformed highlights the challenges facing the banking industry. Featuring some of the top minds in business, this podcast explores how financial institutions can prepare for the future of banking.
The Promise and Peril of ChatGPT in Banking, Part 1
With the rise of generative AI (ChatGPT) and its increasing impact on various sectors, including banking, it's crucial to examine how these advancements are transforming the landscape, both for better and for worse.
From exploring the impact of generative AI on customer experience and engagement to discussing the ethical considerations and regulatory challenges, it is more important than ever to understand the transformative power of AI in banking.
My guest for part one of this important Banking Transformed podcast interview on generative AI and ChatGPT in banking is Brian Roemmele, President of Multiplex. Brian equips listeners with the knowledge and understanding needed to embrace the promise of generative AI and ChatGPT while mitigating associated risks.
Jim Marous (00:13):
Hello, and welcome to Banking Transformed, the top podcast in retail banking. I'm your host, Jim Marous, founder and CEO of the Digital Banking Report, and co-publisher of The Financial Brand.
Jim Marous (00:23):
With the rise of generative AI, ChatGPT and the increasing impact on various sectors, including banking, it's crucial to examine how these advancements are transforming the landscape, both for the better and for the worse.
Jim Marous (00:37):
From exploring the impact of generative AI in customer experience and engagement, to discussing the ethical considerations and regulatory challenges, it is more important than ever to understand the transformative power of AI in banking.
Jim Marous (00:51):
My guest on the Banking Transformed Podcast is Brian Roemmele, president of Multiplex. Brian aims to equip listeners with the knowledge and understanding needed to embrace the promise, and understand the perils, of generative AI and ChatGPT while mitigating the associated risks.
Jim Marous (01:10):
This is part one of a two-part series on generative AI. Be sure to listen to both parts.
Jim Marous (01:17):
Generative AI has emerged as a groundbreaking technology, empowering machines to go beyond mere analysis.
Jim Marous (01:24):
From unveiling the secrets behind cutting-edge AI models to discussing the ethical considerations surrounding their usage, our guest can provide you with deep knowledge of the ever-evolving landscape of generative AI.
Jim Marous (01:39):
So, Brian, thank you so much for being on the show again. The last time you were on the show, we discussed the potential of voice banking. Boy, have things changed.
Jim Marous (01:48):
So, before we begin, can you reintroduce yourself to our audience and share a little bit about your background?
Brian Roemmele (01:55):
Well, Jim, thank you for having me back. And it seems like decades ago since the last time we've had our conversation. But I've been following you all this time. So, it's like we've never separated.
Brian Roemmele (02:05):
So, what is it that I do? I'm a researcher. I absolutely love technology, and I'm also loving the convergence of humanity and technology, and trying to understand better ways to piece it together so that we can better coexist with the technology that we're creating, particularly AI.
Brian Roemmele (02:29):
That did not come about when I was younger because I thought technology would solve every problem in the universe. And now, I realize that that was a youthful exuberance.
Brian Roemmele (02:39):
I now realize that we need to have a simpatico with our technology in a way that is enhancing humanity, not de-enhancing it.
Brian Roemmele (02:48):
And so, that has led me into all sorts of computer occupations. I started out soldering hardware together as a kid. I started designing software. Some of the very earliest software I built, we would now call expert systems.
Brian Roemmele (03:05):
But back then, I had the fantasy that I was creating the beginnings of AI. Expert systems had great domain knowledge and would appear to be very smart in the very narrow domains that they were expert in.
Brian Roemmele (03:18):
And so, that was the Commodore 64 era, so it's ancient times, and over the years I've just grown with the technology. A lot of times I've put it on a shelf and not bothered my mental capacity with it because it stalled. Technology like this tends to stall at times, and it requires new thinking.
Brian Roemmele (03:42):
And the very first time I really re-energized it was in the early 2000s when I started seeing voice models start to become much more powerful in the underlying AI.
Brian Roemmele (03:57):
And that led us up to Alexa and Siri. And a lot of people got their first taste of what they believed was AI, but it was really not. It was a proto-AI, not that fascinating, because you really needed to know what to ask for, it couldn't figure out what you wanted.
Brian Roemmele (04:15):
And then by let's say 2010, we started seeing the large language model concept and generative pre-trained transformers become very useful in trying to understand our intents and our volitions.
Brian Roemmele (04:36):
And those were originally used to try to decode language, but now, we're decoding what we are actually needing and wanting from a prompt or a question.
Brian Roemmele (04:47):
So, that's when I started really diving into my garage lab and building what I believe is the only thing on the planet like this. I call it the intelligence amplifier. And this is a concept of AI that is designed to amplify human intelligence.
Brian Roemmele (05:05):
I actually turn AI on its ear as intelligence amplification, as a way to pay homage to the fact that the intelligence is generated by the human, and it's just being amplified by the machine. And so, that's where we are today.
Jim Marous (05:23):
So, when we look at generative AI, how is this different than the AI most of us were familiar with prior to November 30th of last year?
Brian Roemmele (05:34):
Wow, great question, Jim. The difference is one of scale. When we had very good domain models within a silo of information, the expert systems, they did appear to be very intelligent.
Brian Roemmele (05:48):
But the problem is, humans have a very nebulous definition of what intelligence is. We have to see some kind of novelty or surprise coming from the output, something that was not necessarily expected that a machine would generate.
Brian Roemmele (06:04):
And by the time we saw GPT-3 released, I think people were kind of like, "Yeah, that's good, but it's not really that good." When ChatGPT was released and there was an experiment by a mass population, we started seeing incredible outputs that people did not expect, and they were shocked and awed, is really what took place.
Brian Roemmele (06:31):
And I wouldn't just say the average person, I would say across the entire technical world. It was really ground zero for a lot of people. It was a little bit of a delay for me because I'd been working with these models from the moment they came out, from the inception of OpenAI when Elon was with the company.
Brian Roemmele (06:50):
But by the time ChatGPT was released, I was pretty much blown away also. And I also realized that it was at that moment that people understood that the question was just as important as the answer.
Jim Marous (07:07):
That's a great definition there too, because as you mentioned, a lot of people go, "Nah, it wasn't all that good." But then you realize it's only as good as the question you specify.
Brian Roemmele (07:20):
Exactly. And so, I've always trained people to be what we call prompt engineers. And a lot of people feel like, oh, that's like a search engineer on Google, somebody who just puts in search terms.
Brian Roemmele (07:34):
Actually, it's not. The people who are most qualified to prompt AI are not technical people, are not people from the AI community because they see the model through a different lens.
Brian Roemmele (07:46):
The people who I've found, and corporations are finding (they're hiring these folks as soon as they are trained), are people with linguistics backgrounds and backgrounds in even poetry, ancient history, philosophy, psychology.
Brian Roemmele (08:02):
Psychology plays a really big part in trying to elicit an elucidation out of a prompt that you may ... this is what it really comes down to. AI knows things, but it doesn't know. It knows things.
Brian Roemmele (08:18):
And it's up to the human prompter to construct a question or a prompt in such a manner that it can get that information out of the AI.
Brian Roemmele (08:27):
And some folks would rather say, "Well, in the future, AI will get so good, it will understand what you're asking." That's simply not the case.
Brian Roemmele (08:36):
Even with the intelligence amplifiers that I have around me, these devices have been following me for the better part of 20 years with my context and life, about 10 years. They cannot predict everything that I'm going to want, and they certainly can't predict what my question is going to be in a lot of cases.
Brian Roemmele (08:57):
So, I find it fascinating that somebody would think that prompt engineering or the ability to create a prompt will be unnecessary.
Brian Roemmele (09:06):
In fact, if you look at the press about ChatGPT as we're recording today, there is analysis by Stanford University that GPT-4 has gotten less responsive and less well received in the questions that it's answering.
Brian Roemmele (09:23):
In fact, in one case, in one series of questions, it dropped by 90% in its capability from the release day back in March.
Brian Roemmele (09:30):
So, AI is a constant moving target and they're constantly changing it. And with those changes in a model, you have to change the way you prompt the system. And if you don't do that, you're going to get something entirely different.
Jim Marous (09:44):
Is that the difference between 3.5 and 4, where you said it actually dropped in its ability to answer questions? Was that because it changed the way it interpreted what we were asking, or some other reason?
Brian Roemmele (09:56):
There are a lot of questions about that. One of the primary, fundamental problems with, let's call it, cloud-based AI (I'm a proponent of open-source local AI for companies and for individuals) is that cloud-based AI has to be all things to all people.
Brian Roemmele (10:12):
And we always know that if we try to do all things for all people, it's like going to a restaurant that has every ingredient that has ever been made: the food's not going to be good if it's all mixed together.
Brian Roemmele (10:22):
And that's what's going on with the large language models at major corporations: they have teams, in fact teams that are now larger than the groups that are working on actually advancing the AI, to do things called alignment and safety.
Brian Roemmele (10:36):
So, the alignment and safety teams are there to try to make AI safe and aligned to human values. It sounds really great, and one could go down an Orwellian hole of calling that a sort of doublespeak. I'm not going to say that at this moment; one can speculate.
Brian Roemmele (10:53):
I am saying that with their desire to try to make AI safe, they are taking away neurons that would otherwise be useful for other questions, and therefore dumbing down the AI.
Brian Roemmele (11:05):
I've called it an AI lobotomy in a sense. And this is being brought about through a lot of mechanisms. One is most definitely political, another one is psychological, and another one is fear of regulation.
Brian Roemmele (11:22):
One would argue that regulation and political are tied together, but they're being dealt with differently.
Brian Roemmele (11:31):
Now, my problem is if you can do a search even in Google for a particular subject, and you can get a result, AI should be able to answer exactly the same way.
Brian Roemmele (11:42):
So, what the AI companies are trying to do today, companies like OpenAI and Google, is to limit it from even answering questions that somebody could pose to a search engine. And to me, that's sort of ridiculous and a fool's errand.
Brian Roemmele (11:56):
Now, if it's on the dark web, which this AI was not trained on, and it produces results coming from the dark web, which is not really wholesome for society, I absolutely agree with that. But by doing all this work to make AI please everybody, they're ultimately going to please nobody. And that's where OpenAI is.
Brian Roemmele (12:16):
And I also have another issue: when we're prompting an AI system in the cloud, our questions are going to be used in the training of that AI.
Brian Roemmele (12:28):
And for the corporate people that are listening to us, and the individuals that want some level of privacy, anything you ask AI, and the results it generates, are going to be used to build better models.
Brian Roemmele (12:41):
And if you're sharing corporate data, there's a good chance that that corporate data, if it's unique enough, will become part of a future model. No matter what the documentation says, there's no way of completely stopping that.
Brian Roemmele (12:54):
So, in a corporate setting, in a financial setting, in a banking setting, I am 100% about developing your own local AI. Now, it's not going to be as powerful day one as ChatGPT, but it will be trained on your data and only your company will have access to it if you don't put it in any network or any cloud.
Brian Roemmele (13:16):
So, these things come hand in hand. Safety as far as what are the outputs, and then safety on what's going into the AI model that could be used in a way that you didn't intend.
[Music Playing]
Jim Marous (13:30):
So, let's take a short break here and recognize the sponsors of this podcast.
Jim Marous (13:33):
So, Brian, when I'm interacting with ChatGPT and I'm asking the questions, is it learning more about what I'm looking for so that I can maybe get a little bit shorter on my request? So, right now, I ask it all kinds of things, but I ask it in very compartmentalized ways to get the results I want.
Jim Marous (13:56):
But does it learn with regard to me over time, or not really with regard to me but with regard to the universe, or both?
Brian Roemmele (14:05):
That's a great question. Yeah, yeah, both. Well, Jim, I think there are a couple of ways to look at this. Within every AI large language model system, there's something called the context window.
Brian Roemmele (14:17):
The context window can be, at this point, there's one AI system that's 1 million tokens, which is probably equivalent to 800,000 words. It's a very, very large context window. In fact, The Great Gatsby was put into an AI model called Claude, the Claude 100K, which has 100,000 tokens in its context window.
Brian Roemmele (14:42):
And it was asked to write the next chapter of The Great Gatsby. And it wrote it. I tested it constantly, because The Great Gatsby is small enough and there's enough space in memory that you can have it give you a sort of creative output based on the characters and its understanding of the storyline.
Brian Roemmele (15:02):
So, it's really interesting in that creative sense. And again, I call this creative and we can go down that path on what creativity is, what consciousness is, what intelligence is. All these things are going to have to be redefined or defined more accurately.
Brian Roemmele (15:16):
But in the case of us just prompting ChatGPT, the context window is about 5,000 to 8,000 characters, depending on how it's being used. So, within that context window, that's as much memory as you have before amnesia starts taking place, and it starts forgetting the original elements of the prompt.
Brian Roemmele (15:37):
And this is why we advocate super prompting at promptengineer.university.
Brian Roemmele (15:44):
We train people how to super prompt to become very powerful, not only to maybe save your job and to make your job 10X more valuable, because you're now standing on the shoulders of somebody stronger, but also maybe even to become a prompt engineer out and about in your career. And the people qualified for that are the least likely candidates, as I said before.
Brian Roemmele (16:09):
So, in the case of that context window, once you get past that limit, it's going to have a vague, hazy recognition that you asked the question before, and then it just kind of stops.
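To make the context-window "amnesia" Brian describes concrete, here is a minimal sketch in Python. It is an illustration only, not any vendor's actual behavior or API: the rough estimate of four characters per token and the 8,000-token budget are assumptions chosen for the example.

```python
# Minimal sketch of context-window "amnesia": once the running conversation
# exceeds a fixed token budget, the oldest turns fall out and are "forgotten".
# The 4-characters-per-token estimate and the budget are illustrative assumptions.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about four characters per token."""
    return max(1, len(text) // 4)

def trim_to_window(turns: list[str], budget_tokens: int = 8000) -> list[str]:
    """Keep the most recent turns that fit in the budget; earlier ones drop out."""
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):          # walk backwards from the newest turn
        cost = estimate_tokens(turn)
        if used + cost > budget_tokens:
            break                          # everything older than this is "forgotten"
        kept.append(turn)
        used += cost
    return list(reversed(kept))            # restore chronological order

conversation = ["System: You are a banking assistant."] + [f"User turn {i}" for i in range(10_000)]
window = trim_to_window(conversation)
print(len(window), "turns survive; the original instructions may no longer be among them")
```

This is why the original elements of a long prompt eventually stop influencing the answers: they simply no longer fit inside the window being sent to the model.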
Brian Roemmele (16:21):
So, that's today. What we do with local AI, and we do this with an open-source free product called GPT4All, is we create a local vector database, and we feed our questions and answers back into the vector database. So, it remembers that you actually asked that question before, and it remembers the context of that question.
Brian Roemmele (16:47):
So, slowly but surely, it remembers who you are. And over time, if you feed all of your email, all your communications, all the podcasts you ever did, text, speech-to-text, it will have a really good idea about what you think about the world and how you may answer a question. And this is part of why intelligence amplification's so powerful.
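As an illustration of the local "memory" loop Brian is describing (this is not the GPT4All API itself), here is a minimal in-memory sketch: past question-answer pairs are embedded, stored, and the most similar ones are retrieved and prepended to the next prompt. The hashing-based embedding is a toy stand-in; a real setup would use a proper embedding model and a persistent vector database.

```python
# Toy sketch of a local "vector memory": store past Q&A pairs, retrieve the most
# similar ones, and prepend them to the next prompt so the system "remembers" you.
# The bag-of-words hashing embedding is a stand-in for a real embedding model.
import hashlib
import math

DIM = 256

def embed(text: str) -> list[float]:
    """Toy embedding: hash each word into a fixed-size vector, then normalize."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

memory: list[tuple[list[float], str]] = []   # (embedding, "Q: ... A: ...") pairs

def remember(question: str, answer: str) -> None:
    memory.append((embed(question), f"Q: {question}\nA: {answer}"))

def build_prompt(question: str, k: int = 3) -> str:
    """Prepend the k most similar past exchanges to the new question."""
    q_vec = embed(question)
    ranked = sorted(memory, key=lambda item: cosine(q_vec, item[0]), reverse=True)
    context = "\n\n".join(text for _, text in ranked[:k])
    return f"{context}\n\nQ: {question}\nA:"

remember("What is my card's interest rate?", "Your card's APR is 21.9%.")
print(build_prompt("Can you remind me about my card's rate?"))
```

The retrieved context goes into the model's window on every call, which is how a stateless local model can appear to remember who you are over time.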
Brian Roemmele (17:16):
So, the final part of this is what happens to your question when it goes up to these models. It's very nebulous. You do sign off that they will train their model based on your question-answer pairs. This is called fine-tuning.
Brian Roemmele (17:35):
So, once you envelop the entire corpus of data that these models have taken ... Llama is another model, by Facebook/Meta, and of course there are OpenAI's models, and then you have Google's models.
Brian Roemmele (17:51):
Their particular training was taking essentially everything that was on the internet and then fine tuning it on the question-answer pairs that are found on Twitter, that are found on Facebook, that are found on Reddit.
Brian Roemmele (18:05):
Which is why you're seeing these companies closing their walls down, cutting off access from large companies that want to understand that.
Brian Roemmele (18:12):
Now, why is that important? Because just having the corpus of information is not enough. How humans interact with that data has to be taught through fine-tuning. So, it's not built into the models.
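To illustrate what fine-tuning on question-answer pairs can look like in practice, here is a minimal sketch that writes a company's Q&A pairs out as JSONL, a common dataset format for fine-tuning pipelines. The field names and example pairs are assumptions for illustration, not any specific vendor's schema or Brian's actual process.

```python
# Minimal sketch: turning a company's question-answer pairs into a JSONL file,
# the kind of dataset a fine-tuning pipeline typically consumes.
# Field names and example content are illustrative assumptions.
import json

qa_pairs = [
    {"question": "What is our overdraft policy for small-business accounts?",
     "answer": "Overdrafts up to $500 are covered automatically; larger ones need branch approval."},
    {"question": "How do customers dispute a card transaction?",
     "answer": "They can file a dispute in the mobile app or by calling support within 60 days."},
]

with open("fine_tune_dataset.jsonl", "w", encoding="utf-8") as f:
    for pair in qa_pairs:
        # One JSON object per line: the prompt and the desired completion.
        f.write(json.dumps({"prompt": pair["question"], "completion": pair["answer"]}) + "\n")

print(f"Wrote {len(qa_pairs)} question-answer pairs to fine_tune_dataset.jsonl")
```

The point of the format is exactly what Brian describes: the base corpus teaches the model what exists, while curated question-answer pairs teach it how people actually interact with that data.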
Brian Roemmele (18:26):
And fine tuning is where we spend a lot of time with our corporate clients. We will sit down and say, "How much fine tuning do you want?"
Brian Roemmele (18:34):
Like, we have an insurance client where we took all of their data. Everything that was ever generated by that company was digitized at one point, and we actually extended that digitization. And that's part of the training that we're doing on a model.
Brian Roemmele (18:49):
Now, that model is not baked yet, meaning it's not fully trained on GPUs; it's on a vector database. But even there, they're able to ask questions about the company that no single person or even group could have answered, because it now knows everything about that company.
Brian Roemmele (19:07):
Needless to say, I think anybody listening to me realizes that that model should never be on a cloud anywhere. It has to be cut off from the world, because anybody who hacked that could severely jeopardize that company.
Brian Roemmele (19:20):
And so, I'm not here to scare people, but this is the direction it's going in. And so, we're at a fork in the road. We've reached, I believe, peak cloud for AI, just before we start realizing how valuable our data is.
Brian Roemmele (19:35):
And so, when you're prompting AI and you're trying to answer those questions and you're trying to constrain the domain of information to a specific point, that's an art as much as it is a science.
Brian Roemmele (19:50):
And a lot of times we have to create a persona. So, we have to create a persona or a motif. This is part of super prompting.
Brian Roemmele (19:58):
The persona shapes the way that question's going to be dealt with. I like using a university professor persona, and I like creating a motif where you have to make a presentation to the UN about this discovery.
Brian Roemmele (20:12):
So, one might say, "What are you doing here? That sounds like storytelling." I say, test it out. When you test it out, you start realizing exactly what it does.
Brian Roemmele (20:23):
It forces the large language model to pick a neuron passageway through its neuronal connections that are much more constrained, much more laser targeted, and the elucidations are near the genius level.
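To make the persona-and-motif idea concrete, here is a minimal sketch of a "super prompt" template along the lines Brian describes. The wording and structure are illustrative assumptions, not the actual templates taught at promptengineer.university.

```python
# Minimal sketch of a persona-plus-motif "super prompt" template.
# The exact wording is an illustrative assumption, not a prescribed formula.

SUPER_PROMPT = """You are a university professor of consumer finance.
Motif: you must present this discovery to the United Nations, so be rigorous,
structured, and precise about evidence and limitations.

Task: {task}

Constraints:
- State the assumptions you are making.
- Keep the answer under {word_limit} words.
- End with three follow-up questions the audience might ask.
"""

def build_super_prompt(task: str, word_limit: int = 400) -> str:
    """Wrap a plain question in a persona and motif before sending it to a model."""
    return SUPER_PROMPT.format(task=task, word_limit=word_limit)

print(build_super_prompt("Explain how rising interest rates affect a household's credit card debt."))
```

The persona and motif constrain the model toward a narrower, more expert style of answer than the same question asked plainly, which is the effect Brian is pointing to.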
Brian Roemmele (20:41):
Whereas if you ask a simple question, you're going to get a simple answer. And a lot of people get mad, and I say it's user operator error. So, that's kind of where we're coming from.
Jim Marous (20:54):
So, from the perspective of gathering information and learning over time, is this where ChatGPT and AI in this sense can really impact customer experiences?
Jim Marous (21:05):
In other words, let's say it was a financial institution and we want to build a database on an individual customer level on what they've asked, what the answers were, the dynamics of that relationship.
Jim Marous (21:16):
Is the future use of ChatGPT and generative AI one where you can actually have individual customer communications, conversations that are retained and built on over time?
Brian Roemmele (21:27):
Jim, that is a brilliant question. Absolutely. In fact, I cannot imagine a future where that does not take place. And again, it should all be done in the proper way, with permissions.
Brian Roemmele (21:39):
But think of the bespoke way that a financial organization could interact with their clients by enveloping all of the customer-touching experiences, and maybe real-world experiences, that they might garner from that customer.
Brian Roemmele (21:55):
Again, I would tread carefully, but people have a public persona. I would say that if you do it with care and you do it with dignity and permission by offering a value to the client, by understanding more of the dimension of the milestones in life that that client is going through, your ability to finely tether an output is phenomenal.
Brian Roemmele (22:21):
And I believe that that's where we're going. And I believe the customer will find the value just by having an interaction with their financial GPT.
Jim Marous (22:32):
Yeah. And because if you look at what frustrates customers right now, it's having to reinform the financial institution, or the airline, or whatever it may be about what happened in the past because it's not easily accessible in today's world.
Jim Marous (22:47):
But if I'm communicating about a challenge, a challenge I've had over time that's just taking on different flavors, ChatGPT understanding this journey makes it so that the results are much more valuable.
Jim Marous (23:01):
And to your point, the value transfer makes it so there's less concern about privacy and security. Not that it doesn't matter, it still matters, but the concern level goes down because the value proposition has gone up.
Brian Roemmele (23:16):
Absolutely. And I would say every client is going to be minimum 10X more valuable to an organization when AI is being utilized correctly. And I'm not just speaking of things that we know, but the things that we don't know.
Brian Roemmele (23:36):
I would imagine, let's look at it from this point of view. Let's imagine that within a corporation, every client has their own AI model, that is distinctly their own. And that we use a grand corporate AI to pull those models to try to find situations where the company can offer much higher value.
Brian Roemmele (24:00):
My results in doing this, and we've been doing this quite a while, we've probably been doing it longer than anybody. And we do it as a crack team. We go in there, we establish models within corporations. We don't have business cards that even say that we do this.
Brian Roemmele (24:17):
I mean, it's all on recommendation. It's all on referral. I don't advertise it, but our schedule is packed with it.
Brian Roemmele (24:24):
And we go in there and we just basically look at what they've been doing within the cloud. We take every interaction from customer service that they have in the cloud (and we know that the typical cloud providers that are out there) and we put it in these models. And within hours, we're getting insights that nobody has seen.
Brian Roemmele (24:42):
I think before we started recording here, we were talking about some $14 million at one company that they just discovered today from the AI model that we kind of threw together in six days.
Brian Roemmele (24:56):
It's been six days of mostly taking disparate pieces of data, throwing them into a local vector database, running an open-source local model, and just quizzing it.
Brian Roemmele (25:09):
And I can't even predict what kind of value we are going to see if every customer had a representation of AI that sits there.
Brian Roemmele (25:21):
And again, doing it the right way. The wrong way is kind of the way that technology's been used thus far: the opaque algorithm that Google uses, or that Netflix uses, or Amazon. We're past those days.
Brian Roemmele (25:38):
I think as people realize the power and the potential dangers of AI, the more transparent and the more inclusive you are with that client, "Hey, this is your AI, we're building it for you. This is going to know everything that you could possibly want to know about your financial profile and maybe just your phase in life profile intermixed with that."
Brian Roemmele (26:04):
And the companies getting the client on board with that are going to be the trailblazing companies. And I don't care how old the company is; it's whether or not they take this mission. And frankly, it's a tough mission.
Brian Roemmele (26:18):
I've sat down with some companies, and it took months of delays because of internal debates of, "This is proprietary. We don't want customers to know how we're using their data."
Brian Roemmele (26:30):
And I'm like, "They're going to know at some point. So, let's just open the windows, turn on the lights, let them see it, let them have access to it, and if they don't want it, turn it off. And then they just have a human operator that they interact with."
Brian Roemmele (26:48):
"And do a comparison, Mr. Client and Mrs. Client, and tell me whether you enjoy this relationship with this AI tool available or not available."
Jim Marous (26:59):
So, given that, is there a time when ChatGPT and generative AI can actually then prompt questions back to the customer?
Jim Marous (27:08):
So, let's say I'm calling up about a problem and it involves maybe credit card payments. Could ChatGPT or generative AI then ask me, "Do you have balances in other finances?"
Jim Marous (27:22):
Or ask questions that can then make it so that the proposed solution is a more overarching, better answer than one based just on the data that is currently under the roof of the financial institution?
Brian Roemmele (27:37):
Absolutely, Jim. This is going to be a phenomenal aspect of it. The interactivity of building a model that is really concisely understanding where that person is. There's no two customers that are exactly alike.
Brian Roemmele (27:52):
We've only done statistics in business because we had crude tools. The laser wasn't invented yet, so we're using this big floodlight and everything's going to look the same.
Brian Roemmele (28:04):
Now, we have this laser that will be able to finely tune to the individual's questions, and so it will prompt the individual based on activities. I mean, how it manifests is up to the client.
Brian Roemmele (28:18):
I mean, I have one client, an insurance company we're working with, that is similar to what we're talking about. And they are proposing using voice-enabled dial-out systems to call the client, or to text the client, when they see something that they think is valuable for the individual.
Brian Roemmele (28:39):
And not as an advertisement, but as a dialogue. So, advertising is over, dialogues are the future. And those dialogues, if they're meaningful, if they're indistinguishable from a really ...
Brian Roemmele (28:52):
And again, the dialogues that we need to create in this AI can't be corporate speak, it needs to be much closer. And the way we can get away with that in a corporate setting is that this is your personal AI.
Brian Roemmele (29:09):
So, it's going to be more like something on your shoulder saying, "I think it's a good time that you consider dropping this particular card because it has a high interest rate and we can cut your payments down by $250 a month if we move your funds across this as a refinance." Things like that. Those kind of things.
Brian Roemmele (29:32):
We know this, Jim, we know people don't like talking about medical problems and financial problems with anybody. It's like the hardest thing. Even their doctor's like, "Yeah, I got this." And they're like, "Well, no.” And definitely medical, financial problems.
Brian Roemmele (29:48):
And here's what we do know; the tests are already 100% clear. People are more willing to disclose psychological issues and medical issues to an AI system that is dialoguing and interacting with them than to any human being, to a high percentile. It's like the 78th percentile. And this has been done in three studies now, across different universities.
Brian Roemmele (30:13):
I'm working with a university right now that is doing it with financial types of things. I'm expecting that to be higher. I'm expecting it to be in the 80th percentile.
Jim Marous (30:26):
I'm thinking about that and going, again, if you continually ask the consumer or the small business, whatever it may be, "Are you okay sharing this?" we only will use as much as they share.
Jim Marous (30:36):
But the reality is, if you build more and more trust ... I mean, it's kind of like the trust I have with Amazon, and everybody has with Amazon. We pay Amazon every year to use our data to our benefit and make our buying decisions easier.
Jim Marous (30:50):
Well, it's the same case in financial services or any industry. If it learns over time and it asks me questions that make it perform better, eventually I'm going to come to the realization that I want my financial institution or my generative AI to understand that I have deposit accounts elsewhere, I have credit accounts elsewhere, and where my internal challenges are emotionally, maybe with the way the market's performing.
Jim Marous (31:16):
All these things that can combine both universal perspectives, but also, individual perspectives.
Jim Marous (31:22):
And what's interesting about that is, again, you keep on emphasizing in every single sentence, as long as it's used correctly, as long as it's built correctly.
Jim Marous (31:32):
So, with that in mind, what steps are being taken to address consumer concerns regarding data privacy and protection, and leveraging AI to avoid biases in decision making?
Brian Roemmele (31:48):
Great question. I would say that it's a double-edged sword. I think what happens, when it comes from a regulatory standpoint, is that we have overly broad, potentially damaging regulation that could make the United States, or any other country that is subscribing to overregulation, fully incapable of competing on a grand scale with other countries that are taking a more decidedly metered approach.
Brian Roemmele (32:23):
How can we make this better? I think financial institutions could lead in this. I believe that if they lead with open, transparent AI usage, they will become the gold standard, which they should be, in how this technology can be deployed. And not just in financial, but in every other aspect.
Brian Roemmele (32:45):
And I believe that industry, when doing it the right way, actually does a better job than regulators. And the problem is there's a conservatism that comes from the financial industry that we know about, right, Jim? And we're always challenged with it.
Brian Roemmele (33:03):
When I was doing a lot of consulting in banking and payments, I was saying, "Apple Pay is coming, guys," three years before it came. Nobody would listen. I said, "This is your dog in the race. You can actually lead by shaping it the way you want." And the conservatism held it back.
Brian Roemmele (33:21):
This is another opportunity. There is absolutely no doubt that AI is going to become simpatico with individuals in making their financial decisions.
Brian Roemmele (33:30):
And I'm not saying this is going to bypass personal AI that's going to be making financial decisions. I'm saying there's a good chance that you're going to have multiple AIs that you're going to interact with, and maybe just your AI will interact with somebody else's AI.
Brian Roemmele (33:45):
That's not an entirely difficult interaction. Even if they're just talking to each other over the internet or by a phone call, if you will.
Brian Roemmele (33:55):
But if a financial institution could say, "Okay, we're going to embrace artificial intelligence to the betterment of our clients. And here is our declaration, here's how we're going to use the data. The data is your data. It is not our data. You can take your data back at any moment, at any time, and we don't have your data any longer."
Brian Roemmele (34:21):
Unfortunately, some companies may not like that. I don't think there's any other way because that's where the regulation's going to go anyway.
Brian Roemmele (34:29):
So, take the higher ground, let people have control and ownership of their data, but give it to them in a way that is so valuable, so delightful to interact with like any other experience. Make it a delightful experience and so valuable that they would never want to leave you.
Brian Roemmele (34:48):
So, instead of using the stick, keep the carrot and always just use the carrot. And not only will that forestall draconian regulation, it would allow that company to be a beaming leader in an industry that's considered maybe a laggard in technology.
Brian Roemmele (35:06):
But where else can it be applied better than in the financial realm, where a lot of people have their finances in, what I would say, a very disordered fashion? They're all over the place. Even for the most ordered person, the studies over the years show, and I'm sure you know this from being in it so long, that people's finances are all over the place and there needs to be a consolidation.
Brian Roemmele (35:33):
The idea of QuickBooks initially was to do that. And it did it to a certain level, but it reached its zenith and we've never gotten past it.
Jim Marous (35:44):
So, do you see the future of ChatGPT and generative AI with regard to customer experience being something that becomes an evolving, let's call it brochureware or content, that becomes very specific to that individual? Where, as the learning process goes on, it will point you in a direction that is best for you, from more of a consultative perspective?
Brian Roemmele (36:11):
Absolutely. And although it will not always be maximized for the highest profit to be garnered out of each individual, you're best to let this device, this software, this system normalize the relationship.
Brian Roemmele (36:27):
Because if you are doing everything right, the value you're offering will make that person never, ever want to not have this capability because it's an investment.
Brian Roemmele (36:41):
The investment in giving data to an AI platform is phenomenal because over time, it's the stickiest thing you'll ever see. There's nothing else.
Brian Roemmele (36:52):
So, what happens, net over time, and I can show this with some of my research, statistically, is that if you do right by that client, they're going to make much more money. The company is going to make much more money, and their cost to maintain a customer is going to go through the floor.
Brian Roemmele (37:14):
Because they're not going to need to acquire as many new customers as ferociously as they do today, because it now becomes, to just a percentage point, a different struggle with the top players. And this could be major double-digit percentages in capability.
Jim Marous (37:35):
So, with that in mind, and we're still so early in this whole process of evolution, are you familiar with any notable success stories of banks or credit unions using generative AI and ChatGPT to improve maybe the customer experience, the engagement level, or even innovation?
Brian Roemmele (37:56):
Jim, this is wonderful, and it's kind of depressing for me, because a lot of companies have barred the use of AI within the company, specifically ChatGPT, for the valid reasons we talked about earlier: are you giving out private information, what are the legal limits, things of that nature.
Brian Roemmele (38:18):
That, unfortunately, throws the baby out with the bathwater. There are a lot of executives that contact me directly, and they can't quite get the C level to open up the prospect of using AI outside of known vendors.
Brian Roemmele (38:36):
The known vendors, the cloud providers, are taking their time utilizing AI. And of course, they are proposing AI from their perspective as a customer service tool, maybe as a way to replace employees.
Brian Roemmele (38:50):
And I think that's the most foolish thing. AI should never be used to replace a single employee. AI should be used to empower employees to be 10 times, or at the very minimum seven X, more powerful.
Brian Roemmele (39:04):
It's like a lever. The more you give this person the ability to have the leverage of standing on the shoulders of AI, the more powerful that individual becomes within the organization.
Brian Roemmele (39:15):
And it's backward thinking that it's all about the cost the companies are experiencing. That's very, very short term. Because if you maintain that person, and if you train them ...
Brian Roemmele (39:26):
And that's what we do at promptengineer.university, is we train people to be empowered so that they can actually go out into the world and go to their managers, go to their executives or executives go to other executives within their organizations and say, "Look what I discovered."
Brian Roemmele (39:43):
And this is how I equate it, and we're old enough to remember some of this. The Apple II became popular for one primary reason: it was a thing called a spreadsheet. And later on, it became Lotus. But we had Multiplan and we had all these different sorts of spreadsheets.
Brian Roemmele (39:59):
Now, the very first spreadsheet brought into companies was a guy hauling an Apple under one arm and a monitor under the other arm, and maybe a software box on his head, doing his spreadsheet work in the company and taking his computer back home.
Brian Roemmele (40:17):
The data processing departments of major corporations back when the Apple was taking off, were absolutely rejecting the use of personal computers in the company. Everything had to go through the mainframe.
Brian Roemmele (40:29):
And they saw the spreadsheet as a joke. They said, "Why would you want a spreadsheet? We'll do a job run on our COBOL system and we'll get you back in about six days." Whereas a guy or a gal could play with numbers and see the differences.
Brian Roemmele (40:45):
That was self-empowerment: bring your own computer to the office. That first decade of the personal computer was about the personal empowerment of that computer within the job.
Brian Roemmele (40:57):
And luckily, a lot of very wise companies said, "Oh, what the heck, let Joe or Lisa bring their computer in. As long as we're not paying for it, they can play with their spreadsheets." That fundamentally changed every single corporation in the world.
Brian Roemmele (41:12):
This is a thousand times more powerful than the spreadsheet. And did we need to fire people who are accountants when the spreadsheet came? Did an executive say, "Oh, I read an article that spreadsheets are going to make accountants redundant. Let's fire them all."
Brian Roemmele (41:31):
It's just like the same thing of, "Oh, I read an article in the Wall Street Journal that ChatGPT is going to cause everybody to lose their job. Let's start firing." This is a knee-jerk reaction.
Brian Roemmele (41:41):
The positive way to do this, and the way your stock is going to take off, is when the world realizes you not only did not fire anybody, you hired more people, and they now are an army empowered with the corporate data, the AI, and their ability to know how to use it in a safe, effective manner that is not dangerous to anybody.
Brian Roemmele (42:03):
That story being put into the public markets, I'm not going to make a promise, but I know it is going to raise the price of the stock.
Brian Roemmele (42:13):
Whereas for the story that "I temporarily cut 5,000 jobs because AI replaced them," good luck with that, because one of your competitors is going to get the other story we just talked about, the one about empowering people.
Brian Roemmele (42:24):
So, AI is a moment of human empowerment if used correctly. And that's one of my missions when I go into a company. In fact, one of the agreements that most companies make with me is that they do not fire a single person because of what we've been training on AI.
Brian Roemmele (42:44):
And they make that commitment. It's not legally binding. But at this point, in dozens and dozens of companies, they've all hired people to utilize AI and to train their staff even better, beyond our training.
[Music Playing]
Jim Marous (42:59):
Brian, thank you very much for part one of our interview around ChatGPT, generative AI, and AI in general. Be sure to catch the second part of this interview on the next Banking Transformed podcast.
Jim Marous (43:16):
Thanks for listening to Banking Transformed, the winner of three international awards for podcast excellence.
Jim Marous (43:20):
If you enjoyed today's interview, please take some time to give our show a positive review. Also, be sure to catch my recent articles on The Financial Brand and check out the research we're doing for the Digital Banking Report.
Jim Marous (43:31):
This has been a production of Evergreen Podcasts. A special thank you to our senior producer, Leah Haslage; audio engineer, Chris Fafalios, and video producer, Will Pritts.
Jim Marous (43:41):
I'm your host, Jim Marous. Until next time, remember, understanding the potential of generative AI is a key to redefining what's possible in the future.