Wednesday, November 8, 2023

Podcast: Approaching AI with a plan

Financial institutions are investing in AI and, as they do, they must consider application, talent and regulation.  

Card-issuing fintech Mission Lane has created an internal framework to help implement new technologies, including AI, head of engineering and technology Mike Lempner tells Bank Automation News on this episode of “The Buzz” podcast.

Mission Lane has a four-step framework for approaching new technology, he said:

1. Identify the opportunity and use cases;
2. Understand the risks;
3. Establish a governance model; and
4. Invest in training, then start small and iterate.

Listen as Lempner discusses AI uses at the fintech, monitoring risk and maintaining compliance when implementing new technology throughout a financial institution.  

The following is a transcript generated by AI technology that has been lightly edited but still contains errors.

Whitney McDonald 0:02
Hello and welcome to The Buzz, a Bank Automation News podcast. My name is Whitney McDonald and I’m the editor of Bank Automation News. Today is November 7, 2023. Joining me is Mike Lempner, head of engineering and technology at fintech Mission Lane. He’s here to discuss how to use the right type of AI in underwriting, how to identify innovation and use cases for AI, and how to approach the technology with compliance at the forefront. He worked as a consultant before moving into the fintech world and has been with Mission Lane for about five years.

Mike Lempner 0:32
I’m Mike Lempner, head of engineering and technology at Mission Lane. I’ve been leading our technology group and engineers to help build technology solutions that support our customers and enable the growth of Mission Lane. I’ve been in that role for about five years. Mission Lane was spun off from another fintech startup, and I was with that company for about a year as an employee, and before that as a consultant. Prior to that, I spent about 28 years in consulting for a variety of Fortune 500 companies and startups, mostly in the financial services space.

Whitney McDonald 1:09
Maybe you could walk us through Mission Lane and give us a little background on what you do.

Mike Lempner 1:16
Mission Lane is a fintech that provides credit products to customers who are typically denied access to financial services, largely due to minimal or poor credit history. Our core product right now is a credit card that we offer to those customers.

Whitney McDonald 1:39
Well, thank you again for being here. With everything going on in the industry right now, we’re going to be talking about a topic you just can’t seem to get away from: AI, and more specifically AI regulation. Let’s set the scene here. First, I’d like to pass it over to you, Mike, to lay out where AI regulation stands today and why this is an important conversation to have.

Mike Lempner 2:08
Yeah, sounds good. As you mentioned, Whitney, AI has really been all of the conversation for about the past year, since ChatGPT and others came out with their capabilities. As a result, regulators are looking at that and trying to figure out: how do we catch up? How do we feel good about what it does and what it provides? Does it change anything we do today? For the most part, regulations really do stand the test of time, regardless of technology and data. But there’s always the lens of: given where we are today with technology, has anything changed? Where are we in terms of data sources and what we’re using to make decisions from a financial services standpoint, and is that creating any concerns?

You’ve got different regulators looking at it in different ways: some from a consumer protection standpoint, others from the soundness of the banking industry, others from an antitrust standpoint. Privacy is another big aspect of it, as well as homeland security. So there are different regulators looking at it from different angles, trying to understand it and stay as far ahead of it as they possibly can. A lot of the time they’re looking at existing regulations and asking whether adjustments need to be made. An example of that: the CFPB recently provided some comments and feedback related to adverse action notices and how those are generated in light of artificial intelligence, new modeling capabilities and new data capabilities. So there are some specific things, but in many ways it doesn’t change the core regulatory need. I do expect there’s going to be some fine-tuning or adjustments made to the regulations to put more protections in place.

Whitney McDonald 4:10
Now, for this next question: you gave the example of looking at existing regulation and keeping all the different regulatory bodies in mind. Beyond what already exists in the space, how else might financial institutions prepare for new AI regulation? What could that preparation look like, and what are you hearing from your partners on that front?

Mike Lempner 4:33
Yeah, I think it’s not just specific to AI regulations. It’s really all regulations, and just looking at the landscape of what’s happening and where we are. The one thing we know for sure is that regulation changes will always happen; they’re just a part of doing business in financial services, and that need is not going away. There are different privacy laws being put into place, in some cases by different states. There are other things, as I mentioned with AI’s emergence and growth: how do regulators get comfortable with that as well?

In terms of preparing, just as you would with any regulatory activity, it’s important to have the right people within the organization involved. For us, that’s typically our legal and risk teams, who work both internally and with external counsel to help us understand the current regulatory ideas being considered and how they might impact us as a business. Then, as things materialize over time, we work to better understand the regulation, what it means for us, and what we need to do to support it. The biggest part of it is getting the right people in the organization to stay on top of what’s currently happening and what might happen in the future, leveraging external resources that may have expertise in the area, and staying on top of it so that you’re not surprised and merely reacting to the situation.

Whitney McDonald 6:14
Now, as AI regulation does start coming down the pipeline, there’s definitely not been a waiting period when it comes to investing in AI, implementing AI and innovating within AI. Maybe you can talk us through how you’re navigating all of those while keeping compliance in mind, ahead of further regulation that does come down.

Mike Lempner 6:39
Absolutely. For us, AI is a really broad area. It represents generative AI like ChatGPT, and it also involves machine learning and other statistical algorithms. We operate in a space where we’re taking on risk by giving people credit cards and credit, so a core part of what we do, the underwriting of credit, is challenging and involves risk. It’s important for us to have really good models that help us understand that risk and who we want to give credit to. Ever since we got started, we’ve been using AI and machine learning quite a bit in our models.

One of the important things for us, given that we have many models supporting our business (some are credit underwriting models, some are fraud models, and there are dozens of others), is making sure we’re applying the right AI technology to meet the business need while also taking regulation into account. For credit underwriting, it’s super important to be able to explain the outcomes of a given model to regulators. If you’re using something like generative AI or ChatGPT, accuracy is not 100% and there’s the concept of hallucinations. And while hallucinations might have been cool for a small group of people in the ’60s, they’re not very cool when you’re talking to regulators and trying to explain why you made a financial decision to give somebody a credit card or not. So it’s really important that we use the right type of AI and machine learning models for our credit underwriting decisions, so that we have explainability and are very precise about the outcome we’re expecting. In other areas, such as marketing models or, as I mentioned, fraud or payments models that support our business, we might be able to use more advanced modeling techniques.
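The explainability requirement Lempner describes can be illustrated with a scorecard-style model, where every decision is a sum of named, weighted inputs and the largest negative contributions become adverse-action reason codes. This is a hypothetical sketch for illustration only; the feature names, weights and thresholds are invented and are not Mission Lane’s actual underwriting system.

```python
# Illustrative scorecard: every point of the score traces to a named input,
# so the most negative contributions can be reported as reason codes.
WEIGHTS = {
    "on_time_payment_rate": 40.0,   # fraction of payments made on time
    "credit_utilization": -25.0,    # balance / limit (high is risky)
    "months_of_history": 0.5,       # length of credit history
    "recent_inquiries": -5.0,       # hard pulls in the last 6 months
}
BASE_SCORE = 600
APPROVE_THRESHOLD = 640

def score_applicant(features: dict) -> tuple[float, list[str]]:
    """Return (score, reason codes for the two largest negative contributions)."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = BASE_SCORE + sum(contributions.values())
    # Reason codes: most negative contributions first, positives excluded.
    reasons = [name for name, c in
               sorted(contributions.items(), key=lambda kv: kv[1]) if c < 0][:2]
    return score, reasons

applicant = {
    "on_time_payment_rate": 0.7,
    "credit_utilization": 0.9,
    "months_of_history": 24,
    "recent_inquiries": 4,
}
score, reasons = score_applicant(applicant)
decision = "approve" if score >= APPROVE_THRESHOLD else "decline"
# score 597.5 -> "decline", reasons: credit_utilization, recent_inquiries
```

Unlike a generative model, this kind of structure is deterministic and auditable: a regulator can trace exactly which inputs drove a decline, which is what an adverse action notice requires.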

Whitney McDonald 8:57
Those are great examples, and I like what you said about explainability. That’s huge, and it comes up over and over again when it comes to maintaining compliance while using AI. You can have AI in so many different areas of an institution, but you need to be able to explain the decisions it’s making, especially with what you’re doing in credit decisioning. Moving on to something you already touched on, maybe we can get into it a little further: prepping your team for AI investment and implementation. I know you mentioned having the right teams in place. How can financial institutions look at what you’ve done and take away a best practice for prepping a team? What do you need to have in place, and how do you change the culture as the AI ball keeps rolling?

Mike Lempner 9:52
Yeah, I think for us it’s similar to what we do for any new or emerging technology in general. We have an overall framework or process. The first step is to identify the opportunity and the use cases: really understanding the business outcomes we’re after and how we can apply technology like AI, or additional data sources, to solve a particular business challenge. That gives us an inventory of all the places we could use it.

The second step is understanding the risks. As I mentioned, credit risk is one thing, and we may want a certain approach there, whereas marketing, fraud or other activities may have a slightly different risk profile. Even within generative AI, internal use cases, like engineers using it to help write code, or automating functions in customer service operations, might be lower risk for us.

Third is having a governance model. That’s a combination of having a cross-functional team, including legal, risk and other members of the leadership team, who can look at a plan for the technology and ask: do we all feel comfortable moving forward? Do we fully understand the risk? Are we looking at it holistically? We also already have model governance that identifies what models we have in place, what types of technology we use, whether we feel good about that, and what other controls we need. A good governance framework is a key piece of it.

Investing in training is another key thing to do, particularly with emerging generative AI capabilities, which are fast evolving. It’s really important to make sure people aren’t just enamored by the technology, but really understand it, how it works and its implications. There’s a difference between using a public-facing tool like ChatGPT, where you’re providing it data, and using internal AI platforms with our internal data for more proprietary purposes, and having people understand those differences is important.

Lastly, the other key thing from an overall approach standpoint is to iterate and start small, getting experience in the low-risk areas. For us, we’ve identified a number of areas where we’ve already built out solutions around customer service. And in engineering, as I mentioned, you can use some of the tools to help write code. It may not be the finished product, but it’s at least a first draft, so you’re not starting with a blank sheet of paper.

Whitney McDonald 13:09
Yeah, and thank you for breaking out those lower-risk use cases that you can put into action today. I think we’ve seen a lot of examples lately of AI in action, AI that can be launched, used and leveraged today. Speaking of a more future-focused look: generative AI was one thing you mentioned, but even beyond that, I’d love to get your perspective on potential future use cases you’re excited about within AI, and where regulation is headed. However you want to take that question of what’s coming for AI, whether in the near term or the long term.

Mike Lempner 13:53
Yeah, I think it’s a very exciting time and an insanely exciting space. To me, it’s remarkable: a year ago you could put in text, audio or video, interact and get interesting content that could make you more productive, whether for personal searches or anything else. Now it’s available more internally for different organizations. Even what we’ve seen internally: using the technology six months ago may have involved eight steps and a lot of what I’ll call data wrangling to get the data in the right format and feed it in, and now it might involve four steps. You can much more easily integrate data and get to the outcomes, so it’s become a lot simpler to implement. I think that’s the future: it will continue to get much easier for people to apply it to a variety of use cases.

I think different vendors will start to recognize patterns, like a call center use case that always occurs. One example I always think of: I can’t think of a time in the past 10-plus years where you called customer service, got transferred to an agent, and they didn’t say, “This call may be recorded for quality assurance purposes.” Quality assurance of a phone call usually involves people manually listening to it, taking notes and filling out a scorecard. Now, with AI capabilities, that can all be done in a much more automated way. There are lots of use cases like that, patterns where I’m guessing vendors will put that type of solution out there and make it very easy for people to consume, almost like the AWS approach, where things AWS did internally are now exposed as services that other companies can plug into very easily. That’s an example of where I think the technology is headed; you’ll start to see some point solutions emerge in that space.

From a regulatory standpoint, I think it’s going to be interesting. Similar to death and taxes, regulation is always going to be there, particularly in financial services, to do the things we talked about before: protecting customers, protecting the banking system, protecting the different areas that are important. That’s a certainty. There are likely to be changes as a result of the technology and the data that’s available. I don’t see drastic changes to the regulations, but more looking back at existing regulations and asking: given the new technology and the new data sets that exist, are there things we need to change to make sure they’re still controlling for the right things?
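The call-QA pattern Lempner describes, replacing manual listen-and-score review with automated scoring of call transcripts, can be sketched as a simple rule-based checklist. This is a toy illustration with invented phrases and scoring; a production system would layer speech-to-text and a trained classifier on top of something like this.

```python
# Toy QA scorecard for a call transcript: checks that required phrases
# appear and flags prohibited ones, mimicking a manual QA checklist.
REQUIRED = {
    "greeting": ["thank you for calling"],
    "identity_check": ["verify your identity", "date of birth"],
    "closing": ["anything else i can help"],
}
PROHIBITED = ["guaranteed approval"]  # phrases agents must never say

def score_call(transcript: str) -> dict:
    """Score one transcript: checklist pass/fail, violations, 0-100 score."""
    text = transcript.lower()
    checklist = {item: any(p in text for p in phrases)
                 for item, phrases in REQUIRED.items()}
    violations = [p for p in PROHIBITED if p in text]
    return {
        "checklist": checklist,
        "violations": violations,
        "score": round(100 * sum(checklist.values()) / len(REQUIRED)),
    }

transcript = (
    "Thank you for calling. Before we start, I need to verify your identity. "
    "Is there anything else I can help you with today?"
)
report = score_call(transcript)  # all three checklist items pass, score 100
```

The point of the sketch is the workflow change: every call gets scored automatically against the same checklist, rather than a sampled few being reviewed by hand.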

Whitney McDonald 16:59
You’ve been listening to The Buzz, a Bank Automation News podcast. Please follow us on LinkedIn, and as a reminder, you can rate this podcast on your platform of choice. Thank you for your time, and be sure to visit us at Bank Automation News for more automation news.



