Are U.S. Banks Ready for Open Banking?

It’s not that traditional banks do not want to modernize and offer their customers the most cutting-edge products available; major challenges such as IT infrastructure and business processes stand in the way.

Join Don Cardinal of Financial Data Exchange (FDX) to discuss the current state of open banking adoption in the U.S.

Learn more about the open banking regulatory responses, challenges, and opportunities.

Jennifer Henderson:

Welcome everyone, my name is Jennifer Henderson with Adaptigent. I want to thank you for joining us today for our webinar, Are US Banks Ready for Open Banking? We will have everyone muted during the webinar, but we are happy to answer any questions that you have. Please just type them in, at any time, in the questions window in the webinar control panel. We will have some time set aside at the end to get those answered. We will also email out a copy of the presentation and a link to a recorded version of the webinar in a few days.

So, I will be your moderator today and I’m joined by Don Cardinal, who is managing director of the Financial Data Exchange (FDX). They are a nonprofit working to unify the financial services industry around a common, interoperable standard for the secure exchange of financial data. Cardinal has more than 20 years of industry experience in digital banking and financial cybersecurity, mostly as senior vice president at Bank of America, where he led the ecosystem trust and safety teams for the cybersecurity defense division of the bank’s global information security department. So, go ahead Don, take it away.

Don Cardinal:

Great job reading off my bio, just the way Mama had written it, so I appreciate it. So, who in the heck is Financial Data Exchange? Everyone asks, so let me give you some context on what we’re trying to solve and what we’re looking at. Right now, in order to share data in most places for apps, for budgeting, for tax prep, for whatever, most people have to give up their IDs and passwords to their bank accounts and their investment firms. We estimate, based on surveys and informal questions, there are about 100 million Americans who’ve done so. That’s a lot of IDs and passwords floating around out there. In addition, when you talk to an FI, a bank or brokerage, we generally find somewhere between 10% and 15% of the customer base actually have given out their IDs and passwords. These are very innovative and creative folks, early adopters, if you will.

But on the bank side, if you’re hosting all these web sessions that are being accessed with these IDs and passwords, we’ve also found that, oftentimes, at least a quarter of your online sessions are not humans at all. They’re actually automated access from these apps trying to pull back data. So, you’re spending a lot of money on hardware to paint screens for another computer that doesn’t even read screens. So, we’ve written a methodology for sharing this data securely, without sharing IDs and passwords, that’s way more efficient. 100% of our financial institution members are actually using FDX. And it’s no small number; we have about 12 million consumers who have already started the migration away from the 20- to 25-year-old legacy tech of giving up your ID and password, and have actually tokenized it through an API. And it’s a highly available source for data. So, it’s fast, it’s free, and it’s way more secure.

So, again, in order to do that we have to define how much data to share, who to share it with, what field is what, and work on all those things together. And you have to do so through consensus, and our job, as a standards body, is to put all those folks together. So, why in the world would we do this? Well, it eliminates held-away IDs and passwords. It eliminates screen scraping. It puts you on a path to strong customer authentication, or multifactor authentication. It enables biometrics. In a lot of cases these tokens expire, the access expires so, God forbid, you forget you have an app running, eventually it times out all by itself; the customer doesn’t need to do anything. And, of course, we capture consent, and privacy and customer control are key things.

And the idea is if you’re going to share data, you should be able to specify to whom, from whom, for what purpose, and for how long. And, of course, everything is clearly defined, so people have a real sense of what they’re doing.

And the next visual, actually, if we go forward, is kind of where we are right now. You see the consumer is kind of in the middle of this big sea of data, data, data everywhere and nary a drop to drink. But how in the world do you plug all these folks together? How do you give the customer visibility and control over who gets to see what, for how long, and for what purpose? And how is the customer equipped to do that by themselves? So, all the players you see around the board need to come together and agree on how best to do so. And that’s effectively what we’ve done.

So, we started out with, fundamentally, what is it we want to try to enshrine to drive all the tech behind this. So Alex, if you’ll go forward, we came up with five principles. First and foremost is control. The customer you see in the center of the screen is in control of where their data is. They get to see where it goes. They get to permission it. It’s all based on their wishes, their wants, their desires. JC Penney said a long time ago, “The customer is king,” and it’s still true; Sam Walton said much the same. Next is access. The consumer should be able to have access, directly or through proxies, in a relatively straightforward manner. In addition, it should be transparent. I mean, at the end of the day, you should be able to know who’s seeing what, and you should understand that.

Then traceability: you should be able to follow every link in the chain. Where has that data gone? Where is it? You should know this. And, of course, security. I think everyone you talk to, every node in the chain, every player on that previous screen in blue, knows we’re in a trust-based ecosystem, and without trust, we really don’t have businesses. It doesn’t matter what business you’re in within financial services. So, let’s go ahead and roll forward one more.

Today, and this model’s been around for 20, 25 years depending on who you ask, if you want to use a cool app to do budgeting, to do bookkeeping, to do tax prep, or any one of a number of wonderful things, boosting your credit score, quite honestly, you have to give up your ID and password to your bank, or brokerage, or utility company. And that was great in 1995, but why do you have to do that now? It turns out you don’t. It turns out the way it works in the future is that same app will say, “Hey, do you really want to share this data, Alex?” You go, “Yeah, I sure do. I bank with XYZ company.” The session then hands you over to the bank, which says, “Hey, you really want to do this?” “Yes I do.” “Here’s the partner you’re sharing with, are you cool with it?” “Yes I am.” And it hands you back.

And nobody besides you ends up knowing your password, ever. If you don’t know a secret, you can’t lose it. If they don’t hold your credentials, they can’t lose them. It’s a wonderful thing. It’s known as the rule of least privilege. And NIST, the National Institute of Standards and Technology, promotes data minimization as a key control for data protection. So, we’re doing all this stuff in this space and we’ve talked about all these apps, but what does it look like end-to-end?
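
Here is a minimal sketch of that hand-off, modeled loosely on an OAuth 2.0-style authorization-code exchange; FDX builds on this family of patterns, but the function names and parameters below are illustrative assumptions rather than the spec.

```python
# Sketch of the redirect-style flow described above: the app never sees the
# customer's bank password; it only receives a token after the bank itself
# authenticates the customer and records their approval.
import secrets
from typing import Optional

# -- at the bank ------------------------------------------------------------
_issued_codes = {}   # one-time authorization codes -> approved scope
_issued_tokens = {}  # access tokens -> approved scope

def bank_authorize(customer_approves: bool, scope: str) -> Optional[str]:
    """The bank authenticates the customer (password, biometric, MFA) and,
    if the customer approves the sharing, hands back a one-time code."""
    if not customer_approves:
        return None
    code = secrets.token_urlsafe(16)
    _issued_codes[code] = scope
    return code

def bank_exchange_code(code: str) -> Optional[str]:
    """The app swaps the one-time code for an access token; it never sees
    the customer's bank credentials at any point."""
    scope = _issued_codes.pop(code, None)
    if scope is None:
        return None
    token = secrets.token_urlsafe(32)
    _issued_tokens[token] = scope
    return token

# -- at the app -------------------------------------------------------------
code = bank_authorize(customer_approves=True, scope="transactions:read")
token = bank_exchange_code(code)
print("app holds a token:", token is not None)  # True
print("app holds a password:", False)           # the credential never leaves the bank
```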

So Alex, let’s roll forward just one more. So, if you think about it, if you’re the end consumer and you decide, let’s say, it was a month earlier and you’re still doing your taxes, I know there’s an extension now, but you want to do your tax prep. So, you download an app, or you pull an app up on your phone, and you said, “Great.” “Where do you bank?” It then says, “Great, let me talk to the bank,” and the bank then grabs your data. In the old world you’d do it with your ID and password. Nowadays, they don’t have to do that. They just hand the session over. And what’s beautiful about that is then when the session arrives at that bank or brokerage, they say, “Oh hey, welcome back Alex. Touch phone, look at your phone for a biometric,” way more secure. I love biometrics are easier. You don’t have to remember passwords, but it makes use of the financial institution’s existing security stacks. They don’t have to spend anything else to do that.

And the idea being, whether it’s a FinTech app, or maybe even your broker, or your CPA … I’m a recovering CPA by trade and I wish I could tell you my peer CPAs don’t have your IDs and passwords on sticky notes under the keyboard, but we do. So, wouldn’t it be great if we got away from all that? And wouldn’t it be great if this technology was free of charge and royalty free to use forever? Spoiler alert, it is.

So, we’re doing all this around financial services and everybody goes, “Wait a minute, what do the regulators say?” So Alex, let’s go forward and see what the regulators do say. So, the Office of the Comptroller of the Currency actually calls out and says APIs [inaudible 00:07:49] the code. And a great example is FDX. We were thrilled and really flattered that the OCC would actually mention us as a useful tool, and as somebody that it’s pointing to.

Of course, the Treasury got on board last summer and they raised a flag that IDs and passwords have had their time, and that APIs, to quote them, “are potentially a more secure method of financial account transaction data sharing.” That’s based on input and guidance that we’d given them as well. So, a lot of folks take this to heart, but when your regulators are on board with this technology, and when it’s free to use, it is a wonderful thing. Now, that’s great from the user perspective, the users are in control, the financial services companies are on board but, at the end of the day, you still have an interface. It’s a loading dock, if you will, but we still have to work out and solve for connecting all the data on the backend, to the mainframes, to the midrange systems, to all those systems of record, some of which came in three or four acquisitions ago. So, you really need a methodology for mapping all that together to make this work. And, luckily, we have someone on the call who’s very good at doing this. Alex?

Dr. Alex Heublein:

Fantastic, Don. Hey, thanks. That was a great overview and, hopefully, everyone can see the challenge that we’re really facing here, and the solution that FDX is bringing to the table. It’s a wonderful solution and it really solves some very, very fundamental, very key problems, particularly for consumers, right? We’ve all seen the data breaches, and we’ve all seen the challenges around data privacy and integrity going forward. But like Don said, one of the big challenges we see is that all of that is fantastic, except that all of this data, or in some cases most of this data, for banks and other financial institutions is still living on old legacy systems, many of which run applications designed 20, 30, 40, even 50 years ago.

So, the trick is you’ve got some great new modern standards, there’s some great technology behind it. You’ve got an industry consortium working together. Fantastic. How do I hook my systems up to this to make it work? And so, that’s what we’re going to talk a little bit about. So, let’s do a little recap of some of the things that I thought were interesting about Don’s presentation.

One of them is that I think this is really the right time for change. We’ve seen this sort of screen scraping thing going on for years, and years, and years now because it was sort of the path of least resistance. You didn’t really have to have a whole lot of permission to go out and screen scrape a bank’s website if you were a financial aggregator, for instance. But, like Don said, it is incredibly inefficient. It’s not nearly as secure and it’s very, very inefficient from both a computing resource standpoint and from a time standpoint.

We were talking to a customer very recently and they said that over 40% of all their website traffic was from screen scraping bots from aggregators and that’s a pretty amazing number. 40% of all their website traffic, which ultimately was then going out and hitting their mainframe based core banking systems, and then retrieving that data, sending it back to the website, ultimately sending it back to the aggregator. And you almost couldn’t design a less efficient way of doing things. So, we’re starting to see this big industry shift away from screen scraping, away from proprietary API development and moving towards open standards based technologies like FDX. So, that’s sort of the first thing we saw.

Second is that there’s huge growth going on in the FDX API usage. There’s 12 million consumers who are already using the APIs. And when we look at other geographies and regions, like if you go over to Europe and look, they’re also experiencing just exponential growth in these open standards based APIs for doing financial exchanges, and security, and interactions. So, this isn’t something that’s sort of just getting started and it’s still in its very, very small formative phases. You’ve got millions of people throughout the world using these APIs already. So, the question you have to ask yourself, if you’re a bank or a financial institution, is, am I on board with this? Because this train is leaving the station. It’s time to get on board.

The third thing we saw is that the reliability, the scalability, and the security of these APIs are absolutely essential. I mean, Don spent a lot of time talking about the security, but reliability and scalability are also very, very important when you’re going out and opening up systems that may not really have ever been open to the world, except maybe to some screen scraping. Now, I’m opening up these APIs and these core banking systems to potentially many millions more, or even tens of millions more, transactions. Because what you see when you have something like this is the more people realize what they can do with it, the more they use it. And the more they use it, the more they realize what they can do with it. There tends to be a very virtuous cycle in that regard.

But if you’re not ready from a scalability, and reliability, and flexibility standpoint, that virtuous cycle can often turn into a vicious cycle, if you happen to be one of the poor IT people trying to make all this work. So, that’s the last thing in the world you want. You want it to be a virtuous cycle, not a vicious cycle.

The fourth thing we saw is that FDX API availability is averaging about three nines, 99.9%, at this point. But the challenge, and we’ve seen this both in the US and in other geographies like Europe, is that there’s a huge amount of variability in the reliability of the integrations required to make these APIs work. So, you’ve got a set of core banking systems, for instance, running on a mainframe, and they’re super, super reliable. The FDX stuff is super reliable. It’s the integration layer that you’ve got to worry about. And what we’ve seen banks really struggle with all over the world is: how do I make that integration layer, the glue, as reliable as the parts that I’m gluing together?

So, we find that that’s still a big challenge for some financial institutions. Some of them have figured it out and they’re doing great work, but others we’ve seen with like 97% availability. And you say to yourself, “Well, 97% availability’s not bad, right?” Well, 97% availability is like being down for a whole day every month. So, sounds good on paper, but it’s not really that reliable when you really start looking at it from a practical standpoint.

And then, the final thing is that the open banking business and technology landscapes are evolving at a very rapid pace, but this is still early days. Even though we’re seeing huge amounts of growth, these standards and these technologies that integrate to these APIs are evolving very, very rapidly. And that’s not going to stop. We’re not going to all just settle on one API, and have it done, and then we’re good to go. There’s going to be increases in functionality, and performance, and reliability, and scalability, and new demands that we’ll see from the business. So, being able to go out and evolve with that is going to be critical.

So, if we look at the implications of those sorts of things, the first implication is that open banking innovation, for a long time, was largely driven by IT. Particularly in geographies where this sort of thing was mandated by the government, which isn’t the case in the US, but we saw this in other places where the government said, “You’ve got to open up these systems. It’s mandatory. It’s something we’re going to legislate.” At first banks looked at it and said, “Oh man, just another government regulation I’ve got to go comply with.” And so they did, but they did it very begrudgingly, and so on and so forth.

But we’ve seen this huge shift in the last year or two where the core banking systems and the banks are saying, “Wait a minute, this isn’t just a compliance exercise. We can actually make money off of this. There’s actually a monetization opportunity, there’s an opportunity to drive more value for our customers.” And so, as a result of that, we’re starting to see more line of business executives driving open banking innovation than ever before. And that trend we think is just going to continue to increase.

The second implication is that, like I said before, if you are an IT organization you’ve got to be able to securely, reliably, and scalably open up these core systems to many millions, or tens of millions, or hundreds of millions more transactions per day. Because, again, you get into this virtuous cycle where the more people find out they can use this sort of thing, the more people use it, and the more they say, “Wait a minute, I can do even more.” And it goes in a really, really nice cycle to where, if it all works out well, everybody’s using it. The challenge with everybody using it is that a lot of the core IT systems and the integration technologies that were developed were not designed to scale securely and reliably to the types of volumes that we might be seeing going forward. So, it’s potentially a challenge for you if you’re an IT organization.

The third implication is that scaling and dealing with latency, because I don’t want to spend 20 seconds returning this information, I need to do it in a few hundred milliseconds at most, require a different integration architecture than traditional system-to-system integration, particularly internal integration. So, you’ve got to get that architecture right. If you don’t get that right, you’re going to be in a situation where, no matter what you do, no matter how much processing power you throw at this thing, you’re not going to get it right because you’ve got the wrong architecture in place.

The fourth implication, because a lot of core banking systems still run on mainframes, and that will probably be the case for many, many years to come, is that legacy IT skill sets will become a bottleneck pretty rapidly. And those skill sets are going to have to evolve. A non-financial example of this is the current situation a lot of US states are in, where they’re trying to deal with millions and millions of unemployment applications. Well, believe it or not, a lot of those unemployment systems still run on mainframes, and they just don’t have the COBOL programmers to be able to make the modifications needed to make these applications more efficient, to scale them, and so on and so forth. And they don’t have a lot of the legacy system knowledge. So, they’re being hamstrung by that lack of IT skill sets. So, you’ve got to figure out a way around that. If you can’t figure out a way around that, then you’re going to end up with the bottleneck being, “I don’t have enough people who understand my core legacy banking systems well enough to evolve them.”

And then, finally along that same theme, is that IT organizations are going to have to adapt and evolve more quickly than ever before. I mean, I’ve been in the software and IT world for the last 30 years and the pace of innovation, the pace of change that’s really just happened in the last, let’s say, five to seven years is just enormous. Things are changing at a breakneck pace and they’ve always changed pretty quickly. But recently we’ve seen these changes happen even faster. So, if your IT organization isn’t agile, if it’s not adaptable, and if you don’t have the tools and the technologies that you need to be adaptable you’re going to run into a lot of challenges going forward trying to open up those core banking systems.

So, the big challenge that we see though with doing this is that there’s this one small problem called these mainframes. And we’ve got a picture here of some pretty old hardware, but a lot of the applications that are running today were written back when these two guys were changing out tape drives. The hardware’s evolved. It’s gotten a lot faster, a lot better, a lot more reliable, so on and so forth. The software, it’s anyone’s guess. Some organizations have evolved their legacy software, some of them haven’t. But you’ve got applications that were designed, architected, and implemented decades ago.

So, why is integrating with this so difficult? We get this question all the time. You’ve got these mainframe systems, and there’s one of the modern versions of a mainframe, and they’re super powerful machines, incredibly reliable, great stuff. But why is integrating with these things so difficult compared to, maybe, more modern systems? Well, there’s a whole bunch of reasons, but I’ll highlight three here.

The first is that, candidly, like I said, many of these applications are older than I am, and that’s just a challenge because 50 years ago, 60 years ago we didn’t know as much about writing software as we do today. Given what people had back then, they did a remarkably good job. But we’ve learned a lot in the last 20, 30, 40 years of writing software. So, the applications were designed in an era where integration wasn’t a priority.

The second challenge you see is that on mainframes, in particular, there are a lot of complex data structures that, frankly, we just don’t use anymore. We figured out some better ways of structuring data, but like Don said, everyone’s awash in this sea of data. Getting the data out of very complex data structures is challenging. And then, there’s a lot of tight coupling between these applications. And by tight coupling, I mean these applications are very, very intertwined with one another. And they’re looking for very specific data points being passed in and out of different applications and different subroutines, and so on and so forth. So, 20 or so years ago we said, “That’s a bad idea. We want to have some loose coupling between components, so when we change one component out, we don’t have to go change every other component.” And that is one of the challenges in evolving these applications: they’re so tightly coupled together that pulling out one part and replacing it with some more modern technology is way, way more difficult than it should be.

And then, the third thing that we see is there’s still a very heavy reliance, believe it or not, it’s hard to believe, but it’s the absolute truth. There’s a huge reliance on green screen applications. I mean, you guys have probably all seen these. You’ve got somebody typing into a terminal that looks like this, and they’ve got to enter in a certain number of characters in each field. And while it works, the challenge with it is a lot of the business logic is built into these green screens and the applications that support these green screens.

So, it’s not always a matter of, “Well, I’m going to go get the data directly from the database or make changes directly to the database.” That would be an easy way of doing it. The problem with doing that as it doesn’t have the business logic, and all the checks, and all the data integrity to be able to make those changes. So, we’ve seen a lot of institutions with green screen applications that they just had to live with. And so, this involves much higher labor costs to do anything, if you want to make changes to these systems are very challenging. So, being able to deal with that type of technology, the complex data structures, the tight coupling, and the older architecture, that makes integration a real challenge.

So, if you’re going to go put a solution into place there’s a couple of scenarios you end up seeing from an integration standpoint. The first integration is, what we call, inbound integration. So, this is, “Look, I’ve got a mainframe over here, I’ve got whatever applications I need to integrate with, or standards that I want to integrate with, like FDX, over here, and the aggregators and so on and so forth. What I really need in the middle is an abstraction layer. I need something that will make it so that the people that are calling into these systems don’t know that they’re actually talking to a 40 year old mainframe program. I need them to believe they’re talking to whatever and not care about it.”

So you typically see the implementation of REST APIs or, in some cases, SOAP APIs. But the challenge is, how do I make sure that the people over here on the right are none the wiser that they’re actually talking to an old mainframe system as they make these calls? So, having something in the middle here, having an abstraction layer that abstracts out that logic, the data manipulation, the formats, et cetera, and presents it back to the caller in a way that makes sense, is absolutely critical.
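
As a rough illustration of that inbound abstraction idea, here is a minimal sketch of a REST facade; Flask is used only for brevity, and the mainframe call is a placeholder for whatever connector you would actually use, not a real interface.

```python
# The caller sees a plain REST/JSON endpoint and never learns that the data
# actually comes from a legacy mainframe program behind the abstraction layer.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_accounts_from_mainframe(customer_id: str) -> list:
    """Placeholder: in practice this would invoke the legacy transaction(s)
    and translate copybook-style fixed fields into plain Python objects."""
    return [{"accountId": "CHK-001", "type": "checking", "balance": 1234.56}]

@app.route("/customers/<customer_id>/accounts")
def list_accounts(customer_id: str):
    # The caller just gets clean JSON; the decades-old program behind it is invisible.
    return jsonify(fetch_accounts_from_mainframe(customer_id))

if __name__ == "__main__":
    app.run(port=8080)
```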

But here’s the challenge that you run into, because the systems on the left, over here, are so tightly coupled together, oftentimes, to do something that you and I would think is relatively straightforward, and relatively simple I’ve got to go call two green screen applications, three CICS transactions and go run a COBOL program to aggregate all the data I have, and then send it back. So, it’s not just a matter of calling one program, or one interface, or looking up something in the database. It’s usually a relatively complex integration workflow.

So, the idea is that you need some way of setting up that workflow so that it makes sense and the caller doesn’t have to know the intricacies of what’s happening on the mainframe. The caller doesn’t need to know, “Okay, I need to go hit this one interface, and then go hit this other interface, and then, based on the results I get back, go hit this other interface.” A workflow engine provides you a solution that can serve that data back up to the caller in a very, very simple, very easy way.
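
Here is a minimal sketch of that orchestration idea: one coarse-grained entry point fans out to several backend steps in the right order and returns only the aggregated result. Every backend function below is a named placeholder, not a real mainframe interface.

```python
# One coarse call in; several legacy steps choreographed behind the scenes.
def lookup_customer(customer_id: str) -> dict:
    return {"customerId": customer_id, "segment": "retail"}

def fetch_balances(customer_id: str) -> list:
    return [{"accountId": "CHK-001", "balance": 1234.56}]

def fetch_recent_transactions(account_id: str) -> list:
    return [{"accountId": account_id, "amount": -42.00, "memo": "coffee"}]

def account_summary(customer_id: str) -> dict:
    """Single entry point; the caller never sees the step-by-step workflow."""
    customer = lookup_customer(customer_id)   # step 1: e.g. a green screen navigation
    balances = fetch_balances(customer_id)    # step 2: e.g. a CICS transaction
    transactions = [                          # step 3: depends on step 2's output
        txn
        for account in balances
        for txn in fetch_recent_transactions(account["accountId"])
    ]
    return {"customer": customer, "accounts": balances, "transactions": transactions}

print(account_summary("C123"))
```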

It also helps from a latency standpoint, because what we’ve seen a lot of IT organizations do in the past is say, “I’m just going to put a bunch of microservices in front of my mainframe,” which sounds great on paper. The problem with putting a whole bunch of microservices in front of your mainframe that talk to very small elements of that mainframe is twofold. One is the callers have to know the order and the logic in which they need to execute those. So, that’s why this workflow engine idea is so important. They have to know more than they should have to know to accomplish a business task, to say, “I want to execute this transaction or get this piece of data back.”

The second challenge you run into, though, is that you typically see a tremendous amount of latency, because instead of making one call into an abstraction layer that then goes and very quickly does all this, you’re making 20 different calls, and usually they’re serially dependent. So, what you typically see here, and what we’ve seen in the past, is that the level of latency just goes up, and up, and up the more microservice transaction calls you have to make. So, it’s the difference between very fine-grained APIs and very coarse-grained APIs. But if you’ve got something back here that can actually run that workflow and implement that integration logic, not so much the business logic of what to do with this data when you get it back, but the integration logic of getting the data that these callers need back, that is very, very important in terms of being able to do this efficiently with low latency, high performance, et cetera. So, that’s the inbound scenario.
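
A rough illustration of why serially chained fine-grained calls add up; the per-hop latency figure is an assumption chosen just for the arithmetic:

```python
# Illustrative latency comparison: many serial fine-grained calls vs. one coarse call.
PER_CALL_LATENCY_MS = 40  # assumed round-trip per microservice hop

fine_grained = 20 * PER_CALL_LATENCY_MS    # 20 dependent calls, one after another
coarse_grained = PER_CALL_LATENCY_MS + 60  # one call; orchestration runs close to the data
                                           # (the 60 ms of internal work is also an assumption)

print(f"20 serial fine-grained calls: ~{fine_grained} ms")    # ~800 ms
print(f"1 coarse-grained call:        ~{coarse_grained} ms")  # ~100 ms
```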

Now, what happens, though, if I have mainframe applications that need to call out to these modern open standards? I’ve got a COBOL program and I really need that COBOL program to go make a call out to another financial institution, or a real-time payment processor, or whatever. How do I make that happen? I mean, it’s relatively straightforward to do it inbound, but how do I do this outbound part? So, if you look at the outbound part, there are sort of two challenges to it. One is it’s very, very difficult to make outbound calls from a mainframe without going out and modifying the existing applications, because, A, these applications have never necessarily had to make an outbound call to something new. And, again, this outbound call could be anything. It could be sending some data through FDX to another financial institution member. It could be doing a real-time payment, or it could be doing fraud or anti-money laundering call-outs to providers that check for that sort of thing.

But, A, these applications never really had to do that to an external entity. And, B, what you find is that most of the people that run these applications, the developers that are responsible for maintaining, and supporting, and keeping these applications running, don’t typically have a huge amount of interest in learning the complexities of a lot of these modern integration technologies. Some of them tell us, and we get this all the time, “Look, I’m two years from retirement, I’m not learning REST and JSON, and I’m not going to figure out how to do that in COBOL. So, I need something out there that’s going to let me do that while, A, having to make only minimal changes to my mainframe applications and, B, just like we shielded the modern systems from knowing they’re talking to a mainframe, I want to shield these developers on the mainframe from knowing that they’re talking to some REST interface using JSON data payloads. I don’t want them to have to know any of that.”
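
Here is a minimal sketch of that outbound shielding idea: the legacy side hands over a flat, fixed-width record it already understands, and the adapter builds the JSON and makes the REST call. The record layout and endpoint URL are illustrative assumptions.

```python
# Outbound adapter sketch: the COBOL side never sees REST, JSON, or TLS details.
import json
import urllib.request

def parse_payment_record(record: str) -> dict:
    """Assumed fixed-width layout: debtor acct (10), creditor acct (10), amount in cents (12)."""
    return {
        "debtorAccount": record[0:10].strip(),
        "creditorAccount": record[10:20].strip(),
        "amount": int(record[20:32]) / 100.0,
        "currency": "USD",
    }

def send_payment(record: str, endpoint: str = "https://example.com/payments") -> int:
    """Translate the legacy record into JSON and POST it to the partner endpoint."""
    payload = json.dumps(parse_payment_record(record)).encode("utf-8")
    request = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example record as the legacy program might pass it (parse only; no network call):
print(parse_payment_record("CHK0000001SAV0000042000000012500"))
```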

And then, there’s a third element to it that if you can accomplish that without them having to go do a lot to their code, and learn all of these new technologies. I also speed my time to market, first of all. And I also reduce the potential for errors. Anytime anyone is learning a new technology, they’re going to make mistakes. Every programming language I’ve ever learned in my life, every piece of code I think I’ve ever written except “Hello World!,” had a bug in it or 100 bucks in it, to tell you the truth. It’s tricky to do that, but you’re dealing with very, very sensitive data. When you go out and make real-time payments, guess what? You’re not getting that data, you’re not getting that money back if you go make a real-time payment to someone because it’s real-time the money’s gone. So, being able to reduce that learning curve gives you a faster time to value, and it also gives you the ability to go out there, and make fewer errors when you’re making these very, very mission critical transaction calls outbound.

So, to summarize there, what you really need is something that can go both ways. You’ve got to figure out a technology solution that can do both the inbound APIs, and make sure that those work effectively, but also make those outbound API calls as well. And if you can do that, and you can do it in a way where the people over here in the modern world don’t have to know what’s going on in the legacy world, and vice versa, then you’ve really put yourself in a great position. Because, over time, I can change out these components, or I can make changes to these components, and they will not affect the callers that are coming in, or the transactions that are going out to these more modern APIs and open-standards-based systems. So, that’s great if I don’t have to make those modifications to my inbound applications.

But also, when I go out and make calls from these programs, these standards can change, these interfaces can change and evolve over time, and I can handle that in the abstraction layer without these programs needing to change. So, that gives you a lot of flexibility in terms of how you deploy things today, but the real value you see is in the gains that you get from not having to make changes when you deploy changes, or new technologies, going forward. That’s a huge benefit to everyone involved. It doesn’t matter if you’re out here in the modern world, or you’re over here in the legacy system world, everyone benefits from having an abstraction layer there that enables you to go both ways.

So, look, some key takeaways from this. Hopefully, this has been interesting for you, but when I look at this there’s two or three key takeaways. One is that, look, this open banking landscape is changing rapidly. It is growing exponentially. I mean, the time to get ahead in all of this is now, and not waiting six months, or a year, or a couple years to see how everything exactly shakes out. Getting on board with FDX, putting the necessary technology platforms in place and the standards in place for making that happen. The time to get started really was yesterday, but let’s just call it today given that we don’t have a time machine. So, this is not something you want to wait on. This is not something you want to postpone for a year, and then come back and see how things are going a year from now. This train is leaving the station. It is moving very, very rapidly now. The last thing you want to do is play too much catch up over time.

The second is that being able to rapidly, securely, reliably, and flexibly integrate with these legacy systems still remains a very, very key inhibitor to success. We talk to a lot of companies throughout the world, and this is one of their single biggest challenges: all of this sounds great, but actually making it happen, given the limitations of my legacy systems, is what’s actually slowing me down in terms of making it a reality. So, being able to overcome that is a huge critical success factor going forward.

And then, finally, choosing the right platform and the right set of technologies here to make that happen is critical because they’ve got to be secure, they’ve got to be scalable, they’ve got to be adaptable, they’ve got to have low latency. And we’ve seen, time and again, people choose the wrong technology platform, or the wrong standards, or the wrong implementation. And they’ve tried to go implement this vision, but the reach has exceeded their grasp. They simply can’t do it with the technologies and the standards they have in place today. So, being able to make sure that you choose the right platform, you choose the right set of technologies, absolutely critical going forward.

All right, so I think we have some time for some questions now. Jennifer, are you back with us? I think you’ve got some questions that have come in online.

Jennifer Henderson:

I do. Can you hear me all right?

Dr. Alex Heublein:

We can. It’s good to have you back.

Jennifer Henderson:

Okay. So sorry. All right, awesome. One of the questions that we got is, “For those of us who are security minded, what’s happening at the abstraction layer to ensure that the data is secure?”

Don Cardinal:

Well, there’s a lot of elements going in. I mean, right now when an app hands you over, let’s say I’m doing budgeting, it’ll say, “Okay, this is great download an app,” and it hands me off to my existing financial institution. The authentication’s being done on their existing off stack, which most FIs have got pretty rocking good security, not because they want to, but because they have to. And so, there’s a lot of things going on to authenticate the session through mutual TLS. The data itself is encrypted end-to-end. But, then, there’s also a lot of other enterprise things going on along those rails. Not to mention authenticating you to make sure it really is indeed Alex and not Jen. Again, things like biometrics, like I mentioned. And so, there’s the identification authentication, and then there’s the corp-to-corp stuff on just the data flow from the border outbound. I’ll let Alex talk about from there, inbound.

Dr. Alex Heublein:

Yeah, so when you get to that abstraction layer you’ve got this encrypted data stream, and you’re taking the credentials and the information that you’re getting, and then you’ve got to be able to authenticate and run those transactions via native mainframe transactional APIs. So, having a platform that gives you the ability to talk to a mainframe on its terms, its security terms, is absolutely critical. That’s one of the things we’ve seen over, and over, and over again: the need to authenticate and do the work on the backend with the functional security equivalent of what’s protecting the data coming in. So, you can’t just do it from firewall to firewall. Once you get behind the firewall, you’ve got to have a platform and choose some technology standards that can very easily integrate with the native security capabilities of these legacy systems.
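
As a small illustration of the mutual TLS piece Don mentioned, here is a minimal sketch of a client-side mTLS setup in Python; the certificate paths and host are placeholders, and the platform-level integration discussed above would of course go well beyond this.

```python
# Mutual TLS sketch: the caller presents its own client certificate and verifies
# the partner's certificate, so both ends are authenticated before data moves.
import ssl
import urllib.request

def mtls_context(client_cert: str, client_key: str, partner_ca: str) -> ssl.SSLContext:
    context = ssl.create_default_context(cafile=partner_ca)            # verify the server's chain
    context.load_cert_chain(certfile=client_cert, keyfile=client_key)  # present our identity
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

def fetch_over_mtls(url: str, context: ssl.SSLContext) -> bytes:
    with urllib.request.urlopen(url, context=context) as response:
        return response.read()

# Usage (placeholder paths and host):
# ctx = mtls_context("our-client.pem", "our-client.key", "partner-ca.pem")
# data = fetch_over_mtls("https://api.partner-bank.example/accounts", ctx)
```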

Jennifer Henderson:

Awesome. Thanks guys. The next question is, “In terms of real-time payments, how does an API standard help with that?”

Dr. Alex Heublein:

Yeah, so real-time payments are also a very, very big thing that’s happening throughout the world. And, when you look at real-time payments, there are really different types of real-time payments. We’ve seen a lot of applications like Zelle or Venmo that look like they’re sort of peer-to-peer, consumer-to-consumer real-time payments. The reality is, I don’t actually think they’re real-time. I think they still clear the transactions twice a day.

But when you start actually talking about B2B real-time payments between different financial institutions, one of the challenges in doing that is that integration layer; you’ve got to get it absolutely right. And, like Don said, it’s got to be secure end-to-end. And it’s also something where you can’t go experimenting and hoping you don’t make any mistakes. So, for real-time payment standards, you’re starting to see initiatives like FedNow come out in the US, and there are some other real-time payment standards out there as well. But having a standard for real-time payments is absolutely critical, because the last thing in the world you want is a spaghetti diagram of point-to-point integrations. That’s one of the things that FDX solves for the world: not having 1,000 point-to-point integrations, but rather one standard we can all integrate to, and then everything becomes very, very easy. So, I think that’s just as true when you look at real-time payments as with any other type of financial transaction that’s out there.

Don Cardinal:

I do want to throw a shout-out to The Clearing House’s RTP. They actually do settle and close the transaction in real-time. But, in addition, there’s the ability, quite honestly, of even riding FDX’s rails to confirm in real-time that the account’s open, that Alex is a beneficial owner, and that there are transactions going back a few weeks, so the odds of him being synthetic, or having been created by my evil twin a few minutes ago, are a lot lower. And that lowers fraud costs which, again, makes these systems very, very cheap and affordable for everyone to use.

Dr. Alex Heublein:

Great point.

Jennifer Henderson:

Awesome. Thanks guys. We have one more question before we wrap up. “What’s the benefit,” and it looks like it’s actually two questions: “What’s the benefit of using an open API standard, and do you see open banking being mandated?”

Don Cardinal:

Well, the benefit of having the industry control it, and having the members who use it drive it, is that no one’s ever going to be as close to the customer as a business that actually depends on them for its living. And that means they’re also going to be focused exclusively on the customer’s needs, and be able to react very quickly, in real-time. The idea is that it’s open and responsive to members’ needs; I think the members should drive what the market is. And the market will deliver a solution to any problem if you let it. And that’s a wonderful thing. The market will find new things.

I mean, if you’d asked me 10 years ago, would it be possible to boost your credit score simply by giving a bureau access to your cell phone data? I would’ve said, no. And yet here it is. So, I think the free markets’ innovation are wonderful. Now, I can’t comment on policy, so I’ll leave that to the lobbyists and that sort of thing. But FDX is unique in that we’re the only ones who are doing it without a government mandate. And yet we’re growing faster than anyone. I think it tells you that demand in the marketplace has, and really the wealth of members that we have, we over 170 members now and growing. So, I think there’s a demand out there. And I think if we continue to follow what the customer dictates, I think we’ll solve a lot of their problems for free.

Jennifer Henderson:

Awesome. Thanks Don and Alex for your time today, and thank you everyone who was able to join us. If anyone listening has any more questions or you want to know more about open banking, please visit our website and follow us on social media, on LinkedIn and Twitter. And we’ll also be in touch via email after the webinar. So, thanks everyone. Have a great day and be safe out there.

 

Learn More!

If you’re interested in expanding your capabilities around legacy technology, we should talk. Send us your contact info and someone will reach out shortly.