Open Finance – Is it Time to Get Ahead?

In this webinar, Chris Michael, Head of Technology at the Open Banking Implementation Entity, discusses the state of Open Banking globally and what Open Finance could mean for other financial services organizations.

Simon Mikellides: Good afternoon everyone, and welcome to the latest in the Advanced Webinar Series. Today’s topic is open finance; we’ll be giving you a current view and talking about how you can get ahead of the curve. I’ve got two excellent speakers who have real insight and hands-on experience. This is a popular topic, judging by the interest we’ve had in this webinar and by the number of you who have joined — thanks to everyone.

Let’s begin with a very quick word on Advanced. For those that don’t know us, Advanced are now the third largest IT company headquartered in the UK, though our customer base is global. We provide specialist software and services across different industry verticals to organizations ranging in size from SMEs right up to some of the world’s largest banks. This webinar is hosted by the Advanced Application Modernization Practice, a long-established business unit with a long history of helping companies that struggle with legacy technologies — a good 35 years of experience and hundreds of successful projects.

Now, the key to success has always been to combine know-how with best-of-breed software, with a real emphasis on high degrees of automation. For this webinar, we’ve brought together those same two components in our speakers. In partnership with them, our interest is in helping wherever legacy technologies are considered problematic or a barrier to adoption and change.

This is particularly pertinent to open banking, and to its extension into open finance, in terms of both the technology challenges and the opportunities: how can you view what was originally a compliance-led initiative as an opportunity to get ahead of the game? With open banking having been enforced since 2018, the Financial Conduct Authority is examining the possibility of rolling out open finance to other financial services — in particular insurance, mortgages, savings, and pensions.

Today, you’re going to hear the latest updates on open banking and open finance, the commercial opportunities these regulations bring, why you need to prioritize legacy applications when implementing APIs, and then how GT Software’s mature, market-leading integration solution can help you securely open up your mainframe applications. We’ve got two excellent speakers, as I’ve mentioned: Chris Michael, Head of Technology at OBIE, the Open Banking Implementation Entity in the UK, and Alex Heublein, Chief Revenue Officer and Technologist at GT Software. Without any further ado, I’ll pass over to Chris.

Chris Michael: Thank you. My name is Chris Michael. I’ve been working for the last three and a bit years for Open Banking, or OBIE, and my prime role here has been to lead the development of the UK open banking standards. I want to talk to you about a number of things.

First of all, the background: where open banking has come from globally and where it sits in the UK. Then I’ll talk a little about the UK ecosystem and where we are. We’ll run through some example consent and authentication journeys to show you what open banking looks like and how it’s being implemented at the moment, talk through some example use cases, and then look at what’s next and the challenges.

Open banking is actually a global construct. I’m sure you’re all aware it means different things in different markets: in North America, it’s very much an industry initiative; in the UK and Europe it’s a regulatory driver, under PSD2 and then the UK CMA Order; and you’re seeing it in other markets like Australia, where regulations are now coming into force.

In every market, there’s a bit of a mix between a regulatory driver and a market-led initiative — by which I mean: is it something banks have to do and provide for free, or is it entirely in the competitive space for the industry to adopt? Typically, it’s somewhere in the middle. What we’re seeing globally now is this construct of some regulatory framework to say who can play in this ecosystem, who can be a provider of open banking services.

Then there’s a mix of some things that might have to be provided free and some that can be provided on top of that under a commercial model. The key thing in the UK, I suppose, is that open banking really is a subset of PSD2 in Europe — I’ll talk about that in a minute — and it is limited to something called online payment accounts. Open banking in the UK is in turn a subset of open finance, a future iteration which extends open banking into other account types like savings, mortgages, pensions, et cetera. In many other markets, open banking means open finance; the terms are a little bit interchangeable.

Where did open banking come from in the UK? Well, the start of this was something called the CMA Order. The Competition and Markets Authority created an order which mandated the nine largest retail banks in the UK to implement APIs to a common standard. That is somewhat limited in scope: personal and business current accounts, in Sterling. When you look at PSD2, the second Payment Services Directive in Europe, that is governed by the competent authority in each member state.

In the UK, that’s the Financial Conduct Authority. The scope there is somewhat larger; it includes all online payment accounts, which covers things like cards, and any account that is a payment account — that could be a combined lending and payments account, for example. It also covers any currency and any type of business account; even large corporates are in scope. But the interesting thing is that the CMA Order and PSD2 in the UK and Europe are really only a subset of open banking, because what the market needs is quite a lot more than that.

There are a lot of use cases, some of which are already in the market based on screen scraping, which require things like access to other accounts. This is where the difference between open banking and open finance comes in. The phrase I would use is that PSD2 and the CMA Order are a subset, a small part, of open banking, and open banking then moves into open finance.

I’ll just talk a little bit about the model we’ve adopted in the UK: we’ve created an API standard. As an aside, by the way, PSD2 does not actually talk about APIs, nor does it require banks to have APIs. Banks have to have some form of restricted access, with controls to allow customer authentication and secure communication.

PSD2 doesn’t talk about APIs, but in the UK, under the CMA Order, the nine largest banks were required to provide APIs, and our mandate as the OBIE was to create a standard. This standard is based on OAuth. The principle is that the payment service user, in the top left, has a contract or an agreement that gives consent to an authorized third party — in this case, an AISP.

The customer is then required to authenticate with the bank using their existing credentials, and the third party is then granted access via a token. The principles we’ve tried to stick to are, firstly, no shared credentials. Unlike previous methods, where a customer would typically give their banking credentials to a third party, here there are no shared credentials: the customer authenticates with the bank.

The bank is required to make available the same authentication methods. Typically, what this means is where a customer is using a mobile app with biometrics to access their bank accounts in the normal course of events, that’s how they should authorize and authenticate to give access to a third party.

That leads into the third point, which is no unnecessary steps or friction. This is about making sure there is strong customer authentication — the customer is authenticated with their bank — but it has to be easy, with no unnecessary steps or friction. What I’m going to do now is show you two very quick videos. There’s no sound to them; the first is for account information.

What this video shows is me using a third-party application called Yolt, which was actually one of the first authorized third parties to go live in the UK. I’m using Yolt to access my bank accounts — in this case, a Barclays account — and I’m going to add Barclays to my Yolt dashboard. You’ll see how the process works: me giving consent to Yolt, being redirected from one mobile app to another — i.e., my Barclays banking app — using my face to authenticate, and then being redirected back.

There we have it. You’ll see that there were no shared credentials. The customer — me, in this case — didn’t have to type anything in. I literally gave consent to the third party and used my face to authenticate to grant access. I’m going to show you a similar thing for making a payment now, and you’ll get the picture. There we have a similar flow for making a payment. This is what we’re starting to see in the UK now and, in fact, Europe is adopting a similar model, a similar pattern, although there are a number of other standards in Europe.
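The consent-and-redirect journey in those videos is, at heart, an OAuth-style exchange. The sketch below is illustrative only — the endpoint URL, client ID, and parameter names are hypothetical and do not reproduce the actual OBIE profile — but it shows the shape: the customer is redirected to the bank to authenticate, and the third party ends up holding a token rather than the customer’s credentials.

```python
from urllib.parse import urlencode

# Hypothetical endpoint for illustration only -- not a real bank URL.
AUTH_ENDPOINT = "https://bank.example.com/authorize"

def build_authorization_url(client_id, redirect_uri, consent_id, state):
    """Step 1: the AISP redirects the customer to their bank. The
    customer authenticates with the bank itself (e.g. biometrics);
    their credentials are never shared with the third party."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "accounts",
        "request": consent_id,  # the consent the customer agreed to
        "state": state,         # protects the redirect against CSRF
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

def exchange_code_for_token(code):
    """Step 2 (modelled, no network call): after authenticating, the
    bank redirects back with a short-lived code, which the AISP
    exchanges -- over mutual TLS in practice -- for an access token
    scoped to the consent. Here we only model the response shape."""
    return {"access_token": f"token-for-{code}", "token_type": "Bearer"}

url = build_authorization_url(
    "aisp-123", "https://aisp.example.com/cb", "consent-789", "xyz")
token = exchange_code_for_token("abc123")
```

The key property is visible in the code: the third party only ever handles the redirect URL and the resulting token, never the customer’s banking credentials.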

One of the other things we’ve done in the UK is create something called the Open Banking Directory.

Effectively, what you have is a growing number of third parties and a large number of banks or account providers in the UK and Europe. The process by which banks and third parties know who each other are and trust each other relies on eIDAS certificates, which are issued by qualified trust service providers (QTSPs). There are a number of these QTSPs in Europe, and there are a number of competent authorities who issue the authorizations that allow QTSPs to issue these certificates. We’ve created a trust framework that just makes it easier for banks and third parties to validate each other’s identity and to establish connections. It’s a fairly simple model, actually, but it has not been without its challenges in terms of implementation.
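A minimal sketch of the trust check a directory like this enables is below. The QTSP names, organization IDs, and roles are invented for illustration, and real validation means checking eIDAS certificate chains rather than a lookup table — but the decision logic is the same: accept a connection only when the certificate issuer is a recognized QTSP and the counterparty holds the role it is trying to exercise.

```python
# Invented QTSPs and directory entries -- illustration only.
TRUSTED_QTSPS = {"QTSP-A", "QTSP-B"}

DIRECTORY = {
    # org id -> (certificate issuer, roles granted by a competent authority)
    "tpp-001": ("QTSP-A", {"AISP"}),
    "tpp-002": ("QTSP-B", {"AISP", "PISP"}),
}

def is_trusted(org_id, required_role):
    """Accept a connection only if the counterparty's certificate comes
    from a known QTSP *and* the directory grants the role being used."""
    entry = DIRECTORY.get(org_id)
    if entry is None:
        return False  # unknown organization: reject
    issuer, roles = entry
    return issuer in TRUSTED_QTSPS and required_role in roles
```

So a firm authorized only for account information (AISP) would pass the check for read access but fail it for payment initiation (PISP).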

I’m going to move on now to talk about the timeline. Where are we in this process? As I mentioned, I’ve been here for about three and a bit years, and we’ve been working on this standard since the OBIE was formed in 2016. We are now on the third iteration of the standard, which is going through various releases to add functionality and capabilities and to meet some of the requirements for change from the regulations. All of this is leading up to the deadline of March 2020, when the FCA adjustment period effectively ends.

The RTS was supposed to be implemented by all the banks by September, but because of some of the challenges around certificates and strong customer authentication, the FCA in this country allowed a six-month adjustment period, and the FCA are now going to start enforcing and policing, if you like, PSD2 from March onwards. The CMA Order is also due for an update, which will be published sometime in the next few weeks and will put some additional requirements and obligations on the CMA9.

What’s been happening as the standard has evolved is that the banks have been implementing it in incremental releases, and we’ll talk a bit about some of the changes later. What we’re seeing, and have seen, is incremental growth in adoption of the standard by AISPs — the account information, or read, access. This is largely now driven by cloud accounting packages migrating customers over from screen scraping and proprietary forms of access to open banking APIs.

That is a migration of an existing business model from one interface to another. What we’re starting to see now is new products coming into service around payment initiation, and something called CBII, which I won’t go into today. The point is that account information has large volumes of activity as a migration, whereas payment initiation is the launch of new services in the UK, and we are very early on in that process. As of December — and I don’t have updated stats at the moment — we have roughly 75 ASPSPs, banks effectively, in the UK using the standard and the directory service that we provide.

We’ve got 100 technical service providers. These are not authorized in their own right as firms, but they provide various services, either bank-side or TPP-side, to help with connectivity — they are sometimes vendors that have contracts with banks or TPPs or both, and they provide aggregation services, et cetera. The key thing is what is happening customer-side, and that is the growth of authorized third parties. We’ve got getting on for 450 third parties in the ecosystem, but only around 150 of them are currently authorized by the Financial Conduct Authority or passported in from an authorization in another market.

That’s resulted in about 100 customer-facing applications, and I’ll talk about some examples of those in a second. As well, by the end of December, we had over a million customers actually using open banking APIs. Many of them might not have been aware they were using the APIs, but the actual volume was over 250 million API calls in December. That’s been growing exponentially, and we think customers and API calls are going to keep growing exponentially over the next few months, as the cloud accounting packages migrate their remaining customers over to open banking APIs. These figures, by the way, are from the CMA9; they’re not from all the banks in the market. It’s just that, under the CMA Order, the CMA9 are required to provide and publish statistics on their performance and availability.

I mentioned payments; it is very early on. We had around 50,000 payments in December. It’s still minute if you compare that to the number of payments made by cards, credit transfers, direct debits, and standing orders. It’s a very small volume, but it’s probably more than just basic testing. That is starting to get volume, and we think this is where the future growth is going to be once screen scrapers have migrated over to APIs — the big growth is going to be payments, from towards the end of this year. In terms of actual use cases, I showed you the demonstrations. The first demonstration video was Yolt, which is an example of a personal finance manager. I’ve spoken about business accounting packages migrating from screen scraping to APIs.

We’ve started to see some interesting propositions around unbundling overdrafts: tools that monitor your bank account via an API and can then do things like offer you micro-lending — lending you money and taking repayment as soon as money is back in your account. These services have been around for a while, but with open banking they just get quite a lot better, and they can offer a lot of value to customers in terms of saving on unauthorized overdraft fees, for example. We’re seeing really strong commercial models now around better lending.
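The overdraft-unbundling idea can be reduced to a toy rule. This is a sketch under the assumption — invented here for illustration, not any particular product’s logic — that the AISP can see the current balance and the known upcoming debits:

```python
def micro_loan_needed(balance, upcoming_debits):
    """With read access to the account, a service can see when known
    upcoming debits would push the balance negative, and offer a small
    loan for exactly the shortfall instead of letting unauthorized
    overdraft fees accrue. Illustrative logic only."""
    projected = balance - sum(upcoming_debits)
    return -projected if projected < 0 else 0
```

The point of the example is the data access, not the arithmetic: without an API view of the account, a lender cannot size the loan to the shortfall or time the repayment to incoming funds.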

Interestingly, banks are now able to partner with third parties to offer lending products to customers who don’t bank with them. Many banks traditionally wouldn’t do that; they would only offer loans to customers who had a current account with them, because that’s how they could be sure about the customer’s affordability.

With open banking APIs, you’re seeing more and more products come to market which offer lending based on open banking data, and that’s a really powerful example of something that’s good for customers, good for banks, and good for third parties — a way of effectively monetizing the API channel. I mentioned payments; we’re starting to see a few examples of e-commerce, and also some examples of international payments via APIs. Businesses are coming into the market offering alternatives to using your current account for making overseas payments when you travel.

There are a whole load of other use cases, and probably what’s really interesting is something that NESTA, a government-funded organization, is doing. OBIE partnered with NESTA on a prize fund this year. A hundred different fintechs or firms entered the challenge; fifteen of those are through to the final stage and have received some initial funding.

These are firms that offer really interesting propositions. I’m going to pick out one, which is helping customers who may be vulnerable, by providing a way for vulnerable customers to delegate a read-only view of their account to a third party.

That third party — it might be a relative, it might be a friend — can monitor the vulnerable person’s account, and if there looks to be some strange spending behavior, the relative or friend can take them out for a cup of tea. It sounds silly, but there are cases now where this has proved to save people’s lives — where people have been extremely vulnerable and spending money in a way that indicated severe depression, for example. These are really interesting use cases. There are a whole lot of things starting to come out that do more than just deliver a financial benefit; they provide real customer value as well.

What’s next? The key focus for OBIE is to continue to evolve the standard. There are a number of regulatory changes and other requirements — part of the CMA Order road map being published by the CMA — such as Confirmation of Payee and the contingent reimbursement model. These are there to provide additional protection in a payment journey, and we’re looking at how that affects the customer journey as well. There’s also the concept of variable recurring payments, to enable things like sweeping. Developing a standard that enables more of these use cases that have some form of regulatory requirement is our first priority.
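Sweeping under a variable-recurring-payment style consent might be parameterized roughly like this. The parameters below (a buffer the customer wants to keep, and a cap per payment) are invented for illustration; this is a sketch of the idea, not the OBIE VRP specification:

```python
def sweep_amount(balance, buffer, max_per_sweep):
    """Move anything above the customer's chosen buffer into savings,
    capped at the per-payment limit they consented to. The amount
    varies each cycle -- the 'variable' in variable recurring payments."""
    return min(max(0, balance - buffer), max_per_sweep)
```

So with a 1,000 buffer and a 500 cap, a balance of 1,200 sweeps 200, a balance of 2,000 sweeps only the capped 500, and a balance below the buffer sweeps nothing — the consent bounds the payment rather than fixing it.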

We’re also continually monitoring and providing support to help firms with better implementation, specifically looking at reducing friction in authentication, and trying to encourage firms to run the conformance tools we provide and to certify, to prove they’ve implemented the standard properly. Thirdly, ecosystem growth: looking at how we can help banks, TPPs, and TSPs use the APIs better and partner with each other to provide more propositions to end customers.

The net result is to help drive the end-customer benefits envisaged by the CMA Order. But there is a really interesting point around where we could go next — whether OBIE does this or it is built on top of open banking — and this is all about identity. We think identity services are low-hanging fruit in the UK: enabling banks to provide attributes of identity, or identity services, either alone or combined with PSD2 services — for example, providing verification of someone’s address along with the initiation of a payment.

If the bank does that, then all of a sudden push payments can become very, very valuable indeed, and potentially banks can monetize that as well. It’s a step up from PSD2, combining identity services with a payment service. That leads on to a bank ID type service. It’s something that exists in many other markets, and it would be great if the UK could introduce one; certainly, with the infrastructure that banks have built for open banking, it could very easily be extended to offer these services.

We think this is important to actually enable open finance, because when you start looking at other financial products, many of them don’t have the same ongoing customer engagement. For example, whether it’s pensions, savings, or lending products, you typically don’t, as a customer, engage with those on a regular basis in terms of logging into an account. A bank ID, or “log in with your bank,” could be something that really does open up open finance — using the bank as the identity provider to give access in a secure way to non-PSD2 accounts. Then there’s an extension of that, something the UK government has looked at, called smart data.

It’s basically the concept of open banking and open finance, but opened up to other sectors like utilities, telcos, energy, health, et cetera. This is a kind of evolution, and we think the key to unlocking all of this is identity. Whoever does this, and however it is done over the next few months and years, I think it’s important to align it with other things that are happening in the UK.

There are initiatives like the new digital identity unit — a government body looking at the whole construct of the trust framework, standards, and liability models for identity — which is supported by the Open Identity Exchange. With these initiatives maybe coming out over the next couple of years, how can we all keep this in sync? And how can we keep it interoperable with other global standards as they emerge?

These are some of the opportunities and challenges that we see. The reason we think this is useful and important is that over the past year, maybe two, there has been a shift in mindset, particularly the mindset of banks. I think it’s safe to say even a year ago, and certainly two or three years ago when the banks started on this journey of open banking and APIs, they were very much in the compliance mindset; it was damage limitation — what’s the least we have to do? How can we minimize costs? How do we restrict it to just what’s required by the regulations?

What we’ve seen is that, as we started to grow and get adoption by both banks and third parties, we started to see value come out of that. I mentioned the use case of banks and fintechs partnering to offer lending products via open banking APIs. Many of the banks — not all of them, but large numbers — are also looking at providing their own aggregation services in their own mobile apps. What we’re starting to see now is banks looking at commercialization: how can they take this API channel they’ve developed and invested in and start to get a return on investment? That shift of mindset takes you to a different place. And what we’ve seen, as I mentioned before, in terms of growth of API traffic is very positive.

We’ve seen the traffic grow almost exponentially. I haven’t got December’s figures here, but December was well over 250 million calls. This will grow exponentially, certainly over the next couple of months, as the accounting packages come on. I think traffic will then flatten off for account information, but it will probably grow again exponentially for payment initiation as that starts to take off.

The good news is the response times, the performance. This is data across the CMA9 again; the caveats are in the text of the slide, and this is published on OBIE’s website, so I’m not sharing any confidential information. Over the past couple of years, we’ve seen significant improvements in the average API response times, from an average of over two and a half seconds down to under a second.

This doesn’t paint the whole picture. You can go onto the website and look at the detailed statistics, but what you’ll see is that response times vary quite a lot across the CMA9. Some of them are in the 200 to 300 milliseconds per API call range, which, for a large institution with complex legacy systems, is a big step forward from where they were a year or so ago. Some of the banks are not as good as that.

The slightly disappointing part: here we have the API response times by brand — apologies, I meant to show it the other way around — so you can see how it varies. Some are in the 200 to 300 millisecond range, and some are significantly slower. This does vary by API endpoint as well; some of it depends on the volume of transactions on the transaction endpoints, et cetera.

There’s a lot more granular detail on the website, but you can see the variance across the CMA9 brands. When we look at API availability, it’s not such a rosy picture; we’ve seen a lot of instability in availability, and also performance and quality issues with some of the APIs. Over the course of the last 18 months or so, we’ve seen availability vary between the mid-to-low 90s and about 99%. But again, if you look at November by brand, you’ll see some of the brands are pretty much close to 100%.

This shows both planned and unplanned downtime, but the important thing is that, overall, it’s not great: some brands have been consistently close to 100% availability, and some have been struggling. There are reasons for that, and they vary by brand, but availability is really one of the key factors that needs to be fixed to enable payment initiation services to really take off. With account information, if the API isn’t available, quite often the customer isn’t present — it’s an update of an accounting package or a balance that can be done a few minutes later, once the API is back up, or done overnight. This is something customers are used to anyway in their accounting or financial management packages.
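For reference, the headline availability figure in stats like these is just uptime as a share of the reporting period, with planned and unplanned downtime both subtracted — a sketch of the calculation (the exact reporting rules are OBIE’s, not reproduced here):

```python
def availability_pct(period_minutes, planned_down_minutes, unplanned_down_minutes):
    """Headline availability: the share of the reporting period the API
    was up, with planned and unplanned downtime both counted against it."""
    up = period_minutes - planned_down_minutes - unplanned_down_minutes
    return round(100 * up / period_minutes, 2)

# A 30-day month is 43,200 minutes; two hours of planned plus four
# hours of unplanned downtime still leaves availability above 99%.
month = 30 * 24 * 60
```

This is also why a "high 90s" number is less reassuring than it sounds: 99% availability over a month is still roughly seven hours of outage, which matters far more for a payment at checkout than for an overnight accounting sync.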

They’re not always real time, because the underlying transaction systems of the banks aren’t necessarily real time. But when it comes to payments, if you try to position payment initiation services as an alternative to card payments, the underlying infrastructure — or certainly the customer experience — is 100% available, or really close to it; that’s the expectation.

For payment initiation services to really start to work and offer the value they could, API availability needs to be much closer to 100%, and that’s a real challenge for the industry. Individual banks will solve it, some sooner than others, and you’re going to start to see that being critical for payment initiation. The commercialization of these APIs, I think, is going to be the big incentive that drives it. Thank you.

Alex Heublein: Fantastic. Well, thanks Chris. This is Alex Heublein. I’m the Chief Revenue Officer for GT Software, a software company that focuses on helping customers modernize and integrate with the legacy and core systems they’ve had within their organizations for many years. There were a few things I thought were interesting about what Chris had to say, so let me do a quick recap of some of the challenges and opportunities that are out there with regard to open banking.

The first is the industry shift from compliance to adoption to monetization. This is really important because, as Chris said, the mindset shifts: you go from “Well, we have to do it” to “Wait, maybe there’s a way to make money off of this.” When that happens, the pressure not only to create and implement these APIs, but also to build new ones, change them, and evolve with the standards as they move along, increases — the pressure from the business increases as a direct result.

The second interesting thing is the exponential growth in open banking API calls — you saw the chart showing that growth. What’s interesting is that it follows a very similar pattern to a lot of things in life: the more you make things available to people, and if you do it in a simple and ubiquitous way, the more people use them. It’s not that there’s a fixed amount of demand for consuming these APIs out there and you’re just fulfilling it; the more you fulfill that demand, the more demand there actually is.

You can see this in many endeavors in life. It’s like when governments look at adding lanes to motorways. They say, “Well, traffic is really poor on this motorway; we’ll add some more lanes to alleviate it.” It does that, but what they’ve found over the years is that when they add the lanes, the traffic is alleviated at first — and then it actually starts to get worse and worse, because more and more people use the motorway.

You see the same thing happening, I think, in open banking APIs today. The more this information and these capabilities are made available to people, to third parties, et cetera, the more demand there will be for the consumption of those APIs. Again, from Chris’s charts, reliability, scalability, and security are table stakes when it comes to this sort of thing. You can see some of the challenges in terms of reliability out there — it’s highly varied — and some of the challenges in scaling these things out. Security is something people just assume is going to be there for their information, and they demand it.

The fourth thing is that transaction latency has improved quite significantly over the course of the last year. But you saw that very wide disparity even among the CMA9 banks. I suspect if we were to look at other banks that are implementing APIs — ones that aren’t part of the CMA9 — we would see an even wider disparity. Being able to make that transaction latency as low as possible is going to be one of the critical success factors going forward.

Then finally, open banking — not only the technology landscape but the business landscape — is rapidly evolving, and that pace is just continuing to increase. There are some significant implications of that. One is that we’re not in the finals here; we’re in the second game of the season. There’s going to be a great deal of change over the next few years — new services, new opportunities to implement open banking — and the standards will evolve as well. Being cognizant of that and having a strategy to deal with it is going to be a critical success factor, but also a huge opportunity for a lot of banking institutions.

Let’s talk about the implications a little bit, particularly the implications for IT of some of the information Chris shared with us. One implication is that, I think, you’ll increasingly see line-of-business executives driving open banking innovation. Rarely do you see line-of-business executives get involved in or excited about something that’s merely a compliance initiative. As Chris said, the attitude there is: do the minimum to meet the government requirements, and that’s all. But when there’s a commercial incentive and a monetization opportunity, that’s when you’ll see more line-of-business executives driving and pushing on open banking innovation going forward.

That will have some profound implications for IT, in the sense that the demand isn’t going to be driven by a compliance officer or the legal team; it’s really going to be driven from the top down, from line-of-business executives. The second implication is that, as these open banking APIs and the consumption of them increase exponentially, IT has to have a strategy for securely and reliably opening up core banking systems to many millions more transactions per day. Again, you get this situation where usage begets more usage.

You end up in the same type of situation here with regards to your core systems. You’ve got to be able to come up with a strategy and an architecture that lets you scale that out effectively, cost-effectively, reliably and securely over time. The third implication is scaling and latency. If you’re going to have a highly scalable system and you’re going to do it in a very low-latency way, that requires a fundamentally different architecture than what I’ll call a traditional integration. A lot of traditional integration has been about how we integrate internal systems with other internal systems, or maybe how we integrate our website with our core systems. But the ability to scale cost-effectively, and to do it in a very low-latency way, will require some different thinking architecturally than you would see in a more traditional integration scenario.

The fourth implication is that a lot of these core banking systems are still based on what I call legacy systems, and the skill sets within those legacy organizations will potentially become a significant bottleneck and will need to evolve rapidly over the next few years. There’s a danger there, a challenge there, in the sense that in order to meet the demands of those line-of-business executives for open banking innovation, you may not have the skill sets you need, or may have a shortage of them, if you choose to go down certain routes.

Then finally, IT organizations are going to need to be able to adapt more quickly than ever before. This is a fast-moving space, a space that’s going to evolve exponentially over the next few years. The ability to focus on time-to-market, to get things out the door quickly, but still do it with the same level of reliability, scalability, and security that you see in your core systems: that adaptability and flexibility will be one of the biggest challenges for IT going forward.

The big challenge that we see, and we’ve seen this with many of our customers, is that there’s just one small problem, and that’s the core systems that a lot of banks run on today. Those core systems tend to be mainframe-based. They tend to be very reliable, very secure, very scalable, but not very, what I’ll call, integratable, and there are some reasons for that.

One of the questions we often get with our software is: why is this mainframe integration thing so difficult? Why is it so challenging? I started my career as a mainframe programmer, but pretty rapidly moved into what I’ll call the distributed systems world, where integration was part and parcel of what you did. There are some big challenges in terms of integrating with mainframes, though, and one of them is that a lot of these applications are older than I am. I’ve been in the IT industry for over 30 years. When I say they’re older than I am, I mean that they go way back. We talk to customers on a very frequent basis and they tell us, “Yeah, this application was written in 1967.” I was like, wow, I wasn’t even born in 1967, but this code is still running.

I think that’s a testament to the abilities and the capabilities of these core systems to have evolved over time to meet increasing demands, and to the power of the mainframe architecture, all those great things. But the big implication there is that a lot of these applications were never designed to talk to external entities. They were designed to talk to people through green-screen interfaces or through CICS transactions. They weren’t designed to be integrated with what I’ll call modern distributed systems, and you can’t blame anyone for that. That’s how they were designed, and those systems didn’t exist. But that represents some pretty big challenges in mainframe integration.

The second big reason we see is that mainframes have very complex data structures and they’re also very tightly coupled together. The systems aren’t designed to be loosely coupled; they’re designed to be very tightly coupled. That gives you a lot of advantages, like very low latency and very high transaction throughput, but makes integrating with them very challenging. You’ve also got different notions of how the data is structured. In a lot of these cases, instead of having a relational database, you’ve got hierarchical databases. Being able to convert the data, the formats, et cetera into something that’s meaningful to modern distributed systems via APIs can be a big challenge if you have to do that all by hand.
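To make the data-conversion problem concrete, here is a minimal, hypothetical sketch in Python of mapping a fixed-width mainframe record into JSON. The field layout, names, and offsets are invented for this example; real record layouts (COBOL copybooks) involve many more fields, packed-decimal types and nested groups, which is exactly why doing this by hand at scale is painful.

```python
# Hypothetical sketch: mapping a fixed-width mainframe record to JSON.
# The layout below is invented for illustration only.
import json

# (name, start offset, length) for an imaginary account record
LAYOUT = [
    ("account_id", 0, 10),
    ("surname",    10, 20),
    ("balance",    30, 9),   # zoned digits with two implied decimal places
]

def record_to_json(record: str) -> str:
    """Slice a fixed-width record into named fields and emit JSON."""
    fields = {}
    for name, start, length in LAYOUT:
        fields[name] = record[start:start + length].strip()
    # convert the implied-decimal balance into a number
    fields["balance"] = int(fields["balance"]) / 100
    return json.dumps(fields)

record = "0000012345" + "SMITH".ljust(20) + "000004250"
print(record_to_json(record))
```

Even this toy version has to know about implied decimal places; production conversions also deal with EBCDIC encoding, signed packed fields, and repeating groups.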

Then finally, there’s still, amazingly, a very heavy reliance on green-screen interfaces, and we see this over and over and over again. These applications were written to have someone on a 3270 terminal typing things in, and in some cases the applications never evolved past that point. The ability to integrate with these very, very old legacy systems is challenging, because the screen interface is really the only way to get at a lot of the transactional logic and a lot of the data look-up logic that exists in these applications.

There’s other challenges, but we see these as the three big ones in terms of integrating with these core banking systems.

There is a better way, and I want to talk to you a little bit today about what we do and the solutions that we have to help organizations overcome a lot of the challenges that I’ve just talked about. If you look at mainframe integration from an inbound standpoint, this is where someone is calling into an API that you’ve created to talk to your mainframe. One of the big challenges you run into is that you’ve got this on-premises mainframe, and you’ve got these API standards like open banking or, if you’re in the US, FDX, the Financial Data Exchange.

One of the issues there is that those standards are written as very, very clear specifications in terms of RESTful APIs, using technology standards that are, what I’d call, modern. Implementing those on a mainframe is very challenging. What we provide is a product called Ivory Service Architect. What this does is create an abstraction layer between the callers and the mainframe systems, the core banking systems that exist today. There’s a run-time environment that we have that we can generate these APIs into. It can run on the mainframe or off the mainframe. It can run in a Windows virtual machine, in a Docker container, or in Red Hat OpenShift in your cloud environment.

There are a lot of very flexible deployment options, and these allow you to achieve the right balance of security, scalability and reliability that your application requires. We’re able to build these APIs. Now, one of the challenges is implementing an API that hits my mainframe. What you find in mainframes is that oftentimes what you and I would consider a relatively simple transaction, like go give me the transaction history for this customer account for the last 30 days, might actually involve multiple look-ups to multiple systems.

They could be CICS transactions, they could be database look-ups, or they could be green-screen applications that I need to hit. There’s a lot of orchestration logic and workflow logic that has to happen to fulfill, again, what you and I would probably consider to be a relatively simple request. We built an integration workflow engine that allows you to do that in a very simple way: a very easy drag-and-drop type of user interface that goes off and generates those APIs for you without having to write code, and that results in a successful inbound integration.
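To illustrate the orchestration idea, here is a minimal sketch in Python of what such a workflow does conceptually: one “simple” inbound request fans out to several backend look-ups before a response can be assembled. The function names and data are stand-ins invented for illustration; in the product this wiring is generated from the visual designer rather than hand-written.

```python
# Illustrative sketch: one API request fanning out to multiple backends.
# Each look-up function is a stand-in for a CICS transaction, a database
# look-up, or a screen-scraped green-screen application.

def lookup_customer(account_id):            # stand-in for a CICS transaction
    return {"account_id": account_id, "name": "A. Customer"}

def lookup_transactions(account_id, days):  # stand-in for a database look-up
    return [{"amount": -42.50, "day": 3}, {"amount": 1200.00, "day": 17}]

def lookup_status(account_id):              # stand-in for a green-screen query
    return "ACTIVE"

def transaction_history_api(account_id: str, days: int = 30) -> dict:
    """Orchestrate several backend calls into one API response."""
    return {
        "customer": lookup_customer(account_id),
        "status": lookup_status(account_id),
        "transactions": lookup_transactions(account_id, days),
    }

print(transaction_history_api("0000012345"))
```

The point is not the trivial plumbing here, but that each stand-in hides a different legacy access mechanism, which is why a workflow engine that generates this layer saves so much hand-coding.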

If you look at the way we do it, we have a visual designer that looks a lot like Visio. You can go in and drag and drop components from the mainframe into it and wire them together without writing any code. The end result of that is an API that gets generated into that runtime environment. You visually design it, hit the deploy button, and it takes that, generates all of the APIs for you and puts them into that runtime environment. Again, that runtime is deployable pretty much anywhere you want to deploy it, so you can make the right architectural decisions to balance the needs for security, scalability, reliability, latency, et cetera.

Now, one of the questions we get a lot is: that’s great for inbound integration, but what happens if my application that’s on my mainframe today, my core banking system, needs to make a call out to something else? Let’s say that, as the result of an inbound API request, perhaps someone wanting to create a new account or find something out about their account, I might need to do a fraud-detection call out to an external system, or I might need to do an anti-money-laundering call out.

As a result of those things, we also have the ability to create outbound integrations. If my mainframe core banking applications need to make calls out to the modern world, we have a solution for that: we can create those APIs, create those interfaces to those outbound components, and then generate very compact, self-contained code snippets that can be put into existing applications without any other modifications.
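Conceptually, a generated outbound snippet wraps all of the HTTP and JSON plumbing so the legacy program only supplies plain fields. Here is a rough sketch of that idea in Python for readability; the real generated snippets would be in the host language, such as COBOL, and the endpoint URL and field names here are invented.

```python
# Conceptual sketch of an outbound-integration wrapper: the legacy
# program supplies plain fields, and the wrapper builds the JSON request
# so the caller never deals with REST or JSON directly.
# Endpoint URL and field names are hypothetical.
import json
import urllib.request

FRAUD_ENDPOINT = "https://fraud.example.com/check"  # hypothetical service

def build_fraud_request(account_id: str, amount: float) -> urllib.request.Request:
    """Package plain fields into a ready-to-send JSON POST request."""
    payload = json.dumps({"account": account_id, "amount": amount}).encode()
    return urllib.request.Request(
        FRAUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def is_cleared(response_body: bytes) -> bool:
    """Interpret the hypothetical service's JSON reply for the caller."""
    return json.loads(response_body).get("cleared", False)

req = build_fraud_request("0000012345", 250.00)
print(req.get_method(), req.full_url)
```

The legacy application only ever sees two plain operations, build the request and interpret the reply, which mirrors the idea of shielding legacy developers from SOAP, REST and JSON mechanics.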

That is really great because it enables you to do these things while shielding those legacy developers from having to understand a lot of unfamiliar technologies like SOAP and REST and JSON packets; they don’t have to learn any of that. They simply have to take a pre-generated set of code that’s going to make that API call, put it into their existing applications, recompile them, test them, and off they go.

Now, what that really does is speed time to value, but it also makes the error rates in these types of things go down significantly, because we’re generating the code. We’ve been generating code for years and years, and we know how to do it very effectively. The amount of human error that can get introduced into that equation goes down very significantly, and the last thing you want when implementing complex financial API transactions is a lot of human error.

We’ve got the ability to do that, and ultimately, what that results in is successful outbound integration. We can do inbound integration as well as outbound integration with again, visual drag-and-drop tools, no coding required and you can implement some very complex workflow logic into the app, into those API implementations, that ultimately gives you what your customers are asking for and what the APIs are meant to deliver.

A couple of case studies really quick for you. Anybody can talk about this, but let’s talk about some real-world examples. This case study refers to a large French bank. The challenge they had was that they needed to be able to use third-party applications, do some fraud detection, et cetera, all in real time. What they really needed to do was real-time payments. They needed to be able to take an API call in, make a call out to a third-party real-time payment provider, and then get the results of that back and send them back to the caller. This was one of those situations where we used both an inbound API call as well as an outbound API call.

We were able to generate the code they needed to initiate a real-time payment from a core banking COBOL application, without them having to write any code to do it. As a result, they were actually the first bank in France to be able to initiate a real-time payment.

The drag-and-drop interface, the no-code development studio, allowed them to move from proof of concept to production in under two months. This wasn’t a situation where it took them six months or a year to figure this out. It literally took them a matter of weeks to take this from proof of concept to the actual production implementation of those outbound real-time payments.

A second case study involved a large Swiss bank. A lot of these banks won’t let us use their names, because some of this stuff is quite innovative for them. This large Swiss bank needed to rapidly implement the ability to verify the status of a new customer. They needed to do a KYC check against the World-Check system, and they needed a uniform set of API calls that could be initiated from the mainframe to go out and do those KYC checks.

Using Ivory Service Architect, they were able to deploy the APIs. They did both SOAP- and REST-based APIs without writing any code, either at the integration layer or at the mainframe layer. They were able to make the APIs ultimately accessible to all the systems within the bank going forward.

The result of that is they were able to meet the functional specifications required by the banking regulations, to check and make sure that these account holders weren’t terrorists or known criminals, and they did it in the specified time frame at a fraction of the cost and time to market that their traditional methods would have taken. Again, a really interesting use case that shows you the power of this ability to create what we call inbound APIs, but also, as part of that, to have those core banking systems call out to other external systems to do things like fraud detection, anti-money laundering, KYC, et cetera.

Just a few key takeaways from this, and maybe a next step for some of you. One of the big takeaways I’d walk away with is that the open banking landscape is changing very rapidly, and the time to get ahead of that curve is now. The last thing you want is to be sitting here a year from now saying, “We really should have started on this a year ago, because now we have line-of-business executives breathing down our necks to make this happen.” The time to get started is now, and like I said, it’s very early days here. In the open banking world, there’s going to be a lot of evolution. There’s going to be tremendous innovation happening in this space over the next several years. The time to get on this and get ahead of the curve is now. Not six months from now, not a year from now, not two years from now.

The second implication is that rapid, secure, reliable, low-latency legacy system integration is one of the biggest inhibitors that we see, working with our customers, to implementing open banking. They look at it and they say, “We’d love to be able to do this.” But the challenge is that it’s very slow. It’s very difficult to change these legacy systems, and it’s very difficult to build integrations with them.

They want anything they can have that will speed up that process and also ultimately lower the total cost of ownership. Because these things are going to change over time, they want a tool where they’re not maintaining a huge code base or pulling in a bunch of programmers every time they need to make a change. Being able to make those changes in a visual drag-and-drop, no-code interface and deploy them in a matter of days or weeks instead of months or years, that’s a critical success factor going forward.

Then finally, we can help you do this. We’ve got a really nice product called Ivory Service Architect that we’ve talked about. We’ve found customers can literally implement APIs five to ten times faster than with traditional methods, at a substantially lower cost, but also with a very significant total-cost-of-ownership benefit in terms of maintaining those APIs and modifying them as standards change and as the innovation landscape changes. The ability to do that very rapidly and very cost-effectively is a key success factor going forward.

Anybody can stand up here and talk about this. We’re willing to prove it to you. You can go to the URL on the screen, https://www.adaptigent.com/poc, and we can schedule a free proof-of-concept with you, where we can come in, show you how this works, and set up a very rapid proof of concept to show you that this does indeed work. We can do these things in a matter of days rather than weeks or months or years, and we’re happy to prove it to you. Please go to that URL and reach out to us if you are interested in that proof-of-concept. Thank you. With that, I’ll turn it back over to Simon.

Simon: Thank you guys. Thanks Chris, Alex. I think everybody would agree that was a very interesting and informative presentation, so thanks again. Also, thank you Alex for going through the key takeaways there. We’ve been running a poll in parallel, and your second point there, around legacy applications being inhibitors to change and successful integration, is echoed by the results of the poll as well. One of the questions asked whether people saw that as a barrier, and 100% of those that responded agreed with that statement. That was interesting and obviously echoes the key points from all your good years of experience there.

I think we’ve got just a few minutes remaining, and we’ve had some questions, including a really good one here. We’ve probably only got time for one, so let’s ask this question, and I’d like to hand over to both Chris and Alex to answer, so we get both of their views. The question that’s come in, again, is about legacy applications: if core banking systems tend to be very reliable, why are we seeing such low levels of availability with open banking APIs? Could I go first of all to Chris, please, to give us your perspective on that question?

Chris Michael: Yeah, sure. I think the actual APIs themselves, most of them are built in fairly modern technology, either using a well-known API vendor or built from scratch on a cloud platform such as AWS. The challenge has been that the API standards are a fairly new thing. Not APIs themselves, but these specific standards are a fairly new thing where it’s not just two parties, it’s three parties in the chain, so it’s complex to implement. The standards have been evolving, and the banks, which predominantly aren’t technology companies themselves, rely on vendors and third parties. Those vendors and third parties, and the banks themselves, have struggled with introducing change and with implementing several versions of the standards over a relatively short period of time.

They’ve all got different approaches to this, but the problem you’re seeing isn’t the underlying technology being unreliable. It’s the way it’s been implemented, and quite often it’s to do with the way that the banks have integrated their APIs into their core systems. That’s the weak point, and a lot of the loss of availability happens when the banks actually take their services down, when they take their API channel down to introduce a new change. Some of the banks have managed to introduce changes in more of a DevOps, continuous-deployment fashion, where they don’t have to take things down, and that’s why they also have a higher availability level. But it’s to do with the velocity of change and the challenge of how the API is actually integrated into the core systems. That’s, I think, the underlying reason.

Simon: Thank you Chris. And Alex, do we have your view?

Alex Heublein: Yeah, absolutely. I would echo what Chris said, and I would say probably the biggest reason is that this is hard. This is not an easy thing to do. You’re dealing with systems that are extremely reliable. They’re extremely robust. They’re extremely scalable. Those core systems where the data and the transactions reside, that stuff is rock solid. The big challenge is that, again, it was never designed to integrate with the outside world, so you can have a rock-solid core banking system but implement your APIs in a way that’s far less reliable.

I think there are also some architectural challenges there. There are many different ways of implementing APIs and many different architectures you can choose from to implement them. Some of them are more reliable than others, but what you’re ultimately getting down to is an optimization problem. You’ve got a set of variables you have to optimize for: security, reliability, scalability, low latency and so on. So, choosing the right architecture to implement those APIs is absolutely critical.

So, you see transaction volumes going up, you see latency going down, but you see reliability going down, or at best maintaining a relatively low level. You can see that people have optimized for a couple of the variables, but they haven’t optimized for the reliability variable. I think that’s one of the big challenges: some of the architectures that have been implemented are more traditional integration architectures rather than more modern, distributed integration architectures, and that’s where I think you see some of the challenges with reliability.

Simon: That’s great. Thank you very much both, and again, a very comprehensive answer; hopefully that answers the question and also addresses any related concerns. I think that brings us to the end of the webinar today. We’ve shown the ways to get in contact, and of course, there’s a great offer there from GT Software to move ahead and have it proved to you with a free POC. Okay, so thanks very much, and thank you for attending.

Chris Michael: Thank you.