Data Caching Engine
A distributed, in-memory data cache designed to dramatically reduce mainframe integration load, improve performance, and lower costs.
Intelligent Caching Engine by Adaptigent
Your Legacy Crisis Solved
While mainframe systems remain among the most secure and highest-volume transaction processing solutions available, their operating costs continue to rise year after year. Many companies are looking for innovative ways to curb these growing costs while continuing to deliver critical solutions to the business. This is particularly true when building external APIs into mainframe applications, since those APIs open up the potential for even greater processing power consumption.
Not Your Typical Cache
Adaptigent’s Intelligent Caching Engine is a first-of-its-kind, pre-emptive cache designed to reduce mainframe integration load and operational costs while improving API response times by 10X to 50X for data and transaction calls.
We do this by temporarily storing the results of repetitive requests for data in a distributed memory cache running on commodity hardware. It puts full control in the hands of users to explicitly define which data elements are cacheable ahead of time, with an unprecedented level of granularity, using a truly no-code visual interface.
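The core idea of temporarily storing results with a defined lifetime can be illustrated with a minimal in-memory cache that expires entries after a time-to-live. This is a conceptual sketch only, not Adaptigent's implementation; the class and key names are hypothetical, and a production distributed cache would shard entries across nodes.

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiration (time-to-live)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # entry has expired; evict it
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

# Hypothetical usage: cache a customer-inquiry result for 60 seconds so
# repeat API calls skip the mainframe round trip.
cache = TTLCache(ttl_seconds=60)
cache.put("CUSTINQ:42", {"name": "Ada"})
```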
Reduce API response times up to 50X!
In-memory distributed caching solution for data coming back from the mainframe
Intelligent and explicit caching with support for dynamic and pre-emptive caching strategies
Truly no-code, drag-and-drop interface
Designed to increase performance and reduce peak mainframe load
Traditional caching engines use a naïve proxy cache that either caches an entire transaction or none of it. Our caching engine uses workflows that let users enable and tune caching policies at the level of individual mainframe transactions, with fine-grained control.
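Per-transaction policies can be pictured as a mapping from transaction identifiers to caching rules, in contrast to an all-or-nothing proxy. The sketch below is illustrative only; the transaction names and policy fields are assumed, not taken from the product.

```python
from dataclasses import dataclass

@dataclass
class CachePolicy:
    cacheable: bool
    ttl_seconds: int = 0

# Hypothetical per-transaction rules: each mainframe transaction gets its
# own policy instead of caching every response identically.
POLICIES = {
    "CUSTINQ": CachePolicy(cacheable=True, ttl_seconds=300),  # slow-changing lookup
    "ACCTBAL": CachePolicy(cacheable=True, ttl_seconds=30),   # short-lived balance data
    "PAYMENT": CachePolicy(cacheable=False),                  # never cache payments
}

def policy_for(transaction_id):
    # Unknown transactions default to not cacheable -- the safe choice.
    return POLICIES.get(transaction_id, CachePolicy(cacheable=False))
```

Defaulting unknown transactions to "not cacheable" mirrors the explicit, opt-in model described above: nothing is cached unless the designer says so.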
The engine can pre-emptively cache data defined by the integration designer, allowing you to pre-load the cache with highly cacheable data at periodic intervals in order to offload more mainframe processing when demand is high.
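Pre-emptive caching amounts to refreshing a known set of highly cacheable keys on a schedule, before demand arrives. A minimal sketch, assuming a dict-like cache and a hypothetical `fetch_from_mainframe` function:

```python
def preload(cache, fetch_from_mainframe, hot_keys):
    """Refresh highly cacheable entries ahead of demand (pre-emptive caching)."""
    for key in hot_keys:
        cache[key] = fetch_from_mainframe(key)

# A scheduler (cron, a timer thread, etc.) would invoke preload() at
# periodic intervals -- ideally when mainframe demand is low -- so peak-hour
# API traffic is served from the warm cache instead of the mainframe.
```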
It also supports partial caching, in which a single API request may have cacheable and non-cacheable elements to it. The runtime environment will intelligently pull data from the cache and live data from the mainframe to fulfill the API request.
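The partial-caching idea can be sketched as a merge step: serve whatever fields the cache holds, and make one live call for the rest. The function and parameter names below are hypothetical, shown only to make the cache/live split concrete.

```python
def fulfill_request(fields, cache, fetch_live):
    """Serve an API request whose fields mix cacheable and non-cacheable data."""
    result, missing = {}, []
    for field in fields:
        value = cache.get(field)
        if value is not None:
            result[field] = value   # served from the cache
        else:
            missing.append(field)   # must come from the mainframe
    if missing:
        # One live call covers all non-cached fields in the request.
        result.update(fetch_live(missing))
    return result
```

In this sketch a request for a customer name (cached) and a current balance (live) triggers a single mainframe call for the balance only.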
The Future of Core Systems Performance
Adaptigent’s Intelligent Caching Engine provides a unique, pre-emptive, distributed cache that allows you to pre-load highly cacheable data with defined expiration policies at periodic intervals when mainframe demand is relatively low. This allows you to offload more mainframe processing when demand is high.
What can Adaptigent do for you?
If you're interested in expanding your capabilities around core systems and legacy technology, we should talk. Send us your contact info and someone will reach out shortly.