Momento Announces Serverless Caching Service


Startup Momento recently came out of stealth mode by launching a serverless cache. The new service is generally available and offers a highly available cache with an on-demand pricing model based on data transferred in/out.

Created by former AWS employees, Momento Serverless Cache is an elastic service capable of handling dynamic traffic bursts of up to millions of requests per second. Daniela Miao and Khawaja Shams, co-founders of Momento, explain:

Today we have serverless compute, storage, databases, queues and streams. Everything except caching. It is a problem. A cache is essential when building interactive applications that can handle dynamic bursts of traffic. With existing caching solutions, provisioning a cache requires tackling a lot of error-prone configurations and learning common lessons, one preventable outage at a time.

Momento highlights different use cases for the new serverless option: a cache for DynamoDB and other NoSQL databases such as MongoDB or Cassandra, a caching layer for relational databases and serverless applications, a fast object store, or a serverless data store.

On the documentation side, Momento Serverless Cache can be used via the CLI and open source SDKs for Go, Java, JavaScript/Node.js, Python, .NET, Rust, and PHP. The configure command of the CLI is used to set the cache name and the default TTL for cache entries, while the cache command is used to interact with records. For instance:

$ momento cache set --key infoq --value news
$ momento cache get --key infoq

According to the founders, with a single API call and “five lines of code”, developers can create a secure and highly available cache, relying on either a JSON API over HTTP or a custom RPC protocol. The SimpleCache client object uses gRPC to communicate with the Momento service.
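For illustration, the “five lines of code” flow could look like the following Python sketch. The client here is a dict-backed, in-memory stand-in modeled on the CLI commands above, not the actual Momento SDK, whose class and method names may differ.

```python
import time

class SimpleCacheStub:
    """Dict-backed stand-in for a Momento-style SimpleCache client (illustrative only)."""

    def __init__(self, default_ttl_seconds):
        self.default_ttl = default_ttl_seconds
        self.store = {}  # cache_name -> {key: (value, expiry timestamp)}

    def set(self, cache_name, key, value, ttl_seconds=None):
        # Every entry carries a TTL, falling back to the client-wide default.
        ttl = ttl_seconds or self.default_ttl
        self.store.setdefault(cache_name, {})[key] = (value, time.time() + ttl)

    def get(self, cache_name, key):
        entry = self.store.get(cache_name, {}).get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:
            del self.store[cache_name][key]  # lazily expire stale entries
            return None
        return value

client = SimpleCacheStub(default_ttl_seconds=60)
client.set("demo-cache", "infoq", "news")
print(client.get("demo-cache", "infoq"))  # -> news
```

The same set/get round trip mirrors the CLI example above, with the default TTL playing the role of the value set by the configure command.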


Speaking about their experience at AWS, Miao and Shams add:

We saw this discrepancy when we observed similar outages across Amazon teams, AWS customers, and our own experience adding caches to our stacks. Creating a DynamoDB table is a one-time API call, but adding a cache is measured in work sprints.

Alex DeBrie, an AWS Data Hero and author of The DynamoDB Book, explains on the Momento blog why a cache might be necessary to speed up DynamoDB:

There are certain types of applications where DynamoDB partition throughput limits are a problem. Think of social media apps like Twitter or Reddit where popular tweets or threads can get millions of impressions in a short time. These are examples of Zipfian distributions where the most popular items are viewed orders of magnitude more than the average item. Because DynamoDB wants a more even distribution of your data, your application may be throttled when trying to access popular items.
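The usual remedy for such hot keys is the cache-aside (read-aside) pattern: check the cache first and fall back to the database only on a miss, so a popular item is served from the cache instead of repeatedly hitting one DynamoDB partition. A minimal sketch, with an in-memory dict standing in for the remote cache and a function standing in for the DynamoDB call:

```python
cache = {}               # stand-in for a remote cache such as Momento
db_reads = {"count": 0}  # track how often the database is actually hit

def load_from_dynamodb(key):
    """Stand-in for a DynamoDB GetItem call (illustrative only)."""
    db_reads["count"] += 1
    return {"id": key, "body": f"item {key}"}

def get_item(key):
    # Cache-aside: try the cache first, fall back to the database on a miss,
    # then populate the cache so later reads of the hot key are absorbed.
    value = cache.get(key)
    if value is None:
        value = load_from_dynamodb(key)
        cache[key] = value
    return value

for _ in range(1000):     # 1,000 reads of one hot key...
    get_item("popular-tweet")
print(db_reads["count"])  # ...but only 1 database read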

Tom Killalea, MongoDB board member, tweets:

Running a large-scale, highly resilient cache is one of those tasks that most developers don’t want to undertake. Thanks to the experts at Momento, they no longer have to.

Announcing the general availability of the service, Momento released a guide to caching strategies and patterns, outlining the options and common challenges involved in deciding where to cache (local or remote), when to cache (on read or on write), and how to cache (inline or aside).

Default service limits for the new service include an API rate of 100 requests/second per cache, a throughput of 1 MB/s, and a maximum TTL of one day for cache entries.

Momento has an on-demand pricing model: developers do not select a cluster size in advance, and charges are based on usage. The service costs $0.15/GB for data transferred in or out, with the first 50 GB free every month.
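As a worked example of this pricing, using only the numbers above: an application transferring 200 GB in a month pays for 150 GB after the free tier, i.e. 150 × $0.15 = $22.50.

```python
def monthly_cost(gb_transferred, rate_per_gb=0.15, free_gb=50):
    """On-demand cost for data transferred in/out, first 50 GB free each month."""
    billable = max(0, gb_transferred - free_gb)
    return billable * rate_per_gb

print(monthly_cost(200))  # 150 billable GB at $0.15/GB -> 22.5
print(monthly_cost(40))   # within the free tier -> 0.0
```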

