Redis vs ElastiCache: What “support” actually means in production
When teams compare Redis and Amazon ElastiCache, the discussion usually starts with performance, cost, or scale. Those factors matter. But once Redis is on the critical path, another question becomes more important:
Who owns Redis when something breaks? That question sits at the center of Redis support, and it’s where managed Redis services start to differ in ways that matter in production.
For production systems, support isn’t a procurement detail. It shapes how incidents are handled, how quickly the root cause is identified, and how much risk teams take on as engines, versions, and policies change. The difference is easy to miss until Redis behavior, not infrastructure, becomes the problem.
This is where Redis Cloud and ElastiCache diverge in ways that matter in real systems. And if you also run Redis yourself, Redis Software follows the same Redis-defined support and lifecycle model, so you’re not switching support paradigms between managed and self-managed deployments.
What “managed” covers & what it doesn’t
Cloud-managed Redis services are built around infrastructure availability. Their service level agreements are designed to answer a specific question: was the managed service reachable?
That scope works when Redis (or a Redis alternative) is treated as a hosted engine and used as a best-effort cache. It becomes limiting when apps depend on Redis performance, consistency, or behavior under load, not just on whether an endpoint is reachable. Command semantics, replication behavior, persistence guarantees, and version-specific issues often sit outside the guarantees of infrastructure-focused SLAs.
ElastiCache is explicit about this boundary. Its SLA is defined in terms of ElastiCache availability as a managed service, not Redis command semantics or data-level outcomes. If the service meets its uptime target, the SLA is considered met, even if an incident is driven by Redis-specific behavior rather than an ElastiCache outage.
In practice, this means ElastiCache support is scoped to:
- Managed service availability
- Node and infrastructure health
- Cloud-level operational issues
It doesn’t cover Redis command behavior, data guarantees, or Redis-specific correctness issues.
This support boundary isn’t unique to AWS. Google Cloud Memorystore follows the same model. Its SLA, like ElastiCache’s, is written in terms of service uptime, not Redis behavior, command semantics, or data guarantees. In both cases, the cloud provider manages the platform around Redis, but doesn’t contractually own Redis itself.
That distinction matters during incidents. Infrastructure can be healthy while Redis behavior isn’t. In many production systems, Redis availability and data correctness are app-critical, even if teams didn’t initially treat them that way.
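To make "healthy infrastructure, unhealthy Redis" concrete, here is a minimal sketch of a Redis-level check that an uptime-scoped SLA never asks: are replicas actually keeping up with the primary? It parses the standard `INFO replication` fields (`master_repl_offset` and the per-replica `slaveN` lines). The sample text and the lag threshold are illustrative; in practice you would fetch the output from a live node with `redis-cli INFO replication` or a client library.

```python
# Sketch: flag replicas whose replication offset lags the primary.
# SAMPLE_INFO is a hypothetical INFO replication payload, not live data.

SAMPLE_INFO = """\
role:master
connected_slaves:2
slave0:ip=10.0.0.2,port=6379,state=online,offset=104857000,lag=0
slave1:ip=10.0.0.3,port=6379,state=online,offset=104700000,lag=1
master_repl_offset:104857600
"""

def lagging_replicas(info_text: str, max_byte_lag: int) -> list:
    """Return (replica_name, byte_lag) pairs behind the primary by more than max_byte_lag."""
    lines = dict(
        line.split(":", 1) for line in info_text.splitlines() if ":" in line
    )
    primary_offset = int(lines["master_repl_offset"])
    lagging = []
    for key, value in lines.items():
        if key.startswith("slave") and "offset=" in value:
            fields = dict(f.split("=", 1) for f in value.split(","))
            byte_lag = primary_offset - int(fields["offset"])
            if byte_lag > max_byte_lag:
                lagging.append((key, byte_lag))
    return lagging

print(lagging_replicas(SAMPLE_INFO, max_byte_lag=100_000))
```

In this sample, `slave1` is more than 100 KB of replication stream behind the primary: a condition that can violate app-level expectations while every node still reports as reachable and the infrastructure SLA is fully met.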
Security, patching, & ownership don’t come for free
Security is another place where the difference between Redis and cloud-managed Redis services becomes clear. ElastiCache and Memorystore have historically offered older engine versions and left customers to decide when and how to upgrade. Common Vulnerabilities and Exposures (CVEs) entries were often documented, but patching and enforcement weren’t owned by the cloud provider.
Redis Cloud and Redis Software take a stricter approach. We support modern Redis versions and actively backport security fixes to supported releases. Customers aren’t expected to track vulnerabilities, assess risk, or decide whether an upgrade is urgent. Security and patching are part of what we manage.
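When patching is the customer's job rather than the vendor's, it usually turns into scripts like the following: compare the engine version a node reports (the `redis_version` field in `INFO server`) against a team-maintained minimum that includes the security fixes you need. The version strings and the baseline here are illustrative examples, not a real advisory mapping.

```python
# Sketch: check a running engine version against a team-chosen
# minimum patched release. Versions shown are hypothetical examples.

def parse_version(version: str) -> tuple:
    """Parse a dotted version string like '7.2.4' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def needs_upgrade(running: str, minimum_patched: str) -> bool:
    """True if the running version is older than the minimum patched release."""
    return parse_version(running) < parse_version(minimum_patched)

# 'running' would come from the redis_version field of INFO server.
print(needs_upgrade("7.0.11", "7.2.4"))
```

The point isn’t that this check is hard to write; it’s that someone has to own it, keep the baseline current as CVEs land, and act on the result. Under an infrastructure-scoped support model, that someone is you.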
This matters even when Redis, or a Redis alternative, is “just a cache.” Caches routinely hold session data, tokens, and derived app state. A vulnerable cache is still a vulnerable system.
Valkey introduces a support & compatibility break
ElastiCache now uses Valkey for newer engine versions. Valkey is a fork that started from Redis version 7.2.4 but is now being developed independently from Redis; the two codebases are already diverging and are expected to continue to do so over time.
This affects support in practical ways. Feature delivery, bug fixes, and behavior no longer align one to one with Redis releases. Over time, compatibility differences become more likely, especially for teams that rely on specific Redis features or operational characteristics.
AWS documents extended support for older Redis Open Source (OSS) versions as a temporary measure, with fixed end dates. These policies are designed to give customers time to migrate as engines change. For teams running Redis as a core system, this introduces long-term uncertainty around upgrade paths, behavior changes, and support coverage.
For Redis and Redis fork users, this raises new questions:
- How closely Valkey behavior will continue to match Redis
- How Redis-specific issues are escalated and resolved
- What happens when behavior differs but infrastructure is healthy
At the same time, AWS support remains scoped to the managed service layer. Engine semantics, whether Redis or Valkey, fall outside what AWS owns under its SLA. Valkey is open source, but open source doesn’t mean supported. There’s no vendor on the hook for behavior-level issues, no formal escalation path, and no contractual accountability. When infrastructure is healthy but engine behavior isn’t, that gap shows up fast.
Redis defines Redis support & owns Redis outcomes
Redis Cloud and Redis Software take a different approach. Redis is treated as a production platform to be actively operated, secured, and supported, not just a service to be kept online.
Redis publishes its own support policy and service level agreement, and together they define what Redis enterprise support actually covers in production: response targets for each support tier, formal escalation, availability commitments, and ownership of Redis-level behavior. Escalation goes to Redis engineers who work on Redis and understand its internals, not just the surrounding infrastructure. The same model applies if you run Redis in your own environment: Redis Software uses the same Redis-owned framework for support tiers, response times, escalation, and end-of-life as Redis Cloud.
For production Redis Cloud deployments, Redis commits to availability of up to 99.999% (Active-Active), 99.99% (Multi-AZ), or 99.9% (Standard), with service credits if those targets aren’t met. Additionally, the Redis Cloud Support Policy defines 24x7 support, P1 response times as low as 15 minutes, and formal escalation and RCA processes for Enterprise tiers.
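To put those percentages in perspective, it helps to translate availability targets into downtime budgets. The arithmetic below is illustrative only; actual SLA terms define exactly how availability is measured and how credits apply.

```python
# Illustrative arithmetic: downtime budget implied by an availability
# target over a 30-day month. Not a statement of any SLA's measurement rules.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def downtime_budget_minutes(availability_pct: float) -> float:
    """Maximum minutes of downtime per 30-day month at a given availability."""
    return MINUTES_PER_MONTH * (1 - availability_pct / 100)

for target in (99.9, 99.99, 99.999):
    print(f"{target}% -> {downtime_budget_minutes(target):.2f} min/month")
```

Roughly: 99.9% allows about 43 minutes of downtime per month, 99.99% about 4.3 minutes, and 99.999% under 30 seconds. The gap between tiers is an order of magnitude each step, which is why the scope of what the SLA actually covers matters as much as the number itself.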
This matters when diagnosing issues like replication edge cases, persistence behavior, failover timing, or regressions tied to specific versions. The question being answered is whether Redis behaved correctly and how to fix it, not only whether an endpoint was reachable.
Redis support stays consistent across clouds
Support models also differ from cloud to cloud, and those differences shape how Redis and Redis alternatives are operated at scale across environments.
ElastiCache support is tied to AWS. Memorystore support is tied to Google Cloud. Each service inherits its provider’s SLA definitions and support model. The same workload can have different availability SLAs and support scopes depending on which cloud-managed service it runs on. For teams operating Redis and Redis forks across environments, this fragmentation increases operational risk during incidents.
Redis Cloud runs on AWS, Google Cloud, and Azure, but support is defined once by Redis. The same support policy, response targets, and escalation paths apply regardless of cloud. This reduces ambiguity and shortens incident response when Redis is part of a multi-environment architecture. For teams that also need to run Redis themselves—on-prem or in their own cloud accounts—Redis Software extends that same Redis-defined support model into self-managed environments.
Why this shows up during incidents
Support differences are rarely visible when systems are healthy. They surface under pressure.
How quickly issues are acknowledged. Who owns root-cause analysis. Whether fixes come from engineers who understand Redis behavior or from general infrastructure triage. These details directly affect recovery time and confidence in the platform.
Cloud-managed forks are optimized for infrastructure convenience. Redis support is optimized for Redis reliability in production.
The takeaway
If Redis is foundational to your system, support is part of your architecture.
ElastiCache offers a strong availability SLA for its managed cache service on AWS, but that SLA is scoped to ElastiCache infrastructure. Redis provides Redis-first SLAs and support policies, with explicit commitments around Redis availability and production support across the clouds where Redis Cloud runs. As Valkey continues to evolve independently from Redis, that difference in ownership and support model becomes more important, not less.
Whether you choose fully managed Redis Cloud or self-managed Redis Software, you can make Redis support—not a fork or a generic cloud support team—part of your reliability architecture. Reachability and accountability aren’t the same thing. In production, that difference matters.
Want to compare support policies directly? Talk to a Redis expert or review the Redis Cloud support SLA.
Get started with Redis today
Speak to a Redis expert and learn more about enterprise-grade Redis today.
