Valkey: Open Source Caching, Performance, and Strategic Tech Decisions

The InfoQ Podcast · Feb 09, 2026 · English · 5 min read

Explore Valkey's evolution from a Redis fork, its advanced caching capabilities, and deep technical optimizations for memory efficiency and throughput.

Key Insights

  • Insight

    Valkey emerged as an open-source fork of Redis 7.2 in response to Redis's license change in 2024, founded by a collective of engineers from major tech companies under the Linux Foundation.

    Impact

    This ensures the continued availability and community-driven development of a permissive open-source caching solution, offering an alternative for businesses impacted by Redis's license shift.

  • Insight

    Valkey provides a drop-in replacement for Redis open source 7.2, maintaining full backwards compatibility and supporting seamless online upgrades via managed services such as Amazon ElastiCache and GCP Memorystore.

    Impact

    Facilitates low-risk migration for existing Redis users, minimizing operational disruption and accelerating adoption in enterprise environments.

  • Insight

    Valkey differentiates itself by offering complex data types for values and comprehensive features such as horizontal clustering, replication, durability, and observability, going beyond a simple key-value store.

    Impact

    Enables more sophisticated application architectures and use cases, providing a versatile and robust in-memory data store for diverse enterprise needs.

  • Insight

    Valkey's hash table modernization, begun in 2023 and completed late last year, achieved up to 40% memory reduction and 20-30% higher throughput for specific workloads, primarily through memory compaction and improved collision resolution.

    Impact

    Translates to lower operational costs, increased data density, and enhanced performance per core, allowing systems to scale more efficiently and handle higher request volumes.

  • Insight

    Valkey's performance measurement focuses heavily on throughput rather than raw latency, since network latency typically dominates command execution time for its microsecond-level operations.

    Impact

    Guides optimization efforts towards maximizing requests per second and efficient resource utilization, crucial for high-volume caching scenarios and cost-sensitive cloud deployments.

  • Insight

    The core Valkey infrastructure, written in C, will likely remain in C due to the high risks and uncertain benefits of rewriting a well-tuned system in Rust, while Rust is preferred for new modules and extensions.

    Impact

    Informs strategic technology adoption, advising against wholesale rewrites of established, high-performance C codebases, but promoting Rust for new development to leverage its safety and modern features effectively.
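The throughput-over-latency insight can be made concrete with back-of-the-envelope arithmetic. The sketch below uses illustrative numbers of our own choosing (1 µs command execution, 100 µs network round trip), not figures from the episode:

```python
# Back-of-the-envelope model: why throughput, not latency, is the lever
# for a microsecond-scale cache. All input numbers are illustrative
# assumptions, not measurements.

def unpipelined_latency_us(service_us: float, rtt_us: float) -> float:
    """Client-observed latency for one command: network round trip
    plus server-side execution time."""
    return service_us + rtt_us

def max_throughput_per_core(service_us: float) -> float:
    """Commands/sec one core can execute if kept busy (via pipelining
    or many connections), independent of network round-trip time."""
    return 1_000_000 / service_us

service_us = 1.0   # assumed in-memory command execution time
rtt_us = 100.0     # assumed datacenter network round trip

latency = unpipelined_latency_us(service_us, rtt_us)
throughput = max_throughput_per_core(service_us)

print(f"client-observed latency: {latency:.1f} µs "
      f"({100 * service_us / latency:.1f}% spent in the server)")
print(f"per-core throughput ceiling: {throughput:,.0f} commands/sec")
# Shaving the 1 µs service time barely moves client latency (the network
# dominates), but it directly raises the per-core throughput ceiling —
# hence the focus on requests per second per core.
```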

Key Quotes

"Valkey is a drop-in replacement to Redis open source 7.2."
"When we talk about performance, we're always talking actually about throughput."
"Deep core infrastructure that's already built and well tuned should probably stick around, but you should at least try building new things in Rust for sure."

Summary

The Unforeseen Genesis of Valkey: A Fork Born from Licensing Shifts

In the dynamic landscape of open-source software, pivotal moments often shape the future of technology. The decision by Redis in March 2024 to transition from the permissive BSD license to the more restrictive SSPL and RSAL licenses sent ripples through its vibrant developer community. This move sparked a rapid, community-driven response, leading to the creation of Valkey. Within a mere eight days, a coalition of engineers from leading tech companies including Alibaba, Ericsson, Tencent, Huawei, and Google, alongside former Redis maintainers, established Valkey under the Linux Foundation. This initiative ensured the continuity of collaborative development on a truly open-source platform, forging a drop-in replacement that has since seen multiple major releases and widespread adoption across managed service providers.
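Drop-in compatibility is possible because the fork speaks the same wire protocol (RESP) as Redis open source, so existing client libraries connect unchanged. As a minimal sketch, this is how any client frames a command in RESP; the helper name is ours, but the framing follows the published protocol:

```python
# Minimal RESP (REdis Serialization Protocol) command encoder.
# A client that frames commands this way talks to Redis OSS 7.2 and
# Valkey alike, which is what makes the fork a drop-in replacement.

def encode_command(*args: str) -> bytes:
    """Frame a command as a RESP array of bulk strings."""
    out = [b"*%d\r\n" % len(args)]          # array header: argument count
    for arg in args:
        data = arg.encode("utf-8")
        out.append(b"$%d\r\n" % len(data))  # bulk-string header: byte length
        out.append(data + b"\r\n")          # payload
    return b"".join(out)

# A plain string SET and a richer data type (a set) use the same framing:
print(encode_command("SET", "session:42", "alice"))
print(encode_command("SADD", "active_users", "alice", "bob"))
```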

Beyond Simple Key-Value: The Architectural Prowess of Valkey

Valkey, while often perceived as a straightforward key-value store or hash map over TCP, distinguishes itself through its sophisticated architecture and rich feature set. Its real power lies in supporting complex data types for values, enabling use cases far beyond basic string storage—think sets for user session tracking or advanced data structures for real-time analytics. Furthermore, Valkey's robust core encompasses critical enterprise-grade capabilities: horizontal clustering, high-availability replication, data durability, and comprehensive observability. These foundational elements transform it from a mere data structure into a full-fledged, scalable, and resilient product capable of underpinning demanding applications.

Deep Dive into Performance: The Hash Table Modernization Journey

A testament to Valkey's commitment to efficiency is its recent hash table modernization, a significant engineering undertaking that commenced in 2023. This effort targeted inefficiencies stemming from an older design, primarily addressing issues like independent memory allocations, linked-list-based collision resolution, and suboptimal utilization of modern hardware capabilities. By compacting memory structures, embedding key and value objects directly, and adopting sophisticated collision resolution techniques (inspired by "Swiss tables" using SIMD instructions), Valkey achieved substantial memory savings—up to 40% in specific workloads—without performance regressions in its core key-value operations. For many internal workloads, throughput improvements of 20-30% were observed. This critical work underscores a nuanced approach to performance, prioritizing throughput over raw latency, especially given that network latency often dominates in distributed caching systems.
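The collision-resolution idea can be sketched in miniature. The toy class below is our illustration, not project code: entries live in one flat array (no per-entry allocations or pointer chains), and a parallel byte array keeps a 7-bit fragment of each key's hash, so most probes are rejected by a cheap byte comparison before the key is ever loaded. This is the Swiss-table idea; a real C implementation scans groups of these control bytes with SIMD instructions rather than one at a time.

```python
# Toy open-addressing hash table with Swiss-table-style control bytes.
# Compare with chaining: no linked lists, no separate entry allocations;
# collisions are resolved by linear probing over a flat slot array.

EMPTY = 0x80  # control byte marking a free slot (outside the 7-bit range)

class SwissStyleTable:
    def __init__(self, capacity: int = 16):
        self.capacity = capacity
        self.ctrl = bytearray([EMPTY] * capacity)  # 1 control byte per slot
        self.slots = [None] * capacity             # (key, value) embedded

    def _fragment(self, h: int) -> int:
        return h & 0x7F  # low 7 bits of the hash, always distinct from EMPTY

    def _probe(self, key):
        h = hash(key)
        frag = self._fragment(h)
        i = h % self.capacity
        for _ in range(self.capacity):
            c = self.ctrl[i]
            if c == EMPTY:
                return i, frag, False        # free slot reached: key absent
            if c == frag and self.slots[i][0] == key:
                return i, frag, True         # fragment AND full key match
            i = (i + 1) % self.capacity      # linear probe to next slot
        raise RuntimeError("table full")

    def put(self, key, value):
        i, frag, _ = self._probe(key)
        self.ctrl[i] = frag
        self.slots[i] = (key, value)

    def get(self, key, default=None):
        i, _, found = self._probe(key)
        return self.slots[i][1] if found else default

t = SwissStyleTable()
t.put("user:1", "alice")
t.put("user:2", "bob")
print(t.get("user:1"))  # prints alice
```

The payoff is that a miss on the 1-byte fragment skips the key comparison entirely, and packing those bytes contiguously is what lets hardware check many slots per instruction.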

Strategic Technology Choices: C vs. Rust for Core Infrastructure

The discussion around Valkey's codebase, predominantly written in C, also highlights a pragmatic stance on technology migration. While Rust is acknowledged for its modern safety and performance benefits, the Valkey team emphasizes the considerable risks and limited immediate benefits of rewriting a highly optimized, well-tuned C core. Instead, they advocate for a strategic approach: leverage Rust for new modules, extensions, and areas where its strengths truly shine (e.g., LDAP authentication via an SDK), but maintain established core infrastructure in its proven language. This approach balances innovation with stability, ensuring performance and avoiding unnecessary project risks.

Conclusion: A Resilient Future for Open-Source Caching

Valkey stands as a prime example of open-source resilience and innovation. Born from a necessity to preserve community values, it has rapidly evolved into a technically advanced and enterprise-ready caching solution. Its ongoing development, guided by a vendor-neutral technical steering committee and a focus on deep performance engineering, ensures its continued relevance for businesses seeking efficient, scalable, and community-driven data infrastructure.

Action Items

For organizations currently using or considering Redis open source 7.2, evaluate Valkey as a direct, fully compatible, and actively maintained open-source replacement.

Impact: Ensures adherence to open-source principles while leveraging a robust, community-driven caching solution, mitigating risks associated with licensing changes.

Investigate managed Valkey services, such as Amazon ElastiCache or GCP Memorystore, for simplified deployment, seamless upgrades, and built-in high availability.

Impact: Reduces operational overhead and infrastructure management complexities, allowing engineering teams to focus more on application development rather than infrastructure maintenance.

When assessing in-memory data stores for high-throughput applications, prioritize solutions that demonstrate strong throughput performance per core and efficient memory utilization.

Impact: Optimizes infrastructure costs and ensures the system can handle peak loads effectively, leading to better resource allocation and user experience.

Adopt a hybrid language strategy for large-scale software projects: maintain highly optimized existing core infrastructure in proven languages (e.g., C) and use modern languages (e.g., Rust) for new modules, extensions, or less performance-critical components.

Impact: Maximizes efficiency and stability by leveraging the strengths of different programming paradigms while mitigating the risks and costs associated with extensive rewrites.

Engage with the Valkey open-source community through its blog and Slack channels to stay informed on technical advancements and contribute to its evolution.

Impact: Fosters a collaborative environment, enables direct access to expertise, and influences the roadmap of a critical open-source technology relevant to your stack.

Mentioned Companies

  • Linux Foundation — instrumental in the creation and governance of the Valkey project, providing a neutral and supportive environment.

  • Ericsson — contributed an engineer to Valkey's creation and uses Valkey in telecommunication equipment, showcasing real-world enterprise adoption.

  • Google — contributed an engineer to Valkey's creation; GCP Memorystore offers Valkey support, indicating strong endorsement.

  • Alibaba — home of one of Valkey's co-creators, highlighting multi-vendor collaboration.

  • Tencent and Huawei — each contributed an engineer to Valkey's creation, demonstrating broad industry support.

  • Cloud providers — support Valkey with managed offerings and seamless upgrade paths, and invest in building secure, highly reliable features for the engine.

  • Third-party providers — offer managed Valkey services, expanding the ecosystem and further validating its market presence.

Keywords

Valkey, database, Redis fork, in-memory data store, hash table performance, open source migration, caching architecture, throughput optimization, Rust vs C development, cloud caching services, enterprise caching solutions