Company: Nvidia
Role: Senior Solutions Architect - KV Cache and AI Storage
Location: China
Job type: Full time
Posted: 3 hours ago

Job description
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
At NVIDIA, we are driving advancements in AI and accelerated computing, pushing the boundaries of what's possible. We are looking for a Senior Solutions Architect - KV Cache and AI Storage to join our dynamic team. You will collaborate closely with our largest customers to build next-generation LLM inference platforms powered by NVIDIA GPUs, Dynamo/KVBM, and CMX. This is an outstanding opportunity to help shape the future of AI storage and context-memory products!
What you'll be doing:
Lead technical exploration with customer architects to understand models, frameworks, SLOs, and KV cache usage patterns.
Build end-to-end KV cache solutions using tiered memory and modern NVIDIA networking technologies.
Analyze performance profiles, identify bottlenecks, and drive PoCs and benchmarks to validate improvements.
Translate customer pain points into clear feature requests and roadmap input for NVIDIA products.
Build reference architectures and best-practice guides, and deliver tech talks to support our field teams and customers.
What we need to see:
Bachelor's degree or higher in Computer Science or a related field with strong systems or storage background.
5+ years of relevant experience, including 2+ years focused on KV stores/caches or storage backends.
Hands‑on experience with distributed storage, caching, or large‑scale backend systems.
Solid understanding of Transformer / LLM inference and KV cache concepts, plus experience with at least one LLM serving stack (for example vLLM, TensorRT‑LLM, or SGLang).
Strong knowledge of NVMe SSDs, KV SSDs, and modern storage servers, including controller/firmware behavior and I/O characteristics.
Practical experience with tiered memory and KV cache optimizations such as offloading (HBM → DRAM → NVMe), eviction/selection strategies, compression/quantization, or attention‑level optimizations.
Familiarity with at least one large‑scale storage or caching system (such as Ceph, Redis, Cassandra, RocksDB‑based KV, object storage, or distributed logs).
Ways to stand out from the crowd:
Experience building or running LLM inference platforms or large‑scale online services in cloud or internet companies (multi‑tenant, quota, cost control).
Development experience with KV cache subsystems in file systems, user‑space storage engines, or memory/cache managers, or building custom KV stores/cache layers optimized for AI/LLM.
Exposure to NVIDIA technologies such as Triton Inference Server, TensorRT‑LLM, NeMo, Dynamo/KVBM, BlueField / DOCA, GPUDirect Storage, Spectrum‑X, or CMX.
Public talks, papers, blogs, or open‑source work in LLM inference, KV cache, or storage systems.
With competitive salaries and a generous benefits package, we are widely considered to be one of the world’s most desirable employers! We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our best-in-class engineering teams are rapidly growing. If you're a creative and autonomous person with a real passion for technology, we want to hear from you.