NVIDIA
Solutions Architect, LLM Model Builder

Location: United States of America
Job type: Full time

Job description
NVIDIA is seeking an outstanding Solutions Architect, Foundation Models to join our growing team focused on partner enablement for reasoning models, multimodal models, and production inference! In this role, you will act as both a strategic technical expert and a hands-on advisor, helping partners build, benchmark, fine-tune, optimize, and deploy foundation model solutions for customer workloads.
The Partner Solutions Architecture team acts as a trusted advisor to the ecosystem. We enable partners to translate customer requirements into architectures, benchmark recipes, cluster test plans, compute sizing, and production readiness—accelerating time to value through the full-stack accelerated computing platform.
What you'll be doing:
Serve as the lead technical advisor for partners delivering reasoning, multimodal, fine-tuning, and model-serving solutions.
Guide partners to the right approach for customer workloads across fine-tuning, distillation, quantization, compression, benchmarking, and evaluation.
Define benchmark plans, synthetic data and evaluation workflows, and repeatable validation recipes.
Advise on compute planning, including cluster sizing, GPU and network selection, storage, memory tradeoffs, latency and throughput targets, and production-readiness testing.
Guide inference architecture across prefill and decode tradeoffs, batching, routing, disaggregated inference, and serving efficiency.
Develop reference architectures, playbooks, benchmark recipes, TCO calculators, and sizing models across CUDA, NeMo, Nemotron, Dynamo, TensorRT-LLM, Triton, NIMs, and related tooling.
Support pre- and post-sales engagements by translating complex model and infrastructure topics for partner and customer teams.
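As a flavor of the sizing models mentioned above, here is a minimal sketch of a per-GPU serving-memory estimator of the kind this role might build for partners. All model and precision numbers are illustrative assumptions, not a reference for any specific NVIDIA product; a real sizing model would also account for activations, framework overhead, and parallelism strategy.

```python
# Illustrative sizing sketch: estimate per-GPU memory needed to serve an LLM.
# All model/hardware numbers below are assumptions for demonstration only.

def serving_memory_gb(params_b: float, layers: int, kv_heads: int,
                      head_dim: int, context_len: int, batch: int,
                      weight_bytes: float = 2.0, kv_bytes: float = 2.0) -> float:
    """Rough memory estimate (GB): weights + KV cache, ignoring activations
    and runtime overhead."""
    weights = params_b * 1e9 * weight_bytes
    # KV cache: 2 (K and V) * layers * kv_heads * head_dim * tokens * batch * bytes
    kv_cache = 2 * layers * kv_heads * head_dim * context_len * batch * kv_bytes
    return (weights + kv_cache) / 1e9

# Example: a hypothetical 8B-parameter model (32 layers, 8 KV heads,
# head_dim 128) serving batch 16 at 8k context, FP16 weights and KV cache.
mem = serving_memory_gb(8, 32, 8, 128, 8192, 16)
```

A calculator like this makes the tradeoffs in the bullet above concrete: quantizing the KV cache (lowering `kv_bytes`) or shrinking context length directly trades memory for batch size, which in turn moves the latency and throughput targets.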
What we need to see:
MSc or PhD in Computer Science, Electrical Engineering, Software Engineering, Machine Learning, or a related field (or equivalent experience).
5+ years of relevant experience with LLMs, VLMs, and large-scale inference systems, including hands-on fine-tuning, benchmarking, evaluation, optimization, and production deployment as a Research Engineer, Deep Learning Engineer, or equivalent.
Strong understanding of foundation models across data preparation, fine-tuning, post-training, evaluation, and inference.
Familiarity with reasoning models, reinforcement learning, and synthetic data generation and evaluation workflows.
Strong programming skills in Python and hands-on experience with PyTorch, JAX, or TensorFlow.
Familiarity with Nemotron, NeMo, Dynamo, TensorRT-LLM, Triton, vLLM, and similar inference and optimization stacks.
Strong communication and presentation skills, with the ability to advise both technical teams and executives.
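The benchmarking and evaluation experience above can be illustrated with a minimal harness sketch: a generic wrapper that measures per-request latency and aggregate token throughput for any generation callable. The mock `generate_fn` below is a hypothetical stand-in, not a real model API; production benchmark recipes would add warmup runs, concurrency, and percentile breakdowns.

```python
import statistics
import time

def benchmark(generate_fn, prompts, runs: int = 3) -> dict:
    """Measure per-request latency and aggregate token throughput.
    `generate_fn` is any callable taking a prompt and returning the
    number of tokens it produced."""
    latencies, tokens = [], 0
    start = time.perf_counter()
    for _ in range(runs):
        for prompt in prompts:
            t0 = time.perf_counter()
            tokens += generate_fn(prompt)
            latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "p50_latency_s": statistics.median(latencies),
        "throughput_tok_s": tokens / elapsed,
    }

# Mock stand-in for a model endpoint: pretends to emit 64 tokens per prompt.
stats = benchmark(lambda p: 64, ["hello"] * 4)
```

Swapping the mock for a real serving endpoint (e.g. one behind an inference server) turns this into a repeatable validation recipe of the kind the role describes.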
Ways to stand out from the crowd:
Experience helping partners or customers deploy large-scale AI systems in production.
Built benchmark suites, fine-tuning recipes, sizing calculators, or TCO models for AI workloads.
Strong knowledge of GPU infrastructure, including NVLink, InfiniBand, MPI, NCCL, or adjacent cluster technologies.
Active OSS contributions in model tooling, inference, evaluation, or performance optimization.
Comfortable moving between deep technical reviews, architecture guidance, benchmarking, and partner enablement.
You will also be eligible for equity and benefits.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.