Centific

AI Engineer - Speech/Audio

Remote | Full-time | $135k - $145k/year | Posted today via LinkedIn

Job description

About Centific AI Research

Centific AI Research is at the forefront of developing cutting-edge AI solutions that bridge the gap between research innovation and real-world applications. Our team of scientists and engineers works collaboratively to create impactful technologies across speech, audio, and multimodal AI domains. We are committed to building responsible AI systems that deliver measurable impact while maintaining the highest standards of research quality.

About the role

We are seeking an AI Engineer: Speech/Audio to join our growing team and drive innovation in next-generation audio AI technologies. This role focuses on Large Audio Language Models (LALMs), Large Audio Reasoning Models, and Speech-to-Speech (S2S) systems that can understand, reason over, and generate audio with human-like capabilities.

You will work at the intersection of cutting-edge research and production systems, developing Spoken Language Models (SLMs) that perform complex audio reasoning and engage in natural speech-based interactions. This position offers the opportunity to shape our technical direction in audio-native AI while collaborating with world-class researchers and engineers.

Key Responsibilities
• Design, develop, and deploy Large Audio Language Models (LALMs) capable of native audio understanding, reasoning, and generation.
• Build Large Audio Reasoning Models that perform complex chain-of-thought reasoning over speech and audio inputs, including medical, technical, and conversational domains.
• Contribute to Speech-to-Speech (S2S) system development, including speech understanding, dialogue management, and speech synthesis components.
• Research and implement alignment mechanisms between speech encoders and LLM backbones using lightweight adapters, LoRA, and efficient fine-tuning strategies.
• Design efficient speech tokenization and temporal compression techniques suitable for long-form audio reasoning and multi-turn spoken dialogue.
• Build comprehensive evaluation frameworks for audio reasoning capabilities, including benchmarks for speech QA, audio understanding, and reasoning accuracy.
• Optimize inference pipelines for low-latency, streaming applications in speech systems.
• Collaborate with cross-functional teams to transfer research innovations into production systems and customer-facing applications.
• Contribute to technical documentation, research write-ups, and publications at top-tier venues (NeurIPS, ICML, ACL, Interspeech).
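To make the adapter-alignment responsibility above concrete, here is a minimal PyTorch sketch of a LoRA-style lightweight adapter that projects speech-encoder frames into an LLM's embedding space. The class names, feature dimensions, rank, and scaling are illustrative assumptions for this sketch, not Centific's actual architecture.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear projection plus a trainable low-rank (LoRA) update."""
    def __init__(self, d_in: int, d_out: int, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():      # freeze the base projection
            p.requires_grad_(False)
        # Standard LoRA init: A small random, B zero, so the update starts at 0
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

class SpeechAdapter(nn.Module):
    """Maps speech-encoder features into the LLM token-embedding space."""
    def __init__(self, d_speech: int = 512, d_llm: int = 1024, rank: int = 8):
        super().__init__()
        self.proj = LoRALinear(d_speech, d_llm, rank=rank)

    def forward(self, speech_feats: torch.Tensor) -> torch.Tensor:
        # (batch, frames, d_speech) -> (batch, frames, d_llm)
        return self.proj(speech_feats)

adapter = SpeechAdapter()
frames = torch.randn(2, 50, 512)              # dummy speech-encoder output
out = adapter(frames)
print(out.shape)                              # torch.Size([2, 50, 1024])
```

In this setup only the low-rank `A` and `B` matrices are trainable, which is what makes adapter-based alignment cheap relative to full fine-tuning of the speech encoder or LLM backbone.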

Minimum Qualifications
• Master's degree (required) or Ph.D. (preferred) in Computer Science, Electrical Engineering, or a related field with a focus on speech, audio ML, or multimodal learning.
• 2+ years of industry or applied research experience in speech/audio AI, Large Language Models, or multimodal systems.
• Demonstrated applied research contributions through publications, patents, or shipped products in speech/audio AI or LLMs.
• Strong proficiency in Python and PyTorch, with hands-on experience in GPU-accelerated training for large-scale models.
• Solid understanding of speech and audio signal processing, acoustic modeling, and audio representations.
• Working knowledge of modern LLM architectures (Transformers, SSMs) and training paradigms including instruction tuning and alignment methods.
• Familiarity with modality alignment techniques: adapter-based integration, cross-modal attention, or audio-text fusion methods.
• Strong experimentation habits: clean code, systematic ablations, reproducibility, and clear technical communication.

Preferred Qualifications
• Publication record at top-tier venues (NeurIPS, ICML, ICLR, ACL, Interspeech, ICASSP) in audio language models, speech reasoning, or multimodal learning.
• Hands-on experience building or fine-tuning Large Audio Language Models (e.g., Qwen-Audio, SALMONN, LTU, Gemini Audio).
• Experience with speech representation pretraining (HuBERT, Wav2Vec 2.0, Whisper, WavLM) and discrete speech tokenization.
• Familiarity with Speech-to-Speech components: neural audio codecs (EnCodec, SoundStream), vocoders, or speech synthesis systems.
• Experience with audio reasoning benchmarks (AIR-Bench, MMAU, AudioBench) or building evaluation harnesses for audio QA.
• Hands-on experience with distributed training (FSDP, DeepSpeed) and inference optimization (ONNX, TensorRT, quantization).
• Familiarity with speech frameworks such as ESPnet, SpeechBrain, NVIDIA NeMo, or Fairseq.
• Experience with multilingual speech systems, code-switching, or domain adaptation for specialized applications (medical, legal, technical).
• Background in evaluating safety, bias, hallucination, or adversarial robustness in audio language models.

Technical Environment
• Core: PyTorch, CUDA, torchaudio/librosa, Hugging Face Transformers
• LLM Stack: Large language model backbones, lightweight adapters (LoRA, Q-Former), instruction tuning pipelines
• Audio Models: Neural audio codecs, speech encoders, vocoders, discrete speech tokenizers
• Infrastructure: Modern GPU clusters, experiment tracking (Weights & Biases), distributed training frameworks
• Deployment: FastAPI/gRPC for services, ONNX/TensorRT for optimized inference
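One recurring theme in the stack above, temporal compression of speech features for long-form reasoning, can be sketched in a few lines. The frame-stacking approach, stacking factor, and feature sizes below are illustrative assumptions, not a description of Centific's pipeline.

```python
import torch

def stack_frames(feats: torch.Tensor, factor: int = 4) -> torch.Tensor:
    """Reduce sequence length by concatenating `factor` consecutive frames.

    feats: (batch, frames, dim) -> (batch, frames // factor, dim * factor)
    Trailing frames that do not fill a full group are dropped, one common
    convention for handling ragged lengths.
    """
    b, t, d = feats.shape
    t = (t // factor) * factor                # truncate the ragged tail
    return feats[:, :t, :].reshape(b, t // factor, d * factor)

x = torch.randn(1, 103, 256)                  # e.g. ~2 s of 50 Hz features
y = stack_frames(x, factor=4)
print(y.shape)                                # torch.Size([1, 25, 1024])
```

Shortening the frame sequence this way (at the cost of a wider feature dimension) is one simple lever for fitting long-form audio into an LLM's context window before more sophisticated tokenization is applied.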

What We Offer
• Competitive compensation package with comprehensive benefits
• Opportunity to work on cutting-edge Large Audio Language Models and audio reasoning research with real-world impact
• Collaboration with experienced applied scientists and engineers in speech and multimodal AI
• Support for publications at top-tier conferences and professional development
• Access to state-of-the-art GPU infrastructure for training large-scale audio models
• Flexible work arrangements with hybrid/remote options

Location: Redmond, WA / Palo Alto, CA (Remote)

Employment Type: Full-Time

Benefits:
• Comprehensive healthcare, dental, and vision coverage
• 401k plan
• Paid time off (PTO)
• And more!

Company Overview:

Centific is a frontier AI data foundry that curates diverse, high-quality data, using our purpose-built technology platforms to empower the Magnificent Seven and our enterprise clients with safe, scalable AI deployment. Our team includes more than 150 PhDs and data scientists, along with more than 4,000 AI practitioners and engineers. We harness the power of an integrated solution ecosystem—comprising industry-leading partnerships and 1.8 million vertical domain experts in more than 230 markets—to create contextual, multilingual, pre-trained datasets; fine-tuned, industry-specific LLMs; and RAG pipelines supported by vector databases. Our zero-distance innovation™ solutions for GenAI can reduce GenAI costs by up to 80% and bring solutions to market 50% faster.

Our mission is to bridge the gap between AI creators and industry leaders by bringing best practices in GenAI to unicorn innovators and enterprise customers. We aim to help these organizations unlock significant business value by deploying GenAI at scale, helping to ensure they stay at the forefront of technological advancement and maintain a competitive edge in their respective markets.

Learn more about us at centific.com.

Centific is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, ancestry, citizenship status, age, mental or physical disability, medical condition, sex (including pregnancy), gender identity or expression, sexual orientation, marital status, familial status, veteran status, or any other characteristic protected by applicable law. We consider qualified applicants regardless of criminal histories, consistent with legal requirements.
