Netradyne

Staff Data Engineer

Salary

Not disclosed by employer

Job description

Netradyne harnesses the power of Computer Vision and Edge Computing to revolutionize the modern-day transportation ecosystem. We are a leader in fleet safety solutions. With growth exceeding 4x year over year, our solution is quickly being recognized as a significant disruptive technology. Our team is growing, and we need forward-thinking, uncompromising, competitive team members to continue to facilitate our growth.

About Netradyne:

Netradyne provides AI-powered technologies for fleet management and safer roads. An award-winning industry leader in fleet safety and video telematics solutions, Netradyne empowers thousands of commercial fleet customers across North America, Europe, and Asia to enhance their driver performance, reduce risk, and optimize operations.

Netradyne sets the standard among transportation technology companies for enhancing and sustaining road safety, with an industry-leading 25+ billion miles vision-analyzed for risk and an industry-first driver scoring system that reinforces safe behaviors. Founded in 2015, Netradyne is headquartered in San Diego with offices in San Francisco, Nashville, the UK, and Bangalore. For more details, visit www.netradyne.com.

Job Overview:

As a Staff Data Engineer – ML at Netradyne, an award-winning industry leader in fleet safety and video telematics solutions, you will take senior technical ownership of the machine learning and data platforms that underpin our AI-powered fleet analytics. This role is responsible for designing, building, and scaling production-grade ML pipelines, generative AI capabilities, and real-time data streaming systems across cloud and edge environments, delivering actionable insights that make fleet operations safer and more efficient. You will collaborate closely with machine learning engineers, data scientists, and product teams to integrate advanced AI technologies into Netradyne's platform, including on-device edge intelligence and natural-language-driven capabilities. Your work will directly shape the data infrastructure behind Netradyne's Physical AI vision, where AI continuously interprets multimodal vehicle data to understand drivers, vehicles, and road environments as an integrated system.

Key Responsibilities:

You will be embedded within a team of machine learning engineers and data scientists, responsible for building and productising generative AI, deep learning, and data engineering solutions. You will:

  • Design, develop and deploy production-ready, scalable solutions that utilise generative AI, traditional ML models, data science workflows, and ETL/ELT pipelines on AWS cloud infrastructure (and hybrid edge-cloud environments).
  • Build and manage real-time and batch data streaming pipelines to handle high-volume, high-velocity data from fleet devices, leveraging technologies such as Apache Kafka and Amazon Kinesis for low-latency processing and near real-time insights.
  • Collaborate with cross-functional teams (Product, Data Science, ML Engineering, Operations) to integrate AI-driven solutions into business operations and customer-facing products.
  • Design and implement generative AI solutions using large language models (LLMs), including prompt engineering, fine-tuning, and retrieval-augmented generation (RAG) patterns, to enable intelligent analytics, natural-language interfaces, and context-aware insights across fleet and safety data.
  • Build and scale agentic AI systems by developing autonomous and semi-autonomous agents that orchestrate data retrieval, reasoning, and actions across ML pipelines and services, ensuring these agentic workflows are production-grade, observable, and aligned with safety, reliability, and governance requirements.
  • Implement MLOps best practices including CI/CD pipelines for model training, automated testing, model versioning, continuous monitoring, and modern workflow orchestration (e.g. Apache Airflow, MLflow) to achieve reliable, reproducible model deployments at scale.
  • Champion data quality and governance by implementing data contracts, rigorous validation, and monitoring to ensure that analytical pipelines meet defined SLAs and that downstream models and business users can trust the data.
  • Conduct research and stay current with the latest advancements in generative AI, vector search techniques, LLM frameworks, orchestration platforms, and related technologies.

Mandatory Skills:

  • Experience with data visualisation and BI tools (e.g. Tableau, Grafana, Plotly Dash).
  • Exposure to real-time data streaming and messaging systems (e.g. Apache Kafka, Amazon Kinesis, Amazon SQS).
  • Knowledge of containerisation and orchestration (e.g. Docker, Kubernetes/EKS).
  • Strong understanding of generative AI system design, including prompt engineering, RAG architectures, vector search, and evaluation techniques for LLM-based applications in production environments.
  • Experience with agentic AI patterns, such as designing, orchestrating, and monitoring autonomous or semi-autonomous agents that coordinate tools, data sources, and ML services with appropriate safety, observability, and governance controls.
  • Experience with modern workflow orchestration platforms such as Apache Airflow, Prefect, or Dagster for scheduling and managing complex ETL/ML pipelines.
  • Proficiency with at least one Python web framework (e.g. FastAPI, Django, Flask) for developing data or model-serving APIs.
  • Experience with rapid prototyping tools (e.g. Streamlit, Gradio, Dash) for demonstrating ML solutions to stakeholders.
  • Familiarity with modern ML frameworks and libraries such as PyTorch, TensorFlow, and Hugging Face Transformers.
  • Background in large-scale data processing and lakehouse or data warehousing technologies (e.g. Snowflake, Amazon Redshift, Apache Spark/EMR, Delta Lake).

Qualifications and Education Requirements:

  • B.Tech, M.Tech or PhD in Computer Science, Data Science, Electrical Engineering, Statistics, Mathematics, Operations Research, or a related field (or equivalent professional experience).
  • 5–8 years of industry experience in data engineering, machine learning engineering, or related roles, with a track record of building and scaling data/ML solutions in production.
  • Strong programming skills in Python and SQL, with solid fundamentals in algorithms, data structures, and object-oriented programming.
  • Experience building end-to-end solutions on AWS cloud infrastructure including developing data pipelines (ETL/ELT), deploying ML models at scale, and utilising cloud services (e.g. S3, Lambda, EKS).
  • In-depth understanding of data storage systems: proficient with relational databases (SQL), NoSQL datastores, and modern vector databases, including schema design and query optimisation for large-scale datasets.
  • Hands-on experience with generative AI tools and workflows, and practical work with large language models (LLMs), including building LLM-powered applications and utilising RAG techniques to incorporate external knowledge.
  • Working knowledge of MLOps principles and tools — e.g. model lifecycle management, CI/CD pipelines for ML, automated data/feature pipelines, and monitoring/alerting for deployed models.
  • Strong analytical and problem-solving skills with keen attention to detail.
  • Solid foundation in statistics, probability, and estimation theory.

Equal Opportunity Employer Statement:
Netradyne is an equal-opportunity employer committed to creating an inclusive and diverse environment for all employees. We do not discriminate based on race, color, ethnicity, ancestry, national origin, religion, sex, gender, gender identity, gender expression, sexual orientation, age, disability, veteran status, genetic information, marital status, or any legally protected status.

If there is a match between your experience/skills and the Company's needs, we will contact you directly.

Applicants only; recruiting agencies, please do not contact.

Recruitment Fraud Alert!

There has been an increase in fraud targeting job seekers. Scammers may present themselves to job seekers as Netradyne employees or recruiters. Please be aware that Netradyne does not request sensitive personal data from applicants via text/instant message or any other unsecured method; does not promise any advance payment for work equipment set-up; and does not use recruitment or job-sourcing agencies that charge candidates an advance fee of any kind. Official communication about your application will only come from emails ending in ‘@netradyne.com’ or ‘@us-greenhouse-mail.io’.

Please review and apply to our available job openings at Netradyne.com/company/careers. For more information on avoiding and reporting scams, please visit the Federal Trade Commission's job scams website.
