May 17th

Software Engineer - Machine Learning


About the role:

We're looking for a Machine Learning Engineer to accelerate our AI research-to-production pipeline. You will build infrastructure that enables our research team to rapidly deploy and safely test new models while maintaining efficient, scalable production inference systems. You should have a strong backend engineering background in distributed systems and containerization, and a deep interest in optimizing the path from research innovation to production value. This is a cross-functional role that requires close collaboration with both the research teams developing models and the engineering teams supporting the broader platform.

What You’ll Do:

  • Design and implement tooling that enables researchers to quickly deploy and evaluate new models in production 

  • Build and maintain high-performance, cost-efficient inference pipelines in production

  • Optimize infrastructure for both iteration speed and production reliability

  • Develop and maintain user-facing APIs that interact with our ML systems

  • Implement comprehensive observability solutions to monitor model performance and system health

  • Troubleshoot complex production issues across distributed systems

  • Continuously improve our MLOps practices to reduce friction between research and production

What You’ll Need:

  • Strong backend engineering experience with Python

  • Experience building and operating distributed, containerized applications, preferably on AWS 

  • Proficiency implementing observability solutions (monitoring, logging, alerting) for production systems

  • Ability to design and implement resilient, scalable architectures

An ideal candidate should also have some of the following:

  • MLOps experience, including familiarity with PyTorch and Kubernetes

  • Experience working in startup environments, demonstrating ownership, decisiveness, and rapid iteration

  • Experience collaborating with remote, globally distributed teams

  • Comfort working across the entire ML lifecycle from model serving to API development

  • Experience in audio-related domains (ASR, TTS, or other audio processing applications)

  • Experience with other cloud providers

  • Familiarity with Ray.io, Bazel, and monorepos

  • Experience with alternative ML inference frameworks beyond PyTorch

  • Experience optimizing for low-latency, real-time inference

Pay Transparency:

AssemblyAI strives to recruit and retain exceptional talent from diverse backgrounds while ensuring pay equity for our team. Our salary ranges are based on paying competitively for our size, stage, and industry, and are one part of many compensation, benefit, and other reward opportunities we provide.

There are many factors that go into salary determinations, including relevant experience, skill level, qualifications assessed during the interview process, and maintaining internal equity with peers on the team. The range shared below is a general expectation for the function as posted, but we are also open to considering candidates who may be more or less experienced than outlined in the job description. In such cases, we will communicate any updates to the expected salary range.

The provided range is the expected salary for candidates in the U.S. Outside of the U.S., the range may differ and will be communicated to candidates throughout the interview process.

Salary range: $157,500-$175,000