Software Engineer - Voice AI (Inference Runtime)
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.
THE ROLE:
Voice is becoming the internet's next interface, but production-grade Voice AI systems are hard to build. You'll join the small founding team behind Baseten Voice AI, focused on bringing state-of-the-art open source models into production for Voice AI customers across productivity, customer service, clinical conversation, creator tools, education, and more. You'll make a meaningful impact on people's daily lives and help reshape these industries.
This is a high-impact, high-ownership role. You will be the primary owner of Baseten Voice AI - our in-house inference stack to power Voice AI models - from product roadmap through engineering implementation. You’ll partner closely with Forward Deployed Engineers, Model Performance Engineers, and sister engineering teams to push the boundaries of Voice AI.
EXAMPLE INITIATIVES:
Build the world's fastest Whisper, with streaming and diarization
Collaborate with the Core Product team on the orchestration framework to build a multi-model voice agent
Collaborate with the Training Platform team to enable continuous training of voice models
Design an ergonomic, developer-friendly API and SDK that enables self-serve adoption of Baseten Voice AI products
RESPONSIBILITIES:
Own and lead Voice AI product areas end-to-end — from architecture and system design through implementation, rollout, and long-term production operations.
Design, build, and operate real-time, large-scale, high-performance model serving systems for STT, TTS, and voice agent workloads with clear SLOs for mission-critical customer deployments.
Drive cross-team collaboration with sister engineering teams to solve full-stack technical problems, aligning on priorities and coordinating end-to-end delivery across the product surface area.
Mentor teammates through code reviews, design docs, and technical leadership.
REQUIREMENTS:
Bachelor's degree or higher in Computer Science or related field
Proven track record owning production-grade real-time, large-scale systems where tail latency (p99) matters.
Proficiency in one or more popular programming or scripting languages; Python experience is a plus.
Good taste in product, particularly in developer-oriented tools
Interest in ML/AI infrastructure and willingness to learn
Strong collaboration and communication skills
Comfortable using AI coding assistants (e.g., Claude Code, Codex, Cursor) as a daily productivity multiplier — as an AI-native company, we see this as a must-have skill.
NICE TO HAVE:
Experience implementing pipeline-level model runtime optimizations such as dynamic batching, async scheduling, or decode-side throughput improvements.
Experience building developer platforms: SDKs, CLIs, APIs, and self-serve workflows for ML or infrastructure products.
Experience with containerization and orchestration technologies (Docker, Kubernetes), service meshes, or distributed scheduling.
Familiarity with speech/audio ML models (STT, TTS, speech-to-speech)
Familiarity with model-serving runtimes (vLLM, TensorRT, ONNX Runtime).
Familiarity with systems-level performance profiling across host-device boundaries (e.g., PyTorch Profiler) and diagnosing GPU utilization issues
Exposure to customer-facing engineering: pre-sales prototyping, technical discovery, or working directly with customers to ship solutions.
BENEFITS
Competitive compensation, including meaningful equity.
100% coverage of medical, dental, and vision insurance for employees and dependents
Flexible PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!)
Paid parental leave
Fertility and family-building stipend through Carrot
Company-facilitated 401(k)
Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.
We are an Equal Opportunity Employer and will consider qualified applicants with criminal histories in a manner consistent with applicable law (for example, the requirements of the San Francisco Fair Chance Ordinance, where applicable).