Applied AI Research Engineer – ML Systems & Structured Data
Granica | Mountain View, California, United States | 2d ago
$160,000 – $250,000/yr | full-time | on-site | mid
skills: pytorch, jax, tensorflow, rust, c++, cuda, python, machine learning, probabilistic modeling, optimization, large-scale ml systems, representation learning, structured data, tabular data, graph data, distributed systems, ml pipelines, data systems, query engines
Location: Bay Area (Mountain View)
Employment Type: Full-time
Work Model: On-site
Department: Research
Compensation: $160K – $250K + Equity
Overview
Granica is building the next generation of efficient AI infrastructure.
Today’s AI systems are limited not only by model design but by the inefficiency of the data that feeds them. At enterprise scale, redundant data, inefficient representations, and poorly optimized learning pipelines create enormous cost and latency.
Granica’s mission is to eliminate that inefficiency.
We combine advances in information theory, machine learning, and distributed systems to design data infrastructure that continuously improves how information is represented and used by AI.
Granica’s research effort is led by Prof. Andrea Montanari (Stanford) and focuses on building learning systems that operate efficiently on large-scale structured and tabular data.
While much of the industry focuses on text or media models, Granica is building the foundations of AI systems that learn directly from structured enterprise data.
This role focuses on building machine learning systems for structured and tabular data rather than general LLM application development.
The Role
The Applied AI Research Team sits at the intersection of theory and production.
Your work will take ideas emerging from fundamental research and turn them into practical algorithms, optimized pipelines, and production-ready ML systems that operate across petabytes of structured enterprise data.
This is a high-ownership role for engineers who can think like researchers and build like systems engineers.
You will translate theory into measurable performance improvements and help define the engineering foundations of structured AI.
What You’ll Do
Turn research into working systems
- Transform foundational ideas from Granica Research and Prof. Andrea Montanari’s group into scalable algorithms and prototypes
- Build evaluation harnesses, datasets, and benchmarks that measure real signal from research ideas
- Define and improve metrics that quantify progress in structured AI systems
- Develop efficient learning methods for relational, tabular, graph, and enterprise datasets
- Prototype representation learning architectures and compression-aware models
- Explore new approaches for learning from heterogeneous structured data
- Implement fast training and inference pipelines using PyTorch, JAX, or custom kernels
- Optimize memory usage, compute utilization, and data movement
- Improve cost, latency, and throughput for large-scale ML workloads
- Design systems integrating symbolic, relational, and neural components
- Enable AI models to reason over structured datasets without relying on text intermediaries
- Work with Research Scientists to validate hypotheses at scale
- Work with Systems Engineers to integrate algorithms into Granica’s data platform
- Work with Product Engineering to ship features powering real enterprise workloads
- Run controlled experiments and analyze performance improvements
- Deliver results with clear benchmarks and reproducible evaluations
- Drive the cycle from prototype → production → optimization
Technical Depth
- Strong background in machine learning, probabilistic modeling, optimization, or large-scale ML systems
- Experience building algorithms for structured, relational, tabular, or graph data
- Ability to reason from first principles about scaling behavior, efficiency, and information flow
- Hands-on experience with PyTorch, JAX, TensorFlow, or similar ML frameworks
- Strong programming skills in Python
- Experience with systems languages such as Rust, C++, or CUDA is a plus
- Experience building large-scale ML pipelines, evaluation frameworks, or distributed systems
- Proven ability to turn research ideas into performant, reliable code
- Comfort working in research-driven environments with ambiguous problem definitions
- Strong experimentation discipline and focus on measurable performance improvements
- Experience with structured representation learning, tabular ML, relational learning, or graph ML
- Experience with large-scale training infrastructure or distributed ML
- Familiarity with data systems, query engines, or large-scale data pipelines
- Experience building evaluation infrastructure for ML systems
- Open-source contributions or collaborative work bridging research and production systems
The world’s most valuable data is structured.
Most AI systems today are not built to learn from it efficiently.
Granica is building the systems that close this gap.
Your work will help define the engineering foundations of structured AI — designing the algorithms, pipelines, and infrastructure that enable efficient learning from enterprise data at global scale.
This Role Offers
- High ownership
- Real research impact
- Immediate production relevance
- The opportunity to shape a new generation of AI systems
- Competitive salary, meaningful equity, and substantial bonus for top performers
- Flexible time off plus comprehensive health coverage for you and your family
- Support for research, publication, and deep technical exploration
Compensation Range: $160K – $250K
Benefits
equity · bonus · flexible time off · health coverage · research support · publication support