Founding Security ML Engineer — Saido Labs (ParryAI + Plastron)
Company Description
Saido Labs is a pre-seed deep-tech startup building ParryAI (prompt-injection defense) and Plastron (output-attestation runtime) — the first bidirectional AI security layer with filed IP (SL-2026-001, 50 claims, March 2026). We have 14 warm pilot conversations across financial services, healthcare, and AI-agent platforms, a pre-seed round in flight, and an audit-grade testing protocol pre-registered before our first benchmark run. Saido Labs LLC will spin ParryAI off into its own Delaware C corporation.
You'd be our first engineering hire. You'd own the ParryAI detector ensemble and the Plastron attestation runtime, stand up our internal benchmark rig against Protect AI / Lakera / NeMo Guardrails / Robust Intelligence, and be the technical face on CISO calls. You'd also be the person whose name sits next to the founder's on the first published benchmark and CVE disclosure — and the first equity check on the cap table beyond the founder.
Role Description
- ParryAI input-defense models and the detector ensemble (prompt-injection, jailbreak, agent-abuse).
- Plastron output-attestation runtime (Ed25519 signing path, behavioral-drift detection, canary-token pipeline).
- The public-facing benchmark harness — TPR / FPR / AUROC with 95% CIs against competitors including Protect AI, Lakera, NeMo Guardrails, and Robust Intelligence.
- One CVE disclosure, one OWASP LLM / Agents Top-10 contribution, or one MITRE ATLAS TTP writeup in your first 90 days.
- Technical voice on enterprise security reviews with CISOs, Deputy CISOs, and Heads of AI Security.
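As a flavor of the attestation work above: the posting names an Ed25519 signing path for Plastron, but a dependency-free sketch is easier to show with a keyed MAC stand-in. The record shape and field names (`model_id`, `sha256`, `ts`) are hypothetical, not Plastron's actual schema; the real runtime would use asymmetric Ed25519 signatures rather than HMAC.

```python
import hashlib
import hmac
import json
import time

def attest(output_text, key, model_id="parryai-detector-v0"):
    """Build a signed attestation record for one model output.

    Illustrative only: HMAC-SHA256 stands in for the Ed25519 signing
    path the posting describes, and field names are hypothetical.
    """
    record = {
        "model_id": model_id,
        "sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "ts": int(time.time()),
    }
    # Canonicalize before signing so verification is deterministic.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify(record, key):
    """Recompute the MAC over everything except the signature itself."""
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)
```

Verification fails on either a wrong key or a tampered field, which is the core contract an output-attestation runtime has to uphold.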
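To make the benchmark-harness bullet concrete, here is a minimal stdlib sketch of TPR/FPR with percentile-bootstrap 95% confidence intervals. The function names and sampling parameters are illustrative assumptions, not ParryAI's actual harness code.

```python
import random

def tpr_fpr(labels, preds):
    """True-positive and false-positive rates for binary labels/predictions."""
    tp = fn = fp = tn = 0
    for y, p in zip(labels, preds):
        if y == 1:
            tp += int(p == 1)
            fn += int(p == 0)
        else:
            fp += int(p == 1)
            tn += int(p == 0)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

def bootstrap_ci(labels, preds, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) CI for a scalar metric `stat`."""
    rng = random.Random(seed)
    n = len(labels)
    samples = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        samples.append(stat([labels[i] for i in idx],
                            [preds[i] for i in idx]))
    samples.sort()
    lo = samples[int((alpha / 2) * n_boot)]
    hi = samples[int((1 - alpha / 2) * n_boot)]
    return lo, hi
```

Reporting the interval rather than the point estimate is what makes cross-vendor comparisons audit-grade: two detectors whose CIs overlap should not be ranked against each other.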
Qualifications
- 5–8+ years shipping production ML in an adversarial domain — fraud, abuse, trust & safety, security, or direct AI safety / red-team work.
- Hands-on experience at one or more of: Anthropic, OpenAI, Google DeepMind / Responsible AI, Meta AI, AWS AI safety, Microsoft Responsible AI, or an AI-security startup (Protect AI, Lakera, HiddenLayer, Robust Intelligence).
- Published work — a paper, a CVE, a blog post, a Black Hat / DEF CON AI Village talk — on prompt injection, jailbreaks, adversarial robustness, or LLM red-teaming.
- Comfort being the sole ML voice on a customer call with an enterprise security team.
- Exposure to OWASP LLM Top 10, OWASP Agents Top 10, MITRE ATLAS, NIST AI RMF.
- Experience with SOC 2 evidence collection or FedRAMP-adjacent environments.
- A writing portfolio. Research-as-product is how we win.
Compensation
- Base: $160K – $200K depending on seniority and location.
- Equity: 1.0% – 3.0% founder-scale grant, 4-year vest, 1-year cliff.
- Remote-first. US-based. Standard health / dental / vision at FTE scale.
- We are a single-founder org today, run with a team of AI agents. Filed IP, no dilutive capital raised to date, pilot conversations running. Your start date is contingent on the pre-seed close targeted for Q2 2026.
- You would be the technical co-founder for the ParryAI + Plastron runtime, as well as for other projects already built and others in the pipeline.
- We'd rather be honest that this is day-one deep-tech than oversell a Series A that doesn't exist yet.
How to Apply
Send a note to jesse@saidolabs.com with: (1) one paragraph on the most interesting adversarial-ML problem you've shipped, (2) a link to any public work on prompt injection, jailbreaks, or model security, (3) the earliest date you could realistically start. No résumé required on the first touch.