LLM/ML Engineer (Inference) at Reducto (W24)
$150K - $240K  •  0.10% - 1.00%
Unlocking data behind complex documents
San Francisco, CA, US
Full-time
3+ years
About Reducto

Nearly 80% of enterprise data is in unstructured formats like PDFs

PDFs are the status quo for enterprise knowledge in nearly every industry. Insurance claims, financial statements, invoices, and health records are all stored in a structure that’s simply impractical for use in digital workflows. This isn’t an inconvenience—it’s a critical bottleneck that leads to dozens of wasted hours every week.

Traditional approaches fail to reliably extract information from complex PDFs

OCR and even more sophisticated ML approaches work for simple text documents but are unreliable for anything more complex. Text from different columns is jumbled together, figures are ignored, and tables are a nightmare to get right. Overcoming this usually requires a large engineering effort dedicated to building specialized pipelines for every document type you work with.

Reducto breaks document layouts into subsections and then contextually parses each depending on the type of content. This is made possible by a combination of vision models, LLMs, and a suite of heuristics we built over time. Put simply, we can help you:

  • Accurately extract text and tables even with nonstandard layouts
  • Automatically convert graphs to tabular data and summarize images in documents
  • Extract important fields from complex forms with simple, natural language instructions
  • Build powerful retrieval pipelines using Reducto’s document metadata
  • Intelligently chunk information using the document’s layout data
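To make the last point concrete: layout-aware chunking means splitting a document along its detected structure (headings, tables, figures) rather than at fixed character offsets. A minimal sketch of that idea is below; the names and heuristics are illustrative, not Reducto's actual API.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """One parsed layout region from a document (hypothetical schema)."""
    kind: str   # e.g. "heading", "paragraph", "table", "figure"
    page: int
    text: str

def chunk_by_layout(regions, max_chars=500):
    """Group regions into retrieval chunks: start a new chunk at every
    heading, or when the running chunk would exceed max_chars."""
    chunks, current, size = [], [], 0
    for r in regions:
        if current and (r.kind == "heading" or size + len(r.text) > max_chars):
            chunks.append(current)
            current, size = [], 0
        current.append(r)
        size += len(r.text)
    if current:
        chunks.append(current)
    return chunks
```

The point of using layout boundaries is that a table or a section never gets split mid-entity, which keeps each chunk self-contained for retrieval.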

About the role

We would love to meet you if you:

  • Philosophy: You are your own worst critic. You have a high bar for quality and don’t rest until the job is done right—no settling for 90%. We want someone who ships fast, with high agency, and who doesn't just voice problems but actively jumps in to fix them.
  • Experience: You have deep expertise in Python and PyTorch, with a strong foundation in low-level operating systems concepts including multi-threading, memory management, networking, storage, performance, and scale. You're experienced with modern inference systems like TGI, vLLM, TensorRT-LLM, and Optimum, and comfortable creating custom tooling for testing and optimization.
  • Approach: You combine technical expertise with practical problem-solving. You're methodical in debugging complex systems and can rapidly prototype and validate solutions.

The core work will include:

  • Architecting and implementing robust, scalable inference systems for serving state-of-the-art AI models
  • Optimizing model serving infrastructure for high throughput and low latency at scale
  • Developing and integrating advanced inference optimization techniques
  • Working closely with our research team to bring cutting-edge capabilities into production
  • Building developer tools and infrastructure to support rapid experimentation and deployment

Bonus points if you:

  • Have experience with low-level systems programming (CUDA, Triton) and compiler optimization
  • Are passionate about open-source contributions and staying current with ML infrastructure developments
  • Bring practical experience with high-performance computing and distributed systems
  • Have worked in early-stage environments where you helped shape technical direction
  • Are energized by solving complex technical challenges in a collaborative environment

This is an in-person role at our office in SF. We’re an early-stage company, which means the role requires working hard and moving quickly. Please only apply if that excites you.

