Forward Deployed Engineer

  • GTM
  • Full-time
  • New York or San Francisco

About HoneyHive

At HoneyHive, we are building the new observability stack for AI agents. Our platform is used by Fortune 500 enterprises to observe, evaluate, and optimize AI agents in production. Developers use our platform to debug complex agents, evaluate output quality, monitor agent failures in production, and much more.

We’ve raised a $7.4M seed round from Insight Partners and are on track to scale further. Our founding team includes AI, infra, and product experts who’ve shipped large-scale systems at Microsoft AI, Amazon, Amplitude, Plaid, and more.

About the role

As our first Forward Deployed Engineer at HoneyHive, you'll be the technical bridge between our platform and the AI engineering teams who depend on it. This is a post-sales, deeply hands-on role — you'll own the full implementation lifecycle for our enterprise customers, from initial evaluation through production deployment and long-term adoption.

Our customers are Fortune 500 enterprises building complex agentic systems with LLMs. They operate in sensitive, infrastructure-heavy environments — private VPCs, air-gapped networks, hybrid cloud setups — and they need a trusted technical partner who can navigate that complexity with them.

You'll get your hands dirty: spinning up Helm charts, debugging OTEL pipelines, walking a platform engineer through a Terraform module, or sitting alongside an AI engineer as they instrument their first LLM traces. You'll also be the sharpest technical voice in the room when it comes to evals, observability, and agent architecture.

This is a foundational role with enormous scope. You'll shape how we engage with enterprise customers at scale and have a direct line to engineering, GTM, and both co-founders.

In this role, you will:

  • Own end-to-end customer implementations — lead self-hosted and SaaS deployments, configure Kubernetes-based infrastructure, manage Helm releases and ArgoCD pipelines, and ensure HoneyHive integrates cleanly into existing AI and data infrastructure

  • Instrument and debug complex systems — help customers instrument their LLM applications and agents using OpenTelemetry, troubleshoot trace ingestion, configure custom evaluators, and optimize observability pipelines end-to-end

  • Drive technical adoption — design and run workshops, code-alongs, and best practices sessions tailored to AI engineering teams; translate platform capabilities into concrete wins for customer use cases

  • Act as a technical advisor on AI engineering — guide customers on evaluation strategy, prompt and context engineering, agent orchestration patterns, and production monitoring

  • Bridge product and customers — synthesize technical feedback from the field into clear product requirements, collaborate with engineering on deployment architecture, and help build repeatable, scalable implementation playbooks

Our stack

HoneyHive is built on React and Next.js on the frontend and Express on the backend, with AWS powering our cloud infrastructure. Our SDKs are written in Python and TypeScript, built on top of OpenTelemetry (OTEL) as the foundation for our telemetry layer. Enterprise deployments run on Kubernetes, managed via Helm and Terraform, and span AWS, GCP, and Azure environments — including VPC-isolated and air-gapped deployments.

As an FDE, you'll become a deep expert in how HoneyHive deploys and integrates across all of these environments, and you'll often be the person making it work.

About you

We think you'd be a great fit if you have:

  • 5+ years of technical experience as a software engineer, solutions engineer, or technical product manager — ideally at a developer tools, infrastructure, or AI company

  • Deep infrastructure expertise — you're comfortable owning Kubernetes deployments end-to-end, reviewing Helm charts, managing GitOps pipelines with ArgoCD, and using Terraform to provision cloud infrastructure across AWS (and ideally GCP/Azure as well)

  • Hands-on cloud networking knowledge — VPCs, private endpoints, IAM, security groups, and PrivateLink aren't intimidating to you; you've deployed production systems in locked-down enterprise environments before

  • Enterprise deployment experience — you have a track record of implementing infrastructure-heavy products in Fortune 500 environments and know how to navigate their security reviews, procurement cycles, and change management processes

  • Strong communication across audiences — you can whiteboard a Kubernetes architecture with a platform engineer in the morning and present a business case for expanding a deployment to a CTO in the afternoon

What sets great candidates apart:

  • Practical AI engineering knowledge — you've built or instrumented LLM applications and agents yourself; you understand how evals work, what good observability looks like for agentic systems, and you're familiar with orchestration frameworks such as LangGraph and CrewAI

  • OpenTelemetry fluency — you understand the OTEL data model (traces, spans, attributes), have configured collectors and exporters, and can debug instrumentation issues in Python or TypeScript codebases

Why join

  • You'll be the first FDE hire and shape how we scale our activation motion

  • Work directly with enterprises solving complex technical challenges

  • We have product-market fit and are scaling fast with sophisticated, engaged customers

  • You'll become an expert in the broader AI ecosystem

Benefits

  • Competitive salary + meaningful equity

  • Health, vision, and dental benefits

  • Unlimited PTO

  • Assistance in relocating to NYC or SF

  • MacBook Pro + peripherals

  • Annual AI stipend