
Forward Deployed Engineer

  • GTM
  • Full-time
  • New York or San Francisco

About HoneyHive

At HoneyHive, we are building the new observability stack for AI agents. Our platform is used by leading AI startups and Fortune 500 enterprises to build, deploy, and observe AI agents. AI engineers use our platform to debug complex agents, evaluate output quality, monitor agent failures in production, and a whole lot more.

We’ve raised a $7.4M Seed from Insight Partners and are on track to scale further. Our founding team includes AI, infra, and product experts who’ve shipped large-scale systems at Microsoft, OpenAI, Amazon, Amplitude, New Relic, and more.

About the role

As our first Forward Deployed Engineer at HoneyHive, you'll be the technical bridge between our platform and the AI engineering teams who depend on it. This is a post-sales, deeply hands-on role — you'll own the full implementation lifecycle for our enterprise customers, from initial evaluation through production deployment and long-term adoption.

Our customers are Fortune 500 enterprises building complex agentic workflows using LLMs. They operate in sensitive, infrastructure-heavy environments — private VPCs, air-gapped on-prem clusters, hybrid cloud setups — and they need a trusted technical partner who can navigate that complexity with them.

You'll get your hands dirty: spinning up Helm charts, debugging OTEL pipelines, walking a platform engineer through a Terraform module, or sitting alongside an AI engineer as they instrument their first LLM traces. You'll also be the sharpest technical voice in the room when it comes to evals, observability, and agentic architecture.

This is a foundational role with enormous scope. You'll shape how we engage with enterprise customers at scale and have a direct line to engineering, GTM, and our co-founders.

In this role, you will:

  • Own end-to-end customer implementations — lead on-prem and cloud deployments, configure Kubernetes-based infrastructure, manage Helm releases and ArgoCD pipelines, and ensure HoneyHive integrates cleanly into existing AI and data infrastructure

  • Instrument and debug complex systems — help customers instrument their LLM applications and agents using OpenTelemetry, troubleshoot trace ingestion, configure custom evaluators, and optimize observability pipelines end-to-end

  • Drive technical adoption — design and run workshops, code-alongs, and best practices sessions tailored to AI engineering teams; translate platform capabilities into concrete wins for customer use cases

  • Act as a technical advisor on AI engineering — guide customers on evaluation strategy, prompt and context engineering, agent orchestration patterns, and production monitoring

  • Bridge product and customers — synthesize technical feedback from the field into clear product requirements, collaborate with engineering on deployment architecture, and help build repeatable, scalable implementation playbooks

Our stack

HoneyHive is built on React and NextJS on the frontend, with Express on the backend and AWS powering our cloud infrastructure. Our SDKs are written in TypeScript and Python, built on top of OpenTelemetry (OTEL) as the foundation for our tracing and observability layer. Enterprise deployments run on Kubernetes, managed via Helm and ArgoCD, and span AWS, GCP, and Azure environments — including VPC-isolated and air-gapped on-prem deployments.

As an FDE, you'll become a deep expert in how HoneyHive deploys and integrates across all of these environments, and you'll often be the person making it work.

About you

We think you'd be a great fit if you have:

  • 5+ years of customer-facing technical experience in software engineering, solutions engineering, or technical consulting — ideally at a developer tools, infrastructure, or AI/ML platform company

  • Deep infrastructure expertise — you're comfortable owning Kubernetes deployments end-to-end, writing and reviewing Helm charts, managing GitOps pipelines with ArgoCD, and using Terraform to provision cloud infrastructure across AWS (and ideally GCP/Azure as well)

  • Hands-on cloud networking knowledge — VPCs, private endpoints, IAM, security groups, and PrivateLink aren't intimidating to you; you've deployed production systems in locked-down enterprise environments before

  • Enterprise deployment experience — you have a track record of implementing infrastructure-heavy products in Fortune 500 or Global 2000 environments and know how to navigate their security reviews, procurement cycles, and change management processes

  • Strong communication across audiences — you can whiteboard a Kubernetes architecture with a platform engineer in the morning and present a business case for expanding a deployment to a CTO in the afternoon

What sets great candidates apart:

  • Practical AI engineering experience — you've built or instrumented LLM applications and agents yourself; you understand how evals work, what good observability looks like for agentic systems, and you're familiar with orchestration frameworks like LangChain, LlamaIndex, or similar

  • OpenTelemetry fluency — you understand the OTEL data model (traces, spans, attributes), have configured collectors and exporters, and can debug instrumentation issues in Python or TypeScript codebases

Why join

  • You'll be the first FDE hire and shape how we scale our activation motion

  • Work directly with enterprises solving the most complex technical challenges

  • We have product-market fit and are scaling fast with sophisticated, engaged customers

  • You'll become an expert in the broader agentic AI space

Benefits

  • Competitive salary + meaningful equity

  • Health, vision, and dental benefits

  • Unlimited PTO

  • Assistance in relocating to NYC or SF

  • MacBook Pro + peripherals

  • Annual AI stipend
