[Youtube] Mastering Synthetic Monitoring with Datadog: A Practical Guide

Want to catch issues before your users do? Synthetic monitoring lets you proactively test your applications—before, during, and after deployments. In this guide, we’ll show you how to set up and run synthetic API and browser tests in Datadog, the key features to take advantage of, and cost-effective practices to monitor critical user flows.

What Are Synthetic Tests?

Synthetic tests are automated, simulated requests that replicate user behavior. They run from global locations or your CI/CD pipeline, helping you validate:

  • Uptime
  • Latency
  • Functional correctness

Think of it as QA on autopilot—available 24/7.

Datadog supports scheduled runs (e.g., every minute/hour) and triggered runs (e.g., pre-deployment via GitHub Actions or Jenkins). This helps detect both environmental drift and code regressions before they hit production.
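
If you want to drive triggered runs yourself rather than through a prebuilt integration, the Synthetics API exposes a "trigger from CI" endpoint; the GitHub Actions and Jenkins integrations and the datadog-ci CLI wrap the same call. Below is a minimal Python sketch under that assumption — the test ID, environment variable names, and Datadog site are placeholders for your own setup.

```python
# Minimal sketch: trigger existing Synthetic tests from a CI job via the
# Datadog v1 Synthetics "trigger from CI" endpoint.
import os
import requests

DD_SITE = "https://api.datadoghq.com"  # adjust if you use another Datadog site, e.g. datadoghq.eu
headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

payload = {"tests": [{"public_id": "abc-def-ghi"}]}  # hypothetical test ID

resp = requests.post(f"{DD_SITE}/api/v1/synthetics/tests/trigger/ci",
                     json=payload, headers=headers)
resp.raise_for_status()
print(resp.json())  # the triggered batch, which your pipeline can poll for pass/fail
```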

Types of Synthetic Tests in Datadog

Datadog supports 9 test types, but they broadly fall into three categories:

  1. API Tests - Run at the protocol level (HTTP, DNS, TCP, WebSocket, etc.) to validate backend services.
  2. Browser Tests - Simulate real user journeys through a browser, clicking, filling forms, and navigating.
  3. Mobile Tests - Designed for validating mobile app behavior in real environments.

Creating an API Test: Step-by-Step

1. Start from scratch or from a template

Go to Digital Experience > New Test > API.

You can either use a prebuilt template or start from scratch. You’ll first pick a test subtype like HTTP, DNS, or SSL.

2. Define the Request

  • Select method (GET, POST, etc.)
  • Input your target URL (e.g., shopist.io)
  • Use snippets to speed up setup if needed
  • Add assertions: for instance, the response body must contain "shop" (see the sketch after this list)
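
The same request definition can be expressed through the public v1 Synthetics API. Here is a rough sketch of the `config` block for an HTTP-subtype test against shopist.io with a body assertion; the timeout, status-code, and latency values are illustrative, not prescribed.

```python
# Sketch of the request/assertion portion of an HTTP API test,
# using the field layout of Datadog's v1 Synthetics API.
api_test_config = {
    "request": {
        "method": "GET",
        "url": "https://shopist.io",   # target URL from the example above
        "timeout": 30,                 # seconds (illustrative)
    },
    "assertions": [
        # "The body must contain 'shop'"
        {"type": "body", "operator": "contains", "target": "shop"},
        # You would typically also assert on status code and latency:
        {"type": "statusCode", "operator": "is", "target": 200},
        {"type": "responseTime", "operator": "lessThan", "target": 2000},  # ms
    ],
}
```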

3. Tag the Test

Always use:

  • env:<prod|staging>
  • team:<owning-team>
  • service:<backend-service>

These tags enable correlation across APM, dashboards, and alert routing.

4. Set Locations

Choose from 29+ global locations, or add private locations running in your own data centers. Be mindful: more locations means higher cost.

5. Schedule and Frequency

Decide:

  • How often the test should run
  • Whether retries are allowed
  • Failure alert thresholds (e.g., fail on 2 out of 3 locations)

Strike a balance: higher frequency gives faster detection but raises cost.
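
Putting steps 2 through 6 together, a complete test definition bundles the request config with tags, locations, scheduling, retry behavior, alert thresholds, and the alert message. A hedged sketch follows: `api_test_config` is the dict from the earlier sketch, and the location names, frequency, thresholds, tag values, and Slack handle are all illustrative.

```python
import os
import requests

DD_SITE = "https://api.datadoghq.com"
headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

test = {
    "name": "Shopist homepage - HTTP check",
    "type": "api",
    "subtype": "http",
    "config": api_test_config,  # request + assertions from the earlier sketch
    "tags": ["env:prod", "team:storefront", "service:shopist-frontend"],  # hypothetical owners
    "locations": ["aws:us-east-1", "aws:eu-west-1", "aws:ap-northeast-1"],
    "options": {
        "tick_every": 300,            # run every 5 minutes
        "min_location_failed": 2,     # alert only if 2 of the 3 locations fail
        "retry": {"count": 1, "interval": 500},  # one retry, 500 ms apart (illustrative)
    },
    "message": "Shopist is down. Check the dashboard and runbook links. @slack-storefront",
}

resp = requests.post(f"{DD_SITE}/api/v1/synthetics/tests/api",
                     json=test, headers=headers)
resp.raise_for_status()
print(resp.json()["public_id"])  # the ID you reference when triggering runs or querying results
```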

6. Alert Message Setup

Write an actionable alert:

  • Title: Shopist is down
  • Steps: Link to dashboard, relevant runbooks, or team contacts
  • Use variables for location, status, etc.

7. Run and Monitor

Once saved, the test runs automatically. You’ll see:

  • Pass/fail status
  • Latency per location
  • Network breakdown
  • Manual vs. scheduled vs. CI-triggered runs

Filter by failed runs for triage, and inspect detailed response headers, bodies, and trace context.
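
The same triage can be done programmatically: the v1 API exposes a results endpoint per test. A small sketch, assuming the hypothetical `public_id` from earlier; the exact fields in each result vary by test type, so inspect the payload (or the UI) for what your triage needs.

```python
# Sketch: pull recent results for a Synthetic test to triage failures programmatically.
import os
import requests

DD_SITE = "https://api.datadoghq.com"
headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
}

public_id = "abc-def-ghi"  # hypothetical test ID
resp = requests.get(f"{DD_SITE}/api/v1/synthetics/tests/{public_id}/results",
                    headers=headers)
resp.raise_for_status()

# Each entry describes one run (location, timing, pass/fail details).
for run in resp.json().get("results", []):
    print(run.get("check_time"), run.get("status"))
```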

Creating a Browser Test: Step-by-Step

Browser tests validate real user experiences: Datadog records your interactions and replays them in a headless browser, just as a user would perform them.

1. Start from Digital Experience > New Test > Browser

Input the URL and define:

  • Environment
  • Team
  • Service
  • Browser/device (e.g., Chrome desktop)

Be selective—every extra device/location multiplies run cost.

2. Record a Test Flow

Install the Datadog Chrome Extension to start recording:

  • Example: Navigate → Add to Cart → Checkout
  • Assertions: Verify success messages (e.g., page contains “Thank you”)

3. Save and Run

Datadog will replay these steps on schedule. Failures show visual diffs and even backend traces if APM is connected.
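
Browser test steps are almost always recorded through the extension, but the shell of the test (start URL, device, schedule, tags, alert message) can also be described via the API. A rough sketch under that assumption; the device ID, schedule, tags, and message are illustrative, and the recorded steps themselves are attached in the UI rather than hand-written here.

```python
browser_test = {
    "name": "Shopist checkout flow",
    "type": "browser",
    "config": {
        "request": {"method": "GET", "url": "https://shopist.io"},  # starting URL
        "assertions": [],
    },
    "options": {
        "device_ids": ["chrome.laptop_large"],  # one device keeps run cost down
        "tick_every": 900,                      # every 15 minutes
    },
    "locations": ["aws:us-east-1"],
    "tags": ["env:prod", "team:storefront", "service:shopist-frontend"],  # hypothetical owners
    "message": "Checkout flow is failing. Check the recording and backend traces.",
    # The recorded steps (Navigate -> Add to Cart -> Checkout, plus the
    # "Thank you" assertion) are captured with the Chrome extension in the UI.
}

# Created the same way as the API test, against the browser endpoint:
# requests.post(f"{DD_SITE}/api/v1/synthetics/tests/browser", json=browser_test, headers=headers)
```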

Best Practices for Synthetics

  1. Keep tests independent — avoid test pollution across runs.
  2. Tag everything — for correlation, routing, and dashboards.
  3. Monitor cost — frequency, retries, and locations directly impact billing.
  4. Reuse subtests — especially for common steps like login.
  5. Use billing dashboards, like the DataHacker one, to identify expensive tests.

What’s Next?

This video was your intro to Datadog Synthetics: how to create API and browser tests, set them up correctly, and avoid cost surprises.

Coming soon:

We’ll cover advanced flows—authentication, chained tests, variable extraction, and CI/CD automation.
