
Taking Screenshots and Recording Videos with Playwright for AI Analysis

Learn how to capture full-page screenshots, element-level screenshots, and record browser session videos with Playwright, then feed them to GPT-4 Vision for automated visual analysis.

Visual Intelligence for AI Agents

Text extraction alone is often insufficient for AI agents operating on the web. Visual elements — charts, images, layouts, error modals, CAPTCHAs — carry information that is not present in the DOM text. Playwright provides powerful screenshot and video recording capabilities that allow AI agents to capture visual state and feed it to multimodal models like GPT-4 Vision for analysis.

This post covers every screenshot and recording feature in Playwright, with practical examples of integrating visual captures with AI analysis.

Basic Screenshots

Playwright can capture screenshots in PNG (default) or JPEG format:

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")

    # Default screenshot (viewport only, PNG)
    page.screenshot(path="viewport.png")

    # Full page screenshot (scrolls the entire page)
    page.screenshot(path="full_page.png", full_page=True)

    # JPEG format with quality setting
    page.screenshot(path="compressed.jpg", type="jpeg", quality=80)

    # Screenshot as bytes (no file saved)
    screenshot_bytes = page.screenshot()
    print(f"Screenshot size: {len(screenshot_bytes)} bytes")

    browser.close()

The full_page=True option is particularly useful for AI agents because it captures content below the fold that would otherwise require scrolling.
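If you want to verify programmatically that a full-page capture really extended below the fold, you can read the image dimensions straight from the PNG bytes, with no imaging library, because width and height sit at fixed offsets in the IHDR chunk. A minimal sketch:

```python
import struct

def png_dimensions(png_bytes: bytes) -> tuple:
    """Read (width, height) from a PNG's IHDR chunk.
    The 8-byte PNG signature is followed by the IHDR chunk,
    whose width and height fields occupy bytes 16-24, big-endian."""
    assert png_bytes[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    return struct.unpack(">II", png_bytes[16:24])

# With real Playwright captures you could then check:
# viewport_png = page.screenshot()
# full_png = page.screenshot(full_page=True)
# assert png_dimensions(full_png)[1] >= png_dimensions(viewport_png)[1]
```

This is handy when deciding whether a page is short enough to send as a single image or should be split for analysis.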

Hear it before you finish reading

Talk to a live CallSphere AI voice agent in your browser — 60 seconds, no signup.

Try Live Demo →

Element-Level Screenshots

Capture specific elements instead of the full page — useful for focusing AI analysis on a particular component:

# Screenshot a specific element
page.locator("table.results").screenshot(path="results_table.png")

# Screenshot a chart
page.locator("#revenue-chart").screenshot(path="chart.png")

# Screenshot an error message
error = page.locator(".error-banner")
if error.is_visible():
    error.screenshot(path="error.png")

# Screenshot a parent container to include surrounding context
page.locator("#main-content").screenshot(
    path="content_with_context.png",
)

Screenshot Configuration Options

Fine-tune your screenshots for different AI analysis needs:

# Custom viewport size before screenshot
page.set_viewport_size({"width": 1920, "height": 1080})
page.screenshot(path="desktop_view.png")

page.set_viewport_size({"width": 375, "height": 812})
page.screenshot(path="mobile_view.png")

# Clip a specific region of the page
page.screenshot(
    path="header_region.png",
    clip={"x": 0, "y": 0, "width": 1920, "height": 200}
)

# Transparent background (omits the default white page background)
page.screenshot(path="transparent.png", omit_background=True)

# Disable animations for consistent screenshots
page.screenshot(
    path="static.png",
    animations="disabled"
)

Recording Browser Session Videos

Playwright can record entire browsing sessions as videos. This is invaluable for debugging AI agent behavior and for feeding session recordings to vision models:

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()

    # Enable video recording on the context
    context = browser.new_context(
        record_video_dir="./videos/",
        record_video_size={"width": 1280, "height": 720}
    )

    page = context.new_page()

    # Perform actions — all are recorded
    page.goto("https://example.com")
    page.get_by_text("More information").click()
    page.wait_for_load_state("networkidle")
    page.go_back()

    # Close context to finalize and save the video
    context.close()

    # Get the video path
    video_path = page.video.path()
    print(f"Video saved to: {video_path}")

    browser.close()

Videos are saved as WebM files. You must close the context (or page) to finalize the video file — the recording is flushed to disk on close.
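Playwright names recordings with generated hashes, which makes later review awkward. One pattern is to copy the finished recording to a stable, timestamped name with page.video.save_as() after the context closes. The naming helper below is a sketch; the "checkout" task label is illustrative:

```python
from datetime import datetime

def video_name(task: str, when: datetime) -> str:
    """Build a sortable, human-readable filename for a recording."""
    return f"{task}_{when.strftime('%Y%m%d_%H%M%S')}.webm"

# After context.close():
# page.video.save_as(f"videos/{video_name('checkout', datetime.now())}")
print(video_name("checkout", datetime(2026, 1, 2, 3, 4, 5)))
# checkout_20260102_030405.webm
```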

Feeding Screenshots to GPT-4 Vision

The real power of Playwright screenshots emerges when you combine them with multimodal AI models. Here is how to capture a page and analyze it with GPT-4 Vision:


import base64
from openai import OpenAI
from playwright.sync_api import sync_playwright

def analyze_page_with_vision(url: str, question: str) -> str:
    """
    Navigate to a URL, screenshot the page, and ask GPT-4 Vision
    a question about what it sees.
    """
    # Step 1: Capture the screenshot
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.set_viewport_size({"width": 1280, "height": 720})
        page.goto(url, wait_until="networkidle")
        screenshot_bytes = page.screenshot(full_page=False)
        browser.close()

    # Step 2: Encode as base64
    screenshot_b64 = base64.b64encode(screenshot_bytes).decode("utf-8")

    # Step 3: Send to GPT-4 Vision
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/png;base64,{screenshot_b64}",
                            "detail": "high",
                        },
                    },
                ],
            }
        ],
        max_tokens=1000,
    )

    return response.choices[0].message.content

# Usage
analysis = analyze_page_with_vision(
    "https://news.ycombinator.com",
    "What are the top 3 trending topics on this page? "
    "Summarize the themes you see."
)
print(analysis)

Building a Visual Monitoring Agent

Combine periodic screenshots with AI analysis to create a visual monitoring agent:

import time
import base64
from datetime import datetime
from openai import OpenAI
from playwright.sync_api import sync_playwright

def visual_monitor(url: str, interval: int = 60, checks: int = 5):
    """Monitor a page visually by taking periodic screenshots."""
    client = OpenAI()

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.set_viewport_size({"width": 1280, "height": 720})

        for i in range(checks):
            page.goto(url, wait_until="networkidle")
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")

            # Capture screenshot
            path = f"monitor_{timestamp}.png"
            screenshot_bytes = page.screenshot(path=path)

            # Analyze with GPT-4 Vision
            b64 = base64.b64encode(screenshot_bytes).decode()
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {
                        "role": "user",
                        "content": [
                            {
                                "type": "text",
                                "text": "Describe the current state of this "
                                        "page. Flag any errors, broken "
                                        "layouts, or unusual content.",
                            },
                            {
                                "type": "image_url",
                                "image_url": {
                                    "url": f"data:image/png;base64,{b64}",
                                },
                            },
                        ],
                    }
                ],
                max_tokens=500,
            )

            status = response.choices[0].message.content
            print(f"[{timestamp}] {status}")

            if i < checks - 1:
                time.sleep(interval)

        browser.close()

visual_monitor("https://example.com", interval=30, checks=3)
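Each vision call costs money, so a monitoring loop can skip the analysis step when the page has not visibly changed. Hashing the screenshot bytes is a cheap first-pass check; it assumes deterministic rendering, so pairing it with animations="disabled" helps. A sketch:

```python
import hashlib
from typing import Optional, Tuple

def screenshot_changed(prev_digest: Optional[str],
                       screenshot_bytes: bytes) -> Tuple[bool, str]:
    """Return (changed?, new_digest). Lets a monitor skip the vision
    call when the capture is byte-identical to the previous check."""
    digest = hashlib.sha256(screenshot_bytes).hexdigest()
    return (digest != prev_digest, digest)

# Inside the monitoring loop, only call the model when changed is True:
changed, d1 = screenshot_changed(None, b"frame-1")
same, d2 = screenshot_changed(d1, b"frame-1")
print(changed, same)  # True False
```

Byte equality is strict; two visually identical captures can differ if the page renders ads or timestamps, so treat this as an optimization, not a correctness check.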

FAQ

How large are Playwright screenshots, and how does that affect API costs?

A typical 1920x1080 PNG screenshot is 200-500 KB. For GPT-4 Vision, images are resized and tiled internally. Using "detail": "low" reduces the image to a fixed 512x512 tile (fewer tokens, lower cost). "detail": "high" splits the image into multiple 512x512 tiles for finer analysis. For most monitoring use cases, low detail is sufficient and significantly cheaper.
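OpenAI has published the tiling arithmetic behind these costs, and the helper below encodes it so you can estimate token usage before capturing. The rules may change between model versions, so treat this as an approximation:

```python
import math

def vision_token_estimate(width: int, height: int, detail: str = "high") -> int:
    """Estimate image token cost per OpenAI's documented tiling rules:
    85 tokens flat for low detail; for high detail the image is scaled
    to fit 2048x2048, its shortest side is scaled down to 768px, and
    cost is 170 tokens per 512px tile plus an 85-token base."""
    if detail == "low":
        return 85
    scale = min(1.0, 2048 / max(width, height))   # fit within 2048x2048
    w, h = width * scale, height * scale
    scale = 768 / min(w, h)                        # shortest side -> 768
    if scale < 1.0:
        w, h = w * scale, h * scale
    tiles = math.ceil(w / 512) * math.ceil(h / 512)
    return 85 + 170 * tiles

print(vision_token_estimate(1280, 720, "low"))    # 85
print(vision_token_estimate(1280, 720, "high"))   # 1105
```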

Can I extract text from screenshots instead of using DOM methods?

Yes, and sometimes it is more reliable. OCR-based extraction via GPT-4 Vision can capture text from canvas elements, images, SVGs, and other non-DOM sources that text_content() cannot reach. However, DOM-based extraction is faster and cheaper when the text is available in the HTML. Use visual extraction as a fallback or for content that only exists as rendered pixels.
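That fallback pattern can be expressed as a small helper that takes two injected readers, one cheap DOM-based and one vision-based. This is a sketch; dom_reader might wrap page.locator(...).text_content(), and vision_reader might wrap a screenshot plus a GPT-4 Vision call:

```python
def extract_text(dom_reader, vision_reader, min_chars: int = 20) -> str:
    """DOM-first extraction with a vision fallback: return the cheap
    DOM text when it looks substantive, otherwise fall back to the
    expensive OCR/vision path."""
    text = (dom_reader() or "").strip()
    if len(text) >= min_chars:
        return text
    return vision_reader()

# Empty DOM text triggers the fallback:
result = extract_text(lambda: "", lambda: "text recovered from pixels")
print(result)  # text recovered from pixels
```

The min_chars threshold is a heuristic; tune it per site so boilerplate fragments do not mask content that only exists as rendered pixels.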

How do I record video in headless mode?

Video recording works identically in headless and headed modes. Set record_video_dir on the browser context, perform your actions, and close the context. The video file is written to disk regardless of whether the browser is visible. This makes it suitable for CI/CD pipelines and cloud deployments where there is no display.


#PlaywrightScreenshots #GPTVision #VideoRecording #AIVisualAnalysis #BrowserAutomation #MultimodalAI #WebMonitoring
