- Most JavaScript errors in production go undetected—users encounter them and leave without telling you
- Manual approaches like window.onerror and try/catch blocks miss entire categories of errors, including unhandled promise rejections and third-party script failures
- Session replay with built-in error logging lets you see the exact user actions that triggered each error—no reproduction steps needed
- Inspectlet captures JavaScript errors automatically with the same one-line tracking script used for session recording—no separate SDK or configuration
- An effective error triage workflow prioritizes by user impact, not just error frequency
Why Production Error Tracking Matters
There is a class of JavaScript errors that only happens in production. They don't show up in your development environment, they pass your test suite, and they slip through staging. They appear when a real user on a four-year-old Android phone running an outdated browser encounters a race condition your team never anticipated.
The problem isn't just that these errors exist. It's that you never find out about them. Studies consistently show that the vast majority of users who encounter a bug will simply leave the site rather than report it. For every one user who emails support to say "your checkout is broken," there are dozens who silently abandoned their cart.
Production error tracking closes this gap. Instead of relying on users to tell you something is broken, you instrument your application to tell you itself—in real time, with full context about what went wrong, where, and for whom.
The stakes are concrete. A single unhandled TypeError in your checkout flow that fires for 2% of sessions could be costing you thousands in lost revenue per week. An intermittent network failure that causes your "Add to Cart" button to silently fail only on certain API endpoints might persist for months without anyone noticing. You can't fix what you can't see.
Types of JavaScript Errors in Production
Before choosing a tracking approach, you need to understand the different categories of errors you're dealing with. Each behaves differently and requires different instrumentation to catch.
Uncaught Exceptions
These are the classic runtime errors: TypeError, ReferenceError, RangeError, and their cousins. They happen when code tries to call a method on undefined, access a property that doesn't exist, or perform an operation on an incompatible type. In production, uncaught exceptions are often triggered by:
- API responses that return unexpected data shapes (a field is null that your code assumes is an object)
- Browser-specific behavior differences (a method available in Chrome but not in Safari 14)
- Race conditions where a component renders before its data has loaded
- Edge cases in user input that your validation didn't account for
Uncaught exceptions bubble up to the global scope and can be intercepted by window.onerror. They're the easiest type to catch automatically.
Unhandled Promise Rejections
Modern JavaScript is heavily asynchronous. When a Promise rejects and no .catch() handler is attached, the error doesn't trigger window.onerror—it fires a separate unhandledrejection event. This is a common blind spot. Many production tracking setups only listen for window.onerror and miss promise rejections entirely.
Promise rejections are particularly common in:
- fetch() calls where the network fails or the server returns an error and the calling code doesn't handle it
- async/await functions without try/catch blocks
- Third-party library initialization that fails silently
- Event handlers that are async but whose callers don't await or catch the result
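A minimal sketch of how this blind spot arises (function names here are hypothetical):

```javascript
// Why a synchronous try/catch misses an async rejection: the caller
// returns before the promise settles, so the error surfaces later as
// an unhandledrejection event instead of being caught.
async function save() {
  throw new Error('save failed'); // stands in for a fetch() that rejected
}

function onClickBroken() {
  try {
    save(); // not awaited: the rejection escapes this try/catch
  } catch (err) {
    // never reached for the async failure above
  }
}

async function onClickFixed() {
  try {
    await save(); // awaited: the rejection is caught here
  } catch (err) {
    return 'handled: ' + err.message;
  }
}
```

The broken variant is the shape that ends up in your unhandledrejection log; the fixed variant keeps the failure local.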
Network and API Failures
Not all production errors are JavaScript exceptions. Failed API calls—requests that return 4xx or 5xx status codes, time out, or fail due to CORS issues—are a major source of broken user experiences. The user clicks a button, the underlying XMLHttpRequest or fetch() call fails, and the UI either shows a generic error or (worse) does nothing at all.
Network failures are especially tricky because they're often intermittent. An API endpoint might work fine 99% of the time but fail under load, for specific user accounts, or in specific geographic regions. Without monitoring the network layer, these failures are nearly impossible to diagnose.
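Failures like these are easier to track when non-2xx responses and timeouts are converted into thrown errors. A minimal wrapper sketch (the timeout value is a placeholder):

```javascript
// Convert HTTP error statuses and timeouts into thrown errors so they
// reach your error handling instead of failing silently. Note that
// fetch() resolves normally on 4xx/5xx; only network-level failures reject.
async function checkedFetch(url, options = {}, timeoutMs = 5000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { ...options, signal: controller.signal });
    if (!res.ok) {
      throw new Error('HTTP ' + res.status + ' for ' + url);
    }
    return res;
  } finally {
    clearTimeout(timer);
  }
}
```

With this in place, an intermittent 503 becomes a visible exception with the failing URL in its message, rather than a silent bad response.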
Third-Party Script Errors
Your site probably loads dozens of third-party scripts: analytics, ads, chat widgets, A/B testing tools, payment processors. When one of these scripts throws an error, it can break your own code if it corrupts shared state or blocks the main thread. The frustrating part is that these errors often manifest as "Script error." with no useful stack trace, because browsers restrict error details for cross-origin scripts unless the proper CORS headers are in place.
To get full error details from third-party scripts, ensure the <script> tag has crossorigin="anonymous" and the hosting server sends Access-Control-Allow-Origin headers. Without both, the browser will suppress the error message and stack trace.
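For dynamically injected scripts, the same setting can be applied from JavaScript. A sketch (the CDN URL is a placeholder; the serving CDN must also send the CORS header):

```javascript
// Inject a third-party script with CORS enabled so the browser reports
// full error details instead of "Script error.".
function loadScriptWithCors(doc, src) {
  const script = doc.createElement('script');
  script.src = src;
  script.crossOrigin = 'anonymous'; // equivalent to crossorigin="anonymous"
  doc.head.appendChild(script);
  return script;
}

// Browser usage:
// loadScriptWithCors(document, 'https://cdn.example.com/widget.js');
```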
Traditional vs. Modern Error Tracking Approaches
There are several ways to track JavaScript errors in production, ranging from manual instrumentation to fully automated tooling. Each has trade-offs in coverage, effort, and context.
Manual window.onerror Implementation
The simplest approach is to hook into the browser's global error handler:
window.onerror = function(message, source, line, column, error) {
  navigator.sendBeacon('/api/errors', JSON.stringify({
    message: message,
    source: source,
    line: line,
    column: column,
    stack: error ? error.stack : null,
    url: location.href,
    userAgent: navigator.userAgent,
    timestamp: new Date().toISOString()
  }));
};
This catches synchronous uncaught exceptions and gives you the basic facts: error message, file, line, and stack trace. To catch promise rejections, you'd add a separate listener:
window.addEventListener('unhandledrejection', function(event) {
  // event.reason can be any value, not just an Error object
  var reason = event.reason;
  navigator.sendBeacon('/api/errors', JSON.stringify({
    message: reason && reason.message ? reason.message : String(reason || 'Unhandled rejection'),
    stack: reason && reason.stack ? reason.stack : null,
    type: 'unhandledrejection'
  }));
});
Pros: Zero dependencies, full control, you own the data.
Cons: You have to build the backend ingestion, deduplication, alerting, and dashboard yourself. You get no user context—you know what broke but not what the user was doing when it broke. You'll miss network errors entirely. And maintaining this across multiple applications becomes a burden.
Strategic try/catch Blocks
Wrapping critical code paths in try/catch gives you targeted error handling:
try {
  await processCheckout(cartItems);
} catch (err) {
  logError(err, { context: 'checkout', cartSize: cartItems.length });
  showUserFriendlyError('Something went wrong. Please try again.');
}
This works well for known risk areas (API calls, data parsing, third-party integrations) but has fundamental limitations. You can only catch errors you anticipate. You can't wrap every line of code in try/catch, and if an error occurs somewhere you didn't instrument, it's invisible.
Dedicated Error Tracking Services
Tools like Sentry, Bugsnag, and Rollbar provide comprehensive JavaScript error monitoring. They capture errors automatically, group them by root cause, track error frequency over time, and integrate with issue trackers. Most require you to install a separate SDK, configure source map uploads for readable stack traces, and manage DSN keys and project settings.
These services are powerful but have a significant limitation: they show you the error without showing you the user. You get a stack trace and maybe some breadcrumbs (a log of recent user actions), but you can't see what the user actually experienced. Reproducing the bug often requires guessing at the sequence of events that led to the error.
Session Replay with Built-In Error Logging
This is where the approaches diverge most sharply. A session recording tool with integrated error logging captures JavaScript errors and records the entire user session. When an error occurs, you don't just see a stack trace—you can watch the recording to see exactly what the user clicked, what was on screen, and what happened before and after the error.
Inspectlet takes this approach. Its tracking script automatically hooks window.onerror to capture every uncaught exception with the full stack trace, error message, source file, line number, and column number. Errors are deduplicated (the same error won't flood your dashboard) and each one is linked directly to the session recording where it occurred.
Track Errors with Full Session Context
Every error is linked to a session recording. See what the user did, not just what broke.
Setting Up Production Error Tracking with Inspectlet
One of the biggest advantages of using Inspectlet for production error tracking is that there's nothing extra to configure. If you've already installed Inspectlet for session recording or heatmaps, error logging is already active.
Step 1: Install the Tracking Script
Add the Inspectlet tracking snippet to your site. This is a single JavaScript block that goes in your page's <head> section. You'll find your site-specific snippet in your Inspectlet dashboard under Settings.
That's it. There is no separate error logging SDK to install, no source map upload process, no additional configuration. The same one-line tracking script that powers session recording and heatmaps also captures JavaScript errors automatically.
Step 2: Verify Errors Are Being Captured
To confirm everything is working, open your site in a browser and trigger a test error from the console:
// Open your browser's console and run this on a page with Inspectlet installed.
// Wrapping the throw in setTimeout lets it surface as an uncaught page error;
// some browsers don't route exceptions thrown directly in the console to window.onerror.
setTimeout(function() { throw new Error('Test error from console'); }, 0);
Then check the Error Logging section in your Inspectlet dashboard. Within a few minutes, you should see the test error appear with the message, source location, and a link to the session where it occurred.
Step 3: Review the Error Dashboard
The error logging dashboard shows a table of all captured errors. For each error, you'll see:
- Error message — the text of the exception
- Source file and line — where the error originated
- Stack trace — expandable full call stack so you can trace the error back to its root
- Error count — how many times this specific error has occurred
- Frequency timeline — a sparkline chart showing when the error occurs over time, so you can spot patterns (did it start after a deployment?)
Step 4: Watch the Session Replay
This is what separates Inspectlet from traditional error tracking. Click any error and you'll see the sessions where it occurred. Click a session to open the full recording. You can jump directly to the moment the error was thrown and watch what the user was doing: which page they were on, what they clicked, what they typed, and how the UI responded (or didn't).
The replay also includes Console and Network tabs synchronized to the timeline. You can see console warnings, other errors, network requests with their status codes, timing, and payload sizes—all aligned to the exact moment in the recording.
Reading Error Data Effectively
Capturing errors is step one. Making them actionable requires a systematic approach to reading and interpreting the data.
Error Grouping and Frequency
Not all errors are equal. An obscure TypeError that fires once a week for a single user on an obsolete browser is very different from a ReferenceError that fires 500 times a day on your checkout page. The error dashboard groups identical errors together and shows their count, so you can immediately see which errors are widespread.
Look at the frequency timeline to identify patterns:
- Sudden spike — likely caused by a recent deployment. Cross-reference the timing with your release schedule.
- Steady baseline — a longstanding bug that's been silently affecting users. Often the most damaging because it's been draining conversions for months.
- Periodic spikes — may correlate with traffic patterns (peak hours), specific marketing campaigns, or time-based operations like daily batch jobs that interact with the frontend.
Stack Trace Analysis
The stack trace tells you the chain of function calls that led to the error. When reading a production stack trace, work from top to bottom:
- Top frame — where the error was thrown. This is the immediate location, but often not the root cause.
- Middle frames — the call chain. Look for the transition point between your code and library code. The bug is usually at the boundary where your code passes data into a library or vice versa.
- Bottom frames — the origin of the call. This tells you what triggered the chain of events (a click handler, a timer, a network callback).
If your JavaScript is minified in production, stack traces may reference compressed file and function names. Source maps can help here, but even without them, the error message and the overall call pattern are usually enough to locate the issue in your source code.
Correlating Errors with User Behavior via Session Replay
This is where production error tracking becomes truly powerful. A stack trace tells you what broke. A session replay tells you why.
When you watch the session recording linked to an error, pay attention to:
- What the user did immediately before the error — did they click a button, submit a form, navigate to a specific page?
- The state of the page — was the page fully loaded? Were there other errors or slow network requests happening simultaneously?
- What happened after the error — did the user see a broken UI? Did they try again? Did they leave the site entirely?
- Network activity — check the Network tab in the replay. Was the error preceded by a failed API call? A timeout? A 500 response?
This context turns an abstract error report into a reproducible bug. Instead of filing a ticket that says "TypeError: Cannot read property 'id' of undefined," you can say "When a user adds item X to cart and the /api/inventory endpoint returns 503, the cart component crashes because it doesn't handle the failure case."
Building an Error Triage Workflow
Production error tracking only works if you act on the data. Here's a practical workflow for teams.
Prioritizing by Frequency and User Impact
Sort errors by count to see which ones affect the most users. But don't stop there—consider the business impact:
- An error on the checkout page that fires 50 times a day is more critical than an error on the "About Us" page that fires 200 times
- An error that causes a blank page (total loss of functionality) is worse than an error that breaks a tooltip
- An error affecting logged-in users (who are closer to conversion) is more urgent than one affecting first-time visitors on a blog post
Multiply error frequency by page value. If your checkout page generates $50,000/month and an error affects 3% of sessions on that page, fixing it is likely worth more than fixing an error that affects 20% of sessions on a page that doesn't drive revenue.
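The arithmetic is simple enough to sketch directly, using the hypothetical figures from this section:

```javascript
// Weight an error by the revenue of the page where it fires, not just
// its raw frequency.
function monthlyRevenueAtRisk(pageRevenuePerMonth, affectedSessionShare) {
  return pageRevenuePerMonth * affectedSessionShare;
}

const checkoutRisk = monthlyRevenueAtRisk(50000, 0.03); // about $1,500/month at risk
const blogRisk = monthlyRevenueAtRisk(0, 0.20);         // $0/month at risk
```

By this measure the checkout error wins the triage queue despite its lower raw frequency.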
Reproducing Errors via Session Replay
The traditional debugging cycle is: see error → guess reproduction steps → try to reproduce → fail → guess again → maybe reproduce. This can take hours or days, especially for intermittent bugs.
With session replay, the cycle collapses to: see error → watch recording → understand exactly what happened. You see the user's browser, viewport size, the sequence of interactions, the network conditions, and the UI state at the moment of the error. For many bugs, watching a single recording is enough to identify the root cause without ever opening your local development environment.
Fixing and Verifying
After deploying a fix, use the error dashboard's frequency timeline to verify the error has stopped occurring. If the sparkline drops to zero after your deployment, the fix is confirmed. If it continues, watch new session recordings to see if the error has changed shape or if there's a secondary trigger you missed.
Advanced Techniques
Custom Error Tagging
Inspectlet lets you tag sessions with custom metadata using the JavaScript API. This is powerful for error investigation because you can attach business context to sessions where errors occur:
__insp.push(['tagSession', {
  error_context: 'checkout_flow',
  cart_value: calculateCartTotal(),
  user_tier: getCurrentUserPlan()
}]);
When you're triaging errors, these tags let you filter sessions by context. You might discover that a particular error only affects users on the "Pro" plan, or only occurs when the cart contains more than 10 items. This context dramatically narrows the debugging surface.
Network Request Monitoring
Inspectlet's tracking script monitors XMLHttpRequest activity automatically. It captures the request URL, HTTP method, status code, response time, and payload sizes for every AJAX call made during a session. Failed requests (4xx and 5xx status codes) are visible in the Network tab of the session replay.
This is invaluable for tracking errors that originate from the backend. When a JavaScript error is caused by a failed API call, you can see both the network failure and the resulting UI breakage in the same timeline. You don't need to correlate timestamps between your frontend error tracker and your backend logging system—it's all in one place.
Combining Error Data with Form Analytics
JavaScript errors during form submission are conversion killers. By combining Inspectlet's error logging with its form analytics, you can identify exactly where users encounter errors during multi-step flows like checkout, registration, or onboarding.
Look for patterns where errors cluster around specific form fields or steps. A common example: a payment form that throws a JavaScript error when the user enters a card number with spaces, because the validation code expects only digits. The error log shows you the exception, the form analytics shows you the abandonment spike on that field, and the session replay shows you the user typing their card number with spaces, seeing no feedback, trying again, and eventually leaving.
Common Production JavaScript Errors and How to Fix Them
After analyzing millions of production error reports, certain patterns appear again and again. Here are the most common JavaScript errors teams encounter in production and how to resolve them.
TypeError: Cannot read properties of undefined
This is the single most common production JavaScript error. It occurs when code attempts to access a property or call a method on a value that is undefined or null. In production, the typical trigger is an API response that's missing expected data.
Fix: Use optional chaining (user?.address?.city) and provide default values. Never assume nested objects are fully populated. Validate API response shapes before using them.
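A minimal sketch of the defensive pattern (the response shape here is hypothetical):

```javascript
// Defensive access for an API response whose nested shape may vary.
function cityOf(user) {
  // Optional chaining short-circuits to undefined instead of throwing;
  // ?? supplies a fallback value.
  return user?.address?.city ?? 'unknown';
}

cityOf({ address: { city: 'Berlin' } }); // 'Berlin'
cityOf({ address: null });               // 'unknown'
cityOf(undefined);                       // 'unknown'
```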
ReferenceError: [variable] is not defined
Often caused by script load order issues. A script tries to use a variable or function defined in another script that hasn't loaded yet. This is more common on slow connections where scripts load out of order.
Fix: Use module bundlers that handle dependency ordering. If loading scripts independently, check for existence before use or use defer/async attributes strategically.
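A minimal guard sketch, assuming a hypothetical analytics global that may not have loaded yet:

```javascript
// Guard against a dependency that may not have loaded yet instead of
// assuming script load order. 'analytics' is a hypothetical global
// defined by another script.
function safeTrack(eventName) {
  if (typeof analytics !== 'undefined' && typeof analytics.track === 'function') {
    analytics.track(eventName);
    return true;
  }
  // Dependency missing: no-op (or queue the event for later) rather than throw.
  return false;
}
```

The typeof check is the key detail: it never throws a ReferenceError, even when the identifier is completely undefined.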
Script error. (No details)
This isn't a real error message—it's the browser's way of saying "an error occurred in a cross-origin script, but I can't tell you the details for security reasons." You'll see this for errors in third-party scripts loaded from different domains.
Fix: Add crossorigin="anonymous" to your <script> tags and ensure the script's server sends the Access-Control-Allow-Origin header. For scripts you don't control (ads, widgets), consider whether the third-party provider offers error reporting on their end.
Failed to fetch / NetworkError
These errors indicate that a fetch() call couldn't complete. Causes include network connectivity issues, CORS misconfigurations, server downtime, or browser extensions blocking requests.
Fix: Always wrap fetch() in error handling. Implement retry logic with exponential backoff for transient failures. Show users a meaningful error message instead of letting the UI break silently. Use the Network tab in Inspectlet's session replay to see the exact request that failed and correlate it with the user's experience.
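A retry sketch with exponential backoff (retry counts and delays are illustrative; operation stands in for a fetch() call):

```javascript
// Retry a failing async operation with exponentially growing delays,
// rethrowing the last error once retries are exhausted.
async function withRetry(operation, { retries = 3, baseDelayMs = 200 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // 200ms, 400ms, 800ms, ... between attempts
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Reserve this pattern for transient failures (timeouts, 503s); retrying a 400 Bad Request will never succeed and only adds load.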
ResizeObserver loop limit exceeded
A surprisingly common production error. It occurs when a ResizeObserver callback triggers a layout change that triggers another resize observation, creating an infinite loop that the browser breaks.
Fix: Debounce your ResizeObserver callbacks. Ensure that changes made inside the callback don't cause the observed element to resize again. In many cases, this error is benign (the browser breaks the loop automatically), but it clutters your error log and may indicate a performance issue.
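A debounce helper is one way to break the feedback cycle. A sketch (the observed element and callback are hypothetical):

```javascript
// Debounce a callback so rapid-fire notifications coalesce into one call.
function debounce(fn, waitMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), waitMs);
  };
}

// Browser usage (ResizeObserver is a browser-only API):
// const ro = new ResizeObserver(debounce((entries) => {
//   for (const entry of entries) updateLayout(entry.contentRect);
// }, 100));
// ro.observe(document.querySelector('#panel'));
```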
ChunkLoadError / Loading chunk [n] failed
Common in single-page applications that use code splitting. The browser tries to load a JavaScript chunk on-demand, but the file is missing (often because a new deployment changed chunk hashes while the user still has the old HTML cached).
Fix: Keep old chunks available for a grace period after deployments. Implement a fallback that reloads the page when a chunk fails to load. Use performance monitoring to detect when chunk load times are degrading.
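One way to implement the reload fallback, sketched as two small helpers (the function names and error-message regex are illustrative; your bundler's actual error strings may vary):

```javascript
// Detect a stale-deployment chunk failure and decide whether a one-time
// page reload is a safe recovery.
function isChunkLoadError(err) {
  return /ChunkLoadError|Loading chunk \d+ failed/i.test(String(err));
}

// Guard with storage so a persistent failure can't cause a reload loop.
function shouldReloadOnce(storage) {
  if (storage.getItem('chunk-reload-attempted')) return false;
  storage.setItem('chunk-reload-attempted', '1');
  return true;
}

// Browser usage:
// import('./views/checkout.js').catch((err) => {
//   if (isChunkLoadError(err) && shouldReloadOnce(sessionStorage)) {
//     window.location.reload(); // fetch fresh HTML with current chunk hashes
//   } else {
//     throw err;
//   }
// });
```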
Error Monitoring vs. Error Tracking: Building a Complete System
There's a subtle but important distinction between monitoring and tracking. Error monitoring tells you something is wrong (an alert fires, a dashboard turns red). Error tracking tells you what's wrong, who's affected, and how to fix it.
A complete production error system needs both:
- Real-time alerting — get notified when a new error type appears or when error rates spike above baseline. This is your early warning system for deployments gone wrong.
- Contextual investigation — when you receive an alert, you need tools that let you drill into the error, see its frequency, read its stack trace, and understand its user impact. This is where error tracking with session replay shines.
- Trend analysis — over time, track your total error rate as a percentage of sessions. A healthy production application should maintain a stable or declining error rate. An upward trend means technical debt is accumulating.
The combination of Inspectlet's automatic error capture, session replay, and purpose-built error tracking provides the investigation layer. Pair it with your team's alerting system (PagerDuty, Slack webhooks, or similar) for a complete production error management pipeline.
Frequently Asked Questions
Do I need to install a separate SDK to track JavaScript errors with Inspectlet?
No. Unlike tools like Sentry or Bugsnag that require a dedicated error tracking SDK, Inspectlet captures JavaScript errors automatically with the same tracking script used for session recording and heatmaps. If you've installed Inspectlet, error logging is already active.
Does Inspectlet capture errors from third-party scripts?
Inspectlet captures all errors that reach window.onerror. For cross-origin scripts, the browser may limit error details to "Script error." unless the script tag includes crossorigin="anonymous" and the server sends appropriate CORS headers. This is a browser security restriction, not a tool limitation.
How does error deduplication work?
When the same error fires multiple times in a session (common in render loops or interval-based code), Inspectlet deduplicates by error hash with a cooldown period so your error log isn't flooded with identical entries. Each unique error is counted, but rapid-fire duplicates within a short window are consolidated.
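The general idea can be sketched as follows (an illustration of hash-plus-cooldown deduplication, not Inspectlet's actual implementation):

```javascript
// Consolidate identical errors that fire within a cooldown window.
// The clock is injectable so the behavior is testable.
function makeDeduper(cooldownMs = 5000, now = Date.now) {
  const lastSeen = new Map();
  return function shouldReport(err) {
    const key = err.name + ':' + err.message; // simple error "hash"
    const t = now();
    const prev = lastSeen.get(key);
    lastSeen.set(key, t);
    return prev === undefined || t - prev > cooldownMs;
  };
}
```

A render loop throwing the same TypeError sixty times a second would report once per cooldown window instead of flooding the log.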
Can I track custom errors or log messages?
You can tag sessions with custom metadata using __insp.push(['tagSession', { key: 'value' }]) to add business context to sessions where errors occur. This lets you filter and search sessions by custom attributes like user plan, cart value, or feature flags.
How does this compare to using Sentry alongside Inspectlet?
Sentry provides deeper error-specific features like source map integration, release tracking, and issue assignment workflows. Inspectlet provides something Sentry can't: the ability to watch the actual user session where the error occurred. Many teams use both—Sentry for error management workflows and Inspectlet for understanding user impact via session replay. If you're choosing one tool to start with, Inspectlet gives you error tracking plus session recording, heatmaps, and form analytics in one script.