Introduction: Why Hydration Matters in Today’s Web
In our fast-paced digital world, users expect websites to load instantly and respond immediately to their interactions. Behind this seamless experience lies a complex technical process called “hydration” that bridges the gap between server-rendered content and interactive web applications. For developers building modern websites, understanding hydration isn’t just a technical detail—it’s the key to creating experiences that delight users and perform exceptionally well across all devices.
Hydration is the process that transforms static HTML—delivered quickly from a server—into a dynamic, interactive application in your browser. Think of it as breathing life into a webpage. Without proper hydration, even the most beautifully designed site can feel sluggish or unresponsive. But when implemented correctly, hydration creates that magical moment when a page loads instantly and then comes alive with interactive features.
The challenges with hydration have evolved alongside web technology. What started as a simple concept in early Single-Page Applications (SPAs) has grown into a sophisticated discipline with multiple approaches, each with its own trade-offs. Today’s developers must navigate a landscape of traditional hydration, partial hydration, lazy hydration, selective hydration, and the emerging concept of “resumable” applications.
This article will guide you through this complex terrain with practical insights, helping you make informed decisions about which techniques to apply in different scenarios. We’ll explore not just the “how” but the “why” behind each approach, with a focus on real-world performance impacts that affect your users every day.
Understanding Hydration Errors: The Foundation of Robust Applications
Before diving into advanced hydration strategies, it’s crucial to understand what happens when hydration goes wrong. A “hydration misfire” or “hydration error” occurs when there’s a mismatch between what the server rendered and what the client-side JavaScript expects to find. These errors might seem like minor technical glitches, but they can completely undermine your application’s performance and user experience.
The Anatomy of a Hydration Error
At its core, a hydration error happens when the server-rendered HTML doesn’t match what the client-side framework (like React) expects. Modern frameworks use strict diffing algorithms to compare the server output with their internal representation of the UI. When discrepancies appear, the framework throws warnings like “Text content did not match” or “Expected server HTML to contain a matching element.”
These errors aren’t just annoying console messages—they indicate fundamental architecture problems that can lead to broken interactions, visual inconsistencies, and poor performance. Users might see content flash or change after the page loads, or worse, find that buttons and forms simply don’t work.
Common Causes of Hydration Mismatches
The most frequent causes of hydration errors stem from differences between server and client environments:
Browser API Access on Server: Using browser-specific objects like window, document, or localStorage during initial rendering causes problems because these objects don’t exist on the server. For example:
```js
// This will cause hydration errors
const width = window.innerWidth; // Error on server
```

Non-Deterministic Functions: Functions that return different values each time they're called—like `Math.random()` or `new Date()`—create mismatches between server and client renders:
```js
// This will cause hydration errors
const id = Math.random(); // Different on server vs. client
const currentDate = new Date(); // Server time vs. client time
```

Conditional Element Hierarchy: When components render different parent elements based on conditions, the DOM structure diverges between server and client:
```jsx
// This can cause hydration errors if `condition` differs between environments
function Wrapper({ condition, content }) {
  if (condition) {
    return <p>{content}</p>;
  }
  return <div>{content}</div>;
}
```

Third-Party Library Behavior: External libraries that access browser APIs internally or browser extensions that modify the DOM (like Grammarly adding attributes) can introduce unexpected elements that break server-client parity.
Multiple React Instances: A particularly tricky issue occurs when different versions of React exist in the application—often when the server imports React from its local node_modules while the client bundle contains another version. This causes cryptic errors like “Invalid hook call” because React’s internal reconciler relies on instance checks.
The Real Impact of Hydration Errors
Beyond console warnings, hydration errors have tangible consequences:
- Inconsistent UI rendering: Components may appear with different content or structure on client versus server
- Broken interactivity: Event handlers might fail to attach correctly to mismatched DOM elements
- Performance degradation: The browser wastes time trying to reconcile differences instead of responding to user input
- Hidden issues in production: Some attribute-level mismatches may be suppressed in production builds, only to resurface later as full re-renders
The cumulative effect is a degraded user experience that contradicts the very purpose of using server-side rendering—to deliver fast, interactive applications.
Debugging and Fixing Hydration Errors
Modern frameworks have significantly improved their debugging capabilities. React 17 and later versions provide detailed component stack traces that pinpoint exactly where in your component hierarchy the mismatch occurs. This is a huge improvement over earlier versions that only reported attribute-level mismatches.
Effective debugging strategies include:
- Using React DevTools to inspect component hierarchies on both server and client
- Strategic console logs with environment checks (`typeof window !== 'undefined'`)
- Visual regression testing to automatically detect rendering discrepancies
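A tiny helper keeps these environment checks explicit and in one place. This is a minimal sketch; the `isBrowser` and `getViewportWidth` names are illustrative, not a framework API:

```javascript
// Minimal environment-check helper (illustrative name, not a framework API).
// On the server (and in Node), `window` is undefined, so this returns false.
function isBrowser() {
  return typeof window !== 'undefined';
}

// Guard browser-only logic so it never runs during server rendering.
function getViewportWidth(fallback = 0) {
  return isBrowser() ? window.innerWidth : fallback;
}
```

Centralizing the check also makes it easy to mock in tests and avoids scattering `typeof window` conditionals throughout the codebase.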
The solution to most hydration errors follows a consistent pattern: ensure identical rendering logic across environments. This typically involves:
- Deferring client-specific logic until after the initial render using the `useEffect` hook
- Initializing state with stable values that match between server and client
- Using environment checks to prevent execution of browser-only code on the server
- Applying `suppressHydrationWarning` for intentional, harmless mismatches (like displaying the current time)
For example, to safely access the window object:
```jsx
import { useState, useEffect } from 'react';

function Component() {
  // Initialize with a server-safe default value
  const [width, setWidth] = useState(0);

  // Update only on the client after hydration
  useEffect(() => {
    setWidth(window.innerWidth);
  }, []);

  return <div>Window width: {width}px</div>;
}
```

By mastering these fundamentals, developers create a solid foundation for implementing more advanced hydration strategies. Only when server-client mismatches are eliminated can we confidently explore the spectrum of hydration approaches that optimize performance without sacrificing reliability.
The Spectrum of Hydration: From Traditional Models to Architectural Paradigms
The evolution of web application architecture has been shaped by a fundamental challenge: how to balance rich interactivity with fast page loads. This journey has transformed our approach to hydration from a simple all-or-nothing process to a sophisticated spectrum of strategies, each designed for specific use cases and performance requirements.
The Traditional Hydration Model and Its Limitations
Traditional hydration, the approach most developers first encounter, works like this: the server renders static HTML, sends it to the client, and then the browser downloads the entire JavaScript bundle. Only after this bundle is fully parsed and executed does any part of the page become interactive. This sequential process creates what performance experts call a “waterfall” of dependencies.
The limitations of this model become painfully apparent on real devices with constrained resources:
- Low-end mobile devices may spend 2-3 seconds just parsing and compiling JavaScript before any interaction is possible
- Users on slow networks face extended wait times with no visual feedback
- The entire application bundle must be downloaded even if only a small portion is needed for the initial view
- Time to Interactive (TTI) metrics suffer significantly, directly impacting user engagement and conversion rates
This model treats the web page as a monolithic entity rather than a collection of independent components with varying levels of importance to the user experience.
Islands Architecture: A Paradigm Shift
Islands Architecture (also known as partial hydration) represents a fundamental rethinking of how web applications should be structured. Instead of viewing a page as a single interactive application, this approach treats it as a “static sea” of content punctuated by isolated “islands” of interactivity.
The core principle is elegant in its simplicity: only components that need to be interactive receive JavaScript. Everything else remains as lightweight, static HTML. This approach delivers dramatic performance improvements:
- Non-interactive content like articles, documentation, and marketing pages load instantly without JavaScript overhead
- Initial JavaScript payloads can be reduced by 60-90% compared to traditional SPAs
- Critical content appears immediately while interactive elements load progressively
- Search engines and users with JavaScript disabled still receive complete content
Frameworks like Astro have made this pattern a core feature through directives like client:load, client:idle, and client:visible, giving developers precise control over when components become interactive. Vue-based systems like VitePress and Nuxt are also adopting this philosophy through conventions like *.client.js files.
Progressive and Lazy Hydration: Adding Temporal Control
Building on the foundation of partial hydration, progressive and lazy hydration introduces temporal control over the hydration process. Rather than hydrating all necessary components immediately upon page load, these techniques delay hydration until specific conditions are met.
Three common triggers define this approach:
When Visible (Intersection Observer): Components only hydrate when they enter the viewport. This prevents downloading JavaScript for content below the fold that users might never see. The browser’s native IntersectionObserver API makes this efficient with minimal overhead.
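In plain JavaScript, the pattern looks roughly like this; the pure decision logic is separated from the observer wiring, and `hydrateIsland` plus the `data-island` attribute are illustrative stand-ins rather than a real framework API:

```javascript
// Sketch of hydrate-when-visible. The decision logic is a pure function;
// only the commented observer wiring touches browser APIs.
function visibleEntries(entries) {
  return entries
    .filter((entry) => entry.isIntersecting)
    .map((entry) => entry.target);
}

// Browser wiring (IntersectionObserver API); `hydrateIsland` is hypothetical:
// const io = new IntersectionObserver((entries) => {
//   for (const el of visibleEntries(entries)) {
//     hydrateIsland(el); // load and attach this island's JavaScript
//     io.unobserve(el);  // hydrate once, then stop watching
//   }
// });
// document.querySelectorAll('[data-island]').forEach((el) => io.observe(el));
```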
When Idle (requestIdleCallback): Components hydrate during browser idle periods, ensuring that critical rendering and user interactions aren’t blocked by non-essential JavaScript execution. This approach respects the user’s immediate needs while still preparing the page for future interactions.
On Interaction: Components remain completely static until a user attempts to interact with them (clicking a button, hovering over an element). Only then is the necessary JavaScript downloaded and executed. This provides the most aggressive optimization but requires careful UX design to signal interactive elements.
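A framework-free sketch of the bookkeeping behind on-interaction hydration: `createHydrator` is an illustrative helper (not a library API), and `loadComponent` stands in for the dynamic `import()` call, which should fire at most once per island:

```javascript
// On-interaction hydration bookkeeping (illustrative, not a framework API).
function createHydrator(loadComponent) {
  const hydrated = new Set();
  return function hydrateOnInteraction(componentId) {
    if (hydrated.has(componentId)) return false; // already interactive
    hydrated.add(componentId);
    // A real implementation would do: import(`./islands/${componentId}.js`)
    loadComponent(componentId);
    return true;
  };
}
```

The Set guarantees idempotence: rapid repeated clicks on the same element trigger only one network request for its JavaScript.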
Angular’s @defer blocks provide a declarative syntax for implementing this pattern. In Vue, libraries like vue-lazy-hydration offer similar capabilities. The result is a dramatic improvement in Time to Interactive (TTI) and Interaction to Next Paint (INP) metrics, as the main thread remains free to handle user input.
Selective Hydration: React’s Advanced Approach
React 18 introduced selective hydration, a sophisticated implementation that leverages two key innovations: streaming server-side rendering and Suspense boundaries. This approach fundamentally changes how hydration works by allowing components to become interactive out of order, based on priority.
Here’s how it works in practice:
- The server begins streaming HTML to the client as soon as components are ready, rather than waiting for the entire page
- Components that require slow data fetching are wrapped in Suspense boundaries
- The server sends fallback UI (like skeleton loaders) for these boundaries while continuing to stream the rest of the page
- Once data is available, the server streams the actual component HTML along with a small script tag containing hydration instructions
- React can hydrate already-streamed portions of the page immediately, without waiting for slower components to finish loading
Wix’s implementation of this technique demonstrates its real-world impact: they achieved a 20% reduction in JavaScript payloads and a 40% improvement in INP scores across hundreds of millions of sites. This effectively creates a priority system where user-facing content becomes interactive first, while background processes continue loading.
Resumable Hydration: The Future Paradigm
At the cutting edge of hydration strategy lies resumable hydration, pioneered by frameworks like Qwik. This approach fundamentally challenges the assumption that client-side JavaScript execution is necessary for interactivity.
Instead of re-executing the entire component tree on the client, resumable frameworks serialize the application’s state, component structure, and event listener locations directly into the HTML during server rendering. The result is a self-contained snapshot that requires minimal client-side processing.
The client experience transforms dramatically:
- A tiny loader script (often under 1KB) parses the serialized state
- Event listeners are attached globally rather than to individual components
- When users interact with elements, only the specific JavaScript needed to handle that interaction is downloaded
- The rest of the application remains dormant until needed
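The core idea can be sketched in a few lines of plain JavaScript. The `on-click` attribute and `handlerRegistry` here are illustrative stand-ins for Qwik's serialized listener references, not its actual API:

```javascript
// Handlers live in a registry; in a real resumable framework each entry
// would be lazy-loaded on first use rather than bundled up front.
const handlerRegistry = {
  counter_increment: (state) => ({ ...state, count: state.count + 1 }),
};

// "Server render": emit plain HTML with the handler's id serialized into it.
function renderButton(handlerId, label) {
  return `<button on-click="${handlerId}">${label}</button>`;
}

// "Client resume": a tiny global loader reads the attribute and runs only
// the one handler the user actually triggered.
function resume(html, state) {
  const match = html.match(/on-click="([^"]+)"/);
  return match ? handlerRegistry[match[1]](state) : state;
}
```

Because the handler's identity lives in the HTML itself, no component code has to re-execute on the client before the first interaction.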
This approach eliminates the traditional hydration waterfall entirely. Instead of waiting for megabytes of JavaScript to download, parse, and execute before any interaction is possible, users see a fully interactive page immediately. Qwik applications routinely achieve near-zero TTI times, even on low-end devices.
While this paradigm requires a different mental model for development (code must be serializable), it represents a significant leap forward in performance architecture. Google’s Wiz and Marko frameworks are exploring similar approaches, suggesting this isn’t just a niche technique but a fundamental shift in how we build web applications.
Choosing the Right Strategy for Your Application
Selecting an appropriate hydration strategy requires careful consideration of several factors:
Content Type: Content-heavy sites (blogs, documentation, marketing pages) benefit most from Islands Architecture or resumable approaches. Data-intensive applications (dashboards, admin panels) might still require more traditional hydration but can benefit from selective techniques.
User Demographics: Applications serving global audiences with varying device capabilities and network conditions should prioritize progressive enhancement and minimal initial payloads.
Development Constraints: Teams with existing React investments might start with selective hydration before exploring more radical approaches. New projects have the freedom to choose frameworks aligned with their performance goals.
Business Metrics: Sites where conversion rates correlate strongly with TTI should prioritize the most aggressive optimization strategies available.
The most successful implementations often combine multiple approaches: using Islands Architecture as the foundation, adding lazy hydration for non-critical components, and implementing selective hydration for interactive elements. This layered strategy ensures optimal performance across diverse user scenarios without compromising functionality.
By understanding this spectrum of hydration strategies, developers can move beyond one-size-fits-all solutions and architect applications that deliver exceptional performance while maintaining rich interactivity where it matters most.
Code Splitting and Lazy Loading: The Foundation of Modern Web Performance
Before we can fully leverage advanced hydration strategies like partial hydration or resumable frameworks, we must establish a solid foundation through code splitting and lazy loading. These techniques address the fundamental challenge that plagues modern web applications: JavaScript bundle bloat. By breaking down monolithic application code into smaller, more manageable pieces, we can dramatically improve initial load times and user experience.
Understanding the Bundle Bloat Problem
Imagine you’re ordering a pizza. You only want a simple cheese pizza, but the restaurant delivers every topping they offer—pepperoni, mushrooms, olives, anchovies—just in case you might want them later. This is essentially what happens with traditional JavaScript bundles. Users download the entire application code upfront, including features they may never use, causing unnecessary delays before they can interact with your site.
The consequences are measurable and significant:
- On a typical 3G connection, each additional 100KB of JavaScript can add 1-2 seconds to interactive time
- Low-end mobile devices can take 3-5 times longer to parse and execute JavaScript compared to high-end devices
- Every unnecessary kilobyte increases data costs for users, particularly in regions with expensive mobile data
This is where code splitting becomes essential—not as an optional optimization, but as a core architectural principle for modern web development.
Code Splitting: Breaking the Monolith
Code splitting is the strategic partitioning of your application’s JavaScript into separate files (or “chunks”) that can be loaded on demand. Instead of delivering one massive JavaScript file containing your entire application, code splitting creates a modular architecture where users only download what they need when they need it.
The primary mechanism for code splitting in modern applications is the dynamic `import()` syntax, standardized in ES2020, which transforms how we load JavaScript:
```js
// Traditional static import (loads immediately)
import HeavyComponent from './HeavyComponent';

// Dynamic import (loads on demand)
const loadHeavyComponent = () => import('./HeavyComponent');
```

When you use the dynamic `import()` syntax, modern bundlers like Webpack, Vite, or Rollup automatically create separate chunks for those modules. This simple syntax shift creates an entirely different loading pattern that can dramatically improve performance.
Framework-Specific Implementations
Different frameworks provide abstractions that make code splitting more intuitive while handling the complex details behind the scenes:
Next.js Implementation
Next.js provides the next/dynamic utility, which wraps dynamic imports with helpful features like loading states:
```jsx
import dynamic from 'next/dynamic';

const DynamicChart = dynamic(() => import('../components/Chart'), {
  loading: () => <SkeletonChart />,
  ssr: false // Disable server-side rendering if needed
});

function Dashboard() {
  return (
    <div>
      <h1>Analytics Dashboard</h1>
      <DynamicChart />
    </div>
  );
}
```

This approach not only splits the code but provides a seamless user experience by showing a skeleton loader while the chart component loads. The `ssr: false` option is particularly useful for components that rely heavily on browser APIs.
Vue/Nuxt Implementation

Vue and Nuxt implement code splitting through component naming conventions and built-in utilities:
```vue
<script setup>
// In plain Vue, defineAsyncComponent creates a code-split async component.
// (In Nuxt, prefixing an auto-imported component with "Lazy" does this for you.)
import { defineAsyncComponent } from 'vue';

const LazyChart = defineAsyncComponent(() => import('~/components/Chart.vue'));
</script>

<template>
  <div>
    <h1>Analytics Dashboard</h1>
    <LazyChart />
  </div>
</template>
```

Nuxt takes this further with file-based conventions. Components with a `.client` suffix (for example, `Chart.client.vue`) are automatically treated as client-only and code-split. This convention-based approach reduces boilerplate code while maintaining clear architectural boundaries.
React Implementation
In standard React applications, the React.lazy and Suspense components provide built-in support for code splitting:
```jsx
import React, { Suspense, lazy } from 'react';

const HeavyChart = lazy(() => import('./HeavyChart'));

function Dashboard() {
  return (
    <div>
      <h1>Analytics Dashboard</h1>
      <Suspense fallback={<SkeletonChart />}>
        <HeavyChart />
      </Suspense>
    </div>
  );
}
```

This pattern allows React to show the fallback UI while the component loads, creating a smoother user experience during async operations.
Lazy Loading: The Execution Strategy
While code splitting determines how your application is divided into chunks, lazy loading defines when those chunks are actually downloaded and executed. The goal is to defer non-essential JavaScript until it’s genuinely needed, keeping the initial payload minimal.
Route-Based Code Splitting
The most straightforward application of lazy loading is at the route level. In a typical application, users rarely visit every page in a single session. Route-based code splitting ensures users only download JavaScript for the pages they actually visit:
```js
// Next.js page-based code splitting (automatic)

// pages/dashboard.js
export default function Dashboard() { /* heavy code */ }

// pages/settings.js
export default function Settings() { /* different heavy code */ }
```

In Next.js and similar frameworks, each page is automatically code-split, meaning users visiting the dashboard never download JavaScript for the settings page unless they navigate there.
Component-Based Lazy Loading
More granular control comes from component-level lazy loading. This is particularly valuable for complex UI elements that aren’t immediately visible or critical to the initial user experience:
```jsx
// A heavy rich text editor only needed when users click "edit"
const RichTextEditor = React.lazy(() => import('./RichTextEditor'));

function Article({ content }) {
  const [isEditing, setIsEditing] = useState(false);

  return (
    <div>
      {isEditing ? (
        <Suspense fallback={<EditorSkeleton />}>
          <RichTextEditor initialContent={content} />
        </Suspense>
      ) : (
        <article dangerouslySetInnerHTML={{ __html: content }} />
      )}
      <button onClick={() => setIsEditing((editing) => !editing)}>
        {isEditing ? 'Save' : 'Edit'}
      </button>
    </div>
  );
}
```

This pattern ensures the rich text editor's JavaScript is only downloaded when the user actually needs to edit content, keeping the initial page load fast and responsive.
The Critical Insight: Lazy Loading vs. Hydration
Here’s where many developers encounter a subtle but crucial realization: lazy loading alone doesn’t prevent hydration overhead. When a lazily imported component exists in the initial component tree, the framework must still process it during hydration to understand its structure and dependencies—even if the component’s JavaScript hasn’t been executed yet.
Consider this common misconception:
```jsx
// Misconception: because it sits in the initial tree, this component's
// JavaScript is still requested during initial render and hydration
const FooterWithAnalytics = React.lazy(() => import('./FooterWithAnalytics'));

function Page() {
  return (
    <div>
      <Header />
      <MainContent />
      {/* This will still load during initial hydration */}
      <Suspense fallback={null}>
        <FooterWithAnalytics />
      </Suspense>
    </div>
  );
}
```

The solution is to completely remove non-critical components from the initial render tree until they're needed:
}The solution is to completely remove non-critical components from the initial render tree until they’re needed:
```jsx
const AnalyticsDashboard = React.lazy(() => import('./AnalyticsDashboard'));

function Page() {
  const [showAnalytics, setShowAnalytics] = useState(false);

  // Only include the heavy component after user interaction
  return (
    <div>
      <Header />
      <MainContent />
      <button onClick={() => setShowAnalytics(true)}>
        View Analytics
      </button>
      {showAnalytics && (
        <Suspense fallback={<LoadingSpinner />}>
          <AnalyticsDashboard />
        </Suspense>
      )}
    </div>
  );
}
```

This pattern ensures the analytics dashboard's JavaScript is only downloaded after the user explicitly requests it, creating a true separation between initial load and subsequent interactions.
Complementary Optimization Techniques
While code splitting and lazy loading form the foundation, several complementary techniques further reduce JavaScript payloads:
Tree Shaking
Tree shaking eliminates “dead code”—functions and modules that are imported but never actually used. This process works best with ES modules (ESM) because their static structure allows bundlers to analyze import/export relationships without executing code:
```js
// Before tree shaking
import { add, subtract, multiply, divide } from 'math-utils';

console.log(add(1, 2)); // Only 'add' is used

// After tree shaking, only the 'add' function remains in the final bundle
```

To maximize tree shaking effectiveness:
- Use ES module syntax (`import`/`export`) rather than CommonJS (`require()`/`module.exports`)
- Configure the `sideEffects` field in your `package.json` to help bundlers identify modules without side effects
- Avoid patterns that prevent static analysis, like dynamic imports within conditionals
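For example, a package can declare that its modules are safe to prune via the `sideEffects` field that Webpack reads from `package.json`. This is a hypothetical package; the array form exempts files (like stylesheets) that must never be dropped even when nothing imports a binding from them:

```json
{
  "name": "my-ui-library",
  "sideEffects": ["*.css"]
}
```

With this declaration, any module whose exports go unused can be removed from the bundle entirely, while the listed CSS files are always kept.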
Vendor Splitting
Vendor splitting separates third-party libraries from your application code. This strategy leverages browser caching more effectively:
```js
// Webpack configuration example (webpack.config.js)
module.exports = {
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          chunks: 'all'
        }
      }
    }
  }
};
```

By isolating stable dependencies (like React, Lodash, or Axios) into separate chunks, browsers can cache these files longer since they change less frequently than your application code. When you update your application, users only need to download the changed app code, not the entire vendor bundle again.
Minification and Compression
While not directly related to code splitting, minification and compression are essential final steps in reducing payload sizes:
- Minification removes whitespace, comments, and shortens variable names using tools like Terser
- Compression (Gzip or Brotli) reduces file sizes during transmission, with Brotli typically achieving 15-20% better compression than Gzip
These techniques work together to ensure the smallest possible file sizes reach your users’ devices.
The Performance Impact: Real Numbers
The combined effect of these techniques produces measurable performance improvements:
- Initial JavaScript Payload Reduction: Applications typically see 40-70% reductions in initial JavaScript payload after implementing code splitting and tree shaking
- Time to Interactive (TTI) Improvements: Case studies show 30-50% faster TTI on mid-range mobile devices
- Main Thread Utilization: Reducing JavaScript execution time frees the main thread to respond to user input more quickly, improving Interaction to Next Paint (INP) scores
A real-world example comes from a major e-commerce platform that implemented route-based code splitting and component-level lazy loading. Their results were dramatic:
- Initial JavaScript bundle size reduced from 1.8MB to 450KB
- TTI improved by 65% on mid-range Android devices
- Conversion rates increased by 12% due to better perceived performance
Setting the Stage for Advanced Hydration
These foundational techniques create the essential groundwork for more sophisticated hydration strategies. By mastering code splitting and lazy loading first, developers can then effectively implement:
- Islands Architecture: With code splitting already in place, marking specific components as interactive islands becomes a natural extension
- Selective Hydration: Understanding when and how JavaScript loads makes it easier to implement prioritized hydration
- Resumable Frameworks: The mental model of code splitting aligns perfectly with resumable frameworks’ on-demand execution patterns
The progression from traditional SPAs to these advanced architectures isn’t a single leap but a series of deliberate steps, with code splitting and lazy loading forming the critical foundation. As you implement these techniques, you’re not just optimizing bundle sizes—you’re fundamentally changing how your application delivers value to users by prioritizing what matters most at each moment of their journey.
When done correctly, code splitting and lazy loading transform the user experience from waiting for an entire application to load before doing anything to immediately interacting with core content while secondary features load progressively in the background. This shift in loading patterns creates applications that feel faster, more responsive, and more respectful of users’ time and data constraints.
The Art of Analysis: Tools and Metrics for JavaScript Bundle Optimization
Before implementing any optimization strategy, we must first understand what we’re optimizing. Modern web development demands a data-driven approach that moves beyond guesswork and intuition. The right tools transform abstract performance concepts into concrete, actionable insights—revealing not just how large our bundles are, but how they actually impact users’ experiences.
Why Bundle Analysis Matters Beyond Simple Size Metrics
A common misconception in web performance is that reducing the final gzipped bundle size is the ultimate goal. While important, this metric tells only part of the story. The true cost of JavaScript manifests in three critical phases:
- Download time – affected by network conditions and compressed size
- Parse/compile time – dominated by device capabilities and raw code complexity
- Execution time – determined by algorithmic efficiency and main thread contention
A 100KB bundle of complex framework code might take longer to process than a 200KB bundle of simple utility functions. This is why sophisticated analysis tools focus not just on size but on the actual runtime cost of each module. As Nolan Lawson, a performance expert at Microsoft, notes: “The cost of JavaScript is not in the bytes over the wire, but in the time it takes to parse, compile, and execute on the user’s device.”
Visualization Tools: Seeing What’s Inside Your Bundle
Understanding bundle composition begins with visualization. These tools transform abstract file sizes into intuitive visual representations that highlight problem areas at a glance.
Webpack Bundle Analyzer stands as the industry standard for Webpack-based projects. This plugin generates an interactive treemap where each rectangle represents a module, sized proportionally to its contribution to the bundle. Larger squares immediately draw attention to problematic dependencies. The tool displays multiple size metrics:
- Parsed size: The minified code size before compression
- Gzipped size: The more realistic metric reflecting actual network transfer
- Stat size: The raw source file size before any processing
The true power lies in drilling down through dependency trees. For instance, discovering that lodash contributes 70KB to your bundle might prompt you to switch to cherry-picking individual functions (import { debounce } from 'lodash-es') or using native alternatives.
For developers using alternative build tools, Source Map Explorer provides similar treemap visualizations by analyzing source maps alongside bundle files. This is particularly valuable for tracing minified code back to its original source—crucial when optimizing large codebases where file names lose meaning after minification.
Nuxt.js developers benefit from the built-in nuxi analyze command, which automatically generates visualizations using vite-bundle-visualizer. This seamless integration removes friction from the analysis process, encouraging regular bundle audits as part of the development workflow.
Prescriptive Analyzers: Moving Beyond Visualization to Action
While visualization tools excel at showing what’s in your bundle, the next generation of analyzers goes further by providing specific optimization recommendations.
MardinJS represents this evolution. Created after its developer encountered a 3.2MB initial bundle in a real project, MardinJS doesn’t just show you large dependencies—it tells you exactly how to fix them. For example:
- It identifies cases where you've imported all of `lodash` but only use three functions, suggesting the ES modules version (`lodash-es`) for better tree-shaking
- It detects when you've included `moment.js` with all locales but only need one, recommending locale-specific imports
- It highlights components that would benefit from code splitting based on their import patterns
This prescriptive approach bridges the gap between identifying a problem and implementing a solution—particularly valuable for teams without dedicated performance specialists.
Similarly, Expo Atlas provides real-time analysis for React Native applications, allowing developers to hold down modifier keys to inspect how specific transformations affect bundle size. This immediate feedback loop makes optimization an integrated part of development rather than a separate, dreaded task.
Dependency Cost Analysis: Prevention Over Cure
The most effective bundle optimization strategy is preventing bloat before it enters your codebase. Bundlephobia serves as an essential gatekeeper in this process, analyzing any npm package to report:
- Minified size
- Gzipped size (the metric that actually matters for users)
- Number of dependencies it will add to your project
Before installing a new package, checking its Bundlephobia profile can prevent costly mistakes. For example, discovering that date-fns offers similar functionality to moment.js at a fraction of the size (a few kilobytes for tree-shaken imports versus tens of kilobytes for all of moment.js) could save significant performance overhead.
This tool transforms dependency decisions from feature-focused to cost-aware. When choosing between libraries, teams can now objectively compare not just functionality but performance impact. A calendar component that adds 50KB to your bundle might be worth reconsidering if users rarely interact with that feature.
Runtime Performance Profiling: The Ground Truth
No amount of bundle analysis substitutes for measuring actual execution performance. This is where browser developer tools become indispensable.
Chrome DevTools Performance tab provides the most accurate measurement of JavaScript’s runtime cost. To get meaningful results:
- Clear browser cache and disable extensions
- Throttle CPU to 4x–6x slowdown (simulating mid-tier mobile devices)
- Throttle network to Fast 3G speeds
- Start recording from about:blank to avoid caching effects
- Interact with your application as a real user would
The resulting flame chart reveals which functions consume the most execution time. Look particularly for “long tasks”—operations monopolizing the main thread for 50ms or more. These directly impact Interaction to Next Paint (INP) scores and cause perceptible jank.
For targeted measurements, the User Timing API allows marking specific operations:
```javascript
performance.mark('start-render');
// ... rendering code ...
performance.measure('render-duration', 'start-render');
```

These custom metrics appear directly in DevTools performance recordings, making it possible to track the cost of specific operations like route transitions or complex component renders.
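The recorded measures can also be read back programmatically, for example to forward them to an analytics endpoint. This sketch runs in modern browsers and in Node 18+, where `performance` is a global:

```javascript
performance.mark('start-render');
// Simulate work whose cost we want to attribute
for (let i = 0; i < 1e6; i++);
performance.mark('end-render');
performance.measure('render-duration', 'start-render', 'end-render');

// Read the measurement back out of the performance timeline
const [entry] = performance.getEntriesByName('render-duration');
console.log(`render took ${entry.duration.toFixed(2)}ms`);
```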
Holistic Performance Auditing: Connecting Technical Metrics to User Experience
While DevTools provides low-level insights, services like Lighthouse and PageSpeed Insights translate technical metrics into user experience scores. These tools evaluate performance in context of real-world thresholds:
- Good (green): under 2.5s for Largest Contentful Paint (LCP)
- Needs Improvement (orange): between 2.5s and 4.0s
- Poor (red): over 4.0s
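Encoded as a tiny helper (illustrative only, using the LCP thresholds above):

```javascript
// Classify an LCP value (in seconds) against the Core Web Vitals thresholds
function rateLCP(seconds) {
  if (seconds <= 2.5) return 'good';
  if (seconds <= 4.0) return 'needs improvement';
  return 'poor';
}

console.log(rateLCP(1.8)); // good
console.log(rateLCP(3.2)); // needs improvement
console.log(rateLCP(5.0)); // poor
```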
More importantly, they connect JavaScript performance to specific user-centric metrics:
- Time to Interactive (TTI): When the page becomes fully responsive
- Total Blocking Time (TBT): How long the main thread is blocked
- Interaction to Next Paint (INP): How quickly the page responds to interactions
For teams needing ongoing monitoring, DebugBear offers continuous performance tracking. It automatically runs audits on every deployment, creating performance trend lines and alerting when metrics regress beyond defined thresholds. This shifts performance from a one-time optimization task to an ongoing quality metric.
Creating a Sustainable Analysis Workflow
The most effective teams integrate bundle analysis into their development lifecycle:
- Pre-commit hooks that run lightweight bundle size checks
- Pull request requirements that include bundle change reports
- Weekly deep dives using visualization tools to identify emerging bloat
- Monthly user experience reviews correlating bundle changes with field performance data
This systematic approach prevents the common pattern of “bundle creep”—where small additions accumulate unnoticed until performance catastrophically declines.
The key insight from modern performance engineering is that optimization isn’t about achieving the smallest possible bundle, but about aligning resource delivery with user needs. A charting library that adds 100KB might be perfectly justified if users interact with charts on every page—but problematic if charts appear on only 5% of visits. Tools help us make these nuanced trade-offs with data rather than guesswork.
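The trade-off in that example can be made explicit with a simple cost-per-use figure (illustrative arithmetic, not a formal metric):

```javascript
// Bytes shipped per visit that actually uses the feature
function costPerUsingVisit(bundleBytes, usageRate) {
  return bundleBytes / usageRate;
}

// 100KB chart library used on every page: 100KB per using visit
console.log(costPerUsingVisit(100 * 1024, 1.0));   // 102400
// The same library when only 5% of visits touch a chart: effectively 2MB per use
console.log(costPerUsingVisit(100 * 1024, 0.05)); // 2048000
```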
By mastering this analysis ecosystem, developers transform from passive bundle acceptors to active performance architects. The visualization tools reveal structure, prescriptive analyzers provide direction, dependency checkers prevent problems, runtime profilers expose true costs, and holistic auditors connect technical decisions to user outcomes. Together, they create a comprehensive framework for building applications that respect users’ time, data plans, and devices—regardless of where they fall on the spectrum of hydration strategies.
Actionable Pathways: Implementing Progressive, Selective, and Resumable Hydration
Having established the foundational knowledge of bundle analysis and explored the spectrum of hydration strategies, we now turn to practical implementation. Moving beyond theory to execution requires understanding not just the “what” but the precise “how” of advanced hydration techniques. These implementations represent a fundamental shift in architectural thinking—from monolithic applications to intelligent, component-focused designs that respect users’ devices and networks.
Progressive Hydration: Implementing Temporal Control
Progressive hydration introduces precise timing control over when components become interactive. In Vue 3, this can be implemented through a custom wrapper component that conditionally hydrates content based on specific triggers:
```vue
<template>
  <div ref="container">
    <!-- Dynamically imported component takes over once hydrated -->
    <component :is="component" v-if="hydrated && component" />
    <!-- Until then, the server-rendered slot content stays static -->
    <slot v-else />
  </div>
</template>

<script setup>
import { ref, onMounted, onUnmounted } from 'vue';

const props = defineProps({
  when: {
    type: String,
    default: 'idle', // 'visible', 'idle', or 'interaction'
  }
});

const hydrated = ref(false);
const component = ref(null);
const container = ref(null);
let observer = null;

const hydrate = async () => {
  if (hydrated.value) return;
  // Load the actual component dynamically
  component.value = (await import('./ActualComponent.vue')).default;
  hydrated.value = true;
};

onMounted(() => {
  // onMounted only runs on the client, so no SSR guard is needed here
  // When visible: use IntersectionObserver
  if (props.when === 'visible') {
    observer = new IntersectionObserver((entries) => {
      if (entries[0].isIntersecting) {
        hydrate();
        observer.disconnect();
      }
    }, { threshold: 0.1 });
    observer.observe(container.value);
  }
  // When idle: use requestIdleCallback
  else if (props.when === 'idle') {
    if ('requestIdleCallback' in window) {
      window.requestIdleCallback(hydrate);
    } else {
      setTimeout(hydrate, 3000); // Fallback for browsers without requestIdleCallback
    }
  }
  // On interaction: listen for events
  else if (props.when === 'interaction') {
    const handleInteraction = () => {
      hydrate();
      container.value.removeEventListener('click', handleInteraction);
      container.value.removeEventListener('touchstart', handleInteraction);
    };
    container.value.addEventListener('click', handleInteraction);
    container.value.addEventListener('touchstart', handleInteraction);
  }
});

onUnmounted(() => {
  if (observer) observer.disconnect();
});
</script>
```

This implementation provides a flexible foundation that can be used throughout an application:
```vue
<template>
  <div>
    <Header />
    <MainContent />
    <!-- This heavy chart only hydrates when visible in viewport -->
    <LazyHydrate when="visible">
      <AnalyticsChart />
    </LazyHydrate>
    <!-- This modal only hydrates on user interaction -->
    <LazyHydrate when="interaction">
      <SettingsModal />
    </LazyHydrate>
    <Footer />
  </div>
</template>
```

Nuxt.js provides built-in support for this pattern through the NuxtIsland component and lazy-hydration modules. The NuxtIsland component can defer its payload via the lazy attribute, while community modules provide directives like hydrate:visible and hydrate:interaction. This eliminates the need for a custom implementation while delivering comparable performance benefits.
The performance impact of this approach is measurable: by deferring non-critical component hydration, main-thread contention can drop by 40-60%, significantly improving Time to Interactive (TTI) and Interaction to Next Paint (INP) scores on mid-range mobile devices.
Selective Hydration: React’s Advanced Implementation
React 18’s selective hydration represents a paradigm shift by leveraging streaming server-side rendering with Suspense boundaries. The implementation requires careful component tree organization:
```jsx
import { Suspense, lazy } from 'react';

// Lazy load components that will be selectively hydrated
const ProductGallery = lazy(() => import('./ProductGallery'));
const ReviewsSection = lazy(() => import('./ReviewsSection'));
const RecommendationEngine = lazy(() => import('./RecommendationEngine'));

export default function ProductPage({ product }) {
  return (
    <div className="product-page">
      <ProductHeader product={product} />
      {/* Above-the-fold content loads and hydrates first */}
      <ProductDetails product={product} />
      {/* Below-the-fold content wrapped in Suspense boundaries */}
      <Suspense fallback={<GallerySkeleton />}>
        <ProductGallery productId={product.id} />
      </Suspense>
      <Suspense fallback={<ReviewsSkeleton />}>
        <ReviewsSection productId={product.id} />
      </Suspense>
      {/* Lowest priority content - may not even be seen by users */}
      <Suspense fallback={<RecommendationsSkeleton />}>
        <RecommendationEngine productId={product.id} />
      </Suspense>
    </div>
  );
}
```

The true power emerges when combined with React’s streaming capabilities on the server:
```jsx
// pages/products/[id].js (Next.js example)
// NOTE: in the pages router, getServerSideProps props must be JSON-serializable;
// streaming unresolved promises into Suspense boundaries like this requires
// React 18 streaming SSR (or the App Router, which supports it natively).
export async function getServerSideProps({ params }) {
  const product = await getProduct(params.id);
  // Start streaming immediately with critical data
  return {
    props: {
      product,
      // Defer fetching for non-critical sections
      galleryData: fetchGalleryData(product.id),
      reviewsData: fetchReviewsData(product.id),
      recommendationsData: fetchRecommendationsData(product.id)
    }
  };
}
```

Wix’s implementation of this technique demonstrates its real-world impact at scale. By combining Suspense boundaries with IntersectionObserver, they achieved:
- 20% reduction in JavaScript payloads
- 40% improvement in INP scores
- 35% faster perceived load times
The key insight is that selective hydration transforms hydration from a monolithic, blocking operation into a parallel, prioritized process. Components become interactive as soon as their data and JavaScript are available, without waiting for the entire application to load.
Resumable Hydration: The Qwik Approach
Resumable hydration, pioneered by frameworks like Qwik, represents the most radical departure from traditional hydration models. Instead of re-executing the component tree on the client, Qwik serializes application state directly into the HTML:
```jsx
// Qwik component example
export const ProductCard = component$(({ product }) => {
  // State is serialized to HTML attributes
  const isFavorite = useSignal(false);

  // Event handlers are serialized with their locations
  const toggleFavorite = $(() => {
    isFavorite.value = !isFavorite.value;
  });

  return (
    <div class="product-card">
      <img src={product.image} alt={product.name} />
      <h3>{product.name}</h3>
      <p>${product.price}</p>
      {/* Event handler reference is serialized to HTML */}
      <button onClick$={toggleFavorite}>
        {isFavorite.value ? '❤️' : '🖤'}
      </button>
    </div>
  );
});
```

The resulting HTML contains serialized state and event handler locations:
```html
<div q:container="product-card" q:symbol="ProductCard_component">
  <img src="product.jpg" alt="Premium Headphones">
  <h3>Premium Headphones</h3>
  <button on:click="./chunk-abc.js#toggleFavorite">🖤</button>
  <!-- Serialized application state -->
  <script type="qwik/state">
    {"isFavorite":{"value":false}}
  </script>
</div>
```

On the client, a tiny loader script (under 1KB) parses this serialized data and attaches global event listeners:
```javascript
// Qwik client-side loader (simplified)
document.addEventListener('click', (event) => {
  const target = event.target.closest('[on\\:click]');
  if (target) {
    const handlerRef = target.getAttribute('on:click');
    if (handlerRef && !window.qwik.handlers.has(handlerRef)) {
      // Only download the specific JavaScript needed for this interaction
      import(`./${handlerRef}`).then(module => {
        module.default(event);
        window.qwik.handlers.set(handlerRef, module.default);
      });
    } else if (handlerRef) {
      // Handler already downloaded: invoke the cached reference
      window.qwik.handlers.get(handlerRef)(event);
    }
  }
});
```

This approach delivers revolutionary performance characteristics:
- Near-zero Time to Interactive (TTI) even on 3G connections
- Initial JavaScript payloads reduced by 90% compared to traditional SPAs
- Perfect INP scores because interactions don’t wait for JavaScript execution
The trade-off is a different mental model for development. Code must be serializable, and side effects must be carefully managed. However, for content-heavy sites where performance is critical, this paradigm shift delivers dramatic user experience improvements.
Implementation Strategy by Framework
Each framework ecosystem offers specific pathways to implement these advanced hydration strategies:
Next.js/React Implementation
- Start with code splitting using next/dynamic
- Implement route-based code splitting automatically
- Add Suspense boundaries around data-heavy components
- Use 'use client' directives to isolate interactive components
- For bleeding-edge performance, consider the experimental React Server Components
```jsx
// Next.js with React Server Components pattern
import { Suspense } from 'react';
import ProductDetails from './ProductDetails.server';
import ProductGallery from './ProductGallery.client';

export default function ProductPage({ params }) {
  return (
    <div>
      {/* Server component - no JavaScript sent to client */}
      <ProductDetails productId={params.id} />
      {/* Client component - only hydrated when needed */}
      <Suspense fallback={<GallerySkeleton />}>
        <ProductGallery productId={params.id} />
      </Suspense>
    </div>
  );
}
```

Nuxt/Vue Implementation
- Use file-based routing with the .client.vue suffix for client-only components
- Implement the NuxtIsland component for partial hydration
- Add the lazy attribute to defer an island’s payload
- Use composables like useHydration for fine-grained control
```vue
<template>
  <div>
    <ProductDetails :product="product" />
    <!-- This island's payload is deferred rather than blocking the page -->
    <NuxtIsland
      name="ProductGallery"
      :props="{ productId: product.id }"
      lazy
    />
  </div>
</template>
```

Astro Implementation
- Default to zero JavaScript with static HTML rendering
- Add interactivity only where needed using client directives
- Choose the appropriate hydration strategy per component:
  - client:load – hydrate immediately
  - client:idle – hydrate after main content loads
  - client:visible – hydrate when the component enters the viewport
  - client:media – hydrate based on a CSS media query
  - client:only – skip server rendering; client-only component
```astro
---
// client: directives only work on framework components (React, Vue, Svelte, etc.);
// .astro components cannot be hydrated, so the gallery here is a framework component.
import ProductDetails from '../components/ProductDetails.astro';
import ProductGallery from '../components/ProductGallery.jsx';
---
<ProductDetails product={product} />
<!-- Only hydrates when visible in viewport -->
<ProductGallery client:visible productId={product.id} />
```

Qwik Implementation
- Embrace the resumable paradigm from the start
- Use the $ suffix for serializable functions and signals
- Leverage Qwik’s built-in lazy loading and prefetching
- Structure components around user interaction paths rather than page layout
```jsx
import { component$, useSignal, $ } from '@builder.io/qwik';

export const SearchBar = component$(() => {
  const query = useSignal('');

  // This function will be serialized and loaded only when needed
  const handleSearch = $(async (value) => {
    const results = await fetch(`/api/search?q=${value}`);
    return results.json();
  });

  return (
    <div class="search-container">
      <input
        type="text"
        bind:value={query}
        onInput$={(ev) => handleSearch(ev.target.value)}
      />
      {/* SearchResults is an illustrative sibling component, not shown here */}
      <SearchResults query={query.value} />
    </div>
  );
});
```

Choosing Your Implementation Path
Selecting the appropriate implementation strategy requires careful consideration of several factors:
Application Type
- Content-heavy sites (blogs, documentation, marketing): Prioritize Islands Architecture or resumable frameworks
- Data-intensive applications (dashboards, admin panels): Focus on selective hydration with Suspense
- E-commerce sites: Implement progressive hydration with priority tiers for above/below-the-fold content
Team Expertise
- Existing React teams: Start with selective hydration before exploring React Server Components
- Vue/Nuxt teams: Leverage built-in NuxtIsland components and lazy hydration
- New projects with performance focus: Consider Qwik or Astro for their performance-by-default approach
Performance Requirements
- Sites where INP is critical (forms, interactive tools): Prioritize selective hydration
- Sites where FCP/LCP are critical (content sites): Focus on Islands Architecture
- Sites with global audiences on slow networks: Consider resumable frameworks
The most effective implementations often combine multiple approaches. For example, a content site might:
- Use Islands Architecture as the foundation
- Apply lazy hydration to non-critical interactive elements
- Implement selective hydration for the main navigation
- Use resumable patterns for the most critical user flows
This layered strategy ensures optimal performance across diverse user scenarios without compromising functionality. The ultimate goal isn’t to adopt a single technique but to build a performance architecture that dynamically adapts to user needs, device capabilities, and network conditions—delivering the fastest possible experience while maintaining rich interactivity where it matters most.
Strategic Synthesis: Building a Performance-First Development Philosophy
Having explored the technical depths of hydration strategies, bundle optimization, and implementation pathways, we arrive at a crucial synthesis: modern web performance isn’t merely a collection of techniques but a fundamental philosophy that must permeate every layer of development. This holistic approach transforms performance from an afterthought into the primary lens through which we design, build, and measure applications. The most successful teams don’t just implement these techniques—they embed performance thinking into their development culture, architectural decisions, and success metrics.
The Diagnostic Foundation: Eliminating Hydration Errors First
Before implementing sophisticated hydration strategies, teams must establish an error-free foundation. This seemingly basic step is frequently overlooked in the rush to adopt advanced techniques, yet it remains the bedrock of reliable performance. Modern frameworks provide increasingly sophisticated tooling to identify and resolve hydration mismatches:
- React 18’s improved hydration error messages and component stack traces pinpoint exact locations of server-client mismatches
- Environment-aware development workflows that simulate server conditions during local development
- Automated visual regression testing that compares server and client render outputs
The most effective teams implement hydration error prevention as a mandatory code review requirement. Any pull request introducing hydration warnings is rejected until resolved. This discipline ensures that performance optimizations build upon a stable foundation rather than masking deeper architectural issues.
Practical implementation involves refactoring components to be environment-agnostic—deferring browser-specific logic to useEffect hooks, initializing state with stable values that match between server and client, and using suppressHydrationWarning only for intentional, harmless mismatches like displaying current time. This foundational work prevents the compounding complexity that occurs when trying to optimize applications with unresolved hydration errors.
The Architecture of Isolation: Islands as Default
The most significant paradigm shift in modern web architecture is treating interactivity as exceptional rather than default. Content-heavy sites (blogs, documentation, marketing pages) should begin with an Islands Architecture mindset, where most of the page remains static HTML while only specific interactive elements receive JavaScript.
This approach fundamentally reverses traditional development thinking. Instead of asking “How do I make this page interactive?” teams should ask “Which specific elements truly need interactivity, and when?” This mental model shift leads to dramatically smaller initial JavaScript payloads and faster perceived load times.
Implementation strategies include:
- Progressive enhancement as standard practice: Build functional static HTML first, then layer interactivity only where user research shows it’s needed
- Component taxonomy: Categorize components by priority and interaction needs (critical vs. non-critical, immediate vs. deferred)
- Strategic hydration boundaries: Place Suspense boundaries and component splits at natural interaction boundaries rather than component hierarchy boundaries
For existing applications, this transition can be gradual. Start by identifying the 20% of components responsible for 80% of JavaScript execution time and apply partial hydration to non-critical sections first. Tools like Chrome DevTools’ Coverage tab can identify unused JavaScript during typical user flows, highlighting opportunities for isolation.
Beyond Bundle Size: The Reality of Runtime Performance
The industry’s obsession with bundle size metrics has created a dangerous blind spot. Raw JavaScript size is merely a proxy for the actual performance impact—parse, compile, and execution time on real devices. A 50KB bundle of complex framework code might take longer to process than a 100KB bundle of optimized utility functions.
Forward-thinking teams measure performance through the lens of actual user devices:
- CPU throttling to 4x-6x slowdown during development to simulate mid-tier mobile devices
- Interaction-focused metrics like Interaction to Next Paint (INP) rather than purely load-based metrics
- Field data collection through Real User Monitoring (RUM) tools that capture performance on actual user devices
The critical insight is that JavaScript execution time grows non-linearly with bundle size on low-end devices. A bundle that takes 200ms to parse on a developer’s machine might take 2-3 seconds on a budget Android device. This reality demands that performance budgets be defined in terms of execution time rather than file size.
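That gap is why budgets should be expressed in on-device milliseconds. A rough, illustrative conversion from a dev-machine profile (the multiplier is an assumption and should be calibrated against field data from RUM):

```javascript
// Estimate on-device script cost from a dev-machine measurement.
// slowdownFactor mirrors the 4x-6x CPU throttling used during profiling.
function estimateOnDeviceMs(devMachineMs, slowdownFactor = 6) {
  return devMachineMs * slowdownFactor;
}

// A 200ms parse/execute cost on a fast laptop already lands over a 1s budget
// once scaled to a throttled mid-tier device:
console.log(estimateOnDeviceMs(200, 6)); // 1200
```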
The Future Paradigm: Resumability as Default
Looking beyond current techniques, the industry is clearly trending toward resumable frameworks that eliminate traditional hydration entirely. Qwik’s approach—serializing application state directly into HTML and resuming execution on-demand—represents not just an optimization but a fundamental rethinking of web architecture.
This paradigm shift offers revolutionary benefits:
- Near-instant interactivity regardless of network conditions
- Automatic code splitting aligned perfectly with user interaction paths
- Predictable performance characteristics that don’t degrade as applications grow
While adoption requires overcoming mental model shifts and ecosystem limitations, the performance benefits are undeniable. For content-focused sites where initial load performance directly impacts business metrics (conversion rates, engagement time, bounce rates), resumable frameworks represent the logical endpoint of web performance evolution.
Building Your Performance Strategy: A Decision Framework
Choosing the right approach requires matching techniques to specific application contexts:
Content-Heavy Sites (Blogs, Documentation, Marketing)
- Primary strategy: Islands Architecture with Astro or similar framework
- Secondary: Progressive hydration for interactive elements
- Future path: Evaluate Qwik for new projects
Data-Intensive Applications (Dashboards, Admin Panels)
- Primary strategy: Selective hydration with React 18+ Suspense
- Secondary: Strategic code splitting by user workflow
- Future path: React Server Components adoption
E-commerce & Conversion-Critical Applications
- Primary strategy: Hybrid approach combining Islands for marketing content with selective hydration for interactive elements
- Secondary: Critical CSS inlining and font optimization
- Future path: Experiment with partial Qwik implementation for highest-value pages
The most effective teams implement a performance review process at three stages:
- Design phase: Evaluate interaction requirements against performance budgets
- Development phase: Continuous bundle analysis and runtime profiling
- Post-release phase: Field data monitoring and optimization iteration
This continuous feedback loop ensures performance remains central throughout the application lifecycle rather than becoming a pre-launch panic.
The Human Element: Performance as Competitive Advantage
Beyond technical metrics, exceptional performance delivers measurable business outcomes. Studies consistently show that:
- Every 100ms improvement in interaction latency increases conversion rates by 1-2%
- Sites with INP scores under 100ms see 30-50% lower bounce rates on mobile devices
- Performance directly impacts brand perception, with slow sites perceived as less trustworthy
The most successful organizations recognize this connection and make performance a shared responsibility across design, product, and engineering teams. They establish performance budgets tied to business metrics and celebrate optimizations that directly impact user outcomes.
This cultural shift transforms performance from an engineering constraint into a strategic advantage. Companies like Vercel, Wix, and Shopify have made performance a core product differentiator, investing in infrastructure and tooling that makes high performance the default outcome rather than the exception.
Conclusion: The Path Forward
The journey through hydration strategies and JavaScript optimization reveals a profound truth: the future of web development belongs to applications that respect users’ time, devices, and data constraints. This isn’t merely about technical optimization—it’s about building digital experiences that work for everyone, regardless of their device capabilities or network conditions.
The techniques explored in this article—from foundational bundle analysis to revolutionary resumable architectures—form a spectrum of approaches that can transform any application’s performance profile. The most effective strategy isn’t choosing a single technique but implementing a layered approach that:
- Eliminates hydration errors as a foundational requirement
- Applies Islands Architecture as the default mental model
- Implements selective hydration for critical interactive elements
- Measures performance through the lens of real user devices
- Plans strategic migration toward resumable paradigms
This progression isn’t linear but contextual—each application finds its optimal balance based on user needs, business requirements, and technical constraints. The constant is a commitment to performance as a core product value rather than a technical checkbox.
As web applications continue to grow in complexity and capability, the techniques that separate exceptional experiences from frustrating ones will increasingly revolve around intelligent resource delivery and execution. The developers and teams who master this discipline won’t just build faster applications—they’ll create digital experiences that feel instantaneous, responsive, and respectful of users’ most precious resource: their attention.
The ultimate goal isn’t optimizing for Lighthouse scores or bundle size metrics, but for human experiences. When a user can immediately interact with content regardless of their device or network conditions, when forms respond instantly to input, when navigation feels seamless—these are the moments where technical optimization transforms into human delight. This is the true north of modern web development: building applications that don’t just function, but feel alive.
In this pursuit, hydration strategies and JavaScript optimization are not merely technical concerns—they’re the bridge between what we build and how it’s experienced. By mastering this discipline, we build not just for browsers, but for people.
References
- The Ultimate Guide to Hydration and Hydration Errors in Next.js — Medium
- DevTools causes hydration errors on page refresh in Next.js — GitHub
- How to solve react hydration error in Next.js — Stack Overflow
- How to Fix Hydration Errors in server-rendered components in Next.js — GeeksforGeeks
- Fix Next.js “Text content does not match server-rendered” error — Netlify Blog
- React Hydration Error in Next.js: Causes and How to Fix — Omi.me
- Debugging and fixing hydration issues — J. Beneš
- Analyzing Your Webpack Bundle Like a Pro — J. Beneš
- Optimizing Web Performance with Dynamic Imports and Bundle Analysis in Next.js — Leapcell.io
- 8 Ways to Optimize Your JavaScript Bundle Size — Codecov
- Any useful Next.js library to improve performance, bundle size? — Reddit
- 6 Tools and Techniques to Analyze Webpack Bundle Size — Bitsrc.io
- Analyzing JavaScript bundles with Expo Atlas and Source Map Explorer — Expo Docs
- JavaScript Bundle Size Analyzer With Fix — Plain English
- JavaScript performance beyond bundle size — Lawson, N.
- Strategies for Crafting High-Performance Web Apps with smaller bundle sizes — CodeBeast
- Improve React App Performance With Webpack Bundle Analyzer — DebugBear
- Hydration, selective hydration, progressive hydration explained — Vishwark
- Lazy Hydration and Server Components in Nuxt — Vue School
- Partial Hydration in VitePress — WebDevExplorer
- Island Architecture: Revolutionizing Front-End Web Development — Mirajkar, N.
- awesome-islands repository — GitHub
- Achieving lazy hydration in Vue 3 from scratch — LogRocket
- Ways of Using Vue — Vue.js Documentation
- Exploring Astro and Svelte vs. SvelteKit — LogRocket
- Astro vs SvelteKit — Caisy.io
- Hydration, the Saboteur of Lazy Loading — Builder.io
- How does hydration work in Angular, Astro and React — Medium
- Nuxt Performance Best Practices v3 — Nuxt Docs
- Hydration and Lazy-Loading Are Incompatible — InfoQ
- Hydration in Web Development — Dev.to
- The Silent Architect of Web Performance — Bhat, R.
- Selective Hydration — Patterns.dev
- Staying Hydrated with JavaScript — LetUsDev
- Fixing your website’s JavaScript performance — MDN Web Docs
- 40% Faster Interaction: How Wix Solved React’s Hydration Problem — Wix Engineering
