The brief was ambitious: a 3D virtual shopping mall where users could walk between store fronts, click on products, and proceed to checkout without leaving the WebGL canvas. The tech choice was React Three Fiber (R3F), the React renderer for Three.js. I'd used it in smaller experiments; this was the first time I'd taken it to production at scale.
Why React Three Fiber Over Raw Three.js
The honest answer: the component model. When you're building an environment with 40+ interactive store fronts, each with hover states, click handlers, and product data fetched from an API, the imperative Three.js approach quickly becomes unmaintainable. R3F lets you describe a 3D scene the way you'd describe a React UI: declaratively, with props and state. A store front becomes a component; its hover state is just a useState hook.
- R3F's useFrame hook replaces requestAnimationFrame with a clean, predictable pattern
- useGLTF from @react-three/drei makes model loading feel like an import statement
- Suspense boundaries work — wrap a model in <Suspense fallback={<LoadingPlaceholder />}> and you get lazy loading for free
- The ecosystem (drei, postprocessing, rapier) is mature enough for production in 2024
- react-three-fiber npm downloads exceed 1M/week — the community is active and support is fast
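To make the component model concrete, here is a minimal sketch of a store front as an R3F component. The model URL, the onSelect prop, and the LoadingPlaceholder component are hypothetical stand-ins, not the production code:

```jsx
import { Suspense, useState } from 'react'
import { useGLTF } from '@react-three/drei'

// Hypothetical store front: hover state is plain React state,
// and the GLTF model loads through drei's useGLTF hook.
function StoreFrontModel({ url, onSelect }) {
  const { scene } = useGLTF(url)
  const [hovered, setHovered] = useState(false)
  return (
    <primitive
      object={scene}
      scale={hovered ? 1.05 : 1}
      onPointerOver={() => setHovered(true)}
      onPointerOut={() => setHovered(false)}
      onClick={onSelect}
    />
  )
}

// Wrapping in Suspense gives lazy loading: the placeholder
// renders until the model resolves.
export function StoreFront(props) {
  return (
    <Suspense fallback={<LoadingPlaceholder />}>
      <StoreFrontModel {...props} />
    </Suspense>
  )
}
```

Hover, click, and loading state all live where they would in any React app, which is the whole argument for R3F over imperative Three.js here.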
The Performance Problems I Did Not Anticipate
The first demo ran at 12fps on a mid-range laptop. Every store front was a separate GLTF load. Every product had its own draw call. The scene graph had no concept of distance — objects 200 units away were rendered at the same detail as objects right in front of the camera.
- Draco compression on all GLTF models — reduced total scene payload from 48MB to 11MB
- Instanced meshes for repeated geometry (store frames, floor tiles, light fittings) — 40 draw calls became 4
- Level of Detail (LOD) via drei's <Detailed> component (a wrapper around THREE.LOD) — distant stores render at 30% of full polygon count
- Texture atlasing — combined 60 small textures into 4 atlases, eliminated hundreds of texture binds per frame
- Frustum culling verification — confirmed Three.js was actually culling off-screen objects (it was; the problem was overdraw, not culling)
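The LOD step boils down to picking a detail tier from camera distance. A plain-JavaScript sketch of that selection logic (tier names, cutoff distances, and polygon shares are illustrative, not the production values — in the scene, drei's LOD wrapper performs this per frame):

```javascript
// Pick a detail tier from camera distance. Tiers are sorted by
// ascending distance cutoff; the last tier catches everything beyond.
const LOD_TIERS = [
  { name: 'full',   maxDistance: 50,       polygonShare: 1.0 },
  { name: 'medium', maxDistance: 120,      polygonShare: 0.6 },
  { name: 'low',    maxDistance: Infinity, polygonShare: 0.3 },
]

function pickLodTier(distance, tiers = LOD_TIERS) {
  return tiers.find((tier) => distance <= tier.maxDistance)
}
```

The win is that the GPU never sees the full-polygon mesh for a store the camera is nowhere near.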
After optimisation: 58fps on the same mid-range laptop that had run at 12fps. Mobile (iPhone 13) achieved a stable 45fps.
What Worked Better Than Expected
The Suspense-based loading model. By wrapping each store section in its own Suspense boundary, users could enter the mall and start exploring while distant sections were still loading. The progressive reveal felt intentional rather than broken — and it meant time-to-first-interaction dropped from 8 seconds to under 2.
The other surprise: click-to-product conversion in the 3D environment outperformed the standard 2D product grid in A/B testing by a meaningful margin. Users spent longer in the 3D view and clicked more products. The engagement data was compelling enough that the client expanded the scope to include a second mall environment.
When Not to Use 3D
3D on the web is not universally better; it is contextually better. If your primary audience is on low-end Android devices over mobile data, a WebGL experience is likely to be worse than a well-designed 2D alternative. Test on your actual device distribution before committing to 3D. Battery drain and thermal throttling on mobile are real constraints that desktop performance benchmarks don't reveal.
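One way to act on that advice is a capability gate that runs before the canvas ever mounts. A plain-JavaScript sketch, with the thresholds chosen for illustration — the inputs mirror `navigator.deviceMemory` and the Network Information API's `effectiveType`, neither of which is available in every browser, hence the conservative defaults:

```javascript
// Decide whether to serve the 3D mall or the 2D product grid, given
// whatever capability signals the browser exposes. Unknown values fall
// back to conservative defaults so low-end devices never get the heavy path.
function shouldServe3D({ deviceMemoryGB = 2, effectiveType = '3g', webglSupported = false } = {}) {
  if (!webglSupported) return false
  if (deviceMemoryGB < 4) return false  // low-RAM devices thermal-throttle fast
  if (effectiveType === 'slow-2g' || effectiveType === '2g' || effectiveType === '3g') return false
  return true
}
```

In the browser you would feed it real signals, e.g. `shouldServe3D({ deviceMemoryGB: navigator.deviceMemory, effectiveType: navigator.connection?.effectiveType, webglSupported: !!document.createElement('canvas').getContext('webgl') })`, and render the 2D grid whenever it returns false.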