PS1 style rendering in Three.js
Ever wondered how to achieve that nostalgic style of 3D graphics on the web? I used to play the PS1 a lot as a kid. Recently I explored this rendering style in Blender and then, as an experiment, ported it over to the web using Three.js.
So what makes this look and feel so unique? Hardware limitations! :D
The PS1 is a pretty low-end console compared to today's monstrous gaming machines. Devs didn't have many options or much flexibility; they had to count polygons to make sure a scene didn't go over the limit, which was about 12k polygons per frame at 30 FPS. I mean, look at this seemingly low-poly model from Sketchfab: it actually has 8k triangles. Designing good-looking and, most importantly, animatable (because of geometry stretching) low-poly models is an art of its own.
Here's a 700-triangle character model, and it still looks very good, because good texturing is the key! Go enable the wireframe view in both examples and you'll see the difference in the number of polygons per character.
So yeah, low-poly is the very base of the PS1 style. Next up: textures, namely color depth, resolution, mapping and filtering.
Even though the PS1 supported up to 24-bit color depth, a more practical and commonly used value was 15-bit; that's 32,768 vs 16,777,216 colors! Typical texture resolution was 128×128, sometimes 256×256. Due to the reduced color depth, textures suffered from color banding, especially noticeable on gradients. Dithering was used to create an illusion of a wider color space, although some games look better without it. Here's “Silent Hill” with and without dithering. It was less noticeable back in the day on CRT TVs.
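To make those numbers concrete, here's a quick plain-JavaScript sketch of where the color counts come from and what posterizing a single channel down to 5 bits does (the helper name is mine, purely for illustration):

```javascript
// 24-bit color: 8 bits per channel; 15-bit color: 5 bits per channel.
console.log(2 ** 24); // 16777216
console.log(2 ** 15); // 32768

// Collapse a 0..255 channel down to `bits` bits; nearby values landing
// on the same level is exactly what produces visible banding.
function posterizeChannel(value, bits) {
  const levels = (1 << bits) - 1; // 31 steps for 5 bits
  return Math.round(Math.round((value / 255) * levels) * (255 / levels));
}

console.log(posterizeChannel(200, 5)); // 197 (201 maps here too)
```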
Code: dithering and posterization in fragment shader
// fragment shader
vec4 RGBtoYUV(vec4 rgba) {
  vec4 yuva;
  yuva.r = rgba.r * 0.2126 + 0.7152 * rgba.g + 0.0722 * rgba.b;
  yuva.g = (rgba.b - yuva.r) / 1.8556;
  yuva.b = (rgba.r - yuva.r) / 1.5748;
  yuva.a = rgba.a;
  yuva.gb += 0.5;
  return yuva;
}

vec4 YUVtoRGB(vec4 yuva) {
  yuva.gb -= 0.5;
  return vec4(
    yuva.r * 1.0 + yuva.g * 0.0 + yuva.b * 1.5748,
    yuva.r * 1.0 + yuva.g * -0.187324 + yuva.b * -0.468124,
    yuva.r * 1.0 + yuva.g * 1.8556 + yuva.b * 0.0,
    yuva.a);
}

// How far col sits between the two nearest quantized values;
// used as the brightness threshold fed into the dither.
float ditherChannelError(float col, float colMin, float colMax)
{
  float range = abs(colMin - colMax);
  float aRange = abs(col - colMin);
  return aRange / range;
}

// 8x8 Bayer matrix, stored row by row.
const float dither0[8] = float[8](0.0, 32.0, 8.0, 40.0, 2.0, 34.0, 10.0, 42.0);
const float dither1[8] = float[8](48.0, 16.0, 56.0, 24.0, 50.0, 18.0, 58.0, 26.0);
const float dither2[8] = float[8](12.0, 44.0, 4.0, 36.0, 14.0, 46.0, 6.0, 38.0);
const float dither3[8] = float[8](60.0, 28.0, 52.0, 20.0, 62.0, 30.0, 54.0, 22.0);
const float dither4[8] = float[8](3.0, 35.0, 11.0, 43.0, 1.0, 33.0, 9.0, 41.0);
const float dither5[8] = float[8](51.0, 19.0, 59.0, 27.0, 49.0, 17.0, 57.0, 25.0);
const float dither6[8] = float[8](15.0, 47.0, 7.0, 39.0, 13.0, 45.0, 5.0, 37.0);
const float dither7[8] = float[8](63.0, 31.0, 55.0, 23.0, 61.0, 29.0, 53.0, 21.0);

float dither8x8(vec2 position, float scale, float brightness)
{
  int x = int(mod(position.x / scale, 8.0));
  int y = int(mod(position.y / scale, 8.0));
  // The if/else chain picks a Bayer row, since GLSL ES has no 2D arrays.
  float d = 0.0;
  if (x == 0) {
    d = dither0[y];
  } else if (x == 1) {
    d = dither1[y];
  } else if (x == 2) {
    d = dither2[y];
  } else if (x == 3) {
    d = dither3[y];
  } else if (x == 4) {
    d = dither4[y];
  } else if (x == 5) {
    d = dither5[y];
  } else if (x == 6) {
    d = dither6[y];
  } else {
    d = dither7[y];
  }
  float limit = (d + 1.0) / 64.0;
  return brightness < limit ? 0.0 : 1.0;
}

vec4 ditherAndPosterize(vec2 position, vec4 color, float colorDepth, float ditherScale)
{
  vec4 yuv = RGBtoYUV(color);
  // The two quantized levels around each channel value...
  vec4 col1 = floor(yuv * colorDepth) / colorDepth;
  vec4 col2 = ceil(yuv * colorDepth) / colorDepth;
  // ...and a per-pixel choice between them, driven by the Bayer threshold.
  yuv.x = mix(col1.x, col2.x, dither8x8(position, ditherScale, ditherChannelError(yuv.x, col1.x, col2.x)));
  yuv.y = mix(col1.y, col2.y, dither8x8(position, ditherScale, ditherChannelError(yuv.y, col1.y, col2.y)));
  yuv.z = mix(col1.z, col2.z, dither8x8(position, ditherScale, ditherChannelError(yuv.z, col1.z, col2.z)));
  return YUVtoRGB(yuv);
}

// usage
color = ditherAndPosterize(gl_FragCoord.xy, color, 15.0, 1.0);
Texture filtering? Nope. That's why all textures look pixelated on the PS1. Smoothing them out would result in a slightly more realistic-looking picture.
Here’s how to set texture filtering to nearest in Three.js:
model.scene.traverse((obj) => {
  if (obj.isMesh && obj.material.map) {
    const { map } = obj.material;
    // Nearest-neighbor magnification gives the chunky PS1 pixels; linear
    // minification is kept to tame shimmering at a distance (use
    // THREE.NearestFilter for both if you want the fully raw look).
    map.minFilter = THREE.LinearFilter;
    map.magFilter = THREE.NearestFilter;
    map.needsUpdate = true; // filters changed after the texture was uploaded
  }
});
To get even crisper pixels I recommend applying nearest filtering at the browser rendering level as well, via CSS:
canvas {
image-rendering: pixelated;
}
And finally, texture mapping. The PS1 used affine texture mapping, which didn't account for perspective. This sometimes resulted in the distinctive texture wobbling effect at sharp camera angles. To fight this effect you'd have to increase the density of polygons, but remember that the PS1 could only process a handful of them per frame. As with everything in programming, it's all about tradeoffs.
// vertex shader
// (meant to be injected into Three.js shader chunks, so mvPosition and
// the projected position pos come from the surrounding built-in code)
varying vec2 vUv;
varying float vAffine;

float dist = length(mvPosition);
float affine = dist + (pos.w * 8.0) / dist * 0.5;
// pre-multiply the UVs here and pass the factor along...
vUv = vUv * affine;
vAffine = affine;

// fragment shader
varying vec2 vUv;
varying float vAffine;

// ...then divide it back out per fragment: the interpolation in between
// stays affine, reproducing the PS1 wobble
vec2 uv = vUv / vAffine;
vec4 color = texture2D(map, uv);
The last and most famous effect is geometry jittering, or vertex snapping. Here's a quick look at the effect in a small scene I've worked on.
It wasn't done on purpose either, but again caused by hardware limitations. The PS1 didn't support floating-point numbers. Hell, these days iOS chips include floating-point instructions tailored for JavaScript numbers to make your crazy websites run fast! Having only integers explains where the name “vertex snapping” comes from. Basically, when positioning things on screen it was impossible to put a point between integer coordinates 1 and 2; in other words, vertices snap to a pixel grid. When emulating this style we can define the resolution of the grid and thus tune the jittering effect.
Here’s vertex shader code that snaps vertices to a predefined grid:
// vertex shader
vec2 resolution = vec2(320, 240); // virtual grid; lower it for stronger jitter
vec4 pos = projectionMatrix * mvPosition;
pos.xyz /= pos.w;                                 // to normalized device coords
pos.xy = floor(resolution * pos.xy) / resolution; // snap to the grid
pos.xyz *= pos.w;                                 // back to clip space
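The same snapping math in plain JavaScript, for a single normalized coordinate (just to show what the floor/divide pair does; the function is mine):

```javascript
// Snap a normalized coordinate to steps of 1/resolution. Everything
// between two steps collapses to the lower one, which is why vertices
// visibly "jump" as objects move.
function snap(coord, resolution) {
  return Math.floor(coord * resolution) / resolution;
}

console.log(snap(0.5012, 320)); // 0.5
console.log(snap(0.504, 320));  // 0.503125, the next 1/320 step
```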
Oh, and let's not forget about screen resolution and the lack of antialiasing: the PS1 output a video signal at 320×240. Yeah, 4:3 TVs. That's a bit too extreme for my taste; I prefer to lower the resolution by 2 or 4 times at most and force pixelated rendering.
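In practice that means rendering into a small buffer and letting CSS stretch it back up. The helper below only computes the internal size; the commented lines sketch how it would plug into a hypothetical THREE.WebGLRenderer setup:

```javascript
// Internal render size, a few times smaller than the display size.
function internalSize(displayWidth, displayHeight, downscale) {
  return [
    Math.floor(displayWidth / downscale),
    Math.floor(displayHeight / downscale),
  ];
}

const [w, h] = internalSize(1280, 960, 4); // 320x240, the PS1's own output
// renderer.setSize(w, h, false);            // false: leave the CSS size alone
// renderer.domElement.style.width = '100%'; // CSS scales the small buffer up
// ...combined with image-rendering: pixelated from earlier
```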
Now the main question: how do you apply those shaders to Three.js materials? There are a couple of libraries for monkey-patching and composing shaders, but if you are lazy™ like me and want to apply the effect globally, just patch the shader source before loading your scene.
- Dithering and posterization function goes into THREE.ShaderLib.physical.fragmentShader, and its usage into THREE.ShaderChunk.map_fragment
- Vertex snapping code goes into THREE.ShaderChunk.project_vertex
- Affine mapping goes into THREE.ShaderChunk.uv_pars_vertex, THREE.ShaderChunk.uv_pars_fragment and THREE.ShaderChunk.map_fragment
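As a sketch of that global patching (with a plain object standing in for THREE.ShaderChunk, whose entries really are just GLSL strings; the anchor line matches recent Three.js versions of project_vertex, so check your own version's chunk source):

```javascript
// Stand-in for THREE.ShaderChunk; with the real thing you'd mutate
// THREE.ShaderChunk.project_vertex before any material gets compiled.
const shaderChunk = {
  project_vertex: 'gl_Position = projectionMatrix * mvPosition;',
};

// Splice the vertex snapping code in place of the default projection.
shaderChunk.project_vertex = shaderChunk.project_vertex.replace(
  'gl_Position = projectionMatrix * mvPosition;',
  [
    'vec4 pos = projectionMatrix * mvPosition;',
    'pos.xyz /= pos.w;',
    'pos.xy = floor(resolution * pos.xy) / resolution;',
    'pos.xyz *= pos.w;',
    'gl_Position = pos;',
  ].join('\n')
);
```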
Overall it's a very nice set of effects if you are into retro graphics: easy to implement and super cheap to run.