If you zoom in on the image, you can see that it is not composed solely of black vertical lines: the white areas also contain pixels with different gray tones. When you move your head sideways, those gray tones become more noticeable.
If you were to remove the black lines, you could see the face clearly. Initially I thought that moving your head blurred the gray shapes, making them seem larger and therefore more visible. On reflection, though, I think what's actually happening is that the high contrast between the black lines and the mostly white background causes your perception to adjust so that it can't easily distinguish the mid tones. This is because our vision has a low dynamic range at any given moment (relative to the full range of absolute brightness; compared to camera CCDs our overall dynamic range is high), so the eye has to adjust its light sensitivity to compensate for the overall brightness of the scene. This is called brightness adaptation. For further reading, Webvision, the free textbook hosted by the University of Utah, covers it well.
When you move your head, the black and white lines blur together, so the overall brightness appears to be the average of the black and the white. Against that darker average background your light sensitivity increases, and the areas whose tone differs from the average, the gray pixels of the face, start to stand out.
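Here's a minimal sketch of that averaging effect in Python (the filename illusion.png and the window width are my own assumptions, not from the post): averaging each pixel with its horizontal neighbours is a crude stand-in for the motion blur of a sideways head movement, and the striped pattern collapses towards its mean while the gray face remains.

```python
import numpy as np
from PIL import Image

# Hypothetical input file containing the line-pattern illusion.
img = np.asarray(Image.open("illusion.png").convert("L"), dtype=float)

k = 15  # half-width of the horizontal averaging window, in pixels (assumed)
kernel = np.ones(2 * k + 1) / (2 * k + 1)

# Convolve each row with a box kernel: a horizontal-only motion blur that
# smears the black and white stripes into their average gray.
blurred = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), axis=1, arr=img
)

Image.fromarray(blurred.astype(np.uint8)).show()
```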
By reducing the brightness you can see the faint image in the background much more clearly...
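A rough digital analogue of turning the brightness down is to stretch the faint gray tones so their small differences become large, obvious ones. The sketch below assumes the same hypothetical illusion.png and treats anything darker than 128 as part of a black line; both values are assumptions for illustration.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("illusion.png").convert("L"), dtype=float)

# Ignore the black lines (assumed to be anything below 128) and measure the
# narrow brightness range occupied by the faint gray tones.
light = img[img >= 128]
lo, hi = light.min(), light.max()

# Stretch that narrow range across the full 0-255 scale so the face stands out.
stretched = np.clip((img - lo) / max(hi - lo, 1) * 255, 0, 255)
Image.fromarray(stretched.astype(np.uint8)).show()
```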