Stereoscopic vision with reduced definition in one eye

by David Gibson, david[at]caves..., 27 April 2015.

Minor updates 30-Apr-2015. (For last-modified date see page footer)

Summary: A number of stereo pairs are presented, demonstrating that an adequate stereoscopic image can be formed by the human brain even when one of the images is considerably defocussed or pixelated. In a digital transmission, the reduced-definition image can have a bandwidth (pixel count x bits per pixel) as low as 1/48th that of the other. It is suggested that this could have useful implications in the transmission of stereoscopic images over reduced-bandwidth channels.

I am short-sighted, requiring optical correction to my vision of about -5 dioptres. However, as I have aged, I have also developed far-sight, requiring a correction of +2 dpt. For outdoor sport in the rain, I have worn daily-disposable contact lenses that corrected only my short-sightedness, which meant that I found it very difficult to read maps or operate a camera. My optician suggested that I try a combination of contact lenses, correcting for short-sight in my dominant eye and for far-sight in my other eye (i.e. left eye: -5 dpt; right eye: -3 dpt). Although it took a bit of 'training', this appears to work well. My brain clearly filters the information and allows me to see objects both far away and close at hand without difficulty. It seems an amazing solution to the problem.

My curiosity was piqued, however, because my stereoscopic vision is unhindered by the lack of information supplied by one eye. One can imagine the brain deciding to 'ignore' the information from one eye, but clearly that is not precisely what it is doing, because it still manages to produce a good stereoscopic image.

This reminded me of the signal encoding behind conventional analogue colour television. An analogue TV system does not transmit 'red', 'green' and 'blue' channels, but a 'luminance' (monochrome) channel (known as 'Y') and two 'chrominance' channels that carry Blue-minus-Y and Red-minus-Y information. This allows the RGB data to be reconstructed but, importantly, it allowed 625-line colour television broadcasting to remain compatible with the earlier monochrome-only receivers - the additional colour information being transmitted on a sub-carrier that the monochrome receivers simply ignore (see Wikipedia). The salient point, which I am leading up to, is that the chroma channels are transmitted at a lower bandwidth; that is, the colour information is rather blurred compared with the luma channel. Clearly, the human eye does not 'mind' receiving this low-bandwidth information, and manages to reconstruct a complete image regardless. Is that what is happening with stereoscopic vision with a reduced definition in one eye?
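
The luma/chroma split can be sketched in a few lines. This is a minimal illustration using the standard Rec. 601 luma weights (an assumption - the exact coefficients vary between TV standards); note that the original RGB values are recoverable exactly from Y, B-Y and R-Y, which is why the system loses nothing when the chroma channels are kept at full bandwidth, and degrades gracefully when they are not.

```python
# Sketch: the luma/chroma split used by analogue colour TV.
# Rec. 601 weights assumed: Y = 0.299 R + 0.587 G + 0.114 B.
# Y carries the full-detail monochrome picture; the two colour-difference
# signals (B-Y, R-Y) are the ones sent at reduced bandwidth.

def rgb_to_ybr(r, g, b):
    """Split an RGB pixel into luminance plus two colour-difference values."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (monochrome signal)
    return y, b - y, r - y                  # Y, B-Y, R-Y

def ybr_to_rgb(y, by, ry):
    """Reconstruct RGB exactly from Y, B-Y and R-Y."""
    r = y + ry
    b = y + by
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # solve the luma equation for G
    return r, g, b

# Round trip: an orange-ish pixel survives the split-and-rebuild intact.
y, by, ry = rgb_to_ybr(200, 120, 40)
r, g, b = ybr_to_rgb(y, by, ry)
```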

If that is the case, it suggests that the bandwidth of a stereoscopic transmission can be reduced without compromising the resolution of a flat, non-stereoscopic picture. This may be of use on bandwidth-limited channels such as Mars landers or remote operations in deep coal mines.

The effect of having a limited definition in one eye can easily be demonstrated using a set of stereo pairs, displayed on your computer monitor. For this, you need a good screen width (say 1000 pixels).

Index to images that follow...

  1. Original Image
  2. Blur - radius 3.5 (left eye)
  3. Blur - radius 3.5 (right eye)
  4. Blur - radius 3.5, applied 4x (left eye)
  5. Blur - radius 3.5, applied 4x (right eye)
  6. Overlaid Images
  7. Pixelation x8
  8. Pixelation x4
  9. Pixelation x4, 8-bit palette
  10. Pixelation x4, 4-bit palette
  11. Pixelation x4, 4-bit palette, monochrome

1: Original Image

First of all, here is the original image that I will be processing in different ways. This is a 'stereo pair' but - unconventionally - the image intended for the right eye is on the left. This makes it easier to view the images without a special viewer. With your eyes at a normal viewing distance, cross them, so that you see three images and then, with a bit of effort, it is possible to get the central image to snap into 3D. You will see a 'smaller' version of the image floating in space in front of the computer screen. It can take a bit of practice, but it is far easier than trying to view a conventional stereo pair without a viewer, because making the eyes cross is easier than making them diverge - which they would need to do, as the images are further apart than the separation between the eyes. A word of warning though - I have been told that if you view stereo pairs professionally, for a living, viewing them with your eyes crossed will lead to a very bad habit that will affect your work! (back to list)

2: Blur - radius 3.5 (left eye)

Having mastered the technique, we can now perform a few experiments. In this pair, the image for the left eye (remember, this is on the right in this 'swapped over' stereo pair) has been blurred. You should find that - surprisingly - it makes little difference to the stereoscopic image. This is a demonstration of what I see when wearing my adjusted contact lenses. (back to list)

For those interested in the technicalities, I achieved this blur by using my graphics processing package to write a customised filter, which smeared each pixel over a circle of diameter seven pixels. This is equivalent to a slight de-focussing of the image into a 'circle of confusion' of radius 3.5 pixels. It is difficult to establish a direct correlation between this smear and the defocussing power of a lens, because it depends how you define the angle of view, but a reasonable estimate is about +0.7 dpt, which I obtained from the relation c/A = D1/D0, where c is the circle of confusion (in pixels), A is the width of the image (in pixels; this was 480 but, strictly speaking, using the image width is incorrect - I need to think about this), D1 is the power of the correcting lens in dpt, and D0 is the power of the human eye (taken as 50 dpt). (back to list)
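
The arithmetic behind that estimate can be checked directly, using the numbers from the text (c = 7 px, the diameter of the smear; A = 480 px; D0 = 50 dpt):

```python
# Rough dioptre estimate for the circle of confusion, from c/A = D1/D0.
# Values are those given in the text above.

c = 7       # circle of confusion, pixels (diameter of the smear)
A = 480     # image width, pixels
D0 = 50.0   # approximate optical power of the human eye, dioptres

D1 = D0 * c / A   # equivalent defocus of the correcting lens
print(round(D1, 2))   # about 0.73, i.e. roughly +0.7 dpt as stated
```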

3: Blur - radius 3.5 (right eye)

It is interesting to try the opposite effect - with the blur applied to the right eye. Do you notice any difference between this and the previous image? I don't. (back to list)

4: Blur - radius 3.5, applied 4x (left eye)

The customised filter option in my graphics software has a limited scope, so I was unable to blur the image any further (without recourse to tedious and slow Matlab processing). However, a blur 'of sorts' can be achieved by simply applying my filter multiple times. Here it is applied 4 times. (back to list)
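A minimal sketch of the 'apply it repeatedly' idea, on a tiny greyscale raster. The 3x3 box average here is a crude stand-in for the author's disc-shaped smear (an assumption - his filter is a seven-pixel circle), but it shows the principle: each extra pass spreads the energy of a pixel further, widening the effective blur.

```python
# Repeated blurring: one pass of a small averaging filter, applied n times.

def blur_once(img):
    """One pass of a 3x3 box blur (edges clamped to the image)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += img[yy][xx]
                        n += 1
            out[y][x] = total / n   # average over the in-bounds neighbours
    return out

def blur_n(img, passes):
    for _ in range(passes):
        img = blur_once(img)
    return img

# A single bright pixel: after one pass it has spread to its neighbours
# only; after four passes it has reached the corners of the 5x5 grid.
spot = [[0.0] * 5 for _ in range(5)]
spot[2][2] = 255.0
once = blur_n(spot, 1)
four = blur_n(spot, 4)
```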

5: Blur - radius 3.5, applied 4x (right eye)

(back to list)

6: Overlaid Images

At this point, you may be thinking that, whatever the eye and brain are up to, it is aided by having the two images in separate channels. So what happens if we overlay the original and the blurred images into a single picture? Surprisingly, it does not look very different from the original. In the pair that follows, the left image is the original (right eye) and the right image is the original (R) overlaid with the severely blurred (R) image above. There is no point in trying to view these in 3D as they are the same image - but look closely at how the blur in the right image has been reduced to a vague fuzziness. (Also notice that it is almost a 'useful' artistic effect, possibly similar to the 'soften' filter that you might find in your photo editing application). (back to list)

Again, for those interested in the technicalities, I achieved this overlay by using my graphics processing package to combine the two photos using an "arithmetic addition" of the pixels and a division by two of the resulting values. (back to list)
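The 'arithmetic addition and division by two' is simply a per-pixel average of the two images, which can be sketched as:

```python
# Overlay as described in the text: (a + b) / 2 for each pixel.

def overlay(a, b):
    """Per-pixel average of two equal-sized greyscale rasters."""
    return [[(pa + pb) / 2 for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# A sharp checker averaged with a flat grey: contrast halves, detail stays.
sharp   = [[0, 255], [255, 0]]
blurred = [[128, 128], [128, 128]]
mixed = overlay(sharp, blurred)
```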

7: Pixelation x8

The above images demonstrate the operation of my contact lenses; but I mentioned low-bandwidth transmission channels. If we assume that the transmission is digital rather than analogue, then the low bandwidth would be achieved by pixelation rather than blurring. Does this make any difference to the effect? The left-eye image below has been reduced in linear resolution by 8 times. That is, each 1x1 'pixel' comprises 8x8 of the original pixels, so it occupies 1/64th of the bandwidth. (back to list)
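Pixelation of this kind can be sketched as block averaging - replace each n x n block with its mean value (an assumption: the author's software may pick a block's single value rather than its average, but the bandwidth arithmetic is the same - an n-fold linear reduction sends 1/n² as many pixels):

```python
# Pixelation: average each n x n block. A factor-8 linear reduction
# transmits 1/64 as many pixel values.

def pixelate(img, n):
    """Average each n x n block (dimensions assumed divisible by n)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, n):
        for bx in range(0, w, n):
            block = [img[y][x] for y in range(by, by + n)
                               for x in range(bx, bx + n)]
            avg = sum(block) / (n * n)
            for y in range(by, by + n):
                for x in range(bx, bx + n):
                    out[y][x] = avg   # whole block takes the one value
    return out

# A 2x2 image pixelated at n=2 collapses to a single average value.
tiny = [[0, 2], [4, 6]]
flat = pixelate(tiny, 2)
```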

8: Pixelation x4

This is a slightly less severe bandwidth reduction; the digital size of the reduced image is 1/16th. (back to list)

9: Pixelation x4, 8-bit palette

Now consider the above image, but with the colour space reduced from the standard 24-bit to 8-bit. This decreases the bandwidth by an additional three times although, as you can see, it introduces banding in the sky. The digital size of the reduced image is 1/48th. (back to list)
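One simple way to cut a 24-bit pixel (8 bits per channel) down to 8 bits is 3-3-2 quantisation - 3 bits of red, 3 of green, 2 of blue. That particular scheme is an assumption here (the author's package may build an optimised palette instead, which is what causes the sky banding), but the bandwidth arithmetic is the same either way: bits per pixel drop by a factor of three, on top of the 1/16 from pixelation.

```python
# Palette reduction sketch: 3-3-2 quantisation of a 24-bit pixel
# (an illustrative scheme, not necessarily the one the author's software used).

def quantise_332(r, g, b):
    """Keep the top 3, 3 and 2 bits of an 8-bit-per-channel pixel."""
    return (r >> 5, g >> 5, b >> 6)   # ranges 0-7, 0-7, 0-3 = 8 bits total

# Combined saving for 'pixelation x4, 8-bit palette':
pixel_fraction = 1 / (4 * 4)    # 4x linear pixelation -> 1/16 of the pixels
palette_fraction = 8 / 24       # 8 bits per pixel instead of 24
print(pixel_fraction * palette_fraction)   # 1/48, as stated in the text
```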

10: Pixelation x4, 4-bit palette

With a 4-bit palette the effect is more extreme, but we still get a stereoscopic picture! The digital size of the reduced image is 1/96th. (back to list)

11: Pixelation x4, 4-bit palette, monochrome

With a 4-bit palette, there is no advantage in using monochrome over colour - it would simply mean specifying 16 shades of grey instead of 16 colours, but it is interesting to note that even if we were to use a monochrome palette, the image remains substantially the same - true, there is less 'colour' in it, but it is still stereoscopic. The digital size of the reduced image is 1/96th. (back to list)

If you liked this, why not throw a handful of small change (£0.99) in my direction. I have to make a living somehow!

This page was last modified on Sun, 09 Sep 2018 10:39:47 +0000