A mirror box or beamsplitter 3D rig is usually considered necessary for close-up shots, OTS (over-the-shoulder) shots, and other such narrative scenes in movies.
This experiment tests whether there is a workaround, and whether better scene blocking and camera maneuverability can be achieved with small parallel cameras.
Admittedly, this is the worst video clip of the three short chess experiments that were shot (see the other clips on our YouTube channel). I tried to keep the hand from breaking the stereo window on the left, but this video is about something else:
Excessive Positive Parallax and Creative Scene Framing:
In this clip, the scene starts with the focus of attention on the hand moving the chess piece to complete the move.
...THIS IS IMPORTANT... if this were a scene from a 3D movie, the editor would cut to another shot within a couple of seconds.
Only if the shot lingers a bit longer will viewers start to roam the scene and "fuse" the background.
There is nothing distracting in the background, just a bland floor.
...So... even though there is excessive positive parallax on the floor, would it hurt viewers' eyes?
The argument here is that if there were something of interest to "fuse", say some interesting objects or a pile of books (as seen in the other chess video clips), then audiences would automatically gravitate toward trying to "fuse" the book titles, for example.
It would be even worse if depth of field were at play and the book titles were slightly out of focus.
However, in this scene there is just a bland floor... can such scene framing help in close-up scenes in 3D?
the experiments continue....
*** About the Video ***
This video can be seen in 3D by selecting the appropriate 3d viewing option from the menu below the video.
The video is part of a few experiments I did while bored at home, cleaning my previz camera and getting ready for a location-scouting shoot for a movie assignment.
No 3D monitoring was used. It was shot "blind" looking only at the left camera's tiny viewfinder.
With live 3D monitoring, precise, error-free framing would be possible; that way, only minimal HIT (horizontal image translation) or other post manipulation would be required.
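For what it's worth, HIT itself is conceptually just a horizontal shift of the two views relative to each other, cropped back to a common width, which moves the zero-parallax (screen) plane in post. A minimal NumPy sketch (the function name and the shift-direction convention are my own illustration, not taken from any particular tool):

```python
import numpy as np

def apply_hit(left, right, shift_px):
    """Horizontal Image Translation: slide the two views toward each
    other by shift_px pixels, cropping the columns that fall off so
    both frames keep the same width. Which way this moves the scene
    (toward or behind the screen) depends on your sign convention."""
    if shift_px == 0:
        return left, right
    return left[:, shift_px:], right[:, :-shift_px]

# Toy example with blank frames, just to show the crop bookkeeping.
left = np.zeros((480, 640), dtype=np.uint8)
right = np.zeros((480, 640), dtype=np.uint8)
l, r = apply_hit(left, right, 12)
print(l.shape, r.shape)  # both views end up 628 columns wide
```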
The camera rig has an interaxial of approx. 3 inches. The rig is parallel.
(I'm a bit against toed-in or converged cameras, but it depends on the scene.)
The cameras are LANC-synced, and the scene was lit only by natural light coming in from a window.
The distance from the camera rig to the table was approx. 3 1/2 feet (no beamsplitter was used to capture the scene).
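As a rough back-of-the-envelope check on how much positive parallax a parallel rig like this produces, the standard disparity formula for parallel cameras can be sketched in Python. The ~3 inch interaxial and ~3.5 ft table distance are from the description above; the focal length in pixels and the choice to converge on the table via HIT are assumptions for illustration only:

```python
def parallax_px(interaxial_m, focal_px, convergence_m, depth_m):
    """Horizontal parallax in pixels for a parallel stereo rig whose
    convergence plane was set in post (via HIT) at convergence_m.
    Positive = behind the screen plane, negative = in front of it."""
    return focal_px * interaxial_m * (1.0 / convergence_m - 1.0 / depth_m)

IA = 0.076     # ~3 inch interaxial, in metres (from the description)
F_PX = 1200.0  # ASSUMED focal length in pixels (camera/lens dependent)
CONV = 1.07    # converge on the table, ~3.5 ft, set in post via HIT

# Table, mid-ground, floor behind, and a very distant point:
for z in (1.07, 2.0, 5.0, 100.0):
    print(f"{z:6.2f} m -> {parallax_px(IA, F_PX, CONV, z):+8.1f} px")
```

The point the sketch makes: for a parallel rig, parallax grows toward a fixed ceiling as depth increases, so a bland far floor contributes large but bounded positive parallax, which is the divergence the description is weighing against viewer comfort.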
With cameras like the Si2K, Iconix, and other MVC cameras out there, is there really a need for a bulky beamsplitter rig for close-up shots? ...the experiments continue...
Pictures of the rig: [ Link ]
[ Link ]