Researchers at ETH Zurich and Disney have created a new algorithm for correcting the kind of large stereoscopic disparity, common in live 3D events, that can cause headaches and nausea when things “jump out” of the screen too far.  The algorithm is simple enough that it could be implemented directly in displays for automatic correction, or used to post-process video prior to broadcast.  The article is in German, but the Google translation reads roughly:

Smolic and his team have now found an efficient way to change the disparity after the fact. With an algorithm developed by Smolic, individual picture elements can be recalculated to produce a balanced depth impression. Using nonlinear operators, the researchers can also limit changes to individual depth layers in the image, or alter the depth structure of a film scene on the basis of individual objects. In addition, the algorithms include a temporal component, since our three-dimensional perception also depends on the temporal relationship between the frames for the right and the left eye.
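The article doesn't give the actual operators, but the idea of nonlinearly compressing a disparity map, plus a temporal term, can be sketched. The log-style mapping and the exponential smoothing below are my assumptions for illustration, not the researchers' published method:

```python
import numpy as np

def compress_disparity(d, d_max_out, gain=10.0):
    """Nonlinearly remap a per-pixel disparity map so that large
    disparities (extreme 'pop-out') are compressed far more than
    small ones, while preserving depth ordering and sign.
    The log1p operator here is an illustrative choice, not the
    one from the article."""
    d = np.asarray(d, dtype=float)
    d_max_in = np.max(np.abs(d))
    if d_max_in == 0:
        return d
    # Normalized log curve: maps 0 -> 0 and d_max_in -> d_max_out,
    # monotonically, with decreasing slope for large disparities.
    scaled = np.log1p(gain * np.abs(d) / d_max_in) / np.log1p(gain)
    return np.sign(d) * d_max_out * scaled

def temporal_smooth(d_curr, d_prev, alpha=0.8):
    """One plausible reading of the 'temporal component': blend each
    frame's remapped disparity with the previous frame's to avoid
    sudden depth jumps between frames."""
    return alpha * np.asarray(d_curr) + (1 - alpha) * np.asarray(d_prev)
```

For example, a frame whose disparities range up to 40 pixels could be squeezed into a comfortable ±20-pixel budget with `compress_disparity(d, 20.0)`; the remapped map would then drive a view-synthesis warp of the left/right images.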

Looks promising, but I would love to see more details of the algorithm itself.

Ein Algorithmus für mehr 3D-Sehgenuss (“An algorithm for greater 3D viewing pleasure”). (Translated to English)