Large format displays, especially tiled displays, are becoming more and more popular.  They’re nothing new, having been around for over a decade: a quick search of the internet finds Princeton’s 18-megapixel wall back in 1999, and SGI was demonstrating projection-based multi-screen walls back in the mid-’90s, calling them PowerWalls.  Currently, the Texas Advanced Computing Center has a 300+ megapixel wall called Stallion in operation.

But what do all these pixels get you?  Are more pixels like more horsepower, where you can never have enough?  Or is there a point of diminishing returns?  Have we already hit the maximum needed?  How many pixels can the human eye process?  Let’s find out.

The Technology behind Tiled Displays

Tiled displays come in two main flavors: projector-based and LCD-based.  In the early days of tiled displays, they were almost exclusively projector-based.

An example of edge-blending for two projectors (red and blue), showing the loss of space (the purple region).

Using rear-projected displays, projectors could be placed close together to maximize the number of pixels per inch and avoid the distortion that comes with long projection distances.  The main difficulty with projection systems was handling the edges.  SGI offered some pricey technology for edge-blending, which could maintain a constant luminosity (brightness) across the seam by sacrificing some screen area to an overlap between two neighboring projectors (shown above).  It works great for a single line of projectors (e.g., a screen three projectors wide), but in a 2D matrix it gets much more complex.
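
To make the idea concrete, here is a minimal sketch of how an edge-blend ramp can work: across the overlap region, each projector’s contribution is faded so that the two weights always sum to one, and an inverse-gamma correction is applied because projector output is not linear in the drive value.  The smoothstep ramp and the gamma of 2.2 are illustrative assumptions, not SGI’s actual algorithm.

```python
def blend_weights(t, gamma=2.2):
    """Blend weights for two projectors at normalized position t (0..1)
    across their overlap region. The left projector fades out while the
    right fades in; the two linear-light weights always sum to 1."""
    # Smoothstep ramp so the transition has no visible hard edge.
    w_right = t * t * (3 - 2 * t)
    w_left = 1.0 - w_right
    # Projectors respond non-linearly, so convert each linear-light weight
    # into a drive value (inverse gamma) before applying it to the pixels.
    return w_left ** (1.0 / gamma), w_right ** (1.0 / gamma)

# Sample the ramp at a few points across the overlap.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, blend_weights(t))
```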

Two neighboring projectors with a shim near the screen (right) to prevent overlap.

The alternative for handling edges on projectors was to use shims (shown above).  Placing shims between the projector and the screen gives you a hard edge that you can control, allowing you to place two projected edges right next to each other.  There is no overlap, so the shims must be positioned very precisely to prevent any visible seam from blank space or overlap.  It’s tricky to get right and very susceptible to accidental disturbance (“Oops, I bumped the screen” is akin to yelling fire).  It’s also tricky since you’ll most likely be cutting pixels off of your display, which may require code and system-configuration changes to handle the now-unusual projected sizes.

TACC's Tiled Display Wall, using LCDs.

As LCDs came on the scene and their price-per-pixel dropped below that of projectors, people began building tiled displays from huge matrices of LCD panels.  The biggest problem with LCD panels is the bezels; to date, no one has built an LCD panel without one.  But with a suitable layout and visualizations that account for the space lost to the bezels, you can create a very powerful “window” effect, and the bezels effectively vanish from the user’s vision.  It’s an effect better seen than described, but your brain does cease to notice the bezels after only a short period of exposure.
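
As a rough sketch of what “accounting for the lost space” can mean, the snippet below treats each bezel as a band of hidden pixels, so that imagery appears to pass behind the seam rather than being squeezed into it.  The panel and bezel measurements are made-up placeholders; a real wall would use the manufacturer’s numbers.

```python
# A rough sketch of bezel compensation: treat each bezel as hidden pixels
# so content appears to pass *behind* the seam instead of being compressed.
# The numbers below are illustrative, not measurements of any real wall.

PANEL_W_PX = 2560          # active pixels across one panel
PANEL_W_MM = 641.0         # active display width in mm (assumed)
BEZEL_MM = 20.0            # bezel width on one side of the panel (assumed)

px_per_mm = PANEL_W_PX / PANEL_W_MM
hidden_px = round(2 * BEZEL_MM * px_per_mm)   # two bezels meet at each seam

def panel_origin_x(column):
    """Virtual-desktop x-offset of the panel in the given column (0-based),
    counting the pixels 'hidden' behind each seam to its left."""
    return column * (PANEL_W_PX + hidden_px)

for col in range(4):
    print(f"column {col}: starts at virtual x = {panel_origin_x(col)}")
```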

The Benefits of Tiled Displays

In many modern visualization settings, you need to display millions of triangles to properly view the dataset.  The biggest desktop monitors run at 2560×1600 resolution, which is roughly 4 million pixels.  Basic math says that if you have to render more than 4 million triangles, some of them will wind up smaller than a single pixel and never really be seen (either dropped entirely, or combined with their neighbors into a single pixel during antialiasing).  Attempting to render more detail than the monitor can show is simply wasted effort.

Tiled displays offer a simple way to add pixels:  double the number of screens and you double the number of pixels.  Build a 4×3 matrix of 30″ displays running at 2560×1600, and you’ve got a roughly 50-megapixel display.  Also, unlike a projector offering a similar screen size, you’ve still got pixels the size of pixels, not the size of postage stamps.  Modern visualization packages like ParaView and EnSight support tiled displays out of the box, making them a viable option for people working with large data to see the finest details while maintaining the context of the whole.
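
The arithmetic behind those two paragraphs fits in a few lines; the 20-million-triangle dataset below is just an arbitrary example to show how quickly a single monitor runs out of pixels while a 4×3 wall still has headroom.

```python
# Back-of-the-envelope pixel budgets from the paragraphs above.
single_monitor = 2560 * 1600              # ~4.1 million pixels
wall_4x3 = 4 * 3 * single_monitor         # ~49 million pixels: the "50-megapixel" wall

triangles = 20_000_000                    # an example large dataset (arbitrary)
print(f"single 30-inch monitor : {single_monitor / 1e6:.1f} Mpixels, "
      f"{triangles / single_monitor:.1f} triangles per pixel on average")
print(f"4x3 tiled wall         : {wall_4x3 / 1e6:.1f} Mpixels, "
      f"{triangles / wall_4x3:.1f} triangles per pixel on average")
```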

And working with tiled displays is getting easier.  It used to require advanced configurations and specialized tools like SGI’s Performer, but newer software makes it available to anyone.  Tools like Chromium, cglx, and Xdmx can enable almost any X-based or OpenGL-based application to run on a tiled display with relative ease, turning a large tiled display into what is effectively one massive desktop with minimal effort.
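
As an illustration of the Xdmx approach (the hostnames and layout here are hypothetical, and a real wall also needs matching resolutions and X authentication set up first), the sketch below stitches several render nodes’ X servers into one logical display that ordinary applications can then target:

```python
import subprocess

# Hypothetical render nodes, each already running an X server on :0.
backends = ["tile-node1:0", "tile-node2:0", "tile-node3:0", "tile-node4:0"]

# Xdmx aggregates the back-end servers into one logical X display (:1 here);
# +xinerama presents them to clients as a single large screen.
cmd = ["Xdmx", ":1", "+xinerama"]
for display in backends:
    cmd += ["-display", display]

subprocess.run(cmd, check=True)
# Once running, ordinary X/OpenGL applications can target DISPLAY=:1.
```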

The Problems of Tiled Displays

The problems of tiled displays are twofold: infrastructure and the human factor.

Building a tiled display requires more infrastructure than you might initially think.  Even the most powerful PCs can only manage two or four monitors at a time, and at significantly reduced performance.  For a proper tiled display, you can realistically only drive one or two monitors per PC, meaning that a large display area will probably require a cluster to manage it.  Building a cluster then means a fast interconnect, probably InfiniBand or at least 10-Gigabit Ethernet, and a large, uniform storage space accessible from all the nodes.  None of these are trivial matters, and they add a lot of complexity in terms of cost and manpower.

The other, more interesting, problem is the human factor.  The human eye’s retina consists of two main types of photoreceptor: rods and cones.  Rods are responsible for luminance (brightness), essentially black-and-white vision, while cones are responsible for our color perception.  Exact measurements of the number of rods and cones in the human eye vary, but one commonly used figure is 120 million rods and 6-7 million cones, meaning roughly 20x more rods than cones.  This large disparity exists because the color vision system is a fairly recent evolution of the visual system; many animals rely primarily on rods, which are most important for tracking fast motion, like what would be needed to follow fast-moving prey.  The disparity also accounts for a variety of unusual optical effects involving “equiluminance”, which was the subject of a fantastic VizWeek 2008 keynote talk by Margaret Livingstone.  Essentially, since luminance and color are processed separately by different parts of the brain, images consisting of a constant luminance (constant greyscale value) but different colors can confuse the brain, leading to unusual effects like seeing motion that isn’t really there.

An example of an equiluminance illusion that evokes motion, when none is present.
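
To make “constant luminance, different color” concrete, here is a small sketch (assuming linear RGB values and the Rec. 709 luminance weights, which is a simplification of how a calibrated display actually behaves) that picks a green patch whose luminance matches a given red patch:

```python
# Relative luminance of a linear-RGB color using the Rec. 709 weights;
# two colors are (approximately) equiluminant when these values match.
def luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

red = (0.8, 0.0, 0.0)
# Scale a pure green so its luminance matches the red patch above.
target = luminance(*red)
green = (0.0, target / 0.7152, 0.0)

print(f"red   luminance: {luminance(*red):.3f}")
print(f"green luminance: {luminance(*green):.3f}")
# Side by side, patches like these differ strongly in hue but not in
# luminance, which is what drives the illusory-motion effects above.
```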

Another interesting point is that these rods and cones are not uniformly distributed across the retina.  They follow something closer to a Gaussian distribution, which is why you have sharp vision at the center of your gaze and increasingly poor vision in the periphery.  There is one exception, however: where the optic nerve connects to the eye there are no photoreceptors at all, which leads to the famous “blind spot”.  To compensate, your brain keeps your eyes in constant motion and “fills in” the blind spot, so it’s not noticeable.

So What’s the Optimal Size?

So, if the human eye can see approximately 120 million points of greyscale and 7 million points of color (if you manage to fully saturate the eye), then how many pixels can you see at once?  This is a topic of great debate in visualization right now, with no real answer.  Many people believe that the human eye can only effectively process about 5 million pixels at a time; more than that requires you to move your head or physically move your eyes to take in other parts of the screen.  Is this the effective “maximum” size of a display?

I say no.  While the human eye can’t take in a much larger display in a single pass, larger displays still add value.  A 100-million-pixel display can be viewed at a distance to great effect, and then the user can physically approach the screen to examine detail.  This allows a more physical interaction with the data, maintaining contextual information in the user’s field of view while providing detail that would be lost by “zooming in” on a single display.

Right now, the optimal size seems to be approximately 50-100 million pixels, which is somewhere between 12 and 24 30-inch 2560×1600 screens (currently top-of-the-line LCDs).  Twelve to 24 screens could be driven by anywhere from 6 dual-display workstations to 24 systems, still within the reach of a single-rack setup.  More than that would require multiple racks and more infrastructure, which pushes the cost up dramatically.  100 million pixels is also a “sweet spot” for the human vision system, allowing users to view the entire screen at a distance while still being able to focus on a single point.

Larger screens, like TACC’s 300-million-pixel display, often find themselves “hacked up” into several smaller displays used simultaneously.  Dedicating portions of the screen to different tasks lets the entire wall be used in a somewhat collaborative manner without any one application overwhelming the user.  Typically these walls wind up showing one or two large visualizations, with additional notes or reference material taking up the remainder of the space.

So what do you think?  Chime in: what’s your optimal display size?