A while back we ran a comparison of PNG, JPG, WebP, and HIPIX, looking at the various compression details and artifacts. The results were interesting, with HIPIX emerging as the leader in both detail and file size. In the comments, however, someone brought up another format called ‘JPEGMini’, whose algorithm is proprietary and locked up, so I couldn’t test it.
That might change now, thanks to the JPEGMini website. There you can upload images or entire albums to be recompressed with their algorithm, for an estimated 5x savings in file size with no perceivable difference in quality. And all for free. The example images they show are impressive; hopefully I can get some time later today to actually run some tests.
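JPEGMini’s algorithm is still a black box, but if you want to run your own comparison, a naive baseline is simply re-encoding at lower JPEG quality with Pillow and measuring the savings (the filename below is a stand-in for whatever test image you use):

```python
# Naive baseline for a recompression test: re-encode a JPEG at several
# quality levels with Pillow and compare file sizes.
# (This is NOT JPEGMini's algorithm -- just a reference point for it.)
import os
from PIL import Image

src = "sample.jpg"  # hypothetical test image
original_size = os.path.getsize(src)
img = Image.open(src)

for quality in (95, 85, 75, 65):
    out = f"sample_q{quality}.jpg"
    img.save(out, "JPEG", quality=quality, optimize=True)
    ratio = original_size / os.path.getsize(out)
    print(f"quality={quality}: {ratio:.1f}x smaller than the original")
```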
Now here’s a field I’ve heard very little about in the commercial space. Media Cybernetics has a product called ‘AutoQuant’ for image deconvolution. For the non-math nerds out there, it’s a software package for mathematically removing well-defined blur from images, such as the blur you’d get from extremely high-magnification microscopes.
AutoQuant offers the most complete suite of 2D and 3D deconvolution algorithms available. Save time and money using blind algorithms that calculate Point Spread Functions (PSF) without having to spend countless hours capturing images of beads. Prepare image sets quickly, align stacks or channels, deconvolve in 2D or 3D, visualize time, Z, and channel, and then analyze all parameters within the same advanced application. See complex datasets come to life with the easy-to-use 3D visualization and manipulation tools and then easily export to any partner application.
In the new version, X2.2.1, they’ve added support for more file formats and some nice 5D visualization tools (5D meaning stacks of images with multiple channels and a time series).
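For a taste of what deconvolution actually does, here’s a minimal sketch using scikit-image. Note this is generic, non-blind Richardson–Lucy with a known PSF, not AutoQuant’s blind algorithms, which estimate the PSF as well:

```python
# Non-blind deconvolution sketch: Richardson-Lucy with a known PSF.
# (AutoQuant's blind algorithms also estimate the PSF; here we assume it.)
import numpy as np
from scipy.signal import convolve2d
from skimage import restoration

rng = np.random.default_rng(0)
image = rng.random((128, 128))        # stand-in for a microscope slice
psf = np.ones((5, 5)) / 25.0          # simple box blur as the PSF
blurred = convolve2d(image, psf, mode="same", boundary="symm")

# Iteratively re-estimate the unblurred image (30 iterations).
deconvolved = restoration.richardson_lucy(blurred, psf, 30)
```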
A great article in Linux Journal discusses the creation of an OpenGL-based image processing system that can analyze video captured from an attached camera in real time.
This article discusses using OpenGL shaders to perform image processing. The images are obtained from a device using the Video4Linux 2 (V4L2) interface. Using horsepower from the graphics card to do some of the image processing reduces the load on the CPU and may result in better throughput. The article describes the Glutcam program, which I developed, and the pieces behind it.
In the end, he has it running a single edge-detection kernel, but it could easily be modified to do much more.
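The article does its convolution in a GLSL fragment shader on the GPU; for readers who want to see what that kernel actually computes, here’s the same kind of 3x3 edge-detection convolution sketched CPU-side in Python (not the Glutcam code itself):

```python
# CPU-side sketch of the sort of 3x3 edge-detection convolution the
# article runs in a GLSL fragment shader (this is not the Glutcam code).
import numpy as np
from scipy.ndimage import convolve

# Laplacian-style kernel: responds strongly where intensity changes sharply.
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

def detect_edges(gray_frame: np.ndarray) -> np.ndarray:
    """Apply the kernel to a 2-D grayscale frame and clamp to [0, 255]."""
    edges = convolve(gray_frame.astype(float), edge_kernel, mode="nearest")
    return np.clip(edges, 0, 255).astype(np.uint8)
```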
The Unity Blog has been updated with examples of some of the new image effects you’ll be able to use in the upcoming Unity 3, including lens flares, depth of field, outline shaders, bloom, sun shafts, lens effects, and many more.
When I first stumbled across this video, I shrugged it off as a hoax. The results are so amazing, and so fast, that they seem beyond the realm of feasibility. However, I just found that it originated from John Nack’s Adobe blog, giving it a whole new level of legitimacy.
One of the biggest requests we get of Photoshop is to make adding, removing, moving or repairing items faster and more seamless. From retouching to completely reimagining an image, here’s an early glimpse of what could happen in the future when you press the delete key. How might you use this new capability in your workflow?
You have to see it to believe it, folks; just watch the video below. I’ve seen similar effects from PDE and level-set algorithms at old IEEE Visualization conferences, but those required significant, supercomputer-level computing power and lots of time; nothing was this fast. Perhaps it’s making use of GPGPU acceleration? Perhaps they’ve found some shortcut? Perhaps it has nothing to do with that technology? We’ll find out soon enough.
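For reference, the older PDE-based class of techniques I mentioned is available off the shelf: OpenCV ships a Navier–Stokes-based inpainting method. Whatever Adobe is doing is presumably far more sophisticated, but a minimal sketch of the basic fill-from-a-mask idea (with hypothetical filenames) looks like this:

```python
# Classical PDE-based inpainting in OpenCV (Navier-Stokes variant).
# This is the older class of technique mentioned above, not Adobe's method.
import cv2
import numpy as np

img = cv2.imread("photo.jpg")                 # hypothetical input image
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[100:150, 200:300] = 255                  # white = region to remove/fill

# inpaintRadius=3: neighborhood considered around each pixel being filled.
result = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)
cv2.imwrite("photo_filled.jpg", result)
```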
When you run a website, compressing images becomes a way of life. Finding clever ways to crush every single byte out of them not only reduces bandwidth, but reduces load times and costs as well. Over at WebDesignerDepot, they have an extensive writeup on techniques for ‘optimizing’ your images.
Making pages load fast is crucial to keeping the attention of visitors. They’re fickle folks, these users, easily disappointed if they don’t get immediate results. When they click a link, they want the target right away.
One of the biggest bottlenecks on web pages is the size and quantity of images. The obvious solution is to use fewer images. But other techniques can help us get the most out of every pixel.
They cover the usual advice (don’t make unnecessarily large images, use sprites) but also get into some interesting tricks, like Photoshop’s “19%” glitch and how various formats treat vertical and horizontal detail (like barcodes).
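The sprite trick, for instance, boils down to pasting many small images into a single file so the browser makes one HTTP request instead of dozens, with CSS picking out each icon by offset. A minimal sprite-sheet builder in Pillow (hypothetical icon filenames) might look like:

```python
# Minimal CSS-sprite builder: paste small images side by side into one
# sheet so a page needs a single HTTP request instead of many.
from PIL import Image

icon_files = ["home.png", "search.png", "cart.png"]  # hypothetical icons
icons = [Image.open(f) for f in icon_files]

sheet_w = sum(i.width for i in icons)
sheet_h = max(i.height for i in icons)
sheet = Image.new("RGBA", (sheet_w, sheet_h), (0, 0, 0, 0))

x = 0
for icon in icons:
    sheet.paste(icon, (x, 0))  # CSS background-position selects each icon
    x += icon.width

sheet.save("sprites.png", optimize=True)
```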
Superfish has developed a new image search algorithm and deployed it via a browser add-on for Internet Explorer and Firefox that lets you view a product on any of several popular online stores and automatically see similar items from around the internet. Unlike systems like Amazon’s, the recommendations are based on image processing algorithms that analyze the item on the page, so you’re shown shoes, purses, or other items with the same visual properties as the one you’re looking at. In a recent blog post they discuss some of their technology:
Visual search for flat objects already exists, and it is not bad at all. For example, there are pretty good optical character recognition (OCR) and bar-code readers in use today. But we live in a three-dimensional world where objects take on dissimilar visual forms when viewed from different viewing angles. The same shoe looks completely different from the front, back, side, top and bottom. While even a young child can abstract a real-world object from its myriad appearances, computers can only compare images by their apparent features. Superfish employs algorithms that handle complex geometries to recognize an object regardless of the angle at which the image was captured.
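Superfish hasn’t published its algorithm, but the “apparent features” comparison the quote alludes to is the standard baseline: matching local features between two photos, e.g. with ORB in OpenCV. A minimal sketch (hypothetical filenames) shows both how that works and why plain feature matching struggles across viewpoints of a 3D object:

```python
# Generic "apparent features" baseline: match local ORB keypoints between
# two product photos (this is not Superfish's algorithm).
import cv2

img1 = cv2.imread("shoe_front.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical
img2 = cv2.imread("shoe_side.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; crossCheck filters out
# one-sided matches. Few surviving matches between a front and a side view
# illustrates why raw feature matching fails across viewing angles.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
print(f"{len(matches)} cross-checked matches between the two views")
```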
The add-on works on Windows and Mac, in Internet Explorer and Firefox (no Safari, Chrome, or Opera support, it seems), and is currently in a “Beta” state but pretty functional. Hit their site for a demonstration video and to download it.
Quantum computing has long been the fodder of science fiction, its reality far too bizarre and complicated to be the concern of mere mortals, but it seems the brainiacs at Google and D-Wave have worked together to create a working prototype of an image detection algorithm that blows all existing algorithms out of the water.
In the search, Google first took 20,000 photographs — half with cars in them and half without. In each picture they drew boxes around the cars (if there were any), identifying the “car” graphic element. Next, Google took a second set of 20,000 photos — half with cars and half without. They then fed the second set to the trained quantum search, which identified the cars faster than any traditional algorithm in Google's data farms.
So it does require a “training” cycle, but that’s a small price to pay if I can simply upload a few images of myself and then search the entire internet for more pictures of me. Plus, with “big” search terms, crowdsourcing would complete the training almost immediately.
Researchers at Georgia Tech have modified Google Earth to integrate information obtained through automatic analysis of video camera footage, creating a somewhat real-time Google Earth showing the locations of cars and people.
They use motion capture data to help their animated humans move realistically, and were able to extrapolate cars’ motion throughout an entire stretch of road from just a few spotty camera angles.
From their video of an augmented virtual Earth, you can see if the pickup soccer game in the park is short a player, how traffic is on the highway, and how fast the wind is blowing the clouds across the sky.
Their work will be presented as a paper at the upcoming IEEE International Symposium on Mixed and Augmented Reality next month, but you can read a draft PDF here. See a video demonstration after the break.
We first covered this back in July but it’s back in the news again with more fancy reconstructions.
It took 500 computer processors 13 hours to match 150,000 photos of Rome’s landmarks, and eight more hours to construct a 3-D image of them. Venice involved 250,000 images, which took 27 hours to match and 38 hours to reconstruct. By contrast, using the algorithms on which Photosynth is based, it would have taken 500 processors at least a year to match 250,000 photos.
Not only is it pretty and fun, one could easily imagine the same algorithm being applied to a wide variety of uses: accident reconstruction, military planning, urban environment mapping and study.