I haven’t heard much from the Indigo Renderer crew lately, but I just noticed that they’ve released Indigo 3.0. Boasting CUDA & OpenCL support, network rendering, integrated queue management, and much more, it looks more like a professional-grade tool than ever before.
Many changes have been made in the core that improve rendering performance and image quality: Indigo 3.0 produces better images than 2.x for the same number of samples per pixel.
Indigo 3.0 also introduces optional camera vignetting and faster HDR environment mapping.
Looks like they’ve put some significant work into their bump-mapping code as well, as evidenced by the image to the right. You can see more of what they’ve added in the Video Preview below.
A neat little proof-of-concept/research project from Toshiya Hachisuka is “Parthenon”, a GPU-accelerated global-illumination renderer. It boasts a nice collection of features, including direct illumination, indirect illumination, and a hybrid approach combining CPU and GPU algorithms.
His website contains PDFs of his SIGGRAPH presentations on the work and his GPU Gems 2 contribution, as well as the executable for you to download and play with. It requires DirectX 9 or later and a video card that supports floating-point buffers, Vertex Shader 2.0, and Pixel Shader 2.0.
The Thea Renderer is now available in a free beta version for Windows, Linux, and Mac OS X. The beta has watermarks and a resolution limit, but is otherwise completely functional.
Finally, we are here! Thea Render has reached a level that we can safely move forward and publish our work. Our new site has been deployed although there will be improvements and changes in the next weeks for better integration. We want to thank all for your patience and support and we hope that you are going to like Thea and Plugins. We will be waiting for your feedback in the development forums and of course, we are going to answer any question you may have.
They offer plugins for 3ds Max, Blender, Cinema 4D, SketchUp, and Softimage. Try it out, and if you like it you can buy the renderer for only 161 €.
It was back in January when we first brought you news of the Arion hybrid GPU+CPU renderer from the makers of fryrender, and while it looked impressive, it was sadly not for sale. That’s no longer the case: you can now buy Arion in several configurations directly from the website. So far the loadout looks like this:
Single-GPU License: 795 EUR
Multi-GPU License: 995 EUR
Extra Slave Licenses: 245 EUR (Includes NetWarrior queue manager)
Special Educational discounts of 125 EUR (1-year license, noncommercial work)
Special bundles of FryRender + Arion
Hit their site for all the details, along with some impressive gallery and example shots.
BettinaTizzy sent in a link to a video demonstration of a system called ‘Unlimited Detail’, which claims to offer real-time, interactive rendering of point cloud data. The video runs through the typical problems with the common approaches: polygonal geometry (low tessellation leads to blocky visuals), ray-tracing (very, very slow), and voxels (they never really say what’s wrong with voxels, to be honest). They claim to have a new system, which they equate to a ‘3D Search Algorithm’.
Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine. It is best explained like this: if you had a Word document and you went to the SEARCH tool and typed in a word like MONEY, the search tool quickly finds every place that word appears in the document. Google and Yahoo are also search engines that go looking for things very quickly. Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small.

The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen; it doesn’t touch any unneeded points. All it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen. It has a few tricky things to work out, like: which objects are closest to the camera, which objects cover each other, and how big an object should be as it gets further back. But all of this is done by a new sort of method that we call MASS CONNECTED PROCESSING. Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end.
Sounds very much like a ray-tracing algorithm to me. I do take issue with their ‘unlimited detail’ claim, as they talk about visualizing billions of points simultaneously and interactively. Nothing is unlimited, as eventually you will run out of memory.
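To make that comparison concrete, here is a minimal sketch of what a per-pixel point-cloud “search” might look like. This is entirely my own illustration, not their code: Unlimited Detail has not published its data structures, so the octree layout and front-to-back traversal here are assumptions.

```cpp
// Purely illustrative sketch -- the octree layout and traversal are my own
// guesses at the idea: store the point cloud hierarchically and, for each
// pixel's ray, descend front-to-back until the first populated region is found.
#include <algorithm>
#include <array>
#include <cstdio>
#include <limits>
#include <memory>
#include <optional>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };  // dir assumed normalized
struct AABB { Vec3 min, max; };

// Standard slab test: entry distance along the ray if the box is hit.
// (IEEE infinities handle axis-parallel rays when a dir component is 0.)
std::optional<float> intersect(const Ray& r, const AABB& b) {
    float t0 = 0.f, t1 = std::numeric_limits<float>::max();
    const float ro[3] = { r.origin.x, r.origin.y, r.origin.z };
    const float rd[3] = { r.dir.x, r.dir.y, r.dir.z };
    const float lo[3] = { b.min.x, b.min.y, b.min.z };
    const float hi[3] = { b.max.x, b.max.y, b.max.z };
    for (int i = 0; i < 3; ++i) {
        float tn = (lo[i] - ro[i]) / rd[i];
        float tf = (hi[i] - ro[i]) / rd[i];
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return std::nullopt;
    }
    return t0;
}

struct OctreeNode {
    AABB bounds;
    std::array<std::unique_ptr<OctreeNode>, 8> children;  // all empty at a leaf
    std::vector<Vec3> points;                             // populated at leaves
    bool isLeaf() const { return !children[0]; }          // simplification
};

// The "search": visit children ordered by entry distance, so the first
// populated leaf reached is the nearest one; unneeded points are never touched.
const Vec3* firstPointAlong(const Ray& ray, const OctreeNode& node) {
    if (!intersect(ray, node.bounds)) return nullptr;
    if (node.isLeaf())  // a real system would pick the nearest point in the leaf
        return node.points.empty() ? nullptr : &node.points.front();
    std::vector<std::pair<float, const OctreeNode*>> hits;
    for (const auto& c : node.children)
        if (c)
            if (auto t = intersect(ray, c->bounds))
                hits.push_back({ *t, c.get() });
    std::sort(hits.begin(), hits.end(),
              [](const auto& a, const auto& b) { return a.first < b.first; });
    for (const auto& h : hits)
        if (const Vec3* p = firstPointAlong(ray, *h.second)) return p;
    return nullptr;
}

int main() {
    // Tiny two-level tree: a root with one populated leaf child.
    OctreeNode root;
    root.bounds = { { -1, -1, 0 }, { 1, 1, 4 } };
    auto leaf = std::make_unique<OctreeNode>();
    leaf->bounds = { { -1, -1, 1 }, { 0, 0, 2 } };
    leaf->points.push_back({ -0.5f, -0.5f, 1.5f });
    root.children[0] = std::move(leaf);

    // One "pixel ray" looking down +z.
    Ray ray{ { -0.5f, -0.5f, -1.f }, { 0.f, 0.f, 1.f } };
    if (const Vec3* hit = firstPointAlong(ray, root))
        std::printf("pixel resolved to point (%.1f, %.1f, %.1f)\n",
                    hit->x, hit->y, hit->z);
}
```

The appeal of a scheme like this is that per-frame cost scales with screen resolution rather than total point count, which is presumably what the “unlimited” claim is getting at. Memory for the dataset, of course, is still very much limited.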
With all of that said, however, the demo is impressive. They claim that the cancellation of Larrabee will hurt their release, but that their algorithm is primarily software-based anyway, so it should be fine. Watch the video below, and post your thoughts in the comments. Hype, or a vision of the future?
Update: After a discussion with a colleague, I was reminded of a paper presented at SIGGRAPH 2000 on a tool called ‘WarpEngine’, which used dedicated ASICs to combine & warp pre-rendered images into a simulated 3D scene. The source images could be of any detail level (even photographs), and with enough of them you could compose fully interactive 3D scenes. This looks eerily familiar, and probably suffers from the same limitation:
A massive input dataset is required (you need several images of the scene’s objects rendered from multiple viewpoints), but a simple octree storage system makes it trivial to navigate.
With modern hardware, this seems very possible to do directly on the CPU. Read the “WarpEngine” paper here.
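For a sense of what the warp step involves, here is a minimal sketch: each source pixel carries a depth, so it can be lifted to a world-space point and reprojected into a new camera, with a z-buffer resolving occlusion. The names and the axis-aligned (rotation-free) camera model below are my own simplifications; the paper’s actual ASIC pipeline handles far more, such as hole filling.

```cpp
// My own simplification of the core 3D image-warp idea, not the paper's code.
#include <cstdint>
#include <cstdio>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

struct SourceImage {
    int width, height;
    std::vector<float>    depth;  // per-pixel depth in the source camera
    std::vector<uint32_t> color;  // packed RGBA
};

// Hypothetical pinhole camera; axis-aligned (no rotation) to keep it short.
struct Pinhole {
    float f;        // focal length in pixels
    float cx, cy;   // principal point
    Vec3  position; // camera center in world space

    Vec3 unproject(float px, float py, float depth) const {
        return { (px - cx) * depth / f + position.x,
                 (py - cy) * depth / f + position.y,
                 depth + position.z };
    }
    bool project(const Vec3& w, float& px, float& py, float& depth) const {
        depth = w.z - position.z;
        if (depth <= 0.f) return false;  // behind the camera
        px = (w.x - position.x) * f / depth + cx;
        py = (w.y - position.y) * f / depth + cy;
        return true;
    }
};

// Forward-warp every source pixel into the target view, keeping the nearest
// sample per target pixel (a simple z-buffer resolves occlusion).
void warp(const SourceImage& src, const Pinhole& srcCam, const Pinhole& dstCam,
          int dstW, int dstH,
          std::vector<uint32_t>& dstColor, std::vector<float>& dstDepth) {
    dstColor.assign(dstW * dstH, 0);
    dstDepth.assign(dstW * dstH, std::numeric_limits<float>::max());
    for (int y = 0; y < src.height; ++y)
        for (int x = 0; x < src.width; ++x) {
            const Vec3 world = srcCam.unproject((float)x, (float)y,
                                                src.depth[y * src.width + x]);
            float px, py, d;
            if (!dstCam.project(world, px, py, d)) continue;
            const int ix = (int)px, iy = (int)py;
            if (ix < 0 || ix >= dstW || iy < 0 || iy >= dstH) continue;
            const int idx = iy * dstW + ix;
            if (d < dstDepth[idx]) {  // nearest sample wins
                dstDepth[idx] = d;
                dstColor[idx] = src.color[y * src.width + x];
            }
        }
}

int main() {
    // One white source pixel at depth 2, viewed from a camera shifted right.
    SourceImage src{ 1, 1, { 2.f }, { 0xFFFFFFFFu } };
    Pinhole srcCam{ 100.f, 0.f, 0.f, { 0.f, 0.f, 0.f } };
    Pinhole dstCam{ 100.f, 32.f, 32.f, { 0.5f, 0.f, 0.f } };
    std::vector<uint32_t> color;
    std::vector<float> depth;
    warp(src, srcCam, dstCam, 64, 64, color, depth);

    int filled = 0;
    for (uint32_t c : color) filled += (c != 0);
    std::printf("%d target pixel(s) received a warped sample\n", filled);
}
```

With multiple source images stored in an octree by viewpoint, a renderer can pull the nearest few images for the current camera and warp them together, which is why the input dataset balloons so quickly.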
Felix, a renderer developed in partnership with Next Limit Technologies (creators of the Maxwell Renderer), aims to bring render-farm power to everyone via the cloud. Similar in concept to vSwarm, you buy “credits” and then submit your job to be rendered on 40 to 160 cores in the cloud.
Unfortunately, it currently supports only still renders, not animation, although that feature is coming soon. The results are impressive, though: it renders the Benchwell scene in a mere 55 seconds.
Glare Technologies has just released a new version of their Indigo Renderer, version 2.2. In this version they’ve doubled the speed of virtually all scene renders and increased speed by 10x when using environment maps. Tone-mapping, aperture diffraction, and subsurface scattering all got some attention as well, leading to improved performance and better results. They’ve also introduced a new Material Editor:
The all-new Indigo Material Editor, introduced with Indigo 2.2, allows fully featured creation and editing of materials in a graphical environment. All material functions are available in the Indigo Material Editor, with Indigo Shader Language able to control any attribute. The Indigo material editor also allows direct uploads to and downloads from the online Indigo Material Database.
However, in a somewhat odd announcement, they’ve decided to raise the price to ‘compete’ with other renderers.
In addition to the 2.2 release announcement, Glare Technologies is also announcing that from the 1st of February, the cost price of Indigo licenses will rise to compete with other unbiased renderers on the market. A single full license will cost 595€ and node license will be 195€, with options for bulk discounts.
I’ve heard of lowering prices to compete, but never raising prices. See the full press release after the break.
A new render engine is on the block, this time using GPU acceleration to bring you interactive physically-based visuals in real-time.
Octane Render is the world’s first GPU-based, unbiased, physically based renderer. As opposed to the handful of processor cores available in CPUs, the GPU typically has hundreds of cores for parallel processing, making it the best resource for rendering in your computer. Yet, to date, no other software makes use of it in the way that Octane does. With even a single mid-range GPU, you can typically expect to see a 10x to 15x speed increase over a typical unbiased, CPU-based renderer.
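The 10x-to-15x claim rests on rendering being embarrassingly parallel: each pixel’s Monte Carlo estimate is independent of every other pixel’s, so more cores translate almost directly into more speed. Here is a rough CPU illustration of that per-pixel parallelism; it is not Octane’s code, and shade() is a hypothetical stand-in for a real path-tracing kernel.

```cpp
// Minimal illustration: pixels are independent, so rows can be split across
// threads with no coordination. shade() is a hypothetical stand-in for one
// unbiased radiance estimate at pixel (x, y).
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

uint32_t shade(int x, int y) { return (uint32_t)((x ^ y) * 2654435761u); }

int main() {
    const int W = 1024, H = 768;
    std::vector<uint32_t> framebuffer(W * H);
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t)
        workers.emplace_back([&, t] {
            // Interleaved rows: thread t renders rows t, t+n, t+2n, ...
            for (int y = (int)t; y < H; y += (int)n)
                for (int x = 0; x < W; ++x)
                    framebuffer[y * W + x] = shade(x, y);
        });
    for (auto& w : workers) w.join();

    std::printf("rendered %dx%d using %u threads\n", W, H, n);
}
```

A GPU runs that inner per-pixel function across hundreds of hardware lanes at once instead of a handful of threads, which is where the order-of-magnitude gap comes from.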
A limited-feature demo will be released soon, with the 1.0 version available in February for 199 €.
FurryBall, a GPU-accelerated render engine for Autodesk Maya, boasts real-time simulation of lighting effects and physics inside the native Maya environment. From their website:
Have you ever dreamed of a 3D renderer implemented directly into Maya 3D that has the capacity to light, rotate, or change parameters in real-time in scenes that contain textures, bump maps, soft shadows, reflections, refractions, and dynamic hair? Your dreams have come true – introducing FurryBall!
Demonstration videos on their website do a good job of showing off the capabilities, and a 30-day trial (registration required) is available as well. The full version sells for $490 to $2,200, depending on the feature set.
See a commercial rendered entirely via FurryBall on a GeForce GTX 285, at 1 minute per frame, after the break.