Medical imaging company Sectra is demonstrating a new interactive touchscreen visualization table at RSNA 2011, merging automatic 3D segmentation algorithms with a new high-resolution touchscreen display.
With powerful algorithms, the Sectra Visualization Table identifies a bone or bone fragment from the user’s touch interaction and removes it from the image. Orthopaedic surgeons can thus gain an unobstructed overview of the joints, facilitating pre-operative planning. As with the rest of the functionality, the new segmentation tool is operated with the fingertips.
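Sectra hasn’t published the algorithm, but the touch-to-select behavior resembles classic seeded region growing: starting from the touched voxel, collect every connected voxel whose intensity falls in a bone-like range. A minimal 2D sketch of that idea (all names, values, and thresholds are illustrative, not Sectra’s implementation):

```python
from collections import deque

def region_grow(image, seed, lo, hi):
    """Seeded region growing: collect pixels connected to `seed`
    whose intensity lies in [lo, hi] (e.g. a bone-like CT range)."""
    rows, cols = len(image), len(image[0])
    selected = set()
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) in selected or not (0 <= r < rows and 0 <= c < cols):
            continue
        if not (lo <= image[r][c] <= hi):
            continue
        selected.add((r, c))
        # expand to 4-connected neighbours
        frontier.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return selected

# Toy "CT slice": 9s are bone-intensity, 1s are soft tissue.
slice_ = [
    [1, 1, 9, 9],
    [1, 9, 9, 1],
    [1, 1, 1, 9],  # the lone 9 is a disconnected fragment
]
# "Touching" (0, 2) selects only the connected fragment.
fragment = region_grow(slice_, seed=(0, 2), lo=8, hi=10)
```

The real system works on 3D volumes (6- or 26-connected voxels) and much smarter intensity models, but the touch-selects-a-connected-piece behavior is the same.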
I love technology like this, even though it’s reminiscent of Immersadesk displays. However, a nice visualization method solves only half the problem: convincing doctors to order a lengthy and expensive CT or MRI instead of a cheap, quick X-ray is the bigger hurdle.
An interesting visualization tool presented this week at IEEE InfoVis adds another data point for the “Death to the Rainbow Colormap” club, backing it up with an impressive increase in diagnostic accuracy from 39% to 91%.
“Our goal was to design a visual representation of the data that was as accurate and efficient for patient diagnosis as possible,” says lead author Michelle Borkin, a doctoral candidate at the Harvard School of Engineering and Applied Sciences (SEAS). “What we found is that the prettiest, most popular visualization is not always the most effective.”
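The headline jump reportedly came from changing more than just the colors, but the core complaint about rainbow maps is easy to see in isolation: a rainbow’s perceived brightness does not rise steadily with the data value, while a sequential map’s does. A quick stdlib illustration (my own sketch, not the authors’ method):

```python
import colorsys

def rainbow(t):
    """Classic rainbow map: sweep hue from blue (t=0) to red (t=1)."""
    return colorsys.hsv_to_rgb((1.0 - t) * 2 / 3, 1.0, 1.0)

def grayscale(t):
    """Perceptually monotonic alternative: brightness tracks the data."""
    return (t, t, t)

def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 luma weights

ts = [i / 20 for i in range(21)]
rainbow_lum = [luminance(rainbow(t)) for t in ts]
gray_lum = [luminance(grayscale(t)) for t in ts]

def monotonic(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

# Grayscale luminance rises monotonically with the data value;
# the rainbow's peaks at yellow and falls again toward red, so
# "brighter" no longer reliably means "higher" to the viewer.
```

That non-monotonic brightness is one reason readers misjudge rainbow-encoded values; perceptually uniform sequential maps avoid it by construction.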
Electronic Health Records are a topic near and dear to my heart (my wife is an RHIA coder), but we frequently find ourselves at odds over what data counts as “important”. While I believe there’s a small set of genuinely useful numbers, she (and most medical professionals) believes that more data is never a bad thing, which leads to unwieldy templates and huge amounts of data bloat. Over at Rxinformatics, they look at some of the new technology dealing with the incoming tsunami of raw health data.
For example, take a look at this sample H&P template; the template alone is 281 lines! Now imagine it populated with data, all before the provider even types a single letter. For a patient who presented to the ED with a CC of hypotension and an A/P of “dehydrated; gave IVF and D/Ced,” it seems silly for the progress note to read like a senior thesis. Of course, in an era where most progress notes don’t include enough information, this observation may seem absurd to some. It becomes pertinent, though, because there appears to be an inverse relationship: the more data an EHR imports, the less healthcare providers actually write.
St. Jude Medical has a press release out about a new “Visualization Tool” designed for medical use in electrophysiology (EP) labs.
The system includes a 56-inch HD monitor that can display up to eight video images simultaneously with four times the resolution seen in standard 1080p consumer monitors. The VantageView System offers exceptional image quality with greater detail than the monitors typically used in EP labs today. In addition to enhanced image quality, the VantageView System can be seamlessly integrated with the EP lab’s multiple diagnostic and treatment systems, allowing clinicians to customize screen displays and more easily view and control patient and procedure information.
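The release doesn’t give the panel’s pixel dimensions, but “four times the resolution of 1080p” works out naturally if each dimension is doubled. A quick sanity check of the arithmetic (my assumption, not St. Jude’s spec sheet):

```python
# Standard 1080p consumer panel
base_w, base_h = 1920, 1080
base_px = base_w * base_h        # 2,073,600 pixels

# "Four times the resolution" in total pixel count
quad_px = 4 * base_px            # 8,294,400 pixels

# Doubling each dimension yields exactly 4x the pixels,
# i.e. a 3840 x 2160 (4K/UHD-class) panel.
assert (2 * base_w) * (2 * base_h) == quad_px
```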
No doubt it’s impressive, and mounted in typical medical fashion (suspended from the ceiling to reduce floor obstruction). But I would love to see some software with it for data fusion (combining data from multiple sources into a registered overlay). Oh well, maybe in the next rev.
Researchers at Purdue have published research on using pulsed near-infrared lasers to create 3D scans of arterial structures, allowing detailed imaging without a single incision.
The laser is tuned to excite molecular “overtone” vibrations, at wavelengths that are not strongly absorbed by the blood. The pulses cause tissue to heat and expand locally, generating pressure waves at ultrasound frequencies that can be picked up with a device called a transducer.
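The mechanism described (light in, heat-driven expansion, ultrasound out) is the standard photoacoustic effect, where the initial pressure rise is proportional to the absorbed optical energy. A back-of-the-envelope sketch using the textbook relation, with illustrative numbers that are not from the Purdue paper:

```python
def initial_pressure(grueneisen, mu_a, fluence):
    """Photoacoustic initial pressure rise: p0 = Gamma * mu_a * F.
    Absorbed optical energy per volume (mu_a * F) is converted, via
    thermoelastic expansion (Grueneisen parameter Gamma), into the
    acoustic pressure the transducer ultimately detects."""
    return grueneisen * mu_a * fluence

# Illustrative values: Gamma ~ 0.2 for soft tissue,
# mu_a in 1/cm, laser fluence in mJ/cm^2.
p_target = initial_pressure(0.2, 10.0, 5.0)  # strongly absorbing tissue
p_blood = initial_pressure(0.2, 1.0, 5.0)    # weak absorber at this wavelength
# Image contrast comes from the difference in absorption: tissue that
# absorbs the tuned wavelength rings louder than the surrounding blood.
```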
CMIO has an article from two radiologists discussing the problems and benefits of modern imaging reconstruction systems. Previously, non-technical radiologists had to suffer through difficult 3D reconstruction tools to convert scans into usable 3D models suitable for analysis by experts. As they typically had neither the computer skills nor the time, the work was usually done by technologists under the watchful guidance of a radiologist.
But as the images have become more advanced, processing and reconstructing them demands greater skill and specialization. Reconstruction, argued Reuben Mezrich, MD, PhD, of the University of Maryland School of Medicine in Baltimore, is an iterative process that often requires a radiologist’s knowledge of anatomy and skill in interpretation to accurately reproduce the image.
However, increasingly the process is being done via completely automated algorithms for segmentation and surface reconstruction.
In a rebuttal, Mezrich expressed an additional concern: that radiologists would risk losing ground to other specialties if they cease to perform reconstructions. A technologist who renders standardized images could then pass the studies on to specialists who claim enough experience to diagnose the patients themselves.
“If it will be a technologist, or perhaps even the clinician who creates the 3D image, one might ask what the added value of the radiologist is in the interpretation,” Mezrich wrote. Just as reading ultrasound has moved into the hands of urologists, obstetricians and cardiologists, so would radiologists lose turf to other specialties in advanced visualization.
Personally, I favor automated reconstruction over human reconstruction: it’s less likely to be influenced by personal bias and more likely to pick up only what is actually in the data. Even given the arguments against relying too heavily on a computer, I think I’d trust it in this situation. What do you think?
Stanford’s medical school has a new toy from Anatomage that provides lifelike interactive visuals of a multitude of anatomical datasets.
The new virtual dissection table takes advantage of a century of technological advancements in imaging, such as X-rays, ultrasound and MRI, and combines them in a 7-foot by 2.5-foot screen. At Stanford, the table is being tested as a way to further enhance that age-old teaching method: the dissection of human cadavers.
Costing $60,000, it’s part of a new wave of technology that integrates VR, touchscreens, 3D visuals, high-resolution scan data, and more into a realistic educational tool. Beyond simply using it for education, Stanford is working on a “Searchable Digital Anatomical Library” to offer its extensive collection of medical scans to other institutions through the table.
A new press release from NVIDIA discusses an interesting project from the University College London Hospitals (UCLH) Heart Hospital and UK visual effects company Glassworks. Together they built a virtual heart simulator for transesophageal echocardiography (imaging the heart through the mouth and esophagus) training. Using the impressive power of NVIDIA Quadro GPUs and 3D Vision technology, they’re able to maintain realistic stereoscopic images running at a full 30 fps. In fact, it’s already an integral part of training at Duke University.
“Simulation technology has enabled us to take a quantum leap forward in our teaching,” said Dr. Madhav Swaminathan, MD, FASE, FAHA, of Duke University School of Medicine’s Division of Cardiothoracic Anesthesia. “This particular system essentially simulates the beating heart clearly. To explain how an ultrasound image is formed and how it correlates to anatomical features is extremely difficult. When you’re changing the image plane with a probe it’s hard to understand what parts of the heart you are seeing on the screen — because the heart is three dimensional, and you’re using 3D on a 180 degree plane. A simulator makes it possible to see side by side not only how an ultrasound image is generated, but what the cuts mean in a controlled, relaxed environment where you don’t have to worry about interfering with a patient’s clinical care or taking too much time. This virtual environment technology gives residents a jump start.”
An interesting note: this is done without CUDA. Currently the application uses only OpenGL and GLSL, meaning it should run on ATI cards as well. In the end, though, they state they’re investigating moving much of the system to CUDA to improve performance.
Get the full release after the break.