Grants and Contributions:

Title:
Novel display, visualization and interaction paradigms for enhanced understanding of complex, multidimensional medical images
Agreement Number:
RGPIN
Agreement Value:
$125,000.00
Agreement Date:
May 10, 2017 -
Organization:
Natural Sciences and Engineering Research Council of Canada
Location:
Quebec, CA
Reference Number:
GC-2017-Q1-03455
Agreement Type:
Grant
Report Type:
Grants and Contributions
Additional Information:

Grant or Award spanning more than one fiscal year. (2017-2018 to 2022-2023)

Recipient's Legal Name:
Kersten, Marta (Concordia University)
Program:
Discovery Grants Program - Individual
Program Purpose:

Over the last decade, significant advances have been made in the fields of display technologies, computer graphics and interaction devices. This progress has opened up new possibilities for visualizing and interacting with data and has necessitated new paradigms that allow for intuitive understanding of images rendered with state-of-the-art technologies. This research program entails the development of new visualization techniques for complex multidimensional data and novel methods for viewing, displaying and interacting with 3D data in augmented and virtual reality environments.
The main source of data used to test the developed techniques will come from the field of image-guided surgery. In image-guided surgery, a navigation system guides the surgeon by displaying the position of real surgical tools with respect to a virtual 3D model of the patient's anatomy. This is not unlike a car's GPS (global positioning system), which relates the car's position in real space to a virtual representation of it on a map. A recent survey of image-guided surgery systems showed a lack of focus on how best to visualize, display and interact with anatomical patient data. As a result, new technologies have had very limited impact on clinical practice. For example, clinicians continue to rely on 2D planar reconstructions of 3D anatomy even though these do not translate well to interventional tasks that require a 3D understanding of the anatomy. Rather than adopting projective methods and new display technologies, clinicians continue to rely on computer monitors, which require them to look away from the patient for guidance, thereby disrupting the surgical workflow. Furthermore, surgeons do not interact directly with image-guided surgery systems but require technicians in the operating room to adjust visualization or view parameters. Similarly, diagnosis and treatment planning rely on 2D planar images rather than novel display techniques and volumetric visualization methods.
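The navigation principle described above amounts to chaining coordinate transforms so that a tool tracked in physical space can be drawn against the patient's 3D image. A minimal Python sketch of that idea follows (an illustration added here, not part of the original record); the transform values and frame names are hypothetical examples.

    import numpy as np

    # Illustrative only: hypothetical rigid transforms, analogous to the
    # tool-to-tracker pose reported by a tracking camera and the
    # tracker-to-image registration computed before surgery.
    T_tracker_from_tool = np.array([[1, 0, 0,  10.0],
                                    [0, 1, 0, -25.0],
                                    [0, 0, 1, 150.0],
                                    [0, 0, 0,   1.0]])
    T_image_from_tracker = np.array([[0, -1, 0,  80.0],
                                     [1,  0, 0,  40.0],
                                     [0,  0, 1, -30.0],
                                     [0,  0, 0,   1.0]])

    # Tool tip expressed in the tool's own frame (homogeneous coordinates).
    tip_in_tool = np.array([0.0, 0.0, 120.0, 1.0])

    # Chain the transforms: tool frame -> tracker frame -> image frame,
    # giving the position at which to render the tool over the 3D model.
    tip_in_image = T_image_from_tracker @ T_tracker_from_tool @ tip_in_tool
    print("Tool tip in image coordinates (mm):", tip_in_image[:3])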
To address these limitations, the purpose of the proposed research is to develop novel methods that take advantage of the most current graphics cards, display hardware and interaction technologies to enhance medical image interpretation for improved diagnosis, planning and surgical treatment. Although focused on the clinical domain, the algorithms and solutions developed through this research program will be applicable to other domains such as art and museology, heritage, gaming, and aviation, and more generally to data visualization, human-computer interaction and human-factors engineering. This program will not only provide new ways of visualizing and interacting with complex multidimensional data but also investigate the capabilities of the human visual system in understanding computer-generated images.