Grants and Contributions:

Title:
Computational Methods for Capturing and Analyzing Personalized Genomic and Medical Data
Agreement Number:
RGPIN
Agreement Value:
$210,000.00
Agreement Date:
May 10, 2017 -
Organization:
Natural Sciences and Engineering Research Council of Canada
Location:
Ontario, CA
Reference Number:
GC-2017-Q1-03534
Agreement Type:
Grant
Report Type:
Grants and Contributions
Additional Information:

Grant or Award spanning more than one fiscal year. (2017-2018 to 2022-2023)

Recipient's Legal Name:
Brudno, Michael (University of Toronto)
Program:
Discovery Grants Program - Individual
Program Purpose:

My research centers on the development of Computer Science methods to help solve problems in Biology and Medicine. While my research has applications in many areas of medicine and genomics, the unifying thread through all of my work is the development of novel computational methods, firmly within the domain of Computer Science, to solve these problems. Over the next five years I want to focus on the development of three computational approaches that can be combined for more accurate and rapid capture and analysis of medical data. In all of these cases the effort will be primarily on developing novel computational methodology and approaches, rather than deploying existing technologies in the medical setting, and many of the methods developed will be generalizable beyond the medical setting.

Aim 1 of my grant will focus on the development of mobile devices and HCI techniques for capturing data from a patient visit. The core interaction between a patient and their doctor involves the doctor making observations, taking notes (on computer or paper), and finally writing (or dictating) a report. This report serves as the main record of the full interaction with the patient and the myriad of signs and symptoms visually interrogated by the doctor (not all of which may be recorded in the notes, especially if not abnormal). We propose to develop a next generation of user interfaces, using wearable technology on the clinician (camera and microphone), combined with mobile devices (tablets), integrated into the clinical workflow, and capable of capturing the full visual and audio spectrum of the patient interaction.

In Aim 2 of the grant we will develop ML methodology to identify concepts from the captured data – audio, video, and text. We will utilize biomedical ontologies to improve the accuracy of these methods, training classifiers that exploit both proximity in the text and proximity in biomedical ontology space. We will work on integrating this approach into existing speech-to-text tools to improve audio processing, and will apply these approaches to audio recordings of patient exams to test their accuracy.
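The idea of combining textual proximity with ontology proximity can be sketched in a few lines. The ontology, concept names, and scoring weights below are hypothetical toy stand-ins (a real system would use a biomedical ontology such as HPO and learned models); the sketch only illustrates scoring a candidate concept by mixing text similarity with graph distance to a known context concept.

```python
from collections import deque

# Hypothetical toy ontology (edges = is-a links), standing in for a
# real biomedical ontology such as the Human Phenotype Ontology.
ONTOLOGY = {
    "phenotypic abnormality": ["abnormality of the eye", "abnormality of the ear"],
    "abnormality of the eye": ["cataract", "myopia"],
    "abnormality of the ear": ["hearing impairment"],
}

def ontology_distance(a, b):
    """Shortest-path distance between two concepts, treating is-a links as undirected."""
    graph = {}
    for parent, children in ONTOLOGY.items():
        for child in children:
            graph.setdefault(parent, set()).add(child)
            graph.setdefault(child, set()).add(parent)
    seen, queue = {a}, deque([(a, 0)])
    while queue:  # breadth-first search from a toward b
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")

def text_similarity(phrase, concept):
    """Token-overlap (Jaccard) similarity between a text phrase and a concept label."""
    p, c = set(phrase.lower().split()), set(concept.lower().split())
    return len(p & c) / len(p | c)

def score_concept(phrase, concept, context_concept, alpha=0.5):
    """Blend textual proximity with ontology proximity to an already-identified concept."""
    proximity = 1.0 / (1.0 + ontology_distance(concept, context_concept))
    return alpha * text_similarity(phrase, concept) + (1 - alpha) * proximity
```

For example, given the phrase "hearing impairment" and the context concept "abnormality of the ear", the concept "hearing impairment" scores higher than "cataract", since it is both textually closer and nearer in the ontology graph. A trained model would replace these hand-set similarity and weighting choices.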

Finally, in Aim 3 of the proposal we will work on data visualization of recorded patient data, with the aim of providing the clinician with an easy-to-understand summary of a patient's symptoms over time, as well as allowing quick comparison between a patient and others with similar disorders (for differential diagnosis) or the same disorder (to understand disease variability and prognosis).