From pixel to region to temporal volume: a robust motion processing framework for visually-guided navigation
1 online resource (262 pages) : PDF
University of North Carolina at Charlotte
The ability to view pre-operative CT colonoscopy images co-aligned with optical colonoscopy images from endoscopic procedures can provide useful information to the gastroenterologist and lead to improved polyp detection. Colonoscopy data presents significant challenges from an image processing perspective: colon deformation, insufficient visual cues, and temporary loss of features due to blurry images. In this dissertation, advanced mathematical tools and computer vision techniques are used to tackle these challenges, resulting in an automatic and robust tracking algorithm capable of processing relatively long sequences of colonoscopy images. There are three specific contributions. (1) Multi-scale optical flow is used to identify relative image displacements between consecutive optical colonoscopy images, and egomotion estimation based on the Focus of Expansion is used to recover camera motion parameters. Straight and curved phantoms were designed to quantitatively validate the accuracy of the method, and clinical colonoscopy sequences from multiple patients were used to qualitatively evaluate the algorithm's robustness. Phantom results showed that the error was less than 10 mm over the 288 mm traversed in tracking consecutive images. (2) A region-flow based method is used to measure the large visual motion between pairs of images separated by a blurry image sequence, and an incremental egomotion estimation algorithm is developed to maintain accuracy. Large camera motion is computed by subdividing the visual motion into a sequence of optical flow fields. The accuracy of the approach was statistically validated on phantom images by excluding sequences of images. In the straight phantom, after 48 frames were excluded, the error was less than 3 mm of 16 mm traveled; in the curved phantom, after 72 frames were excluded, the error was less than 4 mm of 23.88 mm traveled. Accuracy was also evaluated by visually inspecting co-aligned optical and virtual colonoscopy images.
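The Focus-of-Expansion idea behind contribution (1) can be illustrated with a minimal least-squares sketch: under pure camera translation, every flow vector lies on a line through its pixel and the FOE, so stacking one line constraint per vector gives an overdetermined linear system. This is an illustrative reconstruction under that assumption, not the dissertation's actual multi-scale implementation; the synthetic points and the simple radial flow field are invented for the demo.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares Focus of Expansion from sparse flow vectors.

    Each flow vector (vx, vy) at pixel (px, py) must lie on a line
    through the FOE (fx, fy):  vy*(fx - px) - vx*(fy - py) = 0,
    i.e.  vy*fx - vx*fy = vy*px - vx*py.  Stacking one such row per
    vector yields A @ foe = b, solved in the least-squares sense.
    """
    points = np.asarray(points, dtype=float)
    flows = np.asarray(flows, dtype=float)
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic forward motion: an expansion field radiating from a known FOE.
true_foe = np.array([160.0, 120.0])
pts = np.array([[10, 20], [300, 40], [50, 200], [280, 230], [100, 60]], float)
flo = 0.05 * (pts - true_foe)  # flow points away from the FOE
print(np.round(estimate_foe(pts, flo), 1))  # → [160. 120.]
```

In noise-free expansion fields like this one, the system recovers the FOE exactly; with real flow estimates, robust weighting or RANSAC would typically be layered on top.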
(3) Temporal volume flow improves on the region flow algorithm by comparing temporal volumes separated by blurry images and then selecting the best image pair for region flow computation. Results are demonstrated by comparing tracking results with and without temporal volume flow. Based on these new techniques, we have been able to continuously track over 4000 images of colonoscopy sequences comprising multiple colon segments and multiple blurry sequences.
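The subdivision step in contribution (2), where a large inter-frame motion is measured as a chain of small optical flow fields, can be sketched by composing dense displacement fields: each field maps frame i to frame i+1, and every start pixel is tracked through the chain. This is a simplified illustration (nearest-neighbor lookup, no occlusion handling), not the dissertation's region-flow method, and the uniform test fields are assumptions.

```python
import numpy as np

def compose_flows(flows):
    """Chain a sequence of dense flow fields into one total displacement.

    flows: list of (H, W, 2) arrays; flows[i][y, x] is the (dx, dy)
    motion of pixel (x, y) from frame i to frame i+1.  Each pixel is
    carried forward through every field, so a large motion is measured
    as a sum of small per-step flows.
    """
    h, w = flows[0].shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    px, py = xs.copy(), ys.copy()
    for f in flows:
        # Look up each tracked position in the current field
        # (nearest-neighbor sampling, clipped to the image bounds).
        ix = np.clip(np.round(px).astype(int), 0, w - 1)
        iy = np.clip(np.round(py).astype(int), 0, h - 1)
        px += f[iy, ix, 0]
        py += f[iy, ix, 1]
    return np.stack([px - xs, py - ys], axis=-1)

# Two uniform 3-pixel rightward shifts compose into a 6-pixel shift.
step = np.zeros((8, 8, 2))
step[..., 0] = 3.0
total = compose_flows([step, step])
print(total[4, 4])  # → [6. 0.]
```

A production version would use bilinear sampling and validity masks at the image borders, but the composition principle is the same.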
COLONOSCOPY TRACKING; EGOMOTION ESTIMATION; OPTICAL FLOW; REGION FLOW; TEMPORAL VOLUME FLOW; VISUALLY-GUIDED NAVIGATION
Subramanian, Kalpathi; Yoo, Terry; Lu, Aidong; Souvenir, Richard; Weldon, Thomas
Thesis (Ph.D.)--University of North Carolina at Charlotte, 2011.
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s). For additional information, see http://rightsstatements.org/page/InC/1.0/.
Copyright is held by the author unless otherwise indicated.