Configurable fiducial-based device to image registration

__NOTOC__
<gallery>
Image:PW-SLC2013.png|Projects List
</gallery>

==Key Investigators==
* Junichi Tokuda, Brigham and Women's Hospital
* Nobuhiko Hata, Brigham and Women's Hospital

==Project Description==

<div style="width: 27%; float: left; padding-right: 3%;">
<h3>Objective</h3>
* Background
** Any device for guiding needle insertion under image guidance has to be registered to the image coordinate system. Fiducial markers are widely used to localize the mechanical structure of the device on the image and to find the spatial relationship between the physical space and the image space. However, detection of the fiducial markers often requires some user interaction, e.g. pointing at a fiducial marker on the image, or using an active tracking method rather than a simple marker that creates a bright spot on the image.
* Objective
** The objective of this project is to develop an image processing method for general-purpose fiducial detection. Specifically, the method can:
*** automatically detect the fiducial markers attached to the mechanical structure without user interaction
*** automatically find the correspondence between the points detected on the image and the points in the physical space (defined as part of the mechanical design)
*** compute the linear transformation that defines the location and orientation of the device in the image coordinate system (a rough sketch of this step follows the list)
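
Once correspondences are established, the last step reduces to paired-point rigid registration. Below is a minimal sketch in Python/NumPy of the SVD-based least-squares estimate of the rotation and translation; it is an illustration only, not the module's code, and the fiducial coordinates are made up.

<pre>
# Minimal sketch: least-squares rigid transform from paired points (NumPy).
# Not the module's implementation; the coordinates below are invented.
import numpy as np

def rigid_transform(model_pts, image_pts):
    """Return R (3x3) and t (3,) such that R @ model + t ~= image."""
    model_pts = np.asarray(model_pts, float)
    image_pts = np.asarray(image_pts, float)
    cm, ci = model_pts.mean(axis=0), image_pts.mean(axis=0)   # centroids
    H = (model_pts - cm).T @ (image_pts - ci)                 # covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ci - R @ cm
    return R, t

# Fabricated example (mm): four fiducials, device rotated 90 deg about z and shifted.
model = [[0, 0, 0], [40, 0, 0], [0, 40, 0], [0, 0, 40]]
image = [[10, 5, 2], [10, 45, 2], [-30, 5, 2], [10, 5, 42]]
R, t = rigid_transform(model, image)
print(np.round(R, 3), np.round(t, 3))   # rotation about z, t = [10, 5, 2]
</pre>

With exact correspondences this recovers the device pose in closed form; the point of the project is to establish those correspondences automatically first.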
</div>

<div style="width: 27%; float: left; padding-right: 3%;">
<h3>Approach, Plan</h3>
 
* Approach
** We will use tube-shaped fiducial markers that can be automatically segmented with a Hessian filter (a rough illustration of the idea follows this list).
* Deliverable
** CLI module.
** The source code is available from: https://github.com/SNRLab/LineMarkerRegistration
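
For background on the Hessian filter mentioned under Approach: inside a bright tubular marker, two eigenvalues of the image Hessian are strongly negative (across the tube) and the third is close to zero (along it), which gives a per-voxel tube-likeness score. The snippet below is a rough NumPy/SciPy illustration of that idea on a synthetic volume; it is not the module's actual pipeline, and the scale and phantom are arbitrary.

<pre>
# Rough illustration of a Hessian-based tube (line) filter on a 3D volume.
# Not the module's code; sigma and the synthetic phantom are arbitrary choices.
import numpy as np
from scipy.ndimage import gaussian_filter

def tubeness(volume, sigma=1.5):
    """Score bright tube-like voxels from the eigenvalues of the image Hessian."""
    v = volume.astype(float)
    H = np.empty(v.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            H[..., i, j] = gaussian_filter(v, sigma, order=order)  # d2v/dxi dxj at scale sigma
    lam = np.linalg.eigvalsh(H)                      # ascending: lam0 <= lam1 <= lam2
    lam0, lam1 = lam[..., 0], lam[..., 1]
    score = np.sqrt(np.maximum(lam0 * lam1, 0.0))    # large when both are strongly negative
    score[(lam0 > 0) | (lam1 > 0)] = 0.0             # suppress voxels that are not bright-tube-like
    return score

# Synthetic test: a bright tube along z in a 40x40x40 volume.
vol = np.zeros((40, 40, 40))
vol[18:22, 18:22, :] = 100.0
s = tubeness(vol)
print(s[20, 20, 20] > s[5, 5, 20])                   # True: tube voxels score higher
</pre>

An ITK-based CLI implementation would typically use the equivalent Hessian filters from ITK; the sketch above only conveys the eigenvalue reasoning.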
  
 
</div>

<div style="width: 27%; float: left; padding-right: 3%;">
<h3>Progress</h3>
* Implemented the following features:
** Line-to-line distance metric to match the model to the detected markers (a sketch of such a metric follows this list).
** CSV parser to read the marker configuration file.
** Tested with Z-frame data from the MRI-guided prostate biopsy program at Brigham and Women's Hospital.
** [https://www.slicer.org/wiki/Documentation/Nightly/Modules/LineMarkerRegistration Slicer Documentation page]
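
As an illustration of the first two items, the hypothetical sketch below parses a simple marker configuration (the CSV columns and values are invented; the module's real format is described on the Slicer documentation page linked above) and evaluates a line-to-line distance that could serve as a matching cost between model lines and detected lines.

<pre>
# Hypothetical sketch: marker configuration parsing and a line-to-line distance.
# The CSV layout and numbers are invented for illustration only.
import csv, io
import numpy as np

def parse_markers(text):
    """Read rows of the form: name, px, py, pz, dx, dy, dz (line = point + direction)."""
    markers = {}
    for row in csv.reader(io.StringIO(text)):
        if not row or row[0].startswith('#'):
            continue
        p = np.array([float(x) for x in row[1:4]])
        d = np.array([float(x) for x in row[4:7]])
        markers[row[0].strip()] = (p, d / np.linalg.norm(d))
    return markers

def line_to_line_distance(p1, d1, p2, d2):
    """Shortest distance between two infinite 3D lines (point p, unit direction d)."""
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-9:                        # parallel lines
        w = p2 - p1
        return np.linalg.norm(w - np.dot(w, d1) * d1)
    return abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)

config = """# name, px, py, pz, dx, dy, dz  (made-up example)
M1, 0, 0, 0, 0, 0, 1
M2, 30, 0, 0, 0, 0, 1
M3, 0, 30, 0, 1, 0, 1
"""
markers = parse_markers(config)
(p, d), (q, e) = markers["M1"], markers["M2"]
print(line_to_line_distance(p, d, q, e))                # 30.0 (parallel lines 30 mm apart)
</pre>

A matching step could then pair each detected line with the model line that minimizes this distance after a candidate transform is applied.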
</div>

</div>
