Description

Image segmentation is one of the most critical tasks in medical imaging: it identifies target segments (e.g., organs, tissues, lesions) in images for ease of analysis. Across nearly all online segmentation challenges, deep learning has shown great promise since the introduction of U-Net, a fully automated, end-to-end neural architecture designed for segmentation tasks. Recent months have also witnessed the wide success of nnU-Net ("no-new-net"), a framework derived directly from the U-Net architecture. However, training nnU-Net from scratch takes weeks to converge and suffers from unstable performance. To overcome these two limitations, transfer learning was applied to nnU-Net: instead of training from scratch, generic image representations learned from massive collections of images were transferred to specific target tasks. Although the transfer learning paradigm has yielded significant performance gains in many classification tasks, its effectiveness on segmentation tasks has yet to be sufficiently studied, especially for 3D medical image segmentation. In this thesis, nnU-Net was first pre-trained on large-scale chest CT scans (LUNA 2016), following the self-supervised learning approach introduced in Models Genesis. nnU-Net was then fine-tuned on various target segmentation tasks through transfer learning. Experiments on liver/liver-tumor and lung-tumor segmentation tasks demonstrate significantly improved and stabilized performance for fine-tuning compared with training nnU-Net from scratch. This performance gain is attributed to the scalable, generic, and robust image representation learned from the consistent and recurring anatomical structures embedded in medical images.
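The fine-tuning step described above rests on a simple mechanic: pre-trained weights are copied into the target network wherever the architectures agree, while task-specific layers (such as a segmentation head whose output channels depend on the target task's class count) are re-initialized and trained anew. A minimal sketch of that weight-transfer step is shown below; all parameter names and shapes here are invented for illustration and are not taken from the actual nnU-Net code.

```python
import numpy as np

def transfer_matching_weights(pretrained, target):
    """Copy parameters from a pre-trained state dict into the target
    model's state dict when both the name and the shape agree.
    Mismatched entries keep their fresh initialization and are
    learned during fine-tuning on the target task."""
    transferred = []
    for name, weights in pretrained.items():
        if name in target and target[name].shape == weights.shape:
            target[name] = weights
            transferred.append(name)
    return transferred

# Toy state dicts standing in for a 3D U-Net (hypothetical names/shapes).
pretrained = {
    "encoder.conv1.weight": np.ones((32, 1, 3, 3, 3)),
    "seg_head.weight": np.ones((2, 32, 1, 1, 1)),   # 2 classes upstream
}
target = {
    "encoder.conv1.weight": np.zeros((32, 1, 3, 3, 3)),
    "seg_head.weight": np.zeros((3, 32, 1, 1, 1)),  # 3 classes downstream
}

moved = transfer_matching_weights(pretrained, target)
print(moved)  # only the encoder weights carry over: ['encoder.conv1.weight']
```

The same pattern underlies fine-tuning in most deep learning frameworks (e.g., loading a checkpoint with non-strict key matching): shared layers inherit the pre-trained representation, and only the task-specific layers start from scratch.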

    Details

    Title
    • Pre-trained Models for nnUNet
    Date Created
    2021
    Resource Type
  • Text
    Note
    • Partial requirement for: M.S., Arizona State University, 2021
    • Field of study: Computer Science