Radiotherapy is now a common approach to treating cancers in the Head-and-Neck (HaN) region. To ensure that healthy organs are exposed to as little tumor-killing radiation as possible, they must first be segmented. Automated segmentation of such organs is desirable because this manual process is time consuming and subject to inter-annotator variation (i.e. between radiation oncologists). Such variation leads to differences in radiation dosage, and the planning time required can delay treatment.
This post shall compare and critique different deep learning approaches for automated segmentation of such Organs-At-Risk (OARs). The areas of model development we shall focus on are:
This post is the second in a series on writing efficient training code in Tensorflow 2.x for 3D medical image segmentation. Previously, we saw how one can extract sub-volumes from 3D CT volumes using the tf.data.Dataset API. Here, the focus is on writing custom training loops, in particular using the tf.function decorator in an eager execution context.
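As a minimal illustration of the sub-volume idea, here is the kind of patch-extraction logic one would typically wrap with tf.data.Dataset.from_generator in the full pipeline (shapes and names below are hypothetical, and NumPy stands in for the Tensorflow ops covered in the earlier post):

```python
import numpy as np

def extract_subvolumes(volume, patch_shape, stride):
    """Yield strided 3D patches from a CT volume.

    In the full pipeline this generator would be wrapped with
    tf.data.Dataset.from_generator so patches stream to the GPU.
    """
    D, H, W = volume.shape
    pd, ph, pw = patch_shape
    for z in range(0, D - pd + 1, stride):
        for y in range(0, H - ph + 1, stride):
            for x in range(0, W - pw + 1, stride):
                yield volume[z:z + pd, y:y + ph, x:x + pw]

# A dummy 64^3 "CT volume" split into 32^3 patches with stride 32
volume = np.zeros((64, 64, 64), dtype=np.float32)
patches = list(extract_subvolumes(volume, (32, 32, 32), 32))
# 2 x 2 x 2 = 8 patches
```

Streaming patches this way avoids materialising every sub-volume in memory at once, which matters for full-resolution CT scans.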
One real-world application of medical image segmentation is organ segmentation in intensity-modulated radiotherapy (IMRT). Here, it is required that the photon/proton beams accurately target only the tumor and not the organs, to avoid the adverse effects of radiotherapy. Organ segmentation…
In the last two posts, we discussed design choices for a Tensorflow dataloader using the tf.data.Dataset API and the tf.function decorator to reduce time and memory demands. This post shall look into how to use the inbuilt profiler in Tensorflow, primarily to see whether our deep learning models are sitting idle waiting for data. Readers with advanced knowledge of Tensorflow can refer directly to this post and skip the first two.
The profiling feature shall once again be discussed in the context of automated segmentation of organs in a CT scan of the head and neck (HaN) area. …
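Before reaching for the full Tensorflow profiler, the same question — is the model sitting idle waiting for data? — can be answered with a plain-Python timing sketch (dummy loader and timings below are made up for illustration; the profiler's trace viewer gives this breakdown per op):

```python
import time

def time_pipeline(batches, step_fn):
    """Split wall-clock time between fetching data and running the model step.

    If data_time dominates, the input pipeline is the bottleneck and the
    model is sitting idle waiting for data.
    """
    data_time, step_time = 0.0, 0.0
    it = iter(batches)
    while True:
        t0 = time.perf_counter()
        try:
            batch = next(it)
        except StopIteration:
            break
        data_time += time.perf_counter() - t0
        t0 = time.perf_counter()
        step_fn(batch)
        step_time += time.perf_counter() - t0
    return data_time, step_time

def slow_batches():
    for _ in range(3):
        time.sleep(0.02)  # simulate disk / preprocessing latency
        yield None

# A "slow" loader paired with a trivially fast step
data_t, step_t = time_pipeline(slow_batches(), lambda b: None)
# Here data_t >> step_t: the input pipeline is the bottleneck
```

This crude check cannot tell you *why* the loader is slow; for that, the per-op traces from the Tensorflow profiler are the right tool.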
Machine learning algorithms are designed so that they can “learn” from their mistakes and “update” themselves using the training data we provide. But how do they quantify these mistakes? This is done via “loss functions”, which give an algorithm a sense of how erroneous its predictions are compared to the ground truth. Choosing an appropriate loss function is important, as it affects the algorithm’s ability to produce optimal results as quickly as possible.
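For segmentation, one common choice is the soft Dice loss, which directly measures overlap between prediction and ground truth. A NumPy sketch of the idea (actual training code would use Tensorflow ops so gradients can flow):

```python
import numpy as np

def soft_dice_loss(y_true, y_pred, eps=1e-7):
    """1 - Dice coefficient: penalises poor overlap between the predicted
    probability map and the ground-truth mask."""
    intersection = np.sum(y_true * y_pred)
    denom = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

mask = np.array([0., 1., 1., 0.])
perfect = np.array([0., 1., 1., 0.])
poor = np.array([1., 0., 0., 1.])
soft_dice_loss(mask, perfect)  # ~0.0 (perfect overlap)
soft_dice_loss(mask, poor)     # ~1.0 (no overlap)
```

Unlike plain cross-entropy, this overlap-based loss is less sensitive to the class imbalance typical of organ masks, where foreground voxels are rare.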
As a student or researcher in the field of radiation medicine, have you ever received a medical dataset from a hospital's PACS system and wondered how to go about understanding all those DICOM files? With MicroDicom, a lightweight viewer, one can easily inspect the various DICOM tags and view CT/MR/US scans.
The viewer contains three main windows — DICOM Browser, Image Viewer and the DICOM Tags Pane (from left to right)
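Under the hood, each DICOM tag is a (group, element) pair of hexadecimal numbers, e.g. (0x0010, 0x0010) is PatientName. A toy sketch of how a viewer maps tags to human-readable names (the three-entry dictionary here is a hand-picked illustration; a real viewer, or a library like pydicom, ships the full standard data dictionary):

```python
# A tiny subset of the DICOM data dictionary, for illustration only
DICOM_DICT = {
    (0x0008, 0x0060): "Modality",
    (0x0010, 0x0010): "PatientName",
    (0x0028, 0x0030): "PixelSpacing",
}

def tag_name(group, element):
    """Look up the human-readable name of a DICOM (group, element) tag."""
    return DICOM_DICT.get((group, element), "Unknown")

tag_name(0x0008, 0x0060)  # "Modality" -- e.g. 'CT', 'MR' or 'US'
```

Knowing this tag structure makes the DICOM Tags Pane in MicroDicom much easier to read: every row is simply one (group, element) pair with its name and value.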
Originally published on Playment.io
Myriad efforts have been made over the last 10 years in algorithmic improvements and dataset creation for semantic segmentation tasks. Of late, there have been rapid gains in this field, a subset of visual scene understanding, due mainly to deep learning methodologies. But deep learning techniques have an Achilles’ heel: they consume vast amounts of annotated data. Here we review some widely used, open urban semantic segmentation datasets for self-driving car applications.
The task of Semantic Segmentation is to annotate every pixel of an image with an object class. These classes could…
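Concretely, a semantic segmentation label is just a map assigning one class ID to every pixel. A small sketch (the class names and 2×3 label map are invented for illustration):

```python
import numpy as np

# Hypothetical class IDs for an urban driving scene
CLASSES = {0: "road", 1: "car", 2: "pedestrian"}

# A semantic label map assigns one class ID to every pixel (here 2x3)
label_map = np.array([[0, 0, 1],
                      [0, 2, 1]])

# Per-class pixel counts -- the kind of statistic such dataset reviews report
counts = {CLASSES[c]: int((label_map == c).sum()) for c in CLASSES}
# {'road': 3, 'car': 2, 'pedestrian': 1}
```

Real datasets store exactly such integer label maps (typically as PNGs) alongside the RGB images, with a published mapping from IDs to class names.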
I'm a PhD Candidate at Leiden University Medical Centre. My research focuses on using deep learning for contour propagation of Organs at Risk in CT images.