Review Article - Volume 2 - Issue 2

Review on Computational Models for Prostate Segmentation from Ultrasound Medical Images

J Ramesh1*; K Thangavel2; R Manavalan3

1Department of Computer Applications, K.S.Rangasamy College of Arts and Science, Tiruchengode.
2Prof. and Head, Department of Computer Science, Periyar University, Salem.
3Department of Computer Science, Arignar Anna Government Arts College, Villupuram.

Received Date : Jan 23, 2022
Accepted Date : Feb 15, 2022
Published Date: Mar 09, 2022
Copyright:© J Ramesh 2022

*Corresponding Author: J Ramesh, KSR Kalvi Nagar, Thokkavadi, Namakkal District, India. Tel: 9865971033.
Email: bj_ramesh_14@yahoo.co.in
DOI: https://doi.org/10.55920/2771-019X/1096

Abstract

Segmentation plays an important role in obtaining the region of interest from prostate ultrasound (US) images. Segmentation of the prostate ultrasound image is a difficult task, and the challenges differ significantly from one imaging modality to another. Segmentation partitions an image into multiple regions for easier evaluation. Designing automatic segmentation to obtain the required portion of the image is challenging because of noise interference and artifacts, such as shadows, that occur in prostate US images. This article reviews the methods developed so far for prostate segmentation from ultrasound images. The main objective of this review is to examine the key similarities and differences among the various methods, highlighting their strengths and weaknesses to assist in the choice of an appropriate prostate ultrasound image segmentation methodology. The prime task of analysts working in the field of image processing and analysis is to develop techniques for effective and accurate image segmentation. This paper focuses on the different techniques that are broadly used for prostate image segmentation.

Keywords: Ultrasound Prostate Image; Segmentation methods; Region of Interest (RoI); Region Growing; Edge Detection; Similarity Detection.

Introduction

The crucial task of image processing is the extraction of the required information from the input image without affecting other features. Medical imaging equipment generates de-noised images of specific organs, and these images are given as input to segmentation. The purpose of image segmentation is to identify the exact region of interest in medical images for further analysis. In image segmentation, the image is subdivided into multiple segments, each of which conveys some kind of information such as color, texture, or intensity, and the boundaries of each region are isolated. To differentiate between various regions, the segmentation process assigns a label to each pixel of the input image. Images are segmented based on properties such as intensity, color and texture. Segmentation techniques are selected based on the nature of the problem domain. Some of the image segmentation techniques used by researchers are edge detection, region-based, threshold, histogram and watershed transformation methods.

The edge-based segmentation method divides the image based on its edges, whereas in region-based methods the background is separated from the image using a threshold. The segmentation result plays a vital role in medical applications for diagnosing cancers such as brain, prostate and lung cancer. It is a challenging task for researchers to identify a universal method for prostate ultrasound image segmentation. In the segmentation process, the input image is divided into foreground and background, where the foreground corresponds to the region of interest and the background to the remaining part of the image. Many image segmentation methods have been developed by researchers and scientists. In this paper, image segmentation techniques used to extract the prostate region from TRUS images are extensively reviewed. The rest of the paper is organized as follows: Section 2 reviews the segmentation techniques used for the extraction of the prostate from ultrasound images; Section 3 explains the performance measures adopted for the evaluation of segmentation methods; Section 4 describes the issues in prostate segmentation from ultrasound images; and Section 5 presents the conclusion of this survey with possible future directions.

Survey of Image Segmentation Methods

The main objective of image segmentation is to recognize the exact regions and evaluate them for diagnosis. Several existing procedures are exploited for image segmentation. These segmentation methods can be categorized as region-based or edge-based, and each has its particular significance. Every method can be applied to different ultrasound images to perform the essential segmentation. The various prostate image segmentation methods are summarized as follows. Deformable contour models were introduced early as tools for image segmentation by [1]; they support both edge extraction and region detection. In 1992, a Feed-Forward Neural Network method based on Artificial Neural Networks (ANN) was introduced by [2] for the segmentation of the prostate from transrectal ultrasound images. Three neural network architectures were constructed, trained using a small set of training images segmented by an expert, and then validated using test images. [3] described an edge detection technique based on nonlinear Laplace filtering for contour determination of the prostate in ultrasound images in 1994. An edge intensity image is constructed by combining this method with information about edge location and strength. The final outline is built by selecting the boundary edges and linking them together. The volume computed by the proposed method was within about 6% of the actual volume.
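
As an illustration of the nonlinear-Laplace/LoG style of edge detection discussed above, the following sketch (not the authors' implementation) computes a Laplacian-of-Gaussian response and keeps only zero crossings that are backed by a strong local gradient, which helps suppress speckle-induced edges. The file name and parameter values are hypothetical.

import numpy as np
from scipy import ndimage
from skimage import io

img = io.imread("trus_slice.png", as_gray=True).astype(float)  # hypothetical input slice

# Laplacian of Gaussian: smooth first, then take the Laplacian
log = ndimage.gaussian_laplace(img, sigma=3.0)

# Candidate edges are zero crossings of the LoG response
zero_cross = np.zeros(img.shape, dtype=bool)
zero_cross[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
zero_cross[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])

# Keep only zero crossings supported by a strong gradient ("edge strength")
smooth = ndimage.gaussian_filter(img, sigma=3.0)
grad = np.hypot(ndimage.sobel(smooth, axis=0), ndimage.sobel(smooth, axis=1))
edges = zero_cross & (grad > np.percentile(grad, 90))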

In 1996, [4] presented an algorithm that depends on clustering each pixel of an ultrasound image. It is also a pixel classifier based on four texture measures associated with each pixel of the image. The number of clusters is not predictable in this method for a particular image, which leads to disconnected regions of the prostate. It extracts more edges with high speed and low cost. A non-linear Laplace-filter-based pre-processing algorithm for the detection of grey-level transitions at multiple scales of resolution, to improve contour detection of objects in ultrasound medical images, was proposed in 1994 by [5]. The filter size is adapted to the local variation of the image, and multiple scales of resolution were applied to a pair of one-dimensional signals to reveal the influence of large filter sizes on the number of detected edges as well as to improve the detection of less pronounced edges.
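
A minimal sketch of texture-based pixel clustering in the spirit of [4]: four simple per-pixel texture measures (local mean, variance, gradient magnitude and range) are fed to k-means. The window size, number of clusters and file name are illustrative assumptions, not the settings of the original paper.

import numpy as np
from scipy import ndimage
from skimage import io
from sklearn.cluster import KMeans

def texture_features(img, size=9):
    """Four simple per-pixel texture measures computed over a local window."""
    mean = ndimage.uniform_filter(img, size)
    var = np.clip(ndimage.uniform_filter(img ** 2, size) - mean ** 2, 0, None)
    grad = ndimage.uniform_filter(
        np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1)), size)
    rng = ndimage.maximum_filter(img, size) - ndimage.minimum_filter(img, size)
    return np.stack([mean, var, grad, rng], axis=-1)

img = io.imread("trus_slice.png", as_gray=True).astype(float)  # hypothetical input
feats = texture_features(img).reshape(-1, 4)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
label_img = labels.reshape(img.shape)  # each pixel assigned to one texture cluster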

In 1998, [6] proposed Discrete Dynamic Contours for prostate image segmentation. Active contour models or snake models handle topological changes in contours, such as merging or splitting, poorly during segmentation. These difficulties can be overcome with care, albeit with considerable computational overhead. The success of prostate segmentation by deformable models using snakes, also called "Discrete Dynamic Contours", was largely dependent on careful initialization of the contour in a position near the desired boundary. The computational method produced a prostate volume 9.4% higher than the manual process. [7] presented a method for extracting the contour of the prostate. The Gaussian kernel of the Marr-Hildreth operator, or Laplacian of Gaussian (LoG), acts as a low-pass filter to eliminate high-frequency noise. LoG operators of various sizes were applied to the ultrasound images and enhanced results were obtained. The proposed method extracts edges and regions qualitatively. [8] used deformable contours with initialization and modeling of prostate medical images for segmentation. The method is based on a one-dimensional dyadic wavelet transform as a multiscale contour parameterization technique to confine the shape of the prostate model. The active contour model detects the prostate boundary, where constraints were imposed on the model's deformation according to a predefined model shape. This deformable method produced a Volume Difference of 14.26%, a Contour Difference of 3.76% and an Absolute Volume Difference of 11.94%. [9] evaluated the performance of an optimized back-propagation network in predicting the outcome of a cancer diagnosis from transrectal ultrasonography (TRUS) information in 1999. This model achieved a higher positive predictive value and negative predictive value than the logistic regression method. The method was evaluated using the Positive Predictive Value (PPV) and Negative Predictive Value (NPV), yielding 81.82% and 96.95% respectively. [10] proposed an initialization and a discrete dynamic contour for a semi-automatic segmentation algorithm for the prostate from 2D ultrasound images. The outline of the prostate is estimated by selecting four points around the prostate and using cubic interpolation functions and shape information. The estimated contour is deformed automatically to better fit the image. Although this method segments a wide range of prostate images, it requires manual intervention for selecting the four initial points. The computational results revealed an average Mean Difference of 0.5 pixels, a Mean Absolute Difference of 4.4 pixels and a Maximum Difference of 19.5 pixels. [11] proposed an algorithm for automatic prostate edge detection with manual editing. The sticks algorithm is used to enhance contrast and reduce speckle, followed by an anisotropic diffusion filter for smoothing the image, and some basic prior knowledge of the prostate, such as shape and echo pattern, is used to detect the most probable edges which indicate the prostate shape. However, it required a manual linking procedure to integrate the information of the detected edges. The computational results showed that the proposed method achieves a Hausdorff Distance of 1.8±1.0 mm (mean ± standard deviation) and a Mean Absolute Distance of 0.7±0.4 mm.
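
The snake/discrete-dynamic-contour family of methods reviewed above refines an initial contour placed near the gland. A rough sketch using scikit-image's generic active contour (not the DDC formulation of [6] or [10]) is shown below; the initial ellipse centre, axes and the weights are hypothetical and would normally come from user-selected points.

import numpy as np
from skimage import io, filters
from skimage.segmentation import active_contour

img = io.imread("trus_slice.png", as_gray=True)      # hypothetical input slice
smoothed = filters.gaussian(img, sigma=3)            # reduce speckle before evolving the snake

# Initial contour: an ellipse roughly covering the gland (centre and axes are assumptions)
t = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([240 + 90 * np.sin(t), 260 + 120 * np.cos(t)])  # (row, col) points

# Evolve the contour towards strong image edges
snake = active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)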

In 2001, [12] proposed a semi-automatic contour extraction scheme for the prostate from TRUS images using the wavelet transform and active contour models, or snakes. The image is decomposed into edge maps at different resolutions using the wavelet transform. Seed points in the coarsest edge map are obtained by examining the maxima along radial lines from a manually selected anchor point, and these points are used to initialize a snake. The snake then evolves across the edge maps at different resolutions and finally converges to the contour of the prostate, working on the edge map at scale 2^2. In 2001, [13] introduced the Gabor filter method for prostate image segmentation. In this method, Gabor features of an image are reconstructed to be invariant to the rotation of the ultrasound probe, followed by a hierarchical deformation strategy. Using a multi-resolution technique, the model concentrates on the similarity of different Gabor features at different deformation stages. An adaptive deformable model based on the attribute vector uses a Gabor filter bank at multiple scales and multiple orientations to characterize the prostate boundaries. The proposed method yielded an average boundary error ranging from 0.9 to 1.1 pixels, while the average correspondence error ranged from 0.15 to 0.78.
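
A Gabor filter bank of the kind used by [13] to build attribute vectors can be sketched as follows; the frequencies and number of orientations are illustrative choices, not the published settings.

import numpy as np
from scipy import ndimage
from skimage import io
from skimage.filters import gabor_kernel

def gabor_attribute_vector(img, frequencies=(0.05, 0.15), n_orient=6):
    """Stack of Gabor magnitude responses over several scales and orientations."""
    responses = []
    for f in frequencies:
        for k in range(n_orient):
            kernel = gabor_kernel(f, theta=k * np.pi / n_orient)
            real = ndimage.convolve(img, np.real(kernel), mode="reflect")
            imag = ndimage.convolve(img, np.imag(kernel), mode="reflect")
            responses.append(np.hypot(real, imag))   # magnitude of the complex response
    return np.stack(responses, axis=-1)

img = io.imread("trus_slice.png", as_gray=True).astype(float)  # hypothetical input
attrs = gabor_attribute_vector(img)   # per-pixel multi-scale, multi-orientation features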

In 2003, [14] proposed a multistage algorithm, known as a scanning algorithm, for prostate boundary detection, which starts by enhancing the contrast of the image with sticks filtering followed by smoothing the image with a Gaussian kernel. Knowledge-based rules are applied to find a seed point inside the prostate, which is used to remove the false edges. The remaining false edges are removed by applying a morphological opening operator. Then, the seed point is used to scan the image in radial directions to find the prostate boundary. It detects the boundary of the prostate successfully and takes only 5 minutes. [15] presented a method for segmentation that starts with placing four points manually on the boundary of a selected slice; the boundary is refined until it fits the actual prostate boundary using the Discrete Dynamic Contour. The remaining slices are segmented by iteratively propagating the result to the next slice and repeating the refinement. Results showed the prostate volume with mean and mean absolute errors of -5.4±4.4% and 6.5±2.1%.

A statistical shape model to segment the prostate from transrectal ultrasound images was introduced by [16]. A Gabor filter bank is used at multiple scales and multiple orientations to characterize the prostate boundaries. A spline interpolation is used to determine the initial contour based on four manually defined initial points. Then the discrete dynamic contour refines the initial contour based on the approximation coefficients and the wavelet coefficients obtained using the dyadic wavelet transform, and the best contour is chosen using a selection rule. The experimental results showed an average distance ranging from 2.3 to 4.6 pixels and an overlap area error ranging from 2.76% to 5.66%.

In 2004, [17] introduced a semi-automatic segmentation algorithm based on the dyadic wavelet transform and discrete dynamic contours. The initial contour is determined by four user-defined initial points, and the discrete dynamic contour refines the initial contour based on the approximation coefficients and the wavelet coefficients generated using the dyadic wavelet transform. A selection rule is then used to choose the best contour. The method yields a Mean Absolute Deviation of 3.78 pixels and a Maximum Deviation of 19.05 pixels. In 2004, [18] presented a deformable model with prior knowledge about the prostate shape for model initialization and for constraining model evolution. The prostate shape was modeled using deformable superellipses. The algorithm was evaluated by the Hausdorff Distance Error and Mean Absolute Distance Error, producing 1.32±0.62 mm and 0.54±0.20 mm respectively. [19] used a top-down approach based on the snake model for prostate segmentation from ultrasound images. Median filtering is used as an effective tool for removing speckle noise. The logical combination of the Laplacian of Gaussian (LoG) and the Sobel operator was good at finding useful image gradients. The parameters of the snake were dynamically optimized, and the shape information of the prostate was also used as strong guidance during the deformation process. [20] proposed a semi-automated segmentation of prostate ultrasound images in 2004, applying an anisotropic diffusion filter to reduce speckle and using the Instantaneous Coefficient of Variation (ICOV) to enhance the images for edge detection. Segmentation is accomplished through a parametric active contour model in a polar coordinate system. The prostate boundary is approximated by detecting a primary contour with an elliptical model, followed by a primary contour optimization using an area-weighted mean-difference binary flow geometric snake model. The proposed method yields an average root mean square error of 1.16 with a standard deviation of 0.41. [21] proposed a semi-automatic segmentation algorithm in 2004: a fast semi-automatic prostate contouring method developed using model-based initialization and an efficient Discrete Dynamic Contour (DDC) for boundary refinement. Four points on the prostate boundary are identified by scaling and shaping a prostate model, and the final prostate contour is refined with a DDC.

[22] proposed the Level Set Method Incorporating Region and Boundary Statistical Information for prostate image segmentation in 2004. This method integrates image-region and boundary statistical information instead of the spatial image gradient information used in the conventional method. It gives a global view of the boundary information within the image and is well adapted to situations where edges are weak or overlap and images are noisy. When evaluated on medical images from ultrasound, CT and X-ray modalities, the results were found to be reliable and efficient.
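
A region-statistics-driven level set in the spirit of [22] (which avoids relying on gradients) can be approximated with the morphological Chan-Vese implementation in scikit-image; this is a generic sketch, not the authors' formulation, and the iteration count, smoothing factor and input file are assumptions.

from skimage import io, img_as_float
from skimage.segmentation import morphological_chan_vese, checkerboard_level_set

img = img_as_float(io.imread("trus_slice.png", as_gray=True))   # hypothetical input

# Region-based curve evolution: the contour separates two regions with different mean intensity
init = checkerboard_level_set(img.shape, 6)
mask = morphological_chan_vese(img, 150, init_level_set=init, smoothing=3)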

The hybrid level set method was developed by [23] for prostate segmentation from ultrasound images by incorporating shape constraints into a region-based curve evolution process. It alternates between two processes: shape Model Estimation (ME) and Curve Evolution (CE). An implicit parametric model derived from manually outlined training data was used to encode prior shape information, using which the ME computes the maximum a posteriori estimate of the model parameters. The estimated shape is used to guide the CE step, which provides a new model initialization for the ME step. When the curve locks onto the specific prostate shape, the process stops automatically.

[24] developed a method for the automatic segmentation of trans-abdominal ultrasound images of the prostate. The contours are enhanced without changing the information in the image by using adaptive morphological and median filtering to detect and smooth out the noise-containing regions. The algorithm was evaluated using distance-based and area-based metrics: the mean distance was 3.2 pixels with a standard deviation of 2.7 pixels, and the average surface coverage index was 93%. In [25], morphological transformations and region-based thresholding are applied to remove speckle noise, specific speckles are removed, and feature-based measures are computed using the Grey-Level Co-occurrence Matrix (GLCM). A Kohonen clustering network is employed to identify prostate pixels using spatial information as well as GLCM measures. A fully connected prostate contour is formed by processing the clustered image.
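
Grey-Level Co-occurrence Matrix features such as those fed to the Kohonen clustering network above can be computed with scikit-image as sketched below; the distances, angles and properties are common defaults and only illustrative (older scikit-image releases spell the functions greycomatrix/greycoprops).

import numpy as np
from skimage import img_as_ubyte, io
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch):
    """Contrast, homogeneity, energy and correlation of a local grey-level co-occurrence matrix."""
    p = img_as_ubyte(patch)
    glcm = graycomatrix(p, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

img = io.imread("trus_slice.png", as_gray=True)       # hypothetical input
feats = glcm_features(img[100:132, 100:132])          # features of one 32x32 patch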

In [26], a segmentation procedure consisting of four main stages was presented. In the first stage, a locally adaptive contrast enhancement method is used to generate a finely contrasted image. In the second stage, the image is thresholded to extract an area containing the prostate; morphological operators are applied to obtain a point inside the prostate area, and a Kalman estimator is employed to distinguish the boundary from irrelevant parts caused by shadow, generating a coarsely segmented prostate image. In the third stage, dilation and erosion operators are applied to the image to extract the outer and inner boundaries, and fuzzy membership functions describing regional and grey-level information are employed to selectively enhance the contrast within the prostate region. In the final stage, the prostate boundary is extracted using strong edges obtained from the selectively enhanced image and information from the coarse estimation. [27] applied slice-based 3D prostate segmentation using a continuity constraint in 2005. It is composed of three steps: end-point filtering, contour re-initialization and contour deformation. First, in the cross-sectional plane, a continuity constraint is applied to the endpoints of the prostate boundaries. Secondly, in each slice the endpoints are inserted as an initial contour to obtain a new contour, and finally the surface of the prostate is obtained across all slices. It was evaluated using average distance and standard deviation; the segmented results produced an average distance of 2.79 mm and a standard deviation of 1.94 mm.

[28] proposed a semi-automatic Discrete Dynamic Contour (DDC) model that combines a multi-resolution model refinement procedure with domain knowledge of the image class for prostate segmentation. A domain-knowledge-based Fuzzy Inference System (FIS) and a set of adaptive region-based operators were used to enhance the edges of interest and to govern the model refinement using a DDC model. An automatic vertex relocation process embedded in the algorithm relocates deviated contour points back onto the actual prostate boundary without the need for user interaction after initialization. The computational method yields a success rate of 98%, a mean deviation of 2.69 pixels and a maximum deviation of 10.26 pixels.

[29] offered two-dimensional (2D) Active Shape Models (ASM) for semi-automatic segmentation of the prostate from ultrasound images. Minimum description length landmark placement for ASM construction and specific values for the constraints and image search were found to be optimal. The method produced distance-based error values of MD = 0.12±0.45 mm, MAD = 1.09±0.49 mm and MAXD = 7.27±2.32 mm, and volume-based error values of PVD = 0.22±4.58% and PAVD = 3.28±3.16%. An image warping algorithm with an edge detector was introduced by [30] for the segmentation of the prostate from B-mode TRUS images. Image warping makes the prostate shape elliptical, and the edge detector measures points along the prostate boundary to find the best elliptical fit. The segmentation result is obtained by applying a reverse warping algorithm to the elliptical fit. The proposed method was faster than manual segmentation; it yields a Mean Absolute Difference (MAD) of 0.68±0.18 mm and a Maximum Difference (MAXD) of 2.25±0.56 mm.
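
The warp-and-fit idea of [30] ultimately reduces to fitting an ellipse to candidate boundary points; a generic least-squares ellipse fit with scikit-image is sketched below. The point file is hypothetical and the fit only illustrates the elliptical-model step, not the warping itself.

import numpy as np
from skimage.measure import EllipseModel

# N x 2 array of candidate boundary points (row, col), e.g. from an edge detector (hypothetical file)
points = np.loadtxt("boundary_points.txt")

model = EllipseModel()
if model.estimate(points):
    xc, yc, a, b, theta = model.params       # centre, semi-axes and orientation of the best fit
    residuals = model.residuals(points)      # distance of each point from the fitted ellipse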

A method for extracting and analyzing spectral features from TRUS images for prostate tissue characterization was introduced by [31] in 2006. Gabor filters use the frequency- and spatial-domain features of the image to achieve accurate Region of Interest (ROI) identification. For each ROI, spectral feature sets and geometrical feature sets are constructed. A classifier-based feature selection algorithm, CLONALG, an optimization technique based on the clonal selection principle of the Artificial Immune System (AIS), is used to select an optimal subset from the extracted features, and a Support Vector Machine is used for classification. With Power Spectrum Density (PSD) features, it yields results ranging from 72.2% to 93.75%.

Deformable models have been effective for semi-automatic prostate segmentation but are not suitable for fully automatic segmentation because of the need to initialize seed or control points. An automatic level set prostate segmentation, in which a classification method was employed to locate the approximate location of the prostate and initiate the proposed elliptical level set contour, was developed by [32] in 2006. The deformations of the level sets are guided by a velocity function derived from the TRUS prostate image histogram. The Spherical Harmonics method was proposed by [33] in 2006 for the segmentation of the prostate image. This method has two phases: model building and a Bayesian framework for segmentation. Model building is used to model the shape of the prostate, and shape information is extracted. The method yields a mean absolute distance error of 1.26±0.41 mm and an overlap of 83.5±4.2.

[34] proposed a level set framework for the segmentation of prostate images using shape and intensity priors. It has three phases: shape prior extraction, intensity prior extraction and segmentation. In the first phase, the accurate shape information of the images is extracted, and probabilistic intensity models are utilized to identify the intensity information. When the automatic segmentation method was compared with manual segmentation, it extracted the region of interest of the prostate image accurately, with an average Correct Segmentation Rate (CSR) of 0.82 and an Incorrect Segmentation Rate (ISR) of 0.19; the standard deviations of the CSR and ISR were 0.05 and 0.08 respectively. [35] proposed an approach that modifies the images without affecting their anatomic content, enabling effective segmentation by a relatively simple process. The performance of this method was tested in a series of in silico and in vivo experiments. The authors reported that the proposed method obtained a low Normalized Mean Square Error (NMSE) of 0.17.

A graph-theory-based spectral clustering segmentation algorithm that does not require any function design, optimization or initial contour on the boundary was introduced by Samar [36] in 2007. This method needs no manual interaction. The algorithm also produced good results when compared to images segmented by an expert radiologist, obtaining excellent gland segmentation results with a 93% average overlap area.
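
The graph-based spectral clustering idea can be illustrated with scikit-learn, which builds a pixel-adjacency graph weighted by intensity similarity and partitions it. This generic sketch is not the cited algorithm; the resized resolution, number of clusters and file name are assumptions made to keep the eigendecomposition tractable.

import numpy as np
from skimage import io, transform
from sklearn.feature_extraction.image import img_to_graph
from sklearn.cluster import spectral_clustering

img = io.imread("trus_slice.png", as_gray=True)               # hypothetical input
small = transform.resize(img, (64, 64), anti_aliasing=True)   # keep the graph small

graph = img_to_graph(small)                                    # 4-connected pixel graph
graph.data = np.exp(-graph.data ** 2 / (2.0 * graph.data.std() ** 2))  # intensity affinity

labels = spectral_clustering(graph, n_clusters=3, assign_labels="discretize", random_state=0)
label_img = labels.reshape(small.shape)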

A Neuro-Fuzzy classification (NEFCLASS) tool to classify prostate cancer was introduced by [37]. The tool has rich features such as batch learning, automatic cross-validation, automatic determination of the rule-base size, and handling of missing values to increase its interpretability. The performance of the tool was tested with medical data obtained from real prostate cancer and Benign Prostatic Hyperplasia (BPH) cases, since the symptoms of the two are very similar and crucial to differentiate. The results showed that the classifier performs well for the diagnosis of patients with prostate cancer or BPH. To increase the contrast of the ultrasound prostate image, the intensity values of the original images were adjusted using a median filter, followed by a Pulse-Coupled Neural Network (PCNN) segmentation algorithm used to detect the boundary of the image. Combining noise reduction and segmentation eliminates the PCNN's sensitivity to the setting of the various PCNN parameters, whose optimal selection is very difficult. This method was proposed by [38].

In [39], an energy-based method was proposed for the segmentation of ultrasound prostate images using active contour modeling guided by a dot-pattern textural energy map. Impulsive noise and speckle are reduced with median filtering and a top-hat transform. Features are then extracted from the filtered images using a non-linear dot-pattern select operator. An elastic template shape model that incorporates a priori knowledge of the average geometric shape of the prostate boundaries, together with the energy derived from the dot-pattern features of the image, is used to search for the optimal prostate contour. Experimental results yielded an average overlap area error of 4.6% and a distance of 18 pixels.
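
The median-filter plus top-hat preprocessing mentioned above can be sketched with SciPy as follows; the filter sizes and file name are illustrative, and the exact way the top-hat output is used differs between the cited papers.

from scipy import ndimage
from skimage import io

img = io.imread("trus_slice.png", as_gray=True).astype(float)   # hypothetical input

despeckled = ndimage.median_filter(img, size=5)            # suppress impulsive noise and speckle
bright_detail = ndimage.white_tophat(despeckled, size=15)  # small bright structures only
smoothed = despeckled - bright_detail                      # flatten speckle-like bright detail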

[40] presented an agent-based approach for image segmentation using opposition-based reinforcement learning. It was used to find the appropriate local values and segment the object. The agent works on an image and its manually segmented version, and the quality of the segmented image determines the feedback from the environment. The agent is provided with a scalar reinforcement signal as a reward or punishment and uses it to explore or exploit the solution space. The obtained values were used as valuable knowledge to fill the Q-matrix. The same author developed another algorithm using a reinforcement learning scheme to find the appropriate local values for sub-images and to extract the prostate in 2008. The acquired knowledge was stored in the Q-matrix and used on new input images to extract a coarse version of the prostate. The proposed method gives a mean error of 8.6% with a standard deviation of 3.1%.

Deformable models have been effective for semi-automatic prostate segmentation but are not suitable for fully automatic segmentation because of the need to initialize seed or control points. An automatic level set prostate segmentation, in which a classification method was employed to locate the approximate location of the prostate and initiate the proposed elliptical level set contour, was developed by [41]. The deformations of the level sets are guided by a velocity function derived from the TRUS prostate image histogram. [42] proposed the histogram equalization method for prostate image segmentation. This method increases the contrast between the dark and bright regions. Since the prostate is darker than the surrounding region of the image, after equalization the prostate region becomes much darker than its surroundings and can be extracted and used for further processing. The proposed method acquired 96% sensitivity and 95.9% specificity.
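
The histogram equalization step of [42] can be sketched as below; the dark-region threshold is a hypothetical illustration of the idea that the equalized prostate region is darker than its surroundings.

from skimage import io, exposure

img = io.imread("trus_slice.png", as_gray=True)            # hypothetical input

equalized = exposure.equalize_hist(img)                    # global histogram equalization
clahe = exposure.equalize_adapthist(img, clip_limit=0.02)  # contrast-limited variant (optional)

dark_mask = equalized < 0.35   # hypothetical threshold: keep the darker (prostate-like) region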

In 2007, the Modified Discrete Dynamic Contour (MDDC) method was proposed by [43] for prostate image segmentation. In the MDDC method, different masses are applied to different contour points based on the distance between the two endpoints, and a small force is added to the total of the internal, external and damping forces. This speeds up the process, provides good smoothness of the contour and helps the contour to identify the weak but real boundary of the prostate. The experimental results achieve an MD of -1.22 pixels, an MAD of 2.99 pixels, an MAXD of 14.16 pixels, a sensitivity of 96.4% and an accuracy of 93.04%. [45] presented a method that involves edge-preserving noise reduction and smoothing as a preprocessing stage and then segments the prostate. Speckle reduction was achieved using a stick filter, and a top-hat transform was implemented for smoothing. A feed-forward neural network and local binary patterns are used together to find a point inside the prostate object. In the final stage, the boundary of the prostate is extracted from the inside point using an active contour algorithm. Experimental results demonstrated that the algorithm extracted the prostate boundary with less MSE relative to the boundary manually provided by physicians, producing a low MSE value with good boundary detection. [46] presented a computationally efficient method for the segmentation of the prostate in TRUS images. The method relies on a variational formulation based on a deformable super-ellipse and a region energy based on the assumption of a Rayleigh distribution. The implicit representation of a deformable super-ellipse was applied to the energy to be minimized, which yielded a super-ellipse evolution able to accurately segment the prostate and surrounding tissues while handling boundary gaps on the contour. It detects the edge more successfully than the manual process and obtained a standard deviation of 0.0058%.

A method utilizing a priori shapes estimated from partial contours for segmenting the prostate was introduced by [47]. It extracts the prostate boundary from 2-D TRUS images automatically, without user interaction for shape correction in shadow areas. During the segmentation process, missing boundaries in shadow areas are estimated using a partial active shape model, which takes partial contours as input and returns a complete shape estimate. An optimal search is performed under this shape guidance by a discrete deformable model to minimize the energy functional for image segmentation, which is achieved efficiently using dynamic programming. The segmentation is executed in a multi-resolution fashion from coarse to fine for robustness and computational efficiency. The experimental results of the proposed method show an average mean absolute distance of 1.79±0.95 mm and a standard deviation of 3.29±3.4 mm.

An intelligent scheme, employing a combination of fuzzy logic, PCNN, wavelets and rough sets, for analyzing prostate ultrasound images to diagnose prostate cancer was presented by [48]. An algorithm based on type-II fuzzy sets was used to enhance the contrast of the image, followed by PCNN-based segmentation to identify the region of interest and to detect the boundary of the prostate pattern. Wavelet features are extracted and normalized, and then a rough set analysis is applied to discover the dependency between the attributes and to generate a set of reducts consisting of a minimal number of attributes. Finally, a rough set classifier was designed to discriminate the different regions of interest and determine whether they contain cancer or not. Experimental results on various images showed that the overall classification accuracy was high compared with other intelligent techniques, including decision trees, discriminant analysis, rough neural networks, fuzzy ARTMAP and neural networks. It obtained an accuracy of 87.2%.

A Modified Fuzzy C-Means algorithm was introduced by [49] for prostate image segmentation in 2009. This method is based on the Fuzzy C-Means algorithm, with the cluster-center update step modified. The region of interest is extracted efficiently and accurately, producing an accuracy of 93.50%. Ultrasonograms can be used as a quantitative tool in clinical medicine to characterize the health of tissue. J. A. Noble [50] reviewed the field and found that ultrasound imaging and image analysis were tightly coupled, influenced by factors such as more open software-based ultrasound system architectures, increased computational power, and advances in imaging transducer design; the reviewed methods were reported to obtain better edge results than manual segmentation.
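
A minimal Fuzzy C-Means on pixel intensities, in the spirit of (but not identical to) the modified FCM of [49]; the number of clusters, fuzzifier m and file name are assumptions.

import numpy as np
from skimage import io

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard FCM on a 1-D feature vector x (e.g. pixel intensities)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                           # memberships sum to 1 for every pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)      # fuzzily weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        new_u = 1.0 / d ** (2.0 / (m - 1))
        new_u /= new_u.sum(axis=0)               # normalized membership update
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u

img = io.imread("trus_slice.png", as_gray=True).astype(float)   # hypothetical input
centers, u = fuzzy_c_means(img.ravel())
labels = u.argmax(axis=0).reshape(img.shape)     # hard label: cluster of highest membership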

A new energy-based method for automatic prostate segmentation in TRUS images was presented by [51]. This method involves three main stages: a preprocessing step (edge-preserving noise reduction and smoothing), an inside-point finding step, and prostate segmentation. In preprocessing, speckle reduction was achieved using a stick filter, and a top-hat transform was implemented for smoothing. A feed-forward neural network was used to find a point inside the prostate object, and an active contour algorithm extracts the boundaries of the prostate in the final step. Several experiments were conducted to validate this method, and it detects the prostate boundary with a Normalized Area Error (NAE) lower than 4.8%. For the problem of TRUS segmentation, [52] introduced an approach based on the concept of distribution tracking, which provides a unified framework for tracking both photometric and morphological features of the prostate. The tracking of morphological features defines "weak" shape priors which act as a regularization force; this minimally biases the segmentation procedure while rendering the final estimate stable and robust. The experiments revealed a low error rate, with NMSE = 0.34% and SD = 0.053%.

[53] introduced a method that uses the fusion of Magnetic Resonance Imaging (MRI) and Transrectal Ultrasound (TRUS) images for TRUS-guided prostate biopsy, improving the localization of malignant tissues. Texture features from the approximation coefficients of the Haar wavelet transform were used for the propagation of a shape- and appearance-based statistical model to segment the prostate in a multi-resolution framework. A parametric model of the propagating contour is derived from Principal Component Analysis (PCA) of prior shape and texture information of the prostate from the training data. These parameters are modified with prior knowledge of the optimization space to achieve optimal prostate segmentation. The authors reported that their method yields better edges, with a Dice Similarity Coefficient (DSC) of 0.95±0.01 and a Mean Segmentation Time (MST) of 0.72±0.05 seconds.

[54] proposed graph cuts in a Bayesian framework for automatic initialization, propagating multiple mean parametric models derived from Principal Component Analysis (PCA) of shape and posterior probability information of the prostate region to segment the prostate. The proposed method produced accurate boundaries as measured by the DSC. [55] proposed a probabilistic framework for the propagation of a parametric model derived from PCA of prior shape and posterior probability values to achieve prostate segmentation. It is an automatic model that performs accurate prostate segmentation in the presence of intensity heterogeneity and imaging artifacts. Experimental results of the proposed method show better segmentation, with a mean DSC of 0.96±0.01 and a MAD of 0.80±0.24 mm. [56] introduced a probabilistic framework for automatic initialization and propagation of multiple mean parametric models derived from principal component analysis of shape and posterior probability information of the prostate region to segment the prostate. The posterior probability of the prostate region builds the texture model of the prostate, and this information is used in the initialization and propagation of the mean model. In addition, multiple mean models were used instead of a single mean model to improve segmentation accuracy. The proposed method achieves a mean Dice Similarity Coefficient of 0.97±0.01 and an average Mean Absolute Distance of 0.49±0.20 mm. In [57], another method was proposed to enhance the texture features of the prostate region using Local Binary Patterns (LBP) for the propagation of shape- and appearance-based statistical models to segment the prostate in a multi-resolution framework. A parametric model of the propagating contour was derived using PCA from the prior shape and texture information of the prostate in the training data. The estimated parameters are then modified with prior knowledge of the optimization space to achieve an optimal segmentation. It is computationally efficient and produced accurate prostate segmentation in the presence of intensity heterogeneities and imaging artifacts, yielding average DSC and MST values of 0.94±0.01 and 0.6±0.02 seconds respectively.
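
The PCA-based statistical shape models used in [53]-[57] start from aligned boundary landmarks; a generic sketch of building the shape space and generating a plausible shape is given below. The landmark file and the +2 standard-deviation mode weight are hypothetical.

import numpy as np
from sklearn.decomposition import PCA

# (n_shapes, n_landmarks, 2) array of aligned prostate boundary landmarks (hypothetical file)
contours = np.load("aligned_contours.npy")
X = contours.reshape(len(contours), -1)

pca = PCA(n_components=0.95)           # keep the modes explaining 95% of the shape variance
pca.fit(X)
mean_shape = pca.mean_.reshape(-1, 2)

# A plausible new shape = mean shape + weighted sum of the principal modes
b = np.zeros(pca.n_components_)
b[0] = 2.0 * np.sqrt(pca.explained_variance_[0])       # +2 sd along the first mode
shape = (pca.mean_ + pca.components_.T @ b).reshape(-1, 2)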

[59] proposed a method to improve Region of Interest (RoI) detection in TRUS and biopsy guidance using computer-aided diagnosis techniques for ultrasound images. The method uses automated segmentation of regions of interest followed by a supervised classifier. The method yields 78% Average Sensitivity (AS) and 90% Average Accuracy (AA) without losing information content. [60] proposed a method based on Ant Colony Optimization, which increases efficiency and minimizes user involvement in prostate boundary detection from ultrasound images.

[61] presented a TRUS video segmentation algorithm using both global population-based and patient-specific local shape statistics as shape constraints. By adaptively learning shape statistics in a local neighborhood during the segmentation process, the algorithm could effectively capture the patient-specific shape statistics and quickly adapt to local shape changes in the base and apex areas. The learned shape statistics were then used as the shape constraint in a deformable model for TRUS video segmentation. The experimental results showed that the method improved segmentation, with an AMAD of 1.65±0.47 mm. [62] proposed morphological operators and the DBSCAN algorithm for prostate image segmentation in 2011. This method consists of three stages: local adaptive thresholding, morphological operators and DBSCAN. The threshold is used to differentiate the foreground from the background of the image. Morphological operators are then applied with a large structuring element to isolate the object related to the prostate region. Finally, the DBSCAN algorithm is used to group the separated background image from the region and the thresholded pixel values; DBSCAN is a density-based algorithm in which a cluster is formed when the number of positive pixels is equal to or greater than a minimum number of pixels. The performance of the method was evaluated using the Rand Index (RI), Global Consistency Error (GCE), Variation of Information (VOI) and Boundary Displacement Error (BDE), and the method delineated the prostate region accurately, with an RI of 0.2764, a GCE of 0.0524, a VOI of 4.4769 and a BDE of 20.3805.
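
A compact sketch of the threshold/morphology/DBSCAN pipeline described for [62]; the block size, structuring element, DBSCAN parameters and file name are illustrative assumptions rather than the published settings.

import numpy as np
from skimage import io, filters, morphology
from sklearn.cluster import DBSCAN

img = io.imread("trus_slice.png", as_gray=True)                 # hypothetical input

thresh = filters.threshold_local(img, block_size=51)            # local adaptive threshold
mask = img < thresh                                             # gland is darker than its surround
mask = morphology.binary_opening(mask, morphology.disk(5))      # large structuring element

coords = np.column_stack(np.nonzero(mask))                      # (row, col) of foreground pixels
labels = DBSCAN(eps=3, min_samples=20).fit_predict(coords)      # density-based grouping

# Keep the largest dense cluster as the candidate prostate region
valid = labels >= 0
largest = np.argmax(np.bincount(labels[valid])) if valid.any() else -1
prostate = np.zeros_like(mask)
prostate[tuple(coords[labels == largest].T)] = True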

[63] developed an automatic prostate segmentation system for TRUS images to eliminate the process of manually outlining the prostate region. The method combines the Active Contour Model (ACM) with a prostate classifier consisting of a Validation Incremental Neural Network (VINN) and a Radial-Basis Function Neural Network (RBFNN). Experimental results showed that the proposed method achieved higher accuracy than the regular ACM method. The performance was evaluated by the True Positive Fraction (TPF), False Negative Fraction (FNF), False Positive Fraction (FPF), True Negative Fraction (TNF) and accuracy, yielding a TPF of 88.56%, an FNF of 11.44%, an FPF of 2.80%, a TNF of 97.20% and an accuracy of 94.05%. The proposed method produced 3.5% higher accuracy than the Active Contour Method (ACM).

[64] introduced the Anatomical Structure Segmentation method in 2012. In this method, the prostate is divided into the transitional zone, the left peripheral zone and the right peripheral zone; using these zones, region boundaries are obtained and the contour of the region is outlined. By applying blood-flow information to these regions, blood-flowing and non-flowing regions are extracted; the non-flowing regions are removed and the segmented part of the image is retained. This method was measured by the Resistive Index (RI) and Mean Resistive Index (MRI), yielding RI and MRI values of 0.48 and 0.68 respectively. [65] proposed a fully automatic model-based prostate boundary segmentation method using the Normalized Cross-Correlation (NCC) matrix. In this method, the image is analyzed as a set of strips, with edges and speckle noise considered along the horizontal line of each strip, and a template matching procedure is applied to detect representative prostate boundaries. The prostate shape in the US image shows that the upper and lower boundaries have different features; to overcome this difference in the NCC matrix, the maximum of heuristic threshold values is applied. The performance of the method was evaluated by the Dice Similarity Coefficient (DSC) and Computational Time (CT), giving a DSC of 90.6% and a CT of 3.08 seconds. [66] proposed a level-set-priors-based approach with a Genetic Algorithm (GA) for prostate segmentation in 2013. Principal Component Analysis (PCA) is used to derive the boundary curve representation, and the implicit parameter model is then optimized by the Genetic Algorithm; parameters are selected using rank selection, single-point crossover and mutation. This method effectively segments the region of interest from the image to some extent, yielding an accuracy of 93.50%. [67] implemented the Ant Colony Optimization (ACO) method for segmentation in 2013. An initial point for the region of interest in the image is marked manually by an expert, a set of 12 points is chosen, and these points act as the initial contour; finally, ACO is applied to find and segment the closed prostate boundary. The performance of the method was evaluated using the Mean Difference (MD), Mean Absolute Difference (MAD) and Maximum Difference (MAXD), and the results revealed that the method segments the area better than the Genetic Algorithm. [68] proposed a coupled continuous max-flow model algorithm for prostate image segmentation in 2013. This method delineates 3D prostate boundaries using rotationally resliced images around a specified axis, properly enforcing the inherent rotational symmetry of prostate shapes to jointly adjust a series of 2D slice segmentations in a global 3D sense. The proposed method yielded DSCs of 93.7±2.1%, 92.6±3.1% and 92.3±3.2% and COVs of 2.3%, 3.3% and 3.5% respectively.
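
The normalized cross-correlation step used by [65] can be sketched with scikit-image's match_template; the template file and the 0.6 acceptance threshold are hypothetical.

import numpy as np
from skimage import io
from skimage.feature import match_template

img = io.imread("trus_slice.png", as_gray=True)                # hypothetical input
template = io.imread("boundary_template.png", as_gray=True)    # small boundary patch (hypothetical)

ncc = match_template(img, template, pad_input=True)   # NCC response, same size as the image
best_row, best_col = np.unravel_index(np.argmax(ncc), ncc.shape)
candidates = np.column_stack(np.nonzero(ncc > 0.6))   # heuristic threshold on the NCC score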

Radial Basis Function (RBF) interpolation and statistical shape models for prostate image segmentation were introduced by [69]. This method is capable of interpolating data points generated from a nonstandard grid, possibly with large data-free gaps. The average volume of the US shape model was 36.5 ml and that of the MR model was 37.9 ml. [70] introduced a convex optimization with shape prior method for prostate image segmentation. This method is divided into four steps: (i) the 3D TRUS image of the prostate is resliced into n slices about a rotational reslicing axis; (ii) two points are manually chosen on the long axis of the prostate on the coronal view; (iii) the computed segmentation result is propagated in both clockwise and counterclockwise directions to segment the two adjacent slices, where the segmentation result of the initial slice is used as both the propagation shape constraint and the initial contour in the proposed convex-optimization-based contour evolution scheme; and (iv) a 3D prostate surface is then reconstructed from the segmented contours in all 2D slices. The proposed method yielded DSCs of 93.5±2.01%, 92.6±3.1% and 92.3±3.2% and COVs of 2.3%, 3.3% and 3.5% respectively.

[71] proposed a level set algorithm with an active band and the intensity variation across edges for prostate image segmentation. The level set function is updated in a band region around the zero level set, called the banded region. Compared to the traditional level set method, the average intensities inside and outside the zero level set are computed only in the banded region. The performance of this method was evaluated by the Dice Similarity Coefficient (DSC) and sensitivity, yielding a DSC of 95.82±2.23% and a sensitivity of 94.87±1.85%. [72] implemented automatic prostate segmentation from ultrasound images based on radial bas-relief initialization and slice-based propagation. In this method, each image slice in the 2D slice-based propagation is deformed using a level-set evolution model driven by edge-based and region-based energy fields generated by the dyadic wavelet transform. Performance was evaluated by the Mean Absolute Difference (MAD), obtaining 0.79±0.26 mm.

[73] proposed the Boundary Completion Recurrent Neural Network (BCRNN) for the segmentation of the prostate from US images. Initially, the static image is serialized into a dynamic sequence, and shape inference is carried out sequentially. In this method, the raw input image is utilized instead of a hand-crafted shape model, and the shape inference is learned automatically. A multi-view fusion strategy is exploited to merge shape predictions from various perspectives. Finally, a multi-scale auto-context scheme is utilized to further refine the details of the shape prediction. This method obtained more accurate segmentation than the Convolutional Neural Network (CNN) and Fully Convolutional Network (FCN). The method was evaluated with the DSC and ABD, yielding a DSC of 92.39% and an ABD of 11.44.

[74] proposed a Convolutional Neural Network for automatic prostate segmentation in MRI-TRUS images using 2D TRUS slices and 3D TRUS volumes. This method is well suited to clinical practice for automating MRI-TRUS image segmentation. The method was evaluated on a clinical cohort of 110 patients who underwent TRUS-guided targeted biopsy and was more precise than the manual process, achieving an average DSC of 0.91±0.12 and an Absolute Boundary Segmentation Error of 1.23±1.46 mm. An Active Contour Model has also been proposed for prostate boundary detection and segmentation; the snake active contour model is an energy-minimizing spline that links the high-level contour to low-level image information, and it produced better results when compared to manually outlined structures.

[75] introduced prostate segmentation from ultrasound images using a Residual Fully Convolutional Network. A modified VGG-19 architecture is applied, in which the fully connected layers are replaced with a mirrored version of the convolutional part, called deconvolution. The feature maps from the last convolution layer are then passed through deconvolution and upsampling to produce the segmented region of the original image. The method yielded an average DSC of 86.34% and showed better results than other methods. [76] proposed a Deep Attentive Features framework for the segmentation of 3D transrectal ultrasound prostate images. In this method, a 3D ResNeXt is applied to extract features, and the important features are refined using single-layer and multi-layer features. The method obtained a Dice similarity of 90%, comparatively higher than other methods.
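
The fully convolutional encoder-decoder idea behind the networks in [75]-[76] (convolutions for encoding, transposed convolutions or "deconvolution" for decoding, with a skip connection) can be sketched in PyTorch as below; this toy network is far smaller than the published VGG-19/ResNeXt architectures and is only an illustration.

import torch
import torch.nn as nn

class MiniSegNet(nn.Module):
    """Tiny encoder-decoder segmentation network; not the published architecture."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.enc1 = block(in_ch, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)  # "deconvolution"
        self.dec = block(base * 2, base)          # decoder sees upsampled + skip features
        self.head = nn.Conv2d(base, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                         # full-resolution encoder features
        e2 = self.enc2(self.pool(e1))             # half-resolution encoder features
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # upsample and fuse with skip
        return torch.sigmoid(self.head(d))        # per-pixel prostate probability

net = MiniSegNet()
prob = net(torch.randn(1, 1, 128, 128))           # (batch, 1, H, W) probability map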

[77] proposed deep-learning-based segmentation of prostate images. The technique utilizes sparse subspace clustering to quantify image similarities, and features are learned with a Convolutional Auto-Encoder (CAE) architecture. The method obtained a DSC of 93.9% and a Hausdorff Distance (HD) of 2.7 mm. [78] implemented a multidirectional deeply supervised V-Net for prostate ultrasound image segmentation in 2019. Initially, the images are filtered by 3D Gaussian, mean and median filters. The original image together with the filtered images forms 4-channel image data, and a 3D patch-based V-Net architecture is used to enable end-to-end learning. 3D patches are extracted from the multi-derivative images and input to the trained networks, which achieve patch-based segmentation of the image. It yields average DSC and HD values of 0.92±0.03 and 3.94±1.55 mm respectively.

[79] proposed deep learning for clinically diverse 3D ultrasound image segmentation. In the proposed method, the 3D segmentation is predicted on 2D slices, and a modified 2D U-Net is utilized for training and testing. The method achieved a DSC of 94.1% and an HD of 2.89 mm. Multi-scale feature extraction of prostate images was introduced by [80]. The features are extracted from the segmented region using a Multi-scale Feature Pyramid Network (MFPN), which retrieves rich semantic information from the Region of Interest (ROI) and yields a DSC of 0.9651 and an average absolute distance of 0.504 mm. [81] introduced a DenseNet-ResNet-based Convolutional Neural Network for segmentation in 2021. In the proposed algorithm, the CNN architecture combines a DenseNet encoder with a ResNet decoder: the encoder consists of four dense blocks connected by downsampling blocks, and the decoder consists of three residual blocks connected by transposed convolutions. The method was evaluated with the DSC and ABD, yielding a DSC of 91.87% and an ABD of 11.84. Various image segmentation methods have thus been introduced by different authors and evaluated using different parameters. The segmentation techniques for prostate extraction from ultrasound images reviewed above are summarized in Table 1 together with their evaluation parameters and results. The measures used for the evaluation of segmentation algorithms are described in the following section.

Table 1: Prostate Image Segmentation Methods, Evaluation Parameters and Result.

Performance Measures

Researchers have adopted different evaluation measures to assess their proposed segmentation methods for prostate TRUS images; short descriptions of these measures are given hereunder. The Mean Square Error (MSE) is the average of the squared differences between the original image and the segmented image, whereas the Peak Signal-to-Noise Ratio (PSNR) is the ratio between the square of the maximum intensity value of the image and the mean squared error. The Root Mean Square Error (RMSE) is the square root of the MSE, and SNR is the Signal-to-Noise Ratio. The Rand Index (RI), or Rand measure, is a measure of the similarity between two data clusterings. Consider two valid label assignments S and S' with corresponding labels {li} and {li'} of N points X = {x1, x2, ..., xN}. The Rand index R can be computed as the fraction of pairs of points that have a compatible label relationship in S and S'.
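
For reference, the standard definitions of these measures can be written as follows, where I is the original (ground-truth) image, S the segmented image of size M x N, I_max the maximum intensity, and, for the Rand index, a and b are the numbers of point pairs placed respectively in the same cluster in both labelings and in different clusters in both labelings:

\[
\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j)-S(i,j)\bigr)^{2},\qquad
\mathrm{RMSE} = \sqrt{\mathrm{MSE}},\qquad
\mathrm{PSNR} = 10\log_{10}\frac{I_{\max}^{2}}{\mathrm{MSE}},\qquad
R(S,S') = \frac{a+b}{\binom{N}{2}}
\]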

The Global Consistency Error (GCE) forces all local refinements to be in the same direction. The Variation of Information (VOI) metric defines the distance between two segmentations as the average conditional entropy of one segmentation given the other, and thus it measures the amount of randomness in one segmentation that cannot be explained by the other. The Boundary Displacement Error (BDE) measures the average displacement error of boundary pixels between two segmented images; in particular, it defines the error of one boundary pixel as the distance between that pixel and the closest pixel in the other boundary image. The Dice Similarity Coefficient (DSC) is utilized as a statistical metric to assess both the reproducibility of manual segmentations and the spatial overlap of automated probabilistic fractional segmentations of the images.

The Resistive Index (RI) can be calculated from the peak systolic velocity and the end-diastolic velocity of blood flow, and the Mean Resistive Index (MRI) is the average of the RI. The Correct Segmentation Rate (CSR) is defined as the ratio of the number of correctly segmented voxels (a voxel is a volumetric pixel representing a value in three-dimensional space) to the total number of voxels in the ground truth. The Incorrect Segmentation Rate (ISR) is defined as the ratio of the number of incorrectly segmented voxels (non-prostate voxels classified as prostate voxels) to the total number of voxels in the ground truth. The Mean Absolute Deviation (MAD) represents the expected absolute-error loss and is more robust to outliers in the image. The Jaccard Index (JI) is used to calculate the similarity between two sets of images and also measures the variation or dissimilarity between two images. The formulas for these performance assessment measures are given in Table 2. Issues in prostate segmentation from ultrasound images are discussed in Section 4.
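
As a complement to Table 2, the overlap and boundary measures mentioned above have the standard forms below, where A is the automatically segmented region, G the ground-truth region, and d_i the distance from the i-th boundary point of A to the closest point on the boundary of G:

\[
\mathrm{DSC}(A,G) = \frac{2\,|A\cap G|}{|A|+|G|},\qquad
\mathrm{JI}(A,G) = \frac{|A\cap G|}{|A\cup G|},\qquad
\mathrm{MAD} = \frac{1}{n}\sum_{i=1}^{n}|d_i|
\]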

Table 2: Performance Evaluation Measures.

Issues in Prostate Segmentation from Trus Image

After reviewing the existing segmentation algorithms available in the literature, the following issues were identified. The need for high reproducibility and increased efficiency in identifying the prostate region motivates the development of computer-assisted and automated segmentation. Identifying the region of interest in TRUS images is a challenging and difficult task: because of weak prostate boundaries, speckle noise and the short range of grey levels, the contrast is usually low and the boundaries between the prostate and the background are fuzzy. There is no common characterization of prostate and non-prostate areas, which makes it very hard to directly differentiate the prostate from the surrounding tissues based on pixel intensities and region appearance. Finally, shadow artifacts, which often appear at the anterior side of the prostate, make segmentation difficult because of the lack of image information in those areas.

Conclusions

We have presented a brief review and outline of ultrasound prostate image segmentation techniques. Image segmentation methods are of different types, based on constraints such as pixel intensity, image homogeneity, irregularity, cluster data, image content and so forth. Each method has pros and cons, and the result obtained using one segmentation approach may not be comparable with that of another methodology. The major image segmentation techniques are used for the purpose of image analysis, and the selection of a suitable segmentation technique largely depends on the type of images and the application area. From this survey, it is found that there is no single, universal method for image segmentation, since segmentation depends on texture, intensity, image similarity and image content. Therefore, it is not feasible to apply one technique to all kinds of images, nor can all strategies perform well for a specific kind of image. The performance evaluation metrics used for prostate segmentation from TRUS images were also discussed. In the future, this study can help researchers to contribute a range of approaches for effective prostate ultrasound image segmentation.

References

  1. Kass M, Witkin A, and Terzopoulos D. “Snakes: Active Contour Models”, International Journal of Computer Vision. 1988; 1(4):321-331.
  2. Prater JS and Richard WD. “Segmenting ultrasound images of the prostate using neural networks”, Ultrasound Imaging. 1992; 14:159-185.
  3. Aarnink RG. et al. “A practical clinical method for contour determination in ultrasonographic prostate images”, Ultrasound in Medicine and Biology. 1994; 20:705-717.
  4. Richard WD, and Keen CG. “Automated texture based segmentation of ultrasound images of the prostate”, Computerized Medical Imaging and Graphics. 1996; 20(3):131-140.
  5. René G Aarnink et al. “A preprocessing algorithm for edge detection with multiple scales of resolution”, European Journal of Ultrasound. 1997; 5(2):113-126.
  6. Pathak S, et al. “Quantitative three-dimensional transrectal ultrasound (TRUS) for prostate imaging”, In Proceedings. Soc Photo Opt Inst. Eng, Bellingham, WA. 1998; 3335:83-92.
  7. Fangwei Zhao and de-Silva CJS. “Use of the Laplacian of Gaussian operator in prostate ultrasound image processing”, In Proceedings. 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 1998; 2(29):812-815.
  8. Knoll C, et al. “Outlining of the prostate using snakes with shape restrictions based on the wavelet transform”, Pattern Recognition. 1999; 32:1767-1781.
  9. Alvaro L Ronco and Rossana Fernandez. “Improving ultrasonographic diagnosis of prostate cancer with Neural Networks”, Ultrasound in Medicine and Biology. 1999; 25(5):729-733.
  10. Ladak HM, et al. “Prostate boundary segmentation from 2D ultrasound images”, Medical Physics. 2000; 27:1777-1788.
  11. Pathak SD, et al. “Edge guided boundary delineation in prostate ultrasound images”, IEEE Transactions on Medical Imaging. 2000; 19:1211-1219.
  12. Fangwei Zhao and Christopher JS Desilva. “Contour Extraction in Prostate Ultrasound Images using the Wavelet Transform and Snakes”, In Proceedings. 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2001.
  13. Shen D, Herskovits EH and Davatzikos C. “An adaptive-focus statistical shape model for segmentation and shape modeling of 3D brain structures”, IEEE Transactions on Medical Imaging. 2001; 20:257-270.
  14. Awad J, et al. “Prostate's boundary detection in transrectal ultrasound images using scanning technique”, In Proceedings. Conference on Electrical and Computer Engineering, IEEE, Canadian. 2003; 2:1199-1202.
  15. Wang Y, et al. “Semiautomatic three-dimensional segmentation of the prostate using two-dimensional ultrasound images”, Medical Physics. 2003; 30(5):887-897.
  16. Shen D, Zhan Y and Davatzikos C, “Segmentation of Prostate Boundaries From Ultrasound Images Using Statistical Shape Model”, IEEE Transactions on Medical Imaging. 2003; 22(4):539-551.
  17. Chiu B, et al. “Prostate segmentation algorithm using dyadic wavelet transform and discrete dynamic contour”, Phys Med Biol. 2004; 49:4943-4960.
  18. Gong L, et al. “Parametric Shape Modeling Using Deformable Superellipses for Prostate Segmentation”, IEEE Transactions on Medical Imaging. 2004; 23(3):340-349.
  19. Ahmed Jendoubi, Jianchao Zeng and Mohamed F Chouikha. “Top-Down Approach to Segmentation of Prostate Boundaries in Ultrasound Images”, In Proceedings. 33rd Applied Imagery Pattern Recognition Workshop, IEEE Computer Society, Washington, DC, USA. 2004; 145-149.
  20. Yongjian Yu, et al. “Segmentation of the prostate from suprapubic ultrasound images”, American Association of Physicists in Medicine. 2004; 1(12):3474-3484.
  21. Ladak, et al. “Prostate boundary segmentation from 2D and 3D ultrasound images”, 2004.
  22. Lin P, et al. “Medical Image Segmentation by Level Set Method incorporating Region and Boundary Statistical Information”, Progress in Pattern Recognition, Image Analysis and Applications. 2004; 3287:654-660.
  23. Gong L, et al. “Prostate ultrasound image segmentation using level set-based region flow with shape guidance”, In Proceedings. Soc Photo Opt Instrum Eng. 2005; 5747:1648-1657.
  24. Betrouni N, et al. “Segmentation of Abdominal Ultrasound Images of the Prostate using a priori information and an adapted noise filter”, Computerized Medical Imaging and Graphics. 2005; 29:43-51.
  25. Zaim A and Jankun J. “A Kohonen Clustering based Approach to Segmentation of Prostate from TRUS Data using Gray-Level Co-occurrence Matrix”, In Proceedings. Conference on Computer Graphics and Imaging, 2005.
  26. Farhang Sahba, Hamid R Tizhoosh and Magdy M Salama, “A coarse-to-fine approach to prostate boundary segmentation in ultrasound images”, Biomedical Engineering OnLine. 2005; 4(58):1-13.
  27. Mingyue Ding et al. "Slice-Based Prostate Segmentation in 3D US Images using Continuity Constraint", Proceedings of the IEEE, Engineering in Medicine and Biology 27th Annual Conference, 2005.
  28. Nuwan D Nanayakkara et al. “Prostate segmentation by feature enhancement using domain knowledge and adaptive region based operations”, Phys. Med. Biol. 2006; 51: 1831.
  29. Adam C, et al. “Prostate boundary segmentation from ultrasound images using 2D active shape models: Optimization and extension to 3D”, Computer Methods and Programs in Biomedicine. 2006; 84(2-3):99-113.
  30. Sara Badiei et al. “Prostate Segmentation in 2D Ultrasound Images using Image Warping and Ellipse Fitting”, Springer-Verlag Berlin Heidelberg, MICCAI, LNCS. 2006; 4191:17-24.
  31. Mohamed SS, et al. “Prostate tissue characterization using TRUS image spectral features”, In Proceedings Third International Conference on Image Analysis and Recognition, Springer Verlag Berlin, Heidelberg. 2006; 2:589-601.
  32. Kachouie N, et al. “An elliptical level set method for automatic TRUS prostate image segmentation”, In Proceedings. IEEE International Symposium on Signal Processing and Information Technology. 2006; 191-196.
  33. Ismail B Tutar, et al. "Semiautomatic 3-D Prostate Segmentation from TRUS images using Spherical Harmonics", IEEE Transactions on Medical Imaging. 2006; 25(12).
  34. Fuxing Yang et al. "Segmentation of Prostate from 3-D Ultrasound Volumes using Shape and Intensity Priors in Level Set Framework", IEEE, EMBS Annual International Conference, 2006.
  35. Oleg Michailovich and Allen Tannenbaum. “Segmentation of Medical Ultrasound Images using Active Contours”, In Proceedings. IEEE International Conference on Image Processing. 2007; 513-516.
  36. Samar S Mohamed, and Magdy MA Salama. “Spectral clustering for TRUS images”, Biomedical Engineering On line. 2007; 1-13.
  37. Ayturk Keles, et al. “Neuro-Fuzzy classification of prostate cancer using NEFCLASS-J”, Computers in Biology and Medicine. 2007; 37(11):1617-1628.
  38. El-dahshan E, et al. “Accurate detection of prostate boundary in ultrasound images using biologically-inspired spiking neural network”, International Symposium on Intelligent Signal Processing and Communication Systems. 2007; 308-311.
  39. Amjad Zaim and Jerzy Jankun. “An Energy-Based Segmentation of Prostate From Ultrasound Images using Dot-Pattern Select Cells”, IEEE, 2007.
  40. Farhang Sahba, et al. “Application of Opposition-Based Reinforcement Learning in Image Segmentation”, In Proceedings. IEEE Symposium on Computational Intelligence in Image and Signal Processing (CIISP). 2007; 246-251.
  41. Kachouie NN and Fieguth P. “A Medical Texture Local Binary Pattern For TRUS Prostate Segmentation”, In Proceedings International Conference on Engineering in Medicine and Biology Society, IEEE, 29th Annual. 2007; 5605-5608.
  42. Seok Min Han et al. "Prostate Cancer Detection using Texture and Clinical Features in Ultrasound Images", IEEE Proceedings of the 2007 International Conference on Information Acquisition, 2007.
  43. Guokuan Li et al. "3D Prostate Boundary Reconstruction from 2D TRUS Images", IEEE. 2007; 1(3):4244-1120.
  44. Kaveh Houshmand et al. "Increasing Segmentation Accuracy in Ultrasound Imaging using Filtering and Snakes", IEEE, 2008.
  45. Ali Rafiee, Ahad Salimi, and Ali Reza Roosta. “A Novel Prostate Segmentation Algorithm in TRUS Images”, Journal of World Academy of Science, Engineering and Technology. 2008; 45:120-124.
  46. Saroul L, Bernard O, Vray D and Friboulet D. “Prostate segmentation in echographic images: A variational approach using deformable super-ellipse and Rayleigh Distribution”, In Proceedings 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro. 2008; 129-132.
  47. Pingkun Yan, et al. “Optimal Search Guided by Partial Active Shape Model for Prostate Segmentation in TRUS Images”, 2009.
  48. Aboul Ella Hassanien. “Intelligence techniques for prostate ultrasound image analysis”, International Journal of Hybrid Intelligent Systems. 2009; 6(3):155-167.
  49. Aboul Ella Hassanien et al. "Intelligent Analysis of Prostate Ultrasound Images", World Congress on Nature and Biologically Inspired Computing, IEEE, 2009.
  50. Noble JA. “Ultrasound image segmentation and tissue characterization”, In Proceedings. The Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine. 2010; 224(2):307-316.
  51. Ahad Salimi et al. “An Energy-based Algorithm for Automatic Prostate Segmentation in TRUS Images”, In Proceedings. Sixth International Conference on Signal-Image Technology and Internet Based Systems, IEEE Computer Society, Washington, DC, USA. 2010; 165-169.
  52. Xu RS, Michailovich OV, Solovey I and Salama MMA. “A Probability Tracking Approach to Segmentation of Ultrasound Prostate Images using Weak Shape Priors”, Progress in Biomedical Optics and Imaging. 2010; 11(33):1605-7422.
  53. Soumya Ghose, et al. “Texture Guided Active Appearance Model Propagation for Prostate Segmentation”, 2010.
  54. Soumya Ghose et al. “Prostate Segmentation with Texture Enhanced Active Appearance Model”, In proceedings sixth International Conference on Signal-Image Technology and Internet Based Systems. 2010; 18-22.
  55. Soumya Ghose et al. “Statistical Shape and Probability Prior Model for Automatic Prostate Segmentation”, In Proceedings International Conference on Digital Image Computing: Techniques and Applications. 2011; 340-345.
  56. Soumya Ghose et al. “A Probabilistic Framework For Automatic Prostate Segmentation with a Statistical Model of Shape and Appearance”, In Proceedings 18th IEEE international conference on image processing, 2011.
  57. Soumya Ghose et al. “Multiple Mean Models of Statistical Shape and Probability Priors for Automatic Prostate Segmentation”, Prostate Cancer Imaging, LNCS 6963, Springer-Verlag Berlin Heidelberg. 2011; 35-46.
  58. Soumya Ghose et al. “Prostate Segmentation with Local Binary Patterns Guided Active Appearance Models”, Medical imaging: Image Processing, France, 2011.
  59. Scebran M, et al. “Automatic Regions of Interest Segmentation for Computer Aided Classification of Prostate TRUS Images”, Book Chapter: Acoustical Imaging. 2011; 0:285-293.
  60. Vikas Wasson and Baljit Singh, “Prostate Boundary Detection from Ultrasound Images using Ant Colony Optimization”, International Journal of Research in Computer Science. 2011; 1(1):39-47.
  61. Pingkun Yan. “Adaptively Learning Local Shape Statistics for Prostate Segmentation in Ultrasound”, IEEE Transactions on Biomedical Engineering. 2011; 58(3):633-641.
  62. R. Manavalan et al. "TRUS Image Segmentation Using Morphological Operators and DBSCAN Clustering". 2011; 978(1):4673-0126.
  63. Chuan-Yu Chang, Yuh-Shuan Tsai and I-Lien Wu. “Integrating Validation Incremental Neural Network and Radial-Basis Function Neural Network for Segmenting Prostate in Ultrasound Images”, International Journal of Innovative Computing, Information and Control. 2011; 7(6):3035-3046.
  64. Chuan-Yu Chang, Cheng-Min Fan and Yuh-Shyan Tsai. "Diagnosing Prostate Diseases in Color Doppler Ultrasound Images", IEEE. 2012; 978(1):4673-2588.
  65. Rasa Vafaie et al. "A Fast Model-based Prostate Boundary Segmentation using Normalized Cross-correlation and Representative Patterns in Ultrasound Images", IEEE EMBS International Conference on Biomedical Engineering and Sciences, 2012.
  66. Yongtao Shi et al. "Level Set Priors based Approach to the Segmentation of Prostate Ultrasound using Genetic Algorithm", Intelligent Automation and Soft Computing. 2013; 19(4):537-544.
  67. Vikas Wasson et al. "A Parallel Optimized Approach for Prostate Boundary Segmentation from Ultrasound Images", IJSRCSE. 2013; 1(1).
  68. Jing Yuan, Wu Qiu, Martin Rajchl, Eranga Ukwatta and Xue-Cheng Tai. "Efficient 3D Endfiring TRUS Prostate Segmentation with Globally Optimized Rotational Symmetry", IEEE Conference on Computer Vision and Pattern Recognition. 2013; 2211-2218.
  69. Ran Rao et al. "A Comparison of US versus MR Based 3D Prostate Shapes Using Radial Basis Function Interpolation and Statistical Shape Models", IEEE Journal of Biomedical and Health Informatics, 2015.
  70. Wu Qiu and Jing Yuan et al. "Rotationally resliced 3D prostate TRUS segmentation using convex optimization with shape priors", Medical Physics, 2015.
  71. Xu Li, Chunming Li, Xiaoping Yang, Andriy Fedorov and Tina Kapur. "Segmentation of Prostate from ultrasound images using level sets on active band and intensity variation across edges", Medical Physics. 2016; 43(6).
  72. Yanyan Yu, Yimin Chen and Bernard Chiu. "Fully automatic prostate segmentation from transrectal ultrasound images based on radial bas-relief initialization and slice-based propagation", Computers in Biology and Medicine. 2016; 74-90.
  73. Xin Yang et al. "Fine-Grained Recurrent Neural Networks for Automatic Prostate Segmentation in Ultrasound Images", Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017.
  74. Nooshin Ghavami et al. "Automatic slice segmentation of intraoperative transrectal ultrasound images using convolutional neural networks", Proceedings Vol. 10576, Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling; 1057603 https://doi.org/10.1117/12.2293300, SPIE Medical Imaging, 2018.
  75. S. Hossain, AP Peplinski, and JM Betts. "Residual Semantic Segmentation of the Prostate from Magnetic Resonance Images," in International Conference on Neural Information Processing. 2018; 510-521.
  76. Yi Wang, Haoran Dou, et al. Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound, IEEE Transactions on Medical Imaging. 2019; 38(12):2768-2778. [DOI: 10.1109/TMI.2019.2913184].
  77. Davood Karimi, Qi Zeng, et al. Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images, Medical Image Analysis. 2019; Elsevier. https://doi.org/10.1016/j.media.2019.07.005.
  78. Yang Lei, Sibo Tian, Xiuxiu He, Tonghe Wang, Bo Wang, Pretesh Patel, Jani, et al. Ultrasound Prostate Segmentation Based on Multi-Directional Deeply Supervised V-Net. Medical Physics. 2019; 46. [DOI: 10.1002/mp.13577].
  79. Orlando N, Gillies DJ, Gyacskov I, Romagnoli C, D'Souza D, Fenster A. Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images. Med Phys. 2020; 47(6):2413-2426. [PMID: 32166768. DOI: 10.1002/mp.14134] [Epub 2020 Apr 8].
  80. Lei Geng, Simu Li, Zhitao Xiao and Fang Zhang. Multi-Channel Feature Pyramid Networks for Prostate Segmentation, Based on Transrectal Ultrasound Imaging. Applied Sciences. 2020; 10:3834. [DOI: 10.3390/app10113834].
  81. Pellicer-Valero OJ, González-Pérez V, Ramón-Borja JC, García I, Benito MB, Gómez PP. Robust Resolution-Enhanced Prostate Segmentation in Magnetic Resonance and Ultrasound Images through Convolutional Neural Networks. Applied Sciences. 2021; 11:844.