Label Fusion: A Research Survey
Label Fusion

We compare our method with the label fusion of 13 organs based on the state-of-the-art Deeds registration method and achieve a Dice score of 92. [1] Multi-atlas-based segmentation (MAS) methods have demonstrated superior performance in the field of automatic image segmentation, and label fusion is an important part of MAS methods. [2] In this paper, multi-atlas-based methods for brain MR image segmentation are reviewed with respect to several registration toolboxes widely used in multi-atlas methods, conventional methods for label fusion, datasets that have been used to evaluate multi-atlas methods, and the applications of multi-atlas-based segmentation in clinical research. [3] Choosing well-registered atlases for label fusion is vital for accurate segmentation. [4] Patch-based label fusion in the target space has been shown to produce very accurate segmentations, although at the expense of registering all atlases to each target image. [5] METHODS: Our approach dynamically selects and weights the appropriate number of atlases for weighted label fusion and generates segmentations and consensus maps indicating voxel-wise agreement between the different atlases. [6] Unlike traditional multi-atlas methods, our proposed approach does not rely on label fusion at the voxel level. [7] After label fusion with majority voting, we finally construct a 3D FCN to further refine the boundary voxels with low voting values. [8] Label fusion is one of the key steps in multi-atlas-based segmentation of structural magnetic resonance (MR) images. [9] In the single-robot semantic mapping process, the Bayesian rule is used for label fusion and occupancy probability updating, where the semantic information is added to the geometric map grid. [10] In label fusion, a coefficient serving as a specific weight is assigned to the target label image based on the correlation function between atlases. [11] We incorporate neighborhood information into label fusion so that the final label estimate is more accurate and robust for diseased hips with joint space narrowing. [12]
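Several of the excerpts above fuse propagated atlas labels by simple per-voxel majority voting (see [8]) and flag low-agreement voxels for later refinement. A minimal sketch of that step, assuming the atlas label maps have already been warped into the target space and share its shape (array names are illustrative only):

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse propagated atlas label maps by per-voxel majority voting.

    atlas_labels: list of integer label volumes, all already registered to the
    target image and therefore sharing the same shape.
    Returns the fused label map and the voting fraction of the winning label,
    which can be thresholded to find uncertain boundary voxels.
    """
    stack = np.stack(atlas_labels, axis=0)           # (n_atlases, *volume_shape)
    candidates = np.unique(stack)
    # Count votes for every candidate label at every voxel.
    votes = np.stack([(stack == c).sum(axis=0) for c in candidates], axis=0)
    fused = candidates[votes.argmax(axis=0)]
    agreement = votes.max(axis=0) / stack.shape[0]   # fraction of agreeing atlases
    return fused, agreement
```

The voxels with low agreement are exactly the "boundary voxels with low voting values" that excerpt [8] hands to a 3D FCN for refinement.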
Multi-Atlas Segmentation
To tackle these problems with multi-atlas segmentation, in this paper we propose a new metric for image registration and a new descriptor for label fusion. [1] BACKGROUND: Label fusion is a core step of multi-atlas segmentation (MAS) and has a decisive effect on segmentation results. [2] In contrast, previously popular multi-atlas segmentation (MAS) methods are relatively slow (as they rely on costly registrations), and even though sophisticated label fusion strategies have been proposed, DL approaches generally outperform MAS. [3] Label propagation and label fusion using multiple atlases have made the multi-atlas segmentation approach a forefront of segmentation research. [4]
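For orientation, the pipeline these excerpts share can be summarized as: register every atlas image to the target, propagate (warp) its label map with the resulting transform, and fuse the propagated labels. The sketch below is only schematic; the `register` and `fuse` callables and the transform's `.apply(...)` interface are assumptions standing in for whichever registration toolbox and fusion rule a given paper uses.

```python
def multi_atlas_segmentation(target_image, atlases, register, fuse):
    """Generic multi-atlas segmentation: register, propagate labels, fuse.

    target_image: image to segment.
    atlases:      iterable of (atlas_image, atlas_labels) pairs.
    register:     caller-supplied deformable registration; assumed to return a
                  transform object exposing .apply(volume, interpolation=...).
    fuse:         caller-supplied label fusion rule (e.g. majority voting).
    """
    propagated = []
    for atlas_image, atlas_labels in atlases:
        transform = register(moving=atlas_image, fixed=target_image)
        # Nearest-neighbour interpolation keeps the propagated labels integer-valued.
        propagated.append(transform.apply(atlas_labels, interpolation="nearest"))
    return fuse(propagated)
```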
Multi-Atlas Joint Label Fusion
Compared to several existing state-of-the-art segmentation methods for subcortical structures, including a multi-atlas joint label fusion method and a representative 3D FCN method, the proposed method performed significantly better for a majority of the subcortical structures. [1] Compared to several existing state-of-the-art segmentation methods, including a multi-atlas joint label fusion method and three representative fully convolutional network methods, the proposed method performed significantly better for a majority of the 12 subcortical structures, with the overall mean Dice scores being respectively 0. [2] The model was evaluated on a set of cardiac CTA images in comparison with related shape-prior and local region-based methods and with multi-atlas joint label fusion methods, and experimental results show that it achieves competitive accuracy in segmenting the myocardial epicardium and endocardium. [3]
Joint Label Fusion
Specifically, we extend the joint label fusion method by taking model uncertainty into account when estimating correlations among the predictions produced by different modalities. [1] We first report a machine learning framework for brain tumor growth modeling, tumor segmentation, and tracking in longitudinal mMRI scans, comprising two methods: feature fusion and joint label fusion (JLF). [2] Compared to several existing state-of-the-art segmentation methods for subcortical structures, including a multi-atlas joint label fusion method and a representative 3D FCN method, the proposed method performed significantly better for a majority of the subcortical structures. [3] …spatially varying weighted voting and joint label fusion, in the context of segmenting medial temporal lobe subregions in T1-weighted MRI. [4] Compared to several existing state-of-the-art segmentation methods, including a multi-atlas joint label fusion method and three representative fully convolutional network methods, the proposed method performed significantly better for a majority of the 12 subcortical structures, with the overall mean Dice scores being respectively 0. [5] The model was evaluated on a set of cardiac CTA images in comparison with related shape-prior and local region-based methods and with multi-atlas joint label fusion methods, and experimental results show that it achieves competitive accuracy in segmenting the myocardial epicardium and endocardium. [6] Although slightly less accurate than our previously reported joint label fusion approach (left lung: 0. [7] By comparing with the non-reinforced segmentation and with a classical multi-atlas method using joint label fusion, the proposed approach obtains better results. [8] When using the same 45 training images, AssemblyNet outperforms a global U-Net by 28% in terms of the Dice metric, patch-based joint label fusion by 15%, and SLANT-27 by 10%. [9]
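For readers unfamiliar with the method these excerpts build on: joint label fusion assigns spatially varying atlas weights that explicitly account for correlated errors between atlases, rather than weighting each atlas independently. In the commonly used formulation (sketched here from the standard derivation, not from any single excerpt), the weights at voxel $x$ minimize the expected fused error under a pairwise dependency matrix $M_x$ estimated from local intensity differences between each warped atlas $I_i$ and the target $I_T$ over a patch $\mathcal{N}(x)$, with $\beta$ a model parameter:

```latex
% Weights minimizing the expected labeling error at voxel x, constrained to sum to one:
w_x \;=\; \arg\min_{w} \; w^{\top} M_x\, w
      \quad \text{s.t.} \quad \mathbf{1}^{\top} w = 1
\qquad\Longrightarrow\qquad
w_x \;=\; \frac{M_x^{-1}\,\mathbf{1}}{\mathbf{1}^{\top} M_x^{-1}\,\mathbf{1}},
\qquad
M_x(i,j) \;=\; \Big[ \sum_{y \in \mathcal{N}(x)} \big|I_i(y)-I_T(y)\big|\,\big|I_j(y)-I_T(y)\big| \Big]^{\beta}.
```

The fused estimate at $x$ is then the $w_x$-weighted vote over the atlases' propagated labels; excerpt [1] modifies this scheme by folding model uncertainty into the estimate of the correlations entering $M_x$.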
Atlas Label Fusion
The framework integrates groupwise multi-atlas label fusion and template-based medial modeling with Kalman filtering to generate quantitatively descriptive and temporally consistent models of valve dynamics. [1] We integrate multi-atlas label fusion, which leverages high-resolution images from another sample as prior spatial information, with parametric Gaussian hidden Markov models based on image intensities to create a robust method for determining ventricular cerebrospinal fluid volume. [2] The T1-weighted images were automatically parcellated for the hippocampus and amygdala, as well as the intracranial volume (ICV), total brain volume, and total gray and white matter, using a multi-atlas label fusion method implemented in MRICloud ( https://braingps. [3] We propose a new voxel/patch correspondence model for intensity-based multi-atlas label fusion strategies that leads to more accurate similarity measures, which play a key role in the final brain segmentation. [4] We also compare AdaPro with three other state-of-the-art methods: a statistical shape model based on synergistic object search and delineation, and two methods based on multi-atlas label fusion. [5] The multi-atlas label fusion (MALF) method is considered a highly accurate parcellation approach and is anticipated for clinical application to quantitatively evaluate early developmental processes. [6] DISCUSSION: Starting from the results of the Grand Challenges on brain tissue and structure segmentation held at Medical Image Computing and Computer-Assisted Intervention (MICCAI), this review analyses the development of the algorithms and discusses the trend from multi-atlas label fusion to deep learning. [7] After multi-atlas label fusion by majority voting, we possess noisy labels for each of the targeted LGE images. [8]
Label Fusion Method
In this paper, we propose a robust discriminative label fusion method under the multi-atlas framework. [1] The T1-weighted images were automatically parcellated for the hippocampus and amygdala, as well as the intracranial volume (ICV), total brain volume, and total gray and white matter, using a multi-atlas label fusion method implemented in MRICloud ( https://braingps. [2] Specifically, we extend the joint label fusion method by taking model uncertainty into account when estimating correlations among the predictions produced by different modalities. [3] In this work, a multi-atlas patch-based label fusion method is presented for automatic brain extraction from neonatal head MR images. [4] Compared to several existing state-of-the-art segmentation methods for subcortical structures, including a multi-atlas joint label fusion method and a representative 3D FCN method, the proposed method performed significantly better for a majority of the subcortical structures. [5] Compared to several existing state-of-the-art segmentation methods, including a multi-atlas joint label fusion method and three representative fully convolutional network methods, the proposed method performed significantly better for a majority of the 12 subcortical structures, with the overall mean Dice scores being respectively 0. [6] The evaluation results showed that our method was competitive with state-of-the-art label fusion methods in terms of accuracy. [7] Finally, hard labels of multiple attributes are adaptively fused into a soft label by the proposed multi-label fusion method, based on the idea of Bayesian inference, which makes the attribute labels suitable for regression tasks. [8] However, precise segmentation of brain subcortical structures in a magnetic resonance image is still difficult because (1) brain MRI typically suffers from low tissue contrast and (2) image patterns around the boundary of a structure are so similar that similarity-based and reconstruction-based label fusion methods achieve inaccurate results. [9] The model was evaluated on a set of cardiac CTA images in comparison with related shape-prior and local region-based methods and with multi-atlas joint label fusion methods, and experimental results show that it achieves competitive accuracy in segmenting the myocardial epicardium and endocardium. [10]
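Excerpt [9] contrasts similarity-based with reconstruction-based label fusion. In the reconstruction-based family, the target patch is approximated as a (typically sparse or non-negative) combination of atlas patches, and the resulting coefficients are reused as fusion weights. A small illustrative sketch, using non-negative least squares as the reconstruction step (this solver choice and the interface are assumptions, not any paper's exact formulation):

```python
import numpy as np
from scipy.optimize import nnls

def reconstruction_based_weights(target_patch, atlas_patches):
    """Reconstruct the target patch from atlas patches and reuse the
    non-negative coefficients as label fusion weights."""
    b = np.asarray(target_patch, dtype=float).ravel()
    # Each column of A is one flattened atlas patch.
    A = np.stack([np.asarray(p, dtype=float).ravel() for p in atlas_patches], axis=1)
    coeffs, _residual = nnls(A, b)
    total = coeffs.sum()
    if total == 0:
        # Degenerate case: fall back to uniform weights.
        return np.full(len(atlas_patches), 1.0 / len(atlas_patches))
    return coeffs / total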
Label Fusion Approach
Finally, we develop a label fusion approach to make a final classification decision for new testing samples. [1] Finally, we develop a label fusion approach to make a final classification decision for new testing samples. [2] Although slightly less accurate than our previously reported joint label fusion approach (left lung: 0. [3] The former labels the target volume by registering one or more pre-labeled atlases using a deformable registration method, in which case the result depends on the quality of the reference volumes, the registration algorithm, and, if more than one atlas is employed, the label fusion approach. [4]
Label Fusion Strategy
We first review various existing patch-based multi-atlas label fusion strategies. [1] We propose a new voxel/patch correspondence model for intensity-based multi-atlas label fusion strategies that leads to more accurate similarity measures, which play a key role in the final brain segmentation. [2] In contrast, previously popular multi-atlas segmentation (MAS) methods are relatively slow (as they rely on costly registrations), and even though sophisticated label fusion strategies have been proposed, DL approaches generally outperform MAS. [3]
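The patch-based strategies reviewed in [1] and modeled in [2] weight each atlas's vote by how well a small intensity patch around the voxel matches the corresponding target patch, instead of trusting the registration alone. A minimal sketch of locally weighted voting with a Gaussian patch-similarity kernel (the single patch scale and fixed bandwidth are simplifications for illustration):

```python
import numpy as np

def patch_weighted_vote(target_patch, atlas_patches, atlas_labels, sigma=1.0):
    """Fuse the centre-voxel labels proposed by several registered atlases.

    target_patch:  intensity patch around the target voxel.
    atlas_patches: list of same-shape intensity patches from the warped atlases.
    atlas_labels:  centre-voxel label proposed by each atlas.
    sigma:         bandwidth of the Gaussian similarity kernel.
    """
    target = np.asarray(target_patch, dtype=float).ravel()
    weights = np.array([
        np.exp(-np.sum((np.asarray(p, dtype=float).ravel() - target) ** 2)
               / (2.0 * sigma ** 2))
        for p in atlas_patches
    ])
    weights /= weights.sum()
    # Accumulate the weighted vote received by each candidate label.
    scores = {}
    for w, label in zip(weights, atlas_labels):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```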
Label Fusion Technique
We then conduct an empirical study on their cost-effectiveness, showing that the performance of existing active learning approaches is affected by many factors in hybrid classification contexts, such as the noise level of the data, the label fusion technique used, and the specific characteristics of the task. [1] We extend the graphical model used in label fusion techniques for the segmentation of multi-modality magnetic resonance brain images. [2] Additionally, the set of labels is merged using a label fusion technique that reduces the errors produced by the registration. [3]
Label Fusion Term
Second, an intensity prior information term and a label fusion term are constructed using intensity information from the initial lesion region, and these two terms are integrated into a region-based level set model. [1] We define a new energy functional by combining a weighted label fusion term, a bias-field-based image information fitting term, and a regularization term. [2]
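To make the vocabulary in [2] concrete, such level-set energies are composites of a few weighted terms. The schematic form below is only illustrative; the exact definitions of the individual terms and the trade-off parameters $\lambda$, $\mu$, $\nu$ vary from paper to paper:

```latex
E(\phi) \;=\; \lambda\, E_{\text{label}}(\phi)   % weighted label fusion term: prior from the fused atlas labels
        \;+\; \mu\, E_{\text{fit}}(\phi; b)      % bias-field-based intensity fitting term
        \;+\; \nu\, E_{\text{reg}}(\phi)         % regularization of the level-set function \phi
```

Here $\phi$ is the level-set function and $b$ the estimated bias field; minimizing $E(\phi)$ evolves the contour toward a segmentation that agrees with both the fused label prior and the observed intensities.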