Layer Fusion: A Research Survey
Layer Fusion - To obtain better fusion performance, an infrared and visible image fusion algorithm based on latent low-rank representation (LatLRR) nested with rolling guided image filtering (RGIF) is proposed; it is a novel solution that integrates two-level decomposition and three-layer fusion. [1] Scanning electron microscopy (SEM) was carried out for interlayer fusion and fractographic morphological characterization of the specimens. [2] Furthermore, layer-to-layer fusion and the associated side-surface quality were improved specifically under undersaturated conditions. [3] Meanwhile, multilayer fusion is used to obtain more comprehensive information on feature points, and whether a pair of points corresponds depends on the similarity of their descriptors. [4] Samples in the form of a cube (30x30x30 mm) were created by layer-by-layer fusion of a thermoplastic polymer on an FDM (fused deposition modeling) 3D printer. [5] This architecture is further optimized by exploiting different levels of parallelism, layer fusion, and fully leveraging digital signal processing blocks (DSPs). [6] In this paper, a new backbone network is proposed to perform cross-layer fusion of multi-scale BEV feature maps, which makes full use of various information for detection. [7] Operator fusion (or kernel/layer fusion) is a key optimization in many state-of-the-art DNN execution frameworks, such as TensorFlow, TVM, and MNN, that aims to improve the efficiency of DNN inference. [8] [9] The defects were replaced by plastic implants obtained by layer-by-layer fusion (FDM printing, fused deposition modeling) and fixed to the jaw body with titanium screws.
[10] Design/methodology/approach: An accurate detector called the rotation region-based convolutional neural network (CNN) with multilayer fusion and multidimensional attention (M2R-Net) is proposed in this paper. [11] Action potential-elicited Ca2+ influx at these release sites triggers zippering of SNAREs embedded in the SV and plasma membranes to drive bilayer fusion and release of neurotransmitters that activate downstream targets. [12] We analyze when and why fusion may result in runtime speedups, and study three types of layer fusion: (a) 3-by-3 depthwise convolution with 1-by-1 convolution, (b) 3-by-3 convolution with 1-by-1 convolution, and (c) two 3-by-3 convolutions. [13] This paper aims to increase the tensile strength of acrylonitrile butadiene styrene (ABS) in its weakest direction (the Z axis), where poor interlayer fusion and air gaps between extruded trails reduce strength. [14] Then, combining the results of unimodal recognition, we construct a CNN model based on two-layer fusion for multimodal biometric recognition. [15] Significance: Membrane lipid homeostasis within cells requires cooperation between vesicular transport and lipid transfer proteins that mediate lipid exchange between membranes independent of bilayer fusion. [16] Herein, we mix entrapped sub-attoliter volumes of liposomes (~135 nm diameter) via lipid bilayer fusion, facilitated by the hybridization of membrane-anchored lipidated oligonucleotides. [17] Two modules (the inner-connected module and the attention skip-layer fusion) are incorporated. [18] This architecture is further optimized by exploiting different levels of parallelism and layer fusion to achieve low latency for RSI segmentation tasks. [19] Fused deposition modeling (FDM) has been one of the most accessible AM methods, guiding thermoplastic filaments to provide accurate and easy production of 3D objects by layer-by-layer fusion.
[20] DED could build solid NiTi alloys with good interlayer fusion and phase-transformation characteristics. [21] Aiming at the uneven distribution of haze concentration and the color imbalance in hazy-weather images, a natural hazy-image enhancement method combining multilayer fusion with chunk-based processing is proposed. [22] Multi-layer fusion is used to obtain L-M-I filter banks and form new cepstral features. [23] It has three key features: 1) selective-caching-based layer fusion to minimize external memory access (EMA), 2) a memory compaction scheme for a smaller on-chip memory footprint, and 3) a cyclic ring core architecture to increase throughput with improved core utilization. [24]
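Several of the surveyed papers ([8], [9], [13]) treat layer fusion as merging adjacent linear operators into a single kernel. As a minimal numpy sketch (not any particular framework's implementation), a 1-by-1 convolution that follows a 3-by-3 convolution with no activation in between can be folded into one 3-by-3 convolution:

```python
import numpy as np

def conv2d(x, w, b):
    # x: (Cin, H, W), w: (Cout, Cin, kh, kw), b: (Cout,); 'valid' cross-correlation
    cin, H, W = x.shape
    cout, _, kh, kw = w.shape
    out = np.zeros((cout, H - kh + 1, W - kw + 1))
    for o in range(cout):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[o]) + b[o]
    return out

rng = np.random.default_rng(0)
x  = rng.standard_normal((3, 8, 8))
w1 = rng.standard_normal((4, 3, 3, 3)); b1 = rng.standard_normal(4)  # 3x3 conv
w2 = rng.standard_normal((5, 4, 1, 1)); b2 = rng.standard_normal(5)  # 1x1 conv

# Unfused: run the two layers back to back (note: no nonlinearity in between).
y_ref = conv2d(conv2d(x, w1, b1), w2, b2)

# Fused: the 1x1 conv is a channel-wise linear map, so it folds into the 3x3 weights.
m = w2[:, :, 0, 0]                            # (5, 4)
w_fused = np.einsum('om,micd->oicd', m, w1)   # (5, 3, 3, 3)
b_fused = m @ b1 + b2
y_fused = conv2d(x, w_fused, b_fused)

assert np.allclose(y_ref, y_fused)
```

This is the algebra behind case (b) in [13]; with a nonlinearity between the layers the weights can no longer be merged, and fusion then means executing both layers inside one kernel instead.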
Convolutional Neural Network
We propose a novel Multi-modal Multi-layer Fusion Convolutional Neural Network (mmfCNN), which aims to find a discriminative model for recognizing the subtle differences between live and spoof faces. [1] Aiming at the "memory wall" problem that occurs when training convolutional neural networks with the BN algorithm, a training method that splits the BN layer and uses multi-layer fusion calculation is proposed to reduce memory access during model training. [2] More specifically, in contrast to the majority of existing data-driven prognostic approaches for RUL estimation, which are developed on a single deep model and can hardly maintain satisfactory generalization performance across various prognostic scenarios, the proposed HDNN framework consists of two parallel paths (one based on Long Short-Term Memory (LSTM) and one based on convolutional neural networks (CNN)) followed by a fully connected multilayer fusion neural network, which acts as the fusion center combining the outputs of the two paths to form the target RUL. [3] In this paper, we propose a multilayer fusion approach by means of a shared-parameter (dual-stream) convolutional neural network, where each network accepts RGB data and a novel color-based texture descriptor, namely the Orthogonal Combination-Local Binary Coded Pattern (OC-LBCP), for periocular recognition in the wild. [4] In multiple-feature learning with Deep Convolutional Neural Networks (DCNNs) or machine learning methods for large-scale person identification in the wild, the key is to design an appropriate strategy for decision-layer or feature-layer fusion that can enhance discriminative power. [5] The proposed deep convolutional neural network is capable of automatically extracting high-level features from raw signals or low-level features and optimally selecting the combination of extracted features via multi-layer fusion to satisfy any damage identification objective. [6]
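The two-path fusion idea described for the HDNN framework [3] can be sketched as follows; all weights and sizes here are invented, and random projections stand in for the trained LSTM and CNN paths:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical feature extractors standing in for the LSTM and CNN paths:
# each maps a raw input window to a fixed-length feature vector.
x = rng.standard_normal(32)               # one input window
W_a = rng.standard_normal((16, 32))
W_b = rng.standard_normal((16, 32))
feat_a = relu(W_a @ x)                    # "path A" features
feat_b = relu(W_b @ x)                    # "path B" features

# Fusion center: concatenate both paths and apply a fully connected
# fusion layer that produces the final scalar prediction (e.g. the RUL).
fused_in = np.concatenate([feat_a, feat_b])   # (32,)
W_fuse = rng.standard_normal((1, 32))
b_fuse = rng.standard_normal(1)
prediction = (W_fuse @ fused_in + b_fuse)[0]
```

In the real framework the fusion layer is trained jointly with both paths, so it learns how much to trust each path per scenario rather than using fixed weights.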
Decision Layer Fusion
Finally, the decision layer fusion is defined as the product of the chamber weights and the feature signals obtained by wavelet multi-layer decomposition. [1] In this paper, transfer learning is used to implement emotion data labeling, continuous conditional random fields identify emotions based on data collected from smartphones and smart clothing, respectively, and finally decision layer fusion is applied for emotion classification prediction. [2] In multiple-feature learning with Deep Convolutional Neural Networks (DCNNs) or machine learning methods for large-scale person identification in the wild, the key is to design an appropriate strategy for decision-layer or feature-layer fusion that can enhance discriminative power. [3] Finally, a decision layer fusion algorithm combines the recognition results of the two sensors. [4]
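A common decision-layer rule, as in the two-sensor setting of [4], is to combine per-class posteriors from independent classifiers; a minimal sketch using the product rule with hypothetical probabilities:

```python
import numpy as np

# Posterior probabilities over 3 classes from two independent classifiers
# (the numbers are invented for illustration).
p_sensor1 = np.array([0.7, 0.2, 0.1])
p_sensor2 = np.array([0.5, 0.4, 0.1])

# Product rule: multiply per-class posteriors, then renormalize so the
# fused scores again form a probability distribution.
fused = p_sensor1 * p_sensor2
fused /= fused.sum()

decision = int(np.argmax(fused))   # class 0 wins here
```

Weighted-sum rules are equally common; the product rule sharpens agreement between classifiers, while averaging is more robust when one classifier is poorly calibrated.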
Base Layer Fusion
For base layer fusion, a fusion strategy based on the detail and energy measurements of the source images is proposed to determine the pixel values of the fused image's base layer, such that the energy loss of fusion is reduced and texture detail features are highlighted to retain more source-image detail. [1] Maximum and averaging fusion rules are adopted for base layer fusion. [2]
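The base/detail decomposition behind the averaging rule of [2] can be sketched as follows; a crude box blur stands in for whatever smoothing or guided filter the cited papers actually use:

```python
import numpy as np

def box_blur(img, k=3):
    # Simple box filter with edge replication; stands in for the smoothing
    # step that separates the base layer from the detail layer.
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

rng = np.random.default_rng(2)
ir, vis = rng.random((16, 16)), rng.random((16, 16))   # toy source images

# Two-scale decomposition: base = low frequencies, detail = residual.
base_ir, base_vis = box_blur(ir), box_blur(vis)
det_ir, det_vis = ir - base_ir, vis - base_vis

# Base-layer fusion by averaging; detail-layer fusion by max-absolute choice.
base_f = 0.5 * (base_ir + base_vis)
det_f = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
fused = base_f + det_f
```

The decomposition is exactly invertible per image (base + detail reconstructs the source), which is what makes independent fusion rules for the two layers well defined.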
Layer Fusion Strategy
Finally, the category of each pixel is accurately determined by the decision fusion and weighted output-layer fusion strategy. [1] Then, we propose a weighted co-association matrix-based fusion algorithm (WCMFA) to detect the inherent community structure in attributed networks using multi-layer fusion strategies. [2] Firstly, the base parts and detail features of the image were separated; the base parts were denoised and filtered by DT-CWT to obtain the fused base parts; the deep learning model VGG-S was selected to extract the detail features of the image, and the fused details were then obtained by a multi-layer fusion strategy. [3] Second, in order to further improve the accuracy of multiscale face detection, a multilayer fusion strategy is proposed, which learns facial texture features from the lower layers in more detail. [4] By exploring the optimization opportunities in computational graphs, we propose a layer fusion strategy that dramatically decreases the number of scalar computation layers, such as Batch Normalization and Scale. [5] Finally, a multi-layer fusion strategy is used to capture informative clues in images. [6]
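The graph-level strategy of [5], which removes scalar layers such as Batch Normalization and Scale, amounts to folding a per-channel scale and shift into the preceding layer's weights; a sketch with a dense layer standing in for a convolution (the fold works per output channel either way):

```python
import numpy as np

rng = np.random.default_rng(3)

# A linear layer standing in for any convolution.
W, b = rng.standard_normal((4, 8)), rng.standard_normal(4)
# Inference-time BatchNorm parameters for the 4 output channels.
gamma, beta = rng.standard_normal(4), rng.standard_normal(4)
mean, var, eps = rng.standard_normal(4), rng.random(4) + 0.1, 1e-5

x = rng.standard_normal(8)
# Unfused: linear layer followed by a separate scalar BatchNorm layer.
y_ref = gamma * ((W @ x + b) - mean) / np.sqrt(var + eps) + beta

# Fused: fold the per-channel scale/shift into the layer's weights and bias,
# removing the BatchNorm layer from the graph entirely.
scale = gamma / np.sqrt(var + eps)
W_f = W * scale[:, None]
b_f = (b - mean) * scale + beta
y_fused = W_f @ x + b_f

assert np.allclose(y_ref, y_fused)
```

This fold is valid only at inference time, when the BatchNorm statistics are frozen.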
Layer Fusion Network
The proposed predictor is a multi-layer fusion network that fuses different levels of features. [1] In view of this, this paper studies an SDN-based IP and optical layer cooperative control architecture, establishes an IP and optical layer fusion network perception model, and proposes an SDN-based IP and optical layer cooperative operation and maintenance control scheme, providing a theoretical basis and demonstration reference for building a new generation of power communication network with unified resource control, coordinated network and data scheduling, and rapid response to business needs. [2] For the pixel complementation and prediction-output edge-smoothing problem, this paper proposes a Gated Multi-layer Fusion Network. [3] In general, the proposed framework adopts two fusion networks: a side-output decision fusion network (SODFN) and a fully convolutional layer fusion network (FCLFN). [4] To overcome this problem, this paper proposes a two-layer fusion network (TLFN) indoor localization method for VLC. [5]
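A gated fusion of two feature maps, in the spirit of the Gated Multi-layer Fusion Network [3], can be sketched as follows; the gate projection here is a random stand-in for a learned 1-by-1 convolution:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(4)

# Feature maps from two different layers, already resized to the same shape.
f_shallow = rng.standard_normal((8, 4, 4))
f_deep = rng.standard_normal((8, 4, 4))

# Gate logits computed from both inputs; a learned 1x1 conv would normally
# produce these, here a random channel projection stands in for it.
Wg = rng.standard_normal((8, 16))
stacked = np.concatenate([f_shallow, f_deep], axis=0)      # (16, 4, 4)
g = sigmoid(np.einsum('oc,chw->ohw', Wg, stacked))         # (8, 4, 4), in (0, 1)

# Gated fusion: the gate decides, per channel and position, how much of
# each layer to keep.
fused = g * f_shallow + (1.0 - g) * f_deep
```

Unlike plain addition or concatenation, the gate lets the network suppress whichever layer is unreliable at a given spatial position.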
Layer Fusion Neural
A multi-layer fusion neural network (MFNN) has been designed to capture artifacts at different levels. [1] More specifically, in contrast to the majority of existing data-driven prognostic approaches for RUL estimation, which are developed on a single deep model and can hardly maintain satisfactory generalization performance across various prognostic scenarios, the proposed HDNN framework consists of two parallel paths (one based on Long Short-Term Memory (LSTM) and one based on convolutional neural networks (CNN)) followed by a fully connected multilayer fusion neural network, which acts as the fusion center combining the outputs of the two paths to form the target RUL. [2] In a closely related formulation, the proposed HDNN framework consists of two parallel paths (one LSTM and one CNN) followed by a fully connected multilayer fusion neural network which acts as the fusion centre combining the output of the two paths to form the target RUL. [3]
Layer Fusion Technique
By re-scheduling the training pipeline, we use a patch-based layer fusion technique and reduce the off-chip memory bandwidth by 97%. [1] To tackle (iii), we propose intra- and inter-layer fusion techniques so that the entire BNN inference execution can be packed into a single GPU kernel, avoiding the high cost of frequent kernel launching and releasing. [2]
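Patch- or tile-based inter-layer fusion ([1], [2]) keeps each tile's intermediate activations in local memory instead of writing the whole feature map off-chip. A 1-D sketch with two 3-tap layers shows the idea: each input tile carries a small halo so the fused result matches the layer-by-layer computation exactly:

```python
import numpy as np

def conv1d(x, k):
    # 'valid' 1-D correlation
    n, m = len(x), len(k)
    return np.array([np.dot(x[i:i + m], k) for i in range(n - m + 1)])

rng = np.random.default_rng(5)
x = rng.standard_normal(64)
k1, k2 = rng.standard_normal(3), rng.standard_normal(3)

# Layer-by-layer reference: the full intermediate (length 62) is materialized.
y_ref = conv1d(conv1d(x, k1), k2)          # length 60

# Inter-layer fusion: process the input in tiles with a 4-sample halo
# (2 per 3-tap layer), so each tile's intermediate stays local.
tile = 10
out = []
for t in range(0, len(y_ref), tile):
    cur = min(tile, len(y_ref) - t)
    x_tile = x[t : t + cur + 4]            # tile plus halo
    out.append(conv1d(conv1d(x_tile, k1), k2))
y_fused = np.concatenate(out)

assert np.allclose(y_ref, y_fused)
```

The trade-off is redundant halo computation at tile borders in exchange for never spilling the intermediate feature map; the same scheme extends to 2-D tiles and deeper layer chains with proportionally larger halos.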
Layer Fusion Feature
Finally, we feed multi-layer fusion features into a shared prediction module (shared PM). [1] We present a cross-layer fusion feature network (CLFF-Net) for both high-quality region proposal generation and accurate object detection. [2]
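Cross-layer feature fusion, as in networks like CLFF-Net [2], typically matches channel counts with a 1-by-1 projection and upsamples the deeper map before combining; a minimal sketch where random weights stand in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(6)

# Shallow, high-resolution feature map and a deep, low-resolution one.
shallow = rng.standard_normal((16, 8, 8))
deep = rng.standard_normal((32, 4, 4))

# 1x1 projection to match channel counts (random stand-in for a learned conv).
P = rng.standard_normal((16, 32))
deep_proj = np.einsum('oc,chw->ohw', P, deep)            # (16, 4, 4)

# Nearest-neighbour 2x upsampling, then element-wise addition: the shallow
# layer keeps spatial detail, the deep layer contributes semantics.
deep_up = deep_proj.repeat(2, axis=1).repeat(2, axis=2)  # (16, 8, 8)
fused = shallow + deep_up
```

Concatenation along the channel axis is the other common choice; addition keeps the channel count fixed, which is why detection backbones often prefer it.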
Layer Fusion Layer
The two parallel paths are followed by a multilayer fusion layer acting as a fusion centre that combines localized features. [1] After dual-channel feature extraction, the attention layer fusion layer is used to convert the weighted values of the LSTM hidden variables, so the stock price can be predicted from the news text. [2]
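The attention-based fusion over LSTM hidden variables in [2] can be sketched as softmax-weighted pooling over the hidden-state sequence; the query vector and sizes here are invented for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(7)

# Hidden states from an LSTM over T time steps (random stand-ins).
T, d = 6, 8
H = rng.standard_normal((T, d))

# Attention fusion layer: score each hidden state against a learned query,
# then fuse the sequence into one context vector using the softmax weights.
q = rng.standard_normal(d)
weights = softmax(H @ q)        # (T,) attention weights, sum to 1
context = weights @ H           # (d,) fused representation
```

The fused context vector then feeds the final prediction head, letting the model emphasize the time steps (e.g. news items) most relevant to the target.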
Layer Fusion Model
In order to further improve the accuracy of these models, a two-layer fusion model based on SOC fragments is proposed in this paper. [1] To obtain a more efficient fusion of RGB image and point cloud features, we propose a multi-layer fusion model, which conducts nonlinear and iterative combinations of features from multiple convolutional layers and merges global and local features effectively. [2]
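A two-layer fusion model in the stacking sense of [1] can be sketched as a second-stage learner over base-model predictions; the data and the two base models below are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy regression data and two hypothetical first-layer base models.
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)

pred1 = X @ np.array([0.9, -1.8, 0.4])   # base model A's predictions
pred2 = X @ np.array([1.1, -2.1, 0.7])   # base model B's predictions

# Second layer: a fusion model learns how to weight the base predictions
# (ordinary least squares on the stacked outputs, plus an intercept).
Z = np.column_stack([pred1, pred2, np.ones(100)])
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
fused = Z @ w

# The fused predictor fits no worse than either base model alone, because
# each base model is a special case of the fusion weights.
err = lambda p: np.mean((p - y) ** 2)
assert err(fused) <= min(err(pred1), err(pred2)) + 1e-12
```

In practice the second layer is fit on held-out predictions rather than the training set, to keep the fusion weights from rewarding overfit base models.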