Regularization Methods (正则化方法): A Research Survey
If the covariates are high-dimensional, parsimonious propensity score models can be developed by regularization methods including LASSO and its variants. [1] Moreover, regularization methods are introduced to improve prediction performance by taking the Bayesian interpretation of the parameters into consideration. [2] Automatic Machine Learning (AutoML) uses automated data-driven methods to realize the selection of hyper-parameters, neural network architectures, regularization methods, etc. [3] To obtain a stable solution and approximation, we need to apply regularization methods. [4] In order to solve the ill-posed inverse problem of CLT imaging, some numerical optimization or regularization methods need to be applied. [5] Regularization methods aim at reducing this sensitivity. [6] In this literature, we compare three linear model selection and regularization approaches (shrinkage, subset selection, dimension reduction) and nine candidate models (OLS regression, Ridge regression, Lasso regression, Elastic net, best subset selection, forward subset selection, backward subset selection, PCR, PLS) based on leave-one-out cross-validation (LOOCV) prediction error. [7] We evaluate our regularization methods through several experiments on both small and large datasets. [8] The inverse optimization problem is solved by using analytical, numerical and regularization methods. [9] , optimizer, regularization methods, and activation functions) choices that affect the quality of the DL models, and consequently software quality. [10] Regularization methods, such as spatial regularization or temporal regularization, used in existing DCF trackers aim to enhance the capacity of the filters. [11] The simulation results show that Tikhonov regularization outperforms the other regularization methods in terms of both computation time and estimation error.
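Several snippets above (Ridge regression, the Tikhonov comparison in [11]) rest on the same idea: adding a penalty term stabilizes an ill-conditioned least-squares fit. A minimal pure-Python sketch, solving the regularized normal equations (A^T A + λI)x = A^T b for a hypothetical two-parameter toy problem (the data and the 2×2 solver are illustrative assumptions, not from any cited paper):

```python
# Sketch of Tikhonov (ridge) regularization: solve
# (A^T A + lam * I) x = A^T b instead of the ill-conditioned A^T A x = A^T b.
# Pure-Python 2x2 Cramer's-rule solver; illustrative only.

def ridge_solve_2x2(A, b, lam):
    """Solve the 2-parameter regularized normal equations."""
    m00 = sum(r[0] * r[0] for r in A) + lam   # (A^T A)_00 + lam
    m01 = sum(r[0] * r[1] for r in A)         # (A^T A)_01
    m11 = sum(r[1] * r[1] for r in A) + lam   # (A^T A)_11 + lam
    r0 = sum(r[0] * y for r, y in zip(A, b))  # (A^T b)_0
    r1 = sum(r[1] * y for r, y in zip(A, b))  # (A^T b)_1
    det = m00 * m11 - m01 * m01               # strictly positive once lam > 0
    return ((m11 * r0 - m01 * r1) / det,
            (m00 * r1 - m01 * r0) / det)

# Nearly collinear columns make the unregularized problem unstable;
# the ridge solution remains well behaved.
A = [[1.0, 1.0001], [1.0, 0.9999], [1.0, 1.0]]
b = [2.0, 2.0, 2.0]
x_ridge = ridge_solve_2x2(A, b, lam=0.1)
```

With λ > 0 the determinant is bounded away from zero, so the near-collinearity of the two columns no longer blows up the solution; the two coefficients come out nearly equal and slightly shrunk toward zero.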
[12] We do this using parametric covariance functions combined with shrinkage‐based regularization methods within an Ensemble Transform Kalman Filter inversion setup. [13] The concepts of sufficient dimension reduction (SDR) and regularization methods are combined to introduce SMAVE-EN. [14] After choosing a network architecture, an energy function (or loss) is minimized, choosing from a wide variety of optimization and regularization methods. [15] In this paper, we propose an original CNN training strategy that brings together ideas from both dropout-like regularization methods and solutions that learn discriminative features. [16] Regularization methods such as the least absolute shrinkage and selection operator (LASSO) are commonly used in high-dimensional data to achieve sparser solutions. [17] Different from the regularization methods (e. [18] We propose a framework of regularization methods, called density-fixing, that can be used commonly for supervised and semi-supervised learning to tackle this problem. [19] We analyze some regularization methods and activation functions and their impact on the effectiveness of our architecture. [20] Through the efforts of researchers, a number of regularization methods have been presented. [21] An algorithm for solving this problem is proposed, which is resistant to informational noise and computational errors, and is based on regularization methods and constructions of guaranteed control theory. [22] This paper presents a brief review of regularization methods and shows that the combination of two techniques could preserve symmetries in all orders of perturbation theory. [23] Furthermore, we comprehensively investigate the effect of combining VIB with other regularization methods, including data augmentation and auxiliary data.
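The LASSO mentioned in snippet [16] achieves sparsity through the soft-thresholding operator, the building block of coordinate-descent solvers. A minimal sketch (the toy coefficient vector is a hypothetical example, not data from the cited work):

```python
# Sketch of the LASSO's soft-thresholding operator: the proximal map of
# lam * |w| shrinks every coefficient toward zero and sets coefficients
# inside [-lam, lam] exactly to zero, which is what produces sparsity.

def soft_threshold(z, lam):
    """prox of lam*|.| evaluated at z."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Small coefficients are zeroed out -> a sparser solution.
raw = [3.0, 0.2, -1.5, -0.05]
sparse = [soft_threshold(z, lam=0.5) for z in raw]
# sparse == [2.5, 0.0, -1.0, 0.0]
```

Coordinate-descent LASSO solvers apply exactly this map, one coordinate at a time, to the partial residual correlation; ridge regression, by contrast, only shrinks multiplicatively and never produces exact zeros.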
[24] Both methods improve the interpretability of the model's gradients; the first method exceeds most regularization methods except adversarial training on MNIST, and the second method even exceeds adversarial training under white-box attacks on CIFAR-10 and CIFAR-100. [25] To obtain the proposed stable method, the identification problem is divided into three subproblems, two of which present numerical instability, so regularization methods must be applied to obtain their solutions in a stable form. [26] Parametric techniques such as MUSIC solve the TomoSAR problem in a different manner than the regularization methods do; hence, each approach demands different methodologies for the proper estimation of its parameters. [27] To obtain mesh-independent results, a possible solution is to resort to regularization methods, but only a few of them are compatible with dynamic explicit simulations, especially for ductile failure. [28] The ill-conditioned systems of equations were solved by regularization methods based on the singular value decomposition. [29] For convolutional neural networks (CNNs), regularization methods such as DropBlock and Shake-Shake have demonstrated improvements in generalization performance. [30] In this framework, sparse-representation-based feature selection and regularization methods are very attractive. [31] Algorithms for solving ill-conditioned systems are based on regularization methods. [32] In the sporadic cohort, boosting-based ML algorithms performed best on the training data set, while regularization methods performed best on unseen data. [33] The addition of model selection and regularization methods to the traders' learning algorithm is shown to reduce but not eliminate overfitting and the resulting excess volatility. [34] Regularization methods based on LASSO and Ridge regression have been employed in model selection and validation.
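Snippet [29] names DropBlock and Shake-Shake, both descendants of dropout. A minimal sketch of the underlying mechanism, inverted dropout (the activation values and keep probability are illustrative assumptions):

```python
import random

# Sketch of inverted dropout, the prototypical stochastic regularizer behind
# variants such as DropBlock: at training time each unit is kept with
# probability p_keep and rescaled by 1/p_keep, so the expected activation
# is unchanged and no rescaling is needed at test time.

def dropout(activations, p_keep, rng):
    return [a / p_keep if rng.random() < p_keep else 0.0
            for a in activations]

rng = random.Random(0)          # seeded for reproducibility
out = dropout([1.0, 1.0, 1.0, 1.0], p_keep=0.5, rng=rng)
# Each surviving unit is rescaled to 2.0; the dropped units become 0.0.
```

DropBlock applies the same idea to contiguous spatial blocks of a feature map rather than independent units, which suits the spatial correlations in CNN activations.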
[35] In this article, we contribute to this endeavor by investigating the fidelity of regularization methods such as the Lasso. [36] To fill the gap, an improved inversion algorithm is proposed, in which the dichotomy and high-order total variation (TV) regularization methods are introduced. [37] Post-smoothing (PS) and regularization methods, which aim to reduce noise, also tend to reduce resolution and introduce bias. [38] Regularization methods are considered efficient ways to ease noise sensitivity by absorbing prior information into the objective function. [39] In lockstep, regularization methods, which aim to prevent overfitting by penalizing the weight connections or turning off some units, have been widely studied as well. [40] A long-standing problem for kernel-based regularization methods is their high computational complexity O(N^3), where N is the number of data points. [41] The MI-related datasets, input formulation, frequency ranges, and preprocessing and regularization methods were also reviewed. [42] In this paper, we review the feature learning, optimization, and regularization methods that form the core of deep network technologies. [43] However, current regularization methods face important challenges in this type of unstable electrical-activity scenario. [44] To build an efficient deep neural network model, it is important that parameters such as the number of hidden layers and the number of nodes in each layer, as well as training details such as the learning rate and regularization methods, be investigated in detail. [45] However, due to the scarcity of analytical solutions for non-Newtonian fluid flows and the widespread use of regularization methods, performing rigorous code verification is a challenging task.
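Snippet [39] describes the weight-penalty family of regularizers. Folding an L2 penalty into a gradient step yields the familiar "weight decay" update; a one-function sketch (the learning rate, decay constant, and weights are illustrative assumptions):

```python
# Sketch of L2 weight decay inside an SGD step: penalizing (lam/2)*||w||^2
# adds lam * w to the gradient, so each step shrinks the weights
# multiplicatively by (1 - lr * lam) before applying the data gradient.

def sgd_step_with_decay(w, grad, lr, lam):
    return [(1.0 - lr * lam) * wi - lr * gi for wi, gi in zip(w, grad)]

w = [1.0, -2.0]
w = sgd_step_with_decay(w, grad=[0.0, 0.0], lr=0.1, lam=0.5)
# With a zero data gradient the weights simply shrink: [0.95, -1.9]
```

This is the "penalizing the weight connections" half of snippet [39]; dropout-style unit shutoff is the other half.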
[46] In this paper, in order to solve the multiple-sets split feasibility problem in Hilbert spaces, we introduce regularization methods of Lavrentiev's and Bruck-Bakushinskii's type, where the iterative parameter is chosen independently of the operator norm. [47] To apply the Taylor series expansion, regularization methods have been adapted. [48] Accordingly, regularization methods are required to overcome the ill-posedness issue. [49] The experimental results demonstrate that the regularization methods can improve the performance of CNN-based anomaly detection models for the software-defined networking (SDN) environment. [50]
Ill-Posed Problems
Convergence and stability results for both methods are proven, characterizing the iterative methods as regularization methods for this ill-posed problem. [1] Although some regularization methods have been proposed to alleviate this ill-posed problem, most of them are formulated over the whole time domain, which may make the reconstruction inefficient and inaccurate because the impact force is normally limited to some portion of the impact duration. [2] Numerical examples on three typical ill-posed problems are conducted with detailed comparison to some usual direct and iterative regularization methods. [3] Inferring local soot temperature and volume fraction distributions from radiation emission measurements of sooting flames may involve solving nonlinear, ill-posed and high-dimensional problems, which are typically addressed by applying regularization methods to ill-posed problems with large matrices. [4] The main approaches to solving inverse problems are considered: minimization of some functional and multiple solution of the direct problem of calculating the magnetic field; and solution of an ill-posed problem and determination, using regularization methods, of a pseudosolution that is stable to small perturbations. [5] This paper presents a neural dynamical network to compute the generalized and restricted singular value decompositions (GSVD/RSVD) in regularization methods for ill-posed problems. [6] Deconvolution of DRT from EIS is a challenging ill-posed problem that requires regularization methods. [7]
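The SVD-based view in snippets [5] and [6] can be made concrete with filter factors: inverting an ill-posed operator naively divides by each singular value, so tiny singular values amplify noise, while Tikhonov regularization damps them. A sketch with hypothetical singular values (not taken from any cited paper):

```python
# Sketch of spectral filtering for an ill-posed problem: given singular
# values sigma_i, Tikhonov regularization replaces exact inversion 1/sigma_i
# with f_i / sigma_i where the filter factor
#     f_i = sigma_i^2 / (sigma_i^2 + lam^2)
# is ~1 for large singular values and ~0 for tiny (noise-amplifying) ones.

def tikhonov_filter_factors(singular_values, lam):
    return [s * s / (s * s + lam * lam) for s in singular_values]

# A well-determined, a moderate, and a nearly vanishing singular value.
f = tikhonov_filter_factors([10.0, 1.0, 1e-4], lam=0.1)
# f[0] is close to 1 (component kept); f[2] is ~1e-6 (component suppressed).
```

Truncated SVD is the limiting case where f_i is a hard 0/1 cutoff instead of this smooth roll-off.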
Various Regularization Methods
This article studies the various regularization methods that are required in the inverse solution process to ensure the stability of the solution. [1] We provide an overview of the methods available, illustrate them on a series of widely used benchmark problems, and discuss the accuracy–efficiency trade-off of various regularization methods. [2] With high-dimensional data, sparsity of the precision matrix is often assumed, and various regularization methods have been applied for estimation. [3] These images are used to train and test convolutional networks with various parameters: architecture, including the number of layers; epochs of learning; optimization methods; and also when applying various regularization methods, including dropout and data augmentation. [4] Complex inverse problems such as radar imaging and CT/EIT imaging are well investigated in mathematical algorithms with various regularization methods. [5] Various regularization methods are introduced into the seismic inversion to make the inversion results comply with the prespecified characteristics. [6]
Iterative Regularization Methods
, by iterative regularization methods, practically infeasible. [1] Numerical examples on three typical ill-posed problems are conducted with detailed comparison to some usual direct and iterative regularization methods. [2] In this paper we propose a new class of iterative regularization methods for solving ill-posed linear operator equations. [3] This method effectively combines iterative regularization methods and continuous regularization methods. [4] Generally, the known variational procedures and iterative regularization methods deliver approximations with accuracy estimates greater in order than the error levels in the input data. [5]
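The prototypical iterative regularization method for linear operator equations is Landweber iteration, where early stopping plays the role of the regularization parameter. A minimal pure-Python sketch on a hypothetical diagonal toy system (matrix, step size, and iteration count are illustrative assumptions):

```python
# Sketch of Landweber iteration for A x = b:
#     x_{k+1} = x_k + omega * A^T (b - A x_k)
# It converges for 0 < omega < 2 / sigma_max(A)^2; for noisy data one stops
# early, and the stopping index acts as the regularization parameter.

def landweber(A, b, omega, iters):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [bi - sum(aij * xj for aij, xj in zip(row, x))
             for row, bi in zip(A, b)]                      # residual b - A x
        g = [sum(A[i][j] * r[i] for i in range(len(A)))     # gradient A^T r
             for j in range(n)]
        x = [xj + omega * gj for xj, gj in zip(x, g)]
    return x

A = [[2.0, 0.0], [0.0, 1.0]]
b = [4.0, 1.0]
x = landweber(A, b, omega=0.2, iters=100)   # converges toward [2.0, 1.0]
```

Each component converges at a rate set by its singular value, which is exactly why early stopping filters the poorly determined (small-singular-value) components first.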
Two Regularization Methods
To achieve this goal, we adopt two regularization methods, intermediate CTC and stochastic depth, to train a model whose performance does not degrade much after pruning. [1] Two regularization methods, the well-known Tikhonov solution and a method that accounts for the areas of different patches, are employed to obtain a stable solution. [2] Then we introduce two regularization methods to solve the system in which the diffusion coefficients are globally Lipschitz or locally Lipschitz under some a priori assumptions on the sought solutions. [3] Accordingly, two regularization methods are introduced at the representation level, i. [4] In this paper, we propose a data-driven portfolio framework based on two regularization methods, glasso and tlasso, that provide sparse estimates of the precision matrix by penalizing its L1-norm. [5]
New Regularization Methods
We propose new regularization methods based on conditional likelihood for simultaneous autoregressive-order and parameter estimation with the number of regimes fixed, and use a regularized Bayesian information criterion for selection of the number of regimes. [1] This is done by developing two new regularization methods, based on dynamic programming techniques. [2] In this paper, we consider new regularization methods for linear inverse problems of dynamic type. [3] Under some weak a priori assumptions on the sought solution, we propose two new regularization methods to stabilize the problem when the source term is a globally or locally Lipschitz function. [4]
Existing Regularization Methods
While most existing regularization methods exploit sparsity regularization to improve detection performance, CRLEDD provides a unique perspective by ensuring positive semi-definiteness of the sparsified precision matrix used in LDA, which is different from the regular regularization method (e. [1] This paper proposes a denoising method based on Shearlet threshold-shrinkage and TGV, making full use of their characteristics, which can recover both edges and fine details much better than the existing regularization methods. [2] However, naively applying the existing regularization methods can result in misleading outcomes due to model misspecification. [3] We study the effects of different types of endogeneity on existing regularization methods and prove their inconsistencies. [4]
Conventional Regularization Methods
Because of the difficulty of applying conventional regularization methods to complex reservoirs, in recent years a data-driven method has been proposed by scholars to intelligently obtain prior information for formations. [1] In the experiment on abdominal CT-MR registration, the proposed method yields better results than conventional regularization methods, especially for severely deformed local regions. [2] Since conventional regularization methods have been constructed in the deterministic framework, using deterministic regularization methods to address the ill-posedness problem with uncertainties can lead to errors or even mistakes. [3] However, in dynamic ensemble methods, the combination of classifiers is usually determined by the local competence, and conventional regularization methods are difficult to apply, leaving the technique prone to overfitting. [4]
Different Regularization Methods
A locally constant mean and a locally constant median are compared to locally linear regression models with four different regularization methods and different parameter configurations. [1] Moreover, the combination of model-based image reconstruction with different regularization methods can solve the limited-view problem for XACT imaging (in many realistic cases where the full-view dataset is unavailable) and hence pave the way for future clinical translation. [2] Different regularization methods are investigated in this paper, including several new solutions that fit very well for the identification of sparse and low-rank systems. [3] To address this issue, we propose a channel attention model and study two different regularization methods for attention. [4]
Variational Regularization Methods
Variational regularization methods are the mainstream methods typically adopted for hyperspectral image (HSI) denoising; they borrow architectures originally developed for RGB images and exhibit limitations when coping with HSI data cubes. [1] Many successful variational regularization methods employed to solve linear inverse problems in imaging applications (such as image deblurring, image inpainting, and computed tomography) aim at enhancing edges in the solution, and often involve non-smooth regularization terms (e. [2] We present a family of non-local variational regularization methods for solving tomographic problems, where the solutions are functions with range in a closed subset of Euclidean space, for example if the solution only attains values in an embedded sub-manifold. [3]
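The edge-preserving, non-smooth penalties mentioned in snippet [2] are typified by total variation (TV). A minimal 1-D sketch using a smoothed TV term minimized by plain gradient descent (the signal, λ, smoothing ε, and step size are all illustrative assumptions; real TV solvers use proximal or primal-dual schemes rather than this naive descent):

```python
import math

# Sketch of 1-D variational (TV) denoising: minimize
#     sum_i (u_i - y_i)^2 + lam * sum_i sqrt((u_{i+1} - u_i)^2 + eps)
# by gradient descent. The smoothed absolute value penalizes jumps only
# linearly, so edges survive better than under a quadratic (L2) smoother.

def tv_denoise(y, lam, eps=1e-2, lr=0.05, iters=1000):
    u = list(y)
    n = len(u)
    for _ in range(iters):
        g = [2.0 * (u[i] - y[i]) for i in range(n)]    # data-fit gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            t = lam * d / math.sqrt(d * d + eps)       # smoothed-TV gradient
            g[i] -= t
            g[i + 1] += t
        u = [ui - lr * gi for ui, gi in zip(u, g)]
    return u

noisy = [0.0, 0.1, -0.1, 1.0, 0.9, 1.1]   # noisy step signal
u = tv_denoise(noisy, lam=0.2)
# The two plateaus are flattened while the step between them is preserved.
```

The within-plateau wiggles shrink, but the jump from ~0 to ~1 is kept, which is the qualitative behavior that motivates TV over quadratic regularization in imaging.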
Manifold Regularization Methods
The proposed method outperforms the existing state-of-the-art manifold regularization methods by a significant margin. [1] Manifold regularization methods for matrix factorization rely on the cluster assumption, whereby the neighborhood structure of data in the input space is preserved in the factorization space. [2] We show the relation of this two-step algorithm with our recent SToRM approach, thus reconciling SToRM and manifold regularization methods with algorithms that rely on explicit lifting of data to a high-dimensional space. [3]
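The cluster assumption in snippet [2] is usually encoded through a graph Laplacian penalty. A minimal sketch of that penalty on a hypothetical three-node similarity graph (the graph and labelings are illustrative assumptions):

```python
# Sketch of manifold (graph Laplacian) regularization: for an undirected
# similarity graph with weights w_ij, the penalty
#     f^T L f = sum over edges of w_ij * (f_i - f_j)^2
# is small when the predictions f vary smoothly over neighboring points,
# which is how the cluster assumption enters the objective.

def laplacian_penalty(f, edges):
    """edges: list of (i, j, w_ij) tuples, one per undirected edge."""
    return sum(w * (f[i] - f[j]) ** 2 for i, j, w in edges)

# A chain graph 0 - 1 - 2: a smooth labeling is penalized less than a
# labeling that oscillates across neighboring nodes.
edges = [(0, 1, 1.0), (1, 2, 1.0)]
smooth = laplacian_penalty([0.0, 0.5, 1.0], edges)   # 0.5
rough = laplacian_penalty([0.0, 1.0, 0.0], edges)    # 2.0
```

In manifold-regularized matrix factorization or semi-supervised learning, this term is simply added (with a weight) to the data-fit loss, biasing the solution toward functions that respect the neighborhood graph.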
Traditional Regularization Methods
Traditional regularization methods have their own disadvantages. [1] To improve the stability and accuracy of the DWC, based on the traditional regularization methods and the Landweber iteration method, we introduce two other iterative algorithms, namely Cimmino and component averaging (CAV). [2] The traditional regularization methods add a uniform penalty to all frequency bands of the solution, which may cause the reconstructed result to be too smooth to retain certain features of the original brightness-temperature map, such as edge information. [3]