
However, the SCAD penalty is not smooth, which complicates the optimization problem. To address this, [14] proposed the Laplace error penalty (LEP) method, whose penalty is unbiased, sparse, continuous, and almost smooth. In this paper, we apply the LEP method to reconstruct the gene expression network and compare it with LASSO and SCAD in terms of estimating the partial correlation coefficient matrix. The paper is structured as follows. In Section 2, the LASSO, SCAD, and LEP methods are briefly described. In Section 3, we report the results of simulations and a real data analysis. A short discussion is given in Section 4.

2. Methods

The graphical Gaussian model, or GGM for short, is an undirected graphical model.

Let $X = (X_1, \dots, X_p)^\top$ denote a $p$-dimensional random variable following the multivariate normal distribution $N(\mu, \Sigma)$, where $\mu$ is the mean vector and $\Sigma$ is the variance-covariance matrix. Given $n$ samples from $N(\mu, \Sigma)$, $(x_{ij})_{p \times n}$, the partial correlation coefficient matrix $(\rho_{ij})_{p \times p}$, which reflects the conditional dependence between different components of $X$, can be estimated by $\hat{\rho}_{ij} = \mathrm{sign}(\hat{\beta}_{ij})\sqrt{\hat{\beta}_{ij}\hat{\beta}_{ji}}$, where $\hat{\beta}_{ij}$ is the estimator for $\beta_{ij}$ in the linear regression model
$$X_{ij} = \sum_{1 \le k \ne i \le p} \beta_{ik} X_{kj} + \epsilon_{ij}, \quad i = 1, 2, \dots, p;\ j = 1, 2, \dots, n, \tag{1}$$
the $\epsilon_{ij}$, $i = 1, 2, \dots, p$ and $j = 1, 2, \dots, n$, are independent and identically distributed and independent of $X$, and $\mathrm{sign}(x)$ is an indicator function equal to $-1$, $0$, or $1$ according as $x$ is smaller than, equal to, or greater than $0$.
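
To make this estimation procedure concrete, the following Python sketch (an illustration, not the authors' code) fits each node-wise regression and assembles the partial correlation estimates via $\hat{\rho}_{ij} = \mathrm{sign}(\hat{\beta}_{ij})\sqrt{\hat{\beta}_{ij}\hat{\beta}_{ji}}$. It uses scikit-learn's `Lasso` as a stand-in for the penalized fit; the function name `partial_corr_lasso` and the fixed penalty level `alpha` (which plays the role of $\lambda$ up to scaling) are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def partial_corr_lasso(X, alpha=0.1):
    """Minimal sketch: estimate the partial correlation matrix by
    node-wise penalized regression.

    X     : (n, p) array, n samples of the p-dimensional variable.
    alpha : penalty level, standing in for lambda in the objective (2).
    """
    n, p = X.shape
    B = np.zeros((p, p))  # B[i, k] ~ beta_ik from regressing X_i on the others
    for i in range(p):
        others = [k for k in range(p) if k != i]
        fit = Lasso(alpha=alpha).fit(X[:, others], X[:, i])
        B[i, others] = fit.coef_
    # rho_ij = sign(beta_ij) * sqrt(beta_ij * beta_ji);
    # clip negative products to zero so the square root is defined
    # (when the two coefficients disagree in sign, rho_ij is set to 0).
    prod = np.clip(B * B.T, 0.0, None)
    rho = np.sign(B) * np.sqrt(prod)
    np.fill_diagonal(rho, 1.0)
    return rho
```

In practice the penalty level would be chosen by a data-driven criterion such as cross-validation or BIC rather than fixed in advance.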

For the "small $n$, large $p$" problem, instead of the classical least squares optimization, the objective function
$$\sum_{i=1}^{p}\sum_{j=1}^{n}\Big(X_{ij} - \sum_{1 \le k \ne i \le p}\beta_{ik}X_{kj}\Big)^2 + \sum_{i=1}^{p}\sum_{1 \le k \ne i \le p} p_\lambda(\beta_{ik}) \tag{2}$$
is minimized to obtain the estimator $\hat{\beta}_{ij}$ of $\beta_{ij}$, where $p_\lambda(\cdot)$ denotes a penalty function on the parameters. The choice of $p_\lambda(\cdot)$ is essential: it not only determines how the estimates are shrunk, but also directly affects the complexity of the optimization algorithm. A good penalty function should possess several desirable statistical properties: unbiasedness, sparsity, continuity [13], and smoothness [14]. The LASSO, proposed by [12], uses the penalty $p_\lambda(\beta) = \lambda|\beta|$. Although it has succeeded in many variable-selection applications, it shrinks the estimates of large parameters more severely than those of small parameters, causing a substantial bias. The SCAD penalty, suggested by [13], has the derivative $p'_\lambda(\beta) = \lambda\{I(\beta \le \lambda) + \frac{(a\lambda - \beta)_+}{(a-1)\lambda} I(\beta > \lambda)\}$ for $\beta > 0$, where $a > 2$ is a tuning constant.
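
To make the comparison of penalties concrete, here is a minimal sketch (illustrative, not from the paper) of the LASSO penalty and the SCAD derivative given above; the function names and the default $a = 3.7$, the value suggested by Fan and Li for SCAD, are the only assumptions beyond the formulas themselves.

```python
import numpy as np

def lasso_penalty(beta, lam):
    """LASSO penalty p_lambda(beta) = lambda * |beta|."""
    return lam * np.abs(beta)

def scad_derivative(beta, lam, a=3.7):
    """Derivative of the SCAD penalty for beta > 0 (Fan and Li):
    p'_lambda(beta) = lambda * { I(beta <= lambda)
                        + (a*lambda - beta)_+ / ((a - 1)*lambda) * I(beta > lambda) }.
    """
    beta = np.abs(beta)
    tail = np.clip(a * lam - beta, 0.0, None) / ((a - 1) * lam)
    return lam * np.where(beta <= lam, 1.0, tail)
```

Note that the SCAD derivative is constant at $\lambda$ for small coefficients, decreases linearly, and vanishes for $\beta > a\lambda$; this flat tail is what removes the bias for large parameters, while the kinks in the resulting penalty leave it non-smooth, which is the difficulty the LEP method is designed to avoid.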
