A Simple and Efficient Algorithm for Gene Selection using Sparse Logistic Regression.

Motivation: This paper presents a new, efficient algorithm for the sparse logistic regression problem. The proposed algorithm is based on the Gauss–Seidel method and is asymptotically convergent. It is simple and extremely easy to implement: it requires neither sophisticated mathematical-programming software nor any matrix operations. It can be applied to a variety of real-world problems, such as identifying marker genes and building a classifier for cancer diagnosis from microarray data.

Results: The gene selection method proposed in this paper is demonstrated on two real-world data sets, and the results are consistent with the literature.
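The paper's exact Gauss–Seidel procedure is not reproduced on this page, but the underlying idea — sweeping through the coordinates one at a time and applying an L1 penalty that drives most gene weights to exactly zero — can be sketched as follows. This is an illustrative proximal coordinate-descent implementation under a standard curvature bound, not the authors' algorithm; all function and variable names are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    """Scalar soft-thresholding operator: the prox of t*|w|."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def sparse_logreg_cd(X, y, lam=1.0, n_sweeps=100):
    """Cyclic coordinate-wise (Gauss-Seidel style) descent for
    L1-penalised logistic regression with labels y in {-1, +1}.

    Minimises  sum_i log(1 + exp(-y_i x_i.w)) + lam * ||w||_1
    by updating one coordinate at a time with a soft-threshold step.
    """
    n, p = X.shape
    w = np.zeros(p)
    # Per-coordinate curvature bound: the logistic loss has second
    # derivative at most 1/4, so h_j = 0.25 * sum_i x_ij^2 majorises it.
    h = 0.25 * (X ** 2).sum(axis=0)
    margins = np.zeros(n)  # margins_i = y_i * (x_i . w)
    for _ in range(n_sweeps):
        for j in range(p):
            if h[j] == 0.0:
                continue
            # Gradient of the logistic loss with respect to w_j
            s = 1.0 / (1.0 + np.exp(np.clip(margins, -30, 30)))
            g = -(y * s) @ X[:, j]
            w_new = soft_threshold(w[j] - g / h[j], lam / h[j])
            if w_new != w[j]:
                # Keep margins in sync so later coordinates in the same
                # sweep see the fresh value (the Gauss-Seidel aspect).
                margins += (w_new - w[j]) * y * X[:, j]
                w[j] = w_new
    return w

# Demo on synthetic "microarray-like" data: only the first 3 of 50
# features carry signal, so most learned weights should be exactly zero.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:3] = [3.0, -2.0, 1.5]
y = np.where(X @ w_true > 0, 1, -1)
w = sparse_logreg_cd(X, y, lam=5.0)
selected = np.flatnonzero(w)  # indices of "marker genes"
```

The exact zeros produced by the soft-threshold step are what make the method a gene *selection* procedure rather than just a classifier: the surviving coordinates are the candidate marker genes.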

References in zbMATH (referenced in 27 articles)

Showing results 1 to 20 of 27, sorted by year (citations).


  1. Nakayama, Shummin; Narushima, Yasushi; Yabe, Hiroshi: Inexact proximal memoryless quasi-Newton methods based on the Broyden family for minimizing composite functions (2021)
  2. Liu, Xiaoman; Liu, Jijun: Image restoration from noisy incomplete frequency data by alternative iteration scheme (2020)
  3. Algamal, Zakariya Yahya; Lee, Muhammad Hisyam: A two-stage sparse logistic regression for optimal gene selection in high-dimensional microarray data classification (2019)
  4. Zhou, Shengbin; Zhou, Jingke; Zhang, Bo: High-dimensional generalized linear models incorporating graphical structure among predictors (2019)
  5. Gotoh, Jun-ya; Takeda, Akiko; Tono, Katsuya: DC formulations and algorithms for sparse optimization problems (2018)
  6. Xu, Jiucheng; Mu, Huiyu; Wang, Yun; Huang, Fangzhou: Feature genes selection using supervised locally linear embedding and correlation coefficient for microarray classification (2018)
  7. Yang, Wenyuan; Li, Chan; Zhao, Hong: Label distribution learning by regularized sample self-representation (2018)
  8. Qiao, Maoying; Liu, Liu; Yu, Jun; Xu, Chang; Tao, Dacheng: Diversified dictionaries for multi-instance learning (2017)
  9. Dong, Qian; Liu, Xin; Wen, Zai-Wen; Yuan, Ya-Xiang: A parallel line search subspace correction method for composite convex optimization (2015)
  10. Wang, Jie; Wonka, Peter; Ye, Jieping: Lasso screening rules via dual polytope projection (2015)
  11. Xu, Yangyang; Yin, Wotao: Block stochastic gradient iteration for convex and nonconvex optimization (2015)
  12. Groll, Andreas; Tutz, Gerhard: Variable selection for generalized linear mixed models by L_1-penalized estimation (2014)
  13. Peng, Hong-Yi; Jiang, Chun-Fu; Fang, Xiang; Liu, Jin-Shan: Variable selection for Fisher linear discriminant analysis using the modified sequential backward selection algorithm for the microarray data (2014)
  14. Yu, Yi; Feng, Yang: APPLE: approximate path for penalized likelihood estimators (2014)
  15. Blondel, Mathieu; Seki, Kazuhiro; Uehara, Kuniaki: Block coordinate descent algorithms for large-scale sparse multiclass classification (2013)
  16. Korzeń, M.; Jaroszewicz, S.; Klęsk, P.: Logistic regression with weight grouping priors (2013)
  17. Choi, Hosik; Yeo, Donghwa; Kwon, Sunghoon; Kim, Yongdai: Gene selection and prediction for cancer classification using support vector machines with a reject option (2011)
  18. Yger, F.; Rakotomamonjy, A.: Wavelet kernel learning (2011)
  19. Caster, Ola; Norén, G. Niklas; Madigan, David; Bate, Andrew: Large-scale regression-based pattern discovery: the example of screening the WHO global drug safety database (2010)
  20. Goeman, Jelle J.: L_1 penalized estimation in the Cox proportional hazards model (2010)
