Authors: Dağ, Hasan; Curebal, F.
Date Available: 2024-06-23
Date Issued: 2024
ISBN: 979-835038514-4
DOI: https://doi.org/10.1109/SIEDS61124.2024.10534669
Handle: https://hdl.handle.net/20.500.12469/5873
Title: Enhancing Malware Classification: A Comparative Study of Feature Selection Models with Parameter Optimization
Type: Conference Object
Pages: 511-516
Language: en
Access: Closed Access (info:eu-repo/semantics/closedAccess)
Keywords: Feature selection; Machine learning; Malware classification; Parameter optimization
Scopus ID: 2-s2.0-85195324534

Abstract: This study assesses the impact of seven feature selection algorithms (Minimum Redundancy Maximum Relevance (MRMR), Mutual Information (MI), Chi-Square (Chi), Leave One Feature Out (LOFO), Feature Relevance-based Unsupervised Feature Selection (FRUFS), A General Framework for Auto-Weighted Feature Selection via Global Redundancy Minimization (AGRM), and BoostARoota) across two malware datasets (Microsoft and API call sequences) using three machine learning models (Extreme Gradient Boosting (XGBoost), Random Forest, and Histogram-Based Gradient Boosting (Hist Gradient Boosting)). The analysis reveals that no feature selection algorithm uniformly outperforms the others, as their effectiveness varies with dataset and model characteristics. Specifically, BoostARoota demonstrated strong compatibility with the Microsoft dataset, especially after parameter optimization, whereas its performance varied on the API call sequences dataset, suggesting the need for customized parameter selection. This study highlights the necessity of tailored feature selection approaches and parameter adjustments to optimize machine learning model performance across different datasets. © 2024 IEEE.
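As an illustration of two of the univariate criteria the abstract names (Chi-Square and Mutual Information), the following is a minimal pure-Python sketch of scoring binary features against a binary class label and ranking them. The toy dataset and function names are illustrative assumptions, not material from the paper:

```python
import math
from collections import Counter

def chi2_score(feature, labels):
    """Chi-square statistic between one binary feature and a binary label."""
    n = len(labels)
    obs = Counter(zip(feature, labels))       # observed cell counts
    f_tot, l_tot = Counter(feature), Counter(labels)
    stat = 0.0
    for fv in (0, 1):
        for lv in (0, 1):
            expected = f_tot[fv] * l_tot[lv] / n
            if expected > 0:                  # skip empty marginals
                stat += (obs[(fv, lv)] - expected) ** 2 / expected
    return stat

def mi_score(feature, labels):
    """Mutual information (in nats) between one binary feature and the label."""
    n = len(labels)
    joint = Counter(zip(feature, labels))
    f_tot, l_tot = Counter(feature), Counter(labels)
    mi = 0.0
    for (fv, lv), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log(p_xy / ((f_tot[fv] / n) * (l_tot[lv] / n)))
    return mi

def rank_features(X, y, scorer):
    """Return column indices of X ordered by descending score under `scorer`."""
    scores = [scorer([row[j] for row in X], y) for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: -scores[j])

# Toy data: column 0 tracks the label exactly, columns 1-2 carry no signal.
X = [[0, 1, 0], [0, 0, 0], [0, 1, 0], [0, 0, 0],
     [1, 1, 0], [1, 0, 0], [1, 1, 0], [1, 0, 0]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
print(rank_features(X, y, chi2_score))  # column 0 ranks first
print(rank_features(X, y, mi_score))    # column 0 ranks first here too
```

On real data the two criteria can produce different rankings, which is consistent with the paper's finding that no single selection method dominates across datasets and models.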