An ensemble of pre-trained transformer models for imbalanced multiclass malware classification

dc.authorid Demirkiran, Ferhat/0000-0001-7335-9370
dc.authorid Unal, Ugur/0000-0001-6552-6044
dc.authorscopusid 57219836294
dc.authorscopusid 56497768800
dc.authorscopusid 57215332698
dc.authorscopusid 6507328166
dc.contributor.author Demirkıran, Ferhat
dc.contributor.author Cayir, Aykut
dc.contributor.author Unal, Ugur
dc.contributor.author Dağ, Hasan
dc.contributor.other Management Information Systems
dc.date.accessioned 2024-06-23T21:36:49Z
dc.date.available 2024-06-23T21:36:49Z
dc.date.issued 2022
dc.department Kadir Has University en_US
dc.department-temp [Demirkiran, Ferhat] Kadir Has Univ, Cyber Secur Grad Program, Istanbul, Turkey; [Cayir, Aykut] Huawei R&D Ctr, Istanbul, Turkey; [Cayir, Aykut; Unal, Gur; Dag, Hasan] Kadir Has Univ, Management Informat Syst, Istanbul, Turkey en_US
dc.description Demirkiran, Ferhat/0000-0001-7335-9370; Unal, Ugur/0000-0001-6552-6044 en_US
dc.description.abstract Classification of malware families is crucial for a comprehensive understanding of how they can infect devices, computers, or systems. Hence, malware identification enables security researchers and incident responders to take precautions against malware and to accelerate mitigation. API call sequences made by malware are features widely used by machine and deep learning models for malware classification, as these sequences represent the behavior of malware. However, traditional machine and deep learning models remain incapable of capturing the sequential relationships among API calls. Unlike traditional models, transformer-based models process a sequence as a whole and learn the relationships among API calls through multi-head attention mechanisms and positional embeddings. Our experiments demonstrate that a Transformer model with a single transformer block layer surpasses the widely used baseline architecture, LSTM. Moreover, the pre-trained transformer models BERT and CANINE outperform it in classifying highly imbalanced malware families, as measured by the evaluation metrics F1-score and AUC. Furthermore, our proposed bagging-based random transformer forest (RTF) model, an ensemble of BERT or CANINE models, reaches state-of-the-art evaluation scores on three out of four datasets; in particular, it achieves a state-of-the-art F1-score of 0.6149 on one of the commonly used benchmark datasets. (C) 2022 Elsevier Ltd. All rights reserved. en_US
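To make the bagging idea behind the RTF model concrete, below is a minimal Python sketch of such an ensemble: each member is a pre-trained transformer classifier fine-tuned on its own bootstrap sample of the API-call sequences, and predictions are combined by soft voting (averaging class probabilities). The checkpoint name, ensemble size, class count, and the whitespace-joining of API names are illustrative assumptions, not the authors' exact configuration, and the fine-tuning loop is omitted for brevity.

```python
# Sketch of a bagging ensemble ("random transformer forest") of pre-trained
# transformer classifiers, assuming HuggingFace Transformers and PyTorch.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_CLASSES = 8     # assumption: number of malware families
NUM_ESTIMATORS = 3  # assumption: ensemble size
CHECKPOINT = "bert-base-uncased"  # CANINE ("google/canine-s") is tokenization-free

def bootstrap_sample(sequences, labels, rng):
    """One bagging draw: resample the training set with replacement.
    Each ensemble member would be fine-tuned on its own draw."""
    idx = rng.integers(0, len(sequences), size=len(sequences))
    return [sequences[i] for i in idx], [labels[i] for i in idx]

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

# Stand-ins for the fine-tuned members: fresh classification heads on the
# pre-trained backbone (fine-tuning on bootstrap samples omitted here).
ensemble = [
    AutoModelForSequenceClassification.from_pretrained(
        CHECKPOINT, num_labels=NUM_CLASSES
    )
    for _ in range(NUM_ESTIMATORS)
]
for member in ensemble:
    member.eval()

@torch.no_grad()
def predict_family(api_calls):
    """Soft voting: average the class probabilities of all members."""
    text = " ".join(api_calls)  # assumption: API names joined as one string
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    probs = torch.stack(
        [member(**inputs).logits.softmax(dim=-1) for member in ensemble]
    ).mean(dim=0)
    return int(probs.argmax(dim=-1))

rng = np.random.default_rng(42)
print(predict_family(["CreateFileW", "WriteFile", "RegSetValueW", "CloseHandle"]))
```

Averaging probabilities over members trained on different bootstrap samples is what lets bagging reduce variance, which is one plausible reason such an ensemble helps on highly imbalanced family distributions.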
dc.identifier.citationcount 16
dc.identifier.doi 10.1016/j.cose.2022.102846
dc.identifier.issn 0167-4048
dc.identifier.issn 1872-6208
dc.identifier.scopus 2-s2.0-85136643921
dc.identifier.scopusquality Q1
dc.identifier.uri https://doi.org/10.1016/j.cose.2022.102846
dc.identifier.uri https://hdl.handle.net/20.500.12469/5644
dc.identifier.volume 121 en_US
dc.identifier.wos WOS:000881541300005
dc.identifier.wosquality Q1
dc.language.iso en en_US
dc.publisher Elsevier Advanced Technology en_US
dc.relation.publicationcategory Article - International Refereed Journal - Institutional Academic Staff en_US
dc.rights info:eu-repo/semantics/openAccess en_US
dc.scopus.citedbyCount 38
dc.subject Transformer en_US
dc.subject Tokenization-free en_US
dc.subject API Calls en_US
dc.subject Imbalanced en_US
dc.subject Multiclass en_US
dc.subject BERT en_US
dc.subject CANINE en_US
dc.subject Ensemble en_US
dc.subject Malware classification en_US
dc.title An ensemble of pre-trained transformer models for imbalanced multiclass malware classification en_US
dc.type Article en_US
dc.wos.citedbyCount 24
dspace.entity.type Publication
relation.isAuthorOfPublication e02bc683-b72e-4da4-a5db-ddebeb21e8e7
relation.isAuthorOfPublication 695a8adc-2330-4d32-ab37-8b781716d609
relation.isAuthorOfPublication.latestForDiscovery e02bc683-b72e-4da4-a5db-ddebeb21e8e7
relation.isOrgUnitOfPublication ff62e329-217b-4857-88f0-1dae00646b8c
relation.isOrgUnitOfPublication.latestForDiscovery ff62e329-217b-4857-88f0-1dae00646b8c
