Browsing by Author "Jafari Navimipour, Nima"

Now showing 1 - 20 of 38
    Article
    Citation - WoS: 45
    Citation - Scopus: 51
Applications of Deep Learning in Alzheimer's Disease: A Systematic Literature Review of Current Trends, Methodologies, Challenges, Innovations, and Future Directions
(Springer, 2024) Toumaj, Shiva; Heidari, Arash; Shahhosseini, Reza; Navimipour, Nima Jafari
Alzheimer's Disease (AD) constitutes a significant global health issue. In the next 40 years, it is expected to affect 106 million people. Although more and more people are developing AD, there are still no effective drugs to treat it, which underscores how important it is to detect and treat AD early. Recently, Deep Learning (DL) techniques have been used increasingly to diagnose AD, with reported gains in accuracy for drug repurposing, medication recognition, and labeling. This review meticulously examines the works that apply DL to Alzheimer's disease. The methods covered include Natural Language Processing (NLP), drug repurposing, classification, and identification. For each, we examine the pros and cons, paying special attention to explainability, safety, and applicability in medical settings. One important finding is that Convolutional Neural Networks (CNNs) are most often used for AD research and Python is the most common language for DL work. In our assessment, some security problems, such as data protection and model stability, are not examined enough in present research. This study thoroughly examines present methods and also points out areas that need more work, such as better data integration and explainable AI systems. The findings should help guide further research and speed up the creation of DL-based AD identification tools.
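As a concrete illustration of the CNN-plus-Python pattern the review identifies as dominant, the following minimal sketch builds a binary AD-vs-control image classifier. The input shape, layer sizes, and random stand-in data are illustrative assumptions, not any surveyed paper's pipeline.

```python
# A minimal sketch of a CNN classifier of the kind the review reports as most
# common for AD research. All shapes and hyperparameters are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 1)),          # e.g. one grayscale MRI slice
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # P(AD) for a binary task
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Smoke test on random data standing in for a labeled MRI dataset.
x = np.random.rand(8, 128, 128, 1).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))
model.fit(x, y, epochs=1, verbose=0)
```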
    Review
    Citation - WoS: 5
    Citation - Scopus: 5
    Blockchain Systems in Embedded Internet of Things: Systematic Literature Review, Challenges Analysis, and Future Direction Suggestions
(MDPI, 2022) Darbandi, Mehdi; Al-Khafaji, Hamza Mohammed Ridha; Nasab, Seyed Hamid Hosseini; AlHamad, Ahmad Qasim Mohammad; Ergashevich, Beknazarov Zafarjon; Navimipour, Nima Jafari
Internet of Things (IoT) environments can extensively use embedded devices. Without the participation of consumers, tiny IoT devices will function and interact with one another, but their operations must be reliable and secure from various threats. The introduction of cutting-edge data analytics methods for linked IoT devices, including blockchain, may lower costs and boost the use of cloud platforms. In a peer-to-peer network such as blockchain, no one has to be trusted because each peer is in charge of its task, and there is no central server. Because blockchain is tamper-proof, it is connected to the IoT to increase security. However, the technology is still developing and faces many challenges, such as power consumption and execution time. This article discusses blockchain technology and embedded devices in distant areas where IoT devices may encounter network shortages and possible cyber threats. This study aims to examine existing research while also outlining prospective areas for future work on using blockchains in smart settings. Finally, the efficiency of the blockchain is evaluated through performance parameters such as latency, throughput, storage, and bandwidth. The obtained results showed that blockchain technology provides security and privacy for the IoT.
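The tamper-evidence property the abstract leans on can be shown in a few lines: each block commits to the hash of its predecessor, so editing any stored IoT record breaks every later link. The following sketch is a minimal illustration with invented field names, not one of the surveyed systems.

```python
# Minimal hash-chained log: altering any stored reading invalidates the chain.
import hashlib, json, time

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, sensor_data):
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1,
                  "timestamp": time.time(),
                  "data": sensor_data,
                  "prev_hash": block_hash(prev)})

def verify(chain):
    # Recompute every link; any edited block breaks the chain after it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [{"index": 0, "timestamp": 0.0, "data": "genesis", "prev_hash": ""}]
append_block(chain, {"device": "sensor-1", "temp": 21.4})
append_block(chain, {"device": "sensor-2", "temp": 19.8})
print(verify(chain))            # True
chain[1]["data"]["temp"] = 99   # tamper with a stored reading
print(verify(chain))            # False
```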
    Article
    Citation - WoS: 4
    Citation - Scopus: 6
    A Cloud Database Route Scheduling Method Using a Hybrid Optimization Algorithm
(Wiley, 2023) Baghi, Zahra Shokri; Navimipour, Nima Jafari
Cloud computing has emerged as a technology that allows a company to employ computing resources such as applications, software, and hardware over the Internet. Scholars have paid great attention to cloud computing because of its cutting-edge availability, cost reduction, and boundless applications. A cloud database is a data storage site on the web where the optimal path must be found to access the needed data, so identifying the ideal path to a database is crucial; choosing the perfect route is defined as a scheduling problem over the cloud database. Cloud database path scheduling is a multifaceted procedure consisting of congestion control, routing lists, and network flow distribution, and searching for the required source route in the cloud database introduces delay. Offering numerous resources under a growing database workload is an NP-hard optimization problem in which query requests need optimal schedules to receive the required services. Therefore, we use a hybrid of cuckoo search (CS), inspired by the brood behavior of cuckoo birds, and a genetic algorithm (GA) to solve this problem. Integrating genetic operators dramatically enhances the balance between exploration and exploitation.
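The hybrid the abstract describes pairs cuckoo-search Levy flights (exploration) with GA crossover and mutation (exploitation). The following sketch illustrates that loop on a toy quadratic stand-in for the route-scheduling cost; population size, step scale, and the objective are assumptions.

```python
# Hybrid cuckoo search + GA on a toy objective (sketch, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def cost(x):                       # placeholder for route-scheduling cost
    return np.sum((x - 3.0) ** 2)

def levy_step(dim, beta=1.5):      # Mantegna's algorithm for Levy flights
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

pop = rng.uniform(-10, 10, size=(20, 5))           # 20 nests, 5-dim routes
for _ in range(200):
    fit = np.array([cost(x) for x in pop])
    best = pop[fit.argmin()]
    # Cuckoo phase: Levy flight around the best nest (exploration).
    for i in range(len(pop)):
        trial = pop[i] + 0.01 * levy_step(5) * (pop[i] - best)
        if cost(trial) < fit[i]:
            pop[i] = trial
    # GA phase: uniform crossover of two random parents + Gaussian mutation;
    # the child replaces the worst nest if it is better (abandonment step).
    i, j = rng.choice(len(pop), 2, replace=False)
    mask = rng.random(5) < 0.5
    child = np.where(mask, pop[i], pop[j]) + rng.normal(0, 0.1, 5)
    worst = np.array([cost(x) for x in pop]).argmax()
    if cost(child) < cost(pop[worst]):
        pop[worst] = child

print(best, cost(best))
```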
    Article
    Citation - WoS: 7
    Citation - Scopus: 7
    A Cloud Service Composition Method Using a Fuzzy-Based Particle Swarm Optimization Algorithm
(Springer, 2023) Nazif, Habibeh; Nassr, Mohammad; Al-Khafaji, Hamza Mohammed Ridha; Navimipour, Nima Jafari; Unal, Mehmet
    In today's dynamic business landscape, organizations heavily rely on cloud computing to leverage the power of virtualization and resource sharing. Service composition plays a vital role in cloud computing, combining multiple cloud services to fulfill complex user requests. Service composition in cloud computing presents several challenges. These include service heterogeneity, dynamic service availability, QoS (Quality of Service) constraints, and scalability issues. Traditional approaches often struggle to handle these challenges efficiently, leading to suboptimal resource utilization and poor service performance. This work presents a fuzzy-based strategy for composing cloud services to overcome these obstacles. The fact that service composition is NP-hard has prompted the use of a range of metaheuristic algorithms in numerous papers. Therefore, Particle Swarm Optimization (PSO) has been applied in this paper to solve the problem. Implementing a fuzzy-based PSO for service composition requires defining the fuzzy membership functions and rules based on the specific service domain. Once the fuzzy logic components are established, they can be integrated into the PSO algorithm. The simulation results have shown the high efficiency of the proposed method in decreasing the latency, cost, and response time.
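The integration step the abstract outlines, defining fuzzy membership functions and rules and feeding them into PSO, can be illustrated with a single fuzzy rule that adapts the inertia weight over the run. The rule base, triangular memberships, and QoS-style cost below are illustrative assumptions, not the paper's actual rules.

```python
# PSO with a fuzzy-adapted inertia weight (sketch under assumed fuzzy rules).
import numpy as np

rng = np.random.default_rng(1)

def qos_cost(x):                      # stand-in for latency/cost/response time
    return np.sum(x ** 2)

def tri(x, a, b, c):                  # triangular membership function
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_inertia(progress):
    # Illustrative rule base: early search -> large w (explore),
    # late search -> small w (exploit); defuzzify by weighted average.
    early, late = tri(progress, -0.5, 0.0, 0.6), tri(progress, 0.4, 1.0, 1.5)
    return (early * 0.9 + late * 0.4) / (early + late + 1e-12)

n, dim, iters = 15, 4, 100
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pcost = np.array([qos_cost(p) for p in x])
gbest = pbest[pcost.argmin()]

for t in range(iters):
    w = fuzzy_inertia(t / iters)
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
    x = x + v
    c = np.array([qos_cost(p) for p in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    gbest = pbest[pcost.argmin()]

print(gbest, qos_cost(gbest))
```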
    Review
    Citation - WoS: 41
    Citation - Scopus: 57
    A Comprehensive and Systematic Literature Review on the Big Data Management Techniques in the Internet of Things
(Springer, 2023) Naghib, Arezou; Navimipour, Nima Jafari; Hosseinzadeh, Mehdi; Sharifi, Arash
The Internet of Things (IoT) is a communication paradigm and a collection of heterogeneous interconnected devices. It produces large-scale, distributed, and diverse data called big data. Big Data Management (BDM) in IoT is used for knowledge discovery and intelligent decision-making and is one of the most significant research challenges today. There are several mechanisms and technologies for BDM in IoT, and this paper systematically studies the important mechanisms in this area. It covers articles published between 2016 and August 2022: 751 articles were initially identified, and a paper selection process reduced this number to 110 significant studies. BDM mechanisms in IoT are studied in four categories: BDM processes, BDM architectures/frameworks, quality attributes, and big data analytics types. The paper also presents a detailed comparison of the mechanisms in each category. Finally, the development challenges and open issues of BDM in IoT are discussed. Predictive analysis and classification methods are used in many articles, while some quality attributes, such as confidentiality, accessibility, and sustainability, are less considered. Also, none of the articles use key-value databases for data storage. This study can help researchers develop more effective BDM-in-IoT methods in a complex environment.
    Article
    Citation - WoS: 32
    Citation - Scopus: 37
A Cost- and Energy-Efficient SRAM Design Based on a New 5 I-P Majority Gate in QCA Nanotechnology
(Elsevier, 2024) Kassa, Sankit; Ahmadpour, Seyed-Sajad; Lamba, Vijay; Misra, Neeraj Kumar; Navimipour, Nima Jafari; Kotecha, Ketan
Quantum-dot Cellular Automata (QCA) is a revolutionary paradigm in the nano-scale VLSI market with the potential to replace the traditional Complementary Metal Oxide Semiconductor system. To demonstrate its usefulness, this article provides a QCA-based structure comprising a 5-input (i-p) majority gate, one of the basic gates in QCA, and a Static Random Access Memory (SRAM) cell with set and reset functionalities. The suggested design, with nominal clock zones, provides a reliable, compact, efficient, and durable configuration that achieves optimal size and latency while decreasing power consumption. Based on the suggested 5 i-p majority gate, the realized SRAM architecture improves energy dissipation by 33.95%, cell count by 31.34%, and area by 33.33% compared to the most recent designs. Time and cost are decreased by 30% and 53.95%, respectively.
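At the logic level, the building blocks named here reduce to majority votes, and a set/reset memory loop falls out of a 3-input majority directly. The following behavioral sketch models Boolean values only; QCA cells, clock zones, and layout are out of scope.

```python
# Behavioral model of 3- and 5-input majority gates and a majority-based
# set/reset cell. Logic-level sketch only, not a QCA layout.
def maj3(a, b, c):
    return int(a + b + c >= 2)

def maj5(a, b, c, d, e):
    return int(a + b + c + d + e >= 3)

def sram_step(q, set_bit, reset_bit):
    # Majority-based SR behavior: set forces 1, reset forces 0,
    # and with neither asserted the cell holds its previous value.
    return maj3(set_bit, 1 - reset_bit, q)

q = 0
for s, r in [(1, 0), (0, 0), (0, 1), (0, 0)]:
    q = sram_step(q, s, r)
    print(f"set={s} reset={r} -> q={q}")   # 1, 1 (hold), 0, 0 (hold)

# maj5 doubles as a fault-masking vote over redundant copies of one signal.
print(maj5(1, 1, 0, 1, 0))  # 1: two corrupted copies are outvoted
```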
    Review
    Citation - WoS: 12
    Citation - Scopus: 25
A Deep Analysis of Nature-Inspired and Meta-Heuristic Algorithms for Designing Intrusion Detection Systems in Cloud/Edge and IoT: State-of-the-Art Techniques, Challenges, and Future Directions
(Springer, 2024) Hu, Wengui; Cao, Qingsong; Darbandi, Mehdi; Navimipour, Nima Jafari
The number of cloud-, edge-, and Internet of Things (IoT)-based applications that produce sensitive and personal data has rapidly increased in recent years. The IoT is a new model that integrates physical objects and the Internet and has become one of the principal technological evolutions of computing. Cloud computing is a paradigm for centralized computing that gathers resources in one place and makes them available to consumers via the Internet. Despite the vast array of resources that cloud computing offers, real-time mobile applications might not find it acceptable because it is typically located far from users. In applications where low latency and high dependability are required, edge computing, which disperses resources to the network edge, is becoming more and more popular. Though it has less processing power than traditional cloud computing, edge computing offers resources in a decentralized way that can react to customers' needs more quickly. Because the data is so sensitive, there has been a sharp increase in attackers stealing it from these applications; thus, a powerful Intrusion Detection System (IDS) that can identify intruders is required. IDSs are essential for the cybersecurity of IoT, cloud, and edge architectures. Investigators have mostly embraced deep learning algorithms as a means of protecting the IoT environment, but these techniques have issues with computational complexity, long processing times, and poor precision. Feature selection approaches can be utilized to overcome these problems, and optimization methods, including bio-inspired algorithms, are applied as feature selection approaches to enhance the classification accuracy of IDSs. Based on the cited sources, it appears that no study has looked into these difficulties in depth. This research thoroughly analyzes the current literature on intrusion detection using nature-inspired algorithms to safeguard IoT and cloud/edge settings. It examines pertinent analyses and surveys on these subjects, the associated threats, and the outlook, as well as many frequently used algorithms in the development of IDSs for IoT security. The findings demonstrate their efficiency in addressing IoT and cloud/edge ecosystem security issues, and show that the methods put forward in the literature can improve IDS security and dependability in terms of precision and execution speed.
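The feature-selection idea the review surveys can be sketched as a binary PSO over feature masks: a sigmoid transfer function turns velocities into bit-flip probabilities, and fitness trades detector accuracy against mask size. The random flows and threshold "classifier" below are stand-ins for a real intrusion dataset and model.

```python
# Binary PSO feature selection for an IDS (sketch with synthetic stand-ins).
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((200, 12))                       # 200 flows, 12 features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)       # labels driven by 2 features

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    # Cheap proxy classifier: threshold the mean of the selected features.
    score = np.mean(((X[:, mask == 1].mean(axis=1) > 0.5).astype(int)) == y)
    return score - 0.01 * mask.sum()            # accuracy minus size penalty

n, d = 20, 12
pos = rng.integers(0, 2, (n, d))
vel = np.zeros((n, d))
pbest = pos.copy()
pfit = np.array([fitness(p) for p in pos])
gbest = pbest[pfit.argmax()]

for _ in range(60):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))           # sigmoid transfer function
    pos = (rng.random((n, d)) < prob).astype(int)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pfit
    pbest[improved], pfit[improved] = pos[improved], fit[improved]
    gbest = pbest[pfit.argmax()]

print("selected features:", np.flatnonzero(gbest))
```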
    Article
    Citation - WoS: 52
    Citation - Scopus: 56
Deep Q-Learning Technique for Offloading Offline/Online Computation in Blockchain-Enabled Green IoT-Edge Scenarios
(MDPI, 2022) Heidari, Arash; Jamali, Mohammad Ali Jabraeil; Navimipour, Nima Jafari; Akbarpour, Shahin
The number of Internet of Things (IoT)-related innovations has recently increased exponentially, with numerous IoT objects being invented one after the other. Computation offloading determines where, and with how many resources, tasks or applications are carried out; in the IoT environment, the strategy is to transfer resource-intensive computational tasks to an external device in the network, such as a cloud, fog, or edge platform. Offloading is one of the key technological enablers of the IoT, as it helps overcome the resource limitations of individual objects. One major shortcoming of previous research is the lack of an integrated offloading framework that can operate in an offline/online environment while preserving security. This paper offers a new deep Q-learning approach to the blockchain-enabled IoT-edge offloading problem using the Markov Decision Process (MDP). There is a substantial gap in secure online/offline offloading systems, and no work has been published in this arena thus far. The proposed system can be used online and offline while maintaining privacy and security, and it employs the Post Decision State (PDS) mechanism in online mode. Additionally, we integrate edge/cloud platforms into blockchain-enabled IoT networks to strengthen the computational potential of IoT devices, enabling safe and secure cloud/edge/IoT offloading through blockchain. In this system, the master controller, offloading decision, block size, and processing nodes may be dynamically chosen and changed to reduce device energy consumption and cost. Simulation results from TensorFlow and Cooja demonstrated that the method can dramatically boost system efficiency relative to existing schemes: it beats four benchmarks in terms of cost by 6.6%, computational overhead by 7.1%, energy use by 7.9%, task failure rate by 6.2%, and latency by 5.5% on average.
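Stripped of the blockchain and PDS machinery, the decision core is Q-learning over an offloading MDP. The toy sketch below uses a tabular Q with invented states (queue load levels), actions (local/edge/cloud), and cost dynamics, purely to show the update rule the paper builds on.

```python
# Tabular Q-learning on a toy offloading MDP (invented dynamics, sketch only).
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 5, 3            # load level 0..4; local/edge/cloud
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    # Invented dynamics: offloading (a>0) drains the queue but costs more
    # energy; local execution (a=0) is cheap but lets the queue grow.
    energy = [0.2, 0.6, 1.0][a]
    latency = [1.0, 0.5, 0.8][a] * (1 + s / 4)
    s_next = min(max(s + (1 if a == 0 else -1) + rng.integers(0, 2), 0), 4)
    return s_next, -(energy + latency)          # reward = negative cost

s = 0
for _ in range(20000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    # Standard Q-learning temporal-difference update.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("greedy action per load level:", Q.argmax(axis=1))
```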
    Review
    Citation - WoS: 159
    Citation - Scopus: 267
Deepfake Detection Using Deep Learning Methods: A Systematic and Comprehensive Review
(Wiley Periodicals, Inc., 2024) Heidari, Arash; Navimipour, Nima Jafari; Dag, Hasan; Unal, Mehmet
Deep Learning (DL) has been effectively utilized in various complicated challenges in healthcare, industry, and academia, including thyroid diagnosis, lung nodule recognition, computer vision, big data analytics, and human-level control. Nevertheless, developments in digital technology have also been used to produce software that poses a threat to democracy, national security, and confidentiality. Deepfake is one such DL-powered application to surface recently: deepfake systems can create fake images, videos, and sounds, primarily by replacing scenes or faces, that humans cannot tell apart from real ones. Various technologies have put the capacity to alter synthetic speech, images, or video at our fingertips, and video and image frauds are now so convincing that it is hard to distinguish false from authentic content with the naked eye. This can cause problems ranging from deceiving public opinion to introducing doctored evidence in court. For such reasons, it is critical to have technologies that can assist us in discerning reality. This study gives a complete assessment of the literature on deepfake detection strategies using DL-based algorithms. We categorize deepfake detection methods based on their applications: video detection, image detection, audio detection, and hybrid multimedia detection. The objective is to give the reader a better knowledge of (1) how deepfakes are generated and identified, (2) the latest developments and breakthroughs in this realm, (3) weaknesses of existing security methods, and (4) areas requiring more investigation. The results suggest that the Convolutional Neural Network (CNN) methodology is the most often employed DL method in publications, that the majority of articles address video deepfake detection, and that most articles focus on enhancing a single parameter, with accuracy receiving the most attention. This article is categorized under: Technologies > Machine Learning; Algorithmic Development > Multimedia; Application Areas > Science and Technology.
    Article
    Citation - WoS: 8
    Citation - Scopus: 8
    An Efficient Architecture of Adder Using Fault-Tolerant Majority Gate Based on Atomic Silicon Nanotechnology
(IEEE-Inst Electrical Electronics Engineers Inc, 2023) Ahmadpour, Seyed-Sajad; Jafari Navimipour, Nima; Bahar, Ali Newaz; Yalcin, Senay
It is expected that Complementary Metal Oxide Semiconductor (CMOS) implementation with ever-smaller transistors will soon face significant issues such as device density, power consumption, and performance due to the requirement for challenging fabrication processes. Therefore, a new and promising computation paradigm, nanotechnology, can replace CMOS technology. In addition, nanotechnology opens up a new frontier in computing called atomic silicon, which exhibits the same extraordinary behavior as quantum dots. On the other hand, atomic silicon circuits are highly prone to defects, so fault-tolerant structures in this technology play important roles. Full adders are popular, widely used building blocks for efficiently solving mathematical problems. In this article, we develop an efficient fault-tolerant 3-input majority gate (FT-MV3) using dangling bonds (DBs), further enhancing the capabilities of digital circuits. A rule-based approach to redundant DBs achieves a less complex and more robust atomic silicon layout for the MV3. We use the SiQAD tool to simulate the proposed circuits. In addition, to confirm the efficiency of the proposed gate, all common defects, such as single and double dangling-bond omission defects and DB dislocation defects, are examined. The suggested gate is 100% and 66.66% tolerant against single and double DB omission defects, respectively. Furthermore, a new adder design is introduced using the suggested FT-MV3 gate; the results show that it is 44.44% and 35.35% tolerant against single and double DB omission defects. Finally, a fault-tolerant four-bit adder is designed based on the proposed adder.
    Article
    Citation - WoS: 5
    Citation - Scopus: 8
    An Energy-Aware Load Balancing Method for IoT-Based Smart Recycling Machines Using an Artificial Chemical Reaction Optimization Algorithm
(MDPI, 2023) Milan, Sara Tabaghchi; Darbandi, Mehdi; Navimipour, Nima Jafari; Yalcin, Senay
Recycling is very important for a sustainable and clean environment, and developed and developing countries alike face waste management and recycling challenges. The Internet of Things (IoT), meanwhile, is a widely used infrastructure that connects physical devices; it is an important technology, researched and implemented in recent years, that promises to positively influence several industries, including recycling and waste management. The impact of the IoT on recycling and waste management is examined using standard operating practices in recycling. Recycling facilities, for instance, can use the IoT to manage and monitor the recycling situation in various places while allocating logistics for transportation and distribution to minimize recycling costs and lead times. Companies can thus use historical patterns to track usage trends in their service regions, assess their accessibility to gather resources, and arrange their activities accordingly. Additionally, energy is a significant aspect of the IoT, since many devices are linked to the Internet and the devices, sensors, nodes, and objects are all energy-restricted. Because the devices are constrained by nature, the load-balancing protocol is crucial in an IoT ecosystem. Given the importance of this issue, this study presents an energy-aware load-balancing method for IoT-based smart recycling machines using an artificial chemical reaction optimization algorithm. The experimental results indicated that the proposed solution achieves excellent performance: the imbalance degree (5.44%), energy consumption (11.38%), and delay time (9.05%) were reduced using the proposed method.
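Chemical-reaction optimization treats candidate solutions as molecules transformed by collision operators. The following heavily simplified sketch keeps only two operators, an on-wall perturbation and a synthesis crossover, applied to task-to-node assignments scored by load imbalance; the task counts, loads, and operator mix are assumptions, and the full ACRO operator set is not reproduced.

```python
# Simplified chemical-reaction-style load balancing (sketch, two operators only).
import numpy as np

rng = np.random.default_rng(4)
n_tasks, n_nodes = 40, 5
load = rng.uniform(1, 10, n_tasks)              # energy demand per task

def imbalance(assign):
    per_node = np.bincount(assign, weights=load, minlength=n_nodes)
    return per_node.max() - per_node.min()      # smaller = better balanced

pop = [rng.integers(0, n_nodes, n_tasks) for _ in range(10)]
for _ in range(3000):
    if rng.random() < 0.8:
        # On-wall ineffective collision: move one task to another node.
        i = rng.integers(len(pop))
        trial = pop[i].copy()
        trial[rng.integers(n_tasks)] = rng.integers(n_nodes)
        if imbalance(trial) < imbalance(pop[i]):
            pop[i] = trial
    else:
        # Synthesis: splice two molecules; the child replaces the worse
        # parent if it improves on it.
        i, j = rng.choice(len(pop), 2, replace=False)
        cut = rng.integers(1, n_tasks)
        child = np.concatenate([pop[i][:cut], pop[j][cut:]])
        worse = i if imbalance(pop[i]) > imbalance(pop[j]) else j
        if imbalance(child) < imbalance(pop[worse]):
            pop[worse] = child

best = min(pop, key=imbalance)
print("best imbalance:", round(imbalance(best), 3))
```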
    Article
    Citation - WoS: 18
    Citation - Scopus: 19
    An Energy-Aware Nanoscale Design of Reversible Atomic Silicon Based on Miller Algorithm
    (IEEE-Inst Electrical Electronics Engineers Inc, 2023) Ahmadpour, Seyed-Sajad; Jafari Navimipour, Nima; Bahar, Ali Nawaz; Mosleh, Mohammad; Yalcin, Senay
Area overhead and energy consumption continue to dominate the scalability issues of modern digital circuits. In this context, atomic silicon and reversible logic have emerged as suitable alternatives to address both issues. In this article, the authors propose a novel nano-scale circuit design with low area and energy overheads using these technologies. In particular, the authors propose a reversible gate based on the Miller algorithm and atomic silicon technology. This work is highly relevant in today's era, when the world is moving toward low-area, low-energy circuits for use in edge devices.
    Article
    Citation - WoS: 3
    Citation - Scopus: 6
    An Energy-Aware Resource Management Strategy Based on Spark and YARN in Heterogeneous Environments
(IEEE-Inst Electrical Electronics Engineers Inc, 2024) Shabestari, Fatemeh; Navimipour, Nima Jafari
Apache Spark is a popular framework for processing big data. Running Spark on Hadoop YARN allows it to schedule Spark workloads alongside other data-processing frameworks on Hadoop. However, when an application is deployed in a YARN cluster, its resources are allocated without considering energy efficiency, and there is no way to enforce user-specified deadline constraints. To address these issues, we propose a new deadline-aware resource management system and a scheduling algorithm to minimize total energy consumption in Spark on YARN for heterogeneous clusters. First, a deadline-aware, energy-efficient model of the problem is proposed. Then, using a locality-aware method, executors are assigned to applications: the algorithm sorts the nodes based on the performance-per-watt (PPW) metric, the number of application data blocks on nodes, and rack locality, and offers three ways to choose executors from different machines: greedy, random, and Pareto-based. Finally, the proposed heuristic task scheduler schedules tasks on executors to minimize total energy and tardiness. We evaluated the algorithm's energy efficiency and its ability to satisfy the Service Level Agreement (SLA); the results showed that the method outperforms popular algorithms in energy consumption and meeting deadlines.
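The placement rule described, rank nodes by performance per watt and data locality, then pick executors, can be sketched in a few lines. The node figures and scoring weights below are invented, and the paper's random and Pareto-based variants are omitted.

```python
# Greedy PPW- and locality-ranked executor placement (illustrative figures).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    perf: float        # e.g. aggregate GFLOPS
    watts: float
    local_blocks: int  # HDFS blocks of this app already on the node

    @property
    def ppw(self):
        return self.perf / self.watts

nodes = [Node("n1", 400, 250, 6), Node("n2", 300, 120, 2),
         Node("n3", 500, 400, 9), Node("n4", 280, 100, 0)]

def greedy_executors(nodes, needed, w_ppw=1.0, w_local=0.2):
    # Sort by a weighted score of energy efficiency and data locality,
    # then take the top `needed` nodes for this application's executors.
    ranked = sorted(nodes,
                    key=lambda n: w_ppw * n.ppw + w_local * n.local_blocks,
                    reverse=True)
    return [n.name for n in ranked[:needed]]

print(greedy_executors(nodes, needed=2))   # ['n3', 'n2'] with these figures
```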
    Article
    Citation - WoS: 14
    Citation - Scopus: 16
    Evaluating the Effect of Human Factors on Big Data Analytics and Cloud of Things Adoption in the Manufacturing Micro, Small, and Medium Enterprises
(IEEE Computer Soc, 2022) Kavre, Mahesh S.; Gardas, Bhaskar B.; Narwane, Vaibhav S.; Navimipour, Nima Jafari; Yalcin, Senay
The purpose of the study is to explore and analyze human factors that influence the adoption of big data analytics and the cloud of things across Indian micro, small, and medium enterprises (MSMEs). The human factors were identified through a literature survey and experts' opinions. To develop a hierarchical structural model of the identified human factors indicating their mutual relationships, and to classify the factors into cause-and-effect groups, a hybrid ISM-DEMATEL approach was employed. The results indicate that lack of training and development programs (HF11), lack of vision of top management and ineffective corporate governance (HF13), and the communication barrier between management and workforce (HF4) are the most significant factors. The findings would help human resource managers and decision-makers understand the human-related factors responsible for technology adoption. Further, the results can be validated by investigation in other emerging economies.
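The DEMATEL half of the hybrid method is a fixed linear-algebra recipe: normalize the expert-scored direct-influence matrix A, compute the total-relation matrix T = D(I - D)^-1, and split factors into cause and effect groups by the sign of R - C. The 3x3 matrix below is an invented stand-in for the paper's human-factor scores.

```python
# DEMATEL cause/effect analysis on an invented direct-influence matrix.
import numpy as np

A = np.array([[0, 3, 2],          # expert-rated direct influence (0-4 scale)
              [1, 0, 3],
              [2, 1, 0]], dtype=float)

s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s                                      # normalized direct influence
T = D @ np.linalg.inv(np.eye(3) - D)           # total (direct + indirect)

R, C = T.sum(axis=1), T.sum(axis=0)            # influence given / received
for i, name in enumerate(["HF_a", "HF_b", "HF_c"]):
    group = "cause" if R[i] - C[i] > 0 else "effect"
    print(f"{name}: prominence={R[i]+C[i]:.2f} relation={R[i]-C[i]:.2f} ({group})")
```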
    Article
    Citation - WoS: 19
    Citation - Scopus: 28
    A Fire Evacuation and Control System in Smart Buildings Based on the Internet of Things and a Hybrid Intelligent Algorithm
(MDPI, 2023) Mohammadiounotikandi, Ali; Fakhruldeen, Hassan Falah; Meqdad, Maytham N.; Ibrahim, Banar Fareed; Jafari Navimipour, Nima; Unal, Mehmet
Concerns about fire risk reduction and rescue tactics have been raised in light of recent incidents involving flammable cladding systems and fast fire spread in high-rise buildings worldwide. Thus, governments, engineers, and building designers should prioritize fire safety. During a fire, an emergency evacuation system is indispensable in large buildings, guiding evacuees to exit gates as fast as possible along dynamic and safe routes. Evacuation plans should evaluate whether paths inside the structures are appropriate for evacuation, considering the building's electric power, electric controls, energy usage, and fire/smoke protection. The Internet of Things (IoT), meanwhile, is emerging as a catalyst for creating and optimizing the supply and consumption of intelligent services; smart buildings use IoT sensors to monitor indoor environmental parameters such as temperature, humidity, luminosity, and air quality. This research proposes a new IoT-based fire evacuation and control system for smart buildings that efficiently directs individuals along evacuation routes during fire incidents. It uses a hybrid nature-inspired optimization approach combining the Emperor Penguin Colony and Particle Swarm Optimization algorithms (EPC-PSO). The EPC algorithm is regulated by the penguins' body-heat radiation and spiral-like movement inside their colony; the behavior of emperor penguins improves the PSO algorithm for faster convergence, and the method uses the particle concept of PSO to update the penguins' positions. Experimental results showed that the proposed method handles cost-, energy-, and execution-time-related challenges accurately and effectively to minimize casualties and resource loss: it decreased execution time and cost by 10.41% and 25% compared to other algorithms and, toward a sustainable system, reduced energy consumption by 11.90%.
    Article
    Citation - WoS: 1
    High-Performance and Low-Power Quantum-Dot Multiply-Accumulate Design for Next-Generation Supercomputing Platforms
    (Springer, 2026) Ahmadpour, Seyed-Sajad; Zohaib, Muhammad; Rasmi, Hadi; Jafari Navimipour, Nima
The rapid growth of high-performance computing (HPC) and supercomputing applications necessitates hardware architectures that provide both high computational performance and strong energy efficiency under real-time and massively parallel workloads. However, conventional complementary metal-oxide semiconductor (CMOS) technologies face fundamental challenges, including excessive power consumption, leakage currents, and severe scaling limitations, which restrict their suitability for future exascale systems. To overcome these limitations, emerging nanotechnologies such as Quantum-dot Cellular Automata (QCA) have gained significant attention due to their ultra-low power consumption and high device density. In this work, we present a high-performance and low-power Quantum-Dot Multiply-Accumulate (Q-Dot MAC) unit, where MAC denotes a fundamental arithmetic operation combining multiplication and accumulation, used extensively in scientific computing and signal processing. The proposed QCA-based architecture is specifically designed to satisfy the high-frequency (HF) operational demands of modern HPC environments, enabling sustained high-throughput computation. The main objective of this design is to realize a compact, energy-efficient, and physically stable MAC unit suitable for large-scale deployment in energy-constrained supercomputing platforms. Exploiting the inherent parallelism and high-density layout characteristics of QCA, the proposed MAC architecture efficiently executes key computational kernels required in HPC workloads, including large-scale matrix multiplication, convolution operations, and scientific simulations. The proposed QCA-based circuits demonstrate significant performance and area-efficiency improvements over the best existing designs in the literature: the half adder (HA) achieves a 20.51% reduction in cell count and a 25% reduction in area, and the complete MAC unit provides a 22.84% decrease in cell count, a 9.03% reduction in occupied area, and a 14.25% reduction in delay. These results confirm the efficiency and scalability of the proposed design, and the low area enables the integration of large arrays of MAC units, facilitating the scalable systolic and Single Instruction Multiple Data (SIMD) architectures required in supercomputing environments.
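Behaviorally, a MAC computes acc + a*b, and in QCA every adder stage reduces to majority and NOT gates via the standard identities Cout = M(a, b, cin) and Sum = M(Cout', cin, M(a, b, cin')). The sketch below wires those identities into a shift-add multiplier and accumulator; the bit width is an assumption, and cells, clock zones, and layout are not modeled.

```python
# Behavioral MAC built only from majority/NOT primitives (logic-level sketch).
def maj(a, b, c):
    return int(a + b + c >= 2)

def full_adder(a, b, cin):
    # Standard QCA full-adder identities using three majority gates.
    cout = maj(a, b, cin)
    s = maj(1 - cout, cin, maj(a, b, 1 - cin))
    return s, cout

def add_bits(x, y, width=8):
    out, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out & ((1 << width) - 1)            # wrap like fixed-width hardware

def mac(acc, a, b, width=8):
    # Shift-add multiply, then accumulate, all through the majority adder.
    prod = 0
    for i in range(width):
        if (b >> i) & 1:
            prod = add_bits(prod, (a << i) & ((1 << width) - 1), width)
    return add_bits(acc, prod, width)

print(mac(10, 6, 7))   # 10 + 6*7 = 52
```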
    Review
    Citation - WoS: 25
    Citation - Scopus: 24
The History of Computing in Iran (Persia) Since the Achaemenid Empire
(MDPI, 2022) Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
Persia was the early name for the territory that is currently recognized as Iran. Iran's proud history starts with the Achaemenid Empire, which began in the 6th century BCE (c. 550). From the Achaemenid Empire's early days, the Iranians contributed numerous innovative ideas and technologies that are often taken for granted today or whose origins are mostly unknown. To trace the history of computing systems in Iran, we must pay attention to everything that can perform computing. Because of Iran's position in the ancient world, studying the history of computing in this country is an exciting subject. The history of computing in Iran started very far from the digital systems of the 20th century: the Achaemenid Empire provides the first recorded sign of computing systems in Persia, beginning with the invention of mathematical theories and methods for performing simple calculations. This paper attempts to shed light on the elements of Persia's computing heritage, dating back to 550 BC, looking at both the ancient and modern periods. In the ancient section, we go through the history of computing in the Achaemenid Empire, followed by a description of the tools used for calculations. In the modern section, the transition to the Internet era, the formation of a computer-related educational system, the evolution of data networks, the growth of the software and hardware industries, cloud computing, and the Internet of Things (IoT) are all discussed. We highlight the findings in each period that mark vital sparks of computing evolution, tracing the gradual growth of computing in Persia from its early stages to the present. The findings indicate that the development of computing and related technologies has accelerated rapidly in recent times.
    Article
    Citation - WoS: 13
    Citation - Scopus: 12
    Leveraging Explainable Artificial Intelligence for Transparent and Trustworthy Cancer Detection Systems
(Elsevier, 2025) Toumaj, Shiva; Heidari, Arash; Navimipour, Nima Jafari
Timely detection of cancer is essential for enhancing patient outcomes. Artificial Intelligence (AI), especially Deep Learning (DL), demonstrates significant potential in cancer diagnostics; however, its opaque nature presents notable concerns. Explainable AI (XAI) mitigates these issues by improving transparency and interpretability. This study provides a systematic review of recent applications of XAI in cancer detection, categorizing the techniques according to cancer type, including breast, skin, lung, colorectal, brain, and others. It emphasizes interpretability methods, dataset utilization, simulation environments, and security considerations. The results indicate that Convolutional Neural Networks (CNNs) account for 31% of model usage, SHAP is the predominant interpretability framework at 44.4%, and Python is the leading programming language at 32.1%. Only 7.4% of studies address security issues. This study identifies significant challenges and gaps, guiding future research in trustworthy and interpretable AI within oncology.
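The dominant pattern the review reports, SHAP explanations over a trained classifier, looks like the following in practice. The synthetic features stand in for a cancer dataset, and the snippet assumes the shap and scikit-learn packages; it is an illustration of the technique, not any surveyed study's pipeline.

```python
# SHAP attributions over a trained classifier (sketch with synthetic data).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.random((300, 6))                       # 300 cases, 6 features
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:20])    # per-feature attributions

# Mean |SHAP| per feature approximates global feature importance.
# (Older shap versions return a list per class, newer ones an array.)
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
print(np.abs(vals).mean(axis=0))
```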
    Review
    Citation - WoS: 80
    Citation - Scopus: 96
Machine Learning Applications for COVID-19 Outbreak Management
(Springer London Ltd, 2022) Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet; Toumaj, Shiva
Recently, the COVID-19 epidemic has resulted in millions of deaths and has impacted practically every area of human life. Several machine learning (ML) approaches are employed in the medical field for many applications, including detecting and monitoring patients, notably in COVID-19 management. Different medical imaging systems, such as computed tomography (CT) and X-ray, offer ML an excellent platform for combating the pandemic. Because of this need, a significant quantity of study has been carried out; thus, in this work, we employed a systematic literature review (SLR) to cover all aspects of outcomes from related papers. Imaging methods, survival analysis, forecasting, economic and geographical issues, monitoring methods, medication development, and hybrid applications are the seven key uses of ML in the COVID-19 pandemic. Convolutional neural networks (CNNs), long short-term memory networks (LSTMs), recurrent neural networks (RNNs), generative adversarial networks (GANs), autoencoders, random forests, and other ML techniques are frequently used in such scenarios. Next, cutting-edge applications of ML techniques for pandemic medical issues are discussed, and the problems and challenges linked with ML applications for this pandemic are reviewed. Additional research is expected in the coming years to limit the spread and improve catastrophe management. According to the data, most papers are evaluated mainly on characteristics such as flexibility and accuracy, while other factors, such as safety, are overlooked. Keras was the most often used library in the studies reviewed, accounting for 24.4 percent, and medical imaging systems are employed for diagnostic purposes in 20.4 percent of applications.
    Article
    Citation - WoS: 20
    Citation - Scopus: 16
A New Flow-Based Approach for Enhancing Botnet Detection Using Convolutional Neural Network and Long Short-Term Memory
(Springer London Ltd, 2025) Asadi, Mehdi; Heidari, Arash; Navimipour, Nima Jafari
Despite the growing research and development of botnet detection tools, an ever-increasing spread of botnets and their victims is being witnessed. Because botnets frequently adapt to the evolving responses offered by host-based and network-based detection mechanisms, traditional methods lack an adequate defense against botnet threats. In this regard, flow-based detection methods and behavioral analysis of network traffic are suggested. To enhance the performance of these approaches, this paper proposes a hybrid deep learning method that combines a convolutional neural network (CNN) and long short-term memory (LSTM). The CNN efficiently extracts spatial features from network traffic, such as patterns in flow characteristics, while the LSTM captures temporal dependencies critical to detecting sequential patterns in botnet behavior. Experimental results reveal the effectiveness of the proposed CNN-LSTM method in classifying botnet traffic. Compared with the results obtained by the leading method on the identical dataset, the proposed approach showed noteworthy enhancements: a 0.61% increase in precision, a 0.03% improvement in accuracy, a 0.42% enhancement in recall, a 0.51% improvement in F1-score, and a 0.10% reduction in the false-positive rate. Moreover, the CNN-LSTM framework exhibited robust overall performance and notable speed in botnet traffic identification. Additionally, we evaluated the impact of three widely recognized adversarial attacks on the Information Security Centre of Excellence dataset and the Information Security and Object Technology dataset; the findings underscored the proposed method's promising performance in the face of these adversarial challenges.
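The described architecture, convolutional feature extraction feeding an LSTM with a sigmoid head, can be sketched directly in Keras. The sequence length, feature count, and layer sizes below are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal CNN-LSTM botnet-traffic classifier (sketch with assumed shapes).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(32, 10)),                 # 32 time steps, 10 flow features
    layers.Conv1D(32, 3, activation="relu"),      # spatial feature extraction
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 3, activation="relu"),
    layers.LSTM(64),                              # temporal dependencies
    layers.Dense(1, activation="sigmoid"),        # P(botnet)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.Precision(), keras.metrics.Recall()])

# Smoke test on random data standing in for labeled flow windows.
x = np.random.rand(16, 32, 10).astype("float32")
y = np.random.randint(0, 2, size=(16, 1))
model.fit(x, y, epochs=1, verbose=0)
```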