Title: Everything you wanted to know about ChatGPT: Components, capabilities, applications, and opportunities
Authors: Heidari, Arash; Navimipour, Nima Jafari; Zeadally, Sherali; Chamola, Vinay
Author ORCID: Heidari, Arash / 0000-0003-4279-8551
Dates: 2024-06-23; 2024-06-23; 2024
ISSN: 2476-1508
DOI: https://doi.org/10.1002/itl2.530
Handle: https://hdl.handle.net/20.500.12469/5740
WOS ID: WOS:001235765500001
Scopus ID: 2-s2.0-85194725891
Quartile: N/A; Q3
Language: en
Access: info:eu-repo/semantics/closedAccess
Keywords: ChatGPT; conversational artificial intelligence; deep learning; generative pre-trained transformer; large language models; natural language processing; self-attention mechanisms

Abstract: Conversational Artificial Intelligence (AI) and Natural Language Processing have advanced significantly with the creation of the Generative Pre-trained Transformer (ChatGPT) by OpenAI. ChatGPT uses deep learning techniques such as the transformer architecture and self-attention mechanisms to replicate human speech and provide coherent, contextually appropriate replies. The model depends mainly on the patterns discovered in its training data, which can result in incorrect or illogical conclusions. In the context of open-domain chats, we investigate the components, capabilities, constraints, and potential applications of ChatGPT, along with future opportunities. We begin by describing the components of ChatGPT, followed by a definition of chatbots. We present a new taxonomy to classify them: rule-based chatbots, retrieval-based chatbots, generative chatbots, and hybrid chatbots. Next, we describe the capabilities and constraints of ChatGPT. Finally, we present potential applications of ChatGPT and future research opportunities. The results showed that ChatGPT, a transformer-based chatbot model, utilizes encoders to produce coherent responses.
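The abstract refers to the self-attention mechanism underlying the transformer architecture. As a rough illustration only (not material from the paper itself), the following is a minimal scaled dot-product self-attention sketch in pure Python; for brevity it assumes identity query/key/value projections, whereas a real transformer learns separate Q, K, and V projection matrices per attention head:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention over token vectors X.

    Simplifying assumption: queries, keys, and values are the input
    vectors themselves (identity projections). Each output vector is a
    weighted average of all token vectors, weighted by similarity.
    """
    d = len(X[0])  # embedding dimension
    out = []
    for q in X:  # one query per token
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in X]
        weights = softmax(scores)  # attention distribution (sums to 1)
        # Weighted sum of value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, X))
                    for j in range(d)])
    return out
```

For example, two identical token vectors attend to each other equally, so each output row reproduces the input vector; distinct tokens yield a similarity-weighted blend.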