Authors: Yildiz, E.; Safdil, E.B.; Arslan, F.; Alsan, H.F.; Arsan, T.
Date accessioned/available: 2023-10-19
Date issued: 2021
ISBN: 9781665449304
DOI: https://doi.org/10.1109/ISMSIT52890.2021.9604738
Handle: https://hdl.handle.net/20.500.12469/4969
Conference: 5th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT 2021), 21-23 October 2021 (conference code: 174473)
Abstract: This paper presents a multimodal retrieval system for image and text data based on a multi-type learning approach that supports text-to-image, image-to-text, text-to-text, and image-to-image retrieval. As a practical solution, a mobile application is developed in which users can upload images and retrieve descriptive sentences for them. The user system of the application is built with React Native, and essential features such as e-mail authentication and password reset are included. A database is designed with PostgreSQL to store user information and user searches. A multimodal embedding model that supports multi-type retrieval is trained, and the image-to-text retrieval model, which is the core idea of the application, is deployed in the mobile application. © 2021 IEEE.
Language: en
Access rights: info:eu-repo/semantics/closedAccess
Author keywords: Convolutional Networks; Cross-Modal Learning; Deep Learning; Long-Short Term Memory (LSTM); Mobile Application; Multimodal Retrieval; React Native
Indexed keywords: Authentication; Convolutional neural networks; Embeddings; Information retrieval; Mobile computing; Search engines; Cross-modal learning; Long short-term memory; Mobile applications; Multimodal retrieval; React Native
Title: Multitype Learning via Multimodal Data Embedding
Type: Conference Object
Pages: 457-461
Scopus ID: 2-s2.0-85123309356