2025 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom), Chisinau, Moldova, 23-26 June 2025, pp. 1-4 (Full Text Paper)
Artificial intelligence technologies have demonstrated remarkable success in natural language processing (NLP) tasks such as translation, text generation, and classification, and with the development of multilingual language models this success has been extended to a wide range of other languages. This study addresses the NLP task of customized title generation, specifically in the context of Turkish academic texts. A dataset was created from the abstracts and titles of 3332 Turkish articles in the field of artificial intelligence published in the DergiPark system of the Turkish national academic network. Language models including BART, LLaMA, Mistral, Gemma, Trendyol LLM, and Turkish-LLaMA were fine-tuned on the constructed dataset. BART, LLaMA, Mistral, and Gemma are highly successful general-purpose models with broad multilingual coverage and are widely used for tasks such as translation, text generation, and classification, whereas Trendyol LLM and Turkish-LLaMA have been fine-tuned specifically for Turkish and deliver effective results in Turkish language generation. In this study, the models were trained to generate titles from abstract texts, and each model was trained on an A100 GPU for 60 epochs. Performance was evaluated using the BLEU, METEOR, and ROUGE metrics together with mBERT-based semantic similarity and cosine similarity measures. The best results were obtained with the “Gemma (27)” model. All model development and experiments were carried out in the Google Colab environment.
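The sketch below illustrates the kind of abstract-to-title fine-tuning workflow the abstract describes, using BART via Hugging Face Transformers as one of the listed models. It is a minimal assumption-laden example, not the paper's exact setup: the CSV file names, the "abstract"/"title" column names, the batch size, and the learning rate are illustrative, and the decoder-only models (LLaMA, Mistral, Gemma, Trendyol LLM, Turkish-LLaMA) would instead be fine-tuned in a causal language modeling configuration. Only the 60-epoch setting is taken from the paper.

```python
# Minimal sketch: fine-tune a seq2seq model to generate titles from abstracts.
# Assumes a dataset with "abstract" and "title" columns built from DergiPark articles.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

model_name = "facebook/bart-base"  # placeholder checkpoint; the study also covers LLaMA, Mistral, Gemma, etc.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical train/test CSV split of the 3332 abstract-title pairs.
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def preprocess(batch):
    # Abstract is the encoder input, title is the generation target.
    inputs = tokenizer(batch["abstract"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["title"], max_length=64, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = data.map(preprocess, batched=True,
                     remove_columns=data["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="title-generation",
    num_train_epochs=60,               # epoch count reported in the paper
    per_device_train_batch_size=8,     # illustrative value
    learning_rate=5e-5,                # illustrative value
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

Generated titles from the held-out split could then be scored against the reference titles with BLEU, METEOR, and ROUGE, and with cosine similarity over mBERT sentence embeddings, as in the evaluation described above.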