33rd IEEE Conference on Signal Processing and Communications Applications, SIU 2025, İstanbul, Türkiye, 25-28 June 2025 (Full Paper)
In this study, the performance of the BERT and DistilBERT models, which are widely used in natural language processing (NLP), is investigated on the SQuAD dataset. The standard Exact Match (EM) and F1 metrics were used to evaluate the success of the models. Within the scope of the study, the performance of BERT and DistilBERT was compared across different text types and difficulty levels, and their differences in accuracy, speed, and resource efficiency were analyzed. The results show that BERT achieves high accuracy thanks to its bidirectional contextual understanding, but at a high computational cost. DistilBERT, on the other hand, runs faster and consumes fewer resources, indicating that it can serve as an alternative, especially in resource-constrained settings. The rich structure of the SQuAD dataset and its varied difficulty levels allow the models to be tested under conditions close to real-world scenarios.
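As background for the metrics mentioned above, the following is a minimal sketch of how per-question Exact Match and F1 are conventionally computed in SQuAD-style evaluation. It follows the answer-normalization rules of the official SQuAD evaluation script (lowercasing, stripping punctuation and English articles, collapsing whitespace); the function names are illustrative and not taken from the paper.

```python
import re
import string
from collections import Counter


def normalize_answer(s: str) -> str:
    """Normalize an answer string per the SQuAD convention:
    lowercase, drop punctuation, drop articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, ground_truth: str) -> float:
    """EM is 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))


def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


# Example: a partially overlapping answer gets partial F1 credit but EM of 0.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))          # 1.0 (articles removed)
print(f1_score("Eiffel Tower in Paris", "the Eiffel Tower"))    # ~0.67
```

In the full SQuAD benchmark, each prediction is scored against every reference answer for a question and the maximum EM and F1 are taken, then averaged over the dataset; the sketch above shows only the single-reference case.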