Alexandra Fodor
The thesis examines the applicability of generative large language models to text-annotation tasks, using a corpus of texts related to depression. The research compares the performance of the closed-source GPT-4o mini and the open-source Llama 3.3 70B, evaluating both zero-shot and few-shot prompting for each model. The few-shot approach yielded a slight improvement in accuracy over the zero-shot technique, and the Llama model performed slightly better overall than GPT. Both models achieved only moderate accuracy, but their consistency and reliability can be considered high.
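To illustrate the zero-shot versus few-shot contrast described above, the following is a minimal sketch of an annotation call against an OpenAI-compatible chat-completion endpoint. The label set, prompt wording, example texts, and the annotate helper are illustrative assumptions, not taken from the thesis; Llama 3.3 70B is assumed to be reachable through an OpenAI-compatible server.

```python
# Minimal sketch: zero-shot vs. few-shot annotation with a chat-completion API.
# Labels, prompts, and examples below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # for Llama 3.3 70B, point base_url at an OpenAI-compatible server

LABELS = ["depression-related", "not depression-related"]  # hypothetical label set

SYSTEM_PROMPT = (
    "You are an annotator. Classify the text into exactly one of these labels: "
    + ", ".join(LABELS) + ". Reply with the label only."
)

# Hypothetical few-shot demonstrations prepended to the conversation.
FEW_SHOT_EXAMPLES = [
    ("I can't get out of bed and nothing feels worth doing anymore.", "depression-related"),
    ("The new cafe downtown has excellent espresso.", "not depression-related"),
]

def annotate(text: str, model: str = "gpt-4o-mini", few_shot: bool = False) -> str:
    """Return the model's label for one text, with or without few-shot examples."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if few_shot:
        for example_text, label in FEW_SHOT_EXAMPLES:
            messages.append({"role": "user", "content": example_text})
            messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": text})
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,  # deterministic decoding helps when measuring consistency
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    sample = "Lately I feel empty and I have stopped talking to my friends."
    print("zero-shot:", annotate(sample))
    print("few-shot: ", annotate(sample, few_shot=True))
```

In this sketch, the only difference between the two conditions is whether labeled demonstrations are inserted into the message history before the target text, which is the essence of the few-shot technique compared in the thesis.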