Jakab Buda earned his MSc in Survey Statistics and Data Analytics at the Faculty of Social Sciences of Eötvös Loránd University in 2020. He wrote his thesis on recurrent neural network based language models and their application in text classification under the supervision of Márton Rakovics. As a freelance data scientist he participates in machine learning projects. His academic interests are the application of machine learning based NLP methods and the interpretability of ML models. He has recently participated in several NLP projects, including the PAN 2020 author profiling shared task on identifying fake news spreaders on Twitter, where his team's submission tied for first place.
Doctoral Research
Explainable Neural Language Models and their Application in Social Sciences
Supervisor: Dr. Renáta Németh
In recent years, the explainability of machine learning models has become the subject of significant scientific discourse (see e.g. Adadi & Berrada (2018), Danilevsky et al. (2020), Gholizadeh & Zhou (2021), Samek et al. (2021), Holzinger et al. (2022), Islam et al. (2022)). One of the reasons for this is that it has become increasingly obvious that these models, traditionally regarded as black boxes, inherit the biases present in society (see e.g. Lum & Isaac (2016), Birhane et al. (2022), Wissel et al. (2019), Ntoutsi et al. (2020)).
The application of these explainable machine learning methods to natural language processing in the social sciences is not yet established, but it opens exciting possibilities: the patterns recognised by the models can also reveal a lot about our society (see e.g. Bolukbasi et al. (2016)). My research is therefore primarily methodological: my goal is to develop a methodology that, with the help of explainable machine learning methods, can reveal associations between the social characteristics of individuals (e.g. age, gender, education, origin) and their language use. The basis of the method is a language-model-based regression or classification that predicts the target variable from the input texts (see Radford et al. (2019)). After demonstrating the association between language use and the target variable, I will use the tools of explainable machine learning, in a theory- and data-driven way, to understand which aspects of language use are strongly associated with the given target variable.
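As an illustration of this two-step setup, the sketch below combines an off-the-shelf transformer text classifier with SHAP token attributions. The model name, example sentence and label set are placeholders rather than the study's actual data or fine-tuned model; the sketch only indicates the kind of pipeline the methodology builds on.

import transformers
import shap

# Stand-in classifier: in the study this would be a language model fine-tuned
# to predict a social characteristic (e.g. age group or education level).
classifier = transformers.pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,  # return scores for all classes, as the SHAP text explainer expects
)

# SHAP wraps the pipeline with a text masker and attributes the prediction
# to individual tokens of the input.
explainer = shap.Explainer(classifier)
shap_values = explainer(["Example sentence standing in for a document from the corpus."])

# Per-token contributions to each class score: the raw material for the
# subsequent theory- and data-driven interpretation step.
shap.plots.text(shap_values)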
I plan to present the methodology through case studies: I will examine language polarization, the linguistic manifestations of discrimination, and language use across ideological sides in official correspondence, parliamentary speeches, and the online press. Examining these phenomena in different environments also provides an opportunity to draw more general conclusions (on the difficulties of this, see e.g. Yan et al. (2017)). I will supplement the case studies with volatility analyses, as sketched below, to better understand the method's reliability and reproducibility.
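The sketch below shows one simple form such a volatility analysis could take: retraining the same model on bootstrap resamples of the corpus and measuring how stable the resulting feature-importance ranking is. For brevity it uses a bag-of-words classifier with coefficient magnitudes as a stand-in for the neural language model and its attributions; the toy texts and labels are placeholders.

import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder corpus and binary target variable.
texts = ["example document one", "another example text", "a third short document",
         "yet another sample", "one more piece of text", "a final toy example"]
labels = np.array([0, 1, 0, 1, 0, 1])

X = TfidfVectorizer().fit_transform(texts)

rng = np.random.default_rng(0)
importances = []
for _ in range(50):                               # bootstrap resamples
    idx = rng.integers(0, len(labels), len(labels))
    if len(set(labels[idx])) < 2:                 # skip degenerate resamples
        continue
    clf = LogisticRegression().fit(X[idx], labels[idx])
    importances.append(np.abs(clf.coef_[0]))      # proxy for feature importance

# Pairwise rank correlations of the importance vectors: high values suggest a
# stable explanation across resamples, low values signal volatility.
rhos = [spearmanr(importances[i], importances[j])[0]
        for i in range(len(importances)) for j in range(i + 1, len(importances))]
print("mean rank correlation across resamples:", np.mean(rhos))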
References
Adadi, A. & Berrada, M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160.
Birhane, A., Prabhu, V. U. & Whaley, J. (2022). Auditing Saliency Cropping Algorithms. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 4051-4059.
Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V. & Kalai, A. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS’16), 4356–4364.
Danilevsky, M., Qian, K., Aharonov, R., Katsis, Y., Kawas, B. & Sen, P. (2020). A Survey of the State of Explainable AI for Natural Language Processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing (AACL-IJCNLP).
Gholizadeh, S. & Zhou, N. (2021). Model Explainability in Deep Learning Based Natural Language Processing. arXiv preprint arXiv:2106.07410.
Holzinger, A., Saranti, A., Molnar, C., Biecek, P. & Samek, W. (2022). Explainable AI Methods – A Brief Overview. In: Holzinger, A., Goebel, R., Fong, R., Moon, T., Müller, K.-R. & Samek, W. (eds), xxAI – Beyond Explainable AI. Lecture Notes in Computer Science, vol. 13200. Springer, Cham.
Islam, M. R., Ahmed, M. U., Barua, S. & Begum, S. (2022). A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks. Applied Sciences, 12(3), 1353.
Lum, K. & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14-19.
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M. E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., Broelemann, K., Kasneci, G., Tiropanis, T. & Staab, S. (2020). Bias in data‐driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), e1356.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D. & Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners. OpenAI Blog, 1(8), 9.
Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J. & Müller, K.-R. (2021). Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. Proceedings of the IEEE, 109(3), 247-278.
Wissel, B. D., Greiner, H. M., Glauser, T. A., Mangano, F. T., Santel, D., Pestian, J. P., Szczesniak, R. D. & Dexheimer, J. W. (2019). Investigation of bias in an epilepsy machine learning algorithm trained on physician notes. Epilepsia, 60(9), e93-e98.
Yan, H., Lavoie, A. & Das, S. (2017). The perils of classifying political orientation from text. In LINKDEM@IJCAI.