The spread of fake news poses significant public-health, social, and political risks. The aim of this study is to compare the performance of two advanced recurrent neural network architectures, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), on a binary fake news classification task. The models were evaluated under various text preprocessing strategies (e.g., lemmatization, stopword handling, conversion of numerals to words) with GloVe word embeddings. The analysis used several independent and thematically diverse English-language news corpora as training and test sets. The results suggest that certain preprocessing steps, such as converting numbers to text form and retaining stopwords, can significantly improve predictive performance. GRU models performed better on test sets containing articles from 2016, while the LSTM architecture proved more reliable and accurate on the most recent news articles from 2025. These findings highlight the importance of the interaction between neural architectures and preprocessing methods and may point the way toward more effective automated fake news filtering systems.
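As an illustration of the preprocessing steps the abstract mentions, the sketch below converts numerals to word form while retaining stopwords. The study's exact conversion method is not specified here, so the digit-by-digit mapping (`numbers_to_words`, `preprocess`) is a hypothetical, minimal stand-in using only the standard library.

```python
import re

# Hypothetical digit-to-word mapping; the paper's actual number-to-text
# conversion method is not specified in the abstract.
DIGIT_WORDS = {
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
}

def numbers_to_words(text: str) -> str:
    """Replace every run of digits with its spelled-out digit sequence."""
    def spell(match: re.Match) -> str:
        return " ".join(DIGIT_WORDS[d] for d in match.group(0))
    return re.sub(r"\d+", spell, text)

def preprocess(text: str) -> list[str]:
    """Lowercase, convert numerals, and tokenize on whitespace.
    Stopwords are deliberately kept, mirroring the retain-stopwords
    setting the abstract reports as beneficial."""
    return numbers_to_words(text.lower()).split()
```

The resulting tokens would then be looked up in a GloVe embedding table before being fed to the LSTM or GRU classifier.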