Classification of Fake News Using Deep Learning-Based GloVe-LSTM Model

Chandra Bhushana Rao Killi, Narayanan Balakrishnan, Chinta Someswara Rao

Dept. of CSE, Annamalai University, Chidambaram 608002, Tamil Nadu, India

Dept. of CSE, SRKR Engineering College, Bhimavaram 534204, Andhra Pradesh, India

Corresponding Author Email: kchbhushan.mtech@gmail.com

Pages: 631-637 | DOI: https://doi.org/10.18280/ijsse.120512

Received: 13 July 2022 | Revised: 10 August 2022 | Accepted: 19 August 2022 | Available online: 30 November 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Fake news is deliberately created with the goal of influencing people and their belief systems. Because false news has a detrimental influence on society and politics, identifying it and stopping its spread have become increasingly crucial. Most prior research has employed supervised learning and has placed its emphasis on the terms used in the dataset. We begin by pre-processing the data (missing-value replacement, noise removal, tokenization, and stemming). In this article, a long short-term memory (LSTM) network is employed for text classification, and automated feature extraction from the text is handled with the GloVe model. Unlike existing models, which fail in this regard, the proposed model can select the attributes relevant to deciding whether a news item is false or real. The proposed model outperforms the already available models.

Keywords: 

fake news, GloVe, LSTM, deep learning, WELFake

1. Introduction

Because of the long-term impacts of spreading fake news, detecting it has always been a difficult task. There is evidence that it dates to the 17th century, when it was used in propaganda, and it later evolved into disinformation during the Cold War [1]. With the proliferation of social media platforms in recent years, this problem has become increasingly significant, especially given the surge in popularity of fast-moving outlets such as Facebook, Twitter, and Instagram.

Figure 1 shows a sampling of some of the most egregious hoaxes that have appeared in the media in recent years. About half the population of industrialized nations is estimated to use social media for information, according to a number of polls [2]. Social media's role in breaking news stories, as shown by Zubiaga et al. [3], demonstrates just how important it is. However, one of the downsides of social media's accessibility is the rapid transmission of erroneous information.

Figure 1. Illustrations of some fabricated news [4]

Compared to conventional media such as print or television, social media content may be updated by its users, who can add their own thoughts or biases to the information to make it more interesting. This has the potential to completely change the meaning or context of the news [5]. According to several studies, social media offers a fertile field for the rapid dissemination of information without the need for fact-checking [1]. Fake news, in turn, is created and modified by social media users with the goal of distorting the apparent meaning or context of news and inflicting monetary or ethical damage on individuals, organisations, or society. Sarcasm, memes, phoney advertisements, fabricated political remarks, and rumours are all examples of fake news [3]. A fester is a term for a person who is responsible for distributing false information. Depending on its reliability, news may be classified as accurate, half-true, or completely fake [5]. In addition to photos and video, fake news may also be delivered as text. Jain and Kasbe [6] describe false news as a three-step process: development, publication, and distribution. The rise of fake news on social media has had a major impact [7]: it can produce a downturn in stock values, a decrease in potential investments, and other consequences [6]. For example, false news had a significant influence on the 2016 United States presidential election [2], and the spread of false information that President Obama had been injured in an explosion resulted in a loss of about USD 130 billion in stock value within minutes. Fake news may be published with the objective of accusing someone for political or personal motives, or with the intent of misleading people [6]. Factcheck, Snopes, Tyrothricin, and PolitiFact are a few of the websites used to identify false information.

Furthermore, Google has developed the Google News Initiative to combat the spread of false information [3]. Fake news identification, however, remains a time-consuming endeavour, because false material often incorporates deceptive information tainted with genuine facts [2]. Fake news may be motivated by a variety of factors, including politics, financial gain, or ideological beliefs [3, 5]. Numerous strategies have been proposed to identify fake news, based on linguistic features and deep learning [8]; other approaches include recurrent neural networks, convolutional neural networks (CNN), transformers, bidirectional encoder representations from transformers (BERT), and their combinations. The task of spotting false news can be framed as a binary classification problem; alternative modelling approaches treat it as a regression or optimization problem. Several datasets are also available for the categorization of false news, including Kaggle, ISOT, and LIAR [3], among others. Despite the many studies that have been conducted, fake news identification remains very difficult, and it is considered that a complete multi-phased strategy is required to combat it effectively. As a solution to this challenge, this research provides a unique technique for validating the legitimacy of news reports. After assessing the content's position and verifying the author's credibility, machine learning is used to decide whether the news is fake. By analysing a wide range of factors, including the language used and the author's background, we aim to identify whether or not a piece of news is real.

The planned work may have several consequences. As noted previously, if a reader believes phoney news about medical symptoms to be accurate, it might have serious ramifications for him or her. Fake news may also do irreversible harm in the areas of health, politics, social welfare, and the economy, among others. Such potentially disastrous outcomes may be prevented if the recommended technique is followed. This work also serves as a baseline for future research on false news identification and opens up new areas for investigation. In the field of fake news categorization, three-pronged strategies have received little attention in the academic literature. Machine learning and deep learning are now heavily used to detect and identify false news, and the present analysis suggests that the problem can be resolved in three phases. The proposed model identifies fake news, and a method to properly analyse its validity may be established based on the proposed investigation. The rest of this paper is structured as follows: Section 2 discusses previous work; Section 3 outlines the proposed technique for identifying false news; Section 4 discusses the experimental findings; and Section 5 concludes with recommendations and prospects for further research.

2. Related Work

Gupta et al. [1] were able to distinguish between real and fraudulent photographs of Hurricane Sandy on Twitter more than 90% of the time; the storm had a huge impact on the United States. By looking at more than 10,000 images on Twitter, the researchers were able to determine how the falsified photos were influencing people. Naive Bayes and Decision Tree models were developed in that work; after applying the two machine learning algorithms, an accuracy of 97% was achieved with the Decision Tree. In 2017, Zubiaga et al. [2] presented work on the classification of rumour stances on social media using sequential classifiers. Twitter is their social media site of choice, and they categorise tweets into four groups: support, denial, query, and comment on a prior post. This research included eight datasets, all linked to breaking news. To classify the data, they used four sequential classifiers: Hawkes processes, LSTM, linear CRF, and tree CRF. Social media interaction may be classified using sequential classifiers, which outperform non-sequential classifiers; moreover, LSTM outperforms the other sequential classifiers in some scenarios. In 2018, Kalina Bontcheva and colleagues worked on false-news detection using natural language processing and data mining technologies. According to the researchers, there are two types of false news that circulate on social media: long-standing rumours and newly emerging rumours sparked by current events. They create a technique for handling rumours in four broad steps: detecting rumours, tracking rumours, evaluating the stances towards rumours, and establishing the veracity of rumours. They then applied this method to the PHEME dataset, which is open to the public and includes both rumours and non-rumours. In 2018, Kotteti et al. [4] used data imputation to enhance the detection of fake news. They improved performance by using a novel approach to data preparation to fill in the blanks in the raw dataset. Data modelling approaches were used to fill in missing numerical and hierarchical properties; for numerical properties they chose the column's average value. They attempted three different treatments for the missing data: removing columns that had missing values, replacing those values with empty text, and filling missing values using data imputation techniques. When these methods were followed, the accuracy of the multilayer perceptron (MLP) classifier increased by 16 percent. In 2018, Aphiwongsophon and Chongstitvatana [5] used machine learning techniques to identify fraudulent news stories. Their study focuses on three popular methods: the first is Naive Bayes, and the second and third are the Support Vector Machine and the Neural Network. The data was sanitised via normalisation so that the methods would perform better when correct data was input. According to this study, the accuracy of Naive Bayes is 96.08 percent, whereas the accuracy of the two more sophisticated techniques is 99.90 percent. Jain and Kasbe [6] proposed a method for detecting fake news on Facebook based on Naive Bayes prediction.
They used an 11,000-article dataset from GitHub organised into columns (index, text, title, and label); this database also includes information about science and industry. Aside from n-gram references, they used the title and the content of the primary source to execute their assertions. On the basis of this information, they determined that the Naive Bayes model had an accuracy of 0.931 and that numerous ways had been developed to improve it. Using CNN and RNN models, a team led by Ajao et al. [7] developed a model for detecting fake news messages on Twitter. Over the course of five days, they collected 5,800 tweets related to the Charlie Hebdo attack, the Sydney Siege, the Germanwings crash, the Ottawa shooting, and the Ferguson unrest. By using CNN and RNN, they claim to be able to identify key characteristics of misinformation in news stories intuitively and obtain an accuracy rate of over 80% without any prior knowledge of the news. Han and Mehta [8] studied the effectiveness of fake news detection systems in 2019. Based on their findings, fake news detection is broken down into two paradigms: news content and social context. Visual and textual news are divided into two categories (text, title) to make classification simpler. Traditional machine learning methods such as Naive Bayes and Random Forest were compared with recent deep learning approaches (deep reinforcement learning) in terms of performance, giving practitioners the opportunity to pick between the two options. The researchers report that a hybrid CNN-RNN model outperforms the others and produces superior outcomes [8]. Reis et al. [9] examined a broad range of variables in news articles, posts, and stories to better anticipate false news in 2019. These additional characteristics, which include bias, reliability/trustworthiness, engagement, domain location, and temporal patterns, have been shown to have a major influence on the assessment of deceptive news reports. With these attributes, the dataset they utilized included 2,282 BuzzFeed news articles. This concept was studied using KNN, Naive Bayes, Random Forest, Support Vector Machine, and the XGBoost algorithm; XGBoost had the greatest accuracy of all the methods, at 86% [9]. Ahmad et al. [10] described an ensemble approach to detecting fake news stories in 2020 (Logistic Regression, Random Forest, Perez-LSVM), using textual characteristics to distinguish between true and fraudulent news. They evaluated the algorithms' performance on four publicly accessible datasets, each from a distinct domain; on the ISOT Fake News Dataset, Random Forest and Perez-LSVM achieve an accuracy rate of 99 percent [10]. Yuan et al. [11] proposed a multi-head attention network (SMAN) based technique in 2020 that exploits the high level of trust placed in both publishers and users. Real-world data was used in this method, and because producers and consumers have different graphs, the method can identify bogus news at an early stage. The accuracy of this model has been shown on three different datasets (Twitter15, Twitter16, and Weibo).
For COVID-19 fake news identification in 2021, Shifath et al. proposed a transformer-based technique. They experimented with both traditional language models and CNN-based content models [12-15]. The collection includes COVID-19-related texts, along with labels indicating whether they are legitimate or fake [16-18]. Additionally, the researchers tested a range of hyperparameters and transformer-based models; RoBERTa achieved the best accuracy at 0.979 [19].

Yang et al. [20] noted that false information has caused widespread alarm and that the repercussions of spreading fabricated political news can be severe; as the prevalence of false news rises, so does the necessity of spotting it. They present a single model, called TI-CNN [20], that can exploit both the explicit and latent properties of text and images. Because of its high scalability, the model may readily incorporate additional news aspects. In addition, the convolutional neural network allows the model to take in the whole input at once and can be trained considerably more quickly than LSTM and other RNN models.

The problem of tracking down the writers of hoaxes was studied by Zhang et al. [22]. Using a news-enhanced heterogeneous social network as a basis, they extract a set of explicit and latent features from the textual content of news items, producers, and subjects. To further include the network structural information in model learning, a deep diffusive network model is introduced that considers the interdependencies between news stories, the people who report them, and the subjects they cover. They also provide a novel gated diffusive unit (GDU); its content, "forget", and "adjust" gates can effectively fuse many inputs from different sources into a single output.

To enhance the accuracy of false news identification, Ali et al. [23] constructed a deep and dense ensemble model. Their research demonstrates that traditional characteristics based on news content outperform current text embedding approaches when given adequate representation and model design. A three-stage process was used to build their model. As a preliminary step, n-gram approaches were used to enrich the features retrieved from the news material, and the TF-IDF statistical method was used to represent those features. To determine the hidden properties that accurately characterize the news, many binary classifiers were developed. Finally, a multilayer perceptron learned from the parameters of the deep ensemble model to provide the final classification [23, 24].
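For orientation only, the n-gram TF-IDF representation that this line of work relies on can be reproduced in scikit-learn roughly as follows; the parameter values and the news_texts variable are illustrative assumptions, not settings reported in [23].

from sklearn.feature_extraction.text import TfidfVectorizer

# Unigram + bigram TF-IDF features over the news articles (illustrative settings).
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50000)
X_tfidf = vectorizer.fit_transform(news_texts)   # news_texts: list of article strings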

In the recent past, deception detection has been an extremely popular subject of discussion. Information that is intentionally misleading includes scientific fraud, fake news, bogus tweets, and so on. Identifying false news is a subtopic that falls under this category.

3. Proposed System

Fake news has played a big part in several real-time catastrophes, with serious ramifications for the media and the economy and resulting political instability. Manual interventions are ineffective in combating false news because of the rapidity with which information is shared on the internet.

Advancing the science of false news identification takes extensive testing of machine learning approaches on a broad variety of datasets. Fake news and the methods by which it spreads throughout the globe must be well understood before new tactics can be developed. By presenting a model based on unique methodologies that demonstrate the usefulness of deep learning models for the false news detection challenge, the current study makes a significant contribution to this field. Furthermore, it proposes a combination of GloVe and LSTM, which improves on the performance of previously published false news detection models.

GloVe is an unsupervised learning approach for creating vector representations of words, developed at Stanford University. Training is carried out using global word-word co-occurrence statistics from a corpus, and the resulting representations reveal intriguing linear substructures of the word vector space. GloVe differs from Word2vec in that it incorporates both local and global information (such as word co-occurrence) when creating word vectors.

import numpy as np

def rd_glve_vec(glve_vec):
    # Read a GloVe text file and map each word to its embedding vector.
    words1 = set()
    wd_to_vec_map = {}
    with open(glve_vec, 'r', encoding='UTF-8') as f:
        for line1 in f:
            w_line1 = line1.split()
            curr_word1 = w_line1[0]
            words1.add(curr_word1)
            wd_to_vec_map[curr_word1] = np.array(w_line1[1:], dtype=np.float64)
    return wd_to_vec_map
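As a usage sketch (not part of the original code), the returned map can be turned into an embedding matrix whose rows follow a tokenizer's word index; the 100-dimensional vector size and the build_embedding_matrix helper name are illustrative assumptions.

import numpy as np

def build_embedding_matrix(word_index, wd_to_vec_map, dim=100):
    # Row 0 is reserved for padding; words absent from GloVe stay all-zero.
    matrix = np.zeros((len(word_index) + 1, dim))
    for word, idx in word_index.items():
        vec = wd_to_vec_map.get(word)
        if vec is not None:
            matrix[idx] = vec
    return matrix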

The suggested approach is designed to deal with fake news in the form of text. RNNs are incapable of learning long-term sequences; according to Hochreiter and Schmidhuber, the LSTM algorithm was developed to circumvent this issue. In an LSTM memory block, the self-recurrent hidden unit (memory cell) has a recurrent connection, while the input and output gating units (input and output gates) restrict access to the memory cell depending on the previously observed context. Later, Gers et al. added a forget gate, which learns to reset the memory cell once its contents are no longer needed, thereby improving the original design. LSTM networks have been successful in a range of sequence-labelling applications, including off-line handwriting recognition and speech recognition. We use the LSTM model for the analysis of the text data. The proposed model considers additional information about the news articles, such as the full statement of the fake information provided. Because it is a memory network, it can remember the words in a phrase and represent the precise meaning of those words. If a user publishes a review in a sarcastic manner, this is also taken into consideration and classified.
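A minimal Keras sketch of a GloVe-initialised LSTM classifier of the kind described here is given below; the layer sizes, dropout rate, and optimizer are illustrative assumptions rather than the exact configuration used in the experiments.

from tensorflow.keras.initializers import Constant
from tensorflow.keras.layers import Dense, Dropout, Embedding, LSTM
from tensorflow.keras.models import Sequential

def build_glove_lstm(embedding_matrix):
    vocab_size, dim = embedding_matrix.shape
    model = Sequential([
        # Frozen embedding layer initialised with the pre-trained GloVe vectors.
        Embedding(vocab_size, dim,
                  embeddings_initializer=Constant(embedding_matrix),
                  trainable=False),
        LSTM(128),                        # memory cell captures long-range word context
        Dropout(0.3),
        Dense(1, activation='sigmoid')    # binary output: fake vs. real
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model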

Figure 2. Proposed model for false-news categorization

Figure 2 explains the proposed architecture. The input dataset is first passed through data pre-processing, and the pre-processed data is then fed to the LSTM network. Within the LSTM network, the input gate takes the data and passes it to the forget gate, where irrelevant previous input is forgotten; the data then moves to the output gate, preserving the contextual relationships within the corpus. If a sample belongs to the positive class it is classified as positive, otherwise it is classified as negative.
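An end-to-end sketch of the flow in Figure 2, reusing the helpers sketched above (rd_glve_vec, build_embedding_matrix, build_glove_lstm), might look as follows; the sequence length, vocabulary size, epoch count, GloVe file name, and the pre-split train_texts/test_texts/y_train variables are assumptions for illustration.

from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

MAX_LEN = 300
tokenizer = Tokenizer(num_words=50000, oov_token='<unk>')
tokenizer.fit_on_texts(train_texts)                    # pre-processed news text
X_train = pad_sequences(tokenizer.texts_to_sequences(train_texts), maxlen=MAX_LEN)
X_test = pad_sequences(tokenizer.texts_to_sequences(test_texts), maxlen=MAX_LEN)

wd_to_vec_map = rd_glve_vec('glove.6B.100d.txt')       # assumed GloVe file name
embedding_matrix = build_embedding_matrix(tokenizer.word_index, wd_to_vec_map)
model = build_glove_lstm(embedding_matrix)
model.fit(X_train, y_train, validation_split=0.1, epochs=10, batch_size=64)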

Figure 3 shows an LSTM memory block for a single chain; the model consists of a single memory block connected through a single self-recurrent link. One of the most significant operational differences between the multi-dimensional LSTM (MD-LSTM) and the one-dimensional LSTM (1D-LSTM) is that numerous interconnections are available for each axis; these links transport context data from one location to another.

$j_n=\operatorname{sigm}\left(X_j \cdot Y_l+\sum_{r \in P} K_j^r k_{l-1}^r+D_j^r d_{l-1}^r+a_j\right)$ (input gate)

$e_n^r=\operatorname{sigm}\left(X_e \cdot Y_l+\sum_{r \in P} K_e^r k_{l-1}^r+D_e^r d_{l-1}^r+a_e^r\right)$ (forget gate for axis $r$)

$g_n=\tanh \left(X_g \cdot Y_l+\sum_{r \in P} K_g^r k_{l-1}^r+a_g\right)$ (cell input)

$f_n=\sum_{r \in P} e_n^r \odot f_{l-1}^r+j_n \odot g_n$ (cell state)

$Q_n=\operatorname{sigm}\left(X_Q \cdot Y_l+\sum_{r \in P} K_Q^r k_{l-1}^r+D_Q f_n+a_Q\right)$ (output gate)

$K_n=Q_n \odot \tanh f_n$ (net output)

where $P$ denotes the set of connections between the axes ($x$ and $y$ in the 2D case).

Figure 3. LSTM memory block
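For the single-axis case (|P| = 1) these updates reduce to the ordinary 1D-LSTM step; the numpy sketch below maps the j/e/g/Q notation onto one such step, with the weight dictionaries W, U and biases b assumed to be already initialised.

import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(y_t, k_prev, f_prev, W, U, b):
    # One 1D-LSTM step; W, U, b hold input, recurrent, and bias parameters per gate.
    j = sigm(W['j'] @ y_t + U['j'] @ k_prev + b['j'])      # input gate
    e = sigm(W['e'] @ y_t + U['e'] @ k_prev + b['e'])      # forget gate
    g = np.tanh(W['g'] @ y_t + U['g'] @ k_prev + b['g'])   # cell input
    f = e * f_prev + j * g                                 # cell state
    q = sigm(W['Q'] @ y_t + U['Q'] @ k_prev + b['Q'])      # output gate
    k = q * np.tanh(f)                                     # net output (hidden state)
    return k, f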

4. Experimental Results

Data Set:

It is vital to gather news data to create an accurate and balanced dataset, since high-quality training data is essential for excellent outcomes. Although a significant number of datasets are available for false-news research, the literature shows they have significant restrictions in terms of size, categorization, and bias. Following a thorough investigation, we therefore adopted the more complete WELFake dataset, which is the result of combining four datasets (Reuters, Kaggle, McIntire, and BuzzFeed), for a couple of reasons. First, they have a similar look and feel and a two-category organisational structure (i.e., real and fake news). Second, combining the datasets reduces their individual constraints and increases the accuracy of the results. The open dataset includes 72,134 news stories, of which 35,028 are real and 37,106 are fake.

Each record in the collection has three fields (i.e., title, text, and label), where the label is binary, distinguishing fake news from real news. The WELFake dataset has a balanced distribution of false and true news across the four source datasets.
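A short pandas sketch of loading WELFake and checking its class balance is shown below; the file name WELFake_Dataset.csv is an assumption, while the title/text/label columns follow the description above.

import pandas as pd

df = pd.read_csv('WELFake_Dataset.csv')                       # assumed file name
df['content'] = df['title'].fillna('') + ' ' + df['text'].fillna('')
print(df['label'].value_counts())                             # ~35,028 real vs. ~37,106 fake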

Several observations hold for the dataset: a) short sentences (fewer than ten words) representing true news outnumber short sentences indicating false news in terms of the total number of short sentences; b) false news articles are more subjective than their real-news counterparts; c) the readability of the language in fake news is worse than that of real-news text; d) the quantity of articles representing genuine news is greater than that of fake-news articles.

Pre-processing of data: Depending on the dataset and the project's goals, several methods are used to deal with challenges including typographical errors, unstructured data formats, and other limitations in the collected data; a combined sketch of the steps listed below follows item f).

a) Missing data: null and non-existent entries in the dataset (such as NaNs and NULLs) obstruct the feature engineering process. Since eliminating data entries with missing values may result in the loss of essential information, we utilised a missing-value imputation approach to estimate the missing values and then analysed the whole dataset as if these estimated values were the genuinely observed values.

b) Inconsistent data and outliers: errors that occurred during data collection lead to inconsistent data points. To detect and correct outliers, we employed a number of visualisation methods and statistical measures such as the IQR score, box plots, and Z-scores.

c) De-duplication: eliminating repetition, which may otherwise lead to biased judgements when many persons contribute the same information.

d) Irrelevant data: stop words (and other noise) that are grammatically valid but have no semantic importance for news classification are removed.

e) When stop words are removed but the remaining tokens are kept in place, the model's performance improves somewhat.

f) Stemming: This approach uses the Porter stemmer algorithm to extract a term's root word by analysing the text's properties. When it is unable to locate the root word, it generates the closest possible canonical form.
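A combined sketch of steps a) to f) is given below; it leans on pandas and NLTK (which needs the 'punkt' and 'stopwords' resources downloaded), and the exact imputation and outlier rules are illustrative assumptions rather than the authors' precise settings.

import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()

def preprocess(df):
    df = df.drop_duplicates(subset=['title', 'text'])            # c) de-duplication
    df['text'] = df['text'].fillna('')                           # a) impute missing text
    lengths = df['text'].str.split().str.len()
    df = df[lengths.between(3, 5000)].copy()                     # b) drop length outliers (illustrative rule)
    def clean(doc):
        tokens = word_tokenize(doc.lower())                      # e) tokenization
        tokens = [t for t in tokens if t.isalpha() and t not in stop_words]  # d) stop-word / noise removal
        return ' '.join(stemmer.stem(t) for t in tokens)         # f) Porter stemming
    df['text'] = df['text'].apply(clean)
    return df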

Figure 4. Accuracy

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Figure 4 shows the accuracy with which samples were classified as fake or genuine, comparing the existing TI-CNN [20], GDU [22], and MLP [23] models with the proposed GloVe-LSTM model. The TI-CNN model fails to separate fake and genuine samples well: because fake data is very closely related to genuine data, TI-CNN, GDU, and MLP fail to memorize the relationships between different text sequences. The proposed GloVe-LSTM model, by contrast, contains a memory unit and therefore gives improved accuracy as the number of epochs rises, whereas the other models fail to provide enhanced accuracy as the number of epochs is enlarged. CNN is not well suited to text data processing, producing between 50% and 80% accuracy.

Figure 5. Precision

The greater the number of false positives (FPs) introduced into the mix, the lower the precision becomes.

Precision = TP / (TP + FP)

Figure 5 depicts the precision of the categorization of bogus and real news samples, comparing the existing TI-CNN [20], GDU [22], and MLP [23] models with the proposed GloVe-LSTM model. The precision of the TI-CNN model lies between 60% and 80% because TI-CNN is not able to process text data effectively, as CNN has no memorization mechanism. The GloVe-LSTM model, by contrast, contains a memory unit and is well suited to text data, so its precision improves as the number of epochs increases. The other models are not well suited to text data processing and produce precision below 80%.

Figure 6. Recall

Recall is calculated as the number of true positives divided by the total number of relevant samples.

Recall = TP / (TP + FN)

Figure 6 presents the recall for the classification of fake and genuine news data; the X-axis of the graph shows the number of epochs and the Y-axis shows the recall. The graph compares the existing TI-CNN [20], GDU [22], and MLP [23] models with the proposed GloVe-LSTM model. The GDU model, suffering from the exploding/vanishing gradient problem, underperforms, while MLP and TI-CNN give recall between 50% and 90%. Due to their limitations in handling text data, these models do not perform as well as the proposed GloVe-LSTM model.

Figure 7. F-score

Figure 7 presents the F-score for the classification of fake and genuine news samples, comparing the existing TI-CNN [20], GDU [22], and MLP [23] models with the proposed GloVe-LSTM model. The F-score of the TI-CNN model lies between 60% and 80% because TI-CNN cannot process text data effectively, as CNN has no memorization mechanism. The GloVe-LSTM model contains a memory unit and is well suited to text data, so its F-score improves as the number of epochs increases. The other models are not well suited to text data processing and produce F-scores below 80%.
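The four metrics reported in Figures 4 to 7 can be computed from the trained model's predictions with scikit-learn as sketched below; the 0.5 decision threshold and the model/X_test/y_test variables carried over from the earlier sketches are assumptions.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_prob = model.predict(X_test).ravel()
y_pred = (y_prob >= 0.5).astype(int)           # threshold assumed at 0.5

print('accuracy :', accuracy_score(y_test, y_pred))
print('precision:', precision_score(y_test, y_pred))
print('recall   :', recall_score(y_test, y_pred))
print('f1-score :', f1_score(y_test, y_pred))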

5. Conclusion

The deep learning based GloVe-LSTM model was used in this study to perform false news categorization, with the GloVe model employed to improve the accuracy of the results. Existing models fail to process text data properly because most of them do not concentrate on the characteristics that accurately represent the true meaning of the data being processed. The proposed model makes use of GloVe to identify appropriate features from the text data, and the LSTM model is employed for text classification. The publicly available open WELFake dataset is used to test the performance of the proposed and existing models. When compared to the existing TI-CNN, GDU, and MLP models, the proposed model outperforms them all.

  References

[1] Gupta, A., Lamba, H., Kumaraguru, P., Joshi, A. (2013). Faking sandy, identifying fake images of hurricane sandy on twitter. Proceedings of the 22nd International Conference on World Wide Web, pp. 729-736. https://doi.org/10.1145/2487788.2488033

[2] Zubiaga, A., Kochkina, E., Liakata, M., Procter, R., Lukasik, M., Bontcheva, K., Cohn, T., Augenstein, I. (2017). Discourse-aware rumour stance classification in social media using sequential classifier. Information Processing & Management, 54(2): 273-290. https://doi.org/10.1016/j.ipm.2017.11.009

[3] Zubiaga, A., Aker, A., Bontcheva, K., LIakata, M., Procter, R. (2018). Detection and Resolution of rumours in social media: A survey. ACM Computing Surveys, 51(2): 1-36. https://doi.org/10.1145/3161603

[4] Kotteti, C.M.M., Dong, X., Li, N., Qian, L. (2018). Fake news detection enhancement with data imputation. IEEE 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Int Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing. https://doi.org/10.1109/DASC/PiCom/DataCom/CyberSciTec.2018.00042

[5] Aphiwongsophon, S., Chongstitvatana, P. (2018). Detecting fake news with machine learning methods. 2018 15th international conference on Electronics, computer, Telecommunication and Information Technology. https://doi.org/10.1109/ECTICon.2018.8620051

[6] Jain, A., Kasbe, A. (2018). Fake news detection. 2018 IEEE International Students' Conference on Electrical, Electronics and Computer Sciences. https://doi.org/10.1109/SCEECS.2018.8546944

[7] Ajao, O., Bhowmik, D., Zargari, S. (2018). Fake news identification on Twitter with combination of CNN and RNN models. International Conference on Social Media & Society, Copenhagen, pp. 266-230. https://doi.org/10.1145/3217804.3217917

[8] Han, W., Mehta, V. (2019). Fake News Detection in Social Networks Using Machine Learning and Deep Learning: Performance Evaluation. 2019 IEEE International Conference on Industrial Internet (ICII), Orlando, FL, USA. https://doi.org/10.1109/ICII.2019.00070

[9] Reis, J.C.S., Correia, A., Murai, F., Veloso, A., Benevenuto, F., Cambria, E. (2019). Supervised learning for fake news detection. IEEE Intelligent Systems, 34(2): 76-81. https://doi.org/10.1109/MIS.2019.2899143

[10] Ahmad, I., Yousaf, M., Yousaf, S., Ahmad, M.O. (2020). Fake news detection using machine learning ensemble methods. Complexity, 2020: 1-11. https://doi.org/10.1155/2020/8885861

[11] Yuan, C., Ma, Q., Zhou, W., Han, J., Hu, S. (2020). Early detection of fake news by utilizing the credibility of news, publishers, and users based on weakly supervised learning. arXiv:2012.04233v2 [cs.CL]. 

[12] Shifath, S., Khan, M.F., Islam, M.S. (2021). A transformer based approach for fighting COVID-19 fake news. arXiv:2101.12027v1 [cs.CL]. 

[13] Manzoor, S.I., Nikita, Singla, D.J. (2019). Fake news detection using machine learning approaches: A systematic review. Third International Conference on Trends in Electronics and Informatics (ICOEI). https://doi.org/10.1109/ICOEI.2019.8862770

[14] Ksieniewicz, P., Zyblewski, P., Choras, M., Kozik, R., Gielczyk, A., Wozniak, M. (2020). Fake news detection from data streams. 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, United Kingdom. https://doi.org/10.1109/IJCNN48605.2020.9207498

[15] Ghanem, B., Ponzetto, S.P., Russo, P., Rangel, F. (2021). FakeFlow: Fake news detection by modeling the flow of affective information. arXiv:2101.09810v1 [cs.CL]. 

[16] Vo, N., Lee, K. (2020). Where are the facts? Searching for factchecked information to alleviate the spread of fake news. arXiv:2010.03159v1 [cs.CL].

[17] Zhou, X., Zafarani, R. (2020). A survey of fake news: Fundamental theories, detection methods, and opportunities. ACM. Computing Surveys, 53(5): 1-40. https://doi.org/10.1145/3395046

[18] Kwon, S., Cha, M., Jung, K., Chen, W., Wang, Y. (2013). Prominent features of rumor propagation in online social media. 2013 IEEE 13th International Conference on Data Mining (ICDM). https://doi.org/10.1109/ICDM.2013.61

[19] Sharma, H., Kumar, S. (2016). A survey on decision tree algorithms of classification. International Journal of Science and Research (IJSR), 5(4): 2094-2097. 

[20] Yang, Y., Zheng, L., Zhang, J., Cui, Q., Li, Z., Yu, P.S. (2018). TI-CNN: Convolutional neural networks for fake news detection. arXiv preprint arXiv:1806.00749.

[21] Gopi, A.P., Naik, K.J. (2021). A model for analysis of IoT based aquarium water quality data using CNN model. In 2021 International Conference on Decision Aid Sciences and Application (DASA), pp. 976-980. https://doi.org/10.1109/DASA53625.2021.9682251

[22] Zhang, J., Dong, B., Philip, S.Y. (2020). FakeDetector: Effective fake news detection with deep diffusive neural network. In 2020 IEEE 36th International Conference on Data Engineering (ICDE), pp. 1826-1829. https://doi.org/10.1109/ICDE48307.2020.00180

[23] Ali, A.M., Ghaleb, F.A., Al-Rimy, B.A.S., Alsolami, F.J., Khan, A.I. (2022). Deep ensemble fake news detection model using sequential deep learning technique. Sensors, 22(18): 6970. https://doi.org/10.3390/s22186970

[24] Rajasab, N., Rafi, M. (2022). A deep learning approach for biometric security in video surveillance system using gait. International Journal of Safety and Security Engineering, 12(4): 491-499. https://doi.org/10.18280/ijsse.120410