- Open Access
- Authors: Sarvagya Srivastava, Vishwaas Khare, R. Vidhya
- Paper ID: IJERTV10IS050015
- Volume & Issue: Volume 10, Issue 05 (May 2021)
- Published (First Online): 08-05-2021
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Economic Forecasting using Generative Adversarial Networks
Sarvagya Srivastava
Computer Science & Engineering, SRM Institute of Science & Technology, Kattankulathur, India

Vishwaas Khare
Computer Science & Engineering, SRM Institute of Science & Technology, Kattankulathur, India

R. Vidhya
Computer Science & Engineering, SRM Institute of Science & Technology, Kattankulathur, India
Abstract: Modern-day finance relies immensely on economic forecasting. Making wise decisions and maximizing growth depends on being able to predict economic variables. Forecasting these variables is an arduous job because of the complex ways in which different factors impact a given variable. Various time-series models have a proven record of success in economic forecasting: they analyze historical patterns in the supplied data to predict future values of a variable. In this paper, we propose the implementation of Generative Adversarial Networks to forecast variables of the financial market. The framework uses Gated Recurrent Units as the generator and a Convolutional Neural Network as the discriminator. We have used the Yahoo Finance API to import stock data of six equities from the Nifty 50 index: Hindalco, IOC, NTPC, ONGC, Powergrid, and Wipro.
Keywords: GANs, GRU, CNN, time-series prediction.
-
INTRODUCTION
Economic forecasts play a vital role in today's financial system. They predict financial variables so that individuals, private institutions, and governments can make decisions regarding liability, employment, expenditure, trading, investing, and the policies that make economic activity possible. The ability to predict these economic indices serves as a guide for decision-making and accelerates growth.
Stock markets emerged when nations in the New World began trading with each other. While many pioneering merchants aspired to start vast commercial ventures, doing so required amounts of capital that no single merchant could raise alone. Consequently, groups of investors pooled their funds and became business partners and co-owners with individual shares in their ventures, forming joint-stock companies. This later came to be known as stock market investment. A natural question follows: how would an individual know where, and how much, to invest in a stock? This is where the idea of stock market prediction comes from.
Equity shares and other financial indicators are related to one another, which gives us the ability to predict future values.
Forecasts help to develop risk-trading strategies and assess portfolio pressures. Predicting stock market returns is a challenging and growing research task, given the availability of new data sources, markets, financial instruments, and algorithms. In addition, the stock market is affected by several factors such as political events, firm policies, general economic conditions, investor expectations, the choices of institutional investors, the movements of other stock markets, and investor psychology.
Many researchers from different parts of the world have studied historical patterns in financial time series and suggested various ways to predict stock prices. To achieve promising performance, many of these methods require careful selection of input features, configuration of a professional financial forecasting model, and various mathematical methods for arbitrage analysis, making it difficult for people outside the financial field to use them to predict prices.
To date, various machine learning implementations such as SVM, ARIMA, LSTM, and GRU have been used to build models capable of predicting time-series data with a degree of certainty. To accurately predict upcoming equity prices, a model needs to be trained thoroughly on stock data and various economic indicators.
SVM has been extensively used as a classification tool with a great deal of success from object recognition [11, 12] to the classification of cancer morphologies and a variety of other areas.
There are several forecasting models used by the stock analyst. The most widely used conventional methods to forecast stock markets include autoregressive (AR), Auto-Regressive Moving Average (ARMA), Auto-Regressive Integrated Moving Average (ARIMA), Generalized Auto-Regressive Conditional Heteroscedasticity (GARCH), and Stochastic Volatility (SV) [13].
The predictive power of such models is further enhanced by introducing the deep learning-based Long Short-Term Memory (LSTM) network into the predictive framework [14]. LSTM is a type of recurrent neural network used to process data sequences. It can learn the dependencies of one item on another in a sequence, capturing the context and patterns required to make predictions. The main structure of an LSTM is based on a cell state and various gates. The cell state carries the relevant details throughout the processing of the sequence, while the gates determine which information is allowed into a cell by keeping or forgetting it during training.
Furthermore, Gated Recurrent Unit (GRU) models have shown better performance compared to LSTMs. They function in the same way as LSTMs by using gating mechanisms, but contain only two gates, reset and update, and use fewer parameters.
Building on the above deep learning implementations, our model uses the structure of a Generative Adversarial Network that employs a GRU as the generator and a CNN as the discriminator. The generator learns the patterns in the time-series data and generates new, similar instances. The discriminator is then fed both the original and the generated data so that it can discriminate between them and provide feedback for better predictions. This approach helps create models that generate time-series data more closely resembling future values.
-
STATE OF THE ART (LITERATURE SURVEY)
In this section, we explain the current breakthroughs in the field of economic forecasting by exploring various research papers and understanding the contributions made by these researchers to the field.
Xingyu Zhou et al. [1] (2018) have proposed an easy-to-use prediction architecture which they call GAN-FD: Generative Adversarial Networks for minimizing Forecast error loss and Direction prediction loss. Generative Adversarial Networks were introduced by Ian J. Goodfellow for estimating generative models via an adversarial process. Zhou's team used GANs for stock price prediction and trained their model to combine forecast error loss and direction prediction loss, producing satisfactory results. They employed LSTM and CNN for this adversarial training. Their model avoids complicated data preprocessing by utilizing only 13 simple technical indicators. Their experiments showed that a smaller model update cycle can improve prediction performance. Furthermore, a study on integrating predictive models under multiscale conditions can be attempted.
Yakup Kara et al. [2] (2011) have worked with two basic prediction models, ANN and SVM. The basic purpose of their study was to analyze the prediction ability of the two models and compare them. They selected ten technical indicators to make up the initial attributes. Their experiments demonstrated that the ANN model (75.74%) was significantly better than the SVM model (71.52%). They inferred that their models could be improved by adjusting model parameters. Experimental results also showed that prediction performance changes for the same model over different periods of time, due to other factors such as financial crises.
Adil Moghar and Mhamed Hamiche [3] (2020) have built an LSTM model for forecasting market values. Their experiments illustrated that a simple RNN model cannot store long-term memory, so an LSTM model obtains better predictive results on long time-series data. They observed that training with less data but more epochs can significantly improve the precision of the predicted values. They hope to further their work by finding the specific data lengths and training epochs that best suit predictions for specific assets.
Kang Zhang et al. [4] (2019) note that deep learning has a strong capacity to process huge amounts of data. Hence, they proposed a novel Generative Adversarial Network architecture with a Multi-Layer Perceptron as the discriminator and Long Short-Term Memory (LSTM) as the generator for forecasting the closing price of stocks. Their model was unique in that it generates the same distribution as the real stock data through adversarial learning instead of traditional regression methods. They used Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Average Return (AR) to evaluate their model. They further planned to optimize it by extracting more valuable and influential financial factors.
Parul Agarwal et al. [5] (2019) have tried to understand time-series data in depth. They concluded that, irrespective of the forecasting approach, there are five elementary steps to be followed: identifying the problem, gathering information, preliminary investigation, model selection, and evaluation. Forecasting techniques are of two types, qualitative and quantitative, based on expert knowledge and on previously available data respectively. They also found that data patterns are best understood when the trend, cyclic, seasonal, and irregular components of time-series data are examined, and noted that AR and MA are two widely used linear models. Parul's team employed the ARIMA (0,1,0) model, also called the Box-Jenkins model, for forecasting time-series data. When compared to the actual time series, their model predicts with a deviation of 5% mean percentage error. The ADF test and the Ljung-Box test were used for validation.

1. Stock Market Prediction on High-Frequency Data Using Generative Adversarial Nets (base paper)
   Methodology: Generative Adversarial Networks (GAN-FD, minimizing forecast error loss and direction prediction loss); Long Short-Term Memory (LSTM); Convolutional Neural Network (CNN)
   Authors / Year: Xingyu Zhou, Zhisong Pan, Guyu Hu, Siqi Tang, Cheng Zhao (2018)
   Publication: Hindawi, Mathematical Problems in Engineering, Volume 2018, Article ID 4907423
   Pros: The model avoids complicated data preprocessing; a smaller model update cycle improved prediction performance.
   Cons: LSTMs take longer to train and require more memory.

2. Predicting direction of stock price index movement using artificial neural networks and support vector machines
   Methodology: Artificial Neural Networks (ANN); Support Vector Machines (SVM)
   Authors / Year: Yakup Kara, Melek Acar Boyacioglu, Ömer Kaan Baykan (2011)
   Publication: Elsevier, Expert Systems with Applications, Volume 38, Issue 5, May 2011, Pages 5311-5319
   Pros: Demonstrated that their ANN model (75.74%) was significantly better than the SVM model (71.52%); prediction performance was improved by adjusting model parameters.
   Cons: Under circumstances of economic crisis, a decrease in the prediction performance of technical indicators was found.

3. Stock Market Prediction Using LSTM Recurrent Neural Network
   Methodology: Long Short-Term Memory (LSTM); Recurrent Neural Network (RNN)
   Authors / Year: Adil Moghar, Mhamed Hamiche (2020)
   Publication: Elsevier, Procedia Computer Science 170 (2020), Pages 1168-1173
   Pros: Their LSTM model gave better predictive results on long time-series data.
   Cons: The model required more memory to train, overfitted easily, and was very sensitive to different weight initializations.

4. Stock Market Prediction Based on Generative Adversarial Network
   Methodology: Generative Adversarial Networks (GANs); Long Short-Term Memory (LSTM); Multi-Layer Perceptron (MLP)
   Authors / Year: Kang Zhang, Guoqiang Zhong, Junyu Dong, Shengke Wang, Yong Wang (2019)
   Publication: Elsevier, Procedia Computer Science 147 (2019), Pages 400-406
   Pros: Generates the same distribution as the real stock data through adversarial learning instead of traditional regression methods.
   Cons: There is a need to extract more valuable and influential financial factors from stock markets and to optimize the model to learn the data distributions more accurately.

5. A Prediction Approach for Stock Market Volatility Based on Time Series Data
   Methodology: ARIMA model; Box-Jenkins method
   Authors / Year: Sheikh Mohammad Idrees, M. Afshar Alam, Parul Agarwal (2019)
   Publication: IEEE Access, Volume 7, 2019, DOI 10.1109/ACCESS.2019.2895252
   Pros: A comprehensive study of the elementary steps of any forecasting approach; the model predicts with a deviation of only 5% mean percentage error.
   Cons: Asymmetry, sudden outbreaks at random time intervals, and periods of high and low volatility are attributes of economic time-series data that ARIMA models have definite restrictions in reproducing.

6. A Hybrid Model to Forecast Stock Trend Using Support Vector Machine and Neural Networks
   Methodology: Support Vector Machine (SVM); SVM-RFE; Artificial Neural Networks (ANN)
   Authors / Year: J. Sharmila Vaiz, M. Ramaswami (2017)
   Publication: International Journal of Engineering Research and Development (IJERD), Volume 13, Issue 9 (September 2017), Pages 52-59
   Pros: Their feature-selection process based on SVM-RFE reduced classification computational time and improved the accuracy rate; the model used SVM to remove irrelevant, redundant, and noisy features of the input data.
   Cons: Prediction accuracy of classification using a combination of SVM and ANN is between 83% and 90%.

7. Stock Market Price Prediction Using LSTM RNN
   Methodology: Long Short-Term Memory (LSTM) cells
   Authors / Year: Kriti Pawar, Raj Srujan Jalem, Vivek Tiwari (2018)
   Publication: Springer Nature Singapore, Emerging Trends in Expert Applications and Security, Advances in Intelligent Systems and Computing 841 (2018), Pages 493-503
   Pros: Their DLSTMP model outperformed every other LSTM model in comparison; they emphasized data collection and preprocessing to remove noise and missing values.
   Cons: Proper portfolio management techniques were not performed to maximize gains and provide a simple user experience.

8. Artificial neural networks approach to the forecast of stock market price movements
   Methodology: Artificial Neural Network; Multi-Layer Neural Network; Convolutional Neural Network; Long Short-Term Memory; Recurrent Neural Network; Deep Learning
   Authors / Year: Luca Di Persio, Oleksandr Honchar (2016)
   Publication: IARAS, International Journal of Economics and Management Systems (2016), 1, 158-162
   Pros: Their novel Wavelet-CNN algorithm uses feature preprocessing that significantly increased prediction accuracy.
   Cons: MLP and RNN are not as effective compared to CNN, which can model financial time series better than the others.

9. Network approach for Stock market data mining and portfolio analysis
   Methodology: Network analysis of stock data; network-based data mining; market graphs; financial networks; portfolio analysis
   Authors / Year: Susan George, Manoj Changat (2017)
   Publication: IEEE, 2017 International Conference on Networks & Advances in Computational Technologies (NetACT)
   Pros: Gave a fresh perspective to the study of the financial market; identified key structural properties and key sectors of the market that have great influence over it.
   Cons: Other portfolio analysis methods such as community detection must be done as an extension of this work to improve the analysis; a dynamic study of the structural properties of the market must also be carried out; the correlation dependency of lobbying power can be studied further.

10. Stock Market Index Prediction Using Deep Neural Network Ensemble
    Methodology: Neural networks; machine learning; ensemble learning; stock index; time-series forecasting
    Authors / Year: Bing Yang, Zi-Jia Gong, Wenqi Yang (2017)
    Publication: IEEE, 2017 36th Chinese Control Conference (CCC)
    Pros: Their deep neural network ensemble model reduces generalization error.
    Cons: Since forecasting based on trend analysis depends upon historical data, both accuracy and reliability of such forecasts suffer when the business environment changes.

Table 1: Summary of Literature Survey
J. Sharmila Vaiz and M. Ramaswami [6] (2017) have worked on a hybrid prediction model combining ANN and SVM. Their model uses SVM to remove irrelevant, redundant, and noisy features of the input data. A distinctive element of their model is a feature-selection process based on SVM-RFE, which removes terms in the dataset that are statistically uncorrelated, thereby also reducing classification computational time and improving the accuracy rate. They concluded from their model that BBands, CCI, DC, and WPR are strong technical indicators. They used the F-measure and AUC value to evaluate the hybrid model, which showed that SVM-RFE based feature selection improved the prediction process and results.
Kriti Pawar, Raj Srujan Jalem, and Vivek Tiwari [7] (2018) have tested several LSTM architectures to arrive at the model with the lowest loss value. They compared LSTM-based RNN models with simple RNN and DNN models and found that the DLSTMP model outperformed every other model in comparison. DLSTMP (Deep Long Short-Term Memory Projection) is a variant of LSTM that further optimizes speed by adding a projection layer. To validate their findings, they used Mean Squared Error as the loss function and Adam optimization to optimize results. They emphasized that data collection and preprocessing to remove noise and missing values are a must while implementing a prediction model. Their results show that RNN-LSTM models are much more accurate than traditional ML algorithms.
Luca Di Persio and Oleksandr Honchar [8] (2016) have done a comprehensive study comparing the accuracy of various neural network architectures, namely the Multi-Layer Perceptron (MLP), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) recurrent neural networks. They also implemented a novel Wavelet-CNN algorithm to show that feature preprocessing is crucial to the performance of a model.
Susan George and Manoj Changat [9] (2017) have used the theory and tools of complex networks to give a fresh perspective to the study of the financial market. They constructed stock networks based on stock prices and analyzed the characteristics of the community structure within them. For this, they aggregated daily time-series data of 3781 stocks together with their balance-sheet data. Their study emphasizes the identification of important players, investors, and financial policymakers, and the interdependency of stocks on each other. They thereby identified key structural properties of the market and key sectors that have great influence over it.
Bing Yang, Zi-Jia Gong, and Wenqi Yang [10] (2017) chose to implement a deep neural network ensemble for building a prediction model. A neural network ensemble is a learning paradigm in which a specific set of neural networks is trained and their predictions are combined to produce a final result. They use the bagging approach to generate training sets via bootstrapping, which randomly samples a new training set from the original dataset. They found that their model gave unsatisfactory results on close indices, but also found that the ensemble reduces generalization error.
The following acronyms are used in this paper:
- GANs (Generative Adversarial Networks): a deep learning framework that performs unsupervised learning by pitting a generator and a discriminator against each other in a zero-sum game.
- GRU (Gated Recurrent Unit): a recurrent neural network architecture that employs a gating mechanism to learn data sequences and patterns.
- CNN (Convolutional Neural Network): a neural network specialized for processing data with a grid-like topology, such as images or one-dimensional sequences.
- LSTM (Long Short-Term Memory): a type of recurrent neural network that can learn order dependence in sequence prediction problems.
- ANN (Artificial Neural Network): a network of interconnected processing units (neurons) that learns a mapping from inputs to outputs from examples.
- SVM (Support Vector Machine): a supervised learning model that separates classes by finding the hyperplane with the maximum margin between them; the data points lying on the margin are called support vectors.
- RNN (Recurrent Neural Network): a category of artificial neural network that analyzes sequential or time-series data.
- ARIMA (Auto-Regressive Integrated Moving Average): a generalization of the auto-regressive moving average (ARMA) model. These models are fitted to time-series data either to better understand the data or to forecast future points in the series.
- SVM-RFE (Support Vector Machine Recursive Feature Elimination): a feature-selection technique that has gained popularity for its simplicity and effectiveness in choosing feature columns that are meaningful in predicting the target variable.
-
PROPOSED WORK
In this project, we shed light on the details of the proposed generative adversarial network framework for forecasting a time series. The closing price of an equity is the time-series data we have worked on.
PRINCIPLE:
A Generative Adversarial Network is an unsupervised training framework that employs two models in a zero-sum game: the generator and the discriminator. The generator attempts to create fresh data that is indistinguishable from the actual data presented to it. The discriminator, which is fed both the actual data and the fake data produced by the generator, tries to differentiate between the two. When trained together, the two models reach a point where the discriminator is unable to distinguish between real and fake data.
GENERATOR:
Since we are working to generate new instances of equity prices in a time-series sequence, we have employed Gated Recurrent Units. These are another type of advanced recurrent neural network that is very useful for generating new sequential data. GRUs primarily have two gates, a reset gate and an update gate, to control the flow of data. The reset gate eliminates data that can be forgotten, whereas the update gate decides what data is useful and should be passed on.
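For reference, the standard GRU update equations, with input x_t, hidden state h_t, update gate z_t, and reset gate r_t, are as follows (conventions for which gate interpolates the state vary slightly between formulations):

```latex
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) && \text{(update gate)} \\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) && \text{(reset gate)} \\
\tilde{h}_t &= \tanh\bigl(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\bigr) && \text{(candidate state)} \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(new hidden state)}
\end{aligned}
```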
DISCRIMINATOR:
The task of the discriminator is to classify data as fake or real. A one-dimensional Convolutional Neural Network is one of the best classifiers for time-series data. It consists of an input layer, hidden layers, and an output layer. The hidden layers perform convolution: two functions produce a third that represents how the shape of one is modified by the other. The discriminator penalizes itself for misclassifications and updates its weights through backpropagation using the discriminator loss.
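Concretely, each hidden convolutional layer slides a learned kernel w of length K over the input sequence x and computes feature maps of the form (deep learning frameworks implement this as cross-correlation):

```latex
(x * w)[n] = \sum_{k=0}^{K-1} x[n + k]\, w[k] + b
```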
TRAINING GANS:
Finally, the two models are trained together to reach a point of convergence. At this point, the discriminator's performance weakens because the generator can fool it, and over time the feedback from the discriminator becomes less meaningful.
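Formally, this zero-sum game corresponds to the standard GAN minimax objective, where G is the GRU generator, D is the CNN discriminator, and p_data is the distribution of real price sequences:

```latex
\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
 + \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```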
-
IMPLEMENTATION
For implementing our model, we have divided our code into four broad sections: loading data, data preprocessing, GANs model and training, and testing predictions.
-
LOADING DATA
The data is imported directly from the Yahoo Finance API. The API provides Date, Open, Low, High, Close, and Volume data for any required time frame.
After receiving the OHLC data for a stock, we define a function to calculate technical indicators for each day. In our implementation, we have calculated 7-day and 21-day moving averages, Moving Average Convergence-Divergence (MACD), Bollinger Bands, log momentum, and exponential moving averages.
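A minimal sketch of this step is shown below. It assumes the `yfinance` Python package as the interface to the Yahoo Finance API; the helper name `load_with_indicators`, the date range, and the exact indicator parameters (for example the 20-day Bollinger-Band window and the EMA spans) are illustrative choices rather than values prescribed by the paper.

```python
import yfinance as yf
import numpy as np

def load_with_indicators(ticker, start="2010-01-01", end="2021-04-30"):
    """Download OHLCV data and append the technical indicators used in this work."""
    df = yf.download(ticker, start=start, end=end)

    close = df["Close"]
    df["MA7"] = close.rolling(window=7).mean()               # 7-day moving average
    df["MA21"] = close.rolling(window=21).mean()              # 21-day moving average
    ema12 = close.ewm(span=12, adjust=False).mean()           # exponential moving averages
    ema26 = close.ewm(span=26, adjust=False).mean()
    df["EMA12"], df["EMA26"] = ema12, ema26
    df["MACD"] = ema12 - ema26                                 # moving average convergence-divergence
    std20 = close.rolling(window=20).std()
    df["BB_upper"] = df["MA21"] + 2 * std20                    # Bollinger bands around the 21-day MA
    df["BB_lower"] = df["MA21"] - 2 * std20
    df["LogMomentum"] = np.log(close) - np.log(close.shift(1))  # daily log return as log momentum
    return df.dropna()

# The six Nifty 50 constituents studied in this paper (NSE tickers on Yahoo Finance).
tickers = ["HINDALCO.NS", "IOC.NS", "NTPC.NS", "ONGC.NS", "POWERGRID.NS", "WIPRO.NS"]
data = {t: load_with_indicators(t) for t in tickers}
```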
Fig 1: Architecture Diagram for basic GANs
-
DATA PREPROCESSING
After loading the data, we perform data preprocessing to verify that the data doesn't contain any discrepancies.
We replace all zeroes with NA, check for NA values and fill them with meaningful data, convert the dates to matplotlib date-time format, extract the features and target, perform an autocorrelation check, normalize the data, and then split the data for training and testing.
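A hedged sketch of this preprocessing pipeline is given below. The 30-day window length is an assumption for illustration, while the 70/30 train/test split follows the testing section; the autocorrelation check and the matplotlib date conversion are omitted for brevity.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def preprocess(df, target_col="Close", n_steps=30, train_frac=0.7):
    """Clean, scale, and window the indicator DataFrame for the GAN."""
    df = df.replace(0, np.nan)                 # zeroes treated as missing values
    df = df.ffill().bfill()                    # fill gaps from neighbouring observations

    X_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
    X = X_scaler.fit_transform(df.values)      # normalize all features to [0, 1]
    y = y_scaler.fit_transform(df[[target_col]].values)

    # Build sliding windows: n_steps past days of features -> next day's close.
    X_seq, y_seq = [], []
    for i in range(n_steps, len(X)):
        X_seq.append(X[i - n_steps:i])
        y_seq.append(y[i])
    X_seq, y_seq = np.array(X_seq), np.array(y_seq)

    split = int(train_frac * len(X_seq))       # 70% training, 30% testing
    return (X_seq[:split], y_seq[:split]), (X_seq[split:], y_seq[split:]), y_scaler
```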
-
GANS MODEL and TRAINING
For making the GANs model, we first define the generator. The generator is a sequential model that contains three GRU layers and three dense layers.
Next, we define the discriminator model with three Conv1D layers and three dense layers.
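The following sketch shows how such a generator and discriminator could be defined with TensorFlow/Keras. The layer counts (three GRU plus three Dense, and three Conv1D plus three Dense) follow the description above, while the unit sizes, kernel sizes, and the choice to feed the discriminator the recent price history concatenated with the real or generated next-day close are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_generator(n_steps, n_features):
    """GRU generator: three GRU layers followed by three Dense layers."""
    return tf.keras.Sequential([
        layers.GRU(256, return_sequences=True, input_shape=(n_steps, n_features)),
        layers.GRU(128, return_sequences=True),
        layers.GRU(64),
        layers.Dense(64, activation="relu"),
        layers.Dense(16, activation="relu"),
        layers.Dense(1),                        # predicted next-day closing price
    ])

def make_discriminator(n_steps):
    """1-D CNN discriminator: three Conv1D layers and three Dense layers.
    Input: a close-price sequence of length n_steps + 1 (history plus the real or generated close)."""
    return tf.keras.Sequential([
        layers.Conv1D(32, kernel_size=3, strides=2, padding="same",
                      input_shape=(n_steps + 1, 1)),
        layers.LeakyReLU(0.2),
        layers.Conv1D(64, kernel_size=3, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Conv1D(128, kernel_size=3, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),                        # logit: real (historical) vs. generated
    ])
```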
Thereafter, we define a GANs class that sets the Adam optimizer for both the generator and the discriminator and uses binary cross-entropy to calculate the losses.
The discriminator loss and the generator loss are then defined, after which the training process is set up. We have made a provision to create a checkpoint of the trained model every 15 epochs to better understand the training results and optimize the model's performance. We then train the model and produce plots of the losses and training results, along with the Root Mean Squared Error.
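A minimal sketch of one adversarial training step is shown below, assuming the TensorFlow setup from the previous snippet. The helper name `train_step`, the tensor shapes, and the way the real/generated close is appended to the price history are assumptions; the 0.00015 learning rate is the value reported in the Results section.

```python
import tensorflow as tf

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
gen_optimizer = tf.keras.optimizers.Adam(learning_rate=1.5e-4)    # optimal rate from Results
disc_optimizer = tf.keras.optimizers.Adam(learning_rate=1.5e-4)

@tf.function
def train_step(generator, discriminator, x_past, y_real, price_history):
    """One adversarial update.
    x_past: (batch, n_steps, n_features) feature windows,
    y_real: (batch, 1) true next-day closes,
    price_history: (batch, n_steps, 1) past closes seen by the discriminator."""
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        y_fake = generator(x_past, training=True)                  # generated next-day close

        # The discriminator sees the price history followed by the real or generated close.
        real_seq = tf.concat([price_history, y_real[:, :, tf.newaxis]], axis=1)
        fake_seq = tf.concat([price_history, y_fake[:, :, tf.newaxis]], axis=1)
        real_logits = discriminator(real_seq, training=True)
        fake_logits = discriminator(fake_seq, training=True)

        # Binary cross-entropy losses for the zero-sum game.
        disc_loss = (cross_entropy(tf.ones_like(real_logits), real_logits)
                     + cross_entropy(tf.zeros_like(fake_logits), fake_logits))
        gen_loss = cross_entropy(tf.ones_like(fake_logits), fake_logits)

    gen_grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
    disc_grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    gen_optimizer.apply_gradients(zip(gen_grads, generator.trainable_variables))
    disc_optimizer.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
    return gen_loss, disc_loss

# In the epoch loop, a checkpoint is saved every 15 epochs, for example:
#   ckpt = tf.train.Checkpoint(generator=generator, discriminator=discriminator)
#   if (epoch + 1) % 15 == 0:
#       ckpt.save("checkpoints/gan")
```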
-
TESTING PREDICTIONS
Finally, we check the predictions on the earlier 30% test split. We have one model checkpoint for every 15 epochs of training. By comparing the results of each checkpoint, the optimum number of epochs required for training can be found. Similarly, the model can be evaluated at various learning rates to optimize it.
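For instance, the testing RMSE for each restored checkpoint can be computed on the held-out split as sketched below, assuming the scaler returned by the preprocessing step; `evaluate_rmse` is a hypothetical helper name.

```python
import numpy as np

def evaluate_rmse(generator, X_test, y_test, y_scaler):
    """Testing RMSE on the held-out 30% split, reported in the original price scale."""
    y_pred = y_scaler.inverse_transform(generator.predict(X_test))
    y_true = y_scaler.inverse_transform(y_test)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Comparing this value across the checkpoints saved every 15 epochs identifies
# the optimal number of training epochs (180 for IOC in our experiments).
```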
-
-
RESULTS & DISCUSSION
With substantial training, we obtained a model proficient enough to forecast equity prices based on the kind of data it is fed. A GAN model with a GRU as its generator and a one-dimensional CNN as its discriminator was successfully implemented. We have tried to optimize our model by altering the learning rate, batch size, and number of epochs.
After several rounds of training at different learning rates, it was observed that the optimal learning rate is 0.00015. Comparing the results obtained from models trained for different numbers of epochs, it was observed that the optimal number of epochs is 180. Below are the results obtained for the equity of IOC:
No. of epochs | 30   | 60   | 120  | 180  | 210  | 240
RMSE          | 7.92 | 6.20 | 5.75 | 4.89 | 5.23 | 5.27

Table 2: Model performance at different numbers of epochs
Following are the testing results obtained for the six equities we have worked on:
Fig 2: Testing results on WIPRO
Fig 3: Testing results on POWERGRID
Fig 4: Testing results on ONGC
Fig 5: Testing results on NTPC
Fig 6: Testing results on IOC
Fig 7: Testing results on HINDALCO
Sr. No. | Stock     | GANS (Testing RMSE)
1       | HINDALCO  | 7.32
2       | IOC       | 4.89
3       | NTPC      | 4.04
4       | ONGC      | 4.84
5       | POWERGRID | 4.02
6       | WIPRO     | 5.41

Table 3: Model Performance on different equities
-
CONCLUSION
-
We have taken a look at existing models for economic forecasting. Many successful, high-accuracy models such as ARIMA, LSTM, and GRU have been built, but there was still room for improvement. We have worked on a solution that uses a generative model and performs unsupervised learning by taking feedback from a CNN-based discriminator. Finally, we optimized our model using the Adam optimizer and by changing model parameters to better suit the type of input data. There is still room for improvement, since our model doesn't employ the Wasserstein metric, which could prove better than the minimax metric.
REFERENCES
[1] Xingyu Zhou, Zhisong Pan, Guyu Hu, Siqi Tang, and Cheng Zhao, "Stock Market Prediction on High-Frequency Data Using Generative Adversarial Nets," Mathematical Problems in Engineering, vol. 2018, Article ID 4907423, Hindawi, 2018.
[2] Yakup Kara, Melek Acar Boyacioglu, and Ömer Kaan Baykan, "Predicting direction of stock price index movement using artificial neural networks and support vector machines," Expert Systems with Applications, vol. 38, no. 5, pp. 5311-5319, Elsevier, May 2011.
[3] Adil Moghar and Mhamed Hamiche, "Stock Market Prediction Using LSTM Recurrent Neural Network," Procedia Computer Science, vol. 170, pp. 1168-1173, Elsevier, 2020.
[4] Kang Zhang, Guoqiang Zhong, Junyu Dong, Shengke Wang, and Yong Wang, "Stock Market Prediction Based on Generative Adversarial Network," Procedia Computer Science, vol. 147, pp. 400-406, Elsevier, 2019.
[5] Sheikh Mohammad Idrees, M. Afshar Alam, and Parul Agarwal, "A Prediction Approach for Stock Market Volatility Based on Time Series Data," IEEE Access, vol. 7, 2019, DOI: 10.1109/ACCESS.2019.2895252.
[6] J. Sharmila Vaiz and M. Ramaswami, "A Hybrid Model to Forecast Stock Trend Using Support Vector Machine and Neural Networks," International Journal of Engineering Research and Development (IJERD), vol. 13, no. 9, pp. 52-59, September 2017.
[7] Kriti Pawar, Raj Srujan Jalem, and Vivek Tiwari, "Stock Market Price Prediction Using LSTM RNN," in Emerging Trends in Expert Applications and Security, Advances in Intelligent Systems and Computing, vol. 841, pp. 493-503, Springer Nature Singapore, 2018.
[8] Luca Di Persio and Oleksandr Honchar, "Artificial Neural Networks Approach to the Forecast of Stock Market Price Movements," International Journal of Economics and Management Systems, vol. 1, pp. 158-162, IARAS, 2016.
[9] Susan George and Manoj Changat, "Network approach for Stock market data mining and portfolio analysis," in 2017 International Conference on Networks & Advances in Computational Technologies (NetACT), IEEE, 2017.
[10] Bing Yang, Zi-Jia Gong, and Wenqi Yang, "Stock Market Index Prediction Using Deep Neural Network Ensemble," in 2017 36th Chinese Control Conference (CCC), IEEE, 2017.
[11] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene selection for cancer classification using support vector machines," Machine Learning, vol. 46, no. 1-3, pp. 389-422, 2002.
[12] Y. Mao, D. Pi, Y. Liu, and Y. Sun, "Accelerated recursive feature elimination based on support vector machine for key variable identification," Chinese Journal of Chemical Engineering, vol. 14, no. 1, pp. 65-72, 2006.
[13] George S. Atsalakis and Kimon P. Valavanis, "Stock Market Forecasting: Part 1, Conventional Methods," ResearchGate, 2013.
[14] Sidra Mehtab, Jaydip Sen, and Abhishek Dutta, "Stock Price Prediction Using Machine Learning and LSTM-Based Deep Learning Models," 2020, DOI: 10.13140/RG.2.2.23846.34880.
[15] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza et al., "Generative adversarial nets," in Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS 2014).