Credit rating analysis with support vector machines and neural networks: a market comparative study
Decision Support Systems 37 (2004)

Credit rating analysis with support vector machines and neural networks: a market comparative study

Zan Huang a,*, Hsinchun Chen a, Chia-Jung Hsu a, Wun-Hwa Chen b, Soushan Wu c

a Department of Management Information Systems, Eller College of Business and Public Administration, The University of Arizona, Rm. 430, McClelland Hall, 1130 E. Helen Street, Tucson, AZ 85721, USA
b Department of Business Administration, National Taiwan University, Taiwan
c College of Management, Chang-Gung University, Taiwan
* Corresponding author.

Available online 4 July 2003

Abstract

Corporate credit rating analysis has attracted much research interest in the literature. Recent studies have shown that Artificial Intelligence (AI) methods achieve better performance than traditional statistical methods. This article introduces a relatively new machine learning technique, support vector machines (SVM), to the problem in an attempt to provide a model with better explanatory power. We used backpropagation neural networks (BNN) as a benchmark and obtained prediction accuracy of around 80% for both the BNN and SVM methods for the United States and Taiwan markets. However, only a slight improvement of SVM over BNN was observed. Another direction of the research is to improve the interpretability of the AI-based models. We applied recent research results in neural network model interpretation and obtained the relative importance of the input financial variables from the neural network models. Based on these results, we conducted a market comparative analysis of the differences in determining factors between the United States and Taiwan markets.
© 2003 Elsevier B.V. All rights reserved.

Keywords: Data mining; Credit rating analysis; Bond rating prediction; Backpropagation neural networks; Support vector machines; Input variable contribution analysis; Cross-market analysis

1. Introduction

Credit ratings have been extensively used by bond investors, debt issuers, and governmental officials as a surrogate measure of the riskiness of companies and bonds. They are important determinants of risk premiums and even the marketability of bonds. Company credit ratings are typically very costly to obtain, since they require agencies such as Standard and Poor's and Moody's to invest large amounts of time and human resources in performing deep analysis of the company's risk status based on various aspects ranging from strategic competitiveness to operational-level details. As a result, not all companies can afford yearly updated credit ratings from these agencies, which makes credit rating prediction quite valuable to the investment community. Although rating agencies and many institutional writers emphasize the importance of analysts' subjective judgment in determining credit ratings, many
researchers have obtained promising results on credit rating prediction by applying different statistical and Artificial Intelligence (AI) methods. The grand assumption is that financial variables extracted from public financial statements, such as financial ratios, contain a large amount of information about a company's credit risk. These financial variables, combined with historical ratings given by the rating agencies, have embedded in them the valuable expertise of the agencies in evaluating companies' credit risk levels. The overall objective of credit rating prediction is to build models that can extract knowledge of credit risk evaluation from past observations and apply it to evaluate the credit risk of a much broader scope of companies. Besides the prediction itself, the modeling of the bond-rating process also provides valuable information to users. By determining what information was actually used by expert financial analysts, these studies can also help users capture fundamental characteristics of different financial markets.

In this study, we experimented with using a relatively new learning method for the field of credit rating prediction, support vector machines, together with a frequently used high-performance method, backpropagation neural networks, to predict credit ratings. We were also interested in interpreting the models and helping users to better understand bond raters' behavior in the bond-rating process. We conducted input financial variable contribution analysis in an attempt to interpret the neural network models and used the interpretation results to compare the characteristics of the bond-rating processes in the United States and Taiwan markets.

The remainder of the paper is structured as follows. A background section about credit rating follows the introduction. Then, a literature review on credit rating prediction is provided, followed by descriptions of the analytical methods. We also include descriptions of the data sets and the experiment results and analysis, followed by the discussion and future directions.

2. Credit risk analysis

There are two basic types of credit ratings: one is for specific debt issues or other financial obligations and the other is for debt issuers. The former is the one most frequently studied and can be referred to as a bond rating or issue credit rating. It is essentially an attempt to inform the public of the likelihood of an investor receiving the promised principal and interest payments associated with a bond issue. The latter is a current opinion of an issuer's overall capacity to pay its financial obligations, which conveys the issuer's fundamental creditworthiness. It focuses on the issuer's ability and willingness to meet its financial commitments on a timely basis. This rating can be referred to as a counterparty credit rating, default rating or issuer credit rating.

Both types of ratings are very important to the investment community. A lower rating usually indicates higher risk, which has an immediate effect on the subsequent interest yield of the debt issue. Besides this, many regulatory requirements for investment or financial decisions in different countries are specified based on such credit ratings. Many agencies allow investment only in companies having the top four rating categories ("investment level" ratings).
There is also substantial empirical evidence in the finance and accounting literature that has established the importance of the information content contained in credit ratings. These studies showed that both the stock and bond markets react in a manner indicating that credit ratings convey important information regarding the value of the firm and its prospects of being able to repay its debt obligations as scheduled [28].

A company obtains a credit rating by contacting a rating agency and requesting that an issuer rating be assigned to the company or that an issue rating be assigned to a new debt issue. Typically, the company requesting a credit rating submits a package containing the following documentation: annual reports for past years, the latest quarterly reports, income statement and balance sheet, the most recent prospectus for debt issues, and other specialized information and statistical reports. The rating agency then assigns a team of financial analysts to conduct basic research on the characteristics of the company and the individual issue. After meeting with the issuer, the designated analyst prepares a rating report and presents it to the rating committee, together with his or her rating recommendations. The committee reviews the documentation presented and discusses it with the analysts involved. The committee makes the final decision on the credit rating and takes responsibility for the rating results.
It is generally believed that the credit rating process involves highly subjective assessment of both quantitative and qualitative factors of a particular company as well as pertinent industry-level or market-level variables. Rating agencies and some researchers have emphasized the importance of subjective judgment in the bond-rating process and criticized the use of simple statistical models and other models derived from AI techniques to predict credit ratings, although they agree that such analyses provide a basic ground for judgment in general. However, as we will show in the next section, the literature on credit rating prediction has demonstrated that statistical models and AI models (mainly neural networks) achieve remarkably good prediction performance and largely capture the characteristics of the bond-rating process.

3. Literature review

A substantial literature can be found on bond-rating prediction. We categorized the methods extensively used in prior research into statistical methods and machine learning methods.

3.1. Statistical methods

The use of statistical methods for bond-rating prediction can be traced back to 1959, when Fisher utilized ordinary least squares (OLS) in an attempt to explain the variance of a bond's risk premium [13]. Many subsequent studies used OLS to predict bond ratings [19,36,47]. Pinches and Mingo [33,34] utilized multiple discriminant analysis (MDA) to yield a linear discriminant function relating a set of independent variables to a dependent variable, to better suit the ordinal nature of bond-rating data and increase classification accuracy. Other researchers utilized logistic regression analysis [10] and probit analysis [17,22]. These studies used different data sets and the prediction accuracies were typically between 50% and 70%. Different financial variables have been used in different studies. The financial variables typically selected included measures of size, financial leverage, long-term capital intensiveness, return on investment, short-term capital intensiveness, earnings stability and debt coverage stability [34].

The general conclusion from these efforts in bond-rating prediction using statistical methods was that a simple model with a small list of financial variables could classify about two-thirds of a holdout sample of bonds. These statistical models were succinct and easy to explain. However, the problem with applying these methods to the bond-rating prediction problem is that the multivariate normality assumptions for the independent variables are frequently violated in financial data sets [8], which makes these methods theoretically invalid for finite samples.

3.2. Artificial intelligence methods

Recently, Artificial Intelligence (AI) techniques, particularly rule-based expert systems, case-based reasoning systems and machine learning techniques such as neural networks, have been used to support such analysis. The machine learning techniques automatically extract knowledge from a data set and construct different model representations to explain the data set.
The major difference between traditional statistical methods and machine learning methods is that statistical methods usually require the researchers to impose structures on the models, such as the linearity in multiple regression analysis, and to construct the model by estimating parameters to fit the data or observations, while machine learning techniques also allow the particular structure of the model to be learned from the data. As a result, the human-imposed structures of the models used in statistical methods are relatively simple and easy to interpret, while the models obtained by machine learning methods are usually very complicated and hard to explain. Galindo and Tamayo [14] used model size to differentiate statistical methods from machine learning methods. For a given training sample size, there is an optimal model size. The models used in statistical methods are usually too simple and tend to under-fit the data, while machine learning methods generate complex models and tend to over-fit the data. This is in fact the trade-off between the explanatory power and the parsimony of a model, where explanatory power leads to high prediction accuracy and parsimony usually assures generalizability and interpretability of the model.

The most frequently used AI method was the backpropagation neural network. Many previous studies
compared the performance of neural networks with statistical methods and other machine learning techniques. Dutta and Shekhar [9] were among the first to investigate the applicability of neural networks to bond rating. They obtained a prediction accuracy of 83.3% when classifying AA and non-AA rated bonds. Singleton and Surkan [38] used a backpropagation neural network to classify bonds of the 18 Bell Telephone companies divested by AT&T. The task was to classify a bond as being rated either Aaa or one of A1, A2 and A3 by Moody's. They experimented with backpropagation neural networks with one or two hidden layers, and the best network obtained 88% testing accuracy. They also compared a neural network model with multiple discriminant analysis (MDA) and demonstrated that neural networks achieved better performance in predicting the direction of a bond rating than discriminant analysis could [39]. Kim [25] compared the neural network approach with linear regression, discriminant analysis, logistic analysis, and a rule-based system for bond rating and found that neural networks achieved better performance than the other methods in terms of classification accuracy. The data set used in this study was prepared from Standard and Poor's Compustat financial data, and the prediction task was on six rating categories. Moody and Utans [30] used neural networks to predict 16 categories of S&P ratings ranging from B- and below (3) to AAA (18). Their model predicted the ratings of 36.2% of the firms correctly. They also tested the system with 5-class prediction and 3-class prediction and obtained prediction accuracies of 63.8% and 85.2%, respectively. Maher and Sen [28] compared the performance of neural networks on bond-rating prediction with that of logistic regression. They used data from Moody's Annual Bond Record and Standard and Poor's Compustat financial data. The best performance they obtained was 70%. Kwon et al. [27] applied ordinal pairwise partitioning (OPP) approaches to backpropagation neural networks. They used Korean bond-rating data and demonstrated that neural networks with OPP had the highest level of accuracy (71-73%), followed by conventional neural networks (66-67%) and multiple discriminant analysis (58-61%). Chaveesuk et al. [3] compared the backpropagation neural network with radial basis function networks, learning vector quantization and logistic regression. Their study revealed that the neural network and the logistic regression model obtained the best performances; however, the two methods only achieved accuracies of 51.9% and 53.3%, respectively.

Other researchers explored building case-based reasoning systems to predict bond ratings and at the same time provide better user interpretability. The basic principle of case-based reasoning is to match a new problem with the closest previous cases and try to learn from experience to solve the problem. Shin and Han [37] proposed a case-based reasoning approach to predict the bond ratings of firms. They used inductive learning for case indexing, and used nearest-neighbor matching algorithms to retrieve similar past cases. They demonstrated that their system had higher prediction accuracy (75.5%) than the MDA (60%) and ID3 (59%) methods. They used Korean bond-rating data and the prediction was for five categories. Some other researchers have studied the problems of default prediction and bankruptcy prediction [26,41,49], which are closely related to the bond-rating prediction problem.
Similar financial variables and methods were used in such studies, and the prediction performance was typically higher because of the binary output categories. We summarize important prior studies that applied AI techniques to the bond-rating prediction problem in Table 1.

In summary, the previous literature has consisted of extensive efforts to apply neural networks to the bond-rating prediction problem, and comparisons with other statistical methods and machine learning methods have been conducted by many researchers. The general conclusion has been that neural networks outperformed conventional statistical methods and inductive learning methods in most prior studies. The assessment of the prediction accuracy obtained by individual studies should be adjusted by the number of prediction classes used. For studies that classified into more than five classes, the typical accuracy level was between 55% and 75%. The financial variables and sample sizes used by different studies both covered very wide ranges: the number of financial variables used ranged from 7 to 87 and the sample size ranged from 47 to 3886.
Table 1
Prior bond-rating prediction studies using Artificial Intelligence techniques

[9] Rating categories: 2 (AA vs. non-AA). AI method: BP. Accuracy: 83.30%. Data: US. Variables: liability/cash asset, debt ratio, sales/net worth, profit/sales, financial strength, earnings/fixed costs, past 5-year revenue growth rate, projected next 5-year revenue growth rate, working capital/sales, subjective prospect of company. Sample size: 30/17. Benchmark: LinR (64.7%).

[38] Rating categories: 2 (Aaa vs. A1, A2 or A3). AI method: BP. Accuracy: 88%. Data: US (Bell companies). Variables: debt/total capital, pre-tax interest expense/income, return on investment (or equity), 5-year ROE variation, log(total assets), construction cost/total cash flow, toll revenue ratio. Sample size: 126. Benchmark: MDA (39%).

[15] Rating categories: 3. AI method: BP. Accuracy: 84.90%. Data: US S&P. Variables: 87 financial variables. Sample size: 797. Benchmark: N/A.

[25] Rating categories: 6. AI methods: BP, RBS. Accuracy: 55.17% (BP), 31.03% (RBS). Data: US S&P. Variables: total assets, total debt, long-term debt or total invested capital, current asset or liability, (net income + interest)/interest, preferred dividend, stock price or common equity per share, subordination. Sample size: 110/58/60. Benchmarks: LinR (36.21%), MDA (36.20%), LogR (43.10%).

[30] Rating categories: 16. AI method: BP. Accuracy: 36.2%, 63.8% (5 classes), 85.2% (3 classes). Data: US S&P. Variables: N/A. Sample size: N/A. Benchmark: N/A.

[28] Rating categories: 6. AI method: BP. Accuracy: 70% (7), 66.67% (5). Data: US Moody's. Variables: total assets, long-term debt/total assets, net income from operations/total assets, subordination status, common stock market beta value. Sample size: 299. Benchmarks: LogR (61.66%), MDA (58-61%).

[27] Rating categories: 5. AI method: BP (with OPP). Accuracy: 71-73% (with OPP), 66-67% (without OPP). Data: Korean. Variables: 24 financial variables. Sample size: 126. Benchmark: MDA (58-62%).

[27] Rating categories: 5. AI methods: ACLS, BP. Accuracy: 59.9% (ACLS), 72.5% (BP). Data: Korean. Variables: 24 financial variables. Sample size: 126. Benchmark: MDA (61.6%).

[3] Rating categories: 6. AI methods: BP, RBF, LVQ. Accuracy: 56.7% (BP), 38.3% (RBF), 36.7% (LVQ). Data: US S&P. Variables: total assets, total debt, long-term debt/total capital, short-term debt/total capital, current asset/current liability, (net income + interest expense)/interest expense, total debt/total assets, profit/sales. Sample size: 60/60 (10 for each category). Benchmark: LogR (53.3%).

[37] Rating categories: 5. AI methods: CBR, GA. Accuracy: 75.5% (CBR and GA combined), 62.0% (CBR), 53-54% (ID3). Data: Korean. Variables: firm classification, firm type, total assets, stockholders' equity, sales, years after founding, gross profit/sales, net cash flow/total assets, financial expense/sales, total liabilities/total assets, depreciation/total expense, working capital turnover. Sample size: 3886. Benchmark: MDA ( %).

BP: Backpropagation Neural Network, RBS: Rule-based System, ACLS: Analog Concept Learning System, RBF: Radial Basis Function, LVQ: Learning Vector Quantization, CBR: Case-based Reasoning, GA: Genetic Algorithm, MDA: Multiple Discriminant Analysis, LinR: Linear Regression, LogR: Logistic Regression, OPP: Ordinal Pairwise Partitioning. Sample size: training/tuning/testing.

Past academic research in
bond-rating prediction has mainly been conducted in the United States and Korean markets.

4. Research questions

The literature of bond-rating prediction can be viewed as researchers' efforts to model bond raters' rating behavior using publicly available financial information. There have been two themes in these studies: the prediction accuracy and the explanatory power of the models. By exploring statistical methods for the bond-rating prediction problem, researchers have shown that relatively simple functions of historical and publicly available data can be used as an excellent first approximation for the highly complex and subjective bond-rating process [24]. These statistical methods have provided higher prediction accuracy than expected, together with simple interpretations of the bond-rating process. Many studies have reported lists of important variables for bond-rating prediction models. The recent application of different Artificial Intelligence (AI) techniques to the bond-rating prediction problem achieved higher prediction accuracy with typically more sophisticated models in which stronger learning capabilities were embedded. One of the most successful AI techniques was the neural network, which generally achieved the best results in the reported studies. However, due to the difficulty of interpreting neural network models, most studies that applied neural networks focused on prediction accuracy. Few efforts to use neural network models to provide a better understanding of the bond-rating process have been reported in the literature.

This research attempts to extend previous research in three directions. Firstly, we are interested in applying a relatively new learning algorithm, support vector machines (SVM), to the bond-rating prediction problem. SVM is a novel learning machine based on statistical learning theory, which has yielded excellent generalization performance on a wide range of problems. We expect to improve prediction accuracy by adopting this new algorithm. Secondly, we want to apply the results from previous research on neural network model interpretation to the bond-rating problem, and to try to provide some insights about the bond-rating process through neural network models. Thirdly, although there are different markets, such as the United States market and the Korean market, few efforts have been made to provide cross-market analysis. Based on the interpretation of neural network models, we aim to explore the differences between bond-rating processes in different markets.

5. Analytical methods

The literature of bond-rating prediction has shown that backpropagation neural networks have achieved a better accuracy level than other statistical methods (multiple linear regression, multiple discriminant analysis, logistic regression, etc.) and other machine learning algorithms (inductive learning methods such as decision trees). In this study, we chose backpropagation neural networks and a newly introduced learning method, support vector machines, to analyze bond ratings. We provide brief descriptions of the two methods in this section.
Since the neural network is a widely adopted method, we will focus more on the relatively new method, support vector machines.

5.1. Backpropagation neural network

Backpropagation neural networks have been extremely popular because of their unique learning capability [48] and have been shown to perform well in different applications in our previous research, such as medical applications [43] and game playing [4]. A typical backpropagation neural network consists of a three-layer structure: input-layer nodes, output-layer nodes and hidden-layer nodes. In our study, we used the financial variables as the input nodes and the rating outcome as the output-layer nodes. Backpropagation networks are fully connected, layered, feed-forward models. Activations flow from the input layer through the hidden layer, then to the output layer. A backpropagation network typically starts out with a random set of weights. The network adjusts its weights each time it sees an input-output pair. Each pair is processed in two stages, a forward pass and a backward pass. The forward pass involves presenting a sample input to the network and letting activations flow until they reach the output layer. During the backward pass, the network's actual output is compared with the
target output and error estimates are computed for the output units. The weights connected to the output units are adjusted to reduce the errors (a gradient descent method). The error estimates of the output units are then used to derive error estimates for the units in the hidden layer. Finally, errors are propagated back to the connections stemming from the input units. The backpropagation network updates its weights incrementally until the network stabilizes. For algorithm details, readers are referred to Refs. [1,48].

Although researchers have tried different neural network architecture selection methods to build the optimal neural network model for better prediction performance [30], in this study we used a standard three-layer fully connected backpropagation neural network, in which the input-layer nodes are the financial variables, the output nodes are the bond-rating classes, and the number of hidden-layer nodes is (number of input nodes + number of output nodes)/2. We followed this standard neural network architecture because it provides results comparable to the optimal architecture and works well as a benchmark for comparison.

5.2. Support vector machines

Recent advances in statistics, generalization theory, computational learning theory, machine learning and complexity have provided new guidelines and deep insights into the general characteristics and nature of the model building/learning/fitting process [14]. Some researchers have pointed out that statistical and machine learning models are not all that different conceptually [29,46]. Many of the new computational and machine learning methods generalize the idea of parameter estimation in statistics. Among these new methods, support vector machines have attracted the most interest in the last few years.

The support vector machine (SVM) is a novel learning machine first introduced by Vapnik [45]. It is based on the Structural Risk Minimization principle from computational learning theory. Hearst et al. [18] positioned the SVM algorithm at the intersection of learning theory and practice: it contains a large class of neural nets, radial basis function (RBF) nets, and polynomial classifiers as special cases, yet it is simple enough to be analyzed mathematically, because it can be shown to correspond to a linear method in a high-dimensional feature space nonlinearly related to the input space. In this sense, support vector machines can be a good candidate for combining the strengths of conventional statistical methods, which are more theory-driven and easier to analyze, and machine learning methods, which are more data-driven, distribution-free and robust. In the last few years, there have been substantial developments in different aspects of support vector machines, including theoretical understanding, algorithmic strategies for implementation and real-life applications. SVM has yielded excellent generalization performance on a wide range of problems including bioinformatics [2,21,52], text categorization [23], image detection [32], etc. These application domains typically involve high-dimensional input spaces, and the good performance is also related to the fact that SVM's learning ability can be independent of the dimensionality of the feature space. The SVM approach has been applied in several financial applications recently, mainly in the area of time series prediction and classification [42,44].
A recent study closely related to our work investigated the use of the SVM approach to select bankruptcy predictors. Its authors reported that SVM was competitive with and outperformed other classifiers (including neural networks and the linear discriminant classifier) in terms of generalization performance [12]. In this study, we are interested in evaluating the performance of the SVM approach in the domain of credit rating prediction in comparison with that of backpropagation neural networks.

A simple description of the SVM algorithm is provided here; for more details, please refer to Refs. [7,31]. The underlying theme of this class of supervised learning methods is to learn from observations. There is an input space, denoted by X, X ⊆ R^n, an output space, denoted by Y, and a training set, denoted by S, S = ((x_1, y_1), (x_2, y_2), ..., (x_l, y_l)) ⊆ (X × Y)^l, where l is the size of the training set. The overall assumption for learning is the existence of a hidden function Y = f(X), and the task of classification is to construct a heuristic function h(X) such that h ≈ f on the prediction of Y. The nature of the output space Y decides the learning type: Y = {1, -1} leads to a binary classification problem, Y = {1, 2, 3, ..., m} leads to a multiple-class classification problem, and Y ⊆ R leads to a regression problem. SVM belongs to the type of maximal margin classifier, in which the classification problem can be
represented as an optimization problem, as shown in Eq. (1):

\min_{w,b} \; \langle w, w \rangle \quad \text{s.t.} \quad y_i \left( \langle w, \phi(x_i) \rangle + b \right) \ge 1, \quad i = 1, \ldots, l.   (1)

Vapnik [45] showed how training a support vector machine for pattern recognition leads to a quadratic optimization problem with bound constraints and one linear equality constraint (Eq. (2)). Although quadratic optimization problems of this type are well understood, the number of training examples determines the size of the problem, so using standard quadratic programming solvers can easily make the computation infeasible for large training sets. Different solutions have been proposed for solving the quadratic programming problem in SVM by utilizing its special properties. These strategies include gradient ascent methods, chunking and decomposition, and Platt's Sequential Minimal Optimization (SMO) algorithm, which extended the chunking approach to the extreme by updating two parameters at a time [35].

\max W(\alpha) = \sum_{i=1}^{l} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{l} y_i y_j \alpha_i \alpha_j \langle \phi(x_i), \phi(x_j) \rangle = \sum_{i=1}^{l} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{l} y_i y_j \alpha_i \alpha_j K(x_i, x_j) \quad \text{s.t.} \quad \sum_{i=1}^{l} y_i \alpha_i = 0, \; \alpha_i \ge 0, \; i = 1, \ldots, l,   (2)

where a kernel function, K(x_i, x_j), is applied to allow all necessary computations to be performed directly in the input space (the kernel function K(x_i, x_j) is a function of the inner product between x_i and x_j; it transforms the computation of the inner product ⟨φ(x_i), φ(x_j)⟩ into one based on ⟨x_i, x_j⟩). Conceptually, the kernel functions map the original data into a higher-dimensional space and make the input data set linearly separable in the transformed space. The choice of kernel function is highly application-dependent, and it is the most important factor in support vector machine applications.

The formulation in Eq. (2) only considers the separable case, which corresponds to an empirical error of zero. For noisy data, slack variables are introduced to relax the hard-margin constraints to allow for some classification errors [5], as shown in Eq. (3). In this new formulation, the noise level C > 0 determines the trade-off between the empirical error and the complexity term.

\min_{w,b,\xi} \; \langle w, w \rangle + C \sum_{i=1}^{l} \xi_i \quad \text{s.t.} \quad y_i \left( \langle w, \phi(x_i) \rangle + b \right) \ge 1 - \xi_i, \; \xi_i \ge 0, \; i = 1, \ldots, l.   (3)

This extended formulation leads to the dual problem described in Eq. (4):

\max W(\alpha) = \sum_{i=1}^{l} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{l} y_i y_j \alpha_i \alpha_j \langle \phi(x_i), \phi(x_j) \rangle = \sum_{i=1}^{l} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{l} y_i y_j \alpha_i \alpha_j K(x_i, x_j) \quad \text{s.t.} \quad \sum_{i=1}^{l} y_i \alpha_i = 0, \; 0 \le \alpha_i \le C, \; i = 1, \ldots, l.   (4)

The standard SVM formulation solves only the binary classification problem, so we need to use several binary classifiers to construct a multi-class classifier or make fundamental changes to the original formulation to consider all classes at the same time. Hsu and Lin's recent paper [20] compared several methods for multi-class support vector machines and concluded that the one-against-one and DAG methods are more suitable for practical use. We used the software package Hsu and Lin provided, BSVM, for our study. We experimented with different SVM parameter settings on the credit rating data, including the noise level C and different kernel functions (including the linear, polynomial, radial basis and sigmoid functions). We also experimented with different approaches for multi-class classification using SVM [20].
The final SVM setting used Crammer and Singer's formulation [6] for multi-class SVM classification, a radial basis kernel function (K(x_i, x_j) = exp(-γ|x_i - x_j|²), γ = 0.1),¹ and a fixed value of the noise-level parameter C. This setting achieved relatively better performance on the credit rating data set.

¹ Comparable prediction results were also observed for some models when using the polynomial kernel function K(x_i, x_j) = (γ⟨x_i, x_j⟩ + r)^d, with γ = 0.1, d = 3 and r = 0.
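The original experiments used the BSVM package with the Crammer and Singer multi-class formulation; the snippet below is only a minimal sketch of a comparable setup using scikit-learn, whose SVC handles multi-class problems by one-against-one rather than by Crammer and Singer's method. The placeholder arrays, the value of C and the use of feature standardization are illustrative assumptions, not details taken from the paper.

# Minimal sketch (not the authors' BSVM setup): multi-class SVM with an RBF kernel
# on a credit-rating data set prepared as in Section 6. C = 10.0 is illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(74, 16))   # placeholder: 74 cases, 14 ratios + 2 balance measures
y = np.arange(74) % 5           # placeholder: 5 rating classes (balanced here for simplicity)

# RBF kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2) with gamma = 0.1, as in Section 5.2.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=0.1, C=10.0))
print("10-fold CV accuracy: %.3f" % cross_val_score(svm, X, y, cv=10).mean())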
6. Data sets

For the purpose of this study, we prepared two bond-rating data sets from the United States and Taiwan markets.

6.1. Taiwan data set

Corporate credit rating has a relatively short history in Taiwan. Taiwan Ratings Corporation (TRC) is the first credit rating service organization in Taiwan and is currently partnering with Standard and Poor's. Established in 1997, the TRC started rating companies shortly thereafter and has a couple of hundred rating results so far, over half of which are credit ratings of financial institutions. We obtained the rating information from TRC and limited our target to financial organizations because of data availability. Companies requesting credit ratings are required to provide their last five annual reports and the financial statements of the last three quarters at the beginning of the rating process. Usually, the rating process takes 3 to 6 months, depending on the scheduling of the required conferences with high-level management.

We obtained the fundamental financial variables from the Securities and Futures Institute (SFI). Publicly traded companies report their financial statements and financial ratios to the SFI quarterly. Although the SFI has 36 financial ratios in its database, not every company reported all 36 financial ratios, and some ratios are not used by most industries. We needed to match the financial data from the SFI with the rating information from TRC. To keep more data points, we decided to use the financial variables two quarters before the rating release date as the basis for rating prediction. It is reasonable to assume that the most recent financial information contributes most to the bond-rating results. We obtained the release date of the ratings and each company's fiscal year information, and used the financial variables two quarters prior to the rating release to form a case in our data set. After matching and filtering data with missing values, we obtained a data set of 74 cases with bank credit ratings and 21 financial variables, which covered 25 financial institutions with ratings from 1998 onwards. Five rating categories appeared in our data set: twAAA, twAA, twA, twBBB and twBB.

6.2. United States data set

We prepared a US corporate rating data set comparable to the Taiwan data set from the Standard and Poor's Compustat data set. We obtained financial variables comparable to those in the Taiwan data set, and the S&P senior debt rating, for all the commercial banks (DNUM 6021). The data set covered financial variables and ratings over a 10-year period starting in 1991. Since the rating release date was not available, the financial variables of the first quarter were used to match with the rating results. After filtering data with missing values, we obtained 265 cases of 10-year data for 36 commercial banks. Five rating categories appeared in our data set: AA, A, BBB, BB and B. The distributions of the credit rating categories of the two data sets are presented in Table 2.

Table 2
Ratings in the two data sets
TW data: twAAA 8, twAA 11, twA 31, twBBB 23, twBB 1; total 74.
US data: AA 20, A 181, BBB 56, BB 7, B 1; total 265.

6.3. Variable selection

The financial variables we obtained for the Taiwan data set are listed in Table 3. These variables include the financial ratios that were available in the SFI database and two balance measures frequently used in the bond-rating prediction literature, total assets and total liabilities. The first seven variables are financial variables frequently used in prior bond-rating prediction studies. Some of the other financial ratios are not commonly used in the US; therefore, short descriptions are provided.
We ran ANOVA on the Taiwan data set to test whether the differences between the rating classes were significant for each financial variable. If the difference was not significant (high p-value), the financial variable was considered not informative with regard to the bond-rating decision. Table 3 shows the between-group p-value of each variable, which provides information about whether or not the difference is significant.
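The ANOVA screening described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the data-frame layout, column names and the significance cut-off are assumptions (the paper only states that variables with relatively high p-values were dropped).

# Sketch of the ANOVA-based variable screening of Section 6.3 (hypothetical column names).
import pandas as pd
from scipy.stats import f_oneway

def anova_screen(df: pd.DataFrame, rating_col: str = "rating", cutoff: float = 0.2):
    """Return (kept, dropped) variables based on the between-group ANOVA p-value."""
    kept, dropped = [], []
    for col in df.columns.drop(rating_col):
        groups = [g[col].dropna().values for _, g in df.groupby(rating_col)]
        groups = [g for g in groups if len(g) > 1]   # need at least two cases per rating class
        p_value = f_oneway(*groups).pvalue
        (dropped if p_value > cutoff else kept).append((col, round(p_value, 2)))
    return kept, dropped
# With the Taiwan data, this kind of screening led to dropping X5, X9, X17, X19 and X20.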
Table 3
Financial ratios used in the data set, with the ANOVA between-group p-value of each variable

X1 Total assets: 0.00
X2 Total liabilities: 0.00
X3 Long-term debts/total invested capital: 0.12
X4 Debt ratio: 0.00
X5 Current ratio: 0.36
X6 Times interest earned (EBIT/interest): 0.00
X7 Operating profit margin: 0.00
X8 (Shareholders' equity + long-term debt)/fixed assets
X9 Quick ratio: 0.37
X10 Return on total assets: 0.01
X11 Return on equity: 0.04
X12 Operating income/received capitals: 0.00
X13 Net income before tax/received capitals: 0.00
X14 Net profit margin: 0.00
X15 Earnings per share: 0.00
X16 Gross profit margin: 0.02
X17 Non-operating income/sales: 0.81
X18 Net income before tax/sales: 0.00
X19 Cash flow from operating activities/current liabilities: 0.84
X20 Cash flow from operating activities/(capital expenditures + increase in inventory + cash dividends) in the last 5 years: 0.64
X21 (Cash flow from operating activities - cash dividends)/(fixed assets + other assets + working capital): 0.08

We eliminated five ratios in our data set that had relatively high p-values (X5, X9, X17, X19 and X20). Thus, we kept 14 ratios and the two balance measures in our final Taiwan data set. For better comparison of the two markets, we tried to use similar variables for the US market. Two ratios were not available in the US data set (X6 and X21). Therefore, the US data set contained 12 available ratios and the two balance measures.²

² To make sure that no valuable information was lost due to the variable selection process, we also experimented with different prediction models on the original data sets. The prediction accuracies of the SVM and NN models deteriorated when the variables with high p-values were added.

7. Experiment results and analysis

Based on the two data sets we prepared for the two markets, we constructed four models for an initial experiment. For each market, we constructed a simple model with commonly used financial variables and a complex model with all available financial variables. The models we constructed are the following:

TW I: Rating = f(X1, X2, X3, X4, X6, X7)
TW II: Rating = f(X1, X2, X3, X4, X6, X7, X8, X10, X11, X12, X13, X14, X15, X16, X18, X21)
US I: Rating = f(X1, X2, X3, X4, X7)
US II: Rating = f(X1, X2, X3, X4, X7, X8, X10, X11, X12, X13, X14, X15, X16, X17)

7.1. Prediction accuracy analysis

For each of the four models, we used the backpropagation neural network and support vector machines to predict the bond ratings. To evaluate the prediction performance, we followed the 10-fold cross-validation procedure, which has shown good performance in model selection [46]. Because some credit rating classes had a very small number of data points for both the US and Taiwan data sets, we also conducted the leave-one-out cross-validation procedure to assess the prediction performances. When performing the cross-validation procedures for the neural networks, 10% of the data was used as a validation set. Table 4 summarizes the prediction accuracies of the four models using both cross-validation procedures. For comparison purposes, the prediction accuracies of a regression model that achieved relatively good performance in the literature, the logistic regression model, are also reported in Table 4.

Table 4
Prediction accuracies of the four models (LogR: logistic regression model, SVM: support vector machines, NN: neural networks). For each of TW I, TW II, US I and US II, accuracies (%) of LogR, SVM and NN are reported under both 10-fold and leave-one-out cross-validation.
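To make the evaluation protocol concrete, the following is a minimal sketch, not the authors' implementation, of the three-layer backpropagation benchmark of Section 5.1 (hidden-layer size = (inputs + outputs)/2) evaluated with 10-fold cross-validation. The placeholder arrays and the use of scikit-learn's MLPClassifier with a 10% internal validation split for early stopping are assumptions.

# Sketch of the backpropagation benchmark and 10-fold cross-validation (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(265, 14))   # placeholder: US data set, 12 ratios + 2 balance measures
y = np.arange(265) % 5           # placeholder: 5 rating classes (balanced here for simplicity)

n_inputs, n_outputs = X.shape[1], len(np.unique(y))
hidden = (n_inputs + n_outputs) // 2   # (number of input nodes + number of output nodes)/2

bnn = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000,
                  early_stopping=True, validation_fraction=0.1, random_state=0),
)
print("10-fold CV accuracy: %.3f" % cross_val_score(bnn, X, y, cv=10).mean())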
The following observations are summarized: support vector machines achieved the best performance
in three of the four models that we tested; the support vector machine and neural network models consistently outperformed the logistic regression model; and the 10-fold and leave-one-out cross-validation procedures obtained comparable prediction accuracies.

Many previous studies using backpropagation neural networks have provided an analysis of prediction errors in terms of the number of rating categories. We also prepared Table 5 to show the ability of the backpropagation neural network to produce predictions within one class of the actual rating. We can conclude that the probabilities for the predictions to be within one class of the actual rating were over 90% for all four models.

Table 5
Within-1-class accuracy results: actual versus predicted rating counts for each model (twAAA, twAA, twA, twBBB, twBB for the Taiwan models; AA, A, BBB, BB, B for the US models). Within-1-class accuracy: TW I 91.89%, TW II 93.24%, US I 97.74%, US II 98.44%.

We can draw several conclusions from the experiment results obtained. First, our results conform to prior research results indicating that analytical models built by machine learning algorithms from publicly available financial information can provide accurate predictions. Although the rating agencies and many institutional writers emphasize the importance of the subjective judgment of the analyst in determining the ratings, it appears that a relatively small list of financial variables largely determines the rating results. This phenomenon appears consistently in different financial markets. In our study, we obtained the highest prediction accuracies of 79.73% for the Taiwan data set and 80.75% for the United States data set. Second, support vector machines slightly improved the credit rating prediction accuracies. Third, the results also showed that models using the small set of financial variables that have been frequently used in the literature achieved comparable, and in some cases even better, results than models using a larger set of financial variables. This validates that the set of financial variables identified in previous studies captures the most relevant information for the credit rating decision.

7.2. Variable contribution analysis

Another focus of this study was the interpretability of machine-learning-based models. By examining the input variables and accuracies of the models, we could provide useful information about the bond-rating process. For example, in our study, we could conclude that bond raters largely rely on a small list of financial variables to make rating decisions. However, it is generally difficult to interpret the relative importance of the variables in the models for either support vector machines or neural networks. In fact, this limitation has been a frequent complaint about neural networks in the literature [41,50]. In the case of bond-rating prediction, neural networks have been shown in many studies to have excellent accuracy performance, but little effort has
been made in prior studies to try to explain the results. In this study, we tried to extend previous research by applying results from recent research in neural network model interpretation, and to try to find out the relative importance of the selected financial variables in the bond-rating problem.

Before further analysis of the neural network models, we optimized the backpropagation models for both the Taiwan and United States markets by selecting optimal sets of input financial variables, following a procedure similar to step-wise regression. We started from the simple models we constructed (US I and TW I), and then tried to remove each financial variable in the model and to add each remaining financial variable one at a time. During this process, we modified the model if any prediction accuracy improvement was observed. We iterated the process with the modified model until no improvement was observed. The optimal neural network models we obtained for the two markets were TW III: Rating = f(X1, X2, X3, X4, X6, X7, X8) and US III: Rating = f(X1, X2, X3, X4, X7, X11). We obtained the highest 10-fold cross-validation prediction accuracies from these two models, 77.52% and 81.32%, respectively. The following model interpretation analysis is based on these two fine-tuned models.

Many researchers have attempted to identify the contribution of the input variables in neural network models. This information can be used to construct the optimal neural network model or to provide a better understanding of the models [11]. These methods mainly extract information from the connection strengths (inter-layer weights) of the neural network model to interpret the model. Several studies have tried to analyze the first-order derivatives of the neural network with respect to the network parameters (including input units, hidden units and weights). For example, Garson [16] developed measures of relative importance or relative strength [51] of the inputs to the network. Other researchers used the connection strengths to extract symbolic rules [40] in order to provide interpretation capability similar to that of decision tree algorithms.

In this study, we were interested in using Garson's and Yoon et al.'s contribution measures to evaluate the relative importance of the input variables in the bond-rating neural network models. Both of these measures are based on a typical three-layer backpropagation neural network. We use the following notation to describe the two measures. Consider a neural network with I input units, J hidden units, and K output units. The connection strengths between the input, hidden and output layers are denoted as w_{ji} and v_{jk}, where i = 1, ..., I, j = 1, ..., J and k = 1, ..., K. Garson's measure of the relative contribution of input i to output k is defined in Eq. (5) and Yoon et al.'s measure is defined in Eq. (6):

Con_{ik} = \frac{\sum_{j=1}^{J} \left( |w_{ji} \cdot v_{jk}| / \sum_{i=1}^{I} |w_{ji}| \right)}{\sum_{i=1}^{I} \sum_{j=1}^{J} \left( |w_{ji} \cdot v_{jk}| / \sum_{i=1}^{I} |w_{ji}| \right)}   (5)

Con_{ik} = \frac{\sum_{j=1}^{J} w_{ji} \cdot v_{jk}}{\sum_{i=1}^{I} \left| \sum_{j=1}^{J} w_{ji} \cdot v_{jk} \right|}   (6)

Garson's method places more emphasis on the connection strengths from the hidden layer to the output layer, but it does not measure the direction of the influence. The two methods both measure the relative contribution of each input variable to each of the output units. The number of output nodes in our neural network architecture was the number of bond-rating classes. Thus, the contribution measures described above evaluate the contribution of each input variable to each of the bond-rating classes.
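As a minimal sketch of how Eq. (5) can be evaluated on a trained three-layer network, the function below computes Garson's relative contributions from the two weight matrices. The matrix layout and the example dimensions are illustrative assumptions, not the authors' code.

# Garson's relative-contribution measure (Eq. (5)). W[j, i] is the weight from input i
# to hidden unit j; V[j, k] is the weight from hidden unit j to output unit k.
import numpy as np

def garson_contributions(W: np.ndarray, V: np.ndarray) -> np.ndarray:
    shares = np.abs(W) / np.abs(W).sum(axis=1, keepdims=True)   # input share per hidden unit, shape (J, I)
    raw = np.einsum("ji,jk->ik", shares, np.abs(V))             # sum over hidden units, shape (I, K)
    return raw / raw.sum(axis=0, keepdims=True)                 # normalize over inputs for each output k

# Example with illustrative dimensions: 7 inputs (as in TW III), 6 hidden units, 5 rating classes.
rng = np.random.default_rng(2)
Con = garson_contributions(rng.normal(size=(6, 7)), rng.normal(size=(6, 5)))
print(Con.sum(axis=0))   # each column sums to 1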
This created some problems in interpreting Yoon et al.'s contribution measure. The direction of the influence of an input financial variable may be different across bond-rating classes. With a relatively large number of bond-rating classes, the results we obtained were too complicated to permit interpretation of the contribution measures and did not improve our understanding of the bond-rating process. On the other hand, the contribution analysis results from Garson's method showed that input variables made similar contributions to different bond-rating classes, which allowed us to understand the relative importance of the different input financial variables in our neural network models. We summarize the results we obtained using Garson's method in Fig. 1.
Fig. 1. Financial variable contribution based on Garson's measure.

7.3. Cross-market analysis

Using Garson's contribution measure, we can assess the relative importance of each input financial variable to the bond-rating process in different markets. Since the neural network models we constructed for the Taiwan and United States markets both achieved prediction accuracy close to 80%, we believe that the information about variable importance extracted from the models also, to a large extent, captures the bond raters' behavior. As shown in Fig. 1, although the optimal neural network models for the Taiwan and US data sets used similar lists of input financial variables (five financial variables, X1, X2, X3, X4 and X7, were selected by both models), the relative importance of the different financial variables was quite different in the two models. For the US model, X1, X2, X3 and X7 were more important, while X4 and X11 were relatively less important. For the Taiwan model, X4, X7 and X8 were more important, while X1, X2, X3 and X6 were relatively less important. The important financial variables in the US model were total assets, total liabilities, long-term debts/total invested capital and operating profit margin, while debt ratio, operating profit margin, and (shareholders' equity + long-term debt)/fixed assets were more important in the Taiwan model. Total assets (X1) and total liabilities (X2) were the most important variables in the US data set, which indicates that US bond raters rely more on the size of the target company in giving ratings. The most important variable for the Taiwan data set was the operating profit margin (X7), which indicates that Taiwan raters focus more on a company's profitability when making rating decisions.

A closer study of the characteristics of the data sets provided a partial explanation of some of the contributions of the variables in the different models. In our experimental data sets, the operating profit margin for each rating group in the US data set, except for the B rating group, averaged between 30% and 35%, but the average operating profit margin ranged from 3.6% in the BBB group to 40% in the AAA group of the Taiwan data set. Thus, the operating profit margin in the Taiwan data set provided more information to the credit rating model than it did in the US model. A similar phenomenon was observed for the debt ratio (X4). The debt ratio measures how a company is leveraging its debt against the capital committed by its owners. According to contemporary financial theory, companies are encouraged to operate with leverage in order to grow rapidly, and most US companies do operate at high leverage. However, similar to those in most Asian countries, most of Taiwan's companies are more conservative and choose not to operate at very high leverage. Comparing the two data sets, we found that every rating group in the US data set had an average debt ratio over 90%, while no rating group in the Taiwan data set had an average debt ratio higher than 90%. For example, the AAA group in the Taiwan data set had an average debt ratio of 37%, which was much lower than the debt ratios of most US companies. This information confirmed our variable contribution analysis, in which the debt ratio was assigned the least contribution in the US model while it was one of the more important variables in the Taiwan data set.
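The per-rating-group averages discussed above can be reproduced with a simple group-by over each market's cases; this is an illustrative sketch with hypothetical data-frame column names.

# Average debt ratio (X4) and operating profit margin (X7) by rating class for one market.
import pandas as pd

def group_means(df: pd.DataFrame) -> pd.DataFrame:
    # df: one market's cases, with a 'rating' column and the financial variables as columns.
    return df.groupby("rating")[["X4_debt_ratio", "X7_operating_profit_margin"]].mean()

# Comparing group_means(tw_df) with group_means(us_df) shows the kind of contrast noted above,
# e.g. an average debt ratio around 37% for the Taiwan AAA group versus over 90% in every US group.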
In summary, we extended prior research by adopting relative importance measures to interpret the relative importance of the input variables in the bond-rating neural network models. The results we obtained showed quite different characteristics of the bond-rating processes in Taiwan and the United States. We tried to
provide some explanations of the contribution analysis results by examining the data distribution characteristics of the different data sets. However, expert judgment and field studies are needed to further rationalize and evaluate the relative importance information extracted from the neural network models.

8. Discussion and future directions

In this study, we applied a newly introduced learning method based on statistical learning theory, support vector machines, together with a frequently used high-performance method, backpropagation neural networks, to the problem of credit rating prediction. We used two data sets covering Taiwan financial institutions and United States commercial banks as our experimental testbed. The results showed that support vector machines achieved accuracy comparable to that of backpropagation neural networks. Applying the results from research in neural network model interpretation, we conducted input financial variable contribution analysis and determined the relative importance of the input variables. We believe this information can help users better understand the bond-rating process. We also used the contribution analysis results to compare the characteristics of the bond-rating processes in the United States and Taiwan markets. We found that the optimal models we built for the two markets used similar lists of financial variables as inputs, but that the relative importance of the variables was quite different across the two markets.

One future direction of this research would be to conduct a field study or a survey study to compare the interpretation of the bond-rating process we have obtained from our models with bond-rating experts' knowledge. Deeper market structure analysis is also needed to fully explain the differences we found in our models.

Acknowledgements

This research is partly supported by the NSF Digital Library Initiative-2 project, "High-performance Digital Library Systems: From Information Retrieval to Knowledge Management" (NSF IIS grant, beginning April 1999). We would like to thank the Securities and Futures Institute for providing the data set, and Ann Hsu for her assistance during the project. We would also like to thank Dr. Jerome Yen for his valuable comments during the initial stage of this research.

References

[1] C.M. Bishop, Neural Networks for Pattern Recognition, Oxford Univ. Press, New York.
[2] M.P. Brown, W.N. Grundy, D. Lin, N. Cristianini, C.W. Sugnet, T.S. Furey, M. Ares, D. Haussler, Knowledge-based analysis of microarray gene expression data by using support vector machines, Proceedings of the National Academy of Sciences 97 (1) (2000).
[3] R. Chaveesuk, C. Srivaree-Ratana, A.E. Smith, Alternative neural network approaches to corporate bond rating, Journal of Engineering Valuation and Cost Analysis 2 (2) (1999).
[4] H. Chen, P. Buntin, L. She, S. Sutjahjo, C. Sommer, D. Neely, Expert prediction, symbolic learning, and neural networks: an experiment on greyhound racing, IEEE Expert 9 (6) (1994).
[5] C. Cortes, V.N. Vapnik, Support vector networks, Machine Learning 20 (1995).
[6] K. Crammer, Y. Singer, On the learnability and design of output codes for multiclass problems, Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, Morgan Kaufmann, San Francisco, 2000.
[7] N. Cristianini, J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge Univ. Press, Cambridge, New York.
[8] E.B.
Zan Huang is a doctoral student in the Department of Management Information Systems at the University of Arizona and a research associate in the Artificial Intelligence Lab. His current research interests include recommender systems, data mining and text mining for financial applications, intelligent agents, and experimental economics-related research for electronic markets. He received a BS in Management Information Systems from Tsinghua University, Beijing, China.

Hsinchun Chen is McClelland Professor of MIS and Andersen Professor of MIS at the University of Arizona, where he is the director of the Artificial Intelligence Lab and the director of the Hoffman E-Commerce Lab. His articles have appeared in Communications of the ACM, IEEE Computer, Journal of the American Society for Information Science and Technology, IEEE Expert, and many other publications. Professor Chen has received grant awards from the NSF, DARPA, NASA, NIH, NIJ, NLM, NCSA, HP, SAP, 3COM, and AT&T. He serves on the editorial boards of Decision Support Systems and the Journal of the American Society for Information Science and Technology, and has served as conference general chair of the International Conferences on Asian Digital Library over the past four years.

Wun-Hwa Chen received his PhD in Management Science from the State University of New York at Buffalo. He is currently a Professor of Operations Management in the Department of Business Administration at National Taiwan University. Prior to his current affiliation, he was an Assistant Professor in the Department of Management and Systems at Washington State University. His research and teaching interests lie in production planning and scheduling, AI/OR algorithmic design, data mining models for customer relationship management, and mathematical financial modeling. He has published articles in several journals, including Annals of Operations Research, IEEE Transactions on Robotics and Automation, European Journal of Operational Research, and Computers and Operations Research. In addition, Dr. Chen has chaired the Information Technology Advisory Committees of several government agencies in Taiwan and has extensive consulting experience with many major corporations in Taiwan. He was also a visiting scholar at the University of New South Wales, Sydney, Australia, in 1995 and at the University of Arizona.

Soushan Wu received his PhD in Finance from the University of Florida. He is a Chair Professor and Dean of the College of Management, Chang-Gung University, Taiwan. He is also a visiting scholar at Clemson University and Hong Kong Polytechnic University. His research interests cover Management Science, Investment Science, Capital Markets, and Information Systems. He has published more than 70 articles in Research in Finance, Financial Management, Asia-Pacific Journal of Finance, International Journal of Accounting and Information Systems, and other journals, and has served as an ad hoc reviewer for academic journals in the fields of Accounting, Finance, and Information Management. Dr. Wu serves as chief editor of the Review of Securities and Futures Markets and the Asia-Pacific Journal of Management and Technology.

Chia-Jung Hsu received his BS in Economics from National Taiwan University in 1997 and an MBA in International Management from Thunderbird, the American Graduate School of International Management. He is currently working towards a Master of Science in Management Information Systems at the University of Arizona. He has experience in the field of securities analysis and is a certified securities and futures broker (Taiwan). He completed CFA Level I in 2001.