Predictive Analytics for the Eyes and Mind


WHITE PAPER

Stephen Few, Perceptual Edge
Most of my current work in the field of data visualization focuses on analytical uses of visualization, especially what's called descriptive statistics. Analysis of this type strives to discover and understand the meanings that reside in data: meanings that reveal what actually exists and has happened in the past. Often, the greatest benefit of data analysis, however, comes from using our knowledge of what's happened in the past to predict what might happen in the future, what's called predictive statistics or predictive analytics. If we understand data well enough to describe the past clearly and accurately, we can usually build a model that can be used to predict what will likely happen in the future as a result of particular conditions, events, or decisions. This interest in prediction has always been a preoccupation of statistics, and for good reason. The ability to predict probable futures allows us to shape the future, rather than merely survive whatever it brings. When supported by good visualizations, predictive analytics come alive in ways that not only help statisticians, but also make it possible for a much broader audience to become involved in shaping the future. In recent years, the benefits that visualization brings to descriptive statistics have been embraced by a growing body of knowledge workers in organizations of all types. It's time for this same broad audience of people to be invited to harness the power of visualization for predictive analytics. There is a mystique that often and unnecessarily shrouds predictive analytics, which I believe can and should be dispersed by simple, well-designed examples of data visualization. This is what I'm attempting to do, at least in part, in this paper.
While it's true that some analytical problems can only be solved by well-trained statisticians and the use of sophisticated tools, most of the analytical tasks that non-statisticians face in the course of their work can be effectively handled using relatively simple visualizations and no more than a basic understanding of statistics.

Predictive Analytics for All

Let's clarify the meaning of predictive analytics. Like many terms that get tossed around by software vendors and even by thought leaders in the field of business intelligence, predictive analytics seems to mean whatever's convenient at the moment. It might be useful for marketing purposes to keep definitions loose and adaptable to the occasion, but it doesn't help us at all. We'll start by turning to everyone's favorite encyclopedia these days: Wikipedia.

    Predictive analytics encompass a variety of techniques from statistics and data mining that analyze current and historical data to make predictions about future events. Such predictions rarely take the form of absolute statements, and are more likely to be expressed as values that correspond to the odds of a particular event or behavior taking place in the future.

    In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision making for candidate transactions.

    Wikipedia entry for predictive analytics, as of December 15, 2008

Copyright 2008, Stephen Few. Reprinted with the permission of Stephen Few.
Despite reservations about definitions from Wikipedia, I think these two paragraphs nicely describe the nature of predictive analytics, and we can use them as a platform for the short journey that I'll take you on in this paper. I'd like to supplement what we've learned from Wikipedia with a few words from Alan McLean of Monash University, because he adds an important emphasis on probability and features the role of models.

    Statistics is commonly taught as a set of techniques to aid in decision making, by extracting information from data. It is argued here that the underlying purpose, often implicit rather than explicit, of every statistical analysis is to establish one or more probability models that can be used to predict values of one or more variables. Such a model constitutes information only in the sense, and to the extent, that it provides predictions of sufficient quality to be useful for decision making. The quality of the decision making is determined by the quality of the predictions, and hence by that of the models used.

    Alan McLean, "The Predictive Approach to Teaching Statistics," Journal of Statistics Education, v. 8, n. 3 (2000)

The goal of statistics, when used for prediction, is not to produce certainty, but to reduce uncertainty to a level that will allow us to make better decisions. To do this, we rely on models. Don't let the terms statistical model or predictive model throw you. In concept, they're quite simple, just like more familiar models of other types. Generally speaking, models are representations of things or events, which we use to examine and understand those things or events when it isn't possible or practical to observe or interact with them directly. Statistical models represent mathematical relationships between the parts that make up the thing or event. Predictive models are those that we can interact with to investigate the results of hypothetical conditions, such as by changing the values of particular variables.
Predictive models make it possible for us to do what some people call what-if analysis. What if such and such a condition existed or event occurred? What would happen as a result? Predictive models give us the means to predict what would probably happen (the probable outcomes of dependent variables) if particular conditions arose naturally or by intention (specified input values to one or more independent variables).

One of my favorite models is 1.5 acres in size, filling a large building in the city of Sausalito, California, located just across the Golden Gate Bridge from San Francisco in Marin County. The Bay Model, as it's called, is a three-dimensional hydraulic model of the entire San Francisco Bay and the surrounding delta system that extends into the Sacramento and San Joaquin valleys, consisting of more than 1,600 square miles of waterways. It was built and is still used by the U.S. Army Corps of Engineers to study the effects of various conditions on this huge estuary, which is vitally important to the state of California. One way that engineers use this model is to anticipate conditions that could harm the bay so they can construct preventative protections.
On two occasions, I've walked all around and over the Bay Model on platforms that let visitors explore this small-scale version of the entire bay area. It's a bit like flying over this beautiful area, where I live, in a helicopter. Like most visitors, I love to spot places that are familiar to me and to get a sense of how they fit into the region as a whole. The Bay Model, like all good models, has been intentionally simplified. It doesn't show houses, and only major roads are included, because its purpose is to help engineers focus on water levels in relation to particular areas of shoreline and structures, such as bridges, ports, and shipping routes. Small metal rods stick out of the water throughout the model, extending from the earthen floor of the bay upward above the water line. These rods make it possible to see water levels throughout the model and to watch how they're affected by particular conditions, such as releases of water from a dam or the drought conditions that we're currently experiencing. The engineers could have prepared a complex Excel spreadsheet, using mathematical calculations alone to model this water system, but this wouldn't suffice. Some things must be seen to be understood. Good predictive models of all types, whether they represent something physical such as the San Francisco Bay or something abstract such as your company's sales, almost always benefit from good visualizations.

Good Models Are Critical

A model must capture the essence of the thing it represents, finding the right balance between too much information and too little. To build an effective model, you must understand the thing being modeled well enough to pick out the important parts and ignore the others, and to represent only the aspects of those important parts that are relevant to the task. If a model is more complicated than the thing it represents, it's a bad model.
If it's so simple that it leaves out information that must be seen and understood, it's a bad model. For purposes of analysis or presentation, viewing and interacting with a good model works better than viewing and interacting with the real thing. The model removes extraneous features and details, making it easy for us to focus only on what pertains to our purpose. For this reason, good instruction manuals often use simple line drawings, rather than photographs, to illustrate how things should be put together or repaired. It would be difficult to pick out the important features of the things you need to interact with from a photograph. Like the real world, photographs are filled with shadows, subtle details are buried in visual complexity, and what you need to see can be hidden behind something else. In this same vein, comic book artist Scott McCloud, a talented artist and thoughtful communicator, explains how the pared-down design of comic book illustrations works as a form of amplification through simplification.

    When we abstract an image through cartooning, we're not so much eliminating details as we are focusing on specific details. By stripping down an image to its essential meaning, an artist can amplify that meaning in a way that realistic art can't.

    Scott McCloud, Understanding Comics, Harper Collins, New York, NY, 1993, p. 30
Most of the applications that I've seen marketed by business intelligence software vendors for predictive analytics allow data to be entered on one end (inputs) and then results (outputs) pop out the other; what goes on in between remains hidden in a black box. Unfortunately, without seeing what goes on in that black box, our brains aren't fully engaged in the process and too much is missed. Predictive analytics are most revealing when they allow us to see how all the variables that contribute either directly or indirectly to the outcomes that concern us relate to those outcomes and to one another. To understand these relationships, we must see them; we must watch how changes in one variable directly cause or indirectly influence changes in the others. For this to happen, predictive models must be displayed visually in a way that allows: (1) our eyes to see the relationships and changes; and (2) our minds to make sense of them.

Designers of good predictive models know how to let computers do what they do well (in this case, the math, which they do quickly and accurately) and how to let people's eyes and minds do what they do well: spot patterns and make sense of them. This level of involvement in the analytical process takes advantage of our brains in a way that throws open the windows to insights that might never be experienced otherwise. We must enable a subtle interplay between human and computer, a dance in which each partner fully displays its strength and grace, without overstepping and diminishing the other's. It's almost impossible to spot and make sense of patterns without pictures. Patterns that remain hidden in a table of numbers, like that Excel spreadsheet the U.S. Army Corps of Engineers could have built to model the San Francisco Bay, are made visible by the right visualization.
In their case the visualization is a 3D physical model, but most models of abstract information (that is, information pertaining to something that isn't physical) work best with simple 2D visualizations that can be viewed and manipulated on a computer. Good software treats our minds with respect. Our brains are capable of abstract reasoning, subtle pattern detection, and interpretation of meaning. Models sell them short when our minds are invited only to enter inputs and then view end results, as if they can't make sense of what lies between. Not only can they make sense of what often hides in the black box, it is only when that information is properly revealed that we can hope to discover what lives beyond the expected.
An Example of an Enlightening Predictive Model

Earlier I spoke of the mystique that often hides analytics behind the curtain, making anyone who isn't a wizard fear to approach. Consistent with this alienating perception is the common fallacy that many things simply can't be measured, should not be measured, or can only be measured using techniques known by a few statistical grand masters. Douglas Hubbard, the author of How to Measure Anything, believes differently. Hubbard makes a clear and convincing argument not only that everything can be measured, and much of it should be, but also that there are means at our disposal that aren't terribly complicated. In his book, Hubbard teaches an entire collection of practical techniques for measuring things that businesspeople often dismiss, at great loss, as intangibles. One use of predictive models that he features in his book involves the assessment of risk. Hubbard bemoans the fact that organizations often fail to measure risk because they assume it can't be done or is too difficult.

    Many organizations employ fairly sophisticated risk analysis methods on particular problems; for example, actuaries in an insurance company define the particulars of an insurance product, statisticians analyze the ratings of a new TV show, and production managers are using simulations to model changes in production methods. But those very same organizations do not routinely apply those same sophisticated risk analysis methods to much bigger decisions with more uncertainty and more potential loss.

    Douglas W. Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business, John Wiley and Sons Inc., Hoboken, NJ, 2007, p. 83

I decided to use one of Hubbard's examples of risk assessment to illustrate how I believe predictive models ought to be designed to fully engage our minds and, as a result, lead us to the best decisions. This particular example is quite simple.
It involves a company that manufactures a product, which management hopes can be manufactured more successfully and at less cost by leasing a particular machine at a cost of $400,000 per year. Here are the variables that must be considered to assess the risk associated with the lease, that is, the risk of not saving enough money as a result of using the machine to equal or exceed the cost of the lease:

Maintenance Savings ($ per unit)
Labor Savings ($ per unit)
Raw Materials Savings ($ per unit)
Production Level (total units manufactured)

We can build a predictive model to assess the risk of the lease and to also determine where efforts should be focused to produce maximum savings. In this model, total annual savings resulting from the use of the machine can be calculated as:

Annual Savings = (Maintenance Savings + Labor Savings + Raw Materials Savings) * Production Level
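Stated as code, the model is nothing more than this formula wrapped in a function. Here is a minimal Python sketch (the function name is mine, and the sample values are the mean estimates that appear later in this paper, not a definitive implementation of anything in JMP):

```python
def annual_savings(maintenance, labor, materials, production):
    """Annual savings (dollars) from leasing the machine:
    total per-unit savings times the number of units produced."""
    return (maintenance + labor + materials) * production

# Mean estimates: $15 + $3 + $6 saved per unit, at 25,000 units
print(annual_savings(15, 3, 6, 25_000))  # 600000
```

A single call like this answers one what-if question; the interesting part, explored below, is what happens when the inputs are ranges rather than single numbers.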
In this example, annual savings must exceed $400,000 or we'll lose money. In the graceful dance between human and computer that we're hoping to perform, as manufacturing experts (or so we imagine) we can contribute useful knowledge to the model. We can draw on our own experience to estimate the savings we can expect to gain related to maintenance, labor, and raw materials. You might argue, and rightfully so, that even an expert can't be expected to predict the actual amount of savings for each of these variables, but an expert can come up with a reliable estimate of the possible range. Hubbard talks a great deal about range estimates such as these.

    One method to express our uncertainty about a number is to think of it as a range of probable values. In statistics, a range that has a particular chance of containing the correct answer is called a confidence interval (CI). A 90% CI is a range that has a 90% chance of containing the correct answer.

    Douglas W. Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business, John Wiley and Sons Inc., Hoboken, NJ, 2007, p. 53

As it turns out, these low-to-high value ranges that we use to express confidence intervals are quite handy for predictive analysis. Using ranges to represent your uncertainty instead of unrealistically precise point values clearly has advantages. When you allow yourself to use ranges and probabilities, you don't really have to assume anything you don't know for a fact. But precise values have the advantage of being simple to add, subtract, multiply, and divide in a spreadsheet. So how do we add, subtract, multiply, and divide in a spreadsheet when we have no exact values, only ranges? Fortunately, a fairly simple trick can be done on any PC: Monte Carlo simulations, which were developed to do exactly that. A Monte Carlo simulation uses a computer to generate a large number of scenarios based on probabilities of inputs.
    For each scenario, a specific value would be randomly generated for each of the unknown variables. Then these specific values would go into a formula to compute an output for that single scenario. This process usually goes on for thousands of scenarios.

    Douglas W. Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business, John Wiley and Sons Inc., Hoboken, NJ, 2007, p. 72

We'll get to Monte Carlo simulations in a moment. For now, let's enter our 90% confidence intervals for each of the variables into a model. To construct and interact with our predictive model, I'm going to use a feature, called the Profiler, in the JMP software product from SAS. To get started, the Profiler allowed me to easily define each of the variables and set their lower and upper limits in the table below.
Figure 1

I named the model Risk Predictor. As you can see, I entered a low value for each variable's confidence interval (including production volume) into the first row, and a high value into the second. These values came directly from Hubbard's book. The low and high values in the Annual Savings column were automatically calculated by the Profiler based on the formula that I described earlier, which I entered into the Profiler while defining this variable. People often try to predict risk by assigning specific values to the independent variables. For instance, we could use the following middle values (the means in this case) of our confidence intervals:

Independent Variable                          Low Value   High Value     Mean
Maintenance Savings (dollars per unit)               10           20       15
Labor Savings (dollars per unit)                     -2            8        3
Raw Materials Savings (dollars per unit)              3            9        6
Production Level (total units manufactured)      15,000       35,000   25,000

Based on these specific values, we could calculate annual savings as follows:

Annual Savings = ($15 + $3 + $6) * 25,000 = $600,000

This method is simple, but it doesn't tell us how likely it is that our annual savings would fall below the $400,000 cost of the lease. In other words, this method doesn't really assess probable risk. To do this, we'll need to use our confidence intervals. After entering the variables into the Profiler table, I asked JMP to graph the variables, a single, simple step, which generated the display below:
Figure 2

This visual model consists of four graphs, one for each independent variable, and a quantitative scale on the left, which measures annual savings in thousands of dollars. Consider the maintenance savings graph on the left for a moment. The black line that slopes upward from left to right shows how increases in maintenance savings will affect annual savings. The vertical dashed red line is currently set to a maintenance savings value of $15, and the horizontal dashed red line indicates that for this amount of maintenance savings, along with the current values set for each of the other independent variables in their respective graphs, projected annual savings equal $600,000.

We still haven't gotten around to assessing risk. So far, the Profiler has given us a way to visually explore the effect on annual savings of various values for any or all of the independent variables. Let's use this model to explore. Before changing any of the values to see the results, let's see what we can understand about these four variables and their relationships to annual savings without changing a thing. By looking at the black lines in the four graphs above, which show the slope of change to annual savings associated with each of these variables, we can see that changes to some variables have a greater effect than others on annual savings, based on the slopes of the lines. The variable with the greatest upward slope is production volume, which isn't a surprise. Maintenance and labor savings both appear to have roughly the same mid-level effect on annual savings, and raw materials savings contributes the least. By moving the red line in the production volume graph from 25,000 to 26,000, increasing volume by 1,000 units, without changing anything else, we could increase annual savings from $600,000 to $624,000, as you can see below.

Figure 3
Assuming that the leased machine would enable us to manufacture more products than we currently produce without it, we can see that our greatest opportunity to maximize savings would come from increases in manufacturing volume. What if we achieved only the lowest predicted dollar savings associated with each of the other three variables? Could we then make up for the cost of the lease based on increases to manufacturing volumes alone? Let's take a look. I moved the vertical red lines for maintenance, labor, and raw materials savings to the lowest value in their ranges and then moved the vertical red line for production volume to 35,000, the highest value in the range, and here's the result:

Figure 4

As you can see, we could only expect this scenario to produce around $385,000 in annual savings, which is below our lease cost of $400,000. We probably couldn't break even by focusing on production volume increases alone. We should focus on some combination of maintenance and labor savings along with increases in production volumes to achieve high annual savings. For instance, by increasing maintenance savings from $15 to $16 a unit and labor savings from $3 to $4 a unit and leaving production volume at 25,000, we could likely increase annual savings from $600,000 to $650,000, as shown below.

Figure 5
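The scenarios explored above are simple arithmetic on the savings formula, easy to verify in a few lines of Python. This is a sketch, not JMP output; the function name is mine, and the low-end values ($10, -$2, and $3 per unit) are the bottoms of the 90% confidence intervals in Hubbard's example:

```python
def annual_savings(maintenance, labor, materials, production):
    """Total per-unit savings times units produced (dollars)."""
    return (maintenance + labor + materials) * production

# Lowest per-unit savings combined with maximum production volume:
print(annual_savings(10, -2, 3, 35_000))  # 385000, below the $400,000 lease

# Modest per-unit improvements at the original 25,000-unit volume:
print(annual_savings(16, 4, 6, 25_000))   # 650000

# At $26 per unit, each additional 1,000 units adds $26,000:
print(annual_savings(16, 4, 6, 26_000))   # 676000
```

Working through the scenarios this way confirms what the Profiler shows visually: volume increases alone can't cover the lease, but modest per-unit gains combined with volume increases can.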
From here, each incremental increase of 1,000 units in production volume results in an annual savings increase of $26,000. For instance, in the following example, you can see that a single increase of 1,000 units in production volume raised annual savings from $650,000 to $676,000.

Figure 6

This simple visual display has given us a means to perform what-if analysis in a manner that reveals how the variables relate to and affect one another. The result is a better understanding of how the leased machine could affect savings than we would have gained by entering input values and seeing the resulting annual savings amount, while the relationship between the variables remained hidden in a black box. This better understanding helped us to develop a strategy for maximizing savings that would have taken a great deal of trial and error to piece together using the black box approach.

The best way (and often the only way) to see how the parts of something relate to one another is to place them all simultaneously before our eyes, so we can observe them all at once and watch how a change in one results in changes to the others. This is because we can only compare things that reside simultaneously in working memory. Visual working memory is limited to about three chunks of information at any one time. Visual information gets into working memory in three ways: (1) from the external world through our eyes, (2) from long-term memory, where memory chunks are stored for later retrieval when needed, and (3) from our imagination, where we construct images from pure thought. This means that when we're comparing quantitative values to determine how they relate, as we've been doing with our Risk Predictor model, the easiest way to support this process is to position the variables simultaneously in front of our eyes on a computer screen in the form of a graphical model. This allows us to swap the patterns that we see and wish to compare in and out of working memory rapidly.
In this manner, we can also compare patterns that we're currently seeing with our eyes to relevant patterns that we've seen in the past and memorized (that is, stored in long-term memory) or to patterns that we're imagining in the moment.
Let's now extend our predictive queries to the assessment of risk. For this, we're going to let the computer do what it does well: perform extremely fast and accurate calculations. Specifically, we're going to ask it to perform what's called a Monte Carlo simulation. If you've encountered this term before and dismissed it as something too complicated to consider (which I did for years), I assure you, there's nothing to fear. Although we might not be able to write the software algorithms that Monte Carlo simulations use, the concept is fairly simple. The Wikipedia entry for Monte Carlo method includes a sentence that's quite pertinent to our situation.

    Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs, such as the calculation of risk in business.

    Wikipedia entry for Monte Carlo method, as of December 15, 2008

What we've done so far is what-if predictive analysis. Using models similar to the one we've been using so far, analysts often produce three different what-if scenarios: best case, worst case, and most likely case. Based on the 90% confidence intervals that we've assigned to each independent variable, we could produce the following:

Scenario           Annual Savings ($)   Gain/Loss on Lease ($)
Best Case                   1,295,000                  895,000
Most Likely Case              600,000                  200,000
Worst Case                    165,000                (235,000)

This expresses risk as a range of possibilities: a full $1,130,000 of uncertainty, extending from a gain of $895,000 on the high end to a loss of $235,000 on the low end. So what is our risk of losing money exactly? In cases like this, what-if analysis is informative, but it doesn't give us what we really need: a measure of the probability that we'll lose money. In statistical terms, what-if analysis is deterministic. This means that it's based on single-value estimates for each of the variables. What we need is the type of model that statisticians call stochastic. I know what you're thinking. Why do statisticians have to use such cryptic terms?
Just as in any field of study, statisticians coin and then use terms that sound like gobbledygook to those outside the field, in this case gobbledygook that alienates most non-statisticians. Let's be honest: statisticians aren't alone in the practice. Experts in any field do this, not to be obscure, but simply because they need a precise term to describe something relevant to their work, which to them is no more obscure than any other term. Sometimes they forget, however, that others don't understand their unique language and end up unintentionally confusing their audience when attempting to explain something. Back to stochastic. I run across this term in my work every once in a while, but I managed to ignore it out of fear until I took the time to look it up. What I found was fairly simple to understand.
    Stochastic means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. The random variation is usually based on fluctuations observed in historical data for a selected period using standard time-series techniques. Distributions of potential outcomes are derived from a large number of simulations (stochastic projections) which reflect the random variation in the input(s).

    Wikipedia entry for stochastic process, as of December 15, 2008

In other words, a predictive model that is stochastic is one that uses a set of random values within a prescribed range, such as our 90% confidence intervals, for the independent variables in an attempt to simulate a distribution of possible outcomes. Let's go back to Wikipedia once again to tie stochastic models and Monte Carlo simulation together:

    Monte Carlo simulation considers random sampling of probability distribution functions as model inputs to produce hundreds or thousands of possible outcomes instead of a few discrete scenarios. The results provide probabilities of different outcomes occurring. For example, a comparison of a spreadsheet cost construction model run using traditional what-if scenarios, and then run again with Monte Carlo simulation, shows that the Monte Carlo analysis has a narrower range than the what-if analysis. This is because the what-if analysis gives equal weight to all scenarios.

    Wikipedia entry for Monte Carlo method, as of December 15, 2008

A Monte Carlo simulation involves using random numbers (that is, a stochastic process) and probability (such as a normal, bell-shaped distribution) to solve problems. Enough definition; let's get back to our example. Using the predictive model that we already built using the JMP Profiler, let's now enhance it to run Monte Carlo simulations.
After setting it up, which I'll explain in a moment, I asked the Profiler to use the Monte Carlo method to run 5,000 random simulations, which produced the following display:

Figure 7
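Outside of JMP, the same experiment can be sketched in a few lines of Python. This is a minimal illustration of the Monte Carlo method, not a reproduction of the Profiler's algorithm; it assumes each input is normally distributed with the 90% confidence intervals from Hubbard's example, and uses the fact that a normal distribution's 90% CI spans about 3.29 standard deviations:

```python
import random

LEASE = 400_000   # annual cost of the machine lease, in dollars
TRIALS = 5_000    # number of random scenarios to simulate
Z90 = 3.29        # width of a normal 90% CI, in standard deviations

# 90% confidence intervals (low, high) for each input
CIS = {
    "maintenance": (10, 20),         # $ saved per unit
    "labor": (-2, 8),                # $ saved per unit
    "materials": (3, 9),             # $ saved per unit
    "production": (15_000, 35_000),  # units manufactured
}

def draw(low, high):
    """Sample a normal variable whose 90% CI is (low, high)."""
    return random.gauss((low + high) / 2, (high - low) / Z90)

random.seed(42)  # fixed seed so the run is repeatable
losses = 0
for _ in range(TRIALS):
    m, l, r = (draw(*CIS[k]) for k in ("maintenance", "labor", "materials"))
    p = draw(*CIS["production"])
    if (m + l + r) * p < LEASE:  # savings fell short of the lease cost
        losses += 1

print(f"Scenarios losing money: {losses} of {TRIALS} "
      f"(roughly {losses / TRIALS:.0%} risk)")
```

Each trial draws one plausible value for every input, runs it through the savings formula, and counts how often savings fall below the $400,000 lease cost; the fraction of losing scenarios is the estimate of the probability of losing money that deterministic what-if analysis couldn't give us.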
More informationA First Encounter with Machine Learning. Max Welling Donald Bren School of Information and Computer Science University of California Irvine
A First Encounter with Machine Learning Max Welling Donald Bren School of Information and Computer Science University of California Irvine November 4, 2011 2 Contents Preface Learning and Intuition iii
More informationHow To Write a Good PRD. Martin Cagan Silicon Valley Product Group
How To Write a Good PRD Martin Cagan Silicon Valley Product Group HOW TO WRITE A GOOD PRD Martin Cagan, Silicon Valley Product Group OVERVIEW The PRD describes the product your company will build. It
More informationRevenue Management. Second Edition
Revenue Management Second Edition A Technology Primer Developed by the American Hotel & Lodging Association s Technology and EBusiness Committee Funded by the American Hotel & Lodging Educational Foundation
More informationBase Tutorial: From Newbie to Advocate in a one, two... three!
Base Tutorial: From Newbie to Advocate in a one, two... three! BASE TUTORIAL: From Newbie to Advocate in a one, two... three! Stepbystep guide to producing fairly sophisticated database applications
More informationdeveloper.* The Independent Magazine for Software Professionals Code as Design: Three Essays by Jack W. Reeves
developer.* The Independent Magazine for Software Professionals Code as Design: Three Essays by Jack W. Reeves Introduction The following essays by Jack W. Reeves offer three perspectives on a single theme,
More informationTeaching Smart People How to Learn
Teaching Smart People How to Learn Chris Argyris 4 Chris Argyris James Bryant Conant Professor Harvard Business School 1991 Harvard Business Review. Distributed by The New York Times Special Features/Syndication
More informationCHAPTER 1 INTRODUCTION TO MASTERY LEARNING Introduction by Carla Ford 1 Perspective on Training: Mastery Learning in the Elementary School 2
Table of Contents THE MASTERY LEARNING MANUAL CHAPTER 1 INTRODUCTION TO MASTERY LEARNING Introduction by Carla Ford 1 Perspective on Training: Mastery Learning in the Elementary School 2 CHAPTER 2 AN OVERVIEW
More informationBuilding a Great Business Plan for Your New Law Practice
Building a Great Business Plan for Your New Law Practice by Freya Allen Shoffner, Esq. Shoffner & Associates, Boston, MA January 22, 2009 Building a Great Business Plan for Your New Law Practice by Freya
More informationIs that paper really due today? : differences in firstgeneration and traditional college students understandings of faculty expectations
DOI 10.1007/s1073400790655 Is that paper really due today? : differences in firstgeneration and traditional college students understandings of faculty expectations Peter J. Collier Æ David L. Morgan
More informationMaking Smart IT Choices
Making Smart IT Choices Understanding Value and Risk in Government IT Investments Sharon S. Dawes Theresa A. Pardo Stephanie Simon Anthony M. Cresswell Mark F. LaVigne David F. Andersen Peter A. Bloniarz
More informationTHE FIELD GUIDE. to DATA S CIENCE COPYRIGHT 2013 BOOZ ALLEN HAMILTON INC. ALL RIGHTS RESERVED.
THE FIELD GUIDE to DATA S CIENCE COPYRIGHT 2013 BOOZ ALLEN HAMILTON INC. ALL RIGHTS RESERVED. FOREWORD Every aspect of our lives, from lifesaving disease treatments, to national security, to economic
More informationPredictive Modeling for Life Insurance Ways Life Insurers Can Participate in the Business Analytics Revolution. Prepared by
Predictive Modeling for Life Insurance Ways Life Insurers Can Participate in the Business Analytics Revolution Prepared by Mike Batty, FSA, CERA Arun Tripathi, Ph.D. Alice Kroll, FSA, MAAA Chengsheng
More informationChapter 17. Network Effects
From the book Networks, Crowds, and Markets: Reasoning about a Highly Connected World. By David Easley and Jon Kleinberg. Cambridge University Press, 2010. Complete preprint online at http://www.cs.cornell.edu/home/kleinber/networksbook/
More informationWriting and using good learning outcomes Written by David Baume
Writing and using good learning outcomes Written by David Baume 2 www.leedsmet.ac.uk Preface Our Assessment, Learning and Teaching strategy reinforces the University s commitment to put students at the
More informationA GENTLE INTRODUCTION TO SOAR,
A GENTLE INTRODUCTION TO SOAR, AN ARCHITECTURE FOR HUMAN COGNITION: 2006 UPDATE JILL FAIN LEHMAN, JOHN LAIRD, PAUL ROSENBLOOM 1. INTRODUCTION Many intellectual disciplines contribute to the field of cognitive
More informationELEMENTARY & MIDDLE SCHOOL MATHEMATICS
ELEMENTARY & MIDDLE SCHOOL MATHEMATICS Teaching Developmentally, 5/E 2003 John A.Van de Walle 020538689X Bookstore ISBN Visit www.ablongman.com/replocator to contact your local Allyn & Bacon/Longman
More informationBasic Marketing Research: Volume 1
Basic Marketing Research: Volume 1 Handbook for Research Professionals Official Training Guide from Qualtrics Scott M. Smith Gerald S. Albaum Copyright 2012, Qualtrics Labs, Inc. ISBN: 9780984932818
More information