A Comparative Test of Web Accessibility Evaluation Methods
Giorgio Brajnik
Dipartimento di Matematica e Informatica, Università di Udine, Via delle Scienze, Udine, Italy
[email protected]

ABSTRACT
Accessibility auditors have to choose a method when evaluating accessibility: expert review (a.k.a. conformance testing), user testing, subjective evaluations and barrier walkthrough are some possibilities. However, little is known to date about their relative strengths and weaknesses. Furthermore, what happened for usability evaluation methods is likely to repeat for accessibility: there is uncertainty not only about the pros and cons of methods, but also about the criteria to be used to compare them and the metrics to measure these criteria. After a quick review and description of methods, the paper illustrates a comparative test of two web accessibility evaluation methods: conformance testing and barrier walkthrough. The comparison aims at determining the merits of barrier walkthrough, using conformance testing as a control condition. A comparison framework is outlined, followed by the description of a laboratory experiment with 12 subjects (novice accessibility evaluators), and its results. Significant differences were found in terms of correctness, one of the several metrics used to compare the methods. Reliability also appears to be different.

Categories and Subject Descriptors: H. Information Systems; H.5 Information Interfaces and Presentation (I.7); H.5.2 User Interfaces (D.2.2, H.1.2, I.3.6).
General Terms: Human Factors, Measurement, Reliability.
Keywords: Web Accessibility, Accessibility Evaluation Method, Quality Assessment.

1. INTRODUCTION
Web accessibility can be evaluated by means of different methods, including standards review, user testing, subjective assessments and barrier walkthrough [10, 5, 2, 7]; yet little is known about their properties.
Accessibility evaluation methods (AEMs) can differ in terms of their effectiveness, efficiency and usefulness, as hinted by [15, 1, 3]. If proper information about AEMs were available, then practitioners (auditors, teams involved in quality assessment, developers) could: 1) optimize resources expended in assessing and evaluating accessibility, like time, effort, skilled people, facilities; 2) aim at predictable and controllable quality of results produced by AEMs; 3) support sustainable accessibility processes; and 4) standardize the content of accessibility reports.

Several studies of usability evaluation methods have shown that user testing methods may fail to yield consistent results when performed by different evaluators [17, 12, 11], and that inspection-based methods aren't free of shortcomings either [19, 22, 4, 9]. Although accessibility and usability are two different properties, there is no reason to assume that the kind of uncertainty and mishaps that apply to usability evaluation methods should not apply to AEMs as well. The most likely conclusion is that different AEMs lead to different kinds of results revealing different levels of quality, require different levels of resources, and differ in their applicability. To date there are few experimental studies on AEMs that could confirm or contradict these expected properties.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ASSETS '08, October 13-15, 2008, Halifax, Nova Scotia, Canada. Copyright 2008 ACM /08/10...$5.00.
The main purpose of this paper is to describe the results of an experiment aimed at comparing the quality of two inspection-based AEMs: standards review (henceforth called conformance review, CR) and barrier walkthrough (BW), the former based on the Italian requirements for web accessibility [13] and the latter being a method proposed by the author [2] and based on heuristics walkthrough [19]. A secondary purpose is to outline a quality framework for AEMs and to describe a first benchmark for the quality of AEMs. In [1] the BW method was investigated and some preliminary, inconclusive results were obtained. In this paper we provide more significant results, based on experimental data obtained from a more focused and specific experiment. Results show that, with respect to novice evaluators, barrier walkthrough compared to conformance review markedly improves the number of problems that are correctly detected, as well as the correctness rate; no significant difference was found for sensitivity, and reliability decreases a little.

2. ACCESSIBILITY EVALUATION METHODS
Conformance review, also called expert, standards, or guidelines review, or manual inspection [10, 21], is by far the most widely used AEM [6]. It is based on checking whether a page satisfies a checklist of criteria. It is an analytic method, based on evaluators' opinions, producing failure modes (in
the form of violated checkpoints), possibly with defects and solutions. Conformance reviews are dependent on the chosen checklist, which can range from standards issued by international bodies (like the Web Content Accessibility Guidelines, WCAG, published by the W3C), to national or state-level guidelines, to individual organizations' guidelines (like those issued by IBM, SUN or SAP, for example).

Other methods for evaluating accessibility include screening techniques, informal empirical techniques based on using an interface in a way that some sensory, motor or cognitive capabilities of the user are artificially reduced [10]; subjective assessments, based on a panel of users instructed to explore and use a given website by themselves, and later report feedback on what worked for them and what did not; and user testing, based on informal empirical usability tests with disabled people adopting the think-aloud protocol [10, 5]. One final method is barrier walkthrough, which is discussed below in Section 2.2.

2.1 Benefits and drawbacks of AEMs
Conformance review generally leads to the following benefits: the ability to identify a rather large range of diverse problems for a diverse audience (albeit this depends on the quality of the underlying checkpoints). It is relatively cost-effective, especially when coupled with automatic testing tools, and is able to identify the defects¹ underlying the checkpoint violations. The drawbacks are that it requires skillful evaluators, and that it is not likely to distinguish reliably between important and non-important accessibility problems. See [14, 16, 7, 18] for a controversial list of shortcomings of accessibility guidelines and conformance reviews.

Screening techniques are easy to use and very cheap, but also very unsystematic. Sometimes, however, such a method is capable of achieving good levels of sensitivity [16]. The benefits of subjective assessments include its low cost, and its ability to be performed remotely in space and time.
In addition, participants are able to explore the areas of the website that most suit them, which is likely to increase their motivation in using it. However, the drawbacks are important: the method is not systematic, both regarding the pages being tested and the criteria used to evaluate them. In addition, different users with different experience and different attitudes will report very different things about the same page.

Benefits of user testing as an AEM include [15] its capability to accurately identify usability problems that are usually experienced by real users and that have catastrophic consequences. The drawbacks include relatively low efficiency, inability to highlight the defects, problems being missed if predefined scenarios are not well chosen, and failure to identify low-severity problems. In addition, it is rather complicated to set up a user testing session with disabled participants (given their requirements in terms of appropriate assistive technology, including room facilities), and the result of user testing is likely to be a set of usability problems that are general to all the users of the website, rather than specific to disabled ones. In other words, the method is likely to identify a number of true, but irrelevant, problems.¹ Finally, inexperienced test facilitators are likely to introduce noisy results, and in the worst case even invalidate the test. See [18] for a discussion of the relationship between usability and accessibility.

¹ I use the term failure mode to mean the way in which the interaction fails; the defect is the reason for the failure, its cause; the effects are the negative consequences of the failure mode.

2.2 Barrier Walkthrough
A major difference among the methods just previewed is the role that context plays during evaluations [3].
While analytic methods appear to be inexpensive compared to empirical methods, we hypothesize that their validity and reliability depend on how context is considered. The barrier walkthrough method [1, 2] is an accessibility inspection technique where the context of website usage is explicitly considered. An evaluator has to assess a number of predefined barriers, which are interpretations and extensions of well-known accessibility principles; they are linked to user characteristics, user activities, and situation patterns so that appropriate conclusions about user effectiveness, productivity, satisfaction and safety can be drawn, and appropriate severity scores can consequently be derived. The method is rooted in heuristics walkthrough [19], which takes into account the context of use of the website. For BW, context comprises certain user categories (like blind persons), usage scenarios (like using a given screen reader), and user goals (corresponding to use cases, like submitting an IRS form).

An accessibility barrier is any condition that makes it difficult for people to achieve a goal when using the web site through specified assistive technology (see Figure 1 for an example). A barrier is a failure mode of the web site, described in terms of (i) the user category involved, (ii) the type of assistive technology being used, (iii) the goal that is being hindered, (iv) the features of the pages that raise the barrier, and (v) further effects of the barrier.

Figure 1: Example of barrier.
- Barrier: users cannot perceive nor understand the information conveyed by an information-rich image (e.g. a diagram, a histogram).
- Defect: an image that does not have accompanying text (as an alt attribute, as content of the object tag, as running text close to the picture, or as a linked separate page).
- Users affected: blind users of screen readers, users of small devices.
- Consequences: users try to look around for more explanations, spending time and effort; they may not be able to get the information; effectiveness, productivity and satisfaction are severely affected.

Severity of a barrier depends on the context of the analysis (type of user, usage scenario, user goal). The BW method prescribes that severity is graded on a scale (minor, major, critical), and is a function of impact (the degree to which the user goal cannot be achieved within the considered context) and frequency (the number of times the barrier shows up while a user is trying to achieve that goal). Therefore the same type of barrier may be rated with different severities in different contexts; for example, a missing
skip-links link may turn out to be a nuisance for a blind user reading a page that has little preliminary content, while the same defect may show a higher severity within a page that does a server refresh whenever the user interacts with links or select boxes.

Potential barriers to be considered are derived by interpretation of relevant guidelines and principles [7, 20]. A complete list can be found in [2]. A preliminary experimental evaluation of the BW method [1] showed that this method is more effective than conformance review in finding more severe problems and in reducing false positives; however, it is less effective in finding all the possible accessibility problems.

3. QUALITY FACTORS
AEMs can be compared on the basis of a general quality framework that includes such criteria as effectiveness, efficiency and usefulness, applied to the results obtained through application of the method [19, 8, 11, 15, 9]. In particular, in our study we consider the following criteria, some of which are further refined into more specific ones:

Effectiveness: the extent to which the method can be used to yield desired results with appropriate levels of accuracy and completeness. Effectiveness is further refined into:
- Validity: the extent to which the problems detected during an evaluation are also those that show up during real-world use of the system. It is further refined into:
  - Correctness: the percentage of reported problems that are true problems (sometimes called precision or validity).
  - Sensitivity: the percentage of the true problems that are reported (sometimes called recall or thoroughness).
- Reliability: the extent to which independent evaluations produce the same results.

Notice that these three criteria are related but independent. An AEM that yields high correctness may be useless if too few of the actual problems are identified (i.e. if it features too little sensitivity).
Similarly, by itself a high level of sensitivity is not enough unless sufficient correctness is obtained: the AEM may find all the true accessibility problems, but they may be swamped by a large number of false positives. Finally, an unreliable AEM is not desirable, since even if it features high validity, this cannot be exploited in a consistent way.

Efficiency: the amount of resources (time, skills, money, facilities) expended to carry out an evaluation that leads to given levels of effectiveness and usefulness.

Usefulness: the effectiveness and usability of the results produced (with respect to users that have to assess, fix, or otherwise manage the accessibility of a web site).

These quality factors need to be operationalized so that they can actually be measured. In our study, we made them operational through the metrics described below, in Section 4.1. Although usefulness is an important quality factor, it will not be discussed any further, as it falls beyond the scope of this experimental study.

4. EXPERIMENTAL PLAN
The purpose of the experiment is to compare the effectiveness and efficiency of BW and CR (based on the Italian requirements [13]) and see if they differ in any of the criteria outlined above, and in particular whether the results obtained in [1] are confirmed by a more formal experiment. The null hypotheses are that there are no differences in correctness, sensitivity, reliability and efficiency.

Sixteen students of my course (user-centered web design) showed up for the experiment, out of the 35 who were invited. During the course, students were exposed to web accessibility, guidelines, CR and BW for about 15 lecture hours, after which they were asked to analyze given web sites and write corresponding reports. In this within-subjects experiment, each of them was asked to perform two evaluations, one per method, with method order counterbalanced.
Each evaluation was on two preselected pages of two websites (a home page and a student-services page of two Italian universities; websites were also counterbalanced). Subjects were assigned, in order of appearance, to a sequence of the two tasks (CR/BW applied on website A/B, and then vice versa). Each subject was given the task order, a spreadsheet with a checklist of barriers or checkpoints, a workstation with Firefox and the Web Developer toolbar², links to the complete BW description and to the description of the CR requirements, and, for BW, a use case (which was the same for both websites). When performing BW, subjects were asked to identify as many barriers as possible, provided they were relevant to blind users of screen readers and to motor-disabled users, and to rate them as minor (1), major (2), or critical (3). When performing CR, they were asked to mark violated requirements. When they completed an evaluation, they were instructed to write down the elapsed time on the spreadsheet. Twelve of the sixteen students worked in a university lab; four worked at home. A judge (myself) identified all barriers and violated requirements on all four pages, and rated the severity of the barriers.

4.1 Independent and dependent variables
Independent variables for this experiment include the method used (CR/BW), the tested website (A/B) and the order of application of a method (1st or 2nd). Collected data include the barriers reported by an evaluation, their severity (on the 1-2-3 scale), the violated requirements, whose severity is conventionally set to 1, and the barriers rated by the judge (with severity ranging in 0-3; 0 was given to barriers that the judge considered to be false positives, i.e. erroneously identified by the subject).
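The counterbalanced assignment just described (each subject performs both methods, with method and website pairing rotated across subjects taken in order of appearance) can be sketched as follows. This is an illustrative sketch, not the paper's actual procedure; the exact rotation of the four task sequences and the function name are assumptions:

```python
from itertools import cycle

def assign_orders(subjects):
    # Counterbalanced within-subjects assignment: every subject performs
    # both methods (CR and BW), and both the method order and the website
    # (A/B) pairing are rotated across subjects in arrival order.
    orders = [("CR/A", "BW/B"), ("BW/B", "CR/A"),
              ("CR/B", "BW/A"), ("BW/A", "CR/B")]
    return {s: o for s, o in zip(subjects, cycle(orders))}
```

With 16 subjects such a rotation yields four subjects per task sequence, balancing both method order and website order against learning effects.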
Appropriate aggregation of the data produced by a single evaluation leads to the following dependent variables:

Correctness: C = |IDENTIFIED ∩ TRUE| / |IDENTIFIED|, that is: among the IDENTIFIED barriers/requirements that were given a positive severity by the subject, the proportion that received a positive severity also by the judge.

Sensitivity: S = |IDENTIFIED ∩ TRUE| / |TRUE|, that is: among the TRUE barriers/requirements that were given a positive severity by the judge, the proportion that received a positive severity also by the subject.

F-measure: a combination of correctness C and sensitivity S in the range [0, 1]: F = CS / (αC + (1 − α)S); by setting α = 0.5 we get F = 2CS / (C + S), which is what was used in this study. F is a monotonic and symmetric function of both arguments, and is normally used to represent a balanced combination of correctness and sensitivity.

Reliability: following [19], reliability is defined as max{0, 1 − sd/M}, where sd is the standard deviation and M is the mean of the number of correctly identified barriers/requirements (over a set of evaluations).

Efficiency: simply represented by the elapsed time in minutes during an evaluation.

² Version 1.1.5, available from work/web-developer.

5. FINDINGS
Results of four subjects were excluded because they completed only one of the assigned tasks. These four subjects did not include those who worked at home. The remaining 12 subjects identified 92 checkpoint violations using CR on website A and 69 on B, and 102 barriers on A and 124 on B when using BW. This is a remarkable difference between the methods, probably due to the higher resolution of the BW method, which suggests potential barriers that are more finely tuned and less general than generic checkpoints. Table 1 shows the contingency table of the differences in ratings of checkpoints and barriers by subjects and by the judge. Diagonal cells in both tables represent the number of times there was agreement; cells above the diagonal represent the number of times subjects over-rated severity, and vice versa for cells below the diagonal. Table 2 presents the same information as proportions.
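The dependent variables defined in Section 4.1 are straightforward to compute once the severity ratings are available. A minimal sketch, assuming ratings are kept as dicts mapping a problem identifier to the assigned severity (0 marking a false positive); the function names and data representation are assumptions, not part of the original study:

```python
from statistics import mean, stdev

def correctness(subject, judge):
    # Among problems the subject rated with positive severity (IDENTIFIED),
    # the proportion also rated positive by the judge (TRUE).
    identified = [p for p, s in subject.items() if s > 0]
    if not identified:
        return 0.0
    return sum(1 for p in identified if judge.get(p, 0) > 0) / len(identified)

def sensitivity(subject, judge):
    # Among problems the judge rated with positive severity (TRUE),
    # the proportion also rated positive by the subject.
    true_problems = [p for p, s in judge.items() if s > 0]
    if not true_problems:
        return 0.0
    return sum(1 for p in true_problems if subject.get(p, 0) > 0) / len(true_problems)

def f_measure(c, s, alpha=0.5):
    # F = CS / (alpha*C + (1 - alpha)*S); alpha = 0.5 gives F = 2CS/(C + S).
    denom = alpha * c + (1 - alpha) * s
    return c * s / denom if denom else 0.0

def reliability(correct_counts):
    # max{0, 1 - sd/M} over the numbers of correctly identified problems
    # obtained from a set of independent evaluations.
    m = mean(correct_counts)
    return max(0.0, 1 - stdev(correct_counts) / m) if m else 0.0
```

Note that correctness and sensitivity are per-evaluation quantities, whereas reliability aggregates over the whole set of evaluations of a method.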
For CR the numbers of over-rated and under-rated checkpoint violations are roughly the same and close to 73, over a total of 526 judgments, 381 of which were correctly rated; for BW the number of over/under-rated barriers is larger: 790 barriers were correctly rated, 118 were over-rated and 124 were under-rated, over a total of 1032.

Table 1: Number of checkpoints (left) and barriers (right) rated by the judge (j.sev) and by subjects (sev). There were no barriers with severity 3 (according to the judge).

Table 2: Same data as Table 1, expressed as row proportions.

Table 3: Means and standard deviations for number of correct ratings N, correctness C, sensitivity S, F-measure F and time (min:sec).

These differences can be better appreciated if we consider that the BW method requires an additional step after the evaluator has identified a barrier, i.e. severity assignment. Such a step is known to be highly subjective for usability evaluation methods [11]. Even though the BW method is well structured in terms of its severity assignment procedure, the actual severity of a barrier depends a great deal on the experience of the user: with assistive technology, with the browser, with the WWW, with the website domain and with the actual website. As a consequence, unless the scenario adopted for the evaluation is extremely specific, the margin for subjectivity in assigning severity scores remains quite high.

Figure 2 shows the actual data for correctness, sensitivity, F-measure and time, Figure 3 shows how the data are distributed, and Table 3 gives mean values and standard deviations of the number of correctly rated barriers/checkpoints, correctness, sensitivity, F-measure and time. BW leads to a slight improvement in correctness, sensitivity and (consequently) F-measure: the number of evaluations where BW scores better than CR tends to be higher than the other way around.
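The agreement figures summarized in Tables 1-2 amount to comparing each subject rating against the judge's. A sketch of the tallying, under the assumption that ratings come as (judge, subject) severity pairs:

```python
def rating_agreement(pairs):
    # pairs: (judge_severity, subject_severity) for each rated problem.
    # Counts exact agreements (diagonal of the contingency table),
    # over-ratings (subject above judge, cells above the diagonal)
    # and under-ratings (cells below the diagonal).
    agree = sum(1 for j, s in pairs if s == j)
    over = sum(1 for j, s in pairs if s > j)
    under = sum(1 for j, s in pairs if s < j)
    total = len(pairs)
    return {"agree": agree, "over": over, "under": under,
            "proportions": tuple(c / total for c in (agree, over, under))}
```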
Even though the mean completion time for CR is higher than that for BW, CR evaluations appear to be faster than BW in more cases. The boxplots and Table 3 show that BW leads to a higher variability of all indexes, and, with the exception of time, an improvement in the median and mean. However, if we take a closer look at the F-measure (taken as a representative index of the validity of each evaluation), we can see that for BW the two quartiles are 0.50 and 0.79, whereas for CR they are much closer to each other, the lower being 0.44. Therefore, the increase in variance is significant.

Figure 4 shows the data used to compute reliability, i.e. the number of correctly identified barriers and violated checkpoints. These values are computed by considering those cases where both the judge and the subject rated the barrier/checkpoint with severity 0 (not an accessibility problem) and those where both rated it with severity greater than 0, not necessarily the same. In other words, we did not consider disagreements on severity when it was greater than 0. Although BW shows a higher number of correctly identified barriers/checkpoints (M = 13.7 compared to 7.3), it is readily seen that this number changes a lot from evaluation to evaluation. Therefore, according to the definition we gave earlier, the reliability of BW is 0.62, which is smaller than the reliability of CR, at 0.69. A reliability of 0.62 means that the
standard deviation of the number of correct ratings is 0.38 times its mean value. This is not an unexpected result, again due to the fact that BW requires additional interpretation (i.e. subjective) steps to be carried out by the evaluators, namely severity assignment. The positive finding is that the worsening of reliability due to such an additional step is relatively small (reliability worsened by 10%).

Figure 2: Barplots of correctness, sensitivity, F-measure and completion time.

Figure 3: Boxplots of correctness, sensitivity, F-measure and completion time.

In terms of the number of correctly identified violated checkpoints and barriers, analysis of variance (repeated-measures ANOVA) shows a significant effect of the method used (df = 1, F = 20.56, p < 0.001) and no other effects or interactions; see Figure 4. A corresponding planned comparison with a paired t-test highlights a significant difference (M_CR = 7.33, M_BW = 13.7, t = 4.65, df = 11, p < 0.001, two-tailed), with a considerable effect size of the difference, d = 0.99; the 95% confidence interval around the difference in means is [3.34, 9.32], which corresponds to an increase between 45% and 127%. Although such a difference depends on the number of potential checkpoints and barriers, it nevertheless implies that when using BW evaluators can identify a larger number of correctly identified problems, and therefore produce more useful reports.

Analysis of variance showed no significant effect (with α = 0.05) on correctness, sensitivity and F-measure of task order and of website. The only other significant effects found were of the method being used with respect to correctness (p = 0.039) and of task order with respect to completion time (p = 0.011); see Tables 5 and 6. A paired two-tailed t-test on correctness yields t = 2.98, df = 11, p < 0.05, M_CR = 0.53, M_BW = 0.72, an effect size d = 0.77 and a confidence interval around the difference of the means of [0.05, 0.31]. This corresponds to an increase of correctness between 9% and 60% when using BW. The nonparametric Wilcoxon test yields V = 69, p = 0.016, confirming the significance of this result.
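The planned comparisons above are ordinary paired t-tests. A self-contained sketch of the underlying computations (the t statistic, Cohen's d on the paired differences, and the confidence interval given a tabulated critical t value); function names and data are illustrative assumptions:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(xs, ys):
    # Paired t statistic on per-subject differences, plus Cohen's d
    # (mean difference divided by the sd of the differences).
    diffs = [y - x for x, y in zip(xs, ys)]
    m, sd, n = mean(diffs), stdev(diffs), len(diffs)
    t = m / (sd / sqrt(n))
    d = m / sd
    return t, d, m, sd, n

def ci95(m, sd, n, t_crit):
    # 95% CI around the mean difference; t_crit is the two-tailed critical
    # value for df = n - 1 (about 2.201 for df = 11), taken from a t table.
    half = t_crit * sd / sqrt(n)
    return m - half, m + half
```

Applied to the per-subject counts of correctly identified problems under CR and BW, this kind of computation is what produces the reported t = 4.65, d = 0.99 and the [3.34, 9.32] interval.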
Sensitivity, as predicted by the ANOVA, showed no significant difference. As a consequence, no difference was found for the F-measure either.

Table 4: RM ANOVA results for the number of correct ratings with respect to website, method and task order; all factors are within-subjects. The first 4 rows represent within-subjects variance.

Figure 4: Number of correctly identified barriers/checkpoints (with mean values), and reliability of the two methods, respectively 0.62 for BW and 0.69 for CR.

No significant correlation was found between completion time and correctness or sensitivity: we tried with data transformations (like considering log(time)), Pearson linear and Spearman rank correlation. The only significant and moderate correlation that we found was between correctness and sensitivity when using BW: Pearson's correlation test gave t = 3.863, df = 10, p ≈ 0.003, r = 0.77. When
considering the data for CR, the same test gave results close to significance and with a smaller correlation: t = 2.069, df = 10, p ≈ 0.065, r = 0.55. Figure 5 shows the scatter plots of correctness against sensitivity. If such a difference in correlation were confirmed by additional experiments, then the consequence could be that when using BW increases of correctness (which is easier to verify) are accompanied by increases of sensitivity (which is much more difficult to verify).

Table 5: RM ANOVA results for correctness with respect to website, method and task order; all factors are within-subjects. The first 4 rows represent within-subjects variance.

Table 6: RM ANOVA results for completion time with respect to method, website and task order; factors are within-subjects. The first 4 rows represent within-subjects variance.

Figure 5: Scatter plot, separately for the two methods, of correctness vs. sensitivity values.

6. CONCLUSIONS
The paper shows how accessibility evaluation methods can be compared within a framework based on sound and measurable criteria. Stronger results could have been achieved by ameliorating the experimental plan so that a few disturbance effects would be eliminated:

1. The drop in attention that subjects paid to the second evaluation task, probably due to the longer-than-predicted time required to perform each evaluation.

2. The fact that the tools used by the subjects were particularly well geared towards CR, and no tool was/is available to support BW at the same level of usability.

3. While the detailed description of barriers was in English, checkpoint formulation and explanation was in Italian, the mother tongue of all the subjects.

4.
A quick preliminary review of barriers and checkpoints could have improved the students' ability to correctly identify them.

Regarding the comparison between CR and BW, the main conclusions that can be derived from the experimental data are that, with respect to novice evaluators:

- In terms of reliability, BW scores 10% worse than CR, most probably due to the additional interpretation steps required by BW, which contribute to the higher standard deviation in the number of correctly rated problems. However, BW achieved a reliability of 0.62, which is reasonably high.

- Some indexes, namely sensitivity, F-measure and completion time, do not show any generalizable effect. However, it is likely that with a larger sample of evaluators significant results could be achieved, since values for BW are systematically higher than for CR.

- When using BW, subjects were able to identify a larger number of correctly judged barriers (an increase between 45% and 127%). This difference can be safely generalized beyond the particular pool of subjects we used in the study. Therefore BW leads to more useful results.

- Usage of BW improves correctness by at least 0.05 and as much as 0.31 (corresponding to an increase between 9% and 60% over CR), which can be quite a marked effect.
With more experienced evaluators it is likely that reliability, sensitivity, correctness and F-measure would improve when using BW, since more experience would probably lead to reduced variance.

7. REFERENCES
[1] G. Brajnik. Web accessibility testing: when the method is the culprit. In K. Miesenberger, J. Klaus, W. Zagler, and A. Karshmer, editors, ICCHP 2006, 10th International Conference on Computers Helping People with Special Needs, Lecture Notes in Computer Science 4061, Linz, Austria, July 2006. Springer Verlag.
[2] G. Brajnik. Web accessibility testing with barriers walkthrough. Visited May 2008.
[3] G. Brajnik. Beyond conformance: the role of accessibility evaluation methods. In 2nd International Workshop on Web Usability and Accessibility (IWWUA 2008), Auckland, New Zealand, Sept. 2008. Keynote speech.
[4] G. Cockton and A. Woolrych. Understanding inspection methods: lessons from an assessment of heuristic evaluation. In A. Blandford and J. Vanderdonckt, editors, People & Computers XV. Springer-Verlag, 2001.
[5] K. Coyne and J. Nielsen. How to conduct usability evaluations for accessibility: methodology guidelines for testing websites and intranets with users who use assistive technology. reports/accessibility/testing, Nielsen Norman Group, Oct. 2001.
[6] A. Dey. Accessibility evaluation practices: survey results. accessibility-evaluation-practices.html. Visited May 2008.
[7] DRC. Formal investigation report: web accessibility. Disability Rights Commission, publicationsandreports/report.asp, April 2004. Visited Jan. 2008.
[8] W. Gray and M. Salzman. Damaged merchandise: a review of experiments that compare usability evaluation methods. Human-Computer Interaction, 13(3):203–261, 1998.
[9] H. R. Hartson, T. S. Andre, and R. C. Williges. Criteria for evaluating usability evaluation methods. Int. Journal of Human-Computer Interaction, 15(1).
[10] S. Henry and M. Grossnickle. Just Ask: Accessibility in the User-Centered Design Process. Georgia Tech Research Corporation, Atlanta, Georgia, USA. On-line book.
[11] M. Hertzum and N. Jacobsen. The evaluator effect: a chilling fact about usability evaluation methods. Int. Journal of Human-Computer Interaction, 13(4):421–443, 2001.
[12] M. Hertzum, N. Jacobsen, and R. Molich. Usability inspections by groups of specialists: perceived agreement in spite of disparate observations. In CHI 2002 Extended Abstracts. ACM Press, 2002.
[13] Italian Government. Requisiti tecnici e i diversi livelli per l'accessibilità agli strumenti informatici. July 2005. G.U. n. 183, 8/8/2005.
[14] B. Kelly, D. Sloan, S. Brown, J. Seale, H. Petrie, P. Lauke, and S. Ball. Accessibility 2.0: people, policies and processes. In W4A '07: Proc. of the 2007 International Cross-Disciplinary Conference on Web Accessibility (W4A), New York, NY, USA, 2007. ACM.
[15] T. Lang. Comparing website accessibility evaluation methods and learnings from usability evaluation methods. about-us/pdf/website_accessibility.pdf. Visited May 2008.
[16] J. Mankoff, H. Fait, and T. Tran. Is your web page accessible? A comparative study of methods for assessing web page accessibility for the blind. In CHI 2005: Proc. of the SIGCHI Conference on Human Factors in Computing Systems, pages 41–50, New York, NY, USA, 2005. ACM.
[17] R. Molich, N. Bevan, I. Curson, S. Butler, E. Kindlund, D. Miller, and J. Kirakowski. Comparative evaluation of usability tests. In Proc. of the Usability Professionals Association Conference, Washington, DC, June 1998.
[18] H. Petrie and O. Kheir. The relationship between accessibility and usability of websites. In Proc. CHI 2007, San Jose, CA, USA, 2007. ACM.
[19] A. Sears. Heuristic walkthroughs: finding the problems without the noise. Int. Journal of Human-Computer Interaction, 9(3):213–234, 1997.
[20] W3C/WAI. How people with disabilities use the web. World Wide Web Consortium Web Accessibility Initiative, w3.org/wai/eo/drafts/pwd-use-web/, March. Visited May 2008.
[21] W3C/WAI. Conformance evaluation of web sites for accessibility. Visited May 2008.
[22] A. Woolrych and G. Cockton. Assessing heuristic evaluation: mind the quality, not just the percentages. In Proc. of HCI 2000, pages 35–36, 2000.
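The metrics named in the conclusion — correctness, sensitivity and the F-measure — can be illustrated with a short sketch. It assumes (as in comparable studies of evaluation methods) that correctness behaves like precision (the share of reported problems that are true problems) and sensitivity like recall (the share of true problems that get reported); the function name and problem identifiers below are hypothetical:

```python
def evaluate_method(reported, true_problems):
    """Compute correctness (precision), sensitivity (recall) and
    F-measure for one evaluator's output against a reference set.

    Both arguments are sets of problem identifiers.
    """
    tp = len(reported & true_problems)  # problems correctly reported
    correctness = tp / len(reported) if reported else 0.0
    sensitivity = tp / len(true_problems) if true_problems else 0.0
    if correctness + sensitivity == 0:
        f_measure = 0.0
    else:
        # harmonic mean of correctness and sensitivity
        f_measure = 2 * correctness * sensitivity / (correctness + sensitivity)
    return correctness, sensitivity, f_measure

# Hypothetical example: an evaluator reports 4 problems, 3 of which
# appear in a reference set of 6 true problems.
c, s, f = evaluate_method({"p1", "p2", "p3", "p9"},
                          {"p1", "p2", "p3", "p4", "p5", "p6"})
# correctness 0.75, sensitivity 0.50, F-measure 0.60
```

Averaging these per-evaluator scores within each method, and looking at their spread, is one way to operationalize the correctness and reliability comparisons discussed above.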