Mission Critical Role Project

Job Competency Modeling for Critical Roles in Advanced Threat Response and Operational Security Testing

Authors: MJ Assante, DH Tobey, TJ Vanderhorst Jr
Contributors: R Huber, B Rios, L Barloon, D McGuire; Advanced Threat Response Panel; Operational Security Testing Panel

National Board of Information Security Examiners, doing business as Council on CyberSecurity
Department of Homeland Security, HSARPA, Cyber Security Division
July 2013

This material is based on research sponsored by Air Force Research Laboratory under agreement number FA8750-12-2-0120. The US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory or the US Government.

TABLE OF CONTENTS

EXECUTIVE SUMMARY
INTRODUCTION
  The Importance of Competency Definition
  Practitioner Involvement
  Roles
  Scenario-Driven Competency Definition
METHODS, ASSUMPTIONS, AND PROCEDURES
  Assumptions and Key Terms
  Vignettes: Defining Moments of Expert Performance
  Vignette Identification
RESULTS AND DISCUSSION
  Cybersecurity Roles and Definition
  Process Results and Findings Summary
  Implications for Security Programs
  Implications for Challenges and Competitions
  Implications for Workforce Development
  Implications for Human Capital Management
CONCLUSIONS
REFERENCES
APPENDICES

EXECUTIVE SUMMARY

The search for existing cybersecurity technical talent has moved from a competition to a full-blown crisis for many organizations. Finding, developing, and retaining individuals who demonstrate valued technical skills is a difficult process with few tools and resources for hiring managers to rely upon. Competition can be so great that there now exist "have some" and "have none" organizations. This imbalance has resulted in industries and sectors that cannot overcome very real barriers and challenges to improve their security posture against the growing pool of sophisticated threat actors. Given the complex and multifaceted dependencies and relationships between modern organizations, this imbalance of cybersecurity technical talent reduces the cybersecurity posture of all organizations.

The crisis label may seem extreme, but one only needs to look at the alarming increase in the number of cybersecurity incidents affecting organizations across all industries and demographics. Numerous executive blue-ribbon panel studies have been conducted by the defense community, civil agencies, and national security think tanks in an attempt to help address these concerns at a national scale. These studies have laid the groundwork for making talent identification more straightforward and for expanding the overall talent pool through a federated model for pipeline development. All of these efforts focus on trying to answer the simple question: how does {organization name goes here} identify, pursue, capture, integrate, develop, and retain the talent necessary to prudently manage the risks posed by cyber threats? The current answer is for an organization to take the few cybersecurity technical staff they have and have them spend a portion of their time trying to find and recognize talent to hire and fill open job positions.
This "expert eyes" approach has weaknesses, and it certainly does little for organizations that have been previously unsuccessful in attracting that type of talent. Tools beyond professional certifications are beginning to emerge to help public and private organizations identify talent when hiring cybersecurity professionals. One category of tools is the use of cyber competitions (contests) or games to identify technical competency. Many of these competitions at the college and early professional stage are high-stakes contests for identifying those able to perform under the pressure of real-world, job-relevant performance conditions. Many recruiters are finding these competitions to be an indicator of talent. They feel validated by the stiff competition with other recruiters in engaging winners in employment discussions.

As with any game, there can be only a few winners, so what about the remaining field of competitors? How many are viable candidates to fill an organization's competency gaps? How can the competitions tool be further sharpened to find much-needed talent with greater precision and in larger numbers?

A quick inspection of competitions reveals promising aspects that should contribute to the shared goal of finding and attracting talent. At their core, cybersecurity competitions, like other serious games, are expected to be an engaging learning environment (Hoffman, Rosenberg, Dodge, & Ragsdale, 2005; Schepens & James, 2003; Schepens, Ragsdale, Surdu, & Schafer, 2002; White & Williams, 2005). They are expected to attract the best and brightest into the workforce by aligning instructional technology with what motivates the incoming generation of workers, and how they think and learn (Prensky, 2001). But we must ask further questions about the design of the competitions. Are these competitions engineered to reflect the current competency needs in the market? Are the existing needs defined well enough so that competitions can be honed to better suit this powerful purpose?

[Sidebar] The field of information security and cybersecurity (IS/Cyber) continues to undergo rapid expansion and change. Federal agencies are increasingly reliant on computer systems and networks to meet their mission requirements. While this has dramatically increased the speed and efficiency with which federal employees can do their jobs, it also creates vulnerabilities for the United States Government and its citizens. Therefore, IS/Cyber are becoming increasingly important as all agencies work to ensure that their systems are secure and their information remains intact and accessible to the right users.
-- Chief Information Officers Council, Information Security Workforce Development Resource Guide, December 2010

The National Board of Information Security Examiners (NBISE) was established to study methods for rapidly developing cybersecurity job performance models by applying real-world work scenarios to identify the desired knowledge, tool proficiency, and human abilities for specific cybersecurity job roles. The simple premise was to use an offensively-informed (i.e., representative cyber attack scenarios) and practitioner-focused process to identify competency elements over a short period of time. These models could then be used to validate job requirements and serve as a basis for the development of training curriculums, formative and summative measurement, and competitions and challenges. Based on the US Cyber Challenge (USCC) competition framework, the Department of Homeland Security (DHS) Science & Technology (S&T) sponsored an exploration of job performance model creation and its potential viability to assist the competition development community and talent acquisition programs. An important question arose: can these models serve as a lens to identify talent by tuning existing scoring models for competitions? Also, can the process of identifying real-world scenarios serve as a resource for competition designers and developers in building a game/challenge?
The very process of engaging practitioners filling the representative job roles brings immediate value by more clearly defining, or simply validating, the competencies that are being sought. Scenario-driven competency modeling can:

1. Expose future cyber defenders to realistic ground-truth scenarios to prepare them for the demands of real-world job performance;
2. Highlight the critical mission areas and roles that are instrumental to organizations and agencies;
3. Dissect the work being performed to highlight goals, objective metrics of performance, responsibilities, tasks, methods, tools, and, of course, varying knowledge, skills, and abilities;
4. Illuminate the finely tuned, situated expertise that is able to address the emergent (unknown) problem or dilemma that cannot be adequately addressed by those with less expertise;
5. Provide a library that can be used to structure exercises and drills to evaluate work processes and defense teams, identifying shortfalls to be addressed and honing responses.

The purpose of this initial study is to demonstrate how to develop the components of a job performance model that may be used to support workforce development and/or to assist cybersecurity competitions in aggregating and comparing participant performance. The approach can be described in four steps:

1. Establish vignettes (or scenarios) that define situated expertise in job roles;
2. Detail the goals and objective metrics that determine successful performance;
3. Identify the responsibilities by job role necessary to achieve the objectives;
4. Detail the tasks, methods, and tools, along with how competence may differ in fundamental or differentiating indicators of expertise, or in the level of Volatility, Uncertainty, Complexity, and Ambiguity (VUCA; Johansen, 2007) that indicates the difficulty of achieving that level of expertise.
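As a rough illustration, the artifacts produced by the four steps can be represented as nested data structures. The following is a minimal sketch, not an implementation from the report; every class name, field, and example value is hypothetical:

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import List

class VUCA(IntEnum):
    """Volatility, Uncertainty, Complexity, Ambiguity level (Johansen, 2007):
    higher values indicate a harder-to-master situation."""
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class Task:
    """Step 4: a task with its methods and tools, flagged as fundamental
    (base competence) or differentiating (separates experts from others)."""
    description: str
    methods: List[str]
    tools: List[str]
    differentiating: bool
    vuca: VUCA

@dataclass
class Responsibility:
    """Step 3: a responsibility of the job role, decomposed into tasks."""
    description: str
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Vignette:
    """Step 1: a scenario situating expertise, carrying Step 2's goals
    and objective metrics of successful performance."""
    title: str
    goals: List[str]
    responsibilities: List[Responsibility] = field(default_factory=list)

# Illustrative entry (content invented for the example, not from the report):
vignette = Vignette(
    title="Targeted intrusion with suspected data exfiltration",
    goals=["Contain the intrusion", "Preserve forensic evidence"],
    responsibilities=[
        Responsibility(
            description="Triage and scope the incident",
            tasks=[
                Task(
                    description="Correlate alerts across network sensors",
                    methods=["log correlation"],
                    tools=["SIEM console"],
                    differentiating=True,
                    vuca=VUCA.HIGH,
                )
            ],
        )
    ],
)
```

A library of such vignettes could then back the exercises, drills, and competition scoring uses enumerated above.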

The study was focused on two specific job roles that were identified as mission critical by a DHS study conducted by the Secretary's Homeland Security Advisory Committee (HSAC) [i]. NBISE leveraged its standing subject matter expert panels in Operational Security Testing and Advanced Threat Response to engage in the job performance model creation. The group was asked to analyze the list of DHS Mission Critical Job Roles specified in the HSAC report (Task Force on CyberSkills: http://www.dhs.gov/homeland-security-advisory-council-hsac#3). They selected two roles that were best represented by the seasoned practitioners on the panel and served as good starting points for DHS.

The panel was then asked to brainstorm a series of scenarios (referred to as job vignettes) and select a single scenario that would exercise a large portion of the job performance model for the selected job role. The selected scenarios were further elaborated until they obtained a rich description of the story that best exercised the competencies of the selected job role. The scenario was organized into steps, or logical stages, representing the type of work being performed by the role to address the scenario. This was then used to identify the goals and responsibilities of the job role being modeled. The next series of exercises relied upon both the scenario and the job responsibilities to identify the necessary knowledge, basic tool proficiency, and important underlying human abilities.

This report describes the process and presents the outcomes of each panel exercise. It also captures the insights and observations of the subject matter experts as they review the outcomes of their own exercises and evaluate the scenario-driven job performance model.
The project team and panel members further considered how the resulting job performance models could be applied to strengthen cybersecurity workforce programs, initiatives, and frameworks, with a special look at assisting in the design and tuning of cyber competitions. In short, the integration of mission-critical role definitions with experiential game theory enables substantial improvement in cybersecurity competition program evaluation models and techniques.

The purpose of this study was to demonstrate how to develop the components of a job performance model that may be used to support workforce development and/or to assess cybersecurity competitions, supporting aggregation and comparison of participant performance. Once such a validated model for scoring performance has been established, competition programs may be evaluated on outcome measures such as generalizability of scores, participant engagement, and support for growth and diversification of the workforce. In this way, a rigorous development of job competency models can directly support the rapid development of a capable workforce in a relatively short span of time, without much of the trial-and-error that has often accompanied the evolution of new professions.

INTRODUCTION

Defining Mission Critical Roles

The Importance of Competency Definition

Cybersecurity is a contest of competence: vulnerabilities are limitless because they emanate from constantly expanding human intellect, imagination, and ingenuity, and are the artifacts of complexity. The mission of the cyber defender is therefore continually shifting; best-practice heuristics have a half-life approaching zero as every day brings new attack vectors, exploitation techniques, or exfiltration targets. According to research on judgment and decision-making, work focused on such novel, highly variant, or rare problems defines a competency-based domain (Smith, Shanteau, & Johnson, 2004): such tasks require decisions to be made and actions taken in the face of ambiguous and/or incomplete information. Time pressure is frequently great, and the penalties for failure are often severe.

The research shows that competence-based professions have difficulty identifying and defining optimal performance. Optimal performance must be defined and measured differently by stage of expertise development and for the unique contributions of knowledge, skill, and ability. Learning curves are steep in competence-based professions. Thus, what constitutes optimal performance differs greatly between beginners, or those merely proficient in methods and tools, and the skilled competent or expert performers. Mastery must be evaluated across the full multidimensionality of competence (Tobey, Reiter-Palmon, & Callens, 2012), combining the following: depth of understanding comprising knowledge; consistency of skills honed through practice; and the generative capacity of abilities by which knowledge and skill are adapted to effectively respond to the increased volatility, uncertainty, complexity, and ambiguity that typify competence-based domains (Johansen, 2007; O'Neil, Assante, & Tobey, 2012).
Finally, while expertise may be apparent in hindsight, performance models that seek to predict or accredit competency must distinguish between fundamental tasks that define base levels of competence and differentiating tasks in which both the methods used and outcomes achieved differ across stages of the learning curve (Tobey, 2011). Job performance in these domains is highly subject-specific, method- or tool-specific, or scenario-dependent. Measurement of optimal performance is more difficult because indicators are needed at a level of detail not typically found in competency models (Campion et al., 2011). Additionally, interrelationships among multiple competencies or across multiple job roles must be defined which are usually not identified in job task analysis, including: action or task-level competence, domain or subject competence, cognitive or intellectual competence, emotional or social competence, and meta-competence by which one may accurately gauge personal efficacy, engagement, and ethical stance in the performance of job duties (Le Deist & Winterton, 2005).

Thus, in competence-based domains like cybersecurity, the definition of mission-critical roles is highly situated (Lave & Wenger, 1991), grounded in scenarios whose truth is continually constructed as the interplay between attacker and defender plays out in a contest of adaptive expertise (Assante & Tobey, 2011). Each scenario involves different goals, objective metrics of performance, responsibilities, tasks, methods, tools, and, of course, varying knowledge, skills, and abilities.

According to Brown, Collins and Duguid (1989), situated expertise becomes embedded through the interaction of declarative and procedural knowledge during skilled application. Individuals proficient in understanding of the domain must undergo cognitive apprenticeship by which the procedures they have learned become generalized and adaptive as a result of varied practice, collaboration, and reflection. Competence forms through repeated application of knowledge and skill. Novices, beginners, and the proficient use reasoning based on procedures and rules, but these permit resolution of only well-defined (textbook) problems. The competent practitioner has developed skills through repetitive application, enabling them to reason by causal models that demonstrate the situational awareness necessary to address ill-defined (unknown) problems. The expert and the master differ in the degree of ability developed to adapt these causal models and habituated skills. They reason through stories or, more accurately, vignettes that demonstrate finely tuned, situated expertise able to address the emergent (unknowable) problem or dilemma that cannot be adequately addressed by those with less expertise.

In summary, mission-critical roles in cybersecurity must be defined at increasing depth of detail to align with the conceptualization and action repertoires of masters, experts, competent practitioners, proficient students, beginners, and novices. These decreasing orders of competency align with representation of situated expertise as vignettes (master level), goals and objectives (expert level), responsibilities (competent level), tasks (proficient level), tool procedures (beginner level), and domain knowledge (novice level) as the basis for performance. Optimal performance at each level differs. Accordingly, assessments or evaluations of performance should consider both the stage of expertise development and the fundamental or differentiating nature of the task.
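The alignment between expertise stage and representation can be captured as a simple lookup. A sketch, where the stage and representation names follow the text but the function and table names are ours:

```python
# The six decreasing orders of competency and the representation of
# situated expertise each one aligns with, per the summary above.
STAGE_REPRESENTATION = {
    "master":     "vignettes",
    "expert":     "goals and objectives",
    "competent":  "responsibilities",
    "proficient": "tasks",
    "beginner":   "tool procedures",
    "novice":     "domain knowledge",
}

def assessment_focus(stage: str) -> str:
    """Return the representation at which performance should be assessed
    for a given stage of expertise development."""
    if stage not in STAGE_REPRESENTATION:
        raise ValueError(f"unknown expertise stage: {stage!r}")
    return STAGE_REPRESENTATION[stage]

print(assessment_focus("competent"))  # prints "responsibilities"
```

An assessment built this way would pose vignette-level dilemmas only to candidates at the master end of the curve, while testing novices on domain knowledge.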
Finally, the development of mission-critical competence in cybersecurity requires the opportunity to fully engage in a contest of competence by which expertise is grounded in the volatility, uncertainty, complexity, and ambiguity that typifies the real-world environment of this competence-based domain.

Developing Mission-Critical Competence Through Competition

"We contend that the effectiveness of a game feature is contingent on the ability of designers to align the complexity of the serious game with the limitations of human processing capacity."
-- Wouters, van der Spek and van Oostendorp (2009), Current practices in serious game research

Cybersecurity competitions are serious games: they are contests of competence that seek to edify and engage, more than to simply entertain (Garris, Ahlers, & Driskell, 2002; Vogel et al., 2006). Beyond their educational mission, cybersecurity competitions are developed to assist in recruiting and selecting the next-generation workforce. These are high-stakes contests for identifying those able to perform under the pressure of real-world, job-relevant performance conditions. They involve competing goals, which must be prioritized and satisfied in a highly competitive setting. Though the best provide many forms of feedback, outperforming an adversary provides the ultimate indicator of a player's competence. But at their core, cybersecurity competitions, like other serious games, are expected to be an engaging learning environment (Hoffman, Rosenberg, Dodge, & Ragsdale, 2005; Schepens & James, 2003; Schepens, Ragsdale, Surdu, & Schafer, 2002; White & Williams, 2005). They are expected to attract the best and brightest into the workforce by aligning instructional technology with what motivates the incoming generation of workers, and how they think and learn (Prensky, 2001). Cybersecurity is not the only discipline seeking to use simulations and serious games to grow its workforce.
Serious games and other challenges are used to entice young talent across a broad array of science, technology, engineering, and math (STEM) disciplines (e.g., Mountain, 2004). For example, the ACM International Collegiate Programming Contest has been operating for over 40 years, and currently more than 7,000 teams and tens of thousands of students compete across nearly 90 countries. These contests have increased awareness of STEM professions. However, STEM competitions have yet to attract a diverse and growing workforce (Shilov & Yi, 2002). Enrollments continue to fall, with grave implications for developing a sufficient and competent labor pool (Assante & Tobey, 2011). In a study comparing a broad spectrum of STEM competitions in general science, robotics, and cyber defense, Rursch, Luse and Jacobson (2010) found that despite increasing awareness of their respective disciplines, competitions have failed to improve the diversity of the workforce, nor have they reduced the decline in numbers of people entering STEM-related careers.

Recent studies suggest, however, that the promise of engaging the next generation in cybersecurity careers may be realized if there is better alignment between game design and the developmental stage of a participant's expertise. The design of games that present challenges adjusted to the learning state and competence of the player is called game balance (Kiili, 2005). For instance, Joiner et al. (2011) discovered that game design factors have differential impact by gender. Women are more motivated than men to participate in games that adapt to their competence level and are focused on formative assessments that guide learning, rather than scores providing a summative assessment of achievement. Other studies applying game balance techniques are finding that they motivate students to achieve mastery in the discipline while at the same time increasing their persistence to learn. These studies show, as Phillips (2013) stated: A good teacher challenges her students, understands their struggles, and provides needed encouragement.
A [good] game provides the same level of interaction, but with the added benefit of embedded assessments: a student's progress is continually tracked. The continual guidance towards higher and higher learning goals is called scaffolding. Adapting challenges based on the current level of the participant helps to develop critical thinkers who become engaged in and committed to a discipline, and increases their motivation to learn. Thus, scaffolding is the difference between a serious game that increases awareness and one that fosters deep learning (Chin & Brown, 2000).

However, the opposite effect, the disengagement of the participant from a profession, may result if a competition is too challenging. Wouters et al. (in press) conducted a meta-analysis of the cognitive and motivational effects of serious games. They found that students developing foundational competence through drill and practice saw no benefit from participating in a serious game. Further, the study showed that competitions that occur in a single, continuous session were actually less effective than traditional instruction.

A recent study of the National Cyber League (NCL) inaugural competition season (Tobey, Pusey, & Burley, in press) offers an explanation for the failure of competitions to match or improve upon traditional classroom instruction. Overall, during the multi-event NCL season, students who participated across multiple sessions showed a significant increase in all measures of engagement: dedication, absorption, and vigor towards participating in cybersecurity activities. However, there was also a notable decline in engagement for those with little experience in the field, those who had participated in fewer than two events. These participants frequently dropped out of the competition before the season was over.
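One simple mechanical form of the game balance and scaffolding ideas discussed above is a rule that adjusts challenge difficulty toward a target success rate. A minimal sketch, not a design from the report; the function name, thresholds, and example values are all illustrative:

```python
from typing import List

def next_challenge_level(current_level: int,
                         recent_scores: List[float],
                         target: float = 0.7,
                         band: float = 0.15) -> int:
    """Nudge challenge difficulty toward a target success rate.

    Aims to keep a participant where the game is neither trivially easy
    nor overwhelming, the zone associated here with sustained engagement
    and deep learning.
    """
    if not recent_scores:
        return current_level             # no evidence yet: hold steady
    success = sum(recent_scores) / len(recent_scores)
    if success > target + band:          # succeeding too easily: step up
        return current_level + 1
    if success < target - band:          # struggling: step down, floor at 1
        return max(1, current_level - 1)
    return current_level                 # within the target band: hold

# A struggling newcomer is stepped down rather than left to disengage:
print(next_challenge_level(3, [0.2, 0.4, 0.3]))  # prints 2
```

A rule of this shape would address the drop-out pattern observed among inexperienced NCL participants by lowering difficulty before frustration sets in, rather than scoring everyone against a fixed bar.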
The conclusion drawn from the data suggests that improved game balance that engages students through facilitating deep learning is needed in cybersecurity competitions if they are to accomplish the objective of expanding and enhancing the cybersecurity workforce:

The growth of participation in competition events is generally presumed to be increasing the number of entrants into the field. The analysis of the NCL data indicates that competitions may actually be constraining or detracting from this growth. It may be possible that competitions discourage those with little prior experience in cybersecurity. At a minimum, this analysis seems to strongly suggest more research is needed to understand the difference in change to perceived engagement between those with little to no experience and those who are entering their second (or greater) competition. (Tobey et al., in press)

Practitioner Involvement

To identify and develop the job performance components of the mission-critical roles, this project leverages the expertise of subject matter experts (SMEs) available in two of NBISE's job performance panels. NBISE job performance panels are assemblies of experts from industry stakeholders, government agencies, research institutions, service companies, and security product vendors who work to identify critical job roles that make up the cybersecurity workforce of today and tomorrow. They collaborate to define competency models for those roles and develop a standards-based library of validated assessment, curriculum, and simulation-based learning components. For this project, NBISE guided the Advanced Threat Response (ATR) and Operational Security Testing (OST) panels through the job competency definition process, driven by scenarios that represent ground truth and properly capture the necessary job competencies. To facilitate the panel SMEs in this process, NBISE supported the panels with a technology suite that included tools for scenario (vignette) driven elicitation, collaboration, performance measurement, task characterization, and role identification. The 23-member ATR panel is focused on advanced cyber security threats such as advanced persistent threats and other highly sophisticated threats (see Appendix A for roster).
The 31-member OST panel is focused on penetration testing, red teaming, and attacker emulation testing (see Appendix A for roster). With the involvement of these 54 SMEs from the two panels, this project supported over 700 hours of SME input to help DHS understand the job competency requirements for the mission-critical roles.

Roles

Mission critical is a term often used to identify those people who are unable to leave early in the face of an approaching foul weather system, or who have to brave the elements when everyone else has an unplanned day off. This particular definition captures some of our use of the term, but a more useful definition is functional job roles that bring the necessary know-how, competencies, and practices to accomplish the mission of an organization. We focused our efforts on job roles in the cybersecurity domain that contribute to the cybersecurity posture of an organization in a material and more direct manner.

These are the job roles that have a very short line of sight to the health and security of an organization's information and communication technology. A recently published article ii described these job roles as being in the Red Zone, to distinguish them from other cybersecurity job roles that are important but less direct in their impact on the security of deployed systems. Our definition of Mission Critical for this study is as follows:

1. Mission Critical defines the importance of the work to be performed by the cyber functional role as being critical to the defense of an organization's or agency's information systems.
2. Functional Job Role is a label given to a category or classification of job roles based on their sharing a significant number of common goals (i.e., functions).
3. Job Role is a label given to a category or classification of job titles based on their sharing a significant number of common responsibilities or job duties.

The universe of functional job roles evaluated for selection in this project came from recommendations put forward to the Secretary of DHS by the Homeland Security Advisory Council (HSAC). NBISE asked its subject matter expert panels to analyze the list of DHS Mission Critical Job Roles specified in the HSAC report and select two roles that were best represented by the seasoned practitioners on the panel and that served as a reasonable starting point for DHS. The HSAC report stated:

Red Zone

Think about what the phrase red zone conveys in American football: when defensive players have their backs to the goal line, the situation demands peak performance because the threat is imminent and has to be turned back. Similarly, defender roles within the CI/KR sector's red zone need to be ever present, and the capabilities of the individuals in those roles need to be fully developed to achieve peak performance.
The pipeline of people moving into the workforce who have the necessary skills, knowledge, and capabilities to perform the critical red zone jobs is not balanced against the pipeline of people exiting those positions. This imbalance seems to be worsening as the number of individuals exiting increases, the need across multiple sectors grows, and the available programs and development capabilities remain flat.

Tim Conway, NBISE Smart Grid Cybersecurity panel chair, Control Engineering Magazine, April 2013

On June 6, 2012, Secretary Napolitano announced the formation of a Task Force on CyberSkills with a two-part mandate: first, to identify the best ways DHS can foster the development of a national security workforce capable of meeting current and future cybersecurity challenges; and second, to outline how DHS can improve its capability to recruit and retain that sophisticated cybersecurity talent.

The HSAC report further detailed its rationale for calling out and defining Mission Critical Roles:

In her tasking letter posing this challenge, Secretary Janet Napolitano said that DHS needs a workforce with specialized knowledge and skill to carry out its mission. The Task Force's first job was thus to identify those specialized skills without which DHS cannot meet its cybersecurity responsibilities (called mission-critical tasks and mission-critical skills). Explicit definitions of the required skills are needed to enable DHS to differentiate between people who actually have those skills and people who may have knowledge in the area but no hands-on skills. Explicit definitions are also essential to meet the Task Force's charge to identify the most promising and effective

competitions, university programs, internships, private sector programs, and relevant federal government programs that may be valuable as partners or sources of talent for the Department. (HSAC Task Force on Cyber Skills)

Table 1. List of DHS mission-critical roles with alignment to ATR/OST roles

Mission Critical Roles*                                  ATR roles  OST roles
System and network penetration tester                               X
Application penetration tester                                      X
Security monitoring and event analysis                   X
Incident responder in-depth                              X
Threat analyst/counter-intelligence analyst              X
Risk assessment engineers
Advanced forensics analysts for law enforcement
Secure coders and code reviewers
Security engineers operations
Security engineers/architects for building security in

*From DHS HSAC CyberSkills Task Force Report, Fall 2012. This table presents the Task Force's recommended list of mission-critical jobs.

Both panels engaged in a discussion and a voting process to select the roles for which to develop scenario-driven job competency models. The initial project was scoped for two functional roles to serve as the basis for the underlying responsibility and accompanying competency model, but it is important to note that the brainstormed scenarios that were selected apply to multiple cybersecurity functional roles and can in many cases be used in future projects to help identify the responsibilities and competencies of non-selected cyber roles. The ATR panel identified three jobs as aligned with its panel focus and membership: security monitoring and event analysis, incident responder in-depth, and threat analyst/counter-intelligence analyst. The OST panel identified two jobs as aligned with its panel focus and membership: network and system penetration testing and application penetration testing. Table 2 includes descriptions for these five job roles as provided by the HSAC.

Table 2.
Description of ATR- and OST-related HSAC-identified Mission Critical Roles

Role: Security monitoring and event analysis
Description: Identify indicators that show an incident has occurred and initiate swift response, differentiating between those incidents that represent impotent attack vectors and those that need to be analyzed in depth by the incident responders. Many other tasks are performed by the security monitoring and event analysis staff, but the ones described here are the critical tasks for which skills are in very short supply.

Role: Incident responder in-depth
Description: Implement proactive measures to contain the incident, including isolation, characterization, reverse engineering, assessment of capability and activity of malicious software that has been found on agency systems, identification of intruder local changes/suspect interactions, triggering of targets to evoke malicious behaviors, and development and deployment of eradication tools. Only 2% to 10% of all malicious software needs to be put through this deep analysis; the remainder will be cleaned with anti-virus tools using current and updated signatures. However, that 2% to 10% constitutes the most dangerous payloads.

Role: Threat analyst/counter-intelligence analyst
Description: Deploy deep and current knowledge of the attack surface, its most vulnerable and high-value targets, and how its technical vulnerabilities may be exploited; maintain up-to-the-minute situational awareness on what malicious actors are using and targeting; and develop techniques and program custom tools to detect local changes, identify suspect interactions, and watch for and respond to what the malicious actors are doing. More advanced teams also are able to understand the attackers' motivation, language, organization, and social behaviors, as well as group the threat actors logically to create effective cyber profiles of groups, actors, and campaigns, thereby helping organizations become more proactive in their security posture and defense.

Role: System and network penetration tester
Description: Follow a systematic process to assess the ability of systems and networks to withstand sophisticated adversaries who have knowledge of the architecture and systems that are deployed. This is not social engineering or running a vulnerability testing tool or a packaged exploit tool, but rather sophisticated technical testing of the configuration, pathways, and interactions between systems that mimics the techniques employed by advanced adversaries.

Role: Application penetration tester
Description: Test applications before they are deployed and when they are modified. Identify the avenues that are most riddled with flaws and holes and that give malicious actors access to the most important content or systems. This is not only a tool-deployment task; it also requires deep understanding of the application being tested.

Reference: Homeland Security Advisory Council's CyberSkills Task Force Report, Fall 2012

Scenario-Driven Competency Definition iii

The cybersecurity mission is to ensure the security, accuracy, and timely transfer of information (Seddigh et al., 2004). The ultimate goal is to provide assurance that a computer-based system is reasonably protected by reducing exploitable vulnerabilities and insecure behaviors, while maintaining an ability to detect and respond to security incidents and intrusions. This mission is therefore similar to that of other engineering professions, which assess and assure safety. Recently, best practices in safety assurance have been adopted by cybersecurity researchers seeking to develop an evidence-based approach to improving information assurance (Gandhi, Siy, & Wu, 2010; Goodenough, Lipson, & Weinstock, 2012). In this section, we briefly describe the relevance and importance of safety assurance case modeling for our analysis of mission-critical roles in cybersecurity.

The safety assurance case method was originally developed by Kelly and colleagues (Kelly & McDermid, 1997, 2001; Kelly & Weaver, 2004; Weaver, McDermid, & Kelly, 2002) to document, validate, and evolve safety assessments based on lessons learned from the implementation of new technologies. A similar method is used to guide the development of competency assessments in large-scale credentialing programs, such as those operated by the Educational Testing Service (Mislevy, Steinberg, & Almond, 2003). Across several studies, Kelly and colleagues showed that this scenario-based method facilitated common understanding of system vulnerabilities and faults across all stakeholders, e.g., system designers, safety professionals, industry regulators, and certifying authorities. Importantly, their studies also showed that it facilitated rapid adaptation of system designs and of the remedial actions necessary to reduce the number and negative impact of safety incidents.
Accordingly, this method may address a critical issue facing the cybersecurity profession: the dynamic nature of threats and attack patterns requires that mission-critical roles and task assignments be continually updated based on evidence gathered from the latest tactics, techniques, and procedures used in cybersecurity incidents. This infusion of ground truth into workforce planning means that cybersecurity workforce programs must be well integrated and constantly adapted (Assante & Tobey, 2011). However, maintaining alignment and currency among the various workforce development programs and tools, such as training, simulations, and certifications, is a constant challenge. For example, a recent application of the assurance case method to analyze cybersecurity workforce programs in the energy sector found that several important job responsibility areas were either missing or showed wide variance in emphasis among competency frameworks, course designs, and certification programs (Assante et al., 2013).

Goodenough, Lipson, and Weinstock (2012) adapted the safety assurance case argument method to develop an evidence-based practice for information assurance that facilitates assessments of system safety, security, and/or reliability. Their adaptation of the Goal-Structuring Notation method for safety assurance (Kelly & Weaver, 2004) provides a step-by-step process for modeling cybersecurity scenarios. The first step in the assurance case method is the definition of a case that captures one or more critical vulnerabilities, system failures, recovery actions, and consequences.
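The goal-structured argument at the heart of this method can be sketched as a small recursive data model: claims (goals) decompose into subgoals, and leaf claims are backed by evidence. The Python sketch below is illustrative only; the class names, fields, and example case are our own assumptions, not the notation itself or any tool used by the panels.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A 'solution' node: tool output or observation backing a claim."""
    source: str           # e.g., a log review or scanner report (hypothetical)
    supports_claim: bool

@dataclass
class Goal:
    """A claim about acceptable system state, decomposed top-down."""
    claim: str
    objective: str = ""                                  # measurable expected outcome
    subgoals: list["Goal"] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)

    def assured(self) -> bool:
        # An interior goal is assured when all subgoals are assured;
        # a leaf goal is assured only by uniformly supporting evidence.
        if self.subgoals:
            return all(g.assured() for g in self.subgoals)
        return bool(self.evidence) and all(e.supports_claim for e in self.evidence)

# Hypothetical case: sensitive data exposed with no clear signs of intrusion.
case = Goal(
    claim="Customer data is adequately protected",
    subgoals=[
        Goal(claim="Exfiltration paths are monitored",
             evidence=[Evidence("egress log review", True)]),
        Goal(claim="Insider posting of data is detectable",
             evidence=[Evidence("DLP alert audit", False)]),
    ],
)
print(case.assured())  # prints False: the failed audit leaves the top claim unassured
```

The recursive check mirrors how the method surfaces exactly which sub-claim lacks supporting evidence when a top-level assurance claim fails.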
Each case is further elaborated by creating a structured story, or vignette, that enumerates primary and subsidiary goals; objective measures of expected outcomes or operation; challenges to these goals and objectives introduced by one or more exemplar incidents; and the process steps, strategies (or job responsibilities), and tasks necessary to recognize and respond effectively so as to return the system to an acceptable operating state. Finally, tools are identified that will provide the evidence necessary to indicate whether a vulnerability has been detected or an intrusion has been thwarted. In this study, we applied the assurance case process to demonstrate how it may help to identify the situations and conditions that determine the development or demonstration of competence in mission-critical roles. As discussed above, cybersecurity work is characterized by decision-making that

must be made under high levels of uncertainty, ambiguity, and time pressure, where optimal performance is difficult to decipher. Studies of similar jobs in military contexts (Gompert, 2007) show that performance in these contexts is highly situational, where decisions must be made on the fly (Franke, 2011). Success therefore depends on effective sensemaking (Weick, Sutcliffe, & Obstfeld, 2005). What differentiates the competent from the merely knowledgeable is the speed and accuracy of incident pattern recognition and classification into known scenarios. This finding is consistent with studies of chess masters, which show that recall of game scenarios, especially during the first few moves, is highly predictive of the level of skill (Charness, 1991). Similarly, cybersecurity skill requires much more than rote memorization. Expert cybersecurity professionals, like their counterparts in military counterintelligence or chess mastery, must possess sufficient situational awareness (Endsley, 1995) to adapt their response to meet the unique requirements of the situation. These studies also show that simulation systems, such as those used in cyber competitions, can be effective training and assessment mechanisms if, and only if, the scenarios used are realistic and grounded in detailed case definitions (Zbylut & Ward, 2004). They further show that fostering the learning and engagement necessary to create an active learning environment for developing adaptive expertise requires that the scenarios focus on tasks, tools, and methods that differentiate the performance of novices and beginners from that of competent or expert cybersecurity professionals (Gandhi, Tobey, Reiter-Palmon, Yankelevich, & Pabst, 2013). Thus, a scenario-based approach to competency modeling will enable the development of challenges and assessments that are much more effective than a knowledge recall test at determining skill levels in a competence-based domain.

METHODS, ASSUMPTIONS, AND PROCEDURES

Modeling Ground Truth: Assumptions and Key Terms

A primary purpose of this study was to demonstrate how a job performance modeling approach (Tobey, 2011; Tobey, Reiter-Palmon, & Callens, 2012) accelerates the process of job task analysis while improving the depth and breadth of analysis typically conducted in the preparation of a competency model for training or assessment, such as the NICE Framework (National Institute of Standards and Technology, 2011; Paulsen, McDuffie, Newhouse, & Toth, 2012), or in developing a cyber-competition design (Conklin, 2006; Schepens & James, 2003; Schepens et al., 2002).

Figure 1. Basic Job Performance Model Process

Vignettes: Defining Moments of Expert Performance

Defining the context of job performance is essential because of the situated nature of expertise in cybersecurity. The term critical incident is often used to describe varying situations in which expertise is exhibited in competence-based domains (Benner, 1984; Boyatzis, 1982; Klein, 1998). Incident, as the word is used here, is not simply an event requiring a response. Instead, it represents a defining moment when differences in skill level are notable in clearly identifiable outcomes of action taken. This may be an actual or a potential event, and includes not only sense-and-respond situations but also proactive or sustaining events critical to the achievement of goals and objectives. Hence, the word incident is used here in a broader sense. We therefore use the definition of incident proposed by John Flanagan, the inventor of the critical incident technique for task analysis: any observable human activity that is sufficiently complete in itself to permit inferences and predictions to be made about the person performing the act.
To be critical, an incident must occur in a situation where the purpose or intent of the act seems fairly clear to the observer and where its consequences are sufficiently definite to leave little doubt concerning its effects (Flanagan, 1954, p. 327). However, the incident name itself does not tell the whole story. In many cases, experts use an incident name to quickly convey a complex and diverse set of conditions and events in a simple, terse manner, especially when conversing with peers (Boje, 1991). Consequently, we prefer the term vignette because it signifies the need to extract the whole story, including several scenarios or differing perspectives of a critical incident (Boje, 1995; Tobey, 2007). Stories are frequently used by experts to

convert tacit into explicit knowledge when communicating an event to less experienced people (De Long, 2004; Tyler & Boje, 2008). Accordingly, the term vignette describes the collection of: a critical incident title or description; when the incident occurs (frequency and/or action sequence); what happens during the incident (problem or situation); who is involved (entities or roles); and where the incident might happen, now or in the future (systems or setting). Further definition of a vignette might include why it is important (severity or priority of response) and how the critical incident is addressed (methods, tools, or abilities that may be needed). A collection of vignettes and the associated job context forms the basis for developing a Job Performance Model that may facilitate comparison with other jobs or help identify when an individual is performing the job as classified.

Decomposing Goals, Responsibilities, and Tasks to Guide Assessment of Ability

We define a goal as a statement that expresses an action that must be successfully completed to accomplish the job mission, or to facilitate the accomplishment of another goal. The goal objective is defined as the measurable outcome that establishes the criteria by which the degree of success or effectiveness may be assessed. Job responsibilities are defined as action statements that result in outcome states that may be monitored or assessed to determine whether an objective has been accomplished. Accordingly, responsibility statements use passive verbs, such as "ensure", "follow", or "obtain", that are not included in Bloom's taxonomy. Consistent with its use in task analysis, Schraagen (2006, p. 185) defines the word task as "what a person is required to do, in terms of actions and/or cognitive processes, to achieve a system goal." This definition implies that task statements must be written specifically to highlight the action verb that indicates the execution of the task.
It is often the case, though not a requirement of task analysis, that the action verbs used to describe goals and tasks align with Bloom's taxonomy of action verbs (Anderson, Krathwohl, & Bloom, 2001; Bloom, 1956). This definition of a task also helps to clarify the definitions for the elements of competency. The three components of competence (i.e., knowledge, skill, and ability) are independent dimensions that may be used to understand an individual's or team's level of competence within a three-dimensional space (Tobey, 2011; Tobey et al., 2012).

Knowledge is defined as the understanding of a concept, strategy, or procedure. Thus, knowledge is measured by depth of understanding, from shallow to deep. Knowledge is therefore independent of task performance. Knowledge is identifiable by the capacity to encode, recall, or associate information, independent of context. For example, organizational knowledge is required to "understand what is important to the organization and what is mission critical."

Skill is defined as the reliable application of knowledge in the accomplishment of a task to achieve desired outcomes. Thus, skill is measured by the degree of reliability, from inconsistent to consistent, in performance of a task. Skill is always task-specific and context-specific. Skill is identifiable by statements of accomplishment, such as "establish a plan for secure storage and transmission of customer data."

Ability is defined as a mental or physical capacity to transfer or transform knowledge and skills for application to new domains. Thus, ability is measured by the extent of knowledge and skill transfer, from narrow to broad, typically assessed through the use of physical or intelligence tests. Abilities are task-independent. Abilities include many forms of mental or physical manipulation (Guilford, 1956), e.g., dexterity, locomotion, memorizing, deducing, recognizing patterns, and planning.
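Because the three components are independent dimensions, a person's or team's competence can be pictured as a point in a three-axis space. The sketch below illustrates that idea; the normalized 0-1 scales and the labeling thresholds are our own assumptions for illustration, not measures defined by the model.

```python
from dataclasses import dataclass

@dataclass
class CompetencePoint:
    """A position in the three-dimensional competence space.
    All axes are normalized to 0.0-1.0 for illustration only."""
    knowledge: float  # depth of understanding: shallow (0) to deep (1)
    skill: float      # reliability of task performance: inconsistent (0) to consistent (1)
    ability: float    # breadth of transfer to new domains: narrow (0) to broad (1)

def describe(p: CompetencePoint) -> str:
    """Toy labels contrasting the merely knowledgeable with the competent."""
    if p.knowledge > 0.6 and p.skill < 0.4:
        return "knowledgeable but not yet skilled"
    if p.skill > 0.6 and p.ability > 0.6:
        return "competent and able to adapt to new domains"
    if p.skill > 0.6:
        return "competent within familiar tasks and contexts"
    return "novice"

# Deep knowledge alone does not place someone in the skilled region of the space.
print(describe(CompetencePoint(knowledge=0.9, skill=0.2, ability=0.5)))
```

Treating the dimensions as independent axes is what lets an assessment distinguish, for example, a candidate who recalls concepts deeply from one who performs tasks reliably.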

Vignette Identification

As a demonstration of the job performance modeling process (Tobey, 2011), the SME panel first defined two job roles that would guide the elicitation of job performance model components. During a complete job performance modeling process, the roles identified during this step would be categorized into functional roles. The list of functional roles would then be discussed, or ranked, by the panel of subject matter experts (SMEs), who would select one or more functional roles to focus on for the remainder of the modeling process. This selection of functional roles establishes an important boundary condition for the Job Performance Model. A guide to the selection process may be the roles targeted by a sponsoring organization or roles identified in an existing competency model, such as the NICE Information Assurance Framework ("NICE Cybersecurity Workforce Framework," 2011) in the cybersecurity profession. The ATR subpanel decided to focus on the role of Security Monitoring and Event Analyst. The OST subpanel decided to focus on the role of System and Network Penetration Tester.

With the two focal roles identified, the panel members then brainstormed a list of vignettes. In addition to providing a terse description of each critical incident, the panel added examples of scenarios. For instance, the vignette "Adversaries are collecting open source intelligence on your organization to be used for targeting and attack" was further defined into a set of scenarios (see the results section below), including "Honeypots could be triggered based on web scraping" and "Utilizing social networking sites to collect information about the company and employees." If this had been part of a complete job performance modeling process, the next step would be to categorize the vignettes into a set of master vignettes (Tobey et al., 2012).
However, as the purpose of this project was to demonstrate how the JPM process may be applied to develop mission-critical role definitions, only one master vignette for each subpanel, ATR and OST, was selected for further analysis. These two master vignettes were: discovery of large amounts of sensitive data posted to the Internet with no clear signs of intrusion (ATR subpanel); and conduct of a comprehensive Red Team penetration test against a sensitive national laboratory conducting advanced research with national security implications (OST subpanel).

Each of the two master vignettes was further elaborated through a series of six focus group sessions. First, the description of each critical incident was expanded by applying ante-narrative dynamic analysis (Boje, 2001; Tobey et al., 2012) to answer five questions about each incident: what, when, why, where, and how. Second, the SME panel brainstormed a list of process steps for responding to the critical incident. Third, the goals associated with each master vignette were elicited. Fourth, the SME panel brainstormed a list of job responsibilities for each goal. Fifth, a list of knowledge requirements, tools, and tasks (where abilities may be identified) was elicited for each master process step. Finally, the tasks for each process step were sorted by the SME panel into a list of abilities. This categorization was then analyzed to determine the relative importance of each ability to performance of the target mission-critical roles, both at the overall vignette level and for each process step.

Job responsibilities defined in a job performance modeling process may bear some resemblance to the tasks defined during a traditional job task analysis or competency model. In job performance models they represent the starting point for decomposing a job into finer levels of detail. In effect, the responsibilities align with job duties often listed in job descriptions or performance evaluations.
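The artifacts produced by the six focus group sessions can be thought of as one structured record per master vignette. The sketch below is a minimal illustration of that record; the field names and the abbreviated example content are our assumptions and do not reproduce the panels' actual elicitation tooling or results.

```python
from dataclasses import dataclass, field

@dataclass
class MasterVignette:
    """Artifacts elicited across the six focus group sessions (illustrative)."""
    title: str
    narrative: dict = field(default_factory=dict)          # session 1: what/when/why/where/how
    process_steps: list = field(default_factory=list)      # session 2: steps in response
    goals: dict = field(default_factory=dict)              # sessions 3-4: goal -> responsibilities
    step_requirements: dict = field(default_factory=dict)  # session 5: step -> knowledge/tools/tasks
    abilities: dict = field(default_factory=dict)          # session 6: ability -> supported steps

# Abbreviated, hypothetical content for the ATR master vignette.
atr = MasterVignette(
    title="Discovery of large amounts of sensitive data posted to the Internet "
          "with no clear signs of intrusion",
    narrative={"what": "sensitive data exposed publicly", "where": "public Internet"},
    process_steps=["Identification", "Containment"],
)
atr.goals["Conduct an initial triage of the incident"] = [
    "confirm the posted data originated from the organization",  # hypothetical responsibility
]
print(len(atr.goals))  # prints 1
```

Collecting the session outputs in one record makes the later analysis steps (sorting tasks into abilities, weighting abilities by process step) straightforward to automate.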
One fundamental difference between job performance modeling and previous approaches is the use of multiple roles at this step in the process. Guided by the vignette description, the SME panel defines responsibilities across the entire group of functional roles determined by the panel to provide the role boundary for the job performance model process. This approach enables elicitation of job overlap and

the establishment of the collaborative requirements of the job, where responsibilities are duplicated across functional roles. During the elicitation of goals and responsibilities, the SME panel collaborates on developing a list of expected outcomes, both positive (best practices) and negative (errors and omissions), for each role involved in each vignette. These outcomes can serve both to establish learning objectives for training programs and situational judgment outcomes for assessment instruments. In the former case, the misuse cases (errors and omissions) are especially important. By identifying likely errors, a training program may be developed that enables "failing forward," where common mistakes are addressed by appropriate remedial instruction modules and practice exercises that guide the learner through a problem-based approach to deliberate practice. Research has shown that deliberate practice is necessary to accelerate proficiency. In the case of situational judgment test development, the misuse cases can form a set of distractor choices to ensure that the test taker has developed sufficient understanding, or to demonstrate skilled performance during a Potential Performance Analysis (Gandhi et al., 2013). See Appendix F for a more detailed description of the job definition process.
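A situational judgment item built this way pairs one best-practice response with distractors drawn from the misuse cases. The sketch below illustrates the structure; the scenario and answer options are hypothetical, invented for illustration rather than drawn from the panels' elicited results.

```python
from dataclasses import dataclass

@dataclass
class SJTItem:
    """A situational judgment test item: one best-practice response
    plus distractors drawn from elicited errors and omissions."""
    scenario: str
    best_practice: str
    distractors: list  # misuse cases

    def score(self, choice: str) -> int:
        # Credit only the best-practice response; choosing a misuse case
        # signals a gap that remedial instruction should address.
        return 1 if choice == self.best_practice else 0

# Hypothetical item modeled loosely on the ATR master vignette.
item = SJTItem(
    scenario="Large amounts of sensitive data appear online with no clear signs of intrusion",
    best_practice="Preserve relevant logs and begin initial triage",
    distractors=[
        "Re-image the suspected hosts immediately",  # error: destroys forensic evidence
        "Wait for the next scheduled malware scan",  # omission: no timely response
    ],
)
print(item.score("Preserve relevant logs and begin initial triage"))  # prints 1
```

Because each distractor encodes a specific, plausible error, a wrong answer is diagnostic: it points at the particular misuse case the test taker did not recognize.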

RESULTS AND DISCUSSION

Cybersecurity Roles and Definition: Individuals and Teams

Throughout the process of eliciting vignettes, roles, goals, tasks, responsibilities, and process steps, it became abundantly clear that few cybersecurity professionals have taken the time to document and debate at length what it takes to be a cybersecurity professional. Considering that many members of the panels were currently or previously in leadership roles with responsibility for interviewing, training, and developing cybersecurity teams, this revelation is disconcerting. The value of this endeavor lay not only in the resulting output but also in the scaffolded thought process that brought the panels down to the essential abilities required of cybersecurity practitioners (summarizing, deducing, recognizing patterns, filtering, etc.).

When posting a cybersecurity vacancy, we identify the most quantifiable needs of the position, such as experience with a brand X firewall product. Additionally, common interpersonal and team skills are always included in any position description. However, if an organization were to take the time necessary to ask the above questions related to the role and its responsibilities, additional knowledge, skills, and abilities would become much more readily identifiable. As previously stated, cybersecurity is a fast-moving practice. The underlying qualities required to successfully execute many of its tasks are undefined. This makes it extremely difficult to identify the correct person for a role based on these abilities, even within your own team. These abilities include being able to shift from one new threat to another, not knowledge of a specific tool or technique. In a response scenario, all of the abilities are in play. It cannot be expected that a single individual will have mastered all of them. Therefore, a team approach may be best, assembling many of the required abilities to address nascent threats.
From an individual contributor's standpoint, there is also value in developing these abilities, which are arguably the most portable capabilities of a cybersecurity professional. The majority of cybersecurity training focuses on tactics and techniques, which change with the scenario; the abilities, such as recognizing patterns and filtering, remain constant. Experience with tactics, techniques, and procedures will always be valuable for making an immediate impact, but in the highly adaptive cybersecurity arena, these underlying abilities are what allow us to adapt to the latest threats. In short, these abilities are given only cursory consideration when designing individual development plans, even though all other components emanate from them. Identifying key knowledge, skills, and abilities supports the development of the individual, and of the team, as only a well-assembled team can amass the necessary experience, from novice to expert, across a range of threats to present a formidable defense.

Process Results and Findings Summary

As addressed in the Methodology section, the ATR and OST panels each elicited job performance model components of the related DHS mission-critical job role. Each panel identified a vignette to guide this elicitation and identified goals, responsibilities, master process steps, process steps, knowledge requirements, tools, and abilities associated with their selected job roles. This section provides the results and findings of both panels' efforts to elicit these job performance model components.

The first activity for the panels was to identify a ground truth vignette to serve as the basis for defining job competencies for their selected job role. Using the list of possible vignettes developed through panel collaboration (see results in Appendix B), each panel's leadership selected a vignette to use in the job competency analysis of its selected job role. To support the identification of job performance components for a Security Monitoring & Event Analyst, the ATR panel selected the discovery of large amounts of sensitive data posted to the Internet with no clear signs of intrusion. To support the identification of job performance components for a System & Network Penetration Tester, the OST panel selected the Red Team assessment of a high-security network. More detailed mission statements and scenario descriptions were created for each of these vignettes and, as addressed above, these vignettes and their more detailed mission definitions provided the scaffolding necessary to spur panel participants' generation of scenario-based task/ability descriptors (see Appendix C for detailed vignette descriptions).

With the vignettes identified and described, the panels brainstormed a list of process steps for an organization or team responding to the critical incident described in each vignette. Guided by the information gained from the process steps and existing models, panel leadership created master process steps and sorted the process steps into their related master process steps. The ATR panel identified 57 process steps that were sorted into 9 master process steps, and the OST panel identified 67 process steps that were sorted into 7 master process steps (see Table 4 below). Following the development of the process steps, the panels developed response goals and job responsibilities for the critical job role in addressing the critical event.
The ATR panel elicited six response goals and 30 responsibilities for a Security Monitoring & Event Analyst in response to the discovery of large amounts of sensitive data posted to the Internet with no clear signs of intrusion. The OST panel identified four response goals and 18 responsibilities for a Network & System Penetration Tester in support of a Red Team assessment of a high security network. Table 3 includes the response goals for each job role and the number of job responsibilities associated with each goal; the detailed results are available in Appendix D.

Table 3. Response goals for each job role (number of associated job responsibilities in parentheses)

Security Monitoring & Event Analyst (ATR):
  Communicate incident to appropriate internal groups such as security management, IT management, HR, and Legal (1)
  Conduct an initial triage of the incident (9)
  Directly support the incident response process (e.g., assist in validating that containment for hosts or network segments was effective) (6)
  Follow, identify, or update process documentation that covers this scenario (3)
  Gather potential indicators or artifacts of compromise from end user workstations (3)
  Maintain situational awareness throughout the incident for the organization (8)

Network & System Penetration Tester (OST):
  Develop and execute penetration testing strategy (8)
  Differentiate between vulnerabilities that are meaningful to the assessment and those that are not (2)
  Manage the administration and logistics of the assessment (5)
  Understand, demonstrate, and educate the client on the real-world impact of threats to vulnerabilities in a given environment (3)

Informed by the process steps, goals, and job responsibilities, the panels collaborated to identify the knowledge requirements and tools necessary for the job role to support the master process steps associated with their vignette. The ATR panel identified 77 knowledge requirements and 92 tools for a Security Monitoring & Event Analyst to address its vignette. The OST panel identified 76 knowledge requirements and 60 tools for a Network & System Penetration Tester to address its vignette. A summary of the number of process steps, knowledge requirements, and tools identified for both job roles is available in Table 4.

Table 4. Job performance results

ATR
  Master Process Step   Process Steps   Knowledge   Tools
  Preconditions         8               12          2
  Onset                 3               0           6
  Preparation           2               7           14
  Identification        8               17          27
  Containment           13              8           12
  Eradication           11              8           15
  Recovery              5               9           5
  Lessons Learned       3               7           8
  High Level Steps      4               9           3

OST
  Master Process Step   Process Steps   Knowledge   Tools
  Preconditions         9               13          3
  Onset                 0               0           0
  Recon                 12              10          9
  Enumeration           10              14          13
  Exploitation          15              14          16
  Escalation            14              15          15
  After Action          7               10          4

Finally, the panel members identified abilities required of, or greatly beneficial to, an individual carrying out the job responsibilities in each of the process steps identified for the respective vignettes. Using a list of 30 abilities, the panel members associated abilities with process steps (see Appendix E for abilities and their definitions). A specific ability was deemed important to a process step if 55% or more of the participants associated the ability with that process step. These results were aggregated at the master process step and vignette level by counting the number of process steps within the master process step and vignette that the ability was deemed by the panel to support. Tables 5 and 6 show the five most involved abilities for the ATR and OST job roles in their respective vignettes. The complete results for the vignettes, as well as the results by master process step and process step, are in Appendix D.

Table 5. Five most involved ability requirements for ATR vignette
  Summarizing (representing the whole in a condensed statement): 15 steps (26.3%)
  Organizing a message (sequencing elements for the best impact): 13 steps (22.8%)
  Contextualizing (connecting related parts to the environment): 11 steps (19.3%)
  Recognizing Patterns (perceiving consistent repetitive occurrences): 11 steps (19.3%)
  Deducing (arriving at conclusions from general principles): 10 steps (17.5%)
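The consensus rule and roll-up described above can be sketched in code. This is an illustrative reconstruction, not the project's actual tooling: the vote counts, step names, and function names below are hypothetical, and only the 55% threshold and the "# of steps / % of steps" roll-up come from the report.

```python
# Illustrative sketch: link an ability to a process step when >= 55% of
# panelists associate the two, then roll results up to the vignette level
# as "# of steps" and "% of steps" (the form used in Tables 5 and 6).
from collections import defaultdict

THRESHOLD = 0.55  # consensus threshold from the report

def important_abilities(votes_by_step, n_panelists):
    """votes_by_step: {step: {ability: vote_count}} -> {step: set of abilities}."""
    return {
        step: {a for a, v in votes.items() if v / n_panelists >= THRESHOLD}
        for step, votes in votes_by_step.items()
    }

def vignette_rollup(linked, total_steps):
    """Per ability: (number of steps it supports, percent of all steps)."""
    counts = defaultdict(int)
    for abilities in linked.values():
        for a in abilities:
            counts[a] += 1
    return {a: (n, round(100.0 * n / total_steps, 1)) for a, n in counts.items()}

# Hypothetical example: 10 panelists voting across 3 process steps.
votes = {
    "triage": {"Summarizing": 7, "Deducing": 4},
    "identify": {"Summarizing": 6, "Recognizing Patterns": 9},
    "contain": {"Summarizing": 3, "Recognizing Patterns": 8},
}
linked = important_abilities(votes, n_panelists=10)
print(vignette_rollup(linked, total_steps=3))
```

With the ATR vignette's 57 process steps, an ability linked to 15 steps yields the 26.3% figure shown in Table 5.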

Table 6. Five most involved ability requirements for OST vignette
  Selecting Tools (finding methods to facilitate solution): 31 steps (45.6%)
  Contextualizing (connecting related parts to the environment): 30 steps (44.1%)
  Deducing (arriving at conclusions from general principles): 25 steps (36.8%)
  Reusing Solutions (adapting existing methods/results): 25 steps (36.8%)
  Planning (deciding how to use resources to achieve goals): 19 steps (27.9%)

Implications for Security Programs

Cybersecurity is ultimately about people engaged in a contest playing out on a field of information/operational systems, communications, and technology. The U.S. suffers from a specific shortage of technically skilled security engineers and defense experts to fill existing positions. The dynamic nature of cybersecurity requires an adaptive approach to identifying and developing the competencies and skills needed to address emerging challenges. The constantly growing demand for, and limited supply of, talent has made it increasingly difficult to fill both government and private sector positions. The more successful hiring practices are not scalable, as they require competent security professionals to evaluate candidates and engage them in detailed discussion. Successful recruitment depends on committing expert resources to assess whether candidates possess valued experience, judgment, and skill, as well as whether candidates suit the existing team chemistry. This process is difficult to scale unless we focus on defining the critical incidents (or scenarios) and corresponding behavioral responses that indicate the presence of valued competencies.

"Having gone through countless interview loops, it's safe to say, it takes talented engineers to hire talented engineers." (Billy Rios, September 2012)

Ensuring an adequate supply of competent cybersecurity professionals will require more than finding and luring top talent.
In short, candidates with sufficient experience to meet the burgeoning demand are in short supply. Consequently, cybersecurity workforce programs must develop a lifecycle approach to managing their talent pools. Since many programs are unable to identify and articulate the competencies they need to fill gaps, they need to spend more effort on development, assessment, and rewarding progression through the stages of a cybersecurity career. This begins with understanding the type of work, that is, the situations and scenarios that are the best indicators of developing expertise.

Figure 2. Human Capital Lifecycle: Attract, Hire, Train, Develop, Deploy, Measure, Reward

This project has demonstrated a process organizations may use for identifying and describing the situations that indicate expertise in security testing and advanced threat response. The first important finding is the value of a brainstorming process for developing ground truth scenarios (also referred to as vignettes). The simple process of scenario generation and elaboration provides an incredible information sharing and learning opportunity for the parties involved. The very act allows an organization to consider lessons learned by reconstructing past incidents and by collecting and applying external learning and data about threats and threat actors. The early panel sessions focusing on scenario brainstorming, selection, and elaboration facilitated meaningful exchanges between our subject matter expert panels. The outcome is a rich scenario that can be used for various purposes, from the identification of job roles, process steps, and responsibilities (identifying the staffing needs of an organization and the competency mix) to exercising an organization's security capabilities or serving as a basis for risk assessment and analysis.

Implications for Security Programs:
  Brainstorming and elaboration of ground truth scenarios
  Establishing detailed job role descriptions that include a model of the knowledge, tool proficiency, and underlying abilities to support recruiting, hiring decisions, formulation of development and training programs, and job performance measures
  Identifying human competency gaps as a vulnerability dimension for an organization's risk assessment process

The scenario-driven Job Competency Models provided by this study form a basis for a more comprehensive job analysis. Job analysis is a method by which we understand the nature of work activities by breaking them down into smaller components (Brannick, Levine, & Morgeson, 2007; McCormick, 1979).
As the name implies, many job analyses focus primarily on the attributes of the work itself, and then link these work attributes to job-relevant knowledge, skills, abilities, and other work-related characteristics including attitudes and motivation (KSAOs). Collectively, the KSAOs represent the competencies required for a job. An individual employee would need to possess these competencies to successfully perform the work (Shippmann et al., 2000). The purpose of the job analysis is to provide detailed information about the job that will guide various aspects of managing performance, such as the development of training materials, testing for selection and competency evaluation, and developmental plans. Research suggests that innovations are necessary to increase the depth and complexity of these models to match the increasing complexity of today's jobs (Smit-Voskuijl, 2005). NBISE innovated on this process by using scenarios and by assembling subject matter experts (SMEs) with different levels of expertise who work in a variety of organizational settings, allowing our study to increase generalizability across domains (Morgeson & Dierdorff, 2011). A scenario-based job performance model increases both the depth of analysis and the breadth of application. The results may be used to create job position descriptions, job role and expectation documents, instructional designs, and performance management programs. The scenarios presented in this study begin to shed light on the competencies necessary for dealing with advanced and sophisticated cyber threats. There is perhaps no working environment more complex and dynamic than that facing those battling advanced persistent threats. Asymmetrical threats challenge traditional security methods and practices, demonstrating the growing need for new and better practices and, more importantly, greater levels of expertise. The National Institute of Standards and Technology defines an advanced persistent threat (APT) as that which results from an attack by "an adversary that possesses sophisticated levels of expertise and significant resources which allow it to create opportunities to achieve its objectives by using multiple attack vectors (e.g., cyber, physical, and deception).
These objectives typically include establishing and extending footholds within the information technology infrastructure of the targeted organizations for purposes of exfiltrating information, undermining or impeding critical aspects of a mission, program, or organization; or positioning itself to carry out these objectives in the future. The advanced persistent threat: (i) pursues its objectives repeatedly over an extended period of time; (ii) adapts to defenders' efforts to resist it; and (iii) is determined to maintain the level of interaction needed to execute its objectives." Despite the severity and increasing presence of APTs, little was known about the competencies necessary to respond to an advanced persistent threat. This report provides an important first look at the competencies our Advanced Threat Response panel determined are required to respond to the APT scenario developed for this study. Effective security and response against highly advanced cyber threats requires a current understanding of adversary capabilities, the opportunity to experience directed attacks in order to become familiar with observables and experiment with response actions, and the use of a team training framework to optimize defender tactics, techniques, and procedures. We must embrace customized training systems that adapt to individual skill profiles and accelerate skill development through practice defending against simulated attacks. The infusion of real-world security stories, or grounded scenarios, into instructional designs or assessment instruments replaces what were traditionally hypothetical concepts with exercises or simulations of real events that better facilitate learning and preparation. It is time that we more formally assess and prepare these individuals to ensure they are competent, prepared, and capable of making the right decisions day-to-day and during emergencies, despite the distraction or distress created by a constantly shifting adversarial threat.
Frincke & Ford (2010) indicate why the development of a competency map for cybersecurity professionals has been so difficult. Even the development of a simple depiction of knowledge requirements for Advanced Threat Response is challenging. First, it is difficult or impossible to define a typical practitioner. Second, it is not known how practitioners derive their knowledge, whether from books or the Internet, through tweets or RSS feeds. Finally, it is unclear what differentiates foundational knowledge from specialized knowledge. They conclude that a competency framework must determine whether the knowledge needed is universal or changes based on role and responsibility. For instance, a researcher trying to design a lab test of an advanced persistent threat would need knowledge of past attacks in order to design a test that could accurately represent an attack. On the other hand, a security expert would need not only basic knowledge of past attacks but also the knowledge of how to detect an attack and mount the right defenses in real time. Competency models will further aid human capital planning and workforce management by providing a basis for planning, decision-making, development, and staffing. A workforce development program must be holistic in the way it measures, develops, and supports cybersecurity expertise (Assante & Tobey, 2011). Holistic in this context means addressing the full complement of human factors that determine expertise development (book knowledge, hands-on skills, innate abilities, cognitive/behavioral influences), including all phases of the workforce development cycle (assessment, training, certification, re-testing, professional development, communities of practice, etc.). Accordingly, a roadmap for developing the next generation cyber workforce should be based on: 1) detailed understanding of job requirements; 2) multidimensional aptitude assessments; 3) customized training and simulation; 4) knowledge- and performance-based measurement of skills; 5) personal development plans for continual development; and 6) use of performance support systems and simulation to continually refresh these skills based on ground truths.
The outcomes from this study provide detailed lists of the knowledge areas, tool proficiencies, and underlying human abilities identified as important in responding to the elaborated scenarios. These tables are useful for understanding the demands placed on cyber defenders under specific scenarios and conditions. The use of multiple scenarios should provide adequate coverage to complete a specific job role's performance model. Scenarios can also be selected and prioritized based on risk (threat scenarios in particular, based on an organization's risk libraries). We must be better prepared to learn about our weaknesses; identify and understand new threats; and make better design, deployment, and operations decisions. The traditional risk assessment program begins with taking an inventory of assets and processes, understanding the organization's or business's reliance on those assets, and determining the hazards that can impact the productivity or functioning of those assets. The overall security posture is determined by evaluating the mitigations or controls that prevent or diminish the hazards from occurring and affecting the organization. A significant element of that process includes inventorying the vulnerabilities that may allow a particular hazard to actualize or have a negative impact. We are familiar with inventorying technical weaknesses or physical gaps, and for physical security threats we try to calculate the capability and timeliness of the security response to evaluate the necessary detect-delay-respond cycle. However, the panel discussions suggest that shortfalls in specific competencies also have a significant impact on risk exposure. Threats dubbed Advanced Persistent Threats (APTs) were identified as extremely challenging, stretching or exceeding the skills of the cyber defense team.
There are some emerging efforts to calculate the vulnerability that exists by comparing the competency inventory and assessment of the cyber defense staff against cyber threats and their particular tactics, techniques, and procedures. Security programs should consider the competency of the cyber defense team (our research found that this group extends far beyond the information security team to include infrastructure and information technology support and system owners) as an element of their overall risk calculus.
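One way such a competency-based vulnerability comparison might begin is sketched below: the gap between the competency levels a threat scenario demands and the best levels available on the team becomes an input to the risk calculus. All names, levels, and the `competency_gap` function are hypothetical illustrations, not an assessment method from this report.

```python
# Illustrative sketch: express human-competency shortfalls as a
# vulnerability dimension by comparing scenario-required competency
# levels (e.g., on a 1-5 scale) against the team's inventory.

def competency_gap(required, team_inventory):
    """required: {competency: needed_level};
    team_inventory: {member: {competency: level}}.
    Returns {competency: shortfall} where no team member meets the need."""
    best = {}  # best level available on the team for each competency
    for skills in team_inventory.values():
        for c, lvl in skills.items():
            best[c] = max(best.get(c, 0), lvl)
    return {c: need - best.get(c, 0)
            for c, need in required.items()
            if best.get(c, 0) < need}

# Hypothetical scenario requirements and team inventory.
required = {"malware_reversing": 4, "log_analysis": 3, "memory_forensics": 4}
team = {"analyst_a": {"log_analysis": 4, "malware_reversing": 2},
        "analyst_b": {"memory_forensics": 3, "log_analysis": 3}}
print(competency_gap(required, team))
```

A nonempty gap for a high-priority threat scenario would flag a staffing or training vulnerability alongside the usual technical and physical ones.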

Implications for Challenges and Competitions

As discussed in the introduction, one purpose of the current study is to demonstrate a process for eliciting a detailed definition of mission-critical roles in order to establish an improved framework for evaluating and benchmarking performance in cybersecurity competitions. Experiential Learning Theory (ELT; Kolb, 1984) provides the guidance necessary to integrate mission-critical role definitions and cybersecurity game design evaluation. Based on ELT, Kiili (2005) proposed an experiential game model in which game balance is achieved through a cyclical model of ideation, experience, and challenge that engages the participant by using scaffolding to maintain activity at the edge of the participant's zone of proximal development (Chaiklin, 2003; Vygotsky, 1966). According to Kiili's experiential game model, cybersecurity competition challenges should first be evaluated for their degree of alignment with vignettes that describe a vulnerability pattern or critical incident appropriate for the expertise stage of the participants. The recent development of semantic templates (Gandhi et al., 2010; Wu, Siy, & Gandhi, 2011) for establishing standardized definitions of vignettes may help to facilitate the mapping of game designs to cybersecurity job performance models (O'Neil et al., 2012; Tobey et al., 2012; Tobey, 2011). Once competition challenges have been mapped to a job performance model vignette, the weighting determined for each competency factor, in terms of the degree to which the task is fundamental or differentiating, may be used to establish a standardized evaluation framework that can guide accreditation or validation of a competition scoring model. Validated competition scoring models are required for comparing or aggregating scores across multiple competitions, whether in support of the planned National Cyber Cup or to support other assessment systems, including performance-based testing in simulation environments.
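A standardized weighting of fundamental versus differentiating competencies could be prototyped roughly as follows. This is a hedged sketch of one possible scoring approach, not the framework validated in this study; the weight values, task names, and `challenge_score` function are hypothetical.

```python
# Illustrative sketch: score a competition challenge by weighting each
# mapped task according to whether the job performance model marks it
# "fundamental" (baseline competence) or "differentiating" (separates
# experts from novices), yielding a normalized score comparable across
# challenges mapped to the same vignette.

FUNDAMENTAL, DIFFERENTIATING = "fundamental", "differentiating"
WEIGHTS = {FUNDAMENTAL: 1.0, DIFFERENTIATING: 2.0}  # hypothetical weights

def challenge_score(observed, model):
    """observed: {task: performance in [0, 1]}; model: {task: category}.
    Returns a weighted score normalized to [0, 1]."""
    total = sum(WEIGHTS[model[t]] for t in model)
    earned = sum(WEIGHTS[model[t]] * observed.get(t, 0.0) for t in model)
    return earned / total

# Hypothetical mapping of two challenge tasks to model categories.
model = {"port_scan": FUNDAMENTAL, "priv_escalation": DIFFERENTIATING}
print(challenge_score({"port_scan": 1.0, "priv_escalation": 0.5}, model))
```

Because the score is normalized against the vignette's own weights, two competitions mapped to the same job performance model could, in principle, be aggregated or compared on this common scale.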
The first step in this score accreditation is to assess the degree to which a competition design covers the various goals associated with the selected vignette. Second, each challenge should be evaluated in terms of the distribution of responsibilities among team members. By evaluating responsibility distribution, the evaluation framework enables role-specific analysis while still supporting an overall team assessment. Third, the tasks, methods, and tools used by a competition should be assessed using the weights provided in a cybersecurity job performance model to determine the difficulty levels of the gameplay based on the degree to which they indicate fundamental or differentiating competencies. Fourth, the levels of volatility, uncertainty, complexity, and ambiguity (VUCA) manipulated by the game to alter the cognitive load on the participant may provide further guidance on how scoring models compare against other competitions. Fifth, game balance may be assessed using VUCA levels, as cognitive load has been found to be the critical determinant of whether a game is adequately, but not excessively, challenging participants (Kiili, 2005; Sweller, 1994; Tindall-Ford, Chandler, & Sweller, 1997; Wouters et al., 2009). Finally, once a validated scoring model has been established, additional validation analyses may be performed that are not currently possible because cybersecurity competitions lack a standardized scoring model. First, the reliability of the scoring system could be analyzed. This is an area that requires additional research, as currently no studies have evaluated the psychometric reliability of game scores or their generalizability (Brennan, 2001; Cronbach, Gleser, Nanda, & Rajaratnam, 1972; Lord & Novick, 1968). Second, as was suggested by the study of the National Cyber League (Tobey et al., in press), competitions as an educational and career development program could be evaluated in terms of the level of engagement they create among participants.
Comparison of engagement by the depth of learning which demonstrates proficiency (van der Meij, Albers, & Leemkuil, 2011), skill level (Trafimow & Rice, 2009; Weiss & Shanteau, 2003), and/or ability scores (Campbell et al., 1990; Guilford, 1967), as well as by demographic variables, would be especially important for assessing game balance and the value of the competition for facilitating expansion and diversification of the workforce. In summary, the integration of mission-critical role definitions with experiential game theory enables substantial improvement in cybersecurity competition program evaluation models and techniques. The purpose of this study is to demonstrate how to develop the components of a job performance model that may be used to support workforce development and/or to assess cybersecurity competitions to support aggregation and comparison of participant performance. The steps in this modeling effort are to: 1) establish vignettes that define situated expertise in mission-critical roles; 2) detail the goals and objective metrics that determine successful performance; 3) identify the responsibilities by job role necessary to achieve the objectives; and 4) detail the tasks, methods, and tools, along with how competence may differ in the level of fundamental or differentiating indicators of expertise or the level of VUCA that indicates the difficulty of achieving that level of expertise. Finally, once a validated model for scoring performance has been established, competition programs may be evaluated on outcome measures such as generalizability of scores, participant engagement, and support for growth and diversification of the workforce.

Implications for Workforce Development

Compared to the historical snapshot of work produced by a job analysis, competency modeling (CM) is considered a more employee-focused examination of working conditions because it actively links employee behaviors to business goals (Sanchez & Levine, 2009).
CM typically "include(s) a fairly substantial effort to understand an organization's business context and competitive strategy and to establish some direct line-of-sight between individual competency requirements and the broader goals of the organization" (Shippmann et al., 2000, p. 725). Our panels provided the necessary context by describing the conditions under which the work is performed and the fundamental challenges of working in a co-adaptive environment where other humans are observing and responding to one another's actions. The use of vignettes provided the necessary scaffolding to support elicitation of the responsibilities, conditions, and context of the work being performed. The process described above produced a wealth of information directly tied to the identified job roles and their specific responsibilities within the vignettes. The panels logically organized the work needing to be performed at various stages in response to the vignette. Several members found the identification of abilities by job responsibility, and how certain abilities were more prevalent in supporting different but related work tasks, to be very enlightening. Several specifically commented that the unique view of work that was assembled demonstrated gaps in the work being performed and in how individuals are being educated and trained (through industry training, community colleges, and 4-year colleges) for cybersecurity roles.

Quote from an SME panel member on the importance of scenarios to cyber workforce development and practices: "Training in the cyber environment is a challenge, and scenario-based training helps to close the gaps and increase the level of knowledge in a training environment that is nonetheless based on a real-world event. This methodology can also be used to develop the required countermeasures to the cyber adversary. While cyber events may be similar in outcome, the processes leveraged to achieve the outcome will vary and result in non-standard practices. The scenario-based approach will help to reduce the number of non-standard approaches and establish repeatability."

Overall, the panel members were of the unanimous opinion that this study was a comprehensive effort to map the various processes required, and the related abilities required, to perform the job role of either a cybersecurity practitioner or a penetration tester. This approach will also promote the development of the required training budgets to acquire the necessary funding for follow-on training. It validates the training requirements and provides the justification needed to build a budget based on requirements rather than on best guesses or personal opinion. In the end, it will establish a disciplined and rigorous approach to training cybersecurity staff to perform the necessary job roles.

Additionally, this study will assist designers of training curricula in developing junior personnel in these roles. It will also assist with generating the qualification standards needed to qualify cyber personnel and teams. The deconstruction of work by mapping it against a scenario shows how specific technical skills and task execution support higher-level mission objectives and job responsibilities. Panel members saw direct applicability of the assembled data to support job description development and the talent evaluation and hiring processes. This is an important point, as this qualitative measure indicates that the competency tables may provide a useful framework for understanding what knowledge, abilities, and proficiencies in methods and tools are desired. Such a framework can help describe a challenge or contest designed to evaluate an individual's or team's performance.

Implications for Human Capital Management

Panel members indicated that the process of using work scenarios provided a structured set of requirements for job roles and the desired knowledge, experience, abilities, and proficiencies. They were unanimous in recognizing the difficulty of hiring cybersecurity practitioners. They recognized a general lack of standardized training and qualifications, a gap that also exists in other fields such as nursing. Many felt the information assembled by the panels would assist not only in hiring but also in the continued development of employees and in job family planning. The requirements derived from the competency model will help direct and justify training activities and the necessary investments.
Several members felt that scenario-driven competency maps provided a risk-informed and grounded validation of training objectives and needs.

CONCLUSIONS

Next Steps for Competitions and Challenges

This study produced a wealth of information that can be directly applied by competition developers to enhance future competition challenges. Two rich scenarios developed by some of the world's leading practitioners in both threat response and operational security testing can provide the basis for a future game. More importantly, challenge developers can take the understanding of how knowledge, abilities, and tool proficiency map to real work scenarios and use this information to translate their scoring results into real-world job requirements. Once competition challenges have been mapped to a job performance model vignette, the weighting determined for each competency factor, in terms of the degree to which the task is fundamental or differentiating, may be used to establish a standardized evaluation framework that can guide accreditation or validation of a competition scoring model. Validated competition scoring models are required for comparing or aggregating scores across multiple competitions. The first step in this score accreditation is to assess the degree to which a competition design covers the various goals associated with the selected vignette. Second, each challenge should be evaluated in terms of the distribution of responsibilities among team members. By evaluating responsibility distribution, the evaluation framework enables role-specific analysis while still supporting an overall team assessment. Third, the tasks, methods, and tools used by a competition should be assessed using the weights provided in a cybersecurity job performance model to determine the difficulty levels of the gameplay based on the degree to which they indicate fundamental or differentiating competencies.
Fourth, the levels of volatility, uncertainty, complexity, and ambiguity (VUCA) manipulated by the game to alter the cognitive load on the participant may provide further guidance on how scoring models compare against other competitions. Fifth, game balance may be assessed using VUCA levels, as cognitive load has been found to be the critical determinant of whether a game is adequately, but not excessively, challenging the participant (Kiili, 2005; Wouters et al., 2009). Finally, additional validation analyses could be performed that are not currently possible because cybersecurity competitions lack a standardized scoring model. First, the reliability of the scoring system could be analyzed. Second, as was suggested by the study of the National Cyber League (Tobey et al., in press), competitions as an educational and career development program could be evaluated in terms of the level of engagement they create among participants. Comparison of engagement by the depth of learning which demonstrates proficiency, skill level, and/or ability scores, as well as by demographic variables, would be especially important for assessing game balance and the value of the competition for facilitating expansion and diversification of the workforce. In summary, the integration of mission-critical role definitions with experiential game theory enables substantial improvement in cybersecurity competition program evaluation models and techniques. The purpose of this study was to demonstrate how to develop the components of a job performance model that may be used to support workforce development and/or to assess cybersecurity competitions to support aggregation and comparison of participant performance.
The steps in this modeling effort are to: 1) establish vignettes that define situated expertise in mission-critical roles; 2) detail the goals and objective metrics that determine successful performance; 3) identify the responsibilities by job role necessary to achieve the objectives; and 4) detail the tasks, methods, and tools, along with how competence may differ in the level of fundamental or differentiating indicators of expertise or the level of VUCA that indicates the difficulty of achieving that level of expertise. Once such a validated model for scoring performance has been established, competition programs may be evaluated on outcome measures such as generalizability of scores, participant engagement, and support for growth and diversification of the workforce.
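The reliability analysis called for above could start with a classical internal-consistency estimate. The minimal sketch below assumes each challenge is treated as a test "item" and each team as a "respondent"; the `cronbach_alpha` function and the score matrix are hypothetical illustrations, not data or a method from this study.

```python
# Illustrative sketch: Cronbach's alpha over per-challenge scores as a
# first-pass reliability estimate for a standardized competition scoring
# model. Rows are teams; columns are challenges mapped to the same
# job-performance-model vignette.

def cronbach_alpha(scores):
    """scores: list of rows (one per team), each a list of per-challenge scores."""
    k = len(scores[0])  # number of challenges (items)

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical scores: 4 teams across 3 challenges.
scores = [[8, 7, 9], [5, 4, 6], [9, 8, 9], [3, 2, 4]]
print(round(cronbach_alpha(scores), 3))
```

Generalizability theory (Brennan, 2001), cited in the body of the report, would extend this single-facet estimate to multiple sources of score variance (teams, challenges, competitions, raters).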

REFERENCES
Anderson, L. W., Krathwohl, D. R., & Bloom, B. S. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Longman.
Assante, M. J., & Tobey, D. H. (2011). Enhancing the cybersecurity workforce. IEEE IT Professional, 13, 12–15.
Assante, M. J., Tobey, D. H., Conway, T. J., Leo, R., Januszewki, J., & Perman, K. (2013). Developing secure power systems professional competence: Alignment and gaps in workforce development programs (Technical Report No. 2013-SGC-02). Idaho Falls, ID: National Board of Information Security Examiners.
Benner, P. E. (1984). From novice to expert: Excellence and power in clinical nursing practice. Menlo Park, CA: Addison-Wesley.
Berk, R. A. (1980). Criterion-referenced measurement: The state of the art. Baltimore: Johns Hopkins University Press.
Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. New York: Longmans, Green.
Boje, D. M. (1991). The storytelling organization: A study of story performance in an office-supply firm. Administrative Science Quarterly, 36, 106–126.
Boje, D. M. (1995). Stories of the storytelling organization: A postmodern analysis of Disney as Tamara-land. Academy of Management Journal, 38, 997–1035.
Boje, D. M. (2001). Narrative methods for organizational and communication research. London: Sage Publications.
Boyatzis, R. E. (1982). The competent manager: A model for effective performance. New York: Wiley.
Brennan, R. L. (2001). Generalizability theory. New York: Springer.
Briggs, R. O., Vreede, G.-J. de, & Nunamaker, J. F., Jr. (2003). Collaboration engineering with ThinkLets to pursue sustained success with group support systems. Journal of Management Information Systems, 19, 31–64.
Briggs, R. O., Vreede, G.-J. de, Nunamaker, J. F., Jr., & Tobey, D. H. (2001). ThinkLets: Achieving predictable, repeatable patterns of group interaction with group support systems (GSS). Proceedings of the 34th Annual Hawaii International Conference on System Sciences, 1057–1065.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18, 32–42.
Campbell, C. H., Ford, P., Rumsey, M. G., Pulakos, E. D., Borman, W. C., Felker, D. B., & Reigelhaupt, B. J. (1990). Development of multiple job performance measures in a representative sample of jobs. Personnel Psychology, 43, 277–300.
Campion, M. A., Fink, A. A., Ruggenberg, B. J., Carr, L., Phillips, G. M., & Odman, R. B. (2011). Doing competencies well: Best practices in competency modeling. Personnel Psychology, 64, 225–262.
Chaiklin, S. (2003). The zone of proximal development in Vygotsky's analysis of learning and instruction. In A. Kozulin, B. Gindis, V. S. Ageyev, & S. M. Miller (Eds.), Vygotsky's educational theory in cultural context (pp. 39–64). New York: Cambridge University Press. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&an=120644
Charness, N. (1991). Expertise in chess: The balance between knowledge and search. In Toward a general theory of expertise: Prospects and limits (pp. 39–63). Cambridge: Cambridge University Press.
Chin, C., & Brown, D. E. (2000). Learning in science: A comparison of deep and surface approaches. Journal of Research in Science Teaching, 37(2), 109–138.
Conklin, A. (2006). Cyber defense competitions and information security education: An active learning solution for a capstone course. In Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS '06) (Vol. 9, p. 220b).
Crandall, B., Klein, G. A., & Hoffman, R. R. (2006). Working minds: A practitioner's guide to cognitive task analysis. Cambridge, MA: MIT Press.
Cronbach, L. J., Gleser, G., Nanda, H., & Rajaratnam, N. (1972). The dependability of behavioral measurements: Theory of generalizability for scores and profiles. New York: John Wiley & Sons.
Cropanzano, R. S., James, K., & Citera, M. (1993). A goal hierarchy model of personality, motivation and leadership. Research in Organizational Behavior, 15, 1267–1322.
De Long, D. W. (2004). Lost knowledge: Confronting the threat of an aging workforce. Oxford: Oxford University Press.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 37, 32–64.
Franke, V. (2011). Decision-making under uncertainty: Using case studies for teaching strategy in complex environments. Journal of Military and Strategic Studies, 13(2), 1–21.
Gandhi, R. A., Siy, H., & Wu, Y. (2010). Studying software vulnerabilities. Crosstalk: The Journal of Defense Software Engineering, 16–20.
Gandhi, R. A., Tobey, D. H., Reiter-Palmon, R., Yankelevich, M., & Pabst, K. (2013, February). ADAPTS: An evidence-based cyberlearning network for accelerating proficiency. Working paper, Omaha, NE.
Garris, R., Ahlers, R., & Driskell, J. E. (2002). Games, motivation, and learning: A research and practice model. Simulation & Gaming, 33(4), 441–467.
Gibson, S. G., Harvey, R. J., & Harris, M. L. (2007). Holistic versus decomposed ratings of general dimensions of work activity. Management Research News, 30, 724–734.
Gompert, D. C. (2007). Heads we win: The cognitive side of counterinsurgency (COIN) (RAND Counterinsurgency Study Paper 1). Santa Monica, CA: RAND Corporation.
Goodenough, J., Lipson, H., & Weinstock, C. (2012, June 21). Arguing security: Creating security assurance cases. Build Security In. Retrieved May 8, 2013, from https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/assurance/643-BSI.html
Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.
Guilford, J. P. (1956). The structure of intellect. Psychological Bulletin, 53(4), 267.
Hoffman, L. J., Rosenberg, T., Dodge, R., & Ragsdale, D. (2005). Exploring a national cybersecurity exercise for universities. IEEE Security & Privacy, 3(5), 27–33.
Johansen, R. (2007). Get there early: Sensing the future to compete in the present. San Francisco, CA: Berrett-Koehler Publishers.
Joiner, R., Iacovides, J., Owen, M., Gavin, C., Clibbery, S., Darling, J., & Drew, B. (2011). Digital games, gender and learning in engineering: Do females benefit as much as males? Journal of Science Education and Technology, 20(2), 178–185.
Kelly, T. P., & McDermid, J. A. (1997). Safety case construction and reuse using patterns. In Proceedings of the 16th International Conference on Computer Safety, Reliability and Security (SAFECOMP '97) (pp. 55–69). Springer.
Kelly, T. P., & McDermid, J. A. (2001). A systematic approach to safety case maintenance. Reliability Engineering & System Safety, 71(3), 271–284.
Kelly, T. P., & Weaver, R. A. (2004). The Goal Structuring Notation: A safety argument notation. In Proceedings of the Dependable Systems and Networks 2004 Workshop on Assurance Cases.
Kiili, K. (2005). Digital game-based learning: Towards an experiential gaming model. The Internet and Higher Education, 8(1), 13–24.
Klein, G. A. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-Hall.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, England: Cambridge University Press.
Le Deist, F. D., & Winterton, J. (2005). What is competence? Human Resource Development International, 8, 27–46.
Locke, E. A., Shaw, K. N., Saari, L. M., & Latham, G. P. (1981). Goal setting and task performance: 1969–1980. Psychological Bulletin, 90, 125–152.
Lord, F. M., & Novick, M. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the structure of behavior. New York: Henry Holt and Company.
Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). On the structure of educational assessments. Measurement, 1, 3–62.
Mountain, J. R. (2004). The use of applied process control systems design to attract engineering students. In Proceedings of the 34th Annual ASEE/IEEE Frontiers in Education Conference (pp. F4D 6-11). Savannah, GA.
National Institute of Standards and Technology. (2011). NICE Cybersecurity Workforce Framework. Retrieved from http://csrc.nist.gov/nice/framework/
Nunamaker, J. F., Jr., Briggs, R. O., Mittleman, D. D., Vogel, D. R., & Balthazard, P. A. (1997). Lessons from a dozen years of group support systems research: A discussion of lab and field findings. Journal of Management Information Systems, 13, 163–207.
O'Neil, L. R., Assante, M. J., & Tobey, D. H. (2012). Smart Grid Cybersecurity: Job Performance Model Report (Technical Report No. PNNL-21639). Alexandria, VA: National Technical Information Service.
Paulsen, C., McDuffie, E., Newhouse, W., & Toth, P. (2012). NICE: Creating a cybersecurity workforce and aware public. IEEE Security & Privacy, 76–79.
Phillips, V. (2013, March 8). Learning grammar with a joystick and math with a mouse. Huffington Post. Retrieved from http://www.huffingtonpost.com/impatient-optimists/learning-grammar-with-a-j_b_2837828.html?utm_hp_ref=technology
Powers, W. T. (1973). Behavior: The control of perception. Chicago: Aldine.
Prensky, M. (2001). Digital game-based learning. New York: McGraw-Hill.
Reiter-Palmon, R., Brown, M., Sandall, D. L., Buboltz, C., & Nimps, T. (2006). Development of an O*NET web-based job analysis and its implementation in the U.S. Navy: Lessons learned. Human Resource Management Review, 16, 294–309.
Rursch, J. A., Luse, A., & Jacobson, D. (2010). IT-Adventures: A program to spark IT interest in high school students using inquiry-based learning with cyber defense, game design, and robotics. IEEE Transactions on Education, 53(1), 71–79.
Schepens, W. J., & James, J. R. (2003). Architecture of a cyber defense competition. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (Vol. 5, pp. 4300–4305).
Schepens, W. J., Ragsdale, D. J., Surdu, J. R., & Schafer, J. (2002). The Cyber Defense Exercise: An evaluation of the effectiveness of information assurance education. The Journal of Information Security, 1(2).
Schraagen, J. M. (2006). Task analysis. In The Cambridge handbook of expertise and expert performance (pp. 185–201). Cambridge, UK: Cambridge University Press.
Seddigh, N., Pieda, P., Matrawy, A., Nandy, B., Lambadaris, I., & Hatfield, A. (2004). Current trends and advances in information assurance metrics. In Proceedings of the Second Annual Conference on Privacy, Security and Trust (pp. 197–205).
Shilov, N. V., & Yi, K. (2002). Engaging students with theory through ACM collegiate programming contest. Communications of the ACM, 45(9), 98–101.
Smith, K., Shanteau, J., & Johnson, P. (2004). Introduction: What does it mean to be competent? In K. Smith, J. Shanteau, & P. Johnson (Eds.), Psychological investigations of competence in decision making (pp. 1–4). Cambridge, UK: Cambridge University Press.
Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295–312.
Tindall-Ford, S., Chandler, P., & Sweller, J. (1997). When two sensory modes are better than one. Journal of Experimental Psychology: Applied, 3(4), 257.
Tobey, D. H. (2007). Narrative's arrow: Story sequences and organizational trajectories in founding stories. In Standing Conference on Management and Organizational Inquiry. Las Vegas, NV.
Tobey, D. H. (2011). A competency model of advanced threat response (ATR Working Group Report NBISE-ATR-11-02). Idaho Falls, ID: National Board of Information Security Examiners.
Tobey, D. H., Pusey, P., & Burley, D. (in press). Engaging learners in cybersecurity careers: Lessons from the launch of the National Cyber League. ACM Inroads.
Tobey, D. H., Reiter-Palmon, R., & Callens, A. (2012). Predictive performance modeling: An innovative approach to defining critical competencies that distinguish levels of performance (OST Working Group Report). Idaho Falls, ID: National Board of Information Security Examiners.
Tobey, D. H., Wanasika, I., & Chavez, C. I. (2007). PRISMA: A goal-setting, alignment and performance evaluation exercise. In Organizational Behavior Teachers Conference. Pepperdine, CA.
Trafimow, D., & Rice, S. (2009). Potential performance theory (PPT): Describing a methodology for analyzing task performance. Behavior Research Methods, 41, 359–371.
Tyler, J. A., & Boje, D. M. (2008). Sorting the relationship of tacit knowledge to story and narrative knowing. In Handbook of research on knowledge-intensive organizations. Hershey, PA: Information Science Reference.
Van der Meij, H., Albers, E., & Leemkuil, H. (2011). Learning from games: Does collaboration help? British Journal of Educational Technology, 42(4), 655–664.
Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: A meta-analysis. Journal of Educational Computing Research, 34(3), 229–243.
Vygotsky, L. S. (1966). Thought and language. Cambridge, MA: MIT Press.
Weaver, R. A., McDermid, J. A., & Kelly, T. P. (2002). Software safety arguments: Towards a systematic categorisation of evidence. In Proceedings of the 20th International System Safety Conference (ISSC 2002). Denver, CO.
Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the process of sensemaking. Organization Science, 16, 409–421.
Weiss, D. J., & Shanteau, J. (2003). Empirical assessment of expertise. Human Factors: The Journal of the Human Factors and Ergonomics Society, 45, 104–116.
White, G. B., & Williams, D. (2005). The collegiate cyber defense competition. In Proceedings of the 9th Colloquium for Information Systems Security Education.
Wicker, F. W., Lambert, F. B., Richardson, F. C., & Kahler, J. (1984). Categorical goal hierarchies and classification of human motives. Journal of Personality, 52, 285–305.
Wouters, P., van der Spek, E. D., & van Oostendorp, H. (2009). Current practices in serious game research: A review from a learning outcomes perspective. In T. Connolly, M. Stansfield, & L. Boyle (Eds.), Games-based learning advancements for multi-sensory human computer interfaces: Techniques and effective practices (pp. 232–250). Hershey, PA: Information Science Reference.
Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. D. (in press). A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology.
Wu, Y., Siy, H., & Gandhi, R. (2011). Empirical results on the study of software vulnerabilities. In Proceedings of the International Conference on Software Engineering (ICSE 2011) (pp. 964–967). Honolulu, HI: Association for Computing Machinery.
Zbylut, M. L., & Ward, J. N. (2004, December 29). Developing interpersonal abilities with interactive vignettes. Poster presented at the Army Science Conference, Orlando, FL. Retrieved from http://www.dtic.mil/dtic/tr/fulltext/u2/a433166.pdf

i The Department of Homeland Security (DHS) Homeland Security Advisory Council (HSAC) provides advice and recommendations to the Secretary on matters related to homeland security. The Council comprises leaders from state and local government, first responder communities, the private sector, and academia. Source: http://www.dhs.gov/homeland-security-advisory-council-hsac
ii Control Engineering Magazine, "Working in the Cybersecurity Red Zone," April 2013. Source: http://www.controleng.com/single-article/working-in-the-cyber-security-red-zone/98d5f710139d2da28b8e51ea723bb945.html
iii "Ground truth" describes the work scenario (also referred to as a vignette) to impart the importance of using work stories that are current. The dynamic nature of the field means the work problems are changing.