VxWorks Guest OS Programmer's Guide for Hypervisor 1.1, 6.8

Copyright 2009 Wind River Systems, Inc.

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means without the prior written permission of Wind River Systems, Inc.

Wind River, Tornado, and VxWorks are registered trademarks of Wind River Systems, Inc. The Wind River logo is a trademark of Wind River Systems, Inc. Any third-party trademarks referenced are the property of their respective owners. For further information regarding Wind River trademarks, please see: www.windriver.com/company/terms/trademark.html

This product may include software licensed to Wind River by third parties. Relevant notices (if any) are provided in your product installation at the following location: installdir/product_name/3rd_party_licensor_notice.pdf.

Wind River may refer to third-party documentation by listing publications or providing links to third-party Web sites for informational purposes. Wind River accepts no responsibility for the information provided in such third-party documentation.

Corporate Headquarters
Wind River
500 Wind River Way
Alameda, CA 94501-1153
U.S.A.

Toll free (U.S.A.): 800-545-WIND
Telephone: 510-748-4100
Facsimile: 510-749-2010

For additional contact information, see the Wind River Web site: www.windriver.com

For information on how to contact Customer Support, see: www.windriver.com/support

30 Oct 09

Contents

1 Overview
  1.1 Introduction
  1.2 What is the Wind River Hypervisor?
  1.3 What is the VxWorks Guest OS?
    1.3.1 VxWorks Guest OS and Native VxWorks
      Supported Platforms and Architectures
      Supported Device Drivers
      Limitations and Unsupported VxWorks Features
      Usage Caveats
  1.4 Additional Documentation

2 Hypervisor Guest OS System Configurations
  2.1 Introduction
  2.2 VxWorks Standalone
  2.3 Multiple VxWorks Instances
  2.4 VxWorks and Wind River Linux
  2.5 VxWorks and Virtual Board Applications

3 Getting Started with the VxWorks Guest OS
  3.1 Introduction
  3.2 Development Workflow
  3.3 Configuring and Building the VxWorks Guest OS
    3.3.1 Developing Drivers and BSPs
    3.3.2 Configuring and Building the VxWorks Guest OS Libraries with VSB
    3.3.3 Configuring and Building the VxWorks Image with a VIP

4 VxWorks Guest OS Development Environment
  4.1 Introduction
  4.2 Virtual Board Interface (VBI) Support
  4.3 General Interface Variations
    Power Management
    RTPs
  4.4 Architecture Considerations
    4.4.1 IA-32
    4.4.2 PowerPC

5 BSP and Device Driver Considerations
  5.1 Introduction
    BSP Integration
  5.2 Hardware Interface Development Workflow
  5.3 Device Driver Development and Integration
    5.3.1 Porting a Native VxWorks Driver to VxWorks Guest OS
    5.3.2 Available Guest OS Device Drivers
      Interrupt Controller Drivers
      Timer Drivers
      Network Drivers
      PCI Configuration Space
    5.3.3 Configuring the Hypervisor System to Support a Guest OS Driver
    5.3.4 Configuring and Building Guest OS Device Drivers
      Configuring Guest OS Drivers
      Building Device Drivers
  5.4 BSP Development
    5.4.1 Native BSP Development and the Native VxWorks Boot Process
    5.4.2 Guest OS BSP Development
    5.4.3 Configuring the BSP
    5.4.4 Building the BSP
    5.4.5 Configuring the Hypervisor for the Virtual Board

A Glossary

B BSP and Driver Migration
  B.1 Introduction
  B.2 Device Driver Migration
  B.3 BSP Migration

Index


1 Overview

1.1 Introduction
1.2 What is the Wind River Hypervisor?
1.3 What is the VxWorks Guest OS?
1.4 Additional Documentation

1.1 Introduction

The VxWorks 6.8 Guest OS for Hypervisor is a special configuration of the VxWorks operating system designed to run in a Wind River Hypervisor system. This document briefly describes the overall hypervisor system and how the VxWorks 6.8 Guest OS fits into that system. It describes how to configure and build the guest OS so that it can be included in a hypervisor system that suits your development needs. Other topics that are covered include:

- limitations and usage caveats associated with the VxWorks 6.8 Guest OS
- interface and behavior variations in the guest OS when compared to a native or standalone VxWorks instance running directly on target hardware
- hardware interface (BSP and device driver) development

This document is intended to supplement the information provided with both the Wind River Hypervisor and the standard Wind River General Purpose Platform, VxWorks Edition documentation set. For more information, see 1.4 Additional Documentation.

NOTE: The VxWorks 6.8 Guest OS is designed for use with Wind River Hypervisor, an optional add-on to your Platform. Guest OS support is available for the Wind River General Purpose Platform, VxWorks Edition. For more information, contact your Wind River representative.

1.2 What is the Wind River Hypervisor?

Virtual Boards

Wind River Hypervisor is a virtualization platform that can be used to partition the physical hardware of a computer system, allowing one or more independent guest operating systems or virtual board applications (which run directly on the virtual board without operating system support) to run on the partitioned hardware. The hypervisor provides a virtualization environment called a virtual board. The hypervisor provides a virtualized or paravirtualized interface to physical devices in the system using the virtual board context.

Virtual boards are created (configured) by the hypervisor and presented as a target platform for a guest OS or a virtual board application to run on. When the operating system is running as a guest OS, it is not aware of other guest OS instances or applications running on the target hardware (each in their own virtual board context). The guest OS operates as though it is a standalone operating system running on a target board with the exact configuration provided by the virtual board context.

Physical hardware resources are controlled by the hypervisor. Although the guest OS often has direct access to the target hardware resources, that access is always gated (allowed or disallowed) by the hypervisor. The hypervisor uses the virtual board context to present a virtual target hardware configuration to the guest OS. This includes the presentation of virtual or paravirtual versions of the target hardware devices (such as network, timer, or interrupt controller devices).

Hypercalls and the Virtual Board Interface

In a standard VxWorks system, the kernel has direct access to the hardware and direct control over hardware resources. Because the guest OS runs on a virtual board and not on the target hardware itself, it is limited in its access to the hardware. These limits are imposed by the hypervisor. For this reason, the hypervisor provides a virtual board interface, which includes a set of calls (called hypercalls) that allow the guest OS to access certain privileged operations on the target hardware.

1.3 What is the VxWorks Guest OS?

Guest OS support for VxWorks enables the VxWorks operating system to run on the Wind River Hypervisor (within a virtual board context). Guest OS support requires changes to the core VxWorks operating system as well as to board support packages and device drivers.

Figure 1-1 shows an example configuration of VxWorks running as a standalone guest OS on the Wind River Hypervisor. On any given hypervisor system, there can be one or more VxWorks Guest OS instances, and these instances can potentially run in combination with any number of Linux instances or virtual board applications (VBAs). Each guest OS runs within the virtual board context provided by the hypervisor. That virtual board provides virtualized target hardware to the guest OS.

Figure 1-1: VxWorks Running as a Guest OS on Wind River Hypervisor
[Diagram: a VxWorks Guest OS runs in a virtual board on top of the Wind River Hypervisor, which runs on the target hardware.]

You should note that when running within a virtual board context, the VxWorks Guest OS is limited in its capabilities and supported features by both the configuration of the VxWorks project and the capabilities exposed to the virtual board within the hypervisor configuration. VxWorks does not run natively on the target hardware. In general, hardware access is controlled by the hypervisor through the virtual board context.

1.3.1 VxWorks Guest OS and Native VxWorks

Behavior of the VxWorks Guest OS closely parallels that of the native VxWorks operating system (that is, VxWorks running directly on target hardware). Therefore, this document assumes that you are generally familiar with development in a native VxWorks system. Cross-references to additional information are provided as needed.

Supported Platforms and Architectures

In this release, the hardware platforms and architectures supported by the VxWorks Guest OS are a subset of those supported by the native VxWorks release. For specific target hardware support, see the release notes for your Wind River Hypervisor product.

Supported Device Drivers

Information on supported device drivers is provided in 5. BSP and Device Driver Considerations.

Limitations and Unsupported VxWorks Features

In general, the goal of the VxWorks Guest OS is to provide the same level of functionality and support in the hypervisor environment (or virtual board context) that is available with native VxWorks running standalone on target hardware. However, in some cases, the VxWorks feature set is limited by the support provided by the Wind River Hypervisor.

In general, the following limitations apply to the VxWorks Guest OS:

- VxWorks Guest OS support is based on the Wind River General Purpose Platform, VxWorks Edition. Support for additional Wind River VxWorks Platforms, such as Wind River Platform for Network Equipment, VxWorks Edition, is not available in this release.
- Current architecture support is limited to IA-32 and PowerPC. (For more information, see the Wind River Hypervisor Release Notes.)
- Device driver support is limited. For details, see 5.3.2 Available Guest OS Device Drivers.
- Symmetric multiprocessing (SMP) configurations are not supported. The native VxWorks unsupervised asymmetric multiprocessing (AMP) implementation is also unavailable (this support can be replaced using Wind River Hypervisor capabilities).
- Power management support is not available in this release.
- Support for real-time processes (RTPs) is not available in this release.
- Hardware breakpoints are not supported in this release.
- Interrupt stack protection for IA-32 targets is not available.
- Cache and MMU support is limited for PowerPC targets. (For more information, see 4.4 Architecture Considerations.)

Usage Caveats

When running within a virtual board context, the ability of the VxWorks Guest OS to respond to interrupts is directly impacted by the configuration of the virtual board. Therefore, it is critical that the two configurations (the virtual board configuration, for example vxworks.xml, and the VxWorks BSP configuration, hwconf.c and config.h) are well matched. You must also be aware of the configuration of other virtual boards (and their load on the system) in order to understand their potential impact on the overall system.

1.4 Additional Documentation

The VxWorks 6.8 Guest OS for Hypervisor 1.1 Programmer's Guide (this document) does not serve as a standalone document for VxWorks or hypervisor development. Because VxWorks Guest OS development closely parallels the process used for standard VxWorks development, this guide is intended as a supplement to the standard VxWorks documentation set. In cases where the information in this document conflicts with, or imposes more restrictions than, that provided in the standard VxWorks documentation set, you should assume that the information in this guide supersedes other VxWorks documentation. You are also expected to use this document in conjunction with the standard Wind River Hypervisor documentation provided with your release. This document does not provide complete instructions for hypervisor development.

Before beginning your development, you should be familiar with the following VxWorks documentation and other guides and references provided as part of this VxWorks release:

- VxWorks Kernel Programmer's Guide
- VxWorks Application Programmer's Guide
- VxWorks BSP Developer's Guide
- VxWorks Device Driver Developer's Guide (Volumes 1-3)
- Wind River Network Stack Programmer's Guide (Volumes 1-3)

You may also wish to be familiar with the following documentation:

- Wind River Hypervisor Release Notes
- Wind River Hypervisor Getting Started
- Wind River Hypervisor User's Guide
- Wind River Hypervisor Virtual Board Interface Guide
- Wind River Linux Guest OS for Hypervisor 1.1 Programmer's Guide
- Wind River MIPC Programmer's Guide


2 Hypervisor Guest OS System Configurations

2.1 Introduction
2.2 VxWorks Standalone
2.3 Multiple VxWorks Instances
2.4 VxWorks and Wind River Linux
2.5 VxWorks and Virtual Board Applications

2.1 Introduction

This chapter briefly describes some of the standard Wind River Hypervisor system configurations that can include one or more VxWorks Guest OS instances. As mentioned previously, the VxWorks Guest OS runs in its own virtual board context and is not aware of the overall hypervisor system. For this reason, guest OS behavior and development are the same regardless of the overall system configuration.

For more information on designing, building, and configuring the Wind River Hypervisor, see the Wind River Hypervisor User's Guide.

2.2 VxWorks Standalone

In a VxWorks standalone configuration, a single instance of the VxWorks Guest OS runs in a virtual board context on top of the Wind River Hypervisor.

[Diagram: a single VxWorks Guest OS in a virtual board, on the Wind River Hypervisor and target hardware.]

2.3 Multiple VxWorks Instances

When the hypervisor configuration includes multiple VxWorks instances, any number of VxWorks Guest OS instances (each running in its own virtual board context) are configured to run on top of the Wind River Hypervisor.

[Diagram: VxWorks Guest OS 1 through VxWorks Guest OS N, each in its own virtual board, on the Wind River Hypervisor and target hardware.]

2.4 VxWorks and Wind River Linux

In some configurations, you may wish to have both a VxWorks Guest OS and a Linux Guest OS instance running (again, each in its own virtual board context) on top of the Wind River Hypervisor. Note that the number of instances of each operating system running at any given time is not limited to one. Any number of Linux and VxWorks instances can run simultaneously.

[Diagram: a VxWorks Guest OS and a Linux Guest OS, each in its own virtual board, on the Wind River Hypervisor and target hardware.]

2.5 VxWorks and Virtual Board Applications

Wind River Hypervisor can also be configured in a way that couples one or more guest OS instances (these could be VxWorks, Linux, or a combination of both) with a virtual board application (VBA) that runs directly on the virtual board (without a guest OS).

[Diagram: a VxWorks Guest OS in one virtual board and a virtual board application in another, on the Wind River Hypervisor and target hardware.]


3 Getting Started with the VxWorks Guest OS

3.1 Introduction
3.2 Development Workflow
3.3 Configuring and Building the VxWorks Guest OS

3.1 Introduction

This chapter provides a brief introduction to the development workflow when working with a Wind River Hypervisor system. The goal is not to provide complete instructions on how to design, build, and configure a hypervisor system (for that information, see the Wind River Hypervisor User's Guide) but rather to help you understand how VxWorks Guest OS development fits into the overall process. This chapter also describes, in more detail, how to build and configure the VxWorks Guest OS itself.

3.2 Development Workflow

The Wind River Hypervisor comes with a set of examples that demonstrate configuration of virtual boards running the VxWorks Guest OS with Ethernet and serial driver access, MIPC, and debugging functionality. You can use one of these examples to start your development, or you can create a new system from scratch using Wind River Workbench. For more information on creating a system from scratch, see the Wind River Hypervisor User's Guide.

The pre-configured examples are intended to cover certain standard configurations (see 2. Hypervisor Guest OS System Configurations) running on the supported target hardware. If your development can be done using one of these standard configurations, your workflow is similar to that shown in Figure 3-1.

Figure 3-1: Basic Development Workflow Using Pre-Configured Examples
[Diagram: select an example; run the example on target hardware; develop application(s).]

A more complex development scenario might involve making modifications to a pre-configured example to better support your target environment. Similar to BSP development in native VxWorks, your starting point is a pre-configured example, and you should still begin by selecting the example that most closely matches your development requirements (just as you might select a Wind River-supplied BSP as a starting point in native VxWorks BSP development). However, you may need to make changes to the hypervisor design and configuration, and also to the guest OS. Guest OS changes may require additional project builds using VxWorks source build (VSB) projects and VxWorks image projects (VIPs).

In even more advanced scenarios, you may need to do further development on the BSPs and drivers associated with each virtual board/guest OS combination. Figure 3-2 shows what a development process might look like in a more advanced situation.

Figure 3-2: Advanced Development Workflow for a Hypervisor System
[Diagram: select an example and make a copy; design and configure the hypervisor system; perform guest OS development (device driver development and integration, BSP development for the virtual board and guest OS, build configuration, building the VSB to enable guest OS support, and building the VxWorks image with a VIP); run the example on target hardware; develop application(s).]

As shown in Figure 3-2, guest OS development is just one piece of the overall development process required for a hypervisor system.

NOTE: The focus of this document is VxWorks Guest OS development. Design, configuration, and build of an overall hypervisor system is covered in the Wind River Hypervisor User's Guide.

3.3 Configuring and Building the VxWorks Guest OS

As mentioned previously, the pre-configured examples provided for the Wind River Hypervisor can be used to set up a complete development environment for certain hypervisor configurations. (For more information on working with the pre-configured examples, see the Wind River Hypervisor Getting Started.) Before working with the pre-configured examples, you must first build the VxWorks libraries with guest OS support enabled (see 3.3.2 Configuring and Building the VxWorks Guest OS Libraries with VSB). This can be done using Workbench or the vxprj command-line facility.

NOTE: This section assumes that you are familiar with the standard process for configuring and building VxWorks projects. For complete information on building and configuring VxWorks projects, see the VxWorks Command-Line Tools User's Guide and Wind River Workbench By Example, VxWorks Edition.

3.3.1 Developing Drivers and BSPs

If you wish to make changes to the default BSP and device drivers provided in the pre-configured hypervisor examples, you should make these changes prior to building your VSB. Note that if you modify a device driver, you must rebuild your VSB (you do not need to create a new VSB; simply rebuild what is there). If you modify the BSP only, you do not need to rebuild or re-create your VSB; rebuilding your VxWorks image project (VIP) is sufficient. For more information on BSP and device driver development, see 5. BSP and Device Driver Considerations.

3.3.2 Configuring and Building the VxWorks Guest OS Libraries with VSB

The VxWorks Guest OS must be built using a VSB. In order to build or rebuild the necessary libraries, the WRHV_GUEST option must be selected in the VSB configuration. In the native VxWorks environment, VSBs can be built by specifying either a target BSP or a target processor (CPU). However, because of the limitations associated with target support for the hypervisor and guest OS, all guest OS builds must be executed by specifying a BSP. For example:

    -> vxprj vsb create -bsp bspname

In addition, because the VxWorks Guest OS does not support SMP configurations in this release, if you attempt to build the guest OS libraries with the SMP option selected, the WRHV_GUEST option is disabled.

Specifically, you can build the VxWorks libraries with Wind River Hypervisor capabilities as follows.

For IA-32:

    -> cd installdir/vxworks-6.7/target/proj
    -> vxprj vsb create -bsp pcpentium4
    ...
    Build VxWorks Guest OS for WR Hypervisor (WRHV_GUEST) [N/y/?] (NEW) y
    Select the VBI version to use
    > 1. Build GuestOS for VBI 2.0 (WR Hypervisor 1.1) (VBI_VER_2_0) (NEW)
    choice[1]: 1
    ...
    -> cd vsb_pcpentium4
    -> make

For PowerPC:

    -> cd installdir/vxworks-6.7/target/proj
    -> vxprj vsb create -bsp rvb8572
    ...
    Build VxWorks Guest OS for WR Hypervisor (WRHV_GUEST) [N/y/?] (NEW) y
    Select the VBI version to use
    > 1. Build GuestOS for VBI 2.0 (WR Hypervisor 1.1) (VBI_VER_2_0) (NEW)
    choice[1]: 1
    ...
    -> cd vsb_rvb8572
    -> make

NOTE: You can also build the VxWorks libraries using Workbench.

Depending on your target hardware, the default option for enabling a guest OS (WRHV_GUEST) build may be different. In this release, the default option for the pcpentium4 BSP is off (n), but you have the option to enable it by selecting y. For the rvb8572 BSP, the guest OS BSP build is enabled (set to y) by default and you are not given the option to turn it off.

This setting is controlled by a BSP component description file (CDF) entry. The BSP CDF file (20bsp.cdf) includes the GUEST_OS_OPTION property, which can be set to REQUIRED for BSPs that can only be built as a guest OS BSP, or SUPPORTED for BSPs that can be built as either a native or guest OS BSP. The following example is taken from the pcpentium4 BSP 20bsp.cdf file and shows that this BSP can be built as either a native BSP or a guest OS BSP:

    Bsp pcpentium4 {
        NAME            board support package
        CPU             PENTIUM4
        REQUIRES        INCLUDE_KERNEL \
                        INCLUDE_PCI \
                        INCLUDE_PENTIUM_PCI \
                        INCLUDE_PCI_OLD_CONFIG_ROUTINES \
                        INCLUDE_PCPENTIUM4_PARAMS \
                        DRV_TIMER_I8253 \
                        DRV_NVRAM_FILE \
                        INCLUDE_MMU_P6_32BIT \
                        INCLUDE_CPU_LIGHT_PWR_MGR
        FP              hard
        MP_OPTIONS      SMP
        GUEST_OS_OPTION SUPPORTED
    }

Note that a VIP created from a BSP with GUEST_OS_OPTION set to REQUIRED can only be associated with a VSB that has been built with the WRHV_GUEST attribute enabled. All other VSBs will reject the VIP, including the default libraries under:

    installdir/vxworks-6.x/target/lib

or:

    installdir/vxworks-6.x/target/lib_smp

A VIP created from a BSP that has GUEST_OS_OPTION set to SUPPORTED can be associated with a VSB with the WRHV_GUEST attribute enabled, or with any other VSB that supports the BSP.

3.3.3 Configuring and Building the VxWorks Image with a VIP

Once you have built the hypervisor-enabled VxWorks libraries, you can configure a VxWorks image to suit your development requirements. This process is similar to that used to configure and build a VxWorks image using a VIP in the native VxWorks environment. However, you must keep in mind any limitations or restrictions imposed by the hypervisor. (For information on how the VxWorks Guest OS interface differs from the native VxWorks interface, see 4. VxWorks Guest OS Development Environment.)

To build the default VxWorks image or images for the virtual board or boards:

For IA-32:

    -> cd installdir/vxworks-6.x/target/proj
    -> vxprj create -vsb vsb_pcpentium4 pcpentium4 gnu
    -> cd pcpentium4_gnu
    -> vxprj build

For PowerPC:

    -> cd installdir/vxworks-6.x/target/proj
    -> vxprj create -vsb vsb_rvb8572 rvb8572 sfdiab
    -> cd rvb8572_sfdiab
    -> vxprj build

NOTE: You can also build the VxWorks image using Workbench. For more information on building your image with Workbench, see Wind River Workbench By Example: Configuring and Building VxWorks.
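
Taken together, the commands above form a single end-to-end command-line pass. The following recap for the PowerPC example is a sketch only: directory and project names follow the defaults shown earlier, and the parenthetical notes are annotations, not part of the commands.

    -> cd installdir/vxworks-6.x/target/proj
    -> vxprj vsb create -bsp rvb8572                    (answer y to WRHV_GUEST)
    -> cd vsb_rvb8572
    -> make                                             (builds the guest OS libraries)
    -> cd ..
    -> vxprj create -vsb vsb_rvb8572 rvb8572 sfdiab
    -> cd rvb8572_sfdiab
    -> vxprj build                                      (builds the VxWorks image)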

4 VxWorks Guest OS Development Environment

4.1 Introduction
4.2 Virtual Board Interface (VBI) Support
4.3 General Interface Variations
4.4 Architecture Considerations

4.1 Introduction

This chapter discusses the differences between the VxWorks Guest OS development environment and the standard (native) VxWorks development environment.

NOTE: In general, this chapter should be considered a supplement to the VxWorks development information provided in the standard documentation (see 1.4 Additional Documentation).

In general, the guest OS environment closely parallels that of a standard VxWorks system where the VxWorks OS runs natively (that is, directly on the target hardware as a standalone OS). However, because VxWorks runs as a guest in the hypervisor environment, certain limitations and interface variations are necessary:

- Support must be provided for the hypervisor virtual board interface (VBI).
- Certain standard VxWorks routines must be modified to support the hardware access provided by the hypervisor.
- Standard VxWorks architecture support is limited by the capabilities provided by the Wind River Hypervisor. That is, the full architecture support provided by standalone VxWorks may not be available in a hypervisor system, and the architecture implementation may be different.

4.2 Virtual Board Interface (VBI) Support

The virtual board interface provides a software layer that allows a guest OS or standalone application to use services that are provided by the hypervisor. This layer provides a set of public APIs that are used to interact with the target hardware through the hypervisor. For example, drivers running in a virtual board (guest OS) use the VBI APIs to make hypercalls in order to perform tasks that require manipulating the actual target hardware. The VxWorks 6.8 Guest OS provides full support for the hypervisor VBI. For detailed information on working with the VBI, see the Wind River Hypervisor Virtual Board Interface Guide.

4.3 General Interface Variations

This section describes changes to standard VxWorks interfaces that you should be aware of when working with the VxWorks Guest OS. For more information on the standard VxWorks interface, see the VxWorks Kernel Programmer's Guide and the VxWorks Application Programmer's Guide.

NOTE: General limitations for VxWorks features are noted in 1.3.1 VxWorks Guest OS and Native VxWorks. In general, libraries and routines associated with those limitations are not available in this release of the guest OS. This section does not include a comprehensive list of all unavailable libraries and routines.

Power Management

Power management is not supported in this release. The guest OS does not have access to the power management hardware on the target board. When the guest OS runs out of work, instead of entering a reduced-power CPU state (C-state), it gives up the CPU and passes control to the hypervisor. This is done by making a call to vbiIdle( ).
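
Conceptually, the guest kernel's idle path behaves like the following sketch. This is an illustration only, not Wind River source: workAvailable( ) and doWork( ) stand in for the kernel's internal scheduler logic and are hypothetical; vbiIdle( ) is the VBI hypercall named above (see the Wind River Hypervisor Virtual Board Interface Guide for its exact prototype).

    /* Conceptual sketch of the idle behavior described above. */
    void guestIdleLoop (void)
        {
        for (;;)
            {
            if (workAvailable ())       /* hypothetical ready-queue check */
                doWork ();              /* run ready tasks */
            else
                vbiIdle ();             /* yield the CPU to the hypervisor
                                           instead of entering a C-state */
            }
        }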

RTPs

RTPs are not supported in this release of the VxWorks Guest OS.

4.4 Architecture Considerations

This section supplements the architecture-specific information provided in the VxWorks Architecture Supplement for those architectures that are supported by the Wind River Hypervisor. For more information on architecture support for the Wind River Hypervisor, see the Wind River Hypervisor Release Notes.

4.4.1 IA-32

For IA-32, Wind River Hypervisor utilizes the Intel VT-x hardware virtualization support. This makes the paravirtualized environment close to the native one. However, the following changes have been made to the architecture code so that VxWorks can run in the hypervisor environment:

Because there is no support for a virtual TSS register, the IA-32 hardware task switch is not supported by the hypervisor (in the current release, a task switch causes an unhandled VM exit in the hypervisor). For this reason, the VxWorks Operating System Miscue (OSM) stack and interrupt stack protection (guard zones on the stack) are not supported in the hypervisor environment. (The OSM stack is used for the handling and recovery of stack overflow and underflow conditions. For more information, see the VxWorks Architecture Supplement: IA-32.)

4.4.2 PowerPC

The following are some notable architecture code changes that have been made in order to support the VxWorks Guest OS on PowerPC:

- The VxWorks Guest OS on PowerPC targets runs in user mode. No supervisor-only instructions can be called directly.
- Cache is always enabled in the guest OS, and the cache enable and disable routines are no-ops.
- Critical interrupt support is disabled because critical interrupts are handled by the hypervisor.
- The guest OS library is compiled with software floating point enabled.
- Most vxALib routines are disabled in the guest OS because the associated functionality is not required and the hardware is not accessible in guest OS configurations.


5 BSP and Device Driver Considerations

5.1 Introduction
5.2 Hardware Interface Development Workflow
5.3 Device Driver Development and Integration
5.4 BSP Development

5.1 Introduction

Each VxWorks Guest OS instance runs on a virtual board. Virtual boards are created and configured by the hypervisor and are used to regulate guest OS access to physical target hardware resources. These resources are frequently shared among multiple virtual boards. In order to coordinate access to these shared resources, the hypervisor abstracts the physical hardware and presents a virtualized device or devices to each virtual board. The level of abstraction varies from full virtualization to physical presentation. Devices that are entirely shared require some sort of virtualization. Devices that share no registers require little or no virtualization. Drivers that manage these virtual devices must be tailored for this virtualization. Three types of drivers are used in this virtualization: virtual, paravirtual, and physical. The differences are described in the following sections.

Drivers for Virtual Devices

Virtual devices fully abstract the underlying hardware from the virtual board. Physical access to these devices is managed entirely by the hypervisor. The guest OS is presented with an abstraction of the device that can be entirely hardware-independent or may resemble the hardware of the same or a similar device. From the guest OS, access to and from the virtual device is done either through the shared data structures (wrhvVbControl, wrhvVbStatus, and wrhvVbConfig) or through the hypercall mechanism. In both cases, the actual hardware device driver is owned and managed by the hypervisor. (For more information on the wrhvVbControl and wrhvVbStatus structures and hypercalls, see the Wind River Hypervisor Virtual Board Interface Guide.)

Examples of drivers for virtual devices in this release include the system timer (vxbVbTimer) and the interrupt controller (vxbVbIntCtlr).

Drivers for Paravirtual Devices

A paravirtual device presents an abstraction that is close, but not identical, to the underlying hardware. Paravirtualization can potentially allow for better performance than a fully virtualized device. The paravirtual device drivers provided with this release are typically standard VxWorks drivers that are modified for use with the guest OS. The source file is usually common with a standard VxWorks driver: the standard device driver is conditionally compiled to provide the necessary guest OS adaptations. Changes are made where hardware is unavailable or where access is restricted due to sharing with multiple virtual boards. Some examples include registers that perform multiple functions or manage multiple devices. Examples of drivers for paravirtual devices in this release include the vxbOpenPicTimer timer driver and the vxbEtsecEnd network driver.

Drivers for Physical Devices

As mentioned previously, access to hardware devices is controlled by the hypervisor through the virtual board interface. Because of this, and because devices in a hypervisor system can be shared among more than one guest OS or virtual board application, some devices presented to the guest OS must be virtualized or paravirtualized. In many situations, however, no abstraction is required and the device is physically presented. In general, if no control structures (such as register sets) are shared among virtual boards, and access to the device can be managed entirely by a single virtual board, no virtualization is required within the device and the device is handled as a physical presentation. The NS16550 serial driver (vxbNs16550Sio), which is used unmodified from the native VxWorks implementation in the rvb8572 BSP, fits this type of driver situation.

BSP Integration

During the boot process, BSP routines call core OS and device driver routines to configure the operating system and drivers. The OS and device drivers make calls to the BSP routines during system operation in order to make specific hardware requests. In a guest OS system, the board support package uses virtual and paravirtual resources, as well as physical resources, to configure the VxWorks kernel for the physical hardware as presented by the virtual board.

5.2 Hardware Interface Development Workflow

In general, device driver development begins before BSP development, because drivers provide access to the underlying hardware and, by extension, the ability to test the BSP on the target. However, this is an iterative process, and it is common for the developer to cycle between BSP and device driver development until the guest OS hardware interface is complete. Device driver development is directly affected by the device presentation provided by the hypervisor virtual board context. Similarly, because the hypervisor virtual board context is the gatekeeper for all access to the underlying hardware, the BSP configuration must be precisely matched to the virtual board.

5.3 Device Driver Development and Integration

Drivers can either be adaptations of existing VxWorks drivers or be developed specifically for use with a guest OS configuration. If there is an existing VxWorks driver, but changes are necessary to adapt the driver for the guest OS, Wind River recommends that you conditionally process the driver using the _WRS_CONFIG_WRHV_GUEST keyword. This option is used to ensure that guest OS support is only enabled for qualified targets and hardware platforms. It also simplifies maintenance of the new driver. If the driver is new or unique to the VxWorks Guest OS, the standard naming convention is to prepend the driver name with vxbVb.

The VxBus device driver infrastructure defines standard interfaces for the driver to interact with the operating system and device hardware. For more information on VxBus, see the VxWorks Device Driver Developer's Guide, Volume 1.

NOTE: Wind River strongly recommends that you do not use legacy (pre-VxBus) device drivers in a hypervisor system. For more information, see the VxWorks Device Driver Developer's Guide, Volume 3.

Note that complete information on VxWorks device driver development is beyond the scope of this document. For detailed information on VxWorks device driver development, see the VxWorks Device Driver Developer's Guide (Volumes 1-3).
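
To illustrate the conditional processing recommended above, the following sketch shows the general pattern. Only the _WRS_CONFIG_WRHV_GUEST keyword itself comes from this document; the device, the myDevVbiReset( ) wrapper, and the register names are invented for illustration.

    /* Hypothetical driver fragment showing the _WRS_CONFIG_WRHV_GUEST pattern. */
    LOCAL void myDevReset (VXB_DEVICE_ID pDev)
        {
    #ifdef _WRS_CONFIG_WRHV_GUEST
        /* Guest OS build: the register is owned by the hypervisor, so the
         * operation is performed through a VBI hypercall wrapper. */
        myDevVbiReset (pDev);
    #else
        /* Native build: write the device register directly. */
        CSR_WRITE_4 (pDev, MY_DEV_RESET_REG, MY_DEV_RESET_BIT);
    #endif
        }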

5.3.1 Porting a Native VxWorks Driver to VxWorks Guest OS

In many cases, the hypervisor allows the VxWorks Guest OS direct access to target hardware resources. If a hardware resource is not shared across multiple guest OSs or VBAs, it is generally presented directly to the VxWorks Guest OS. When a device is presented directly (physically) to the guest OS, the process for porting the guest OS device driver is the same as porting a device driver to native VxWorks. This process is discussed in the VxWorks Device Driver Developer's Guide (Volumes 1-3).

In some cases, such as with interrupt controllers or timers, the hardware resource must be shared across multiple guest OSs or VBAs. In this case, porting is dependent on the level of hardware virtualization provided by the Wind River Hypervisor. The VxWorks 6.8 Guest OS includes drivers for all virtualized or paravirtualized devices currently supported by the Wind River Hypervisor. These drivers can be used as-is with the target hardware supported by this release. At this time, porting custom drivers for virtualized or paravirtualized devices is not supported for the VxWorks Guest OS. For more information on supported hardware and devices, see the Wind River Hypervisor Release Notes.

For more information on device driver migration and porting, see the information in the following sections and B.2 Device Driver Migration.

5.3.2 Available Guest OS Device Drivers

This section discusses the device drivers available for the VxWorks Guest OS. It also includes architecture-specific information for available drivers and describes how they are virtualized (or paravirtualized).

Interrupt Controller Drivers

Interrupts are managed by the hypervisor and forwarded to a virtual board. The guest OS is presented with a virtual interrupt controller, called the virtual I/O APIC, that is modeled on the Intel I/O APIC.

IA-32

Intel Pentium 4 hardware platforms use a physical I/O APIC device for interrupt routing. The hypervisor environment provides a virtual I/O APIC (vIOAPIC) in the virtual board environment. The following driver is provided for use with the guest OS:

    installdir/vxworks-6.x/target/src/hwif/intCtlr/vxbVioApicIntr.c

Note that this driver is closely related to the vxbIoApicIntr.c driver provided for native VxWorks installations.

PowerPC

PowerPC hardware typically includes an EPIC (or OpenPIC) programmable interrupt controller on the physical board, but the virtual board (provided by the hypervisor) presents a virtual I/O APIC regardless of the hardware implementation. For these targets, a virtualized interrupt controller based on the I/O APIC is provided in:

    installdir/vxworks-6.x/target/src/hwif/intCtlr/vxbVbIntCtlr.c

Although this driver is modeled on the I/O APIC, it does not resemble the vxbIoApicIntr.c driver implementation provided for native VxWorks.

Timer Drivers

The hypervisor provides a virtual fixed-interval timer device that can be used as a system clock by the virtual board. The timer rate is fixed by the hypervisor and cannot be changed by the virtual board.

IA-32

IA-32 systems use a modified virtual timer device driver:

    installdir/vxworks-6.x/target/src/hwif/timer/vxbI8253Timer.c

This driver is virtualized for use in the guest OS using preprocessor statements: the standard driver is modified and then conditionally compiled as needed.

PowerPC

PowerPC systems use a generic virtual timer driver:

    installdir/vxworks-6.x/target/src/hwif/timer/vxbVbTimer.c

This driver is hardware-independent but is currently used for PowerPC only.

Network Drivers

IA-32

The supported network driver for IA-32 systems is:

    installdir/vxworks-6.x/target/src/hwif/end/gei825xxVxbEnd.c

This driver is the same as that used in the native VxWorks environment; no changes are required to support the hypervisor environment.

DMA Considerations

Device drivers that need to perform DMA should use the hyVirtToPhys( ) routine. This routine makes a hypercall to vbiGuestDmaAddrGet( ), which translates the DMA physical address. For 64-bit hypervisor systems, VT-d must be turned on to translate the address into a proper 32-bit address.
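
As a sketch of how a driver might use this translation before programming a device, consider the following fragment. It is illustrative only: hyVirtToPhys( ) is the routine named above, shown with an assumed prototype, while myDevDmaDescSet( ) and its parameters are invented for illustration.

    /* Translate a guest-side buffer address before starting a DMA transfer. */
    void myDevDmaStart (VXB_DEVICE_ID pDev, char * pBuf, int bufLen)
        {
        void * physAddr;

        /* Translate the buffer's virtual board physical address into a true
         * physical address (via the vbiGuestDmaAddrGet() hypercall). */
        physAddr = (void *) hyVirtToPhys ((void *) pBuf);

        /* Program the (hypothetical) DMA descriptor with the result. */
        myDevDmaDescSet (pDev, physAddr, bufLen);
        }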

PowerPC

The supported network driver for PowerPC systems is:

    installdir/vxworks-6.x/target/src/hwif/end/vxbEtsecEnd.c

The vxbEtsecEnd.c driver has been modified to make use of these hypercalls when configured for use in the guest OS.

MDIO Bus Considerations

The Freescale MPC8572DS reference board includes four eTSEC network devices. These eTSEC devices are all connected to the same PHY chip, and the MDIO bus is shared among them. When the eTSEC devices are assigned to different virtual boards, multiple virtual boards may attempt to access the MDIO bus simultaneously. Therefore, accesses to this bus must be sequenced: simultaneous access attempts by multiple virtual boards can cause sequencing issues and potential race conditions. To avoid these race conditions, the hypervisor provides MDIO access methods. Whenever the driver attempts to access the MDIO bus, a hypercall is made through the hypervisor to physically access the bus. The hypervisor is then responsible for ensuring proper sequencing of requests from the virtual boards.

PCI Configuration Space

IA-32

On IA-32 systems, PCI configuration space access is trapped by the hypervisor. The guest OS cannot see any PCI device that is not explicitly assigned to the guest OS instance. The intLine value returned by the hardware is not applicable to the virtual board environment where the guest OS runs. For this reason, pentiumPci.c has been modified to return the intLine value assigned to the virtual board.

PowerPC

The VxWorks Guest OS for PowerPC does not include support for PCI.

5.3.3 Configuring the Hypervisor System to Support a Guest OS Driver

The hypervisor enables the target MMU and maps the virtual board memory to physical memory, even if the guest OS lacks MMU support. The memory presented to the virtual board is called the virtual board physical address.

Figure 5-1: Virtual Board Physical Address Example
[Diagram: a virtual board virtual address (0x40000000) is translated by the virtual board MMU page table to a virtual board physical address (0x00100000, the hypervisor virtual address), which is in turn translated by the hypervisor MMU page table (the address used by the hypervisor TLB miss handler) to the physical address 0x10100000.]

Each device instance needs to have its physical address space mapped to the guest OS virtual board physical address. Space is mapped in multiples of the MMU page size.

Each allocation must be mapped on a page boundary. Each device allocated to a virtual board must have a unique address, but different virtual boards may use the same addresses. The hypervisor VxWorks configuration file (for example, vxworks.xml) has memory map entries for each device accessed by the virtual board. As an example, an OpenPIC timer that has a physical base address of 0xE00410F0 and a virtual board base address of 0xD00060F0 would have the following entry in the configuration file:

    <Region Name="picTimer" DataType="IO_DEVICE" length="0x1000"
        VirtualAddress="0xD0006000" PhysicalAddress="0xE0041000"
        MmuCacheAttr="0xa36"/>

Global interrupt vectors that are assigned to devices by the hypervisor are translated to virtual vectors.

Figure 5-2: Virtual Interrupt Controller Mappings
[Diagram: outgoing interrupts from a virtual board pass through a mapping of virtual board interrupt numbers to global interrupt numbers; hardware interrupts from the BSP pass through a mapping of BSP interrupt numbers to global interrupt numbers, which also provides the virtual board ID. Global interrupts (0-127) are then mapped to virtual board interrupts by the virtual interrupt controller (128 levels), which can also receive direct interrupts from the hypervisor.]

The assignment of a physical interrupt vector to a global interrupt vector is defined in the wrhvConfig.xml file. In the following example, a name is assigned to the vector and the global vector is associated with a virtual board:

    <Int Name="picTimerA0" HwIntNumber="60" DestBoard="vxworks"/>

The VxWorks configuration file (vxworks.xml) assigns the named vector to a global interrupt number as follows:

    <In Name="picTimerA0" Vector="5"/>
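
Read together, the two entries connect a physical interrupt to a guest OS vector. The following recap simply annotates the elements shown above; it adds no new attributes.

    <!-- wrhvConfig.xml: physical interrupt 60 is given a system-wide name
         and routed to the virtual board named "vxworks". -->
    <Int Name="picTimerA0" HwIntNumber="60" DestBoard="vxworks"/>

    <!-- vxworks.xml: the same named interrupt is presented to the guest OS
         as virtual vector 5 on its virtual interrupt controller. -->
    <In Name="picTimerA0" Vector="5"/>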

5.3.4 Configuring and Building Guest OS Device Drivers

The following sections provide information on configuring and building the supported guest OS drivers for use in a hypervisor environment.

Configuring Guest OS Drivers

Configuration of VxBus drivers is done in the BSP hwconf.c file. This file provides the configuration values that are necessary for the device to function with the target board and operating system. These values include register base addresses, vector offsets, and various other configuration parameters that are unique to each BSP. For more information on the VxBus driver infrastructure and VxBus device drivers, see the VxWorks Device Driver Developer's Guide (Volume 1).

It is important to keep the memory addresses that are defined by the hypervisor XML files consistent with the addresses defined in the virtual board BSP hwconf.c. Register addresses defined in the device resource table must have their page address match the values used by the hypervisor. The page offset is the same as the physical offset. For example, the device described in the previous section would have the following entry in the hwconf.c openPicTimerDevAresources resource table:

    {VXB_REG_BASE, HCF_RES_INT, {(void *)(0xD00060F0)}},

The virtual board provides the routine interruptConversionTableSetup( ) (PowerPC only) to determine the virtual board interrupt vector assignments as configured by the hypervisor. Wind River recommends that you use the standard VxWorks vector numbers for virtual assignments. Virtual vector numbers must be unique within a virtual board, but multiple virtual boards can use the same virtual vector numbers. The following code fragment illustrates the translation from guest OS virtual vectors to hypervisor global vectors. The first argument in the call must match the name used in the VxWorks configuration file (for example, vxworks.xml):

    case OPENPIC_TIMERA0_INT_VEC:
        vector = vbiIntVecFind ((int8_t *)"picTimerA0", VB_INPUT_INT);
        break;

VxBus drivers typically register their interrupts by obtaining the device interrupt number from hwconf.c. An entry for the virtual interrupt number must be added to the virtual board interrupt controller input table (vbIntCtlrInputs). For example:

    { OPENPIC_TIMERA0_INT_VEC, "openPicTimer", 0, 0 },
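
Putting these pieces together, a hwconf.c fragment for the OpenPIC timer example might look like the following sketch. Only the two entries quoted in this section are taken from this document; the surrounding table structure follows common VxBus conventions and may differ in a real BSP.

    /* Sketch of the relevant hwconf.c pieces for the OpenPIC timer example. */
    LOCAL const struct hcfResource openPicTimerDevAresources[] = {
        { VXB_REG_BASE, HCF_RES_INT, {(void *)(0xD00060F0)} },  /* VB base addr */
        /* ... additional resources (clock rates, vectors, and so on) ... */
    };
    #define openPicTimerDevAnum NELEMENTS(openPicTimerDevAresources)

    /* Entry in the virtual board interrupt controller input table: */
    LOCAL struct intrCtlrInputs vbIntCtlrInputs[] = {
        { OPENPIC_TIMERA0_INT_VEC, "openPicTimer", 0, 0 },
        /* ... other device interrupt inputs ... */
    };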

Building Device Drivers

Normally, drivers are built into the architecture archive using a VxWorks source build (VSB) project (see 3.3.2 Configuring and Building the VxWorks Guest OS Libraries with VSB). Rules for each object are defined in the driver folder makefile or in a make fragment in the same folder. If the driver is adapted from an existing VxWorks driver, there should be a rule in the driver folder makefile to build the object file. If the driver is new, or if the makefile lacks a rule for the target architecture, create a new make fragment for the driver. By convention, these files have the root name of the driver with a .mk extension. The .mk file conditionally adds the driver object for guest OS builds. With the rules in place, the driver is built when the VSB project for the architecture is built. For an example, see:

    installdir/vxworks-6.x/target/src/hwif/timer/vxbVbTimer.mk

For information on building VSB projects, see the VxWorks Command-Line Tools User's Guide or Wind River Workbench By Example, VxWorks Edition.

5.4 BSP Development

Similar to drivers, board support packages (BSPs) can either be adaptations of existing VxWorks BSPs or be developed specifically for use in a guest OS configuration. To use a BSP with the VxWorks Guest OS, the architecture must first be supported by the Wind River Hypervisor and the VxWorks Guest OS. For information on supported target architectures, see your product release notes.

Note that complete information on VxWorks BSP development is beyond the scope of this document. For detailed information on VxWorks BSP development, see the VxWorks BSP Developer's Guide.

5.4.1 Native BSP Development and the Native VxWorks Boot Process

The native VxWorks BSP development process is described in the VxWorks BSP Developer's Guide. Guest OS BSP development follows a parallel process. However, you should note the following exceptions:

- ROM images are not supported with the guest OS.
- Boot ROM images (for the hypervised system) can be generic VxWorks boot ROM images developed for native VxWorks, or they can be third-party utilities.

NOTE: For information on booting the hypervisor image, see the Wind River Hypervisor User's Guide.

Once it is loaded and booted, the hypervisor is responsible for loading and booting the VxWorks image. Note that the initialization sequence for native VxWorks is described in the VxWorks BSP Developer's Guide. The guest OS initialization sequence is identical to the native VxWorks sequence, except that the CPU and RAM are already initialized by the hypervisor.

5.4.2 Guest OS BSP Development

If there is an existing native VxWorks BSP for the target board and the modifications necessary to adapt the BSP are straightforward, Wind River recommends that you conditionally compile the BSP with the _WRS_CONFIG_WRHV_GUEST keyword. This simplifies maintenance of the guest OS BSP in those cases where you intend to support both a native BSP and a guest OS BSP. However, if preprocessing directives make the source code difficult to read, you can create a unique guest OS version of your BSP. In this case, because the new BSP is specific to the guest OS only, preprocessing with the _WRS_CONFIG_WRHV_GUEST keyword is not necessary (if the BSP is built as part of a guest OS build, the keyword is always set to TRUE).

Guest OS support in a BSP is specified by the SUPPORTS_WRHV_GUEST keyword defined in the BSP common.vxconfig file. Because of limitations in hypervisor and guest OS support for this release, you must also ensure that the initialization sequence for a guest OS BSP is similar to a standard VxWorks BSP, with the following exceptions:

Because of limitations in hypervisor and guest OS support in this release, you must also ensure that the initialization sequence for a guest OS BSP is similar to that of a standard VxWorks BSP, with the following exceptions:

- Cache and MMU are set up by the hypervisor.
- (PowerPC only) Some registers may not be available.
- The boot line is passed from the hypervisor rather than read from memory.

Initialization starts by setting the stack pointer for the initial guest task; pointers are then set up for communication between the guest OS and the hypervisor. Three data structures coordinate the guest OS and the hypervisor: wrhvVbControl, wrhvVbStatus, and wrhvVbConfig. The wrhvVbControl structure is used as a communication channel from the virtual board to the hypervisor; it controls interrupts from virtual and physical hardware and provides MMU mapping for the guest OS, if supported. The wrhvVbStatus structure provides read-only fields that identify which interrupt is currently asserted, provide system time and timestamp values, and give access to the current MMU map. The wrhvVbConfig structure provides virtual board configuration data such as the VBI version supported by the hypervisor, memory region addresses, the board name, the board ID, the boot string, and so forth. In general, none of these structures should be accessed directly; instead, they are accessed through VBI calls. The initialization sequence then continues by initializing the various hardware devices.

5.4.3 Configuring the BSP

BSP configuration includes project configuration settings that are defined in config.h and in CDF files. The process for configuring a guest OS BSP is the same as that used for a standard VxWorks BSP. For more details concerning BSP configuration and initialization, see the VxWorks BSP Developer's Guide.

5.4.4 Building the BSP

There are two ways to build a VxWorks BSP: VxWorks image project (VIP) builds in Workbench, and VIP command-line builds using the vxprj command-line facility. (For more information on vxprj, see the VxWorks Command-Line Tools User's Guide.)

NOTE: VxWorks guest OS BSPs must be built using a VIP. Command-line builds using config.h are not supported.

The project build must specify the kernel VSB project that was built for the target (see 3.3 Configuring and Building the VxWorks Guest OS, p.14). When building a VIP in Workbench, select Setup the Project Based on a Source Build Project. When using vxprj, your command line is similar to the following:

    -> vxprj create -vsb vsbName bspName toolchain

For vsbName, use the full path to the directory that contains the VSB. For more information on creating projects, see the VxWorks Command-line Tools User's Guide or Wind River Workbench By Example, VxWorks Edition.
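For example, a hypothetical invocation for a PowerPC guest BSP might look like the following; the VSB path and BSP name are illustrative placeholders, not names from this guide:

    -> vxprj create -vsb /home/user/vsb_guest_ppc my_guest_bsp diab

Here, /home/user/vsb_guest_ppc is the directory containing the previously built VSB, my_guest_bsp is the BSP name, and diab selects the Wind River (Diab) toolchain.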

5.4.5 Configuring the Hypervisor for the Virtual Board

At its simplest, the configuration process for assigning resources to virtual boards involves editing the VxWorks configuration file (vxworks.xml) for a hypervisor example. (For more information on hypervisor examples, see the Wind River Hypervisor Getting Started.) One of the most important configuration parameters is the amount of memory assigned and available to the VxWorks image, as defined by RamSize. For example, the following assigns 8 MB:

    RamSize="0x800000"

The RamSize value must be equal to or greater than the RAM allocation defined by RAM_HIGH_ADRS and RAM_LOW_ADRS in the VIP. Any extra memory provided by the hypervisor is recovered by VxWorks and returned to the heap. For more information on this configuration, see the Wind River Hypervisor User's Guide.


A Glossary

bare metal application
    See virtual board application.

hypercalls
    A call made from the guest OS to the hypervisor through the virtual board interface that allows access to certain privileged operations on the physical hardware.

native VxWorks
    Native VxWorks refers to the traditional VxWorks configuration, where the operating system runs standalone, directly on the target hardware.

virtual board
    A virtual target hardware configuration presented to a guest OS or virtual board application by the hypervisor. The virtual board is the mechanism used by the hypervisor to partition the physical hardware. In a hypervisor system, the guest OS treats the virtual board as a physical target and has no awareness of the physical hardware or of other partitions in the system.

virtual board application
    An application that runs natively (directly) on a virtual board without operating system support; sometimes referred to as a bare metal application. One possible Wind River Hypervisor configuration includes the VxWorks guest OS running in a system that also includes one or more virtual board applications (each running in its own virtual board context).

virtual board physical address
    Virtual boards run on physical hardware, and the physical hardware has devices at certain physical addresses. When a virtual board's context is created, these physical addresses are translated to different addresses as presented to the virtual board. To the hypervisor these are virtual addresses, but in the virtual board context they are physical addresses. These translated addresses are virtual board physical addresses.


B BSP and Driver Migration

B.1 Introduction
B.2 Device Driver Migration
B.3 BSP Migration

B.1 Introduction

Because of restrictions in architecture and hardware support (when compared to native VxWorks support), this release of the VxWorks Guest OS does not support substantial migration of existing device driver and BSP code to the Wind River Hypervisor environment. The guidelines provided here and in 5. BSP and Device Driver Considerations are intended to help you understand the migration and porting limitations for this release.

B.2 Device Driver Migration

If a device is not shared between virtual boards, it can typically be adapted for use with the VxWorks Guest OS with little or no modification. Any existing VxWorks device driver must be carefully examined to ensure that only one virtual board can access the registers for the given device.

The hypervisor scheduler can interrupt accesses to a given device even if the virtual board locks its interrupts. While execution within the virtual board is guaranteed to be undisturbed, timed loops may be inaccurate and instruction sequences such as flash writes may not sequence as expected (see the sketch at the end of this section). You must verify that the device instance is entirely owned by the virtual board and that instruction sequence restrictions will not be violated.

For more information on device driver development for the VxWorks Guest OS, see 5.3 Device Driver Development and Integration, p.23.
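To make the timing caveats above concrete, the following is a hedged C sketch (hypothetical code, not from this guide) of the two hazards: a software-calibrated delay loop, and a device command sequence with a maximum allowed gap between writes. The routine names and command values are illustrative placeholders.

    #include <stdint.h>

    /* 1. A software-calibrated delay loop: tuned on native hardware to
     *    approximate a fixed delay, but the hypervisor may suspend the
     *    virtual board mid-loop -- even with guest interrupts locked --
     *    so the real elapsed time becomes unpredictable. */
    void unsafeDelay (volatile uint32_t iterations)
        {
        while (iterations-- > 0)
            ;   /* spin */
        }

    /* 2. A command sequence with a maximum allowed gap between writes
     *    (flash unlock sequences are the classic case). UNLOCK_CMD1,
     *    UNLOCK_CMD2, and the register address are illustrative. */
    #define UNLOCK_CMD1 0xAA
    #define UNLOCK_CMD2 0x55

    void unsafeFlashUnlock (volatile uint8_t *cmdReg)
        {
        *cmdReg = UNLOCK_CMD1;
        /* A hypervisor preemption here can exceed the device's timing
         * window, so the device may reject or misinterpret the second
         * write. Locking interrupts in the guest does not prevent the
         * hypervisor scheduler from suspending the virtual board. */
        *cmdReg = UNLOCK_CMD2;
        }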

B.3 BSP Migration

Guest OS virtual boards are supported using a standard BSP model similar to that used in native VxWorks development. Guest OS support is added either by modifying an existing BSP or by creating a new BSP. An existing VxWorks BSP can be used to support a guest OS through the use of preprocessor directives. If such modifications make maintenance of the BSP difficult, a new BSP is the preferred solution. New BSPs do not need to produce a bootable native image, but they must build without errors.

For more information on BSP development for the VxWorks Guest OS, see 5.4 BSP Development, p.29.

Index

Symbols
_WRS_CONFIG_WRHV_GUEST 23, 29

Numerics
20bsp.cdf 15

A
architecture considerations 19

B
bare metal application 33
board support package (see BSP)
BSP
    building 30
    common.vxconfig 29
    config.h 30
    configuring 30
    developing 14, 29
    development workflow 23
    device driver integration 22
    hwconf.c 28
    migration 35
building
    a VIP 16
    device drivers 28
    guest OS libraries 14
    the BSP 30
    VxWorks image 16

C
changes in the guest OS interface 18
common.vxconfig 29
config.h 30
configuring
    device drivers 28
    guest OS libraries 14
    the BSP 30
    the hypervisor for guest OS drivers 26

D
developing
    BSPs 29
    the hardware interface 14
development
    environment 17
    guest OS workflow 11
    hardware interface workflow 23
device drivers
    available guest OS drivers 24
    building 28
    configuring 28
    developing 14, 23
    development workflow 23
    for paravirtual devices 22
    for physical devices 22
    for virtual devices 21
    integrating into the BSP 22
    integration 23
    interrupt controller 24
    migration 35
    network 25
    timers 25
    VxBus 23
documentation 4

E
eTSEC network devices 25
examples 11

G
guest OS 2
    available device drivers 24
    building device drivers 28
    configuring and building 14
    configuring device drivers 28
    configuring the hypervisor to support drivers 26
    limitations imposed by hypervisor 17
GUEST_OS_OPTION 15

H
hardware interface development workflow 23
hwconf.c 28
hypercalls 2, 33 (see also virtual board interface)
    hyVirtToPhys( ) 25
    vbiGuestDmaAddrGet( ) 25
hypervisor 2
    booting the VxWorks guest OS 29
    configuring for guest OS drivers 26
    configuring for the virtual board 31
    examples 11
    limitations on guest OS 17
    system configurations 7
hyVirtToPhys( ) 25

I
IA-32
    architecture considerations 19
    hardware task switch 19
integrating device drivers 23
Intel VT-x hardware virtualization 19
interface variations 18
interrupt controller drivers 24
interrupt stack protection 19
interruptConversionTableSetup( ) 28

L
limitations
    hypervisor 17
    VxWorks 3

M
migration
    BSP and device driver 35
MMU support 18

N
native VxWorks 1, 3, 33
    boot process 29
    BSP development 29
network drivers 25

O
operating system miscue stack (see OSM stack)
OSM stack 19

P
paravirtual devices 22
PCI configuration space 26
PCI support 26
physical devices 22
power management 18
PowerPC
    architecture considerations 19
    PCI support 26
    vxALib 19

R
RAM_HIGH_ADRS 31
RAM_LOW_ADRS 31
RamSize 31
real-time process (see RTP)
routines
    hyVirtToPhys( ) 25
    interruptConversionTableSetup( ) 28
    vbiIdle( ) 18
RTP 18

S
sharing hardware resources 21

structures
    wrhvVbConfig 21, 30
    wrhvVbControl 21, 30
    wrhvVbStatus 21, 30
supported
    architectures 3
    device drivers 3
SUPPORTS_WRHV_GUEST 29

T
timer drivers 25

U
usage caveats 4

V
VBI (see virtual board interface)
vbiGuestDmaAddrGet( ) 25
vbiIdle( ) 18
VIP
    configuring and building 16
virtual board 2, 21, 33
    configuring the hypervisor for 31
virtual board application 2, 33
virtual board interface 2, 18
    wrhvVbConfig 21, 30
    wrhvVbControl 21, 30
    wrhvVbStatus 21, 30
virtual board physical address 26, 33
virtual devices 21
VSB 28
    configuring and building 14
vxbEtsecEnd 22
vxbNs16550Sio 22
vxbOpenPicTimer 22
VxBus 23
vxbVbIntCtlr 22
vxbVbTimer 22
vxprj 30
VxWorks
    development environment 17
    limitations 3
    native 1, 3, 33
VxWorks image project (see VIP)
VxWorks source build (see VSB)

W
Wind River Hypervisor (see hypervisor)
WRHV_GUEST 14
wrhvVbConfig 21, 30
wrhvVbControl 21, 30
wrhvVbStatus 21, 30