Software System Development for Spacecraft Data Handling & Control Data Handling System Concepts and Structure




Software System Development for Spacecraft Data Handling & Control
Data Handling System Concepts and Structure

Document No.:
Date: 13.09.99
Issue: 1
Revision: -
Distribution: TERMA
Prepared by: Gert Caspersen
Authorised by: Carsten Jørgensen

The intellectual property of this document is vested in TERMA Elektronik AS.

Document Change Record

Issue  Date      Change
1      13.09.99  Initial Issue

Table of Contents

1 Introduction
  1.1 Scope
  1.2 Abbreviations and Acronyms
  1.3 Document Outline
2 Bibliography
3 Software System Development for Spacecraft Data Handling & Control
  3.1 Packet Utilisation Standard
  3.2 Data Handling System Platform
4 Interpretation of Packet Utilisation Standard
  4.1 Executive Summary of PUS
    4.1.1 Telecommand Verification Service
    4.1.2 Device Level Commanding Service
    4.1.3 Housekeeping & Diagnostic Data Reporting Service
    4.1.4 Event Reporting Service
    4.1.5 Function Management Service
    4.1.6 Onboard Scheduling Service
    4.1.7 Onboard Monitoring Service
    4.1.8 Onboard Storage and Retrieval Service
  4.2 Supported Services
  4.3 Telecommand Verification
  4.4 Device Level Commanding
  4.5 Housekeeping & Diagnostic Reporting
  4.6 Event Reporting
  4.7 Memory Management
  4.8 Function Management
  4.9 Onboard Scheduling
  4.10 Onboard Monitoring
  4.11 Onboard Storage & Retrieval
  4.12 Onboard Traffic Management
5 Understanding the OBOSS-II Software
  5.1 General Architecture
  5.2 Reusability of Architecture
  5.3 Rationale
  5.4 Telecommand and Telemetry Flows
  5.5 Data Flow
  5.6 Control Flow
  5.7 Structuring of Software Components
    5.7.1 Basic Services
    5.7.2 CDH Structure
    5.7.3 PUS Services
    5.7.4 PROBA
Appendix A General Control Flows
  A.1 Hard Real-Time Control Flows
  A.2 Sporadic Task
  A.3 Cyclic Task
  A.4 Protected Object

1 Introduction

1.1 Scope

This document describes the concepts and overall structure of the software framework and components resulting from the Software System Development for Spacecraft Data Handling & Control project. The software components were originally developed for a 1750A platform as part of the ESTEC project Onboard Operations Support Software. As part of the Software System Development for Spacecraft Data Handling & Control project they have been ported to an ERC32 platform.

1.2 Abbreviations and Acronyms

OBOSS-II  Software System Development for Spacecraft Data Handling & Control
PUS       Packet Utilisation Standard
IF        Interface

1.3 Document Outline

A general introduction to the Software System Development for Spacecraft Data Handling and Control project is given in chapter 3. The Packet Utilisation Standard is a fundamental baseline for the work carried out in the project. Chapter 4 provides an executive summary of the standard and elaborates on any deviations or interpretations that have been applied in the implementation of the services from the standard. Chapter 5 introduces the architecture of the reusable software developed in the OBOSS-II project. An outline of the general data flows and control flows in the system is given in order to ease the reuse process.

Appendix A introduces the set of control structures that have been applied in the reusable software components.

2 Bibliography

[PTC]  Packet Telecommand Standard, European Space Agency, PSS-04-107
[PTM]  Packet Telemetry Standard, European Space Agency, PSS-04-106
[PUS]  Packet Utilisation Standard, ESA, PSS-07-101, Issue 1, 1994

3 Software System Development for Spacecraft Data Handling & Control

The goal of the Software System Development for Spacecraft Data Handling & Control (OBOSS-II) project was to develop a reusable onboard software architecture implementing a subset of the services defined in the Packet Utilisation Standard [PUS] on a command and data handling platform. This architecture shall be able to support and ease (through intensive reuse) the development of onboard command and data handling software for a series of future missions. Consequently, the context of OBOSS-II is dominated by three elements: the Packet Utilisation Standard (PUS), the satellite platforms on which the OBOSS-II architecture is to be built, and the reusable software components being part of the reusable architecture. Each of these is discussed in the following.

3.1 Packet Utilisation Standard

This standard defines several general services whose onboard implementation supports satellite operation. It defines the application level of the Packet Telecommand Standard [PTC] and the Packet Telemetry Standard [PTM]. It is not in the scope of the OBOSS-II project to develop a complete PUS implementation. Consequently, a subset of the services and subservices has been selected for implementation.

3.2 Data Handling System Platform

The OBOSS-II architecture will be affected by any assumptions made regarding the command and data handling platform on which it is to reside. This will define the environment of the command and data handler. The environment depicted in Figure 1 is considered the baseline. Each element is briefly described below.

Figure 1: Data Handling System Platform (block diagram: the Data Handling System connects to Subsystem 1..N I/F over the Payload Bus, and to the COM Subsystem and the Ground Segment over the Up/Downlink Bus)

Ground Segment: Control centre, ground stations etc. responsible for operation of the satellite on which the command and data handler platform resides.

COM Subsystem: Communication subsystem providing the onboard support for the radio link between space segment and ground segment. It implements several layers of the ESA Packet Telecommand Standard [PTC] and Packet Telemetry Standard [PTM].

Up/Downlink Bus: Onboard bus dedicated to uplink and downlink data flows.

Data Handling System: Onboard subsystem responsible for commanding of onboard subsystems and data collection from these.

Payload Bus: Any bus chosen for interfacing to payloads. May be a multidrop bus as shown, or serial busses in a star-shaped configuration.

Subsystem 1 I/F .. Subsystem N I/F: Collection of onboard subsystems controlled by the data handling system. Includes payloads as well as the attitude control subsystem, electrical power subsystem etc.
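The baseline platform above can be modelled as two buses with attached nodes. The following is an illustrative sketch only, assuming invented names (`Bus`, `baseline_platform`); it reflects the Figure 1 topology, not any actual OBOSS-II code.

```python
# Illustrative sketch (assumed names, not OBOSS-II code): the baseline
# platform of Figure 1 as nodes attached to two buses.
from dataclasses import dataclass, field

@dataclass
class Bus:
    name: str
    nodes: list = field(default_factory=list)

    def attach(self, node: str) -> None:
        self.nodes.append(node)

def baseline_platform(n_subsystems: int) -> dict:
    """Build the Figure 1 topology: the DHS sits on both buses."""
    payload_bus = Bus("Payload Bus")
    updown_bus = Bus("Up/Downlink Bus")
    payload_bus.attach("Data Handling System")
    for i in range(1, n_subsystems + 1):
        payload_bus.attach(f"Subsystem {i} I/F")
    updown_bus.attach("Data Handling System")
    updown_bus.attach("COM Subsystem")
    return {"payload": payload_bus, "updown": updown_bus}
```

The point of the sketch is that the Data Handling System is the only node present on both buses, which is what lets it mediate between ground and the subsystems.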

4 Interpretation of Packet Utilisation Standard

Although the Packet Utilisation Standard aims at being unambiguous and complete, room is still left for interpretation. Any implementation may have to deviate due to limitations imposed by the target platform. For each of the supported PUS services, this chapter describes any interpretations and limitations present in the OBOSS-II implementation.

4.1 Executive Summary of PUS

This section gives a short informal introduction to the PUS services implemented in the OBOSS-II project.

4.1.1 Telecommand Verification Service

Any of the telemetry packets from this service may be generated by the PUS implementation. They are used to acknowledge the correct execution of telecommands sent from ground.

4.1.2 Device Level Commanding Service

Two subservices are supported:

1. On/Off Commands
2. Register Load Commands

Such commands have to be transformed into messages on the OBDH bus in some simple way. These commands are important as they provide ground control with a simple way of bypassing the more advanced PUS functions.

4.1.3 Housekeeping & Diagnostic Data Reporting Service

Housekeeping/diagnostic collection is based on a table of report definitions associated with each subsystem onboard. This table is initially empty. Report definitions indicate that a report shall be generated at a given interval. The report is identified by a unique report ID (unique for the subsystem in question). The content of the report is based on a list of parameter numbers telling which parameters are to be sampled in the report. These parameter numbers are unique over the entire onboard software, meaning that the list of possible parameters and their associated types has to be defined when OBOSS-II is tailored for a given mission. The following PUS subservices are supported:

Define Housekeeping/Diagnostic Data Report: These commands insert report definitions in the above table. The commands are aimed at the subsystem whose parameters are to be monitored. Report generation is enabled after the definition has succeeded (see below). Any report in progress is discarded.

Clear Housekeeping/Diagnostic Data Report: One or more report definitions are removed from the above table. Consequently, the associated housekeeping collection stops. Any report in progress is discarded.

En/Disable Housekeeping/Diagnostic Data Report: Starts and stops generation of housekeeping reports for the identified report definitions without removing these from the table.

Report Housekeeping/Diagnostic Report Definitions: Requests a definition report showing the current definitions for the identified reports.

Select Periodic/Filtered Report Generation Mode: Switches reporting between periodic mode and filtered mode. In filtered mode, reports are generated when one of the contained parameter values exceeds a given threshold or a given time has elapsed.

Report Masked Parameters: Generates a report identifying the parameters that are not thresholded in Filtered Report Generation Mode.

4.1.4 Event Reporting Service

This service only includes telemetry. Such reports are used to signal severe errors in the onboard system (including the software).
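The report-definition mechanism of the housekeeping service, with its periodic and filtered generation modes, can be sketched as follows. This is an illustrative sketch under assumed names (`ReportDefinition`, `should_emit`, a single per-definition threshold); it is not the OBOSS-II Ada implementation.

```python
# Illustrative sketch (assumed names, not OBOSS-II code): a housekeeping
# report definition with periodic and filtered generation modes.
from dataclasses import dataclass

@dataclass
class ReportDefinition:
    report_id: int          # unique within the subsystem
    interval: int           # generation interval, in sampling ticks
    parameters: list        # mission-wide unique parameter numbers
    enabled: bool = True
    filtered: bool = False  # False = periodic mode
    threshold: float = 0.0  # simplification: one threshold per definition

def should_emit(defn, ticks_since_last, samples):
    """Decide whether a report is due.

    Periodic mode: emit every `interval` ticks.
    Filtered mode: emit when a parameter exceeds the threshold,
    or when `interval` ticks have elapsed anyway (timeout).
    """
    if not defn.enabled:
        return False
    if ticks_since_last >= defn.interval:
        return True
    if defn.filtered:
        return any(samples[p] > defn.threshold for p in defn.parameters)
    return False
```

In filtered mode the timeout branch ensures ground still receives a report periodically even when all parameters stay below their thresholds, matching the "or a given time has elapsed" clause above.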

4.1.5 Function Management Service

Commands associated with this service may be used for the implementation of commands not otherwise covered by the PUS. The idea is to have one instance of this service for each subsystem. A function and an activity are both sequences of simple actions (e.g. OBDH bus interrogations) directed at the associated subsystem. However, several activities may be associated with one function (see below).

Activate Function: Execute the sequence of actions associated with the identified function based on the provided parameter values. Often used to enable a mode for a subsystem.

Deactivate Function: Perform a sequence of actions. Often used to disable a mode for the associated subsystem.

Perform Activity: Perform an activity of the identified function (if this is enabled).

4.1.6 Onboard Scheduling Service

Implements a command schedule, being a collection of time-tagged PUS telecommands divided into subschedules. The time tags may be absolute or relative to events such as the enabling of a subschedule. Commands in the schedule are released when their time is due (based on one central onboard clock).

En/Disable Release of Telecommands or Subschedules: Enabling or disabling of a subschedule enables or disables all commands in this subschedule. Disabled commands are not released for execution even when their time is due. Enabled commands are released when they are due.

Reset Command Schedule: Remove all commands and subschedules from the schedule.

Insert Telecommands: Inserts telecommands in the command schedule and a specific subschedule.

Delete Telecommands: Delete telecommands explicitly identified in the command, or having a time tag within a given time interval.

Report Schedule Contents: Dumps the specified part of the current contents of the Command Schedule in a report.

4.1.7 Onboard Monitoring Service

Maintains a table of expected parameter values or limits for an associated subsystem. Parameter values are sampled from the subsystem at specified intervals. A report is generated when one of these values is not within the nominal range. The table defines a collection of parameters (through their unique parameter number, see housekeeping above) that are to be monitored. Each of these parameters has an associated set of check definitions checked at a given interval. An out-of-limit event occurs if a parameter value has been outside the nominal interval a specified number of times. Out-of-limit events are collected in reports sent down at some point in time. Onboard procedures may be associated with the occurrence of out-of-limit events, allowing for autonomous anomaly handling onboard.

En/Disable Monitoring: Controls whether or not specified parameters are to be monitored. Their corresponding nominal definitions remain in the table.

Clear Monitoring List: Remove all check definitions from the table. Consequently stops all monitoring.

Add Parameters: Add check definitions to the table. Associates check definitions with parameters, starting monitoring of these.

Delete Parameters: Remove specified parameters from the table, meaning that monitoring of these stops.

Modify Parameter Checking Definition: Modifies check definitions associated with parameters.

Report Monitoring List: Generates report telemetry showing the current contents of the table.

Report Current Out-Of-Limit List: Generates a report showing out-of-limit events currently valid (meaning that the parameter value has not returned to nominal values).
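The rule that an out-of-limit event occurs only after a parameter has been outside its nominal interval a specified number of times can be sketched as follows. This is an illustrative sketch with assumed names (`CheckDefinition`, `check`) and a simplified single limit pair; it is not the actual OBOSS-II monitoring code.

```python
# Illustrative sketch (assumed names, not OBOSS-II code): limit checking
# where an out-of-limit event is raised only after the parameter has been
# outside its nominal interval a specified number of consecutive times.
from dataclasses import dataclass

@dataclass
class CheckDefinition:
    low: float
    high: float
    repetitions: int        # violations needed before an event
    violations: int = 0     # consecutive out-of-limit samples so far
    enabled: bool = True

def check(defn, value):
    """Feed one sample; return True when an out-of-limit event occurs."""
    if not defn.enabled:
        return False
    if defn.low <= value <= defn.high:
        defn.violations = 0     # return to nominal resets the count
        return False
    defn.violations += 1
    if defn.violations == defn.repetitions:
        return True             # raised exactly once per excursion
    return False
```

The repetition count filters out transient spikes, so a single noisy sample does not trigger an event report or an associated contingency procedure.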

4.1.8 Onboard Storage and Retrieval Service

This service is similar to the old tape recorders. Telemetry of given types is stored (in FIFO order) for later downlink on request. Several stores may exist onboard. They may or may not be shared among subsystems. The definition of which telemetry is to be stored where (or not stored at all) may be controlled through telecommands. Storage of telemetry is controlled through storage selections. These identify where telemetry of a given type/subtype is to be stored (or whether it is not to be stored at all). If some type of telemetry does not have an associated storage selection, then the telemetry is downlinked immediately.

En/Disable Storage: Starts and stops storage of telemetry in identified packet stores. Telemetry whose storage is disabled is downlinked immediately.

Add/Remove Types/Subtypes to Storage Selection: Defines that telemetry of given types/subtypes is to be stored in the identified store by adding these to the collection of storage selections.

Report Storage Selection Definition: Requests a report of storage selections for identified packet stores.

Downlink Packet Store Contents for Packet Range / Time Period: Results in the specified telemetry packets being submitted for downlink. Specification is based on a sequence count interval or a time span. Note that these time stamps are based on reception time in the store and not on the time stamp in the telemetry header.

Delete Contents of Packet Store up to Specified Packet / Storage Time: Remove telemetry packets matching the given criteria, meaning that they have a sequence number or time stamp less than the specified one. May also be used to delete the entire contents of all stores or a given store, or all packets of a given type.

4.2 Supported Services

Only a subset of the services defined in the standard is supported in the OBOSS-II implementation. These are shown in the following table.

Service Type  PUS Service                          Supported
1             Telecommand Verification             Yes
2             Device Level Commanding              Yes
3             Housekeeping & Diagnostic Reporting  Yes
4             Parameter Statistics Reporting       -
5             Event Reporting                      Yes
6             Memory Management                    Yes
7             Task Management                      -
8             Function Management                  Yes
9             Time Management                      -
10            Time Reporting                       -
11            Onboard Scheduling                   Yes
12            Onboard Monitoring                   Yes
13            Large Data Transfer                  -
14            Packet Transmission Control          -
15            Onboard Storage & Retrieval          Yes
16            Onboard Traffic Management           Yes
17            Test                                 -

In the following sections, each of the supported services is dealt with.

4.3 Telecommand Verification

For this service, all subservices are supported.

Deviations/Interpretations: Telecommand verification packets are generated according to the scheme depicted in Figure 2.

Figure 2: Telecommand Verification Scheme (flow diagram: a telecommand passes through Acceptance, Start of Execution, Progress of Execution and Completion of Execution stages, each with Success and Failure outcomes)

The general ideas are:

If execution of a telecommand involves several steps (e.g. insertion of values into a table), then each of these will be verified by an execution progress report. Should execution of a step fail, the following steps will still be executed. Completion of execution fails if any step has failed. The generation of telecommand verification packets is controlled by the acknowledge field of the telecommand in question.

4.4 Device Level Commanding

For this service, only the following subservices are supported:

PUS Subservice  Subtype
On/Off          1
Register Load   2

Deviations/Interpretations: Register data included in register load commands must be unsigned integers.

4.5 Housekeeping & Diagnostic Reporting

For this service, only the following subservices are supported:

PUS Subservice            Subtype
Define Report             1, 2
Clear Definition          3, 4
Control Generation        5-8
View Definition           9-12
Select Periodic           17, 18
Select Filtered           19, 20
Report Masked Parameters  21-24
Data Report               25, 26

Deviations/Interpretations: The sampling time offset reporting capability is not supported.

4.6 Event Reporting

For this service, only the following subservices are supported:

PUS Subservice                        Subtype
Error/Anomaly Report - High Severity  4

Deviations/Interpretations: Only one event is reported by the OBOSS-II software. When a task dies due to an unhandled exception being propagated to the outermost level, a Task Exception Report is generated with the following structure:

Field Name      Type    Value
RID             (2, x)  1
Exception_Name  (8, 0)  Variable

Following the generation of the report, the task in question terminates.

4.7 Memory Management

For this service, only the following subservices are supported:

PUS Subservice                                Subtype
Load Memory using Absolute Addresses          2
Dump Memory using Absolute Addresses          5
Memory Dump Using Absolute Addresses Report   6
Check Memory Using Absolute Addresses         9
Memory Check Using Absolute Addresses Report  10

Deviations/Interpretations: There are no deviations from the standard.

4.8 Function Management

For this service, only the following subservices are supported:

PUS Subservice       Subtype
Activate Function    1
Deactivate Function  2
Perform Activity     3

Deviations/Interpretations: There are no deviations from the standard.

4.9 Onboard Scheduling

For this service, only the following subservices are supported:

PUS Subservice              Subtype
En/Disable                  1, 2
Reset                       3
Insert (without interlock)  4
Delete                      5
Detailed Report             9-11
Summary Report              12-14

Deviations/Interpretations: Time shifting is not supported. Telecommand interlocking is not supported. Jumps in the CUC representation of onboard time are not detected.

4.10 Onboard Monitoring

For this service, only the following subservices are supported:

PUS Subservice        Subtype
Control               1, 2

Clear                 4
Add (without delta)   5
Delete                6
Modify                7
Report Definition     8, 9
Report Out-Of-Limits  10, 11

Deviations/Interpretations: The Maximum Reporting Delay is a system parameter common to all application processes. For telecommands of type (12,5) Adding Parameters to Monitoring List and (12,7) Modifying Parameter Checking Information, all fields related to delta checking are left out. This includes the NOD field as well. A procedure may be associated to initiate possible contingency procedures in case of out-of-limit events.

4.11 Onboard Storage & Retrieval

For this service, only the following subservices are supported:

PUS Subservice         Subtype
En/Disable             1, 2
Add/Remove             3, 4
Report                 5, 6
Downlink Packet Range  7, 8
Downlink Time Period   9
Delete to Packet       10
Delete to Time         11

Deviations/Interpretations: If the current storage selection definition is empty and a storage selection definition report is requested, then a verification failure is returned in response to the telecommand. This allows ground to distinguish between all packet types being selected and no packet types being selected.

4.12 Onboard Traffic Management

This service is not defined in the current version of the Packet Utilisation Standard. However, OBOSS-II has included a concept for onboard routing of telecommands and telemetry, allowing for a later adaptation to this service.
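The storage-selection behaviour described in sections 4.1.8 and 4.11 — store a telemetry packet if its type/subtype has a storage selection and storage is enabled, otherwise downlink it immediately — can be sketched as follows. This is an illustrative sketch under assumed names (`PacketStore`, `route_telemetry`); it is not the actual OBOSS-II implementation.

```python
# Illustrative sketch (assumed names, not OBOSS-II code): routing of a
# telemetry packet either into a FIFO packet store (when a storage
# selection exists for its type/subtype) or to immediate downlink.
from collections import deque

class PacketStore:
    def __init__(self):
        self.fifo = deque()   # packets retained for later downlink
        self.enabled = True   # toggled by the En/Disable Storage subservice

downlink = []                 # stands in for the downlink interface

def route_telemetry(packet, selections, stores):
    """selections maps (type, subtype) -> store name."""
    key = (packet["type"], packet["subtype"])
    store_name = selections.get(key)
    if store_name is None:
        downlink.append(packet)          # no selection: downlink now
        return "downlinked"
    store = stores[store_name]
    if not store.enabled:                # storage disabled: downlink now
        downlink.append(packet)
        return "downlinked"
    store.fifo.append(packet)            # stored in FIFO order
    return "stored"
```

The deque gives the tape-recorder FIFO behaviour: later Downlink Packet Store Contents requests would pop packets in reception order.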

5 Understanding the OBOSS-II Software

5.1 General Architecture

The onboard operations support software architecture is based on the general structure depicted in Figure 3. This is specifically aimed at onboard software using the Packet Utilisation Standard.

Figure 3: Command & Data Handler Architecture (block diagram: application processes for Subsystem X and Subsystem Y exchange PUS packets through a central Packet Router, which also connects via the Ground I/F and the Packet TC/TM Protocol to the Ground Segment)

Ground I/F: Communication between the ground segment and onboard application processes is based on PUS packets. Moving from the packetisation layer of the Packet Telecommand Standard [PTC] to the application process layer, the representation may be changed from an external representation to an internal representation. The inverse applies to the Packet Telemetry Standard [PTM].

Packet Router: Onboard routing of PUS packets is managed by one central component. This implements a mailbox-based message passing scheme. Based on the PUS packet contents, a destination application process is derived, and the packet is placed in the mailbox associated with the application process in question. Application processes are then responsible for fetching packets from their mailbox. The internal PUS packet representation is extended with an additional Source/Destination field. For telecommands, this identifies the onboard application process that produced the command (in this context, the Ground I/F is considered as an

application process). For telemetry, it identifies the destination of the telemetry packet (e.g. the Ground I/F or an onboard packet store).

Subsystem 1 I/F .. Subsystem N I/F: Each onboard subsystem has one associated PUS application process responsible for the implementation of PUS services for that specific subsystem. The process is implemented by an interface object responsible for control of that specific subsystem.

5.2 Reusability of Architecture

The particular features that make the above structure well-suited for reuse in other contexts are:

The use of PUS packets for sending information, i.e. telecommands and telemetry, among parts of the system provides a standardised interface to adhere to.

Having a packet router as the focal point of the architecture means that the other parts are loosely coupled to each other, and therefore changes in one part of the system can be kept local and will typically at most affect the packet router.

The packet router also allows for a simple and efficient control structure in that each application process, e.g. a payload, waits to collect a packet from the packet router, performs some required actions, potentially sends packets to other processes via the router, and finally waits for the next packet to process.

The structure, besides being simple, also increases the robustness of the system: if an application process, e.g. a payload, for some reason stops, the rest of the system can keep on executing (as long as no response from the stopped process is required in order to proceed). This is because the only effect of a process becoming inactive is that its buffer of incoming PUS packets will overflow, while every process sending to it is still allowed to proceed. If an overflow of such a buffer is detected, it can be used as an indication of problems with the corresponding application process: handling of the incoming packets may be too slow, or the process may have stopped completely.

Adding a new payload is simple, because all one needs to do is create a new application process with the simple control structure described above and add a new buffer to the packet router.

5.3 Rationale

When designing onboard software for reuse, the architecture shall possess a number of properties:

Scalability: The resulting structure shall be easily extendible and reducible in order not to prohibit the accommodation of future needs.

Low Coupling : Software components in the design shall have as few interdependencies as possible. Consequently, spacecraft modifications affecting subsystems including those components will be more easily accommodated. Layering : The structure shall be layered based on abstractions moving from the platform or payload specific to the mission independent. Accommodation to new platforms or payloads will then be focussed at components at the same abstraction level. Adaptability : Software components that are likely to need changes to fit in a new system shall be functionally isolated as much as possible. This will ease the adaption. Considering the architecture with respect to each of these properties provides the following rationale for the architecture: Scalability New application processes may easily be added as the packet router and a couple of descriptor modules are the only onboard components possessing knowledge of the set of onboard application processes. Adding an application process only involves extension of the set of application ids used by the packet router. Based on this, a new mailbox is provided for the new application process. In addition, new application process descriptors are included. Similarly, application processes that are not needed on a particular mission can easily be removed together with their application identifier and its associated mailbox. Low Coupling Application processes do not communicate with one another directly. All communication takes place through the packet router. Failure of one application process does not affect other application processes resulting in a more robust system. Layering The architecture presented above is partitioned into layers. A four-layer structure has been developed as described in section Structuring of Software Components. Adaptability Modifying one application process, for example a payload interface, will only have a limited impact on other application processes. 
This is a direct result of the low coupling stated above.

5.4 Telecommand and Telemetry Flows

The Packet Utilisation Standard includes an abstract model in which the onboard segment is viewed as a collection of application processes. However, any given implementation, including the one in OBOSS, will place some operational constraints on the behaviour of the data handling software. One important aspect during system level

design as well as during operation is the telecommand and telemetry data flows. This section will outline how these are implemented in OBOSS. Telecommand and telemetry flows in OBOSS are based on the following very simple characteristics:

Packet Utilisation Standard source packets exist in three different formats:
* Byte stream at the interface to the system.
* External format (record structure), corresponding completely to the definition in the Packet Utilisation Standard.
* Internal format (record structure), in which constant fields have been stripped off and the data from the packet has been placed in internal buffers.

When telecommand packets and telemetry packets are routed internally in the data handling software, the only thing carried around the system is a reference ID. Neither telecommand packets nor telemetry packets are prioritised in any way. All communication between application processes is based on Packet Utilisation Standard source packets represented in the internal format. Source packets to an application process are forwarded in a FIFO manner. Telemetry packets to ground 1 are forwarded in a FIFO manner. This leads to the overall source packet flows depicted in Figure 4 below.

[Figure 4: Source packet flows. Telecommand bytes enter through the Uplink IF and are routed by the Packet Router to the application processes (e.g. the Communication Subsystem); telemetry from the application processes is routed back through the Packet Router to the Downlink IF and leaves as bytes.]

1 Ground is represented by a special application process with its own application process ID.
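The characteristics above (per-process FIFO mailboxes, and routing by reference ID rather than by packet copy) can be sketched as follows. This is an illustrative Python sketch, not the OBOSS Ada code; all names (PacketRouter, store, route, next_packet) are assumptions.

```python
from collections import deque

class PacketRouter:
    """Sketch: packets live in a shared pool; only reference IDs circulate."""

    def __init__(self, capacity):
        self._pool = [None] * capacity        # internal-format packet storage
        self._free = list(range(capacity))    # free reference IDs
        self._mailboxes = {}                  # application ID -> FIFO of refs

    def register(self, apid):
        # Adding an application process only adds an ID and a mailbox.
        self._mailboxes[apid] = deque()

    def store(self, packet):
        ref = self._free.pop()                # claim a slot for the packet
        self._pool[ref] = packet
        return ref                            # only this ID is routed

    def route(self, apid, ref):
        self._mailboxes[apid].append(ref)     # FIFO, no prioritisation

    def next_packet(self, apid):
        ref = self._mailboxes[apid].popleft() # oldest packet first
        packet, self._pool[ref] = self._pool[ref], None
        self._free.append(ref)                # slot can be reused
        return packet

router = PacketRouter(capacity=4)
router.register(1)
router.route(1, router.store({"type": 17, "subtype": 1}))
router.route(1, router.store({"type": 8, "subtype": 1}))
print(router.next_packet(1))   # -> {'type': 17, 'subtype': 1} (FIFO)
```

Note how removing an application process in this sketch is just deleting its mailbox entry, mirroring the scalability argument of section 5.3.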

The execution order of telecommands destined for different application processes is thus completely dependent on the priority assigned to the Ada tasks implementing the application processes. The following example may visualise this.

Example

Consider the data handling system depicted in Figure 4. Each application process has one associated Ada task responsible for telecommand execution. These tasks have the following priorities:

Application Process    TC Execution Priority
        1                       20
        2                       15
        3                       10

Imagine that the control centre transmits a telecommand T1 destined for application process 2, followed by a telecommand T2 destined for application process 1. The system possesses FIFO properties until the telecommands arrive at the application processes for execution. Telecommand execution of T1 starts, but it is preempted the moment that T2 arrives, as the task responsible for telecommand execution in application process 1 has a higher priority.

When carrying out system level design and considering whether or not some Packet Utilisation Standard service shall be distributed among application processes, it is thus important to take any requirements for FIFO properties into account. At least the following items have to be considered:

* If commanding of one application process AP1 is to take precedence over the commanding of another application process AP2, then the sporadic telecommand execution task for AP1 should be assigned a higher priority than the sporadic telecommand execution task for AP2.
* If it is a system level requirement that commands destined for two or more onboard subsystems be executed in FIFO order, then the control of these subsystems has to be combined into one application process.

5.5 Data Flow

A typical data flow is presented in Figure 5. A telecommand destined for an application process, say Payload, is transmitted from ground and received by the communication system.

[Figure 5: Data flow for an on/off command. A TC byte stream enters on the Up/Down Link Bus and passes through the Uplink I/F and the Packet Router to the application process, which issues an on/off command to the on/off bus driver as an OBDH message on the OBDH bus; TC verification TM returns through the Packet Router as a TM byte stream.]

The physical layer of the uplink and downlink channels is provided by a dedicated Up/Down Link Bus of the native computer. The bus carries simple byte streams as defined by [PTC] and [PTM], respectively. Transformations between the external representation and an internal PUS packet representation are performed by Ground IF (Ada package Ground_IF). This component also manages the relevant parts of the uplink and downlink protocols. The received telecommand is sent to the Payload application process via the Packet Router. Inside the Payload application process the telecommand will be forwarded to the relevant (sub-)process for processing, based on the packet type and subtype. In the example depicted in Figure 5, the relevant process is the payload telecommand interpreter, which will interpret the command, resulting in the generation of commands for the payload interface driver. Depending on what type of verification is required, a verification TM packet may be produced to acknowledge receipt of the command. The verification packet will indicate acceptance or rejection of the telecommand. If the command is accepted, parameters are extracted to construct commands for the payload driver interface. This provides an implementation of low-level commands on the OBDH bus that connects the onboard software with the payload. If required, the completion of the telecommand execution will be acknowledged by issuing a verification TM packet, which is sent via the Packet Router to Ground I/F for downlinking. The Ground I/F will subsequently transform it to the external format and submit it on the Up/Down Link Bus.
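A minimal sketch of the representation changes involved (byte stream, external record, internal record); the field layout here is a simplification for illustration, not the actual PUS header, and the function names are assumptions:

```python
import struct

def bytes_to_external(stream):
    # External format: a record mirroring the on-the-wire byte stream.
    # Simplified header: 16-bit packet ID, 16-bit sequence control,
    # 16-bit length (the real PUS header carries more fields than this).
    packet_id, seq_ctrl, length = struct.unpack(">HHH", stream[:6])
    return {"apid": packet_id & 0x07FF,       # application process ID
            "seq_count": seq_ctrl & 0x3FFF,
            "length": length,
            "data": stream[6:]}

def external_to_internal(ext, buffers):
    # Internal format: constant fields are stripped off and the application
    # data is placed in an internal buffer; only a reference remains.
    ref = len(buffers)
    buffers.append(ext["data"])
    return {"apid": ext["apid"], "data_ref": ref}

buffers = []
raw = struct.pack(">HHH", 0x0842, 0xC001, 1) + b"\x2a"
internal = external_to_internal(bytes_to_external(raw), buffers)
print(internal)   # -> {'apid': 66, 'data_ref': 0}
```

The reverse transformation (internal to external to bytes) would be applied by the downlink side, as the External PUS component does in OBOSS.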
5.6 Control Flow

The onboard control flow is event-driven, with external interfaces and the packet router as primary synchronisation points. This means that each application process has one or more associated sporadic tasks that wait for an event and perform some activity based on it. Refer to Appendix A for an introduction to the implementation of sporadic tasks in OBOSS-II. The packet router is a central component. It maintains a buffer of PUS packets destined for each application process (including ground). Each application process will check its buffer and wait for telecommands if it is empty. Given the data flow shown in Figure 5 above, the corresponding control flow will be as shown in Figure 6.
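The buffer-and-wait behaviour just described can be sketched as a single pass through the chain, using plain FIFO buffers in place of the protected Ada structures (all names here are illustrative assumptions):

```python
from collections import deque

GROUND, PAYLOAD = 0, 1
buffers = {GROUND: deque(), PAYLOAD: deque()}   # kept by the packet router
driver_log = []

def uplink_if(tc):
    # Uplink IF: hand a received telecommand to the packet router.
    buffers[PAYLOAD].append(tc)

def application_process_step():
    # Application process: take the oldest TC from its buffer, execute it,
    # and return a verification TM packet via the router's Ground buffer.
    tc = buffers[PAYLOAD].popleft()
    driver_log.append(f"driver command for {tc}")
    buffers[GROUND].append(f"verification TM for {tc}")

def downlink_if():
    # Downlink IF: pick up TM destined for ground for downlinking.
    return buffers[GROUND].popleft()

uplink_if("TC(2,1)")
application_process_step()
print(downlink_if())   # -> verification TM for TC(2,1)
```

In the real system each of these steps is a separate thread of control blocking on its buffer; the sequential calls here only trace one telecommand through the chain.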

[Figure 6: Control flow for an on/off device command. Active components with their own thread of control (the Uplink I/F, the application process, the bus driver) interact through passive/protected components (the Up/Down Link Bus, the Packet Router, the OBDH bus).]

The TC Uplink IF waits for the arrival of telecommands on the Up/Down Link Bus. On arrival of a telecommand, the Uplink IF transfers it to the Packet Router and returns to the Up/Down Link Bus to wait for the next telecommand. The Application Process awaits the arrival of telecommands destined for the particular payload and extracts the oldest one from the dedicated buffer maintained by the packet router (if the buffer holds more than one). Based on the type and subtype of the telecommand, the Application Process forwards it to the relevant (sub-)process for execution. Here the payload telecommand interpreter executes the command. On completion, the Application Process returns to the packet router to get (or wait for) the next telecommand. One or more specific payload driver commands are constructed by the payload telecommand interpreter, which subsequently passes these on to the driver. If required, the telecommand interpreter generates one or more verification TM packets and forwards these to the packet router. These will be returned to the issuer of the telecommand. Finally, the TM Downlink IF awaits the arrival of telemetry packets for downlinking. It has a dedicated Ground buffer of PUS packets administered by the packet router. Upon arrival of verification TM packets (or other TM packets), the TM Downlink IF transforms these to the external format and forwards them to the Up/Down Link Bus.

5.7 Structuring of Software Components

The OBOSS-II collection of reusable software components is partitioned into a layered structure as shown in Figure 7.

[Figure 7: OBOSS-II software structure, layered from top to bottom: PROBA, PUS Services, CDH Structure, Basic Services.]

The contents of the various layers will be described in the following. 5.7.1 Basic Services This collection of software components provides a range of simple services being shared among all the PUS services: Low Level Stuff defining a number of compiler and platform specific entities. This ranges from representation of simple data types to implementation of an interface to the onboard clock. Source Data providing abstract data types for the source data contained in PUS packets. Mission Parameters defining a set of mission specific parameters (re. [PUS] Appendix B: Mission Parameters ). Containers implementing a collection of general data structures (e.g. sets, maps, lists, queues etc.) Control Structures implementing the general control structures for sporadic and cyclic tasks as outlined in Appendix A. Internal PUS providing an internal representation of Packet Utilisation Standard packets. These are represented as a reference (or pointer) for efficiency reasons. An abstract data type is provided with constructors, accessors etc. External PUS implementing the external representation of Packet Utilisation Standard packets. Transformations are provided between internal and external Packet Utilisation Standard packet representations, and also between external Packet Utilisation Standard packets and byte streams. Platform Parameters defining a collection of platform specific parameters ranging from representation of parameters collected by Packet Utilisation Standard services to stack sizes allocated to Ada tasks. Resource Manager providing a mechanism for management of a collection of identical resources. Used for management of data buffers. Parameter Structure Descriptions mapping parameter IDS into the type and representation (Re. section 23 Parameter Types and Structure Rules in [PUS]) of the identified parameter. Basic Services Initialiser responsible for initialisation of the entire Basic Services collection. 25

5.7.2 CDH Structure

The general framework in which application processes are to be implemented is provided by this collection. This covers onboard routing and facilities for management of the uplink and downlink data streams. The following components are included:

* Packet Router: responsible for onboard routing of telecommands and telemetry. Implements the Onboard Traffic Management Service from the Packet Utilisation Standard.
* Downlink IF: managing the downlink telemetry stream. Responsible for transformation of PUS packets from internal to external format.
* Uplink IF: managing the uplink telecommand stream. Responsible for transformation of PUS packets from external to internal format.
* Up Down Link Bus: implementing a telecommand byte stream and a telemetry byte stream. Manages the interface to a telecommand decoder and telemetry frame generator (if any), including any applicable protocols.
* Dynamic Application Process Descriptors: managing a state for each application process. This includes management of source sequence counters, current storage selection definitions etc.
* CDH Structure Initialiser: responsible for initialisation of the entire CDH Structure collection.

5.7.3 PUS Services

This is a collection of components, each implementing a service from the Packet Utilisation Standard. These may be combined (through instantiation) to implement application processes for specific missions.

* Monitor: implements the onboard monitoring service from PUS.
* HK Collector: provides a housekeeping and diagnostic data reporting service as defined in PUS.
* Memory Management: implementing the memory management service from the Packet Utilisation Standard.
* Storage and Retrieval: implementing an onboard storage and retrieval service according to PUS.
* Event Scheduler: offering services for management of a time-line of cyclic events. Used by other components to implement e.g. periodic monitoring or housekeeping collection.
* On Board Scheduler: an implementation of the onboard scheduling service defined in PUS.
* Function Management: supplying a function management service according to PUS.

* Event Reporting: offering services for reporting of critical onboard events.
* TC Verification: providing facilities for acknowledgement of telecommand executions according to the telecommand verification service defined in PUS.
* Device Command Distribution: realising a device-level commanding service as defined in PUS.

5.7.4 PROBA

This layer contains software components implementing application processes. These are based on instantiations of the generic PUS services provided in the PUS Services collection. At the current time only a PROBA command and data handler is included, as inspiration for future reuse. The components included are:

* PROBA Data Handling System: represented by the Ada main program proba_dhs.ada, combining the application processes implemented by onboard logistics, electrical subsystem IF, and telemetry store into one piece of command & data handling software.
* Attitude Control System: implementing an example of an application process for an attitude control system. Contains instances of the following Packet Utilisation Standard services:
  * Device Level Commanding
  * Onboard Monitoring
  * Housekeeping & Diagnostics Data Reporting Service
  * Function Management
* Electrical Power Subsystem: implementing an example of an application process associated with an electrical power subsystem, also known as the power distribution unit. The following services from the Packet Utilisation Standard are provided by the application process:
  * Onboard Monitoring
  * Housekeeping & Diagnostics Data Reporting Service
  * Function Management
* Memory Manager: showing a possible implementation of one centralised memory management service.
* Onboard Storage Administrator: providing a centralised onboard storage and retrieval service.
* Telecommand Scheduler: implementing an onboard scheduling service.
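The idea of assembling an application process from instances of generic PUS services can be sketched as follows. The class names and the use of PUS service type numbers for dispatch are illustrative assumptions about the structure, not the OBOSS Ada generics themselves.

```python
class Service:
    """Sketch of a generic PUS service instance."""
    def __init__(self, name):
        self.name = name
    def handle(self, tc):
        return f"{self.name} handled {tc}"

class ApplicationProcess:
    """Sketch: dispatches telecommands to service instances by type."""
    def __init__(self, apid, services):
        self.apid = apid
        self.services = services          # service type -> service instance

    def execute(self, service_type, tc):
        # Forward the telecommand to the relevant (sub-)process,
        # cf. the type/subtype dispatch described in section 5.5.
        return self.services[service_type].handle(tc)

# An attitude-control-like process assembled from four service instances
# (the numeric keys are assumed PUS service type numbers).
acs = ApplicationProcess(apid=3, services={
    2: Service("Device Level Commanding"),
    12: Service("Onboard Monitoring"),
    3: Service("Housekeeping & Diagnostics Data Reporting"),
    8: Service("Function Management"),
})
print(acs.execute(8, "TC(8,1)"))   # -> Function Management handled TC(8,1)
```

Building a different mission's process is then a matter of combining a different set of service instances, which is the reuse argument of this layer.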

Appendix A General Control Flows


A.1 Hard Real-Time Control Flows

In order to enable a schedulability analysis of the resulting data handling system software, the following types of control flows are used in the implementation:

* Sporadic Tasks
* Cyclic Tasks
* Protected Objects

The implementation of each of these is outlined below.

A.2 Sporadic Task

A sporadic task implements a thread of control that is initiated when a specific event occurs. The synchronisation related to the event is normally based on a Start entry being called to initiate the response. Use of such control structures is based on instantiation of a generic Ada package named Sporadic_Task. The control structure is started by a Go event, which has been introduced to eliminate problems with incomplete elaboration. To aid understanding of this control structure, a sample template expressed in Ada 83 is provided in Figure 8 (the task name Sporadic is a placeholder).

task Sporadic is
   entry Go;
   entry Start;
end Sporadic;

task body Sporadic is
begin
   accept Go;
   loop
      accept Start;
      -- The list of actions
   end loop;
end Sporadic;

Figure 8: Implementation of sporadic task
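For comparison, the same Go/Start pattern can be modelled in Python with a thread that first blocks on an initialisation event and then on a queue of activation requests. This is an illustrative analogue, not part of OBOSS; the shutdown sentinel exists only to make the sketch terminate.

```python
import queue
import threading

class SporadicTask:
    """Sketch: thread waits for Go, then handles one Start event at a time."""

    def __init__(self, action):
        self._go = threading.Event()
        self._events = queue.Queue()
        self._action = action
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def go(self):
        self._go.set()                 # releases the task after elaboration

    def start(self):
        self._events.put("start")      # one Start event = one activation

    def _run(self):
        self._go.wait()                # accept Go;
        while True:                    # loop
            ev = self._events.get()    #   accept Start;
            if ev is None:             #   (sentinel, sketch only)
                break
            self._action()             #   the list of actions

    def shutdown(self):
        self._events.put(None)
        self._thread.join()

runs = []
task = SporadicTask(lambda: runs.append(1))
task.go()
task.start()
task.start()
task.shutdown()
print(len(runs))   # -> 2
```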

A.3 Cyclic Task

The cyclic task implements a thread of control that repeatedly performs some actions with a given period. This is based on the use of a special Delay_Until construct providing an absolute delay. Use of such control structures is based on instantiation of a generic Ada package named Cyclic_Task. Similar to the sporadic control structure, cyclic control flows are started by a Go event. A sample template of an Ada 83 implementation of such a control flow is shown below in Figure 9 (the task name Cyclic is a placeholder).

task Cyclic is
   entry Go;
end Cyclic;

task body Cyclic is
   Next_Time : System_Clock.Time;
begin
   accept Go;
   Next_Time := System_Clock.System_Start_Time + Period;
   loop
      System_Clock.Delay_Until (Next_Time);
      Next_Time := Next_Time + Period;
      -- Cyclic operation
   end loop;
end Cyclic;

Figure 9: Implementation of cyclic task
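The drift-free property of the absolute Delay_Until can be illustrated in Python by computing each release time from the previous one rather than from "now" (an illustrative sketch; time.monotonic stands in for the onboard clock):

```python
import time

def run_cyclic(action, period, iterations):
    """Sketch of the cyclic pattern: absolute release times avoid drift."""
    next_time = time.monotonic() + period
    for _ in range(iterations):
        # Delay_Until equivalent: sleep until an absolute point in time.
        time.sleep(max(0.0, next_time - time.monotonic()))
        next_time += period            # next release is exactly one period
        action()                       # later, however long action takes

ticks = []
run_cyclic(lambda: ticks.append(time.monotonic()), period=0.01, iterations=3)
print(len(ticks))   # -> 3
```

Sleeping a relative period after each iteration would instead accumulate the execution time of the action into the cycle, which is why the template adds the period to an absolute time.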

A.4 Protected Object

This object provides a critical region, typically protecting the state of a shared abstract data type. Its implementation is based on the passive tasks provided in the ALSYS run-time system for the ERC32 cross-compiler. Protected objects do not have a Go entry as they contain no thread of control. The general structure of such an object is outlined in Figure 10 below (the task and entry names are placeholders).

task Protected_Object is
   pragma PASSIVE (Protected_Object);
   entry Operation_1;
   -- ... further entries ...
   entry Operation_N;
end Protected_Object;

task body Protected_Object is
begin
   loop
      select
         accept Operation_1 do
            -- ... critical region ...
         end Operation_1;
      or
         accept Operation_N do
            -- ... critical region ...
         end Operation_N;
      or
         terminate;
      end select;
   end loop;
end Protected_Object;

Figure 10: Implementation of protected object
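The same role is played in Python by a lock-protected object; a hedged sketch with a counter standing in for the shared abstract data type:

```python
import threading

class ProtectedCounter:
    """Sketch: a critical region protecting shared state (here, a counter)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:               # each entry is mutually exclusive,
            self._value += 1           # like the accept bodies above

    def value(self):
        with self._lock:
            return self._value

counter = ProtectedCounter()
workers = [threading.Thread(target=counter.increment) for _ in range(10)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter.value())   # -> 10
```

As with the Ada passive task, the object has no thread of its own; callers execute the entries, and the lock serialises them.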