Orla: A data flow programming system for very large time series
DSC 2001 Proceedings of the 2nd International Workshop on Distributed Statistical Computing
March 15-17, Vienna, Austria
K. Hornik & F. Leisch (eds.)
ISSN X

Orla: A data flow programming system for very large time series

Gilles Zumbach and Adrian Trapletti

Abstract

To analyze tick-by-tick financial time series, programs are needed which are able to handle several million data points. For this purpose we have developed a data flow programming framework called Orla.¹ The basic processing unit in Orla is a block, and blocks are connected to form a network. During execution, the data flow through the network and are processed as they pass through each block. The main advantages of Orla are that there is no limit to the size of the data sets, and that the same program works both with historical data and in real time mode. In order to tame the diversity of financial time series, the Orla data structure is specified through a BNF description called SQDADL, and the Orla data are expressions in this language. For storage, the time series are written in a tick warehouse which is configured completely by the SQDADL description. Queries to the tick warehouse are SQDADL expressions, and the repository returns the matching time series. In this way, we achieve a seamless integration between storage and processing, including real time mode. Currently, our tick warehouse contains elementary time series. In this paper, we provide a brief overview of Orla and present a few examples of actual statistical analysis computed with Orla.

Olsen & Associates, Research Institute for Applied Economics, Seefeldstrasse 233, 8008 Zürich, Switzerland. [email protected]

¹ Orla and the tick warehouse are the result of a company-wide development over several years, involving many individuals. The main contributors are: David Beck, Devon Bowen, Dan Dragomirescu, Paul Giotta, Loic Jaouen, Martin Lichtin, Roland Loser, Kris Meissner, James Shaw, Leena Tirkkonen, Robert Ward, and Gilles Zumbach.
1 Introduction

The basic idea of Orla can be summarized as data flowing through a network of blocks, and is nicely illustrated by the example given in figure 1. Every Orla application can be represented by a network, which in turn consists of interconnected blocks. Each block performs a computation on the data flowing through it, and the data flow between blocks along the connections.

Figure 1: An example of an Orla network. (The figure shows some twenty interconnected block instances, e.g. OrlaReadRepo, OrlaProject, OrlaVolatility, OrlaMA, OrlaChop, OrlaHistogram, OrlaLaggedCorrelation, OrlaWriteAscii.)

The main concepts used in Orla are therefore the datum, the block, the connection and the network:

Datum (pl. data): the unit of information flowing in the network. Since we work with financial time series, the diversity of financial assets needs to be represented in the datum. A datum can be as simple as a price for a spot foreign exchange rate or as complicated as a forecast for an interest rate yield curve. Because the data are not simple pairs of time stamps and values, but are instead highly structured, a type is associated with each kind of data. Datum structures and types are further explained in section 4.
Block: the basic processing unit. A block receives data through a number of input ports, processes the data according to some algorithm, and sends the data further down through its output ports. The philosophy behind blocks is to implement fairly elementary functionality at the block level, and to obtain more complex algorithms through a chain or network of blocks.

Connection: the link between blocks. A connection establishes a path for the data flow between output ports and input ports of different blocks. Several connections may start from the same output port, but only one connection may arrive at an input port (this is required for a meaningful time ordering). The flow through a connection is always a valid time series, i.e., a time-ordered series of data with an associated type.

Network: the set of blocks, connections and data. When a network is evaluated, the data flow through the network. Results are written either continuously during the computation (e.g., when recording a time trace of some computed time series quantity), or at the end of the computation (e.g., when evaluating a global average).

The block structure of Orla imposes a local processing model. This has two main implications: First, there are no dependencies between blocks other than the connections in the network. A block communicates only through its ports, on which it receives and sends valid time series. Second, each datum carries all the information needed for its processing by the blocks. For example, as some algorithms can be implemented much more efficiently for regularly spaced time series, the typing system of the data includes the kind of the time series (irregular, or regular with a given time interval between two subsequent data points), and the blocks can use this information to optimize their computations. The block structure of Orla is similar to a Lego world, and enables optimal code reuse.
To summarize, Orla is a data flow programming system. Its purpose is to:

- analyze unevenly spaced time series,
- process data sets of unlimited size, and
- work both in historical mode and in real time.

The first point is mainly a question of the algorithms used for the actual statistical computations. Most empirical studies work with discretely and equally spaced time series. However, simple and efficient algorithms exist for the analysis of irregularly spaced time series and are presented for example in [1]. To support this type of analysis, the time stamp in Orla is not just an index but a meaningful time that is also used for timing and synchronization issues. The second and third points in the above list are byproducts of the data flow paradigm. Because the time series are not loaded into main memory, Orla is not limited with respect to the size of the data sets. We are working routinely with time series containing 10^5 to 10^7 data
points, possibly computing vector-valued time series with 50 variables per vector. On the other hand, this implies that incremental algorithms have to be used: this is possible, for example, for the computation of the mean, but not for the median (which would essentially require ordering the full time series). Furthermore, because Orla is data driven, it also works in real time for the delivery of value-added information. A key advantage here is that the same code can be used both for research using historical data and for the real time delivery of products. This avoids the reimplementation and testing of software already used and tested in historical mode. Hence, for us Orla is the platform both for economic research (in particular, for time series analysis) and for the delivery of real time information. Moreover, Orla is also very flexible with respect to the type of time series which can be analyzed. In principle, any time series can be analyzed as long as the particular data structure is appropriately specified (e.g., Orla could be used to analyze weather data, industry processing data, electricity network data, etc.).

2 Working with Orla

Concretely, Orla is written in C++ and appears as a set of classes and libraries. The blocks are grouped into libraries according to their functionality, and the user needs to create the desired instances with the corresponding parameters. Presently, the user only needs to write a main program which can be compiled and linked as usual.²
An elementary example that just reads and prints data is as follows:

    OrlaReadRepo -> OrlaPrint

The corresponding C++ code is

    // create a network
    OrlaNetwork network( "A simple network" );
    // create a block which gets data from the repository
    OrlaReadRepo in( "FX(USD,ATS),Quote(,,),Source(,,),Filter(,,)" );
    // create a block which prints the data
    OrlaPrint out( cout );
    // bind together the network and the blocks
    network >> in >> out;
    // and run the network
    network.run( " ", " " );

² In principle, it would be easy to put a GUI on top of Orla where, for example, blocks are created from pull-down menus. However, the current implementation does not support such a GUI.
The last statement triggers the run time engine of Orla, which will let the data with time stamps in January 2001 flow through the network. More generally, the user writes a high-level C++ program to perform the desired task. This consists of statements creating an empty network, constructing and binding the blocks, and running the network. All the magic happens when running the network. During execution, the blocks are activated by the network (more precisely, by the scheduler contained in the network) through two kinds of events:

The data: these events depend on the input time series and the blocks upstream. They are essentially random, as a block has no control over its input data.

A timer: events scheduled according to the physical clock, e.g., an event every day at 16:00. Timers are used for example to create a regularly spaced time series, say the number of ticks within the last day. These events are essentially deterministic, as a block itself must create the timer according to its behavior. Timers are also used in real time to implement a grace time interval (see below).

The scheduler guarantees that every block gets the events in time order. In this way, the block writer does not have to care about scheduling issues. It should also be emphasized that the scheduling has to be time-wise efficient, particularly because the granularity of the tasks performed by the blocks is small. In practical tests with networks which perform mildly computationally intensive tasks, the scheduling overhead is below 5% of the overall CPU time. Another issue which we do not discuss further is the block scheduling strategy used to avoid large queues of data on input ports, which would lead to excessive memory consumption. Essentially, this is controlled by following the data, i.e., by activating the blocks downstream of the last produced data (depth-first strategy).
3 How Orla works

The complete evaluation of a network is a succession of tasks:

- Check the sanity of the network, mainly data type checking and checking the number of connections. This is done by each block by checking that it receives the appropriate data type on each port. Possibly, blocks grab memory according to the number of ports or the length of the received data vector, and set timers.
- Activate the producer blocks. This starts the flow of data.
- Process incoming data or timer events for all the blocks.
- Possibly, switch to real time mode.
- Possibly, process and forward end-of-data conditions.
Finally, the network finishes once everything is idle, namely the run method returns.

The timing paradigm is to start networks in historical-time processing mode, and to switch to real time when caught up with the wall clock. This allows blocks to build up their internal states with historical data, and then to switch to real time. The changes of processing mode (historical to real time, processing of end-of-data) trickle down the network, similarly to the data. The important point is that the scheduling is done differently in historical and in real time mode. For historical time, the scheduling is data and timer driven, and the wall clock is irrelevant. This corresponds to a simple ordering of the events (for every block), essentially because the incoming events are known. In real time, the data are processed as they arrive, while the timers are handled according to the wall clock. Moreover, we want to have identical results whether the same data are processed in real time or in historical mode; this is essential for the reproducibility of the real time computations. A subtle point arising in real time is the need to add a bit of slack to the scheduling. We are working in a distributed environment, where the data are received, filtered, and stored on other computers in the local network. The time stamps put on the data are set by the first computer that receives them from the external source (e.g., from the data provider Reuters). Then, there are delays in the processing upstream of an Orla network, while the data are filtered, stored and forwarded inside the local network. The end result is that a datum with a given time stamp is actually received a few milliseconds to a few seconds later in a running Orla network. A clear scheduling problem appears with timers: e.g., a datum has a time stamp of 11:59:59, a timer is set at 12:00:00, and this datum is received by the network only at 12:00:02.
For this reason, a grace time interval δt is added to the incoming events of blocks with multiple inputs (input ports and/or timers). Namely, for every event (data or timer) arriving at t, a timer is set at t + δt for the actual processing of this event. This makes it possible to correctly time-order the events occurring inside the grace time interval.

The main design criteria for Orla are:

A high level of abstraction to hide underlying details. The users, and even the block writers, are unaware of the scheduling and its complexity. This makes Orla easy to learn and safe to use.

Minimal overhead in scheduling the blocks and forwarding the data. The goal has been to have a modular structure which is nevertheless efficient.

A versatile datum structure. This topic deserves a full section, see section 4.

Automatic vectorization over the time horizons. When analyzing financial time series, it is often very instructive to investigate different time horizons. An example is to analyze the price differences (returns) at increasing time intervals, say 1h, 2h, 4h, etc. In this case, the block that computes the returns can be used with a vector of time horizons as argument in the constructor. The output of this block is then a vector containing the respective returns. This block can feed, say, a histogram block in order to compute the probability
distribution of the returns. The histogram block itself recognizes that the input is a vector and therefore computes a vector of histograms corresponding to each value in the input vector. In this way, computational and diagnostic blocks can be chained using scalar or vector data. This is a very powerful feature of Orla, as one simple network can analyze a time series at various time horizons at once.

Easy extensibility by adding new blocks. This is important in order to get new blocks written by users who are not aware of all the foundations and subtleties of Orla.

Only valid values are sent. All algorithms computing moving values need to be initialized with some data. Consider the example of a 1-day moving average computation. Although a value could be issued already at the first tick, one day of data is needed to properly initialize the internal states and to provide a meaningful result. This first day, during which the result is invalid, is called the build-up period. For some algorithms, no results can be produced during the build-up of the internal states. Orla is designed to send only valid data, i.e., to send data only after the build-up time interval of the particular algorithm has elapsed. In this way, blocks are guaranteed to receive only valid data. Moreover, blocks can return their build-up time, and the network can calculate the cumulative build-up time of a network. In this way, the user can easily specify a time interval for valid values, and the network can process earlier data according to the network's cumulative build-up time interval.

Figure 2: The general block set-up.
The block programmer's view of a general block is as in figure 2. The block has input and output ports with corresponding data types. The input types can be queried (and are provided by the block upstream of each port). On the output side, each block must provide the actual types of each output port. The blocks must provide essentially three methods, corresponding to the block initialization and set-up, the processing of the data, and the end-of-data handling. As the network takes care of all synchronization and scheduling issues, this simplifies the task of the block writer. For example, the data processing method takes as arguments a time and a vector of simultaneous data corresponding to the input ports, with a nil pointer indicating that one of the input ports has no datum at this time.

4 The data representation

Our goal is to work with financial data and with the whole variety of possible financial contracts. Financial contracts range from the simplest instruments, such as spot foreign exchange rates which are characterized by a single number (the price), to complex derivatives involving more parameters, like the expiry date, the strike price, the detail of the derivative (e.g., European call option, American put option, etc.) and the characteristics of the underlying. This is a vast world which contains plenty of different instruments such as interest rates, futures, forwards, options, swaps, options on futures, and so on. Besides, depending on the computation, a time series with some fixed parameters might be desired, e.g., a 3-month interest rate on USD. Or, for another computation, all values of some parameter are desired, e.g., in the construction of a yield curve on the USD all maturities are needed, but the currency is still fixed. Clearly, there is no general rule fixing what the (constant) parameters are and what is part of the time series (with changing values).
The distinction between parameters and time series values cannot be fixed once and for all, as it changes with the use of the time series. Therefore, the data structure needs to be flexible enough to accommodate all possible financial contracts with their particular characteristics. Potentially, all fields can be values or parameters depending on the computation. For storing the data, the same problem occurs. In order to represent any kind of financial security in the Orla datum, the data structure is given through a BNF description called SQDADL (SeQuential DAta Description Language). The description of the whole world of financial securities is given in approximately 6 pages of text. This compactness is made possible by the power and recursivity of the BNF description; the recursivity is used in the description of the derivatives (a derivative depends on one or possibly several underlying contracts). A general parser of SQDADL expressions is constructed from the BNF specification. This ensures a completely generic approach without hard coding of the financial peculiarities. SQDADL expressions can be concrete or abstract. In a concrete expression, all the fields have a value, e.g., FX(USD,CHF) specifies a spot foreign exchange (FX) rate between the United States dollar (USD) and the Swiss franc (CHF). In an abstract expression, some of the fields are empty, with the meaning of "anything", e.g., in FX(,) or FX(USD,). Furthermore, SQDADL expressions have an "is a" relationship, e.g., FX(USD,CHF) is a FX(USD,) is a FX(,). This powerful mechanism is used for type checking in Orla.

For data storage and retrieval the same technique is applied. We have developed a tick warehouse as a special-purpose financial time series repository. The tick warehouse is configured completely with the SQDADL description, while the internal organization of the tick warehouse is hidden from the users. The abstract view of the tick warehouse is that of a large store which understands SQDADL requests. For storage, the data are simply thrown into the repository. The queries to retrieve data are abstract SQDADL expressions, and the repository returns the matching time series of concrete SQDADL ticks. In this way, we have a seamless integration between the tick warehouse and Orla. The internal organization of the tick warehouse is optimized for time series in order to give a high throughput for the data requests. The tick warehouse also works both in historical mode and in real time. A general request contains a start and end time for the time series; no end time means that the request will continue with real time data. Currently, we are storing time series in the repository (the exact count depends on how stringently a time series is defined), with an increment of 10^6 ticks per day.

5 A few example computations done with Orla

In this section, we give four examples of statistical computations carried out with Orla. The data used are USD/CHF foreign exchange data. Figure 3 shows an intra-week analysis of the volatility, namely a 30-minute volatility is computed, followed by an average conditional on the time in the week. Moreover, only the data during winter time are used, i.e., when Europe and the US have no daylight saving time (DST). The figure shows very clearly the weekly cycle of human activity, with almost no activity during the weekend.
Moreover, the volatility during each day is the signature of the different open markets: first the Japanese (and Asian) markets, with the very sharply defined opening at 0h and the lunch break in Tokyo at around 4h. Then, the opening of the European market occurs at around 7h, followed by a weak lunch break. The sharp rise at around 13h corresponds to the opening of the American market, followed by the successive closings of the European and American markets. This cycle is repeated over the five working days.

Figure 4 displays the correlation between the historical volatility σ[Δt](t) and the realized volatility σ[Δt'](t + Δt'). The historical volatility is computed with data in the past of t over a time interval of Δt. The realized volatility corresponds to the future volatility and is computed using data in the future of t over a time interval of Δt'. Hence, the correlation between these two quantities is a measure of the information contained in the historical volatility about the realized volatility, and can be used to forecast realized volatility. This computation has been done using the vectorization capabilities on a range of time horizons. The figure shows that fairly large correlations are present, and that realized volatility at a given time horizon Δt' has the largest correlation with historical volatility at a slightly larger
Figure 3: The volatility conditional on the time in the week for winter time (no DST) for USD/CHF. The axis is given in GMT time.

time horizon Δt. Figure 5 shows the correlation between a change in historical volatility and realized volatility. The change of volatility is similar to a derivative of volatility with respect to time. The correlation is essentially a measure of the response of the market to a change of volatility: if the volatility increases (decreases) and market participants mostly do (do not) trade, then the realized volatility will increase (decrease), resulting in a positive correlation. On the other hand, if market participants were not influenced by changes of volatility, a zero correlation would result. The figure shows the segmentation of the market into groups, like intra-day traders reacting to changes at all time scales, or long-term players (e.g., pension funds) influenced only by long-term changes in the market. This correlation makes visible the cascade of information from long to short time horizons. Finally, an example of a real time application using Orla is our Olsen Information System (OIS). A demonstration version is available at com/channels/investor/timing.shtml by clicking on one of the links for previewing the OIS. These computations are done at our company site using small Orla networks in real time, storing the computed values in the tick warehouse. Then, other C++ clients of the tick warehouse broadcast the values to the Java applets of the web browsers.
References

[1] Gilles O. Zumbach and Ulrich A. Müller. Operators on inhomogeneous time series. International Journal of Theoretical and Applied Finance, 4(1), 2001.
Figure 4: The correlation between historical and realized volatility. The axes correspond to the time intervals Δt and Δt' on a logarithmic scale, for time intervals ranging from 1h20 to 57 days. (Axis labels: historical volatility, realized volatility, linear correlation.)
Figure 5: The correlation between the historical volatility derivative and the realized volatility. The axes correspond to the time intervals Δt and Δt' on a logarithmic scale, for time intervals ranging from 4h to 42 days.
FX Key products Exotic Options Menu
FX Key products Exotic Options Menu Welcome to Exotic Options Over the last couple of years options have become an important tool for investors and hedgers in the foreign exchange market. With the growing
Switching Architectures for Cloud Network Designs
Overview Networks today require predictable performance and are much more aware of application flows than traditional networks with static addressing of devices. Enterprise networks in the past were designed
Operating Systems. Lecture 03. February 11, 2013
Operating Systems Lecture 03 February 11, 2013 Goals for Today Interrupts, traps and signals Hardware Protection System Calls Interrupts, Traps, and Signals The occurrence of an event is usually signaled
CHAPTER 4: SOFTWARE PART OF RTOS, THE SCHEDULER
CHAPTER 4: SOFTWARE PART OF RTOS, THE SCHEDULER To provide the transparency of the system the user space is implemented in software as Scheduler. Given the sketch of the architecture, a low overhead scheduler
TAYLOR II MANUFACTURING SIMULATION SOFTWARE
Prnceedings of the 1996 WinteT Simulation ConfeTence ed. J. M. ClIarnes, D. J. Morrice, D. T. Brunner, and J. J. 8lvain TAYLOR II MANUFACTURING SIMULATION SOFTWARE Cliff B. King F&H Simulations, Inc. P.O.
WEB TRADER USER MANUAL
WEB TRADER USER MANUAL Web Trader... 2 Getting Started... 4 Logging In... 5 The Workspace... 6 Main menu... 7 File... 7 Instruments... 8 View... 8 Quotes View... 9 Advanced View...11 Accounts View...11
White Paper. How Streaming Data Analytics Enables Real-Time Decisions
White Paper How Streaming Data Analytics Enables Real-Time Decisions Contents Introduction... 1 What Is Streaming Analytics?... 1 How Does SAS Event Stream Processing Work?... 2 Overview...2 Event Stream
STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT
STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT 1. TIMING ACCURACY The accurate multi-point measurements require accurate synchronization of clocks of the measurement devices. If for example time stamps
Integrating Warehouse and Inventory Management Practices
Integrating Warehouse and Inventory Management Practices One of the benefits of OpenERP's modular application approach is that you can often avoid dealing with complex functionality until your business
Contributions to Gang Scheduling
CHAPTER 7 Contributions to Gang Scheduling In this Chapter, we present two techniques to improve Gang Scheduling policies by adopting the ideas of this Thesis. The first one, Performance- Driven Gang Scheduling,
Compute Cluster Server Lab 3: Debugging the parallel MPI programs in Microsoft Visual Studio 2005
Compute Cluster Server Lab 3: Debugging the parallel MPI programs in Microsoft Visual Studio 2005 Compute Cluster Server Lab 3: Debugging the parallel MPI programs in Microsoft Visual Studio 2005... 1
Memory Database Application in the Processing of Huge Amounts of Data Daqiang Xiao 1, Qi Qian 2, Jianhua Yang 3, Guang Chen 4
5th International Conference on Advanced Materials and Computer Science (ICAMCS 2016) Memory Database Application in the Processing of Huge Amounts of Data Daqiang Xiao 1, Qi Qian 2, Jianhua Yang 3, Guang
Algorithmic Trading: A Quantitative Approach to Developing an Automated Trading System
Algorithmic Trading: A Quantitative Approach to Developing an Automated Trading System Luqman-nul Hakim B M Lukman, Justin Yeo Shui Ming NUS Investment Society Research (Quantitative Finance) Abstract:
From Control Loops to Software
CNRS-VERIMAG Grenoble, France October 2006 Executive Summary Embedded systems realization of control systems by computers Computers are the major medium for realizing controllers There is a gap between
System Administrator Training Guide. Reliance Communications, Inc. 603 Mission Street Santa Cruz, CA 95060 888-527-5225 www.schoolmessenger.
System Administrator Training Guide Reliance Communications, Inc. 603 Mission Street Santa Cruz, CA 95060 888-527-5225 www.schoolmessenger.com Contents Contents... 2 Before You Begin... 4 Overview... 4
find model parameters, to validate models, and to develop inputs for models. c 1994 Raj Jain 7.1
Monitors Monitor: A tool used to observe the activities on a system. Usage: A system programmer may use a monitor to improve software performance. Find frequently used segments of the software. A systems
Performance Evaluation of Linux Bridge
Performance Evaluation of Linux Bridge James T. Yu School of Computer Science, Telecommunications, and Information System (CTI) DePaul University ABSTRACT This paper studies a unique network feature, Ethernet
Monitoring Infrastructure (MIS) Software Architecture Document. Version 1.1
Monitoring Infrastructure (MIS) Software Architecture Document Version 1.1 Revision History Date Version Description Author 28-9-2004 1.0 Created Peter Fennema 8-10-2004 1.1 Processed review comments Peter
The foreign exchange market operates 24 hours a day and as a result it
CHAPTER 5 What Are the Best Times to Trade for Individual Currency Pairs? The foreign exchange market operates 24 hours a day and as a result it is impossible for a trader to track every single market
Chapter 6, The Operating System Machine Level
Chapter 6, The Operating System Machine Level 6.1 Virtual Memory 6.2 Virtual I/O Instructions 6.3 Virtual Instructions For Parallel Processing 6.4 Example Operating Systems 6.5 Summary Virtual Memory General
In: Proceedings of RECPAD 2002-12th Portuguese Conference on Pattern Recognition June 27th- 28th, 2002 Aveiro, Portugal
Paper Title: Generic Framework for Video Analysis Authors: Luís Filipe Tavares INESC Porto [email protected] Luís Teixeira INESC Porto, Universidade Católica Portuguesa [email protected] Luís Corte-Real
Multiple Kernel Learning on the Limit Order Book
JMLR: Workshop and Conference Proceedings 11 (2010) 167 174 Workshop on Applications of Pattern Analysis Multiple Kernel Learning on the Limit Order Book Tristan Fletcher Zakria Hussain John Shawe-Taylor
Keep it Simple Timing
Keep it Simple Timing Support... 1 Introduction... 2 Turn On and Go... 3 Start Clock for Orienteering... 3 Pre Start Clock for Orienteering... 3 Real Time / Finish Clock... 3 Timer Clock... 4 Configuring
Characteristics of Java (Optional) Y. Daniel Liang Supplement for Introduction to Java Programming
Characteristics of Java (Optional) Y. Daniel Liang Supplement for Introduction to Java Programming Java has become enormously popular. Java s rapid rise and wide acceptance can be traced to its design
Ethernet. Customer Provided Equipment Configuring the Ethernet port.
Installing the RDSP-3000A-NIST Master Clock. Ethernet Connect the RJ-45 connector to a TCP/IP network. Equipment The following equipment comes with the clock system: RDSP-3000A-NIST Master Clock Module.
Key Requirements for a Job Scheduling and Workload Automation Solution
Key Requirements for a Job Scheduling and Workload Automation Solution Traditional batch job scheduling isn t enough. Short Guide Overcoming Today s Job Scheduling Challenges While traditional batch job
Tool - 1: Health Center
Tool - 1: Health Center Joseph Amrith Raj http://facebook.com/webspherelibrary 2 Tool - 1: Health Center Table of Contents WebSphere Application Server Troubleshooting... Error! Bookmark not defined. About
How To Use Vista Data Vision
Data Management SOFTWARE FOR -meteorological Data -Hydrological Data -Environmental data ÖRN SMÁRI // GRAFÍSK HÖNNUN // DE Trend lines What is Vista Data Vision VDV is a comprehensive data management system
Software Project Management Matrics. Complied by Heng Sovannarith [email protected]
Software Project Management Matrics Complied by Heng Sovannarith [email protected] Introduction Hardware is declining while software is increasing. Software Crisis: Schedule and cost estimates
Monitoring Software using Sun Spots. Corey Andalora February 19, 2008
Monitoring Software using Sun Spots Corey Andalora February 19, 2008 Abstract Sun has developed small devices named Spots designed to provide developers familiar with the Java programming language a platform
Forensic Analysis of Internet Explorer Activity Files
Forensic Analysis of Internet Explorer Activity Files by Keith J. Jones [email protected] 3/19/03 Table of Contents 1. Introduction 4 2. The Index.dat File Header 6 3. The HASH Table 10 4. The
SignalDraw: GUI Tool For Generating Pulse Sequences
SignalDraw: GUI Tool For Generating Pulse Sequences Konstantin Berlin Department of Computer Science University of Maryland College Park, MD 20742 [email protected] December 9, 2005 Abstract Generating
METER DATA MANAGEMENT FOR THE SMARTER GRID AND FUTURE ELECTRONIC ENERGY MARKETPLACES
METER DATA MANAGEMENT FOR THE SMARTER GRID AND FUTURE ELECTRONIC ENERGY MARKETPLACES Sebnem RUSITSCHKA 1(1), Stephan MERK (1), Dr. Heinrich KIRCHAUER (2), Dr. Monika STURM (2) (1) Siemens AG Germany Corporate
GEDAE TM - A Graphical Programming and Autocode Generation Tool for Signal Processor Applications
GEDAE TM - A Graphical Programming and Autocode Generation Tool for Signal Processor Applications Harris Z. Zebrowitz Lockheed Martin Advanced Technology Laboratories 1 Federal Street Camden, NJ 08102
Forex Trading. Instruction manual
Forex Trading Instruction manual 1. IMPORTANT NOTES...2 1.1 General notes...2 1.2 Inactivity Logout...2 1.3 Exit...2 1.4 Performance Indicator...2 1.5 Cancelling transactions...2 2. SUPPORT-HOTLINE...2
JMulTi/JStatCom - A Data Analysis Toolkit for End-users and Developers
JMulTi/JStatCom - A Data Analysis Toolkit for End-users and Developers Technology White Paper JStatCom Engineering, www.jstatcom.com by Markus Krätzig, June 4, 2007 Abstract JStatCom is a software framework
Traffic Monitoring in a Switched Environment
Traffic Monitoring in a Switched Environment InMon Corp. 1404 Irving St., San Francisco, CA 94122 www.inmon.com 1. SUMMARY This document provides a brief overview of some of the issues involved in monitoring
An Oracle White Paper July 2012. Load Balancing in Oracle Tuxedo ATMI Applications
An Oracle White Paper July 2012 Load Balancing in Oracle Tuxedo ATMI Applications Introduction... 2 Tuxedo Routing... 2 How Requests Are Routed... 2 Goal of Load Balancing... 3 Where Load Balancing Takes
Using Macro News Events in an Automated FX Trading Strategy. www.deltixlab.com
Using Macro News Events in an Automated FX Trading Strategy www.deltixlab.com The Main Thesis The arrival of macroeconomic news from the world s largest economies brings additional volatility to the market.
What Are the Best Times to Trade for Individual Currency Pairs?
What Are the Best Times to Trade for Individual Currency Pairs? By: Kathy Lien The foreign exchange market operates 24 hours a day and as a result it is impossible for a trader to track every single market
The Evolution of Load Testing. Why Gomez 360 o Web Load Testing Is a
Technical White Paper: WEb Load Testing To perform as intended, today s mission-critical applications rely on highly available, stable and trusted software services. Load testing ensures that those criteria
Sensex Realized Volatility Index
Sensex Realized Volatility Index Introduction: Volatility modelling has traditionally relied on complex econometric procedures in order to accommodate the inherent latent character of volatility. Realized
THE MIRAGE OF TRIANGULAR ARBITRAGE IN THE SPOT FOREIGN EXCHANGE MARKET
May, 9 THE MIRAGE OF TRIANGULAR ARBITRAGE IN THE SPOT FOREIGN EXCHANGE MARKET DANIEL J. FENN and SAM D. HOWISON Oxford Centre for Industrial and Applied Mathematics Mathematical Institute, University of
Using In-Memory Computing to Simplify Big Data Analytics
SCALEOUT SOFTWARE Using In-Memory Computing to Simplify Big Data Analytics by Dr. William Bain, ScaleOut Software, Inc. 2012 ScaleOut Software, Inc. 12/27/2012 T he big data revolution is upon us, fed
Risk Management for Fixed Income Portfolios
Risk Management for Fixed Income Portfolios Strategic Risk Management for Credit Suisse Private Banking & Wealth Management Products (SRM PB & WM) August 2014 1 SRM PB & WM Products Risk Management CRO
Eclipse Visualization and Performance Monitoring
Eclipse Visualization and Performance Monitoring Chris Laffra IBM Ottawa Labs http://eclipsefaq.org/chris Chris Laffra Eclipse Visualization and Performance Monitoring Page 1 Roadmap Introduction Introspection
KITES TECHNOLOGY COURSE MODULE (C, C++, DS)
KITES TECHNOLOGY 360 Degree Solution www.kitestechnology.com/academy.php [email protected] [email protected] Contact: - 8961334776 9433759247 9830639522.NET JAVA WEB DESIGN PHP SQL, PL/SQL
Last Class: OS and Computer Architecture. Last Class: OS and Computer Architecture
Last Class: OS and Computer Architecture System bus Network card CPU, memory, I/O devices, network card, system bus Lecture 3, page 1 Last Class: OS and Computer Architecture OS Service Protection Interrupts
ADVANCED SCHOOL OF SYSTEMS AND DATA STUDIES (ASSDAS) PROGRAM: CTech in Computer Science
ADVANCED SCHOOL OF SYSTEMS AND DATA STUDIES (ASSDAS) PROGRAM: CTech in Computer Science Program Schedule CTech Computer Science Credits CS101 Computer Science I 3 MATH100 Foundations of Mathematics and
Job Reference Guide. SLAMD Distributed Load Generation Engine. Version 1.8.2
Job Reference Guide SLAMD Distributed Load Generation Engine Version 1.8.2 June 2004 Contents 1. Introduction...3 2. The Utility Jobs...4 3. The LDAP Search Jobs...11 4. The LDAP Authentication Jobs...22
Technical Note T1. Batch Mode to Sequence Experiments. 1. Introduction
Technical Note T1 Batch Mode to Sequence Experiments 1. Introduction Ivium Potentiostat/Galvanostats are controlled by IviumSoft Control and Data Analysis Software. There are three ways that IviumSoft
Scicos is a Scilab toolbox included in the Scilab package. The Scicos editor can be opened by the scicos command
7 Getting Started 7.1 Construction of a Simple Diagram Scicos contains a graphical editor that can be used to construct block diagram models of dynamical systems. The blocks can come from various palettes
A Simple Model for Intra-day Trading
A Simple Model for Intra-day Trading Anton Golub 1 1 Marie Curie Fellow, Manchester Business School April 15, 2011 Abstract Since currency market is an OTC market, there is no information about orders,
Paper 064-2014. Robert Bonham, Gregory A. Smith, SAS Institute Inc., Cary NC
Paper 064-2014 Log entries, Events, Performance Measures, and SLAs: Understanding and Managing your SAS Deployment by Leveraging the SAS Environment Manager Data Mart ABSTRACT Robert Bonham, Gregory A.
Communication Protocol
Analysis of the NXT Bluetooth Communication Protocol By Sivan Toledo September 2006 The NXT supports Bluetooth communication between a program running on the NXT and a program running on some other Bluetooth
Rotorcraft Health Management System (RHMS)
AIAC-11 Eleventh Australian International Aerospace Congress Rotorcraft Health Management System (RHMS) Robab Safa-Bakhsh 1, Dmitry Cherkassky 2 1 The Boeing Company, Phantom Works Philadelphia Center
