What is an Operating System?
- An operating system (OS) is a collection of software that acts as an intermediary between users and the computer hardware
- One can view an OS as a manager of system resources
- An operating system is a control program
- There is no universally accepted definition of what is part of the OS and what is not

Components of a Computing System
[Figure: users 1 through n run application programs (a compiler, Quake, emacs, Oracle) on top of the operating system, which in turn runs on the computer hardware]

Resources
- Process: an executing program
- Resource: anything that is needed for a process to run, e.g. memory, space on a disk, the CPU
- An OS creates resource abstractions (a small file-descriptor sketch follows at the end of this page)
- An OS manages resource sharing

Abstract Resources
[Figure: layered stack: User Interface / Application, Abstract Resources (API) / Middleware, OS Resources (OS Interface) / OS, Hardware Resources]

System Software
- A common term used to describe the software responsible for running a computer system (sometimes the OS is included)
- Independent of applications, but common to all
- Examples: C library functions, a window system, a database management system
- Provides the resource management function

Goals of an OS
- The primary goal of an operating system is to make the computer system convenient to use
- A secondary goal is to use the computer hardware in an efficient manner
- The OS is pure overhead: it performs no useful function by itself
- Application programs are what have real value to the person who buys the computer
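Aside (not part of the original slides): one concrete way to see "resource abstraction" is the UNIX file-descriptor interface, where the same open/read/close calls hide whether the bytes come from a disk, a pipe, or a terminal. A minimal C sketch follows; the path it opens is just an illustrative assumption.

    /* Minimal sketch: the OS presents the disk as the abstract resource
       "file"; the program never touches tracks, sectors, or the device. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[128];
        int fd = open("/etc/hostname", O_RDONLY);   /* hypothetical example path */
        if (fd < 0) { perror("open"); return 1; }

        ssize_t n = read(fd, buf, sizeof buf - 1);  /* same call for disk, pipe, tty */
        if (n > 0) {
            buf[n] = '\0';
            printf("read %zd bytes: %s", n, buf);
        }
        close(fd);
        return 0;
    }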
Why Study OS?
- Understand the model of operation: easier to see how to use the system, and enables you to write efficient code
- Learn to design an OS
- See applications of CS theory
- Study a large and complex software system

A Religious Note
- There are many different operating systems available today
- Each one provides some service that users want: size, cost, ease of use, interface, availability of software, efficiency, networking
- Don't be an OS snob: know the concepts, know the needs, and learn how to pick the right tool for the task at hand

A Course Note
- This course is not about how to use an OS or how to administer an OS
- This course is about the theory behind an OS, the study of large software systems, what makes an OS tick, and concurrent processes

Early Systems
- Early computers were large, expensive machines run by a single user from a console
- The user/programmer/operator would write a program, load the program into memory, start execution, monitor progress, and debug/collect results

Software Libraries
- Initially programmers were responsible for writing all of the code required to run their programs
- Soon collections of common functions, software libraries, were created
- Rather than rewriting code, the appropriate routines were copied from the library
- The routines that provided input/output (I/O), the device drivers, were especially important

Language Tools
- Soon tools were developed that let programmers code in something other than machine language: assemblers, FORTRAN, COBOL
- These made programming easier, but running programs became much more difficult
Running a Program
- A single run involved many manual steps: load the FORTRAN compiler, run the compiler, load the assembler, run the assembler, load the linker, run the linker, load the program, run the program
- Job setup was a real problem

Simple Batch Systems
- Computers were extremely expensive and owners needed high utilization
- Two-fold solution: professional computer operators were hired (quicker setup), and programmers programmed
- Jobs with similar needs were batched together and run through the computer as a group

Programming in a Batch System
- A typical run of a program: the programmer completes the code and submits the job (including compilation and linking instructions) to the operator; the operator groups jobs into batches and runs a batch when appropriate; when the batch is done, perhaps days later, the output from each job is returned to the programmer
- The delay between job submission and completion is called turnaround time (often measured in days)

More Problems
- When a job stopped, the operator would have to notice it stopped, determine why it stopped, copy the contents of memory for the programmer to analyze if the program terminated abnormally (called a core dump), and start the next job
- In between these steps, the computer would be idle

Automatic Job Sequencing
- Automatic job sequencing used a small program, called a resident monitor, that transferred control from one job to another
- The monitor is always resident in memory
- Control cards were used to tell the monitor what programs to execute (a toy monitor-loop sketch follows at the end of this page)
- This led to the development of control-card interpreters and job control languages (JCL)

The First OS
- Resident monitors were the first, rudimentary, operating systems: the monitor is similar to an OS kernel that must be resident in memory, and control-card interpreters eventually became command processors, or shells
- There were still problems with computer utilization; most of these problems revolved around I/O operations
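Aside (not part of the original slides): a toy sketch of the resident-monitor idea. The $RUN and $END control-card names are invented for illustration; each line read from standard input stands in for one control card, and the monitor sequences from one job to the next.

    /* Toy resident-monitor loop: read control "cards" and transfer
       control from one job to the next automatically. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char card[128];
        while (fgets(card, sizeof card, stdin)) {
            if (strncmp(card, "$RUN ", 5) == 0) {
                printf("monitor: loading and running %s", card + 5);
                /* a real monitor would load and start the named program here */
            } else if (strncmp(card, "$END", 4) == 0) {
                printf("monitor: job finished, sequencing to the next job\n");
            }
        }
        return 0;
    }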
Off-line Processing
- One common solution to the I/O problem was to replace slow devices with faster ones: card images were often transferred to tape (maybe disk) before the program was run; when the tape was full, it was brought to the operator to be processed; output was often copied to tape first and printed later
- The card readers and line printers were operated off-line, rather than by the main computer

The Big Gain
- The obvious advantage is that the computer was no longer bound to operate at the speed of the slower I/O devices
- Another big advantage was the ability to use multiple reader-to-tape and tape-to-printer systems for one computer
- A disadvantage was slightly longer turnaround time

Spooling
- One big problem with tape systems is that it was impossible to read and write simultaneously
- Disk systems changed this: card/print images were placed in files on the disk, which can be accessed randomly by the computer
- This form of processing is called simultaneous peripheral operation on-line (spooling)
- In essence the disk is used as a large buffer, for reading/writing as far ahead as possible

Overlapped CPU and I/O Operations
- The big advantage of spooling was the ability to overlap CPU and I/O operations: the spooler may be reading the input of one job while printing the output of another; during this time, another job (or jobs) may be executed, reading cards and printing output from/to disk
- This also started people thinking about virtual devices and device-independent I/O

Scheduling
- Spooling provides an important data structure: a job pool, a collection of jobs sitting on a disk, waiting to be run
- Now the computer can select which job to run on some basis other than first-come, first-served
- Decisions can be made based on program size, estimated run time, etc. (a toy selection sketch follows at the end of this page)

Multiprogramming
- Most computer programs follow a pattern: CPU, I/O, CPU, I/O, CPU, I/O, ...
- During the I/O steps, even with disks, the CPU sits idle waiting for the operation to finish
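Aside (not part of the original slides): a toy sketch of picking the next job from the job pool by shortest estimated run time instead of first-come, first-served. The job structure and the numbers are invented for illustration.

    /* Toy job-pool selection: choose the job with the smallest
       estimated run time as the next one to bring in and run. */
    #include <stdio.h>

    struct job { int id; int est_minutes; };

    static int pick_shortest(const struct job pool[], int n)
    {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (pool[i].est_minutes < pool[best].est_minutes)
                best = i;
        return best;
    }

    int main(void)
    {
        struct job pool[] = { {101, 45}, {102, 5}, {103, 20} };
        int k = pick_shortest(pool, 3);
        printf("next job: %d (est. %d min)\n", pool[k].id, pool[k].est_minutes);
        return 0;
    }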
Multiprogramming
- Multiprogramming interleaves program execution to increase CPU, and I/O, utilization
[Figure: the CPU/I/O bursts of several jobs interleaved in time, drawn from a job pool of jobs ready and waiting to run]

A Multiprogramming System
[Figure: memory holds the monitor plus jobs 1-4 (space-multiplexing); the CPU is switched among them (time-multiplexing)]

Issues
- Multiprogramming systems present several interesting issues:
- job scheduling: how do you decide which job to bring into memory from the job pool?
- memory management: how do you put multiple programs in memory? how do you prevent one program from interfering with another?
- CPU scheduling: how do you decide which in-memory job to run?
- resource management

Time Sharing Systems
- Multiprogrammed batch systems provide an environment where resources can be utilized efficiently
- The big problems, though, are long turnaround times, a non-interactive system, and difficult post-mortem debugging
- Time sharing, or multitasking, is a logical extension of multiprogramming

Multitasking
- The basic idea is to have the CPU switch between jobs so quickly that the user does not notice (a toy round-robin sketch follows at the end of this page)
- This allows users to interact with a program while it is running, while still providing good utilization of system resources
- Timesharing systems are even more complex than multiprogramming systems
- They are more interested in improving response time and decreasing variance in delay

OS and Hardware
- OS and hardware often develop hand in hand: the first OS was built to improve utilization, and new hardware was developed to improve the OS
- The dinosaur mainframe computers are a thing of the past
- New computing technology, and new uses of this technology, have resulted in changes in the OS
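Aside (not part of the original slides): a toy sketch of time-multiplexing the CPU with fixed round-robin quanta, the basic mechanism behind both multiprogramming and timesharing. The job lengths and the quantum are invented for illustration.

    /* Toy round-robin: each job gets the CPU for at most QUANTUM
       time units per turn until all jobs have finished. */
    #include <stdio.h>

    #define QUANTUM 2

    int main(void)
    {
        int remaining[] = { 5, 3, 8 };   /* CPU time units each job still needs */
        int n = 3, done = 0, t = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] <= 0) continue;
                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                remaining[i] -= slice;
                t += slice;
                printf("t=%2d: job %d ran for %d unit(s)\n", t, i, slice);
                if (remaining[i] == 0) done++;
            }
        }
        return 0;
    }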
Personal-Computer Systems
- As hardware costs decreased, it became possible to dedicate a single computer to a single user
- Early operating systems for these types of machines consisted of nothing more than a simple resident monitor (CP/M)
- State-of-the-art systems include support for multitasking and typically provide graphical interfaces

Parallel Systems
- Machines consisting of many, sometimes thousands, of processors in a single box
- Most OS use symmetric multiprocessing: each processor runs an identical copy of the OS, and these copies communicate as needed
- Another model is asymmetric multiprocessing: each processor is assigned a specific task, and a master processor controls the system

Networks
- Networks have changed the face of computing
- LAN (Local Area Network) evolution: 3 Mbps (1975), 10 Mbps (1980), 100 Mbps (1990)
- Networks provide new ways to do computing: shared files, shared memory(?)

Distributed Systems
- Wave of the future
[Figure: applications running on top of a distributed OS spanning multiple computers connected by a network]

Process Control & Real-Time Systems
- The computer is dedicated to a single purpose
- Classic embedded system
- Must respond to external stimuli in fixed time
- Continuous media are popularizing real-time techniques
- An area of growing interest

Modern Operating Systems
[Figure: a modern OS draws on batch and timesharing systems (memory management, scheduling, protection), PC & workstation systems (file systems, system software, windowing, devices, human-computer interface), network OS (client-server model, protocols), and real-time systems (scheduling)]
Mainstay Systems
- The major systems that are out there include: mainframe OS (VMS, CMS, MVS, ...), UNIX (Solaris, IRIX, HP-UX, Linux, ...), Windows (Win9x, NT, Win2000)
- We will look primarily at UNIX and NT in this course

Unix
- Developed at AT&T Bell Labs in the late 1960s
- Written by a small group of engineers: Ken Thompson & Dennis Ritchie
- User interface built on Honeywell Multics
- A research effort in operating systems
- The idea was to write a "portable" operating system
- The C language was invented to write UNIX
- Not intended to be a commercial product
- The developers of UNIX were its primary users

Unix Popularity
- UNIX was readily available to universities, with source code
- The University of California, Berkeley modified UNIX: virtual memory, a faster file system, TCP/IP networking (telnet, FTP, SMTP)
- It was given away freely
- UNIX was used primarily in universities and research labs

Sun and UNIX
- SUN Microsystems was formed by ex-Stanford and Berkeley grad students
- It evolved BSD UNIX into a reliable product
- Added a network file system (NFS) capability
- Created the workstation market
- Technology expansion in the 1980s fueled the growth of UNIX
- Writing an OS from scratch was very expensive; UNIX was complete, portable, and inexpensive

X Windows
- The Massachusetts Institute of Technology (MIT) developed the X Window System: graphical user interface, client-server design, network oriented
- Although the implementation of the X Window System is very different, the user interface is very similar to Windows

UNIX Standards
- The portability of UNIX was a benefit and a curse: many companies created ports of UNIX, which resulted in many different flavors of UNIX
- Standards were developed to make sure that UNIX would be stable across different platforms: IEEE POSIX, Open Software Foundation, UNIX International, X/Open
Windows
- Windows NT has a long history as well:
- 1981: PC-DOS 1.0 ships with the IBM PC
- 1983: Apple releases the Lisa
- 1983: MS-DOS 2.0; Microsoft announces Windows
- 1984-1985: many multitasking enhancements for DOS
- 1985: Windows 1.0 released
- 1987: IBM and Microsoft announce OS/2 1.0
- 1987: Windows 2.0 ships

Windows
- 1988: IBM and Microsoft release OS/2 1.1; it has a graphical interface but still has problems; IBM & Microsoft split
- 1990: Windows 3.0 ships
- 1992: Windows 3.1 ships; "Chicago" is mentioned
- 1992: Windows for Workgroups 3.1 ships
- 1993: NT is launched
- 1995: Windows 95 ships

Windows
- 1996: NT 4.0 ships
- 1997: NT 5.0 enters beta (later renamed Windows 2000)
- 1998: Windows 98 is released, the last version based on a kernel running on top of DOS
- ????: Windows 2000