Einführung Computergraphik (WS 2014/15)




Einführung Computergraphik (WS 2014/15)
Martin Held
FB Computerwissenschaften, Universität Salzburg, A-5020 Salzburg, Austria
held@cosy.sbg.ac.at
July 28, 2015
Universität Salzburg, Computational Geometry and Applications Lab

Personalia
Instructor: M. Held.
Email: held@cosy.sbg.ac.at.
Base-URL: http://www.cosy.sbg.ac.at/~held.
Office: Universität Salzburg, Computerwissenschaften, Rm. 1.20, Jakob-Haringer Str. 2, 5020 Salzburg-Itzling.
Phone number (office): (0662) 8044-6304.
Phone number (secr.): (0662) 8044-6328.
© M. Held (Univ. Salzburg), Einführung Computergraphik (WS 2014/15)

Formalia
URL of course: Base-URL/teaching/einfuehrung_graphik/cg.html.
Lecture times (VO): Thursday 9:00-10:40.
Lecture times (PS): Thursday 8:00-8:50.
Venue: T03, Computerwissenschaften, Jakob-Haringer Str. 2.

Electronic Slides and Online Material
In addition to these slides, you are encouraged to consult the WWW home-page of this lecture:
http://www.cosy.sbg.ac.at/~held/teaching/einfuehrung_graphik/cg.html.
In particular, this WWW page contains links to online manuals, slides, and code.

A Few Words of Warning
I hope that these slides will serve as a practice-minded introduction to various aspects of computer graphics. I would like to warn you explicitly not to regard these slides as the sole source of information on the topics of my course. It may and will happen that I'll use the lecture for talking about subtle details that need not be covered in these slides! In particular, the slides won't contain all sample calculations, proofs of theorems, demonstrations of algorithms, or solutions to problems posed during my lecture. That is, by making these slides available to you I do not intend to encourage you to attend the lecture on an irregular basis.

Acknowledgments
Several students contributed to the genesis of these slides, by assembling reports on graphics projects, producing electronic transcripts of my own lectures, and by writing LaTeX code and generating computer-based figures:
Richard Bauer, Stephan Czermak, Gerd Dauenhauer, Mohamed Elkattahf, Christian Gasperi, Martin Hargassner, Claudia Horner, Christian Koidl, Florian Krisch, Claudio Landerer, Lothar Mausz, Kathrin Meisl, Oskar Schobesberger, Roland Schorn, Rolf Sint, Alex Stumpfl, Oliver Suter, Florian Treml, Christian Zödl, and Gerhard Zwingenberger;
Matthias Ausweger, Günther Gschwendtner, Herwig Höfle, Balthasar Laireiter, Bernhard Salzlechner, and Gerald Wiesbauer;
and Markus Amersdorfer, Martin Angerer, Matthias Ausweger, Richard Bauer, Fritz Bischof, Ronald Blaschke, Michael Brachtl, Markus Chalupar, Walter Chalupar, Werner Dietl, Johann Edtmayr, Gregor Haberl, Dorly Harringer, Sandor Herramhof, Martin Hinterseer, Hermann Huber, Gyasi Johnson, Wolfgang Klier, August Mayer, Albert Meixner, Christof Meerwald, Michael Neubacher, Michael Noisternig, Christoph Oberauer, Christoph Obermair, Peter Palfrader, Marc Posch, Christopher Rettenbacher, Herwig Rittsteiger, Gerhard Scharfetter, Josef Schmidbauer, Ingrid Schneider, Harald Schweiger, Stefan Sodar, Gerald Stieglbauer, Marc Strapetz, Johanna Temmel, Christopher Vogl, Werner Weiser, Gerald Wiesbauer, Franz Wilhelmstötter.
I would like to express my gratitude for their help with these slides. My apologies go to all those who should be on this list and whom I omitted by mistake. This revision and extension was carried out by myself, and I am responsible for all errors.
Salzburg, August 2014
Martin Held

Legal Fine Print and Disclaimer
To the best of our knowledge, these slides do not violate or infringe upon somebody else's copyrights. If copyrighted material appears in these slides then it was considered to be available in a non-profit manner and as an educational tool for teaching at an academic institution, within the limits of the fair use policy. For copyrighted material we strive to give references to the copyright holders (if known). Of course, any trademarks mentioned in these slides are properties of their respective owners.
Please note that these slides are copyrighted. The copyright holder(s) grant you the right to download and print them for your personal use. Any other use, including non-profit instructional use and re-distribution in electronic or printed form of significant portions, beyond the limits of fair use, requires the explicit permission of the copyright holder(s). All rights reserved.
These slides are made available without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose. In no event shall the copyright holder(s) and/or their respective employers be liable for any special, indirect or consequential damages or any damages whatsoever resulting from loss of use, data or profits, arising out of or in connection with the use of information provided in these slides.

Recommended Textbooks I
- P. Shirley, M. Ashikhmin, and S. Marschner. Fundamentals of Computer Graphics. A K Peters/CRC Press, 3rd edition, July 2009; ISBN 978-1568814698.
- E. Angel and D. Shreiner. Interactive Computer Graphics: A Top-Down Approach With Shader-Based OpenGL. Addison-Wesley, 7th edition, Feb 2014; ISBN 978-0133574845.
- D.D. Hearn, M.P. Baker, and W. Carithers. Computer Graphics with OpenGL. Prentice Hall, 4th edition, 2010; ISBN 978-0136053583.
- F.S. Hill, Jr., and S.M. Kelley. Computer Graphics Using OpenGL. Pearson Education, 3rd edition, 2007; ISBN 978-0131496705.
- S. Guha. Computer Graphics Through OpenGL: From Theory to Experiments. A K Peters/CRC Press, 2nd edition, 2014; ISBN 978-1482258394.

Recommended Textbooks II
- E. Angel. OpenGL: A Primer. Addison-Wesley, 3rd edition, 2007; ISBN 978-0321398116.
- G. Sellers, R.S. Wright, and N. Haemel. OpenGL SuperBible. Addison-Wesley, 6th edition, 2013; ISBN 978-0321902948. http://www.openglsuperbible.com/
- D. Shreiner, G. Sellers, J. Kessenich, and B. Licea-Kane. The OpenGL Programming Guide. Addison-Wesley, 8th edition, 2013; ISBN 978-0321773036. http://www.opengl-redbook.com/
- T. Akenine-Möller, E. Haines, and N. Hoffman. Real-Time Rendering. A K Peters, 3rd edition, 2008; ISBN 978-1-56881-424-7. http://www.realtimerendering.com

Table of Contents
- Introduction
- OpenGL
- Representation and Modeling
- Raster Graphics
- Basic Rendering Techniques

Introduction
- A First Step into Computer Graphics
- Basics

What is Computer Graphics?
The term "computer graphics" was coined by William Fetter in 1960 to describe the work he was pursuing at Boeing:
"... a consciously managed and documented technology directed toward communicating information accurately and descriptively." [William A. Fetter, Computer Graphics (1960)]
Computer graphics is generally regarded as the creation, storage and manipulation of objects for the purpose of generating images of those objects:
"... the use of computers to produce pictorial images. The images produced can be printed documents or animated motion pictures, but the term computer graphics refers particularly to images displayed on a video display screen, or display monitor." [Encyclopedia Britannica]
Why Computer Graphics?
- Humans enjoy visual information.
- Visual information is easy to comprehend.
- Visual information is difficult to generate manually.

Photorealism
What is a realistic image? What does it mean for a picture, whether painted, photographed, or computer-generated, to be realistic? The answer is subject to much scholarly debate!
The term "photorealism" is normally used to refer to a picture that captures many of the effects of light interacting with real physical objects. It is an attempt to synthesize the field of light intensities that would be focused on the film plane of a camera aimed at the objects depicted.
Hall & Greenberg (1983): "Our goal in realistic image synthesis is to generate an image that evokes from the visual perception system a response indistinguishable from that evoked by the actual environment."
Physical properties of objects have to be taken into account!

Photorealism
There exist applications, however, where perfection is not such a mandatory feature. For instance, flight simulators need a fairly believable output, but need not be perfect in every detail. The dominant challenge here is real-time interactive control.
A more realistic picture is not necessarily a more desirable or useful one. E.g., to convey information, a picture that is free of the complications of shadows and reflections may well be more successful than a tour de force of photorealism!
In molecular modeling, the realistic depictions are not of real atoms, but rather of stylized ball-and-stick and volumetric models that permit special effects, such as animated vibrating bonds and color changes representing reactions.
In many applications reality is intentionally altered for esthetic effect or to fulfill a naive viewer's expectation.

Visualization
Visualization is the art of producing images of objects that could not (or could hardly) be seen otherwise, e.g., because they are too small, too abstract, too slow or too fast, or simply invisible for some other reason. Typical examples include:
- weather forecast charts in meteorology,
- hearts, brains and bones of living creatures in medicine,
- temperature distributions on brakes,
- the growth of plants over years,
- geological changes like volcano eruptions or continental movements.

Applications of Computer Graphics
Entertainment: games, commercials, movies (Tron, Toy Story, Jurassic Park, Antz, A Bug's Life, Star Wars, Toy Story 2, Titanic, Gladiator, Troy, ...), virtual reality.
Computer-Aided Design (CAD, CAM): one of the earliest applications.
- Car parts, Boeing 777, submarine design.
- City models, architectural walk-throughs.
- Control of robots and manufacturing cells.
Education and Training:
- Simulated environments, virtual reality, augmented reality.
- Flight simulation, pilot training.
- Maintenance and assembly training.
- Military training (digitized battlefields, mission rehearsal).
Telemedicine:
- 3D models of heart, brain, skeleton, etc.
- Haptic interfaces.
- Minimally invasive surgery.

Applications of Computer Graphics: Augmented Reality
[Image credit: Boeing Frontiers.]

Applications of Computer Graphics
Scientific Visualization and Data Analysis:
- Molecular graphics (protein structures).
- Geographic information systems (maps, topographic maps).
- Turbulence, temperature, stress, etc.
- Weather models.
Business Visualization: data mining, visualization of massive commercial data.
Graphical User Interfaces (GUI).

History of Computer Graphics: 1950s and 1960s
early 1950s: The US military used an interactive CRT graphics system called SAGE.
1959: Computer drawing system DAC-1 by IBM and GM.
1961: Sketchpad developed by Ivan Sutherland at MIT.
1963: Douglas Engelbart used a mouse as an input device.
1965: Jack Bresenham introduced his line-drawing algorithm.
1966: First computer-controlled head-mounted display (HMD) designed by Ivan Sutherland.

History of Computer Graphics: 1970s
1971: Henri Gouraud developed Gouraud shading.
1972: 2D raster display for PC workstations at Xerox.
1973: First SIGGRAPH conference, with roughly 600 attendees.
1974: Ed Catmull introduced texture mapping (and z-buffering).
1974: Bui Tuong Phong developed Phong shading.
1975: Benoit Mandelbrot published the paper "A Theory of Fractal Sets".
1977: Nintendo entered the graphics market.

History of Computer Graphics: 1980s
1980: Vol Libre (by Loren Carpenter) shown at SIGGRAPH.
1980: Ray tracing developed by Turner Whitted.
1982: Tron produced by Disney.
1982: Silicon Graphics founded by Jim Clark; Sun Microsystems, Autodesk, and Adobe Systems founded.
1982: AutoCAD developed by John Walker and Dan Drake.
1984: Radiosity method developed at Cornell University by Ben Battaile, Cindy Goral, Don Greenberg, and Ken Torrance.
1985: Adobe Systems introduced PostScript.
1986: Pixar founded.
1988: Pat Hanrahan implemented and released RenderMan.

History of Computer Graphics: 1990s
1990: Autodesk introduced 3D Studio.
1991: Terminator 2 was released.
1991: The JPEG and MPEG standards were introduced.
1992: SGI specified OpenGL.
1992: Wavelets were used for radiosity.
1994: Industrial Light and Magic won an Oscar for its special-effects work on Jurassic Park.
1994: Sun Microsystems introduced Java.
1995: Pixar released Toy Story.
1997: SIGGRAPH 97 had more than 48,000 attendees; Titanic released.
1998: Antz and A Bug's Life released.

History of Computer Graphics: 2000 and Beyond
2001: Microsoft's Xbox console (based on NVIDIA graphics) makes its debut.
2003: Graphics cards (by NVIDIA, ATI, Matrox, and others) became widely available.
2004: Half-Life 2 released; graphics cards for mobile phones and PDAs.
2004: The OpenGL Shading Language was formally included into OpenGL 2.0.
2005: Star Wars Episode III.
2007: CUDA (Compute Unified Device Architecture) specified and released by NVIDIA.
2008: OpenCL (Open Computing Language) specified by the Khronos Group.
2010: GPUs with native 64-bit floating-point precision and support for massively parallel computing became widely available.
2013: OpenGL 4.4 released.
20??: Real-time radiosity rendering? Photo-realistic consumer graphics? Realistically rendered humans?

Foundations of Computer Graphics
Computer Science: algorithms, data structures, programming, software engineering, architecture, artificial intelligence, 3D modeling.
Mathematics: linear algebra, analytical geometry, complex analysis, numerical analysis, differential geometry, topology, 3D modeling.
Physics: optics, fluid dynamics, energy, kinematics and dynamics.
Psychology: human light and color perception.
Biology: human body, behavioral and cognitive systems, nervous system.
Art: realism, esthetics.

3D Graphics System
A system for 3D graphics consists of three major (software) parts: the modeler, the renderer, and image handling and display.
Image handling is often only a device driver that makes the computed image visible for the user on a screen or on a hard-copy device. It can also be an image-processing system to improve the quality of images or to alter or transform them before displaying.
Widespread simple modelers: CAD systems, 3D editors, object-description languages.

Graphics Modeling
Geometry-based modeling:
- Lines, polygons, polyhedra.
- Free-form curves and surfaces.
- Quadtrees, octrees, bounding volumes.
- Geometry and topology compression.
Physics-based modeling:
- Kinematics and dynamics (contact detection, contact resolution, force calculation, natural gait).
- Fluid dynamics (e.g., for modeling water and waves).
- Gas, smoke, fire.
- Deformable objects (e.g., clothes, cords).
- Haptics (e.g., touch sensors).
Cognitive-based modeling:
- Domain knowledge, learning.
- Interaction with the real world.

3D Graphics System: Rendering System
The output of the modeler is used as input to the rendering system. Rendering is the process of reducing the 3D information of the scene to a 2D representation, the image. A camera definition is necessary to project the 3D scene onto the desired image plane. Conventional rendering is very simple: visibility determination and simple shading are performed.

3D Graphics System: Rendering System
In a more realistic image-synthesis system the rendering consists of several parts:
- Visibility determination,
- Shading,
- Texturing,
- Anti-aliasing.
To get a high-quality image, the input to the renderer must be of high quality, too: no aliasing should have been introduced during modeling, and only correctly modeled objects can be rendered in a correct way.

3D Graphics System: Rendering System
Shading: Shading is the main part of every rendering system. Shadows must be determined, and the intensity and color of light leaving an object, given the incoming light distribution and the surface properties, must be computed.
Texture: Texturing achieves surface details which, in reality, are caused by varying optical properties of the object. Such variations are often stored in texture maps for fast access. Texture mapping is the process of determining the transformation from a texture map onto the surface of an object and then onto the screen.
Anti-aliasing: Anti-aliasing tries to correct all errors that occur by aliasing. Aliasing can be caused during various steps in the image-generation process; it is usually due to the discretization of a continuum.

Key Hardware Components of a Graphics System
1. Processor
2. Memory
3. Output devices (monitor, printer, ...)
4. Input devices (keyboard, mouse, joystick, spaceball, data glove, eye tracker, light pen, digitizing tablet, ...)
5. Graphics Processing Unit (GPU)
6. Frame buffer

Frame Buffer
A common graphics technique is to use a frame buffer or refresh buffer. In its simplest meaning, a frame buffer is simply the video memory that holds the pixels from which the video display (frame) is refreshed. The frame buffer can be manipulated by the rendering algorithm, and its contents can be moved to the screen when desired.
Double buffering is a technique whereby the graphics system displays one finished buffer while the hardware renders the next frame into a second buffer. Without double buffering, the process of drawing is visible as it progresses. Double buffering generally produces flicker-free rendering for simple animations. Note, however, that double buffering may slow down the graphics performance on high-end graphics systems, as buffer swaps (normally) can only happen when both buffers are untouched.
Triple buffering makes use of one front buffer and two back buffers, thus enabling the graphics hardware and the rendering algorithm to progress at their own speeds. Quadruple buffering means the use of double buffering for stereoscopic images.

Graphics Processing Unit
By today's definition, a graphics processing unit (GPU) is a graphics output device that manipulates the frame buffer and provides accelerated 2D or 3D graphics. The frame buffer usually is stored in the memory chips on the GPU.
Modern GPUs provide much more than z-buffer memory! (E.g., a stencil buffer for computing shadows and reflections has become common.) Modern GPUs also offer comparatively cheap hardware for massively parallel computations, at a floating-point precision of 64 bits.

Output Devices
Bit-mapped raster graphics: cathode-ray tube (CRT), flat-panel liquid crystal display (LCD), flat-panel light-emitting diode display (LED display), inkjet printer, laser printer.
Line-drawing displays: vector cathode-ray tube, pen-based plotter.
3D output devices: 3D printer, holographic display.
Due to their flexibility and wide-spread use, in this course we will focus on raster devices.

Device-Independent Graphics Primitives
Since graphics output devices are many and diverse, it is imperative to achieve device independence. Thus, it is generally preferred to work in world coordinates rather than device coordinates. Typical graphics commands will be

    DrawLine(x1, y1, x2, y2);
    DrawCircle(x1, y1, r);
    DrawPolygon(PointArray);
    DrawText(x1, y1, "A Message");

where x1, y1, x2, y2, r are specified in world coordinates. Graphics primitives have attributes, such as style, thickness and color for a line, or font, size and color for a text.

Application Programmer's Interface (API)
Graphics APIs provide the programmer with procedures for handling menus, windows and, of course, graphics. Well-known APIs for 3D graphics: OpenGL, Direct3D, Java3D, WebGL. We will use OpenGL for practical work in this course.

OpenGL
- Introduction to OpenGL
- Crash Course in C
- Sample OpenGL Program
- Event Handling
- Data Types and Primitives
- Transformations

What is OpenGL?
OpenGL stands for Open Graphics Library. It is a high-performance software interface to graphics hardware. OpenGL is the most widely used library for high-end platform-independent computer graphics, and a de-facto industry standard. It runs on different operating systems (including Unix/Linux, Windows, MacOS) without requiring changes to the source code; platform-specific features can be implemented via extensions.
OpenGL is a C library of several hundred distinct functions. OpenGL is not object-oriented. (GLT and GlutMaster do provide a C++ interface to OpenGL and GLUT, though!)
Several (commercial) versions of an OpenGL library have been implemented. Most OpenGL functionality is also provided by Mesa3D, which is free. (However, as of 16-Oct-2014, Mesa 10.3.1 implements only the OpenGL 3.3 API, which dates back to 2010 and does not include tessellation and the full OpenGL Shading Language 4.x.)

What is OpenGL?
OpenGL takes advantage of graphics hardware where it exists; whether or not hardware acceleration is used depends on the availability of suitable drivers. OpenGL does not come with any windowing functionality; it has to rely on additional libraries (such as GLUT or SDL). It ties into standard C/C++; various other language bindings exist, too. In particular, OpenGL can be used from within C, C++, Java, Fortran, and Ada.
OpenGL supports polygon rendering, texture mapping, anti-aliasing, and shader-level operations. OpenGL does not (directly) support ray tracing, radiosity calculations, or volume rendering.

What is OpenGL?
OpenGL was designed by Silicon Graphics Inc. (SGI) in 1991; the initial design dates back to 1982 ("IRIS GL"). For many years, development of OpenGL had been coordinated by an Architectural Review Board (ARB). In 2006, the ARB and the Khronos Board of Directors voted to transfer control of the OpenGL API standard to the non-profit technology consortium Khronos Group. As of August 2014, the following companies were promoter members of the Khronos Group: AMD, Apple, ARM, Epic Games, Imagination Technologies Group, Intel, Nokia, NVIDIA, QUALCOMM, Samsung, Sony, Vivante.
The Khronos Group now controls the adaptation/extension of OpenGL to reflect new hardware and software advances, "... to bring advanced 3D graphics to all hardware platforms and operating systems from supercomputers to jet fighters to cell phones."

OpenGL Libraries
OpenGL consists of three main libraries: the Graphics Library (GL), the Graphics Library Utilities (GLU), and the Graphics Library Utilities Toolkit (GLUT). Function names begin with gl, glu, or glut, depending on which library they belong to.
[Diagram: the OpenGL application program sits on top of GL, GLU, and GLUT; GLX together with Xlib/Xtk connects GL to the windowing system and the frame buffer.]

OpenGL Libraries
OpenGL proper does not provide any windowing functionality! Thus, a link to the underlying windowing system (e.g., GLX for the X window system or WGL for Windows) and an add-on library like GLUT is required!
GLUT code is portable, but it lacks a good toolkit for a specific platform. In particular, GLUT does not offer ready-to-use widgets for constructing GUIs. If standard menus are required then one has to resort to add-on packages that can interact with OpenGL; see, for instance, FOX, FLTK, GLFW, GLUI, and GtkGLExt.
Since GLUT has not been actively maintained in recent years (its most recent release dates back to 1998), switching to freeglut is recommended. (Development of the more recent fork OpenGLUT seems to have stalled.)

Crash Course in C
This introduction to OpenGL is targeted at using OpenGL in conjunction with GLUT within a C (or C++) program in a Unix/Linux environment. Thus, we start with a short introduction to C. Save the following lines in a file named HelloWorld.c:

    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        printf("hello, world!\n");
        return 0;
    }

Compile using the command:

    gcc -o HelloWorld -W -Wall HelloWorld.c

Crash Course in C: Functions
Functions can be called from main() or from any other function. They have to be (or should be) declared beforehand, though.

#include <stdio.h>

void PrintGreeting(void)
{
   printf("Hello, world!\n");
   return;
}

int main(int argc, char *argv[])
{
   PrintGreeting();
   return 0;
}

Crash Course in C: Branching
A case-statement exists in addition to the if-construct.

#include <stdio.h>

int main(int argc, char *argv[])
{
   int i;

   printf("Please enter an integer value:\n");
   scanf("%d", &i);
   if (i == 42) {
      printf("You entered the correct answer!\n");
   }
   else {
      printf("You entered %d!\n", i);
   }
   return 0;
}

Crash Course in C: Loops
C knows for-loops. (Note that the following code requires the math library for linking: use -lm as a command-line parameter for gcc.)

#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
   int i;
   double x;

   for (i = 2; i < 10; ++i) {
      x = sqrt((double) i);
      printf("%d^2 = %d, sqrt(%d) = %f\n", i, i * i, i, x);
   }
   return 0;
}

Crash Course in C: Loops
C knows while-loops. (Note that the following code requires the math library for linking: use -lm as a command-line parameter for gcc.)

#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
   int i;
   double x;

   i = 2;
   while (i < 10) {
      x = sqrt((double) i);
      printf("%d^2 = %d, sqrt(%d) = %f\n", i, i * i, i, x);
      ++i;
   }
   return 0;
}

Crash Course in C: Loops
C knows do-while-loops. (Note that the following code requires the math library for linking: use -lm as a command-line parameter for gcc.)

#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
   int i;
   double x;

   i = 2;
   do {
      x = sqrt((double) i);
      printf("%d^2 = %d, sqrt(%d) = %f\n", i, i * i, i, x);
      ++i;
   } while (i < 10);
   return 0;
}

Crash Course in C: Pointers and Arrays
C relies heavily on pointers, both for output parameters of functions and for constructing arrays and more complex data structures. Pointers are powerful but dangerous!

#include <stdio.h>

void ComputePowers(float *powers, float *sum)
{
   powers[0] = 4.0;
   powers[1] = 9.0;
   powers[2] = 16.0;
   *sum = *powers + *(powers+1) + *(powers+2);
   return;
}

int main(int argc, char *argv[])
{
   float powers[3], sum;

   ComputePowers(powers, &sum);
   printf("The sum of %f and %f and %f is %f\n",
          powers[0], powers[1], powers[2], sum);
   return 0;
}

Compiling an OpenGL Program
The source code for an OpenGL program has to contain the following directives for including OpenGL header files:
If GLUT is not used:
#include <GL/gl.h>
#include <GL/glu.h>
If GLUT is used:
#include <GL/glut.h>
The file glut.h includes gl.h and glu.h. Note: When linking an OpenGL program, the OpenGL libraries have to be available to the linker. This means compiling with the -lglut -lGLU -lGL linker flags, and possibly with -L flags for the X libraries. On our Linux machines, the header files are in /usr/include/GL and the libraries are in /usr/lib. See the sample Imakefile in my WWW graphics resources.

Sample OpenGL Program
Most OpenGL programs have a similar structure and consist of the following main parts:
main(): defines the callback functions, opens one or more windows, and enters the event loop (as its last executable statement);
init(): initializes the state variables;
callbacks: display function, event handling.

Sample OpenGL Program

#include <GL/glut.h>             /* glut.h includes gl.h/glu.h */

int main(int argc, char** argv)
{
   glutCreateWindow(argv[0]);    /* window title is name of program (argv[0]) */
   glutDisplayFunc(display);     /* set user's callback function */
   init();                       /* application-dependent inits */
   glutMainLoop();               /* OpenGL's event-handling loop */
   return 0;
}

Our main() is preceded by declarations of the functions display() and init(); see next slides. This program, sample.c, is available on my WWW graphics resources. Note: It relies heavily on OpenGL's default values. A more refined version of an OpenGL program will want to specify at least some of the many OpenGL states explicitly.

Sample OpenGL Program Dissected
Recall: Every GLUT-based OpenGL program has to include GL/glut.h. The function glutCreateWindow() causes a graphics window to be created, using OpenGL's default set-up. It takes a string as argument which becomes the title of the graphics window. The function glutMainLoop() maps the window and starts the processing of events until the program is terminated. What shall be displayed is specified in the function display(). The function display() is the function invoked by OpenGL to draw a user's graphical content. It is not called explicitly by the user, but it will be called at appropriate times by OpenGL ("callback function"). It is passed to OpenGL as an argument of glutDisplayFunc().

Sample OpenGL Program

void display(void)                 /* callback function */
{
   glClear(GL_COLOR_BUFFER_BIT);   /* clear window pixels */
   glBegin(GL_TRIANGLES);          /* define triangle */
   glVertex2f(0.0, 0.0);
   glVertex2f(1.0, 0.0);
   glVertex2f(0.0, 1.0);
   glEnd();
   glFlush();                      /* flush GL buffers */
   return;
}

void init(void)                    /* application-dep. inits */
{
   return;
}

Sample OpenGL Program Dissected
The call glClear(GL_COLOR_BUFFER_BIT) clears the graphics window to the (default) background color. The pair of commands glBegin(...) and glEnd() is used to display groups of primitive objects. In our example, a triangle is drawn; its vertices are specified by three calls to glVertex2f. The suffix 2f indicates that the coordinates of a vertex are specified as two floating-point arguments (of type GLfloat). By default, the missing z-coordinates are treated as zero. We will learn more on graphics primitives supported by OpenGL later on. The function glFlush() instructs OpenGL to begin drawing with no further delay. Effectively, this causes OpenGL to disable any buffering of the graphics commands that a client-server system might impose.

OpenGL State Machine
OpenGL provides an imperative interface to a state machine. Graphics primitives pass through the state machine in a pipeline. Attributes of a primitive (e.g., its color) correspond to states. Functions are used to change the state of the machine. Note: OpenGL's states are persistent! Default values are provided by OpenGL. An attribute set by a user will remain set unless explicitly changed by the user!

Extended Main Function

int main(int argc, char** argv)
{
   glutInit(&argc, argv);                        /* init OpenGL */
   glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);  /* single buffering, RGB colors */
   glutInitWindowSize(700, 500);                 /* 700x500 graphics window */
   glutInitWindowPosition(50, 10);               /* upper-left corner of window */
   glutCreateWindow("GL Window");                /* window title */
   glutDisplayFunc(display);                     /* set user's callback function */
   init();                                       /* application-dependent inits */
   glutMainLoop();                               /* event-handling loop */
   return 0;
}

See sample1.c on my WWW graphics resources.

Extended Main Function
The function glutInit() initializes the OpenGL library. By convention, we pass the command-line arguments to this function. The function glutInitDisplayMode() initializes the display. Its argument is a bit string. (The arguments shown are the default values, and could be omitted.) The function glutInitWindowSize(), with arguments 700 and 500, sets the width of the graphics window to 700 pixels and its height to 500 pixels. The function glutInitWindowPosition(), with arguments 50 and 10, instructs OpenGL to position the graphics window on the screen such that the upper-left corner of the window has the screen coordinates (50, 10). Be warned that the position is given in screen coordinates, i.e., the y-position of the window is specified relative to the top of the screen!

More OpenGL Initializations

void init(void)                        /* application-dep. inits */
{
   glClearColor(0.0, 0.0, 1.0, 0.0);   /* blue as background color */
   glColor3f(0.0, 1.0, 0.0);           /* green as drawing color */
   /* Set up standard normal projection with clipping, within a cube
      with side-length 2 centered at the origin; this is the default
      view and these statements could be omitted. */
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
   return;
}

More OpenGL Initializations
The call glClearColor(0.0, 0.0, 1.0, 0.0) sets the background color to no red, no green, maximum blue. (The fourth parameter pertains to blending, and can be ignored for the moment.) Each argument is a floating-point value in the range [0, 1], specifying the amount of red, green and blue. The call glColor3f(0.0, 1.0, 0.0) specifies the default color of the graphics primitives to be drawn as no red, maximum green, no blue. Of course, if more primitives were to be drawn then different calls to glColor*() would occur within display().

OpenGL Coordinate System
The units of the coordinates of a glVertex() command depend on the application; those coordinates are called world coordinates. A standard right-handed coordinate system is assumed for the world coordinates. OpenGL supports the definition of a viewing volume: only (those portions of) objects that are inside this 3D region will be drawn ("clipping"). Internally, OpenGL will convert to camera coordinates and later to window coordinates. By default, a camera is placed at the origin, pointing in the negative z-direction of the world coordinate system. The default viewing volume is a box centered at the origin with a side length of 2, and the default viewing plane is the plane z = 0.

OpenGL Viewing Volume
The following piece of code defines an axis-aligned box as viewing volume, together with orthogonal projection. (The projection is irrelevant for our 2D example.)

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(left, right, bottom, top, near, far);

Note: Distance in z is measured from the camera, relative to the camera's coordinate system! Thus, for a camera positioned at the origin, a point with coordinates (x, y, z) is rendered if and only if (in world coordinates)
left ≤ x ≤ right,   bottom ≤ y ≤ top,   −far ≤ z ≤ −near.
Note that distortion may occur if the aspect ratios do not match! For purely 2D data one can also use the following command:
gluOrtho2D(left, right, bottom, top);

Event Handling and GLUT Callback Functions
Like virtually all other graphics APIs, OpenGL/GLUT also handles events (such as the pressing of a mouse button or a keystroke) by means of callback functions. Roughly, an OpenGL program runs in a loop, polling the hardware for new events, and calling callback functions for those events for which callback functions were declared. (All other events are ignored!) The command glutMainLoop() tells OpenGL to start looping. Each callback function has to be registered: glut...Func(foo) tells OpenGL to use the function foo() as callback function for the event "...". It is mandatory for every OpenGL program to declare at least the display callback function, by passing its function name as argument to glutDisplayFunc(). A decent OpenGL program will also offer a way to terminate it gently, e.g., by reacting appropriately if Q(uit) or ESC was pressed by a user. Unfortunately, callback functions easily lead to a myriad of global variables.

Event Handling and GLUT Callback Functions
GLUT recognizes a common subset of the events typically recognized by a particular window system:
Window: resize, expose;
Mouse: press or release one or more buttons;
Motion: move mouse;
Keyboard: press a key; no keyboard release event is handled;
Idle: no event, nothing else to do.
GLUT callback functions:
glutDisplayFunc(myDisplay): window re-display event;
glutIdleFunc(myIdle): nothing else to do;
glutKeyboardFunc(myKeyboard): key press event;
glutMouseFunc(myMouse): mouse button press or release event;
glutMotionFunc(myMotion): mouse movement while button down;
glutPassiveMotionFunc(myPassiveMotion): mouse movement with no button pressed;
glutReshapeFunc(myReshape): window resize event.

GLUT Callback Function Prototypes
Each of the GLUT callback functions has a different prototype, as listed below:

void myDisplay(void);
void myIdle(void);
void myKeyboard(unsigned char ch, int x, int y);
void myMouse(int button, int state, int x, int y);
void myMotion(int x, int y);
void myPassiveMotion(int x, int y);
void myReshape(int w, int h);

GLUT Callback Functions: Idle Function
The idle function is called by OpenGL if no other processing is required. An idle function foo() is registered with OpenGL by using the command glutIdleFunc(foo). Common applications include continuous changes of some OpenGL states, e.g., to achieve an animation. See my WWW graphics resources for wire_cube.c; the code uses an idle callback to spin a wire-frame cube in a random manner. In our example, wire_cube.c, the idle callback function is called idle(), and is registered by putting the command glutIdleFunc(idle) right after glutDisplayFunc(display) in main(). (The actual rotation is carried out by a properly modified display().)

GLUT Callback Functions: Idle Function

void idle(void)
{
   int axis;

   RandomInteger(3, &axis);
   switch (axis) {
   case x_axis: x_rot += 0.5;
                if (x_rot > 360.0) x_rot -= 360.0;
                break;
   case y_axis: y_rot += 0.5;
                if (y_rot > 360.0) y_rot -= 360.0;
                break;
   case z_axis: z_rot += 0.5;
                if (z_rot > 360.0) z_rot -= 360.0;
                break;
   }
   glutPostRedisplay();
   return;
}

GLUT Callback Functions: Idle Function
The idle callback function, idle(), randomly selects a coordinate axis and increments the corresponding rotation angle. (We will learn about OpenGL transformations later on.) Note that idle() does not call the display callback display() directly. Rather, it calls glutPostRedisplay(), which instructs OpenGL to re-display the scene whenever it finds a convenient time to do so.

GLUT Callback Functions: Keyboard Callback Function
If an OpenGL program has a registered keyboard callback function then it is called whenever a key is pressed. The function keyboard() is registered using glutKeyboardFunc(keyboard). It receives three arguments: the key pressed and the current position of the mouse (as x- and y-coordinates). A callback function registered by glutKeyboardFunc() can only handle the ASCII characters plus ESC, BSP and DEL. Other keys, such as the arrow keys and the function keys, can be handled by registering a callback function special() by means of glutSpecialFunc(special). In our example, wire_cube_key.c, we use the keys 'x', 'y' and 'z' in order to restrict the motion of the cube to rotations around a coordinate axis, and resort to random tumbling once 'r' has been pressed.

GLUT Callback Functions: Keyboard Callback Function

void keyboard(unsigned char key_pressed, int mouse_x, int mouse_y)
{
   switch (key_pressed) {
   case 'x': axis = x_axis; random_axis = false; break;
   case 'y': axis = y_axis; random_axis = false; break;
   case 'z': axis = z_axis; random_axis = false; break;
   case 'r': random_axis = true; break;
   case 'q': exit(0);
   default:  break;
   }
   return;
}

GLUT Callback Functions: Mouse Event Callback Function
We use the command glutMouseFunc(mouse_button) to register the callback function mouse_button() to handle mouse events. It receives four arguments:
The first argument identifies the mouse button: GLUT_LEFT_BUTTON, GLUT_MIDDLE_BUTTON, or GLUT_RIGHT_BUTTON.
The second argument specifies whether the button was pressed or released: GLUT_DOWN or GLUT_UP.
The third and the fourth argument specify the x,y-coordinates of the mouse.
The command glutMotionFunc(mouse_drag) can be used to register a callback function mouse_drag() for dragging events in which the user holds a mouse button pressed. Similarly, glutPassiveMotionFunc() registers a callback function that handles motion events while no button is pressed. Within a mouse callback function we can use glutGetModifiers() to find out whether SHIFT, ALT or CTRL was pressed.

GLUT Callback Functions: Mouse Event Callback Function

void mouse_button(int button, int state, int mouse_x, int mouse_y)
{
   if      (button == GLUT_LEFT_BUTTON)   printf("left");
   else if (button == GLUT_MIDDLE_BUTTON) printf("middle");
   else if (button == GLUT_RIGHT_BUTTON)  printf("right");
   else                                   printf("unknown");
   printf(" button ");
   if      (state == GLUT_DOWN) printf("pressed ");
   else if (state == GLUT_UP)   printf("released ");
   else                         printf("of unknown state ");
   printf("at mouse position (%d,%d)\n\n", mouse_x, mouse_y);
   return;
}

Warning: OpenGL pixel coordinates start with (0, 0) at the lower-left corner whereas the window/screen coordinates (as returned by the mouse!) start at the upper-left corner.

GLUT Callback Functions: Window Reshape Callback Function
We use the command glutReshapeFunc(reshape) to register the callback function reshape() to handle a resizing of the graphics window. It receives the width and height of the new window as its arguments. Note: If we update the viewport to be identical to the entire window then we also need to adjust the view such that the scene can be seen without distortion! That is, we need to adjust the aspect ratio of the cross-section of the viewing volume! One can also change the position and shape of a graphics window from within the code: glutPositionWindow() and glutReshapeWindow().

GLUT Callback Functions: Window Reshape Callback Function

void reshape(int width, int height)
{
   float center, diff, min, max;

   glViewport(0, 0, width, height);   /* viewport equals window */
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   if (width > height) {
      center = (left + right) / 2.0;
      diff   = (right - left) * width / (2.0 * height);
      min    = center - diff;
      max    = center + diff;
      glOrtho(min, max, bottom, top, near, far);
   }
   else {
      /* similarly for bottom and top ... */
   }
   glutPostRedisplay();
   return;
}

OpenGL Data Types

OpenGL Data Type   Min. Precision   Description
GLboolean          1 bit            Boolean value
GLbyte             8 bits           signed integer
GLubyte            8 bits           unsigned integer
GLshort            16 bits          signed integer
GLushort           16 bits          unsigned integer
GLsizei            32 bits          non-negative integer size
GLbitfield         32 bits          bitmask value
GLfloat            32 bits          floating-point value
GLclampf           32 bits          floating-point value clamped to [0.0, 1.0]
GLint              32 bits          signed integer
GLuint             32 bits          unsigned integer
GLenum             32 bits          enumeration type
GLdouble           64 bits          floating-point value
GLclampd           64 bits          floating-point value clamped to [0.0, 1.0]

OpenGL Data Types
For each OpenGL data type, an implementation must use at least the minimum number of bits specified. An OpenGL data type may, but need not, match the corresponding C data type on a specific platform. E.g., typedef double GLdouble is common. An implementation may use more bits than the minimum number required to represent a GL type. However, the correct interpretation of values beyond the minimum range is not required. Thus, use the OpenGL types to assure portability!

OpenGL Graphical Primitives

OpenGL Primitive    Min. #(vertices)   Description
GL_POINTS           1                  individual points
GL_LINES            2                  pairs of vertices define independent line segments
GL_LINE_STRIP       2                  connected sequence of line segments
GL_LINE_LOOP        2                  line strip closed back to the first vertex
GL_TRIANGLES        3                  triples of vertices define independent triangles
GL_TRIANGLE_STRIP   3                  strip of triangles sharing edges
GL_TRIANGLE_FAN     3                  triangles sharing the first vertex
GL_QUADS            4                  quadruples of vertices define independent quadrilaterals
GL_QUAD_STRIP       4                  strip of quadrilaterals sharing edges
GL_POLYGON          3                  single convex polygon

OpenGL Strips and Fans
Make sure to pay close attention to how vertices are grouped for the strips and fans.
[Figure: vertex numbering for GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, and GL_QUAD_STRIP]

OpenGL Vertex Function
The glVertex function is used to specify points within a glBegin/glEnd pair.
Syntax: glVertex{n}{t} or glVertex{n}{t}v, where
n stands for the number of dimensions (2, 3, 4);
t stands for the data type (e.g., i, d, f, s);
v indicates that the coordinates are specified as an array.
Sample use:

void glVertex2d(GLdouble x, GLdouble y);
glVertex4iv(vertex);   /* with GLint vertex[4] */

OpenGL Polygons
For the sake of rendering efficiency, OpenGL assumes that all polygons are simple, convex, and flat. The results are unpredictable if one of these conditions is violated; use a tessellator (e.g., the one provided by GLU) to tessellate polygons that violate any of those conditions. OpenGL distinguishes between the front side and the back side of a (planar) face (e.g., polygon, triangle). A side of a face is considered to be its front side (relative to the viewing direction) if its vertices appear in CCW order in the viewing window. The default normal vector of a face points to the exterior of a solid.

OpenGL Attributes of Graphical Primitives
The display of polygons is controlled by glPolygonMode(face, mode), where face is either GL_FRONT_AND_BACK, GL_FRONT, or GL_BACK, and mode is either GL_FILL, GL_LINE, or GL_POINT. Similarly for fill patterns of polygons. Back-face culling is enabled by using the command glEnable(GL_CULL_FACE), and disabled with glDisable(GL_CULL_FACE). The side of faces to be culled is selected by means of glCullFace(face), with the argument face as above.

OpenGL Attributes of Graphical Primitives
OpenGL attributes are manipulated as OpenGL states. They determine the visual appearance of objects. Typical attributes: color (points, lines, polygons), diameter (points) and width (lines), stipple pattern (lines, polygons), polygon mode (polygons). The diameter diam of a point is specified by glPointSize(diam). To draw dashed lines, we use the commands

glEnable(GL_LINE_STIPPLE);
glLineStipple(factor, pattern);

where the argument pattern is a 16-bit number, and factor defines by how much each bit is stretched. The command glDisable(GL_LINE_STIPPLE) disables dashed lines.

GLU Graphical Primitives
The GLU library provides a few quadric primitives. Each quadric has a data structure associated with it that stores its state variables; this data structure has to be allocated by using the function gluNewQuadric(), e.g.,

GLUquadricObj *quad;
quad = gluNewQuadric();
gluCylinder(quad, baseRadius, topRadius, height, slices, stacks);

Stacks and slices affect the polygonal resolution of curved surfaces:
Slices are intervals about the z-axis.
Stacks are intervals along the z-axis.
Note: The solids are always drawn in their canonical position and orientation, centered at the origin, aligned with the z-axis or all coordinate axes (where applicable). Thus, transformations have to be used if more solids are to be visualized.

GLU Graphical Primitives
Similarly for gluSphere(), gluDisk(), and gluPartialDisk():

void gluCylinder(GLUquadric *quad, GLdouble baseRadius, GLdouble topRadius,
                 GLdouble height, GLint slices, GLint stacks);
void gluSphere(GLUquadric *quad, GLdouble radius, GLint slices, GLint stacks);
void gluDisk(GLUquadric *quad, GLdouble innerRadius, GLdouble outerRadius,
             GLint slices, GLint loops);
void gluPartialDisk(GLUquadric *quad, GLdouble innerRadius, GLdouble outerRadius,
                    GLint slices, GLint loops,
                    GLdouble startAngle, GLdouble sweepAngle);

GLUT Graphical Primitives
The GLUT library provides a few often-used solids, in both wire-frame and solid versions. Solids provided include tetrahedron, cube, dodecahedron (12-sided regular polyhedron), icosahedron (20-sided regular polyhedron), octahedron (8-sided regular polyhedron), sphere, cone, torus, and teapot (the canonical Utah teapot). E.g.,

void glutWireSphere(GLdouble radius, GLint slices, GLint stacks);
void glutWireCube(GLdouble size);

OpenGL Transformations: Matrices
OpenGL (internally) uses homogeneous coordinates and 4 × 4 matrices to carry out transformations. OpenGL uses a stack of matrices and a current matrix:
The function glPushMatrix() pushes the current matrix onto the stack.
The function glPopMatrix() pops a matrix from the top of the stack, thereby replacing the current matrix.
Note: Pushing and popping matrices has an effect, and a pop operation followed by a push operation does not cancel out! The stack holds at least 32 matrices, but it is not arbitrarily large!
Matrix transformations affect the current matrix via post-multiplications: if C is the current matrix and T is the matrix of a transformation to be applied, then C is replaced by C · T.

OpenGL Transformations: Matrices
All matrices are initialized and manipulated via function calls:
glLoadIdentity() initializes the current matrix to be the identity matrix.
glRotatef(α, v_x, v_y, v_z) post-multiplies the current matrix with the rotation matrix that models a rotation by angle α about the vector (v_x, v_y, v_z) through the origin, where α, v_x, v_y, v_z are of type GLfloat.
glTranslatef(δ_x, δ_y, δ_z) post-multiplies the current matrix with the translation matrix that models a translation by the vector (δ_x, δ_y, δ_z).
glScalef(λ_x, λ_y, λ_z) post-multiplies the current matrix with the matrix that models a (non-uniform) stretching along the x-, y-, and z-axes.
Warning: Watch the multiplication order! If possible, use push and pop operations in order to revert to a previously computed matrix rather than to compute and apply inverse matrices.
If lighting is enabled and scaling is applied then it is a good idea to enable the automatic normalization of face normals: glEnable(GL_NORMALIZE).

OpenGL Coordinate Transformation Pipeline
OCS → Modeling Transformation → WCS → Viewing Transformation → VCS → Projection Transformation → CCS → Perspective Division → NDCS → Viewport Transformation → DCS
The coordinates of a 3D point undergo several transformations until eventually a pixel on the screen is set. This sequence of transformations is encoded in the OpenGL transformation pipeline: from object coordinate system (OCS) to world coordinate system (WCS), viewing coordinate system (VCS), clipping coordinate system (CCS), normalized device coordinate system (NDCS), and finally to device coordinate system (DCS).

OpenGL Coordinate Transformations: Modeling Transformation
The modeling transformation places an object somewhere in the world. Typically, this re-positioning of an object is carried out by
1. scaling it,
2. rotating it, and
3. translating it:
P_WCS = (T · R · S) · P_OCS.

OpenGL Coordinate Transformations: Viewing Transformation
Suppose that position and orientation of a camera are specified in world coordinates as a frame [p, <a, b, v>], where p is the position and <a, b, v> form a coordinate system such that a points right and b points up in a plane parallel to the image plane, and v is the direction of viewing. The viewing transformation transforms the world coordinate system such that p becomes the origin, b points along the y-axis, and v points along the negative z-axis.

OpenGL Coordinate Transformations: Model View Transformation
Modeling transformation and viewing transformation combined are called the model view transformation. Note: The current transformation matrix is applied to all subsequent calls that draw primitives, until the current matrix is replaced by a new matrix. Typically, one starts with defining the position of the camera.

OpenGL Coordinate Transformations: Projection Transformation
The projection transformation transforms the world such that the viewing volume specified by the camera is mapped into an axis-aligned box as a canonical viewing volume. This is the earliest time that clipping can be implemented (in hardware) in a camera-independent manner.

OpenGL Coordinate Transformations: Perspective Division
The perspective division maps an axis-aligned viewing box to the axis-aligned cube [−1, 1]³. The projection transformation and the perspective division are carried out as one functional unit by OpenGL, in dependence on the type of projection.

OpenGL Coordinate Transformations: Viewport Transformation
The viewport transformation allows one to restrict drawing to a particular area within the graphics window:
glViewport(x, y, width, height);
where (x, y) is the location of the lower-left corner of the viewport, and width and height are its dimensions. Note that x and y are given in OpenGL coordinates, and all arguments of glViewport() are specified in pixels. (The origin of the 2D OpenGL coordinate system is at the lower-left corner of the window.)

Orthographic Projection in OpenGL The viewing volume of an orthographic projection is set up by the following OpenGL commands: glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(left, right, bottom, top, near, far); Note that near and far are specified as seen from the camera! The viewing box extends from (left, bottom, −near) to (right, top, −far). If the user changes the aspect ratio of the viewing window or viewport then the program should also change the projection transformation in order to avoid distorted views. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 102
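As a supplementary sketch (not part of the original slides), the aspect-ratio advice can be implemented by padding the orthographic bounds before calling glOrtho(). The helper name ortho_adjust is hypothetical; only the pure bound computation is shown here, with the OpenGL calls indicated in a comment:

```c
#include <assert.h>
#include <math.h>

/* Widen either the x- or the y-range of a symmetric orthographic viewing
 * volume (half-width half_w, half-height half_h) so that its aspect ratio
 * matches the window aspect ratio win_aspect = width/height. */
static void ortho_adjust(double half_w, double half_h, double win_aspect,
                         double *left, double *right,
                         double *bottom, double *top)
{
    double volume_aspect = half_w / half_h;
    if (win_aspect > volume_aspect)     /* window wider than volume: pad x */
        half_w = half_h * win_aspect;
    else                                /* window taller than volume: pad y */
        half_h = half_w / win_aspect;
    *left = -half_w;  *right = half_w;
    *bottom = -half_h; *top = half_h;
}

/* Typical use inside a GLUT reshape callback (not compiled here):
 *
 *   void reshape(int w, int h) {
 *       double l, r, b, t;
 *       glViewport(0, 0, w, h);
 *       ortho_adjust(1.0, 1.0, (double)w / h, &l, &r, &b, &t);
 *       glMatrixMode(GL_PROJECTION);
 *       glLoadIdentity();
 *       glOrtho(l, r, b, t, near, far);
 *   }
 */
```

Padding (rather than shrinking) guarantees that everything that was visible before the reshape stays visible afterwards.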

Perspective Projection in OpenGL The viewing volume of a perspective projection can be set up by the following OpenGL commands: glMatrixMode(GL_PROJECTION); glLoadIdentity(); glFrustum(left, right, bottom, top, near, far); Note that near and far both are positive values that indicate the distance from the camera! c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 103

Perspective Projection in OpenGL The viewing volume of a perspective projection can also be specified in a more intuitive manner: glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(angle, aspect, near, far); with height = 2 · near · tan(angle/2), and aspect = width/height. Note that near and far both are positive values that indicate the distance from the camera! c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 104

Perspective Projection in OpenGL The angle is given in degrees; it specifies the field-of-view angle in the y-direction. The availability of the aspect ratio in gluPerspective() makes it easier to respond adequately to a reshaping of the graphics window. Decreasing angle without moving the model or changing the camera position corresponds to switching from a wide-angle lens to a telephoto lens, i.e., to zooming in. Increasing angle corresponds to zooming out. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 105

Changing the Viewpoint in OpenGL To change the viewpoint (camera position) or viewing direction the following command is used: gluLookAt(eye_x, eye_y, eye_z, at_x, at_y, at_z, up_x, up_y, up_z); where eye specifies the position of the viewpoint (in world coordinates), at specifies a point towards which the camera is aiming, and up specifies a vector that is pointing up. Note: Re-positioning the camera causes the appropriate reverse transformation to be applied to the model. (The OpenGL-internal camera always stays at the origin!) This transformation can cause parts or all of the model to become invisible if near and far are not also changed appropriately. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 106

Representation and Modeling Primitive Objects Regions in 2D Curved Surfaces in 3D Solids in 3D Miscellaneous Modeling Schemes c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 107

Line Segments A line segment can be specified by its endpoints. A polygonal curve (or polygonal chain) is a sequence of finitely many vertices connected by line segments such that each segment (except for the first) starts at the end of the previous segment. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 109

Polygons A polygon is a closed polygonal curve where every vertex belongs to exactly two segments. (We will always assume that all vertices of a polygon lie in one plane.) no polygon not a simple polygon simple polygon star-shaped polygon convex polygon x-monotone polygon c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 110

Circular Arcs Circular arcs can be represented in several ways. (And none of them is universally good!) Center C, start point S, end point E, orientation (CW or CCW): redundancy problem! Center C, radius r, start angle α, end angle β, orientation: start and end points unknown, potential numerical problems! c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 111

Circular Arcs Start point S, end point E, signed offset a (suggested by Sabin): no redundancy and numerically reliable, but center and radius unknown (and difficult to compute)! Here a > 0 means CCW and a < 0 means CW. A line segment can be treated as a degenerate arc, but a full circle cannot be represented. Wide-spread practical problem if start point and end point coincide: full circle or degenerate circle? c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 112

General Curves In addition to lines, circular arcs, and quadrics, more general types of curves are used in CAD systems. Well-known representatives of so-called free-form curves include Bézier curves, B-splines, NURBS. (Figure: uniform clamped cubic B-spline.) We will not discuss free-form curves in this lecture; see my lecture on geometric modelling. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 113

Half-Space The closed half-space defined by a point P in 3D and a unit vector n is given by H(P, n) := {u ∈ R³ : ⟨n, u⟩ − ⟨n, P⟩ ≥ 0}. If ⟨n, u⟩ − ⟨n, P⟩ = 0 then u lies on the bounding plane ε(P, n) := {u ∈ R³ : ⟨n, u⟩ − ⟨n, P⟩ = 0}; if ⟨n, u⟩ − ⟨n, P⟩ > 0 then u lies in the half-space into which n points; if ⟨n, u⟩ − ⟨n, P⟩ < 0 then u lies in the half-space into which n does not point. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 114

Sample Half-Plane in 2D For the half-plane bounded by the line through the points (4, 5) and (6, 2), with unit normal vector n = (1/√13)·(3, 2), we get h: (1/√13)·(3x + 2y − 22) ≥ 0. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 115

Cylinder An upright circular cylinder is specified by: two points (x_c, y_c, z_min) and (x_c, y_c, z_max) on the axis of rotation (parallel to the z-axis), and radius r. Implicit representation: (x − x_c)² + (y − y_c)² = r² with z_min ≤ z ≤ z_max. Parameterization: (x_c + r cos ϕ, y_c + r sin ϕ, z) with ϕ ∈ [0, 2π[ and z_min ≤ z ≤ z_max. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 116

Cone An upright frustum of a cone is specified by: two points (x_c, y_c, z_min) and (x_c, y_c, z_max) on the axis of rotation (parallel to the z-axis), and radii r_1 and r_2. Parameterization: (x_c + (r_1 + λ(r_2 − r_1)) cos ϕ, y_c + (r_1 + λ(r_2 − r_1)) sin ϕ, z_min + λ(z_max − z_min)), where ϕ ∈ [0, 2π[ and 0 ≤ λ ≤ 1. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 117

Sphere A sphere is specified by: center (x_c, y_c, z_c) and radius r. Implicit representation: (x − x_c)² + (y − y_c)² + (z − z_c)² = r². Parameterization: (x_c + r cos δ cos ϕ, y_c + r cos δ sin ϕ, z_c + r sin δ), with ϕ ∈ [0, 2π[ and δ ∈ [−π/2, π/2]. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 118

Torus A torus is specified by: a center point (x_c, y_c, z_c) on the axis of rotation (parallel to the z-axis), and radii R and r. Implicit representation: (√((x − x_c)² + (y − y_c)²) − R)² + (z − z_c)² = r². Surface blending may generate portions of the surface of a torus. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 119

Parameterization of a Torus Parameterization (for a torus centered at the origin): ((R + r cos δ) cos ϕ, (R + r cos δ) sin ϕ, r sin δ), with ϕ ∈ [0, 2π[ and δ ∈ [0, 2π[. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 120

Parameterization of a Torus m = (R cos ϕ, R sin ϕ, 0), p = m + r cos δ · (cos ϕ, sin ϕ, 0) = (m_x + r cos δ cos ϕ, m_y + r cos δ sin ϕ, 0), q = (p_x, p_y, r sin δ) = (R cos ϕ + r cos δ cos ϕ, R sin ϕ + r cos δ sin ϕ, r sin δ) = ((R + r cos δ) cos ϕ, (R + r cos δ) sin ϕ, r sin δ). c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 121

Quadrics: Ellipsoid General implicit representation of a quadric: a₁₁x² + a₂₂y² + a₃₃z² + a₁₂xy + a₁₃xz + a₂₃yz + a₁x + a₂y + a₃z + b = 0. Implicit representation after coordinate transformation: a₁₁x² + a₂₂y² + a₃₃z² + b = 0. Implicit representation of an ellipsoid: x²/a² + y²/b² + z²/c² = 1. If a = b or a = c or b = c: ellipsoid of revolution. If a = b = c: sphere. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 122

Quadrics: Hyperboloid One-sheet hyperboloid: Implicit representation: x²/a² + y²/b² − z²/c² − 1 = 0. If a = b: hyperboloid of revolution. Two-sheet hyperboloid: Implicit representation: x²/a² + y²/b² − z²/c² + 1 = 0. If a = b: hyperboloid of revolution. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 123

Quadrics: Elliptical Paraboloid Implicit representation: x²/a² + y²/b² − 2z = 0. If a = b: paraboloid of revolution. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 124

Superellipsoids Implicit representation (dependent on the shape parameters e₁, e₂): (|x/a|^(2/e₁) + |y/b|^(2/e₁))^(e₁/e₂) + |z/c|^(2/e₂) = 1. Parameterization (based on the convention s^t := sign(s)·|s|^t for s, t ∈ R): (a cos^(e₂) δ cos^(e₁) ϕ, b cos^(e₂) δ sin^(e₁) ϕ, c sin^(e₂) δ), with δ ∈ [−π/2, π/2] and ϕ ∈ [0, 2π[. If e₁ ≤ 2 and e₂ ≤ 2: convex solid. If e₁ = e₂ = 2: polyhedron. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 125

Sample Superellipsoids a = 1.0 b = 1.0 c = 1.0 e 1 = 0.5 e 2 = 0.5 a = 1.0 b = 1.0 c = 1.5 e 1 = 2.0 e 2 = 1.0 a = 1.0 b = 1.0 c = 1.5 e 1 = 3.0 e 2 = 3.0 a = 1.0 b = 1.0 c = 1.0 e 1 = 3.0 e 2 = 2.0 c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 126

Exact Representation of the Boundary When representing a planar region by its boundary curves the key issue is to be able to extract its interior unambiguously. Warning Different interpretations of interior are in practical use for the regions depicted above! c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 128

Exact Representation of the Boundary The boundary curves of a planar region R should meet the following conditions: all curves are simple and closed, one of them is the outer boundary, all other boundary curves ( islands or holes ) lie strictly in the interior region of the outer boundary, the island curves (and their interior regions) are pairwise disjoint, all curves are oriented such that R lies on the same side of every curve. In mathematical terms, a collection of curves that meets these conditions bounds a multiply-connected region. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 129

Bridge Edges We may find it convenient to transform a multiply-connected region into a simply-connected region by means of zero-width bridges. Note that the resulting curve is not a simple polygon in the strict meaning of our original definition! c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 130

Cell Decomposition in 2D A cell decomposition provides an approximate representation of a region R. A user-defined subset of the plane (the "workspace") is overlaid with a regular grid. Every cell is classified as full, empty, or partially full depending on whether it lies completely in the interior or exterior of R, or whether it intersects the boundary of R. The region is modeled as the union of those cells that are classified as full. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 131

Cell Decomposition in 2D Whether or not the cells that are partially full are added to the approximate representation depends on the application. There is an obvious trade-off between modeling accuracy and memory consumption. The high memory consumption tends to be a serious problem unless a very coarse approximation suffices: for an n × n grid, the number of cells increases by a factor of four if the resolution of the grid is doubled! c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 132

Quadtree Goal: Model the boundary of a region R with sufficient detail, but use larger cells within the interior of R. We subdivide the rectangular workspace recursively into four sub-rectangles by bisecting it in both x and y. Again, a cell is classified as full, empty, or partially full. The recursive subdivision of cells that are partially full continues until a minimum resolution or maximum depth of the quadtree is reached. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 133
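The classify-and-subdivide scheme can be sketched in a few lines (a supplementary example, not from the slides). The region here is a hypothetical disk of radius 0.8 centered at the origin; the recursion sums the area of all full cells, so it approximates the disk area from below, exactly as the "union of full cells" model prescribes:

```c
#include <assert.h>
#include <math.h>

#define RAD 0.8   /* hypothetical region: disk of radius RAD at the origin */

static double clampd(double v, double lo, double hi)
{ return v < lo ? lo : (v > hi ? hi : v); }

/* Classify the square [x, x+s] x [y, y+s] against the disk:
 * 1 = full (inside), 0 = empty (outside), -1 = partially full. */
static int classify(double x, double y, double s)
{
    /* farthest corner coordinates from the origin */
    double fx = fabs(x) > fabs(x + s) ? x : x + s;
    double fy = fabs(y) > fabs(y + s) ? y : y + s;
    /* nearest point of the square to the origin */
    double nx = clampd(0.0, x, x + s), ny = clampd(0.0, y, y + s);
    if (fx * fx + fy * fy <= RAD * RAD) return 1;   /* all corners inside  */
    if (nx * nx + ny * ny >= RAD * RAD) return 0;   /* no point inside     */
    return -1;                                      /* boundary cell       */
}

/* Sum the area of all cells classified as full, subdividing partially
 * full cells until the maximum depth is reached. */
static double quadtree_full_area(double x, double y, double s, int depth)
{
    int c = classify(x, y, s);
    if (c == 1) return s * s;
    if (c == 0 || depth == 0) return 0.0;
    s /= 2.0;
    return quadtree_full_area(x,     y,     s, depth - 1)
         + quadtree_full_area(x + s, y,     s, depth - 1)
         + quadtree_full_area(x,     y + s, s, depth - 1)
         + quadtree_full_area(x + s, y + s, s, depth - 1);
}
```

Starting from the workspace [−1, 1]², the sum converges towards the true disk area πR² ≈ 2.011 as the maximum depth grows, illustrating the accuracy/memory trade-off.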

Constructing a Quadtree (Figure: input region and level 1 of the subdivision, with full and empty cells marked.) c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 134

Constructing a Quadtree (Figure: levels 1+2 and levels 1+2+3 of the subdivision, with full and empty cells marked.) c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 136

Tree Structure of a Quadtree It is natural to store a quadtree as a tree, with interior tree nodes for partially full cells and leaves for empty and full cells. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 137

Boolean Operations on Quadtrees S T Int(S,T) Union(S,T) c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 138

Quadtrees for Curved Data It is natural to extend quadtree representations to regions with curved boundaries. Warning A recursive quadtree decomposition will, in general, never terminate unless a minimum cell size is specified. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 139

Pros and Cons of Quadtrees Pros: Standard advantages of hierarchical modeling, such as a fast test for disjointness. Boolean operations are easy to compute (provided that the quadtrees are aligned). Point-in-region test is straightforward. Cons: The representation is coordinate-dependent and not invariant under transformations! The representation is only approximate, and memory may become an issue. A suitable approximation accuracy may be hard to predict. Neighbor finding is tricky. Graphical zooming in is only supported until the representation accuracy is reached. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 140

Ruled Surface Consider two curves C₁, C₂ : [α, β] → R³. The ruled surface (Dt.: Regelfläche) S : [α, β] × [0, 1] → R³ defined by C₁, C₂ is given by the linear interpolation of C₁ and C₂: S(s, t) := (1 − t)·C₁(s) + t·C₂(s) with s ∈ [α, β], t ∈ [0, 1]. Note that S may be curved even if C₁, C₂ are line segments! c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 142

Sample Ruled Surface Example: C₁(s) := (0, 0, 1) + s·(1, 0, 0), C₂(s) := (0, 1, 1) + s·(1, 0, 0), with s ∈ [0, 1]. We get the ruled surface S : [0, 1] × [0, 1] → R³, S(s, t) = (1 − t)·((0, 0, 1) + s·(1, 0, 0)) + t·((0, 1, 1) + s·(1, 0, 0)) = (0, 0, 1) + s·(1, 0, 0) + t·(0, 1, 0), i.e., the top of the unit cube. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 143
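The example above translates into a few lines of code (a supplementary sketch, not from the slides); the function names are hypothetical:

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } Point3;

/* The two boundary segments of the example: both run in the x-direction,
 * at z = 1, one at y = 0 and one at y = 1. */
static Point3 C1(double s) { Point3 p = {s, 0.0, 1.0}; return p; }
static Point3 C2(double s) { Point3 p = {s, 1.0, 1.0}; return p; }

/* The ruled surface: componentwise linear interpolation of C1 and C2. */
static Point3 ruled(double s, double t)
{
    Point3 a = C1(s), b = C2(s), p;
    p.x = (1.0 - t) * a.x + t * b.x;
    p.y = (1.0 - t) * a.y + t * b.y;
    p.z = (1.0 - t) * a.z + t * b.z;
    return p;
}
```

Evaluating ruled(s, t) reproduces the closed form (s, t, 1) derived on the slide, i.e., every point of the top face of the unit cube.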

Surface of Revolution Consider the 3D curve C(s) := (C₁(s), 0, C₂(s)) parameterized by C₁, C₂ : [α, β] → R. Obviously, C is constrained to the xz-plane of R³. A rotation of C about the z-axis yields the surface of revolution (Dt.: Rotationsfläche) S(s, ϕ) := (C₁(s) cos ϕ, C₁(s) sin ϕ, C₂(s)), where s ∈ [α, β], ϕ ∈ [0, 2π[. Properties: Every point of C which does not lie on the z-axis creates a circle in a plane parallel to the xy-plane; a line segment which is parallel to the xy-plane creates a disk or circular annulus; a line segment which is parallel to the z-axis creates a cylinder; any other line segment creates a cone; a circular arc that is part of C creates a portion of a torus. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 144

Sample Surface of Revolution For s ∈ [−π/2, π/2], let C₁(s) := cos s and C₂(s) := sin s. This yields S(s, ϕ) = (cos s cos ϕ, cos s sin ϕ, sin s) with ϕ ∈ [0, 2π[ and s ∈ [−π/2, π/2] as surface of revolution, i.e., the surface of the unit sphere. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 145

General Surfaces In addition to quadratic surfaces, ruled surfaces, or surfaces of revolution, more general types of surfaces are used in CAD systems. Well-known representatives of so-called free-form surfaces include Bézier surfaces, B-splines and T-splines, NURBS. We will not discuss free-form surfaces in this lecture; see my lecture on geometric modeling. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 146

Wire-frame Model Wire-frame models represent solids by specifying the set of edges of the solid. Outdated nowadays; mentioned for historical reasons only! c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 148

Spatial Decomposition Divide the space into cells, aka voxels. Often, the collection of cells forms a regular grid. Represent all cells lying in the object. Popular representation in volume rendering. High storage requirement. Similar pros and cons as in 2D. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 149

Octree Hierarchical representation. Requires much less space than a standard spatial decomposition. Extension of 2D quadtree: each cube is divided into eight octants. Useful for many operations, e.g., collision detection, ray tracing. Similar pros and cons as quadtrees. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 150

Constructive Solid Geometry (CSG) Simple solids, so-called primitives, are combined using Boolean operations. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 151

CSG Tree Sample primitives include half-space, sphere, cylinder, cone, pyramid, cube, box, ellipsoid. A CSG object is stored as a tree with operators at interior nodes, and the primitives at the leaves. Every interior node stores the position and orientation of its children, and the Boolean operation to be applied to them. Edges of the tree are ordered. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 152

CSG and Boolean Set Operations CSG combines solid objects by using three or sometimes four different Boolean operations: Union: Create a new solid that is the union of two solids; denoted by ∪ or +. Intersection: Create a new solid that is the intersection of two solids; denoted by ∩ or ·. Difference: Create a solid by subtracting one solid from another solid; denoted by \ or −. Complement: Create a new solid by subtracting a solid from the universe. In theory, the minus operation can be replaced by a complement and intersection operation. In practice, the minus operation is often more intuitive as it corresponds to removing a solid volume. CSG models are commonly used to describe man-made shapes. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 153

CSG Representation Caveats Not commutative Boolean operations are not commutative! Not unique A CSG representation is not unique. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 154

Problems of Standard Boolean Operations Boolean operations may create dangling faces or edges, or result in lower-dimensional solids. Possible types of intersection of two solids: Solid Plane Line Point Empty Set c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 155

Regularized Boolean Operations To eliminate those lower-dimensional sets, the Boolean set operations are regularized: 1. Compute the interior of the solids. This yields objects without their boundaries. 2. Apply the standard Boolean set operation. 3. Compute the closure of the resulting object. This will add back the boundary. More formally, let ∪, ∩, \ be the standard Boolean operations. We define their regularized counterparts ∪*, ∩*, \* as follows: A ∪* B := closure(int(A) ∪ int(B)), A ∩* B := closure(int(A) ∩ int(B)), A \* B := closure(int(A) \ int(B)). c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 156
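The effect of regularization is easiest to see in 1D (a supplementary sketch, not from the slides): the standard intersection of the closed intervals [0, 1] and [1, 2] is the single point {1}, a lower-dimensional result, while the regularized intersection of their interiors is empty. The type and function names below are hypothetical:

```c
#include <assert.h>

/* A closed interval [lo, hi]; the empty set is encoded as lo > hi. */
typedef struct { double lo, hi; } Interval;

/* Standard intersection of closed intervals.  [0,1] and [1,2] meet in
 * the single point {1} (lo == hi): a lower-dimensional result. */
static Interval isect(Interval a, Interval b)
{
    Interval r = { a.lo > b.lo ? a.lo : b.lo,
                   a.hi < b.hi ? a.hi : b.hi };
    return r;
}

/* Regularized intersection: intersect the open interiors and take the
 * closure.  A degenerate (point) overlap therefore becomes empty. */
static Interval isect_reg(Interval a, Interval b)
{
    Interval r = isect(a, b);
    if (r.lo >= r.hi) { r.lo = 1.0; r.hi = 0.0; }  /* degenerate -> empty */
    return r;
}
```

For intervals with a genuinely overlapping interior, e.g. [0, 2] and [1, 3], both operations agree and return [1, 2].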

Pros and Cons of CSG A CSG tree mimics the design and construction process. Boolean operations are trivial for CSG objects. The surface of a CSG object is not readily available. Rendering a CSG object is difficult unless (massively parallel) ray tracing is used. CSG trees are not unique: same-object detection and null-object detection are difficult. Support for free-form surfaces requires complicated mathematics. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 157

Boundary Representation Describes a solid in a graph-like structure in terms of its surface boundaries: vertices, edges, faces. Common abbreviation: b-rep. It is imperative to model the full set of topological and numerical properties. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 158

Boundary Representation If curved faces are involved, the supporting surfaces of the faces have to be stored, too. Similarly for the edges of a b-rep model. Most b-rep modelers support only solids whose boundaries are 2-manifolds. So-called Euler operators can be used to guarantee that b-rep modeling produces 2-manifolds. Boolean operations require sophisticated mathematical tools in order to represent the resulting object as a (valid) b-rep model. In practice, dual and hybrid representation schemes are often used in order to be able to benefit from the advantages of the individual schemes. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 159

Polyhedra A polyhedron is bounded by a set of polygonal faces, where each edge is adjacent to an even number of faces. Faces do not intersect except in common edges. In order to guarantee a 2-manifold surface, every edge has to be shared by exactly two faces. All faces are required to be planar. Deficiencies of polyhedral models Be warned that (freely available) polyhedral models that are used purely for rendering purposes tend to be of extremely low quality, and may violate our rules drastically! c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 160

Polyhedra Polyhedron of genus 0: Can be deformed to a ball; no holes. Examples: Cube, tetrahedron, pyramid. A torus is not a genus-0 polyhedron. Euler's formula for genus-0 polyhedra: V − E + F = 2. Examples: V = 8, E = 12, F = 6; V = 5, E = 8, F = 5; V = 6, E = 12, F = 8. The validity of Euler's formula is necessary but not sufficient for a polyhedron to be of genus 0. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 161

Polyhedra of Higher Genus Euler's formula generalizes to polyhedra of higher genus with 2-manifold boundaries: V − E + F − H = 2(C − G), where H = #(holes in 2D faces), G = #(holes passing through the polyhedron), i.e., the genus of the polyhedron, and C = #(connected components). Example: V = 24, E = 36, F = 15, H = 3, C = 1, G = 1, and indeed 24 − 36 + 15 − 3 = 2(1 − 1) = 0. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 162

Plane Equation for Polygonal Face Position yourself so that you are looking at the face from the outside. Choose three vertices of the face in CCW order. Call them P₁, P₂ and P₃. Let P₁ = (x₁, y₁, z₁), P₂ = (x₂, y₂, z₂), and P₃ = (x₃, y₃, z₃). Then the coefficients in the plane equation ax + by + cz − d = 0 are computed by the following four determinants: a = det |1 y₁ z₁; 1 y₂ z₂; 1 y₃ z₃|, b = det |x₁ 1 z₁; x₂ 1 z₂; x₃ 1 z₃|, c = det |x₁ y₁ 1; x₂ y₂ 1; x₃ y₃ 1|, d = det |x₁ y₁ z₁; x₂ y₂ z₂; x₃ y₃ z₃|. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 163
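The four determinants can be coded directly (a supplementary sketch, not from the slides; the function names are hypothetical):

```c
#include <assert.h>
#include <math.h>

/* Determinant of a 3x3 matrix by cofactor expansion along the first row. */
static double det3(double m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

/* Coefficients of the plane a*x + b*y + c*z - d = 0 through P1, P2, P3
 * (given in CCW order as seen from outside), via the four determinants. */
static void plane_from_face(const double p1[3], const double p2[3],
                            const double p3[3],
                            double *a, double *b, double *c, double *d)
{
    double ma[3][3] = {{1, p1[1], p1[2]}, {1, p2[1], p2[2]}, {1, p3[1], p3[2]}};
    double mb[3][3] = {{p1[0], 1, p1[2]}, {p2[0], 1, p2[2]}, {p3[0], 1, p3[2]}};
    double mc[3][3] = {{p1[0], p1[1], 1}, {p2[0], p2[1], 1}, {p3[0], p3[1], 1}};
    double md[3][3] = {{p1[0], p1[1], p1[2]},
                       {p2[0], p2[1], p2[2]},
                       {p3[0], p3[1], p3[2]}};
    *a = det3(ma); *b = det3(mb); *c = det3(mc); *d = det3(md);
}
```

For the triangle P₁ = (1, 0, 0), P₂ = (0, 1, 0), P₃ = (0, 0, 1) this yields a = b = c = d = 1, i.e., the plane x + y + z − 1 = 0, with normal (1, 1, 1) pointing away from the origin as the CCW orientation demands.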

Winged-Edge Representation Common way to represent 2-manifold polyhedra of genus 0. Each edge e stores the two faces f₁, f₂ adjacent to e, the two endpoints v₁, v₂ of e, the two edges incident to v₁ immediately before and after e in clockwise direction, and the two edges incident to v₂ immediately before and after e in clockwise direction. Each vertex v stores a pointer to one of the edges incident to v. Each face f stores a pointer to one of the edges bounding f. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 164

Particle Systems A collection of points (aka "particles") that follow laws of physics is used to model an object. Sample applications: smoke, fire, fog; deformable objects: clothes, elastic objects, rope; waves, turbulent air flow, storm. Independent particles: The position of a particle does not depend on the others, e.g., particles under gravity. A time step for an n-particle simulation requires Θ(n) time. Interactive particles: The position of a particle depends on the others, e.g., stars. Each time step requires Θ(n²) time. In practice the dynamics of a particle often do depend only on its neighbors, e.g., a grid particle p_{i,j} interacting with p_{i−1,j}, p_{i+1,j}, p_{i,j−1}, p_{i,j+1}. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 166

Fractal Models The term fractals is used for self-similar objects. When constructed by an algorithm, we repeat the same construction scheme recursively. E.g., Koch Curve. Sample fractal: Mandelbrot and Julia sets. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 167

Fractal Models Fractals are used to model mountains, rocks, trees, coast-lines, etc. Fractal dimension: We use the formula d = log n / log s to calculate the fractal dimension d, where n is the number of small pieces that go into the larger one, and s is the scale by which the smaller pieces compare to the larger one. A line in the Koch Curve breaks up into four smaller pieces, each one third as long as the original. This yields d = log 4 / log 3 = 1.2619... as the fractal dimension of the Koch Curve. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 168

Fractals: Modeling Peaks Fournier, Fussell and Carpenter first used recursive subdivision to model peaks. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 169

Grammar Models Generalization of fractals. Sample grammar with alphabet {A, B, [, ], (, )}, where A is a vertical segment, B is a horizontal segment, (, ) models a right branch, [, ] models a left branch, and rules A → AA, B → A[B]AA[B], B → A[B]AA(B). Sample grammar model relative to those rules, where only the left branch is used: 1. B, 2. A[B]AA[B], 3. AA[A[B]AA[B]]AAAA[A[B]AA[B]]. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 170
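The derivation above is plain string rewriting and can be coded in a few lines (a supplementary sketch, not from the slides; the function name is hypothetical and the output buffer is assumed large enough):

```c
#include <assert.h>
#include <string.h>

/* One rewriting step with the left-branch rules A -> AA and
 * B -> A[B]AA[B]; brackets are copied unchanged.  The caller must
 * provide an output buffer large enough for the result. */
static void rewrite(const char *in, char *out)
{
    *out = '\0';
    for (; *in != '\0'; ++in) {
        switch (*in) {
        case 'A': strcat(out, "AA");        break;
        case 'B': strcat(out, "A[B]AA[B]"); break;
        default: { char br[2] = { *in, '\0' }; strcat(out, br); }
        }
    }
}
```

Starting from the axiom "B", one step yields "A[B]AA[B]" and a second step yields "AA[A[B]AA[B]]AAAA[A[B]AA[B]]", exactly the derivation listed on the slide.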

Grammar Models Another sample grammar model: Both left and right branches are used, where B models a diagonal segment. 1. B, 2. A[B]AA(B), 3. AA[A[B]AA(B)]AAAA(A[B]AA(B)). c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 171

Raster Graphics Light and Color Raster Devices Color Models Scan Conversion c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 172

The Physical Nature of Light Caveat The following slides present a simplified view of the physical nature of light. Consult a physics textbook for an in-depth coverage of these topics! Light is electromagnetic energy that is visible to humans. According to physicists, light exhibits a wave-particle duality. The wave model makes an analogy comparing light to water waves. The particle model: light is made up of many little particles. Neither model is really complete or correct. Under some circumstances light behaves like a wave. Under other circumstances it behaves like a particle. The particle model alone can go a long way towards understanding and explaining light, though. The basic particle of light is called a photon, moving in a straight line and vibrating. This vibration is a kind of mathematical abstraction. It is useful since much of the mathematics that describes vibrations seems to work in describing the behavior of light. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 174

The Physical Nature of Light With every photon we can associate a particular frequency f of vibration. The frequency is measured in Hertz (Hz). Visible frequencies are within 4.3·10¹⁴ to 7.5·10¹⁴ Hz. An alternate way to characterize the vibration of a photon is to consider its wavelength λ. The wavelength is measured in meters, or nanometers (with 1 nm = 10⁻⁹ m). Visible wavelengths lie in the 400-740 nm range, or, at best, within 380-780 nm. Wavelength and frequency are closely related: λ·f = c, where c = 299 792 458 m/s is the speed of light traveling in vacuum. The energy, E, of a photon is directly related to its frequency (Planck-Einstein relation): E = h·f, where h ≈ 6.63·10⁻³⁴ Js is Planck's constant (Dt.: Plancksches Wirkungsquantum); energy is measured in Joule (J). c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 175
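The two relations λ·f = c and E = h·f can be checked numerically (a supplementary sketch, not from the slides; the function names are hypothetical):

```c
#include <assert.h>
#include <math.h>

#define SPEED_OF_LIGHT 299792458.0      /* m/s */
#define PLANCK         6.62607015e-34   /* J s */

/* Wavelength (in nanometers) of a photon with frequency freq_hz,
 * via lambda * f = c. */
static double wavelength_nm(double freq_hz)
{
    return SPEED_OF_LIGHT / freq_hz * 1e9;
}

/* Photon energy (in Joule) via the Planck-Einstein relation E = h * f. */
static double photon_energy(double freq_hz)
{
    return PLANCK * freq_hz;
}
```

Plugging in the boundary frequencies quoted above reproduces the wavelength range: 7.5·10¹⁴ Hz maps to roughly 400 nm (violet end) and 4.3·10¹⁴ Hz to roughly 697 nm (red end).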

The Physical Nature of Light [Image credit: SIGGRAPH Educator s Slide Sets.] c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 176

Color and Spectra Have you ever seen a white band in a rainbow? Likely not. White is not a pure spectral color: No single photon can give us the impression of white light. A prism can be used to show that white light is really a mixture of different spectral colors (red, orange, yellow, green, blue, indigo, violet). Note that this implies that the reflection and refraction of light depend on the wavelength! White light: photons of many different spectral colors strike the same region of our eye nearly simultaneously. How can we characterize different amounts of photons at different wavelengths? c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 177

Color and Spectra We can set up a measuring instrument, count the average number of photons, at each visible wavelength, over some period of time, and then plot the results. Such an intensity versus wavelength plot is often called a frequency spectrum plot, which is often abbreviated simply as spectrum. Hence, one way to handle color is to attach a spectrum to each light ray, describing the light traveling along that ray. Except for a few cases, such as fluorescent light that has spikes, the spectrum regarded as a function of wavelength tends to be smooth. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 178

Spectrum of CIE Standard Illuminant D6500 (Figure: relative power of CIE D6500 over the range 350-750 nm.) CIE: Commission Internationale de l'Eclairage. D6500 has (rounded) CIE 1931 coordinates x = 0.313 and y = 0.329, and a color temperature of 6504 K. c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 179

Terminology for Color Specification Achromatic (Dt.: achromatisch, unbunt): all photons have (virtually) the same wavelength! Aka monochrome light. Luminance (Dt.: Leuchtdichte): Physical intensity; photometric measure of the luminous intensity per unit area of light travelling in a specified direction. Brightness (Dt.: Helligkeit): Perceived intensity of reflected or self-luminous light. Hue (Dt.: Farbton, Buntton): Distinguishes among spectral colors. E.g., red, green, blue, purple. Saturation (Dt.: Sättigung): Distance from equal-intensity gray. E.g., royal blue versus sky blue. Note: These terms are often used without a great deal of precision, and many more seemingly similar terms are known in color theory and photometry. Note: An artist s view of colors may be quite different from the terms defined above... c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 180

Color and the Eye: Rods and Cones Human vision relies on light sensitive cells ("sensors") in the retina of the eye: rods and cones. Rods (Dt.: Stäbchen) are cells which can work at low intensity, but cannot handle color. Cones (Dt.: Zäpfchen) can handle color, but require brighter light to function. Cones are concentrated near the fovea (Dt.: Sehgrube) of the retina (Dt.: Netzhaut). Many more rods (≈120M) than cones (≈6M). c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 181

Color and the Eye: Trichromatic Theory First proposed by Thomas Young (1801), refined by Hermann von Helmholtz (1861). The standard assumption is that the retina has three different types of cones, with peak sensitivity to yellow (the peak response is around 580 nm; most commonly, but not correctly, also referenced as red), green (the peak response is around 545 nm), and blue (the peak response is around 440 nm). Every single wavelength triggers all three kinds of cones, but by different amounts. The trichromatic theory explains color blindness: Protanope (red blindness), Deuteranope (green blindness), Tritanope (blue blindness, very rare). c M. Held (Univ. Salzburg) Einführung Computergraphik (WS 2014/15) 182

Color and the Eye

[Figure. Left: spectral-response functions of each of the three types of cones (B, G, R) on the human retina, describing the fraction of light absorbed by each cone with respect to wavelength. Right: luminous-efficiency function for the human eye, describing the relative sensitivity of the eye with respect to wavelength.]

Color and the Eye

We have a peak sensitivity to yellow-green light of wavelengths around 550 nm:
- About two thirds of the cones are sensitive to yellow.
- Almost one third are sensitive to green.
- Only a few percent have a sensitivity to blue.
Thus, the eye's response to blue light is much weaker than its response to yellow or green.

Human Color Perception

The human eye can distinguish about 350 000 different colors in color space:
- about 128 different hues,
- for each hue around 20-30 different saturations,
- depending on the hue, 60-100 different brightness levels.
When colors differ only in hue, the difference in wavelength between just noticeably different colors varies from more than 10 nm at the extremes of the spectrum to less than 2 nm around 480 nm (blue) and 580 nm (yellow).

[Figure: just-noticeable wavelength difference (in nm) plotted over wavelength.]

Optical Illusions

The human visual system is very susceptible to optical illusions!
- Humans are not particularly good at judging absolute quantities, such as angles or areas. E.g., we tend to overestimate small angles and underestimate large angles.
- Subjective contours: our visual system fills in the missing portions of the edges in order to allow us to see triangles ("Kanizsa triangles").
- Müller-Lyer illusion: the horizontal line segments are of the same length!

Optical Illusions

Colors and shapes can fool humans easily.
- Ebbinghaus illusion: the two inner circles are of the same size!
- Simultaneous contrast: the two inner squares are colored equally!

Optical Illusions

Checker shadow illusion (Adelson 1995): The two cells labelled A and B are of exactly the same color. (E.g., a color picker returns the hex value 787878, or 47% each of red, green and blue.) [Image credit: Wikipedia]

Light and Color: Recommendations

- As blue gives only poor contrast, it is not appropriate for text, fine lines, or small objects.
- Neighboring colors should vary in more than their value of blue.
- Red or green should be avoided on the boundary of large displays.
- Associated objects should be shown on the same background.
- Similar colors should signal similar meaning: consistency is vital!
- Keep in mind that the meaning of a color may depend on the cultural or national background of the user.
- The elderly need more brightness to distinguish between colors.
- For color-deficient people, different colors have to vary in more than a single primary color.

Light and Color: Recommendations

- Too many colors with different meanings demand too much from human perception. The magic number of objects recognized at a single glance: 5 ± 2. KISS: keep the color scheme simple!
- The magic number for short-term memory of colors is 7 ± 2.
- Colors should be used in the order of their spectral position (red, orange, yellow, green, blue, indigo, violet).
- Brightness and saturation are suitable for catching a user's attention.
- Warm colors (i.e., colors with a larger wavelength) and cold colors should be used for visualization of the desired action: cold colors are ideal for status information, while warm colors are suitable for the visualization of an action, for a request to the user, etc.
Unfortunately... it is more difficult to use color effectively than to use it ineffectively.

Raster Devices

The most common graphics output device is the raster display. An image is generated by a 2D array of small dots or squares: pixels (shorthand for "picture elements"). Every pixel can be set individually; a typical (API) command might be SetPixel(x, y, color), where x and y are pixel coordinates. Depending on the number of different color and intensity values of every pixel we distinguish among the following displays:
- Monochrome display: each pixel can either be on or off; only one color can be displayed. Typical device: laser printer.
- Grey-scale display: each pixel can have one of n possible brightness values ("intensities").
- Color display: each pixel can have one of 2^k possible colors (if k bits per pixel are used). Each pixel is composed of a cluster of single-color sub-pixels that fool the eye. Typical device: color monitor.

Different Device Coordinate Systems

Unfortunately not all systems adopt the same pixel addressing conventions: some systems have the origin at the upper-left corner, some have it at the lower-left corner.

Warning: X11 has its coordinate origin in the upper-left corner, while OpenGL coordinates have their origin in the lower-left corner!

Primary Colors

It is natural to attempt to model colors as a mixture of a small number of primary colors. There are two basic ways of mixing color: one is additive, by combining emitted light of different colors, while the other is subtractive, by preventing certain portions of white light from being reflected.
- Additive representation: starting from black, create colors by adding different amounts of the primaries. E.g., adding red and blue generates magenta.
- Subtractive representation: starting from white, create colors by subtracting portions of white. E.g., magenta dye blocks green from being reflected.

Perceptual Color Matching: CIE-XYZ

Long before the advent of computers, the need for a succinct specification of primaries became apparent. In 1931, the CIE (Commission Internationale de l'Éclairage) defined a "standard observer":
- Roughly, a standard observer is a small group of 15-20 individuals. It is supposed to be representative of normal human color vision.
- The observer viewed a split screen with 100% reflectance. On one half, a test lamp cast a pure spectral color on the screen. On the other half, three lamps emitting varying amounts of red, green, and blue light attempted to match the spectral light of the test lamp.
- The observer determined when the two halves of the split screen were identical, thus defining the tristimulus values for each distinct spectral color.
It was realized that a linear combination (with non-negative coefficients) of the red, green and blue primary lamps could not reproduce all spectral light. Since negative coefficients were considered inadequate, the CIE defined three (artificial) additive primaries and a corresponding color model, CIE-XYZ, in 1931. Basically, the CIE translated the tristimulus values into a different set of all-positive tristimulus values, called XYZ. In 1960, the CIE-XYZ model was modified, and again revised in 1976 to become CIE-1976-L*a*b and CIE-1976-L*u*v, in an attempt to linearize the perceptibility of color differences. (The basic principles remained the same, though.)

CIE Primaries

The CIE primaries were defined by specifying their color spectra.

[Figure: tristimulus values of the X, Y, Z color-matching functions plotted over wavelength λ (400-760 nm).]

CIE Chromaticity Diagram

Every visible color (and some invisible ones, too) can be expressed as a combination of the CIE primaries: α·X + β·Y + γ·Z. This defines a 3D linear color space with respect to X, Y and Z. It is common to project this space onto the plane X + Y + Z = 1. The coordinates of this projected 2D plane are usually called x and y, where

  x = X / (X + Y + Z),  y = Y / (X + Y + Z),  z = 1 - x - y = Z / (X + Y + Z).

The resulting diagram is known as the chromaticity diagram. Note that this diagram captures only color but not luminance of the light source.
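The projection onto the plane X + Y + Z = 1 can be sketched in a few lines of Python (the function name `chromaticity` is mine; the tristimulus values in the usage example are approximate white-point values, used only for illustration):

```python
def chromaticity(X, Y, Z):
    """Project CIE-XYZ tristimulus values onto the plane X + Y + Z = 1.

    Returns the chromaticity coordinates (x, y); z = 1 - x - y is implied.
    """
    s = X + Y + Z
    if s == 0:
        raise ValueError("tristimulus values must not all be zero")
    return X / s, Y / s

# Approximate D65-like white point: lands near (0.313, 0.329),
# the position quoted for CIE D6500 on a later slide.
x, y = chromaticity(0.9505, 1.0, 1.089)
```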

CIE Chromaticity Diagram

[Figure: the horseshoe-shaped chromaticity diagram in the (x, y)-plane, with the spectral colors from violet (380 nm) via blue, cyan, green, yellow, orange to red (780 nm) along its boundary and white near the middle.]

Properties of the CIE Model

- Spectral colors are on the boundary of the horseshoe.
- The line joining violet and red, the "purple line", is not part of the spectrum.
- White is near the middle. (CIE D6500 is at position (0.313, 0.329).)
- Any color that results from additive mixing of two colors will lie on the line segment joining these two colors.
- Any color that results from additive mixing of three colors will lie in the triangle connecting these three colors.

RGB Color Model

The RGB color space is given by the unit cube, where the primaries red (R), green (G), and blue (B) correspond to the coordinate axes. In this system, (0, 0, 0) corresponds to black and (1, 1, 1) is white.

[Figure: RGB cube with corners Black = (0,0,0), Red = (1,0,0), Green = (0,1,0), Blue = (0,0,1), Yellow = (1,1,0), Magenta = (1,0,1), Cyan = (0,1,1), White = (1,1,1).]

RGB Color Model

The RGB model is the most widely used color model for specifying the color of a pixel on a monitor. Its practical importance derives from the fact that triads of three phosphor dots or LCD/LED cells with colors red, green, and blue are used to produce a color in an additive way on a standard monitor.
- Although the arithmetic interpolation between two RGB triples is geometrically linear, such an interpolation need not be linear perceptually: an incremental change of an RGB triple may produce no perceivable difference in one part of the RGB cube, while it may create visually different colors in some other part of the cube.
- Always bear in mind that the class of all colors that can be displayed on a monitor is a subset of the colors perceivable by humans.
- Recall that an RGB image may look different on different monitors.

CMYK for Color Printing

When a white surface is coated with cyan ink, no red light is reflected: cyan subtracts red from the reflected white light.
- CMY color model: the inks used in color printing are cyan (light blue), magenta (purple), and yellow.
- To maintain black color purity, and to speed up the drying process, a separate black ink is used rather than relying on cyan, magenta, and yellow to generate black: CMYK.

  Dye         | Color Absorbed | Colors Reflected
  ------------|----------------|-----------------
  Cyan (C)    | Red            | Blue and Green
  Magenta (M) | Green          | Blue and Red
  Yellow (Y)  | Blue           | Green and Red
  Black (K)   | All            | None

Like the RGB model, the CMY model can be regarded as a unit cube, where (0, 0, 0) corresponds to white and (1, 1, 1) is black.

RGB-to-CMYK Conversion

Given intensity values R, G, B, where each value is between 0 and 1, we can convert to CMY using the following masking equations:

  C = 1 - R,  M = 1 - G,  Y = 1 - B.

This is approximate: it assumes that the printed cyan is equal to white minus the red of the monitor, and this is rarely the case. Adding black (K) as an additional color further complicates the matter.
- Typically, a color printer cannot print all colors a computer monitor can display, and a computer monitor cannot display all colors a color printer can! E.g., pure green or pure blue is outside of the gamut of printers.
- Consequently, the same image displayed on a computer monitor may not match that printed in a publication. Color shifts may occur when the RGB-to-CMYK conversion takes place. Nevertheless, this "four-color process" or "full-color printing" generates the vast majority of magazines and marketing publications.
- High-fidelity conversions from RGB to CMYK currently require careful tweaking to compress and stretch the RGB gamut of a particular image so that it fits into the available CMYK gamut. This is an area of active research!
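A minimal sketch of the masking equations, extended by a common gray-component replacement for K (the function name and the K-extraction rule are illustrative choices, not taken from the slides; a faithful RGB-to-CMYK conversion would additionally involve the gamut mapping discussed above):

```python
def rgb_to_cmyk(r, g, b):
    """Convert RGB in [0,1] to CMYK via the masking equations C = 1-R,
    M = 1-G, Y = 1-B, pulling out the gray component as black (K).
    This only roughly approximates real ink behaviour."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:                      # pure black
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k
```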

Color Gamut

The color gamuts of film, monitors and color printers form (fairly small) subsets of the chromaticity diagram: gamut mapping may be required! (Note that hardware vendors sometimes prefer to claim larger/different gamuts!)

[Figure: chromaticity diagram with the gamuts of Adobe-RGB (1998), sRGB, and CMYK (DIN 16539) overlaid.]

Other Color Models

The CIE color diagram is a scientific formalism, but it does not provide a natural user interface for specifying colors. RGB and CMY(K) are great from a technical point of view, but both are equally bad from an artist's perspective. Several other color models have been developed:
- HSV: Hue, Saturation, Value.
- HLS: Hue, Lightness, Saturation.

Hue, Saturation, Value

The HSV model dates back to a color notation proposed by Munsell in 1905:
- Hue: that quality by which we distinguish one color from another, as red from yellow. It is given by the dominant wavelength of the light in that color.
- Saturation: the degree of departure of a color sensation from that of white or gray. It models the purity of the color.
- Value: that quality by which we distinguish a light color from a dark one. It models the brightness (i.e., amount of energy) of the light.

[Figure: HSV hexcone with value V along the axis above black, hue H as the angle (red at 0°, green at 120°, blue at 240°), and saturation S as the distance from the axis.]

HSV Pyramid

- Hue is measured from 0 to 360 degrees counter-clockwise, with red at 0.
- Saturation is the distance away from the center line; decreasing S corresponds to adding white.
- Value is the vertical distance above black; decreasing V corresponds to adding black.
- The spectral colors are given by V = S = 1 and arbitrary H.

HSV-to-RGB Conversion

A hexagonal cross-section of the HSV pyramid can be regarded as a sub-cube of the RGB cube projected onto a plane that is normal to its main diagonal. This establishes a one-to-one mapping between RGB and HSV. Thus, the arithmetic interpolation between two HSV triples is neither geometrically linear nor perceptually linear.
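The hexcone mapping from HSV to RGB can be sketched with the standard sector-wise formulation (a common textbook construction, not taken verbatim from the slides; the function name is mine, h is in degrees, s and v in [0, 1]):

```python
def hsv_to_rgb(h, s, v):
    """Convert an HSV triple (h in [0,360), s and v in [0,1]) to RGB."""
    if s == 0.0:
        return v, v, v              # achromatic: a shade of gray
    h = (h % 360.0) / 60.0          # map the hue angle to sector 0..5
    i = int(h)                      # sector index
    f = h - i                       # fractional position within the sector
    p = v * (1.0 - s)
    q = v * (1.0 - s * f)
    t = v * (1.0 - s * (1.0 - f))
    return [(v, t, p), (q, v, p), (p, v, t),
            (p, q, v), (t, p, v), (v, p, q)][i]
```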

Hue, Lightness, Saturation

Developed at Tektronix, the HLS model is very similar to HSV. It accounts for the fact that as light gets too bright or too dark, the range of perceivable colors narrows to only white or only black.

[Figure: HLS double hexcone with lightness L along the axis above black, hue H as the angle (red at 0°, green at 120°, blue at 240°), and saturation S as the distance from the axis.]

Drawing a Line

We will always assume that the end-points of a straight-line segment L are given in integer coordinates relative to the resolution of the output device. Which pixels should be turned on? Pixels should be as close to L as possible!

Specifying the Desired Output

The general goal is
- to minimize the stair-case effect ("jaggies") due to the replacement of a continuum of width zero by a discrete set of pixels of non-zero area,
- to have a uniform line density,
- to make sure that a line drawn is independent of whether we draw it from P1 to P2 or from P2 to P1,
- to cast the conversion into an algorithm that is fast.
The following two specifications are widely used:
- The set of pixels whose centers are covered by a parallelogram of width 1 centered on the line.
- The shortest sequence of eight-connected pixels that most closely approximates the line.
The simple fact that a continuum is replaced by a discrete set of pixels is the source of many serious problems in graphics that are known as aliasing problems!

Specifying the Desired Output

- Parallelogram coverage: select pixels within a strip of width 1.
- Eight-connectedness: used by Bresenham's algorithm.

Brute-Force Scan Conversion

Consider a line segment between (x1, y1) and (x2, y2), where x1, y1, x2, y2 ∈ ℕ, x1 < x2 and y1 ≤ y2. The equation of the line is given by

  y = s·x + c,  where  s = (y2 - y1) / (x2 - x1)  and  c = (y1·x2 - y2·x1) / (x2 - x1).

We get a simple scan-conversion algorithm by incrementing x, computing the corresponding y, and rounding it to the nearest integer value.

Brute-Force Scan Conversion

Algorithm Brute-Force Scan Conversion
Input: P1, P2: point (P1 = (x1, y1), P2 = (x2, y2))
1.  var s, c: real;
2.  var x, y: integer;
3.  s ← (y2 - y1) / (x2 - x1);
4.  c ← (y1·x2 - y2·x1) / (x2 - x1);
5.  for x ← x1 to x2 do
6.      y ← round(s·x + c);
7.      SetPixel(x, y);

- The rounding operation is expensive.
- The algorithm involves floating-point arithmetic.
- In any case, this simple algorithm will not work particularly well...
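A runnable sketch of the brute-force algorithm, collecting the pixels in a list instead of calling SetPixel (the function name is mine; rounding is done explicitly via +0.5 so as not to depend on a particular rounding mode):

```python
def scan_convert_brute_force(x1, y1, x2, y2):
    """Brute-force scan conversion of a line segment with 0 <= slope <= 1:
    step x, evaluate y = s*x + c in floating point, round to nearest."""
    s = (y2 - y1) / (x2 - x1)
    c = (y1 * x2 - y2 * x1) / (x2 - x1)
    pixels = []
    for x in range(x1, x2 + 1):
        y = int(s * x + c + 0.5)    # round to the nearest integer
        pixels.append((x, y))
    return pixels
```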

Handling Different Inclinations

We need a different algorithm depending on whether the change in y is bigger than the change in x.

Bresenham's Algorithm for Drawing a Line

- Developed in the early 1960s to control the pen movements of plotters.
- Be warned that several improvements to Bresenham's original algorithm have been proposed since its invention. So, by now, dozens of slightly different scan-conversion algorithms are denoted as "Bresenham's Algorithm".
- The following description is limited to line segments that lie in the first octant, i.e. y = s·x + c with 0 ≤ s ≤ 1.
- Let Δx := x2 - x1 and Δy := y2 - y1, where x1, x2, y1, y2 ∈ ℕ, and x1 ≤ x2 and y1 ≤ y2. Furthermore, Δy ≤ Δx.
- We will draw the segment from left to right.
- Assume that pixel (xi, yi) has been set. Which pixel is next? The pixel (xi + 1, yi) or the pixel (xi + 1, yi + 1)? Remember that we seek an eight-connected set of pixels. Thus, (xi, yi + 1) is no option once (xi, yi) was drawn.

Basic Idea of a Midpoint Algorithm

[Figure: after the previous pixel has been set, the midpoint M between the candidate pixels E (east) and NE (north-east) is tested against the line; the same choice repeats for the current pixel and the next pixel.]

The Mathematics of Bresenham's Line Algorithm

In implicit form, the equation of the line through (x1, y1) and (x2, y2) is

  F(x, y) := Δy·x - Δx·y + c·Δx,

where
  F(x, y) < 0  ⟺  (x, y) above the line,
  F(x, y) = 0  ⟺  (x, y) on the line,
  F(x, y) > 0  ⟺  (x, y) below the line.

Bresenham's Algorithm always increments x. Whether or not y is incremented depends on the position of the midpoint relative to the line:

  e_i := F(x_i + 1, y_i + 1/2):
    e_i < 0:  x_{i+1} := x_i + 1, y_{i+1} := y_i      (E),
    e_i ≥ 0:  x_{i+1} := x_i + 1, y_{i+1} := y_i + 1  (NE).

The Mathematics of Bresenham's Line Algorithm

Goal: derive the error variable e_{i+1} directly from the last error variable e_i.

Case (E):
  e_{i+1} = F(x_i + 2, y_i + 1/2)
          = Δy·(x_i + 2) - Δx·(y_i + 1/2) + c·Δx
          = F(x_i + 1, y_i + 1/2) + Δy
          = e_i + Δy.

Case (NE):
  e_{i+1} = F(x_i + 2, y_i + 3/2)
          = Δy·(x_i + 2) - Δx·(y_i + 3/2) + c·Δx
          = F(x_i + 1, y_i + 1/2) + Δy - Δx
          = e_i + Δy - Δx.

The Mathematics of Bresenham's Line Algorithm

The first error variable e_1 is initialized as

  e_1 := F(x_1 + 1, y_1 + 1/2) = F(x_1, y_1) + Δy - Δx/2 = Δy - Δx/2,

since F(x_1, y_1) = 0. For the purposes of Bresenham's algorithm we may replace F(x, y) by 2·F(x, y), thus eliminating the division by 2:

  e_1 = 2·Δy - Δx,
  e_{i+1} := e_i + 2·Δy         if (E),
  e_{i+1} := e_i + 2·Δy - 2·Δx  if (NE).

Bresenham's Line Algorithm

Algorithm Bresenham
Input: P1, P2: point (P1 = (x1, y1), P2 = (x2, y2))
1.  var x, y, Δx, Δy, error, c1, c2: integer;
2.  Δx ← x2 - x1; Δy ← y2 - y1;
3.  x ← x1; y ← y1;
4.  c1 ← 2·Δy; error ← c1 - Δx; c2 ← error - Δx;
5.  repeat
6.      SetPixel(x, y);
7.      inc(x);
8.      if error < 0 then ( (E) )
9.          error ← error + c1;
10.     else ( (NE) )
11.         inc(y);
12.         error ← error + c2;
13. until x > x2;
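The pseudocode above can be sketched in Python, again collecting pixels instead of calling SetPixel (restricted, as on the slide, to the first octant):

```python
def bresenham_line(x1, y1, x2, y2):
    """Integer-only midpoint/Bresenham scan conversion for the first
    octant: x1 <= x2, y1 <= y2, and dy <= dx."""
    dx, dy = x2 - x1, y2 - y1
    c1 = 2 * dy
    error = c1 - dx              # e1 = 2*dy - dx
    c2 = error - dx              # increment for case (NE): 2*dy - 2*dx
    x, y = x1, y1
    pixels = []
    while x <= x2:
        pixels.append((x, y))
        x += 1
        if error < 0:            # case (E): keep y
            error += c1
        else:                    # case (NE): step up
            y += 1
            error += c2
    return pixels
```

On the segment (0,0)-(4,2) this produces the same pixels as rounding y = x/2 at every integer x, but with purely integer arithmetic.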

Drawing a Circle

The standard parameterization of a circle with radius r centered at the origin is given by x(φ) = r·cos φ and y(φ) = r·sin φ, with 0 ≤ φ < 2π. Discretization based on φ_i := i·2π/n for 0 ≤ i ≤ n - 1 yields

  x_{i+1} := x(φ_{i+1}) = x(φ_i + δ) = r·(cos φ_i · cos δ - sin φ_i · sin δ) = x_i·cos δ - y_i·sin δ,
  y_{i+1} := y(φ_{i+1}) = y(φ_i + δ) = r·(sin φ_i · cos δ + cos φ_i · sin δ) = x_i·sin δ + y_i·cos δ,

where δ := 2π/n and x_0 := r and y_0 := 0. A brute-force scan-conversion algorithm for circular arcs and ellipses is easily derived from these equations.
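A sketch of this incremental evaluation, which rotates the previous point by δ instead of re-evaluating sine and cosine for every i (the function name is mine):

```python
import math

def circle_points(r, n):
    """Approximate a circle of radius r by n points: evaluate sin/cos of
    delta = 2*pi/n once, then rotate the previous point incrementally."""
    delta = 2.0 * math.pi / n
    cd, sd = math.cos(delta), math.sin(delta)
    x, y = float(r), 0.0            # x0 = r, y0 = 0
    pts = []
    for _ in range(n):
        pts.append((x, y))
        x, y = x * cd - y * sd, x * sd + y * cd
    return pts
```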

Bresenham's Algorithm for Drawing a Circle

We consider the second octant of circles with integer radius centered at the origin: the circular arc is drawn from 90° to 45°! Use symmetry to draw the other portions of the circle: if (x, y) is on the circle, then so are (y, x), (y, -x), (x, -y), (-x, -y), (-y, -x), (-y, x), and (-x, y).

The Mathematics of Bresenham's Circle Algorithm

We have

  F(x, y) := x² + y² - r²,

where
  F(x, y) > 0  ⟺  (x, y) outside the circle,
  F(x, y) = 0  ⟺  (x, y) on the circle,
  F(x, y) < 0  ⟺  (x, y) inside the circle.

Once again, we use the midpoint algorithm: if the midpoint lies inside the circle then (E) else (SE), where E is the pixel to the east and SE the pixel to the south-east of the current pixel.

The Mathematics of Bresenham's Circle Algorithm

The error variable is defined as

  e_i := F(x_i + 1, y_i - 1/2) = (x_i + 1)² + (y_i - 1/2)² - r².

Furthermore,
  e_i < 0:  x_{i+1} := x_i + 1, y_{i+1} := y_i      (E),
  e_i ≥ 0:  x_{i+1} := x_i + 1, y_{i+1} := y_i - 1  (SE).

We get

  (E):  e_{i+1} = F(x_i + 2, y_i - 1/2) = (x_i + 2)² + (y_i - 1/2)² - r²
              = F(x_i + 1, y_i - 1/2) + 2·x_i + 3 = e_i + 2·x_i + 3 = e_i + 2·x_{i+1} + 1.
  (SE): e_{i+1} = F(x_i + 2, y_i - 3/2) = ... = e_i + 2·x_i - 2·y_i + 5 = e_i + 2·x_{i+1} - 2·y_{i+1} + 1.

The Mathematics of Bresenham's Circle Algorithm

The first error variable is initialized as

  e_1 := F(1, r - 1/2) = 5/4 - r.

Once again, we substitute F(x, y) by 2·F(x, y). Also, we add 1/2 to e_1, which makes it an integer without changing any sign decision. Thus, we end up with

  e_{i+1} = e_i + 4·x_{i+1} + 2              (E),
  e_{i+1} = e_i + 4·x_{i+1} - 4·y_{i+1} + 2  (SE),

and e_1 := 3 - 2r.

Bresenham's Circle Algorithm

Algorithm Bresenham
Input: rad: integer
1.  var x, y, error: integer;
2.  x ← 0; y ← rad;
3.  error ← 3 - 2·rad;
4.  while x ≤ y do
5.      SetPixel(x, y);
6.      inc(x);
7.      if error ≥ 0 then
8.          dec(y);
9.          error ← error - 4·y;
10.     error ← error + 4·x + 2;

Basic Rendering Techniques

- Clipping
- Hidden-Surface Removal
- Simple Shading Algorithms
- Incremental Shading Techniques
- Aliasing and Anti-Aliasing
- Texture Mapping

View Volume and Clipping

- The portion of the view plane (or image plane) that we are interested in is defined by the view window.
- Together with the view point, the view window defines a pyramid-shaped portion of space: the so-called view volume (or view frustum).
- The process of restricting an object to the view volume/window is called clipping.
- Typically, a near plane (or front plane) and a far plane (or back plane) are added in order to exclude objects from being projected that are very far from or very close to the viewer.

View Volume and Clipping

- The view volume is a frustum of a pyramid in the case of perspective projection, and a parallelepiped in the case of parallel projection.
- Since clipping objects to a box is much simpler than clipping to a genuine pyramid, it is common to apply a perspective normalization in order to convert a perspective projection into an orthographic projection.
- Clipping can be performed in 3D prior to projecting the objects onto the view plane, or in 2D after projecting the objects onto the view plane.

Clipping in 2D

The task of clipping is to replace line segments with shorter segments that fit neatly into the view/clip window. Clipping can be performed in object space or in image space:
- Object-space clipping: compute intersections analytically, and scan-convert only clipped segments.
- Image-space clipping: scan-convert full segments and perform point clipping afterwards.
We assume that the window is given by the axis-parallel rectangle W := [xmin, xmax] × [ymin, ymax]. Point clipping: xmin ≤ x ≤ xmax and ymin ≤ y ≤ ymax.

Line Clipping

Let W := [xmin, xmax] × [ymin, ymax] be the clip window, and consider the clipping of a line segment l := AB. Cases:
- A ∈ W, B ∈ W: accept l.
- A ∈ W, B ∉ W: compute the intersection P of l with the boundary of W, accept AP.
- A ∉ W, B ∈ W: compute the intersection P of l with the boundary of W, accept BP.
- A ∉ W, B ∉ W: if A, B lie outside the same boundary line of W then reject l. Otherwise, a more complicated test is needed...

Cohen-Sutherland Algorithm

We classify the position of every point P with respect to the supporting lines of W by assigning a 4-bit out code, OC(P), as follows:
- 0001: P to the left of x = xmin.
- 0010: P to the right of x = xmax.
- 0100: P below y = ymin.
- 1000: P above y = ymax.

[Figure: the nine regions induced by the supporting lines of W, with out codes 1001, 1000, 1010 (top row), 0001, 0000, 0010 (middle row), and 0101, 0100, 0110 (bottom row).]

Cohen-Sutherland Algorithm

- If OC(A) | OC(B) = 0000 then accept l, where the OR operator "|" is applied bitwise.
- If OC(A) & OC(B) ≠ 0000 then reject l; again, the AND operator "&" is applied bitwise.
- Otherwise:
  - If OC(A) = 0000 then swap A and B.
  - Find the rightmost bit i such that OC_i(A) = 1.
  - Compute the intersection P of l with the supporting line of W which defines bit i.
  - Let A := P, and update OC(A).
- Repeat the above steps until l is accepted or rejected.
- Easily generalized to more general convex clip windows.
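A sketch of the algorithm for an axis-parallel clip window (variable and function names are mine; it returns the clipped segment, or None on rejection):

```python
LEFT, RIGHT, BELOW, ABOVE = 1, 2, 4, 8     # 0001, 0010, 0100, 1000

def out_code(x, y, xmin, xmax, ymin, ymax):
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BELOW
    elif y > ymax:
        code |= ABOVE
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, xmax, ymin, ymax):
    """Clip segment (x1,y1)-(x2,y2) against the window W; returns the
    clipped segment, or None if the segment is rejected."""
    oc1 = out_code(x1, y1, xmin, xmax, ymin, ymax)
    oc2 = out_code(x2, y2, xmin, xmax, ymin, ymax)
    while True:
        if oc1 | oc2 == 0:                 # trivial accept
            return (x1, y1, x2, y2)
        if oc1 & oc2 != 0:                 # trivial reject
            return None
        if oc1 == 0:                       # make (x1,y1) an outside endpoint
            x1, y1, x2, y2 = x2, y2, x1, y1
            oc1, oc2 = oc2, oc1
        if oc1 & LEFT:                     # clip against one supporting line
            y1 = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1); x1 = xmin
        elif oc1 & RIGHT:
            y1 = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1); x1 = xmax
        elif oc1 & BELOW:
            x1 = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1); y1 = ymin
        elif oc1 & ABOVE:
            x1 = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1); y1 = ymax
        oc1 = out_code(x1, y1, xmin, xmax, ymin, ymax)
```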

Cyrus-Beck-Liang-Barsky Algorithm

We consider the standard parameterization p(t) := (1 - t)·A + t·B, with t ∈ [0, 1].
Goal: compute tE, tL ∈ [0, 1] such that W ∩ l = p(tE)p(tL).
We pick an arbitrary point w_i on clip edge E_i, and denote the outwards normal vector of E_i by n_i. Where does p(t) lie relative to E_i?

  ⟨n_i, p(t) - w_i⟩ > 0  ⟺  p(t) outside,
  ⟨n_i, p(t) - w_i⟩ < 0  ⟺  p(t) inside,
  ⟨n_i, p(t) - w_i⟩ = 0  ⟺  p(t) on E_i.

Cyrus-Beck-Liang-Barsky Algorithm

Thus, the equation for the intersection of l with E_i is ⟨n_i, p(t_i) - w_i⟩ = 0. Suppose that d := b - a ≠ 0. We get

  ⟨n_i, a - w_i⟩ + t_i·⟨n_i, d⟩ = 0,

that is,

  t_i = ⟨n_i, a - w_i⟩ / (-⟨n_i, d⟩)   if ⟨n_i, d⟩ ≠ 0.

If ⟨n_i, d⟩ = 0 then l is parallel to E_i.

Cyrus-Beck-Liang-Barsky Algorithm

We define the predicates "potentially entering" (PE) and "potentially leaving" (PL) for each intersection:

  (PL)_i  ⟺  ⟨n_i, d⟩ > 0,
  (PE)_i  ⟺  ⟨n_i, d⟩ < 0.

[Figure: four sample lines crossing the clip window, with each intersection labelled PE or PL depending on whether the line enters or leaves the half-plane of the corresponding clip edge.]

Cyrus-Beck-Liang-Barsky Algorithm

This yields the parameters tE, tL as

  tE := max({t_i : (PE)_i} ∪ {0}),
  tL := min({t_i : (PL)_i} ∪ {1}),

where

  t_i := ⟨n_i, a - w_i⟩ / (-⟨n_i, d⟩).

If tE > tL then l does not intersect the clip window. If the clip edge E_i is parallel to a coordinate axis then the term for t_i is much simpler. E.g., for E_i: x = xmin we get ⟨n_i, d⟩ = a_x - b_x and

  t_i = (a_x - xmin) / (a_x - b_x).

Similar to the Cohen-Sutherland algorithm, the CBLB algorithm can easily be generalized to general convex clip windows.
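A Liang-Barsky-style sketch for an axis-parallel clip window, hard-coding the four pairs ⟨n_i, d⟩ and ⟨n_i, a - w_i⟩ (names are my choices; it returns (tE, tL), or None if the segment misses the window):

```python
def clip_cblb(a, b, xmin, xmax, ymin, ymax):
    """Clip segment a-b (2D points) against an axis-parallel window.

    Returns (tE, tL) with W ∩ l = p(tE)..p(tL), or None on rejection."""
    (ax, ay), (bx, by) = a, b
    dx, dy = bx - ax, by - ay
    tE, tL = 0.0, 1.0
    # For each clip edge: (<n_i, d>, <n_i, a - w_i>) with outward normal n_i.
    for nd, naw in ((-dx, xmin - ax), (dx, ax - xmax),
                    (-dy, ymin - ay), (dy, ay - ymax)):
        if nd == 0:
            if naw > 0:          # parallel to this edge and outside it
                return None
            continue
        t = naw / -nd
        if nd < 0:               # potentially entering (PE)
            tE = max(tE, t)
        else:                    # potentially leaving (PL)
            tL = min(tL, t)
    return (tE, tL) if tE <= tL else None
```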

Polygon Clipping

Clipping the edges of a polygon individually is not good enough. Note that the clipped polygon may be disconnected!

Polygon Clipping

Two main options, and none of them is ideal...

Sutherland-Hodgeman Algorithm

A conventional line-clipping algorithm would clip one edge of the polygon with respect to the entire clip window, and would loop over all edges of the polygon. The Sutherland-Hodgeman Algorithm clips the entire polygon with respect to one clip edge of the clip window, and loops over all clip edges.

Sutherland-Hodgeman Algorithm

We distinguish four cases, depending on how the start-point A and end-point B of an edge E are located relative to the clip edge.

[Figure: the four cases, with both A and B inside; A inside and B outside; A outside and B inside; both A and B outside.]

Sutherland-Hodgeman Algorithm

Initialize the output list L for the clipped polygon: L := ∅. For all clip edges of the clip window:
1. Pick an edge E of the polygon, and let E0 := E.
2. Let A be its start-point and B be its end-point.
3. Distinguish the following four cases:
   - If both A and B are on the interior side of the clip edge, then add B to L.
   - If point A is on the interior side of the clip edge and point B is on the exterior side of the clip edge, then add the intersection B' of AB with the clip edge to L.
   - If point A is on the exterior side of the clip edge and point B is on the interior side of the clip edge, then add both the intersection B' of AB with the clip edge and point B to L.
   - If both A and B are on the exterior side of the clip edge, then no output is generated.
4. Let E be the polygon edge starting in B.
5. If E0 ≠ E then goto Step 2.

Clipping in 3D

The Cohen-Sutherland Algorithm and the Cyrus-Beck-Liang-Barsky Algorithm are straightforward to extend to 3D. E.g., for the Cohen-Sutherland Algorithm it suffices to maintain a six-bit out code, where bit 5 is set if p_z < zmin, and bit 6 is set if p_z > zmax.

Invisible Polygons
A point P_1 is occluded by a point P_2 if
- P_1 and P_2 project onto the same point in the view plane, and
- P_2 is closer to the viewer than P_1.
Why can a polygon be invisible?
- The polygon is outside of the view frustum.
- The polygon is on the back side (with respect to the viewer) of an opaque object.
- The polygon is occluded by some other polygon(s) that are closer to the viewer.
For the sake of efficiency, we want to know whether a polygon is outside of the view frustum: view-frustum culling! Also for the sake of efficiency, we want to know whether a polygon is occluded by some other polygon(s): occlusion culling! For correctness reasons, we need to know when a polygon is invisible: hidden-surface removal (HSR) and visible-surface determination!

Invisible Polygons
In the subsequent slides on hidden-surface removal, we will always assume that the canonical orthographic projection from z = −∞ onto the xy-plane is used. Recall that a coordinate transformation and perspective normalization suffice to transform any projection into this canonical set-up. Thus, P_1 occludes P_2 exactly if x_1 = x_2 and y_1 = y_2 and z_1 < z_2. For simple rendering of complex scenes, hidden-surface removal accounts for a substantial portion of the total rendering time. Thus, efficiency is a key issue! A major step on the path to efficiency is to avoid sending many polygons to the HSR algorithm that are obviously not visible, and this holds even with hardware support for HSR!

Back-Face Culling
On the surface of a closed manifold, polygons whose exterior normal vectors point away from the viewer are always invisible. The process of removing all back-facing polygons is called back-face culling or back-face removal. Note that back-face culling may not be applied if the surface is not a closed manifold! Also, note that back-face culling does, in general, not solve the HSR problem for a non-convex object!

Back-Face Culling
Consider the supporting plane of a polygon. To check whether the polygon is back-facing under a parallel projection, it suffices to check whether the view point is in the inside half-space. For a general parallel projection, this test is quickly performed by computing the sign of the dot product between the normal vector n and the viewing direction. For the canonical orthographic projection, this test boils down to checking whether n_z > 0. [Figure: inside and outside half-spaces of the supporting plane.]
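The dot-product test is tiny in code. A minimal sketch, assuming the canonical set-up in which the viewing direction is the positive z-axis, so the test degenerates to n_z > 0:

```python
def is_back_facing(normal, view_dir=(0.0, 0.0, 1.0)):
    """Back-face test for a parallel projection: the polygon faces away
    from the viewer iff the exterior normal has a positive component
    along the viewing direction.  For the canonical orthographic
    projection (view_dir along +z) this reduces to n_z > 0."""
    n_dot_d = sum(n * d for n, d in zip(normal, view_dir))
    return n_dot_d > 0
```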

Hidden-Line Removal and Hidden-Surface Removal
Knowing the visible edges does not really help with HSR.

Object Space Versus Image Space
Object-Space HSR Algorithm: For each polygon σ in the scene:
1. Compute the visible portion σ′ of σ analytically.
2. Project σ′ onto the image plane and scan-convert the projection.
3. Use σ to assign the appropriate color to each pixel corresponding to σ′.
Image-Space HSR Algorithm: For each pixel π in the image plane:
1. Find the polygon σ closest to the view point that is pierced by the projector through π.
2. Use σ to assign the appropriate color to this pixel.
Several algorithms employ a hybrid strategy: the polygons are re-arranged and subdivided in object space until drawing them in proper order suffices to solve the visibility problem in image space. E.g., Binary Space Partitioning. Even in the presence of hardware support (mostly for image-space algorithms), there still is a need for object-space algorithms!

Object Space Versus Image Space
A lower bound on the worst-case complexity of object-space algorithms is Ω(n²) for a scene consisting of n polygons. A lower bound on the worst-case complexity of image-space algorithms is Ω(np), where p denotes the resolution of the image. (Typically, p ≈ 10⁶.) Note that both Ω-terms are not very realistic for practical applications, and that they hide multiplicative constants!

Painter's Algorithm
For objects that can be placed in an occlusion-compatible order, the painter's algorithm combined with depth sorting is sufficient:
1. Sort all of the potentially visible polygons by decreasing z-coordinates.
2. Draw them in this order.
Polygons in front of other polygons will be drawn later, so they will be visible, and they will occlude the polygons behind them. The painter's algorithm fails for polygons that interpenetrate each other, or in the case of cyclic overlaps.
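The depth sort itself is a one-liner. A minimal sketch, assuming the canonical convention of the surrounding slides (smaller z is closer to the viewer, so the farthest polygon has the largest z and is drawn first); the representative-depth function is supplied by the caller:

```python
# Painter's algorithm sketch: sort back to front, then draw in order.
# Assumes an occlusion-compatible order exists (no interpenetration,
# no cyclic overlap), as required by the slide.

def painters_order(polygons, depth):
    """Return polygons sorted by decreasing depth (farthest first)."""
    return sorted(polygons, key=depth, reverse=True)
```

Drawing the sorted list front-most last lets nearer polygons simply overwrite farther ones in the frame buffer.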

Hidden-Surface Removal for Octrees
Recursively visit and draw the cells of an octree in the proper order. [Figure: back-to-front visiting order of the eight octants.]

Bounding-Box Test
The (axis-aligned) bounding box (AABB) of an object is the smallest axis-aligned rectangle (in 2D) or box (in 3D) that encloses the object. If the bounding boxes do not intersect then the objects do not intersect. If two objects intersect then their bounding boxes intersect. Two objects cannot occlude each other if the projections of their bounding boxes do not intersect. No conclusion is possible if the bounding boxes intersect each other. This idea can be generalized to other bounding volumes, like oriented bounding boxes (OBB), discrete-orientation polytopes (k-DOPs), spheres, etc.
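A minimal 2D sketch of the AABB computation and the conservative overlap test; the representation of a box as ((x_min, y_min), (x_max, y_max)) is an assumption for illustration:

```python
def aabb(points):
    """Axis-aligned bounding box of a 2D point set."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def boxes_overlap(a, b):
    """Conservative test: False proves the objects do not intersect;
    True allows no conclusion about the objects themselves."""
    (ax0, ay0), (ax1, ay1) = a
    (bx0, by0), (bx1, by1) = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1
```

Applied to the projections of two objects, a negative result certifies that neither object can occlude the other.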

Binary Space Partitioning
A binary space partition tree (BSP tree) subdivides 3D space along the supporting planes of particular polygons [Fuchs&Kedem&Naylor 1980]. Appropriately traversing this tree enumerates the polygons from back to front. Analogously for 2D and edges of polygons. [Figure: three polygons A, B, C and the corresponding BSP tree.]

Constructing BSP Trees
BSP trees are constructed much like Quicksort works. Suppose that all polygons are triangular. We use the supporting plane of a (randomly selected) triangle to split the space into triangles on one side and triangles on the other side. Triangles must be split (and re-triangulated) if they cross the plane. This process continues recursively until every cell contains at most one triangle, or until the depth of the tree exceeds a threshold. Care has to be taken to keep the vertices of new triangles in consistent order. Ideally, a BSP tree should be small in size and balanced! Note that naive splitting of n input triangles (that do not intersect) may cause O(n³) output polygons in the worst case. Paterson&Yao (1990): O(n²) size can be achieved.

Splitting Spanning Triangles
Triangles must be split (and re-triangulated) if they span the splitting plane.

Sample BSP Tree in 2D
Left children are left of the splitting line segment; right children are right of the splitting line segment. Each node stores all line segments that are collinear with the splitting line segment. All resulting cells I, II, ..., VII are convex. [Figure: a subdivision by segments A-F and the corresponding BSP tree.]

Sample BSP Tree in 2D: Tree Traversal
Locate the viewpoint in the BSP subdivision. At each splitting plane, first draw the stuff on the farther side, then the polygon that defines the splitting plane, and finally the stuff on the nearer side. Moving the viewpoint does not render the BSP tree invalid, but changes the traversal order. [Figure: the sample subdivision and the resulting back-to-front traversal.]
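The traversal rule ("far side first, then the node's own polygons, then the near side") can be sketched as follows. The node layout and the side() test are illustrative assumptions; only the traversal order comes from the slide:

```python
# Back-to-front BSP traversal sketch (node layout is illustrative).

class BSPNode:
    def __init__(self, polygons, plane, front=None, back=None):
        self.polygons = polygons   # polygons lying in the splitting plane
        self.plane = plane         # (normal, d): points x with <normal, x> = d
        self.front = front
        self.back = back

def side(plane, point):
    normal, d = plane
    s = sum(n * x for n, x in zip(normal, point)) - d
    return 1 if s > 0 else -1      # +1: point lies in the front half-space

def back_to_front(node, viewpoint, out):
    """Append polygons to out in back-to-front order for viewpoint."""
    if node is None:
        return out
    if side(node.plane, viewpoint) > 0:
        near, far = node.front, node.back
    else:
        near, far = node.back, node.front
    back_to_front(far, viewpoint, out)     # farther side first
    out.extend(node.polygons)              # then the splitting polygon(s)
    back_to_front(near, viewpoint, out)    # nearer side last
    return out
```

Moving the viewpoint only flips the near/far decision at each node; the tree itself stays valid, exactly as the slide states.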

Depth-Buffer Algorithm
Idea: Determine visibility independently for each pixel. The depth-buffer algorithm, aka z-buffer algorithm, makes use of two buffers:
- Frame buffer: F[i,j] contains the color of pixel (i,j).
- Z buffer: Z[i,j] contains the z-coordinate of the object visible at pixel (i,j).
Algorithm Depth-Buffer:
1. ∀i,j: Z[i,j] ← +∞. (initialization of z buffer)
2. ∀i,j: F[i,j] ← background color. (initialization of frame buffer)
3. for each polygon π do
4.   for each pixel (i,j) in the projection of π do
5.     z ← z-coordinate of the point P of π that projects onto pixel (i,j).
6.     if z ≤ Z[i,j] then
7.       Z[i,j] ← z.
8.       compute the color at P and assign it to F[i,j].
Interpolation can be used to speed up the computation of the z-values.
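A toy version of this algorithm on a tiny raster, using axis-aligned rectangles of constant depth as "polygons" so that the projection loop stays trivial; the data layout is an assumption for illustration:

```python
import math

# Depth-buffer sketch: rectangles (x0, y0, x1, y1, z, color) on a small raster.
def render(polygons, width, height, background=" "):
    Z = [[math.inf] * width for _ in range(height)]    # step 1: z buffer
    F = [[background] * width for _ in range(height)]  # step 2: frame buffer
    for (x0, y0, x1, y1, z, color) in polygons:        # arbitrary order!
        for j in range(y0, y1):
            for i in range(x0, x1):
                if z <= Z[j][i]:                       # steps 6-8: nearer wins
                    Z[j][i] = z
                    F[j][i] = color
    return F
```

Note that the result is independent of the order in which the primitives are processed, which is the main practical advantage over the painter's algorithm.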

Depth-Buffer Algorithm: Pros and Cons
Pros:
- Simple, easy to implement, and ideally suited for hardware implementation.
- Primitives can be processed in arbitrary order.
- Interpenetration of primitives poses no problem.
Cons:
- Needs fast memory for the read/write statements in the inner loop.
- Pixels are written repeatedly.
- The basic z-buffer algorithm does not handle translucency.
- Most GPUs offer only 32-bit floating-point precision; 64-bit precision is not yet mandatory!
- A perspective-to-orthogonal transformation tends to reduce z precision.
- Aliasing problems if different polygons share the same pixel.
- Co-planar primitives are handled unpredictably ("z-buffer fighting").

Depth-Buffer Algorithm: z-Buffer Fighting

Depth-Buffer Algorithm: Handling Translucent Objects
Translucent objects require a modification of the standard z-buffer:
1. Draw all the opaque objects first, using the standard z-buffer.
2. Draw translucent objects with blending:
   - Translucent objects behind an opaque object do not have any effect.
   - Translucent objects in front of all opaque objects do not change the z-value.
   - Blend colors appropriately.
E.g., alpha blending as utilized by OpenGL: (R, G, B) → (R, G, B, A), where smaller values of A denote higher translucency. A = 0 means transparent and A = 1 means opaque. Modern GPUs use several other specialized buffers to simulate other special effects, e.g., an accumulation buffer for simulating multiple exposures, motion blur or depth of field.
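The blending step itself, for the common source-alpha case, can be sketched as below. This is one fixed choice of blending function; OpenGL makes the factors configurable, so treat this as an illustration of the idea rather than the one true formula:

```python
def blend(src, dst):
    """Blend a translucent source color src = (R, G, B, A) over an
    already-resolved opaque destination color dst = (R, G, B).
    A = 0 means fully transparent, A = 1 fully opaque."""
    r, g, b, a = src
    return tuple(a * s + (1.0 - a) * d for s, d in zip((r, g, b), dst))
```

Drawing the translucent objects after all opaque ones guarantees that dst already holds the correct opaque color at each pixel.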

The Physics of Reflection
One result of physics is that atoms move from one discrete energy state to another. Consider a photon striking the surface of some object. The atoms of that object are themselves always vibrating, at a variety of frequencies. When a photon strikes an atom, it might transfer some or all of its energy. If there is not enough energy to lift the atom to the next energy state, the atom will absorb the photon's energy for a moment but then lose it very soon by radiating that energy in the form of heat. This is the physical explanation for black tar getting hot in the sun. If, however, the photon has exactly enough energy to promote the atom to its next stable energy state, then the atom will absorb the photon's energy (destroying the photon) and oscillate at an increased level. The atom cannot remain at this higher energy state forever. After a while, it drops back down to its original energy state. As the atom drops back, energy becomes free and a new photon is generated, carrying the energy released by the atom. The amount of energy released matches the amount of energy that the photon imparted to the atom in the first place. In effect, the resonant photon appears to be reflected off the surface.

Modeling Reflection
A reflectance spectrum indicates, for a particular angle of incidence, the percentage of the incoming light reflected at each wavelength by the surface. To find the color of the light leaving the surface, we multiply the amount of incoming light by the percentage reflectance of the surface at each wavelength. Generalization: bidirectional reflectance distribution function (BRDF). [Figures: reflectance (in %) over wavelength (400-800 nm) of a greenish surface and of copper at normal incidence.]

Light Interacting with an Object
[Figure: incoming light split into scattering and emission (fluorescence), diffuse reflection, absorption, specular reflection, internal reflection, and transmitted light.]

Surface Characteristics and Types of Reflection
[Figure: reflected intensity distributions for a perfectly matt surface (diffuse reflection), a slightly specular (shiny) surface, a highly specular (shiny) surface, and a perfect mirror (specular reflection).] Perfectly diffuse or perfectly specular surfaces hardly occur in practice.

Basics of Illumination Models
An illumination model defines the nature of the light emanating from a source, e.g., the geometry of its intensity distribution. An illumination model can be expressed by an illumination equation in variables associated with the point on the object being shaded. An illumination equation can be interpreted as an equation for intensities (e.g., in a grey-scale image) as well as an equation for colors (e.g., RGB). This makes the illumination equation a vector equation, which must be evaluated for the red, green, and blue components separately. There is an obvious trade-off between the accuracy and complexity of a physics-based model and the convenience and speed of a purely heuristic model.

Phong's Illumination Model
The standard illumination model in computer graphics that compromises between acceptable results and processing cost is the Phong model [Phong 1975]. This model handles reflected light in terms of a diffuse and a specular component together with an ambient term:
I = I_a + I_d + I_s
where I is the intensity at a point, I_a the ambient part of I, I_d the diffuse part of I, and I_s the specular part of I.

Diffuse Reflection
Diffuse reflection: Light is scattered uniformly in all directions from a point on the surface of the object. The amount of reflected light seen by the viewer does not depend on the viewer's position. Such surfaces are dull.

Diffuse Reflection
The intensity of diffusely reflected light is given by Lambert's Cosine Law:
I_d = I_l · k_d · cos θ   (0 ≤ θ ≤ π/2)
where I_l is the intensity of the light source, θ the angle between the surface normal N and the vector L (pointing to the light), and k_d the diffuse-reflection coefficient, ranging between 0 and 1. The value k_d depends on the material and the wavelength of the incident light.

Diffuse Reflection
For simplicity, all vectors are normalized! Since cos θ = ⟨L, N⟩, the illumination equation I_d = I_l · k_d · cos θ can be rewritten using the dot product:
I_d = I_l · k_d · ⟨L, N⟩
If a point light is sufficiently distant from the objects, it makes essentially the same angle with all surfaces sharing the same surface normal. In this case, L is a constant for the light source. If L does not vary then two parallel surfaces with identical surface normal will be shaded the same, no matter how different their distances from the light source or viewer are. This effect can be mitigated by using a light-source attenuation factor. (Problematic in practice, though.) Perfect Lambertian diffusers do not exist in nature.
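The dot-product form of Lambert's law is a few lines of code. A minimal sketch; the clamp to 0 for θ > π/2 (surface facing away from the light) is the usual convention and matches the restriction 0 ≤ θ ≤ π/2 on the previous slide:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def diffuse(I_l, k_d, L, N):
    """Lambertian term I_d = I_l * k_d * <L, N>, clamped at 0 for
    surfaces facing away from the light."""
    L, N = normalize(L), normalize(N)
    return I_l * k_d * max(0.0, sum(a * b for a, b in zip(L, N)))
```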

Ambient Light
Using the diffuse illumination model, any surface visible to the viewer but not directly lit by the light (since ⟨N, L⟩ ≤ 0) is ... black! [Figure: surfaces with normals N_1, N_2, N_3 relative to the light direction L.] This does not match reality!

Ambient Light
Ambient light is the result of multiple reflections from walls and objects, and it is incident on a surface from all directions. It is modeled as a constant term I_al for a particular object, using a constant ambient-reflection coefficient k_a ranging between 0 and 1:
I_a = I_al · k_a
The ambient-reflection coefficient is a material property.
Caveat: The ambient-light term is an empirical convenience and does not correspond directly to any physical property of real materials.

Specular Reflection
The law of reflection tells us that the reflected ray remains within the plane of incidence, and that the angle of reflection equals the angle of incidence. The plane of incidence includes the incident ray and the normal at the point of incidence. [Figure: incident ray L, normal N, and reflected ray R with equal angles θ.] However, most surfaces and their reflectance properties are somewhere in between a perfect diffuser and perfect glossiness, i.e., a perfect mirror.

Specular Reflection
Glossy surfaces differ from dull surfaces in a number of important aspects:
- Light reflected from a glossy surface leaves the surface at angle θ, where θ is the angle of incidence. Thus, the degree of specular reflection seen by a viewer depends on the view point.
- In practice, specular reflection is not perfect, and reflected light can be seen for viewing directions close to the direction of the reflected beam. The area over which specular reflection is seen for a given view point is commonly referred to as highlight.
- The color of the specularly reflected light is different from the color of the diffusely reflected light. In simple models the specular component is assumed to be the color of the light source.

Specular Reflection
For a perfect mirror, a highlight is only visible if φ, the angle between the viewing direction V and the reflection vector R, is zero. In practice, however, specular reflection is seen over a range of φ that depends on the glossiness of the surface.

Coefficient of Glossiness
Phong modeled this behavior empirically by a term cosⁿ φ:
I_s = I_l · k_s · cosⁿ φ   (0 ≤ k_s ≤ 1; 0 ≤ n ≤ ∞)
where k_s is the specular-reflection coefficient, usually taken to be a material-dependent constant. Note that large values of n are required for a tight highlight to be obtained. For a perfect mirror surface, this coefficient of glossiness is infinite.
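A minimal sketch of this term, assuming all vectors are already normalized. The reflection vector is computed by the standard identity R = 2⟨N, L⟩N − L, and cos φ = ⟨R, V⟩ is clamped at 0 so that viewers on the far side of the reflection direction see no highlight:

```python
def reflect(L, N):
    """Reflection of the (normalized) light vector L about the normal N:
    R = 2 <N, L> N - L."""
    d = sum(a * b for a, b in zip(N, L))
    return tuple(2.0 * d * n - l for n, l in zip(N, L))

def specular(I_l, k_s, n, R, V):
    """Phong highlight term I_s = I_l * k_s * cos^n(phi), with
    cos(phi) = <R, V> clamped at 0."""
    cos_phi = max(0.0, sum(a * b for a, b in zip(R, V)))
    return I_l * k_s * cos_phi ** n
```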

Specular Reflection
The expense of the specular illumination equation can be considerably reduced by making some geometric approximations. Since the vector R is expensive to compute, Blinn [J. Blinn 1977] suggested to use the vector H instead: the Blinn-Phong reflection model. H is the mean (halfway) vector of L and V. The specular term then becomes a function of ⟨N, H⟩ rather than ⟨R, V⟩.

Specular Reflection
Thus, I_s = I_l · k_s · ⟨N, H⟩ⁿ. As the angle between R and V is twice the angle between N and H, the use of N and H spreads the highlight over a greater area. Like the diffuse term, this simple model of specular reflection is a local model: light reflected onto the surface that originates from specular reflections in other objects is not considered!

Specular Reflection
Summarizing, for colored objects the easiest way to model the specular highlights is to use the color of the light source, and to control the color of the objects by setting the diffuse-reflection coefficients appropriately:
I_r = I_a · k_ar + I_i · [k_dr · ⟨L, N⟩ + k_s · ⟨N, H⟩ⁿ]
I_g = I_a · k_ag + I_i · [k_dg · ⟨L, N⟩ + k_s · ⟨N, H⟩ⁿ]
I_b = I_a · k_ab + I_i · [k_db · ⟨L, N⟩ + k_s · ⟨N, H⟩ⁿ]
Combining these three equations in a single vector expression yields:
I_{r,g,b} = I_a · k_a(r, g, b) + I_i · [k_d(r, g, b) · ⟨L, N⟩ + k_s · ⟨N, H⟩ⁿ]

Discussion of Phong's Illumination Model
- Light sources are assumed to be point sources. Any intensity distribution of the light source is ignored.
- All geometry except the surface normal is ignored, and no distance information is considered. That is, light source(s) and viewer are (assumed to be) located at infinity.
- The diffuse and specular terms are modeled as local components. No reflections of other objects in the surface of the object being rendered are considered.
- Phong's model lacks shadows! Lack of shadows means not only that objects do not cast a shadow on other objects; self-shadowing within an object is omitted, too. Concavities in an object that are hidden from the light source are erroneously shaded simply on the basis of their surface normal.
- An empirical model is used to simulate the decrease of the specular term around the reflection vector, modeling the glossiness of the surface.
- The color of the specular reflection is assumed to be that of the light source. That is, for white light, highlights are rendered white regardless of the material.
Phong's illumination model (or some variant thereof) is in wide-spread use due to its apparent main advantage: its simplicity is unrivaled, and it produces acceptable results for many applications.

Flat Shading
The simplest shading model for a polygon is flat shading, also known as faceted shading or constant shading. This approach applies an illumination model once per polygon to determine a single intensity value (or color value) that is then used to shade the entire polygon. Basically, the illumination equation is sampled once for each polygon, and this value is used across the polygon to reconstruct the polygon's shade. This approach is only valid if the following assumptions are true:
- The light source is at infinity, so ⟨N, L⟩ is constant across the polygon's face.
- The viewer is at infinity, so ⟨N, V⟩ is constant across the polygon's face.
- The polygon represents the actual surface being modeled, and is not an approximation to a curved surface.
Otherwise, constant shading produces a faceted appearance.

Flat Shading
The simple solution of using a finer surface mesh turns out to be surprisingly ineffective: The perceived difference in shading between adjacent facets is accentuated by the Mach band effect (E. Mach 1860). Mach banding is caused by lateral inhibition of the receptors in the eye: The more light a receptor receives, the more that receptor blocks the response of the receptors adjacent to it. That is, the human visual system performs some form of edge enhancement by exaggerating the intensity change at any edge where there is a discontinuity of intensity. As a result, at the border between two facets the dark facet appears even darker and the light facet appears even lighter. The two shading models discussed next, Gouraud shading and Phong shading, take advantage of the information provided by adjacent polygons to simulate a smooth surface. Virtually all modern GPUs support one or both of these models through a combination of hardware and firmware.

Gouraud Shading
Gouraud shading [H. Gouraud 1971], also called intensity interpolation shading, eliminates intensity discontinuities to some extent. Gouraud shading extends the concept of interpolated shading applied to individual polygons by interpolating polygon vertex illumination values that take into account the surface being approximated. The technique first calculates the intensity at each vertex of the polygon by applying an illumination model. These vertex intensities are afterwards interpolated over the polygon. The normal vector N used in these equations is the so-called vertex normal: It is calculated as the average of the normals of the polygons that share the vertex. This is an important feature of the method, since the vertex normal is an approximation to the true normal of the surface (which the polygon represents) at that point.

Gouraud Shading
The interpolation process that calculates the intensity over a polygonal surface can be integrated with a scan-conversion algorithm that interpolates the intensities along the edges of a polygon from the vertex intensities, and the intensities along a scan line from these. This yields the following bilinear interpolation scheme:
I_a = [I_1 · (y_s − y_2) + I_2 · (y_1 − y_s)] / (y_1 − y_2)
I_b = [I_1 · (y_s − y_4) + I_4 · (y_1 − y_s)] / (y_1 − y_4)
I_s = [I_a · (x_b − x_s) + I_b · (x_s − x_a)] / (x_b − x_a)
These equations can be implemented as incremental calculations. [Figure: polygon with vertex intensities I_1, ..., I_4, edge intensities I_a and I_b, and intensity I_s at (x_s, y_s) on the scan line at y_s.] Gouraud shading handles specular reflection correctly only if the highlight occurs in one of the polygon vertices.
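The bilinear scheme above translates directly into code. A minimal sketch with illustrative function names; vertex 1 is the upper endpoint of both edges, vertices 2 and 4 the lower endpoints, exactly as in the three equations:

```python
def edge_intensity(I1, y1, I2, y2, ys):
    """Intensity at height ys on the edge from vertex 1 to vertex 2:
    I = (I1*(ys - y2) + I2*(y1 - ys)) / (y1 - y2)."""
    return (I1 * (ys - y2) + I2 * (y1 - ys)) / (y1 - y2)

def gouraud_pixel(I1, y1, I2, y2, I4, y4, ys, xa, xb, xs):
    """Interpolate along the two edges (I_a, I_b), then along the
    scan line: I_s = (I_a*(xb - xs) + I_b*(xs - xa)) / (xb - xa)."""
    Ia = edge_intensity(I1, y1, I2, y2, ys)
    Ib = edge_intensity(I1, y1, I4, y4, ys)
    return (Ia * (xb - xs) + Ib * (xs - xa)) / (xb - xa)
```

In a real scan converter these divisions are replaced by incremental updates per scan line and per pixel, as the slide notes.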

Phong Shading
Phong shading [B.T. Phong 1975], also known as normal-vector interpolation shading, makes use of the vertex normal vectors for bilinear interpolation in the following steps:
1. Interpolation of the normal vectors along the edges between the vertices.
2. By sliding a horizontal scan line from, say, bottom to top, the normal vectors of the surface enclosed by the edges are interpolated.
3. Thus, there exists a normal vector for each point of the polygon surface. This normal vector can then in turn be used for evaluating the illumination equation.
[Figure: vertex normals, interpolated normals, and the original surface.] The interpolation of the normal vectors tends to restore curvature.

Bilinear Interpolation of the Vertex Normals
With Phong shading, normal vectors are interpolated rather than the vertex intensities. [Figure: vertex normals N_1, ..., N_4, edge normals N_a and N_b, and the interpolated normal N_s on the current scan line.]

Characteristics of Phong Shading
- With Phong shading, vector interpolation replaces intensity interpolation: the normal vectors are interpolated rather than the vertex intensities.
- Interior points are calculated incrementally by using bilinear interpolation.
- A separate intensity is evaluated for every pixel from the interpolated normals.
- Specular reflection can be successfully incorporated into this scheme.
- Results of normal-vector interpolation are in general superior to intensity interpolation, because an approximation to the normal is used at each point. (This is true even without taking into account specular reflection.)
- Phong shading tends to be much more expensive than Gouraud shading. In particular, the illumination equation has to be evaluated at every pixel.
- To avoid the costs of normalizing the interpolated normals, approximation techniques (such as Taylor series expansion [Bishop&Weimer 1986]) may be used.

Sample Shading: Ellipsoid
The images show, from left to right, line drawing, flat shading, Gouraud shading, and Phong shading.

Sample Shading: Torus

Problems of Interpolated Shading
Silhouette edges: No matter how good a polygonal approximation an interpolated shading model offers to the actual shading of a curved surface, the silhouette edges of the mesh are still clearly polygonal.
Orientation dependencies: The results of interpolated shading are not independent of the projected polygon's orientation. This problem can be mitigated by using a larger number of smaller polygons, or by decomposing the polygon into triangles and using barycentric coordinates for the interpolation. [Figure: a rotated quadrilateral with vertex intensities 0 and 1 whose interpolated midpoint changes from dark to light with the orientation.]

Problems with Interpolated Shading
Perspective distortion: Anomalies can appear in animated sequences because the intensity/normal-vector interpolation is carried out in screen coordinates from vertex normals as calculated in world coordinates. This is not invariant with respect to transformations, and may cause frame-to-frame disturbances in animations.
Problems with shared vertices: Shading discontinuities can occur when two adjacent polygons fail to share a vertex that lies along their common edge. Thus, a vertex has to be shared by all adjacent areas. As a further improvement, such a vertex is connected to other vertices of the adjacent polygon. [Figure: adjacent polygons with vertex intensities 0 and 1 along a common edge.]

Problems with Interpolated Shading
Unrepresentative surface normals: The process of averaging surface normals to provide vertex normals for the intensity calculation can cause errors which result in corrugations being smoothed out. This would result in a visually flat surface. [Figure: surface normals of a corrugated surface whose averaged vertex normals are all parallel.]

Aliasing and Anti-Aliasing
Aliasing is the collective term for any form of visual artifact caused by mapping a continuum to a discrete set of samples. In the figure, the first letter suffers from aliasing and looks coarse compared to the second letter. Anti-aliasing is one of the most important classes of techniques for making graphics visually pleasing and text easy to read. It is a way of fooling the eye into seeing straight lines, smooth curves, and smooth motions where there are none. The aliasing problem thoroughly permeates computer graphics. Its most familiar manifestations are spatial aliasing and temporal aliasing.

Spatial Aliasing
Spatial aliasing is due to the discrete nature of pixels on a monitor, and results in silhouette edges that do not look smooth: jaggies and stair-casing. A silhouette edge is the boundary of a polygon, or of any surface unit, that exhibits a high contrast over its background. (In general, contrast means light and dark areas of the same color.) A long thin object may break up unpredictably, depending on its position with respect to the pixel array. Another aliasing artifact occurs when small objects, whose spatial extent is less than the area of a pixel, are rendered or not depending on whether they are intersected by a sample point.

Temporal Aliasing
New problems may occur when still images are shown in an animated sequence:
- Crawling edges (i.e., moving jaggies);
- Scintillating objects (i.e., objects (dis-)appearing during a move).
A slight change in the position of a line in world coordinates can cause a huge jump in the position of the digitized line in screen coordinates, i.e., a crawling of the pixels that represent the line. Such changes can be very distracting and are intolerable in some applications (e.g., flight simulators).

Temporal Aliasing: Scintillating and Jumping [Figure: a polygon moving across the screen over successive frames (Time = 0, 1, 2); an entire row of pixels suddenly pops on when the moving edge covers the pixel centers.]

Temporal Aliasing: Spinning Wheel In the third row, the wheel is spinning at 5 revolutions per second, but appears to be spinning backward at 1 revolution per second. Thus, the fast speed is aliased to a slower speed after sampling.

Time:               t=0   t=1/6   t=2/6   t=3/6   t=4/6   t=5/6   t=1
No. rev. (1 rev/s): 0     1/6     2/6     3/6     4/6     5/6     1
No. rev. (3 rev/s): 0     1/2     1       1 1/2   2       2 1/2   3
No. rev. (5 rev/s): 0     5/6     1 4/6   2 3/6   3 2/6   4 1/6   5
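The backward-spinning wheel can be reproduced with a few lines of code. This is an illustrative sketch (the helper name apparent_speed is ours, not from the slides): the per-frame rotation is only observable modulo one full turn, and the eye picks the smallest equivalent motion.

```python
def apparent_speed(true_rev_per_sec, frames_per_sec):
    """Rotation speed perceived after sampling: the per-frame rotation is
    only observable modulo one full revolution, and the viewer perceives
    the smallest equivalent motion in (-0.5, 0.5] revolutions per frame."""
    per_frame = true_rev_per_sec / frames_per_sec
    wrapped = (per_frame + 0.5) % 1.0 - 0.5
    return wrapped * frames_per_sec

# The wheel from the slide, sampled at 6 frames per second:
slow = apparent_speed(1, 6)  # 1 rev/s is slow enough: perceived correctly
fast = apparent_speed(5, 6)  # 5 rev/s aliases to roughly -1 rev/s (backward)
```

With 6 samples per second, only rotation speeds below 3 rev/s survive sampling unchanged; the 5 rev/s wheel lands on the same sampled positions as a wheel turning backward at 1 rev/s.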

Nyquist-Shannon Sampling Theorem The Nyquist-Shannon sampling theorem tells us that a periodic signal can be properly reconstructed from equally-spaced samples if the sampling rate is greater than twice the frequency of the highest-frequency component in its spectrum. This lower bound on the sampling rate is known as the Nyquist rate. [Figure: a sine wave sampled at too low a rate; the curve reconstructed from the sample points is an aliased sine wave of much lower frequency.] The phenomenon of high frequencies masquerading as low frequencies in the reconstructed signal is aliasing: the high-frequency components of the original signal appear as lower-frequency components in the reconstructed signal.
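The masquerading of frequencies can be verified numerically. In this sketch (function name sample is ours), a 5 Hz sine sampled at only 6 Hz, well below its Nyquist rate of 10 Hz, yields exactly the same sample values as a sine of frequency -1 Hz:

```python
import math

def sample(freq_hz, rate_hz, n):
    """Sample sin(2*pi*freq*t) at n equally spaced instants t = k / rate."""
    return [math.sin(2.0 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

# 5 Hz sampled at 6 Hz is indistinguishable from -1 Hz sampled at 6 Hz:
high = sample(5, 6, 12)
low = sample(-1, 6, 12)
assert all(abs(a - b) < 1e-9 for a, b in zip(high, low))
```

Since 5 = 6 - 1, each sample of the 5 Hz sine coincides with a sample of a 1 Hz sine running backward, which is precisely the aliasing predicted by the theorem.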

Anti-Aliasing: Area Sampling Area sampling (aka prefiltering), one of the more prominent anti-aliasing methods, attempts to assign an intensity to a pixel that depends on the percentages of the pixel's area covered by the objects. The actual pixel intensity is obtained as a weighted average of the object intensities, using the overlap areas as weights.

Pixel   Coverage (intensity weight)
(1,0)   10%
(1,1)   20%
(2,1)   80%
(3,1)   40%

Despite several advances, prefiltering is rarely used since it tends to be computationally expensive.
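The coverage-weighted average can be sketched as follows (helper name pixel_intensity is ours; coverages are fractions of the pixel area, as in the table above):

```python
def pixel_intensity(coverages_and_intensities, background=0.0):
    """Area sampling (prefiltering): the pixel value is the coverage-weighted
    average of the object intensities; any uncovered area contributes the
    background intensity."""
    covered = sum(c for c, _ in coverages_and_intensities)
    assert 0.0 <= covered <= 1.0
    value = sum(c * i for c, i in coverages_and_intensities)
    return value + (1.0 - covered) * background

# An edge covering 80% of a pixel, drawn white (1.0) on black (0.0),
# yields a gray proportional to the coverage:
gray = pixel_intensity([(0.8, 1.0)])  # 0.8
```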

Anti-Aliasing: Supersampling In supersampling (aka postfiltering) more than one sample per pixel is evaluated. In practice, supersampled images are computed by applying standard image-generation techniques at an n-times increased resolution, and by obtaining the value of an actual pixel as the (weighted) average of its corresponding n² supersampled pixels. This method works well with most computer-graphics images and is easily integrated into a depth-buffer algorithm. Note that n × n supersampling increases the number of samples and the image-generation time by a factor of n²! Main drawbacks: Non-adaptive supersampling does not work with images whose spectral energy does not fall off with increasing frequency. (Texture rendered in perspective is a common example of an image that does not exhibit a falling spectrum with increasing spatial frequency.) Blurring effects occur because information is integrated from a number of neighboring samples. The fixed, regular grid used for supersampling might create a variety of new aliasing artifacts in an image. Variants: adaptive supersampling and stochastic/jittered supersampling.
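The averaging step can be sketched as a box-filter downsample of an n-times oversampled image (helper name downsample is ours; a weighted filter would replace the plain average):

```python
def downsample(image, n):
    """n x n supersampling with a box filter: each output pixel is the
    unweighted average of its n*n high-resolution samples."""
    h, w = len(image), len(image[0])
    assert h % n == 0 and w % n == 0
    out = []
    for y in range(0, h, n):
        row = []
        for x in range(0, w, n):
            block = [image[y + dy][x + dx] for dy in range(n) for dx in range(n)]
            row.append(sum(block) / (n * n))
        out.append(row)
    return out

# A hard vertical edge rendered at 2x resolution averages down to 2x2:
hires = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
lores = downsample(hires, 2)  # [[0.0, 1.0], [0.0, 1.0]]
```

Here the edge happens to fall exactly between output pixels; shift it by one sample and intermediate gray values appear, which is the desired anti-aliasing effect.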

Anti-Aliasing: Filters Rather than combining samples with an unweighted average one might use a weighted filter: (a) box filter (unweighted average), (b) Bartlett filter (linearly weighted average), (c) Gaussian filter.
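As an illustration of a linearly weighted filter, the following sketch builds a normalized Bartlett kernel as the outer product of a triangle profile (helper name bartlett_weights is ours):

```python
def bartlett_weights(size):
    """Linearly weighted (Bartlett/triangle) filter kernel of odd size,
    normalized so that all weights sum to 1."""
    assert size % 2 == 1
    half = size // 2
    # Triangle profile, e.g. [1, 2, 1] for size 3:
    line = [half + 1 - abs(i - half) for i in range(size)]
    total = sum(line) ** 2
    return [[a * b / total for b in line] for a in line]

kernel = bartlett_weights(3)
# [[0.0625, 0.125, 0.0625], [0.125, 0.25, 0.125], [0.0625, 0.125, 0.0625]]
```

The center sample gets the largest weight (1/4 here) and the corners the smallest, in contrast to the box filter, which weights all samples equally.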

Anti-Aliasing: Sample Images [Image credit: SIGGRAPH Educator's Slide Sets.]


Adding Surface Details All the shading techniques dealt with so far produce uniform and smooth surfaces, in sharp contrast to real-world surfaces. The simplest approach to adding gross detail is to use so-called surface-detail polygons: every surface-detail polygon is coplanar with its base polygon, and is marked in order to exclude it from hidden-surface removal. As details become finer and more intricate, explicit modeling with polygons or other geometric primitives becomes less feasible. Catmull (1974) suggested as an alternative to map an image, either digitized or synthesized, onto a surface. This approach is known as texture mapping (or pattern mapping).

Adding Surface Details Even rudimentary textures make an image much more pleasing and convey additional information! (Both images are based on the same number of polygons.) [Image credit: SIGGRAPH Educator's Slide Sets.]

Dimensionality of Texture Space The process of mapping a texture onto an object is called texture mapping. The texture space can be one-dimensional, two-dimensional, or three-dimensional. [Figure: texture coordinates (u), (u, v), or (u, v, w) are mapped to object coordinates (x, y, z), which in turn are mapped to screen coordinates (x, y).] For the sequel, except for the slides on solid texturing, we will focus on a 2D texture space.

Texture Map The 2D image that is mapped onto an object is called a texture, and its individual elements are often referred to as texels. A one-to-one mapping between pixels and texels need not exist! [Figure: a single pixel in the screen domain covering several texels in the texture domain.]

Texture Mapping Caveats A simple-minded approach to texture mapping assigns texture coordinates to vertex coordinates and uses a sweep line (scan line) for the interpolation of the texture coordinates. This approach results in horrible anomalies, e.g., the bent checkerboard. An improved approach performs the interpolation in texture space and screen space in parallel. This approach avoids bent lines, but still gives way to incorrectly spaced lines, i.e., to incorrect perspective views.

Texture Mapping Caveats The bent checkerboard is caused by the scan-line interpolation (in image space) of the texture coordinates assigned to the vertices of the quadrilateral (in object space). [Figure: lines of constant u and lines of constant v, with u, v ∈ {0, 1} at the corners of the quadrilateral, bending relative to the direction of the scan line.]

Correct Texture Mapping The only remedy applicable is to determine the texture coordinates explicitly for every pixel via transformations from screen space to object space, and from object space to texture space. [Figure: the four corners of a pixel on the screen are mapped onto the surface of the object and from there into the (u, v) texture map.]

Texture Mapping Problems: Fixed Resolution A major problem of texturing is that the texture space has a fixed resolution! Too small a resolution causes zooming to result in poor-quality images. Too high a resolution causes blurring and other aliasing problems, and is a source of computational inefficiency. What is a good resolution? Even if the resolution of the texture space has been chosen judiciously, an extremely large number of texels may have to be weighted and summed just to texture a single pixel. This phenomenon may arise when a large number of texels maps onto a surface, but the projection of the surface in screen space is small, either because of its depth or because of its orientation with respect to the viewing direction.

Texture Mapping Problems: Inverse Mapping In general, the inverse mapping from the screen to the object surface and on to the texture space is highly non-trivial. This is particularly true for curved objects. Two approaches are commonly used: Unfolding the polygon mesh: the dimensionality is reduced from 3D to 2D by unfolding adjacent polygons, thus generating a flat polygonal mesh which is easier to project into the texture space. Two-part mapping: the 3D object surface is mapped to an intermediate surface (such as a cylinder), and then into the texture space. For texturing a parametric surface its parametric representation can be employed. Note, though, that parametric mapping does not take care of perspective foreshortening! [Image credit: SIGGRAPH Educator's Slide Sets.]

Texture Mapping Problems: Aliasing Even if the inverse mapping is performed accurately, serious aliasing errors may occur, and, in the worst case, may ruin the visual appearance of a textured object. [Image credit: SIGGRAPH Educator's Slide Sets.]

Texture Mapping Problems: Aliasing Aliasing is due to the point-sampling problem in texel space and due to perspective foreshortening. Typically, all texels that correspond to the area of a pixel are summed by applying a weighting process (such as a box filter). However, neighboring pixels need not map to neighboring texels, and information of the texture map may be lost. This is particularly problematic if the texture contains thin curves or small details. Also, simple box filtering is not sufficient in the case of perspective foreshortening because texels in the back have the same weight as texels in the front. Such aliasing artifacts are particularly noticeable for textures that exhibit coherence or regularity (e.g., a checkerboard). Correct filtering of non-linearly mapped areas would require space-variant filters, i.e., filters whose shape and area change as they move across the texture domain. However, prefiltering, supersampling and mipmapping go a long way to reduce aliasing artifacts.

Texture Mapping Problems: Aliasing Prefiltering and supersampling help to cope with the point-sampling problem in texel space. [Image credit: SIGGRAPH Educator's Slide Sets.]

Mipmapping Mipmapping computes and stores textures at diverse resolutions. [Image credit: SIGGRAPH Educator's Slide Sets.]

Mipmapping To compute the color of a pixel, we determine the number of texels that correspond to the pixel in the original texture map, find the two texture maps closest in size, and average the pixel colors obtained from those two texture maps. [Image credit: SIGGRAPH Educator's Slide Sets.]
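The level-selection step can be sketched as follows. This is an illustration under our own conventions (helper name mip_levels is ours; real implementations compute the compression factor from screen-space texture-coordinate derivatives): level d of the pyramid covers 2^d × 2^d texels per pixel, so the ideal fractional level follows from the texel count, and the two bracketing integer levels are blended.

```python
import math

def mip_levels(texels_per_pixel):
    """Pick the two mipmap levels bracketing the ideal one. Level d of the
    pyramid corresponds to 2^d x 2^d texels per screen pixel, so the ideal
    (fractional) level is log2 of the linear compression factor. The two
    bracketing integer levels are blended with weight 'frac'."""
    d = max(0.0, math.log2(texels_per_pixel) / 2.0)
    lo = math.floor(d)
    return lo, lo + 1, d - lo

# A pixel that maps to roughly 8 x 8 = 64 texels sits exactly on level 3:
levels = mip_levels(64)  # (3, 4, 0.0)
```

Sampling both levels and interpolating with the fractional weight is exactly the averaging of "the two texture maps closest in size" described above (trilinear filtering).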

Solid Texturing For wood, granite, marble, and other natural materials, simply pasting a 2D texture onto the exterior of the object does not give the desired result: the texture does not appear to be seamless!

2D Textures Versus Solid Textures 3D ("solid") textures allow the user to carve an object out of a solid block. [Image credit: SIGGRAPH Educator's Slide Sets.]

Solid Texturing In order to avoid gaps, and also to circumvent the mapping problem, a 3D texture space can be employed. Imagine that a texture value exists at every point in 3D object space. We may then assume that the texture coordinates of a point on a 3D surface are given by the identity mapping. The color of the object is determined by the intersection of its surface with the predefined 3D texture field of the block. This is equivalent to sculpting or carving an object out of a solid block of material. A major advantage of the elimination of the mapping problem is that objects of arbitrary complexity can receive a texture on their surface in a coherent fashion. In an animated sequence, the texture space has to be transformed in the same way as the object space. (The incorrect approach of moving the object through the texture space can produce unique visual effects, though.) A digitizing approach is not applicable for generating solid textures due to memory constraints. Typically, 3D textures are generated procedurally. A 3D texture can also be generated by sweeping a 2D texture through 3D space.

Sample Solid Texture: Marble Marble textures are typically generated as procedural textures, based on noise functions, e.g., Perlin noise [1982] or Perlin simplex noise [2001]. [Image credit: SIGGRAPH Educator's Slide Sets.]

Sample Solid Texture: Perlin Noise Initialization: Allocate a regular 3D grid within [0, 1]³ or within [−1, 1]³. Assign a random unit vector ("gradient vector") to each node of the grid. Noise function: Determine the (cubic) grid cell that contains a texture point p := (u, v, w). For each node of that cell: 1. Compute the distance vector between p and the node. 2. Compute the dot product between the distance vector and the corresponding gradient vector. Compute a trilinear interpolation of the eight values obtained. [Figure: standard and fractional Perlin noise. Image credit: Wikipedia.]
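The scheme above can be sketched in 2D for brevity (one square cell, four corners, bilinear interpolation; the 3D version interpolates eight corner values trilinearly). Function names are ours, and plain (bi)linear interpolation is used as on the slide, whereas full Perlin noise applies a smoother fade curve:

```python
import math
import random

def make_gradient_grid(nx, ny, seed=0):
    """Random unit 'gradient' vector at every node of an (nx+1) x (ny+1) grid."""
    rng = random.Random(seed)
    def unit():
        a = rng.uniform(0.0, 2.0 * math.pi)
        return (math.cos(a), math.sin(a))
    return [[unit() for _ in range(ny + 1)] for _ in range(nx + 1)]

def gradient_noise(grid, u, v):
    """Simplified Perlin-style noise at (u, v) in [0,1]^2: dot products of
    distance vectors with the node gradients, then bilinear interpolation."""
    nx, ny = len(grid) - 1, len(grid[0]) - 1
    x, y = min(u * nx, nx - 1e-9), min(v * ny, ny - 1e-9)
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    def dot(ci, cj, dx, dy):
        gx, gy = grid[ci][cj]
        return gx * dx + gy * dy
    n00 = dot(i,     j,     fx,       fy)
    n10 = dot(i + 1, j,     fx - 1.0, fy)
    n01 = dot(i,     j + 1, fx,       fy - 1.0)
    n11 = dot(i + 1, j + 1, fx - 1.0, fy - 1.0)
    top = n00 * (1 - fx) + n10 * fx
    bot = n01 * (1 - fx) + n11 * fx
    return top * (1 - fy) + bot * fy

grid = make_gradient_grid(8, 8, seed=42)
# The noise vanishes at every grid node (the distance vector is zero there):
assert abs(gradient_noise(grid, 0.0, 0.0)) < 1e-9
```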

Sample Solid Texture: Black&White Again assign random numbers to the vertices of a regular (coarse) grid. Obtain intermediate texture values by (linear) interpolation. If a texture value is above a threshold then paint the pixel white, else black. [Image credit: SIGGRAPH Educator's Slide Sets.]
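A minimal sketch of this threshold texture (2D for brevity; the helper name bw_texture and the default parameters are ours):

```python
import random

def bw_texture(width, height, coarse, threshold=0.5, seed=1):
    """Black-and-white texture as described on the slide: random values on a
    coarse grid, bilinear interpolation in between, then thresholding."""
    rng = random.Random(seed)
    node = [[rng.random() for _ in range(coarse + 1)] for _ in range(coarse + 1)]
    img = []
    for y in range(height):
        row = []
        for x in range(width):
            gx, gy = x / width * coarse, y / height * coarse
            i, j = int(gx), int(gy)
            fx, fy = gx - i, gy - j
            v = (node[j][i]         * (1 - fx) * (1 - fy) +
                 node[j][i + 1]     * fx       * (1 - fy) +
                 node[j + 1][i]     * (1 - fx) * fy +
                 node[j + 1][i + 1] * fx       * fy)
            row.append(1 if v > threshold else 0)  # white = 1, black = 0
        img.append(row)
    return img

img = bw_texture(16, 16, coarse=4)
```

Because the interpolation is smooth and only the thresholding is hard, the white regions form connected blobs instead of per-pixel noise.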

Sample Solid Texture: Wood Grain Wood grain can be simulated by a set of concentric cylinders, whose reference axis is, in general, tilted with respect to a reference axis of the object. The texture field is given by a modular function of the radius, returning a color for texture-space coordinates (u, v, w):

r := sqrt(u² + v²)
α := arctan(u / v)
r := r + 2 sin(20α + w / 150)
c := r mod 60

Assign a dark brownish color if c > 40, and a light color otherwise.
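The formulas above translate directly into code. This sketch (helper name wood_color is ours) uses atan2 for the arctangent so that v = 0 is handled robustly; the coordinates are assumed to be in texel units matching the period of 60:

```python
import math

def wood_color(u, v, w):
    """Wood-grain solid texture per the slide: concentric cylinders around
    the w-axis, a sinusoidal perturbation of the radius, and a modular
    function of the perturbed radius selecting between two colors."""
    r = math.sqrt(u * u + v * v)
    alpha = math.atan2(u, v)                    # arctan(u / v), robust at v = 0
    r += 2.0 * math.sin(20.0 * alpha + w / 150.0)
    c = r % 60.0
    return "dark" if c > 40.0 else "light"

shade = wood_color(45.0, 0.0, 0.0)  # radius 45 falls into the dark band
```

Evaluating this at every surface point of an object yields the concentric ring pattern seen in the wood-grain images.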

Example: Wood Grain [Image credit: SIGGRAPH Educator's Slide Sets.]

Example: Wood Grain

Bump Mapping Texture mapping affects the shading of a surface, but the surface will still look smooth: it changes the color, diffuse, and specular reflection properties, but it does not change the surface normal. How can we make a surface look rough? If a photograph of a rough surface is used as a texture map then the shaded surface will not look quite right, because the direction to the light source used to create the texture map is likely different from the direction to the light source illuminating the surface. Blinn's bump mapping (1978) is a way to provoke the appearance of a rough surface geometry that avoids explicit geometric modeling. It involves perturbing the surface normal before it is used in the illumination model, just as a slight roughness in a surface would perturb the surface normal in the real world. A bump map is an array of displacements, each of which can be used to simulate displacing a point on a surface a little above or below that point's actual position.

Bump Mapping [Figure: the original surface; a bump map; the simulated displacement of the surface; the normal vectors to the new surface.]

Bump Mapping for a Parametric Surface Let P(s, t) = [x(s, t), y(s, t), z(s, t)] be a parametric surface. To compute its surface normal at a point P(s, t), we compute

P_s(s, t) = ∂P(s, t)/∂s,
P_t(s, t) = ∂P(s, t)/∂t,
N(s, t) = P_t(s, t) × P_s(s, t),
n(s, t) = N(s, t)/|N(s, t)|.

Let B(s, t) be the bump map value that will be applied at P(s, t). (For simplicity, we assume that the bump map is also parameterized over s, t.) We add this amount in the direction normal to P(s, t):

P′(s, t) = P(s, t) + B(s, t) n(s, t).

Bump Mapping for a Parametric Surface Compute the partial derivatives of the altered surface P′(s, t) = P(s, t) + B(s, t) n(s, t):

P′_s(s, t) := ∂P′(s, t)/∂s = P_s(s, t) + B_s(s, t) n(s, t) + n_s(s, t) B(s, t),
P′_t(s, t) := ∂P′(s, t)/∂t = P_t(s, t) + B_t(s, t) n(s, t) + n_t(s, t) B(s, t).

Blinn showed that a good approximation to the new (unnormalized) normal N′ is obtained by ignoring the last term in each partial derivative and by taking their cross-product. (Recall that (A + B) × (C + D) = A × C + A × D + B × C + B × D.)

N′(s, t) := P′_t(s, t) × P′_s(s, t)
≈ [P_t(s, t) + B_t(s, t) n(s, t)] × [P_s(s, t) + B_s(s, t) n(s, t)]
= P_t(s, t) × P_s(s, t) + B_s(s, t) (P_t(s, t) × n(s, t)) + B_t(s, t) (n(s, t) × P_s(s, t)) + B_s(s, t) B_t(s, t) (n(s, t) × n(s, t))
= N(s, t) + B_s(s, t) (P_t(s, t) × n(s, t)) − B_t(s, t) (P_s(s, t) × n(s, t)).

Computing the Altered Surface Normal By dropping the parameters, we obtain the more concise formula

N′ = N + B_s (P_t × n) − B_t (P_s × n).

N′ is then normalized and substituted for the true surface normal in the illumination equation. Note that we do not actually compute the altered surface: it suffices to compute only (an approximation of) the altered normal!
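Blinn's formula can be sketched directly in code (helper names cross, normalize, bump_normal are ours; the orientation of N follows the slides' convention N = P_t × P_s):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(a):
    l = math.sqrt(sum(x * x for x in a))
    return tuple(x / l for x in a)

def bump_normal(P_s, P_t, B_s, B_t):
    """Blinn's perturbed normal N' = N + B_s (P_t x n) - B_t (P_s x n),
    with N = P_t x P_s and n = N / |N|. P_s, P_t are the surface's partial
    derivatives and B_s, B_t those of the bump map, all at the same (s, t)."""
    N = cross(P_t, P_s)
    n = normalize(N)
    a = cross(P_t, n)
    b = cross(P_s, n)
    return normalize(tuple(N[k] + B_s * a[k] - B_t * b[k] for k in range(3)))

# A flat patch (P_s along x, P_t along y) with a bump sloping in s:
# the returned unit normal is tilted away from the unperturbed normal.
n_tilted = bump_normal((1, 0, 0), (0, 1, 0), 0.5, 0.0)
```

With B_s = B_t = 0 the function returns the unperturbed normal, confirming that the perturbation vanishes where the bump map is flat.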

Sample Bump Map

Bump Mapping: Embossing Embossing, a technique borrowed from 2D image processing, can be used to achieve a chiseled look on 3D surfaces. [Image credit: SIGGRAPH Educator's Slide Sets.] Varying the surface normal on a per-pixel basis tends to be too time-consuming for real-time imaging. Real-time embossing avoids the modification of surface normals by using, e.g., the so-called dot-product method.

Displacement Mapping A method similar to bump mapping for adding wrinkles to a surface is displacement mapping. Displacement mapping is applied to a surface by first dividing the surface up into a mesh of coplanar polygons. The vertices of these polygons are then perturbed according to the displacement map. The resulting model is then rendered with any standard polygon renderer. Displacement mapping can be used to convert the visual appearance of a cylinder into a screw. However, to achieve a fine resolution in the texture of the wrinkles, the additional polygons get ever smaller and more numerous, placing a tremendous burden on the renderer.
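The vertex-perturbation step can be sketched as follows (helper name displace is ours; per-vertex normals and sampled displacement-map heights are assumed to be given):

```python
def displace(vertices, normals, heights):
    """Displacement mapping: unlike bump mapping, the geometry itself is
    altered -- every mesh vertex is moved along its normal by the value
    sampled from the displacement map at that vertex."""
    return [tuple(v[k] + h * n[k] for k in range(3))
            for v, n, h in zip(vertices, normals, heights)]

# A flat quad in the xy-plane, displaced upward at two of its vertices:
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
up = [(0, 0, 1)] * 4
wrinkled = displace(quad, up, [0.0, 0.2, 0.0, 0.2])
```

Since the vertices really move, the displaced surface casts correct silhouettes and shadows, which bump mapping cannot provide.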

Displacement Mapping Displacement mapping does alter the object's geometry! [Image credit: SIGGRAPH Educator's Slide Sets.]

Reflection Mapping Reflection mapping (aka "environment mapping") refers to the process of reflecting the surrounding environment on a shiny, reflective object without resorting to ray tracing (or similar means). Let V′ be the reflection vector of the viewer direction V for a particular point P on the surface of the reflective object. The intersection of V′ with a surface that contains an image of the environment to be reflected, such as the interior of a sphere surrounding the object, gives the shading attributes for the point P on the object surface. [Figure: viewer direction V, surface normal N, and reflected vector V′ at point P of the reflective object, hitting the environment sphere E.]
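The reflection vector is obtained by mirroring the view direction about the surface normal, V′ = V − 2 (V · N) N for a unit normal N. A minimal sketch (helper name reflect is ours):

```python
def reflect(V, N):
    """Mirror the view direction V about the unit surface normal N:
    V' = V - 2 (V . N) N. V' is then used to look up the environment map."""
    d = sum(v * n for v, n in zip(V, N))
    return tuple(v - 2.0 * d * n for v, n in zip(V, N))

# A ray coming straight down onto a floor with normal (0, 0, 1)
# bounces straight back up:
up = reflect((0, 0, -1), (0, 0, 1))  # (0.0, 0.0, 1.0)
```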

Reflection Mapping In practice, four rays through the pixel corners define a reflection cone with a quadrilateral cross-section. The region that it subtends in the environment map is then filtered to give a single shading attribute for the pixel. [Figure: four rays from the view point through a pixel are reflected off the surface and subtend an area in the environment map.]

Reflection Mapping: Cube Mapping Nowadays, the environment is mostly mapped onto a cube ("cube mapping") or some other polyhedral object, e.g., a sky box. [Image credit: Wikipedia.]
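For a cube map, the face hit by a (reflection) direction is determined by the coordinate of largest absolute value; the remaining two coordinates, divided by it, give the texture coordinates on that face. A sketch of the face selection (helper name cube_face is ours):

```python
def cube_face(direction):
    """Select the cube-map face hit by a direction vector: the face is
    determined by the coordinate with the largest absolute value."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

face = cube_face((0.2, -0.9, 0.3))  # the -y face dominates this direction
```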

Reflection Mapping: Sample Cube Maps [Image credit: Wikipedia.]

Problems with Reflection Mapping A practical difficulty is the production of the environment map: predistorting a (photographed) environment to fit the interior surface of a sphere is difficult, and storing a reflection on the six sides of a cube requires tricks to produce seamless reflections. A general disadvantage is that reflection mapping is geometrically accurate only for (rather) small reflective objects located at the center of the surrounding environment sphere/cube. As the object size becomes large with respect to the environment sphere/cube, reflections tend to appear in the wrong place on the reflective object. If the reflective object is positioned away from the center then the geometric distortion increases. Also, reflection mapping works well only if the reflective object is mostly convex. (A non-convex object does not show any self-reflections in the mapped reflection!)

The End! I hope that you enjoyed this course, and I wish you all the best for your future studies. UNIVERSITÄT SALZBURG Computational Geometry and Applications Lab