Spatial Dynamic Media Systems

Spatial Dynamic Media Systems: Amalgam of Form and Image through Use of a 3D Light-point Matrix to Deliver a Content-driven Zone in Real-time

Matthias Hank Haeusler
http://www.sial.rmit.edu.au/people/hhaeusler.php
matthias.haeusler@ems.rmit.edu.au

The core project of this paper is the development of a system that allows me to test the representation of information and ideas as form within space that is constantly generated and regenerated as a result of fresh input. The hypothesis is that this real-time configuration of space using light offers a variety of new perceptions, ranging from information sharing to public art, never experienced previously. In this paper I consider the technical and media implications of extending conventional 2D screens, which are currently limited to architectural cladding, into a 3D matrix, thereby inducing an alteration in spatial perception via the content animating the 3D matrix. The content is the result of an information injection derived from sensors of whatever kind, where data have been captured and translated into a digital signal.

Keywords: Interactivity; content; real-time; spatial.

Introduction

I draw from experience in practice when applying media technology to built projects, as well as from academic discourse through my PhD research. I have undertaken four projects to develop and test the system presented in this paper. I have developed a prototype system which, with the use of a 3D light-point matrix, alters space. The prototype system has then been applied in an architectural context and tested by applying designed content to it. Lastly, I have experimented with writing software applets for a third party to adapt their own input for display purposes. I am interested in understanding more about the significance for architecture of applying a system that allows a modification of a zone through delivered content.
Within this zone I wish to investigate the possibility of filling a gap that exists in the discussion about architecture when shifting from an autoplastic to an alloplastic substance. Zone here refers to a defined location in which modifications of space, shape, image and form will happen, delivered through a system introduced later. By autoplastic and alloplastic I refer to an article published by Mark Goulthorpe (1999), where he defines autoplastic as a self-determined operative strategy and alloplastic as a reciprocal environmental modification. The project draws from different examples of existing buildings which are used to demonstrate how architectural space has been transformed through the participation of the public. All these projects create an alloplastic substance either through an alteration of form or through an alteration of image. None of them seeks an alteration of space through both form and image. This is the gap I wish to fill by creating an alloplastic substance through an amalgam of form and image.

Session 02: Virtual Environments - ecaade 25 69

Premises - Search for an interactive 3D dynamic system

Three discrete considerations are required in order that a 3D light-point matrix system achieves its full potential and allows an alteration of space through form and image.

Firstly, I am investigating how forms are perceived and therefore whether they can be defined with the use of light. The illumination of the city gives the beholder a notion of space at night, where one can locate the city and its buildings in relation to one's own position; something one could not do if there were no lit forms and therefore no possibility to define space with the use of light.

Secondly, I want to investigate the potential of existing media technologies beyond their typical application in architecture. Currently the typical application can only offer the display of apparently 3D images and forms, but these 3D forms are never actually 3D but 2D. The crucial reason for this is the 2D nature of the display. Extended beyond their typical 2D application set to give an illusion of 3D is my proposed zone, built with a 3D grid of light points, each with X, Y, Z coordinates, with LEDs placed at each intersection of the grid. This zone will be created by a number of LED sticks attached at a 90 degree angle to the wall. Each of these sticks will contain an array of LEDs arranged at certain distances from each other along the length of the stick. Through its non-physical existence, this zone allows modifications of space, shape, image and form within its spatial boundaries over time.

Thirdly, I want to investigate the modes of representation through which content could drive such a system in real time.
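The 3D grid described in the second consideration, LED sticks mounted perpendicular to a wall with LEDs spaced along each stick, can be sketched as a simple coordinate generator. The stick counts and spacings below are illustrative assumptions, not measurements taken from the prototype:

```python
# Sketch of the 3D light-point matrix: LED sticks mounted at 90 degrees
# to a wall (the X/Y plane), with LEDs spaced along each stick (the Z axis).
# All counts and spacings are illustrative assumptions.

def build_matrix(sticks_x=10, sticks_y=10, leds_per_stick=8,
                 spacing_xy=0.25, spacing_z=0.15):
    """Return a list of (x, y, z) coordinates, one per light point."""
    points = []
    for i in range(sticks_x):                # stick position along the wall
        for j in range(sticks_y):            # stick position up the wall
            for k in range(leds_per_stick):  # LED position along the stick
                points.append((i * spacing_xy, j * spacing_xy, k * spacing_z))
    return points

matrix = build_matrix()
print(len(matrix))  # 10 * 10 * 8 = 800 light points
```

Each tuple is one addressable light point in the zone; content is then a question of deciding which of these points to light, and in what colour, at each moment.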
Three modes of representation are currently used to display content on screen: pre-recorded content; live content; and living content, that is, content altered and defined by its environment in real-time. Sensors are able to capture what happens in our environment and translate it into electronic data, which could then function as a digital information injection for the zone in order to define the zone's movement and its form in real-time. My research will go further and take any of these modes of representation into the 3D light-point matrix to deliver a zone in real-time.

Steps towards a realization of the system with regard to product design

Four series of experiments have been undertaken to create the foundation to develop an industrially designed product. These steps are as follows:

Experiment Series One: Defining space with light points

The first experiment answers the primary question of whether space can be defined by a light-point matrix. Here a model in the form of a net, with lights attached at the crossing points, was built and filmed. The net model was moved, and thus a moving surface created by light points only could be perceived (see figure 1).

Figure 1: Defining space with light points

Experiment Series Two: Qualities of LEDs regarding brightness and perception

Based on this general research into the technology of the electronic LED, more specific research has been conducted which investigated the qualities of LEDs regarding brightness and perception. Here an LED for further experiments was chosen; the chosen LED was then used in a bigger set-up to conduct the series of experiments described in the next experiment series.

Experiment Series Three: Defining a surface in a set-up model and viewing the surface from different angles

In the third experiment a physical model of 10 * 10 LEDs was built, with each LED in an acrylic tube as a substructure. The idea of LED sticks had its origin in the previous research, when the LEDs had to be held in place to be analyzed and were therefore embedded into an acrylic substructure. The term LED stick will be used in the remainder of this text for a single component of the spatial dynamic media system; it is also the intended final result of this research project, where a prototype of an array of LEDs embedded in an acrylic stick should be developed. The LEDs were placed in this array of acrylic tubes in such a way that they apparently defined a surface, which will be analyzed in the subsequent paragraph of this paper. The first research experiment with the model was therefore: can one perceive a surface created by light points if parts of the lights are masked by a substructure? For an understanding of how much masking is created by the substructure and the LEDs, a picture of the model with the LEDs in the acrylic tubes was taken (see figure 2); as a next step, one or more LEDs were moved to a new position and another picture taken (see figure 3). With this sequence of pictures a stop-motion animation could be created to give the viewer a clear understanding of the extent to which the surface defined by the light points could be perceived. This stop-motion animation has been made from one fixed position only.
The next research experiment using the model asked from which angles/positions the beholder can see the full surface (all lights), or can see a sufficient number of lights to understand the form or shape of the surface. Here pictures of the model were taken from different points and analyzed to answer the question raised above. Both of these experiments showed that there is a problem of masking by the LEDs or the substructure. The following possible steps towards solving these problems have been proposed to develop the LED stick further. Improvements could be achieved by using a water-clear LED, or by using SMD (Surface Mounted Device) LED technology and arranging more than one SMD LED to create a light point. Not only do the LEDs cause masking; the cables required for the LEDs are also a major source of masking, and this could be improved with conductor layers, a technology applied in other applications. Lastly, the LEDs could be placed in small chambers instead of tubes to define a clearer light point.

Figure 2: Picture of model set-up
Figure 3: Picture of surface defined by light points

Experiment Series Four: Reflection of tubes

Lastly, reflection of the tubes has been investigated as a matter that could obstruct the image from the viewer's point of view. This leads into the final experiment, in which a designed product based on the previous experiments is built.

Design of a product based on previous experiments

This product, the LED stick, is the result of the considerations and experiments documented previously in this paper; the LED stick is completed by considering how it could be mounted onto a surface when applied in architecture. More than one of these LED sticks should then create a system which would allow the creation of a zone in which content could be displayed in 3D. Issues such as exchanging a defective LED stick on a façade, or how a corner could be defined with the LED stick, have played a role in the further design of the prototype (see figures 4 and 5).

System to content connection through a digital injection

The proposed system as such would need content in order to display anything within the zone. How could content be fed into the zone? What kinds of input could be captured and then translated into light points which will then generate a zone? Various kinds of information which can be stored in digital form, e.g. in an Excel data sheet, could be used as input to the system. The light points which will generate the zone will also have zeros and ones as input, via a hardware device that will give the LEDs the on/off and colour information they need. In the following section I will discuss some input options to give an understanding of what kind of content could be displayed within the zone.

Sensor and instrument input

Michelle Addington and Daniel Schodek (2005) state that the term sensor derives from the word sense, which means to perceive the presence or properties of things. A sensor is a device that detects or responds to a physical or chemical stimulus (e.g., motion, heat, or chemical concentration).
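A minimal sketch of such a digital injection, translating a single sensor reading into the per-LED on/off bits and colour values described above, might look like the following. The threshold, the colour ramp and the idea of lighting LEDs outwards from the wall in proportion to the reading are all invented for illustration:

```python
# Sketch: translate one sensor reading (e.g. motion intensity, 0.0 to 1.0)
# into an on/off bit and an RGB colour for each LED in one stick.
# Threshold and colour ramp are illustrative assumptions.

def inject(reading, leds_per_stick=8, threshold=0.1):
    """Return a list of (on, (r, g, b)) tuples, one per LED.

    LEDs light up from the wall outwards in proportion to the reading,
    so a stronger stimulus pushes the lit surface deeper into the zone.
    """
    lit = round(reading * leds_per_stick)  # how many LEDs to switch on
    frame = []
    for k in range(leds_per_stick):
        on = k < lit and reading >= threshold
        colour = (255, int(255 * (1 - reading)), 0) if on else (0, 0, 0)
        frame.append((int(on), colour))
    return frame

print(inject(0.5))  # first half of the stick lit, remainder dark
```

A hardware driver would then serialize each frame of bits and colour values to the LED sticks; that layer is deliberately left out of the sketch.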
Addington and Schodek differentiate sensors from instruments or meters, which measure the amount or extent of something in relation to a pre-determined standard or fixed unit of length, mass, time or temperature. Other forms of capturing data could include ways of recording audiovisual content with devices such as cameras, microphones, etc. - all devices that are capable of providing data which could then be used to run other electronic devices, such as a system that creates a zone by means of a 3D light-point matrix. I do not want to list all the different sensors mentioned in the book Smart Materials and New Technologies by Addington and Schodek; rather, to give an understanding of what kind of information could be captured, I want to choose one form of input and define how this input could feed into the system and what the spatial representation of the data could look like. The chosen input is part of my research into the question of creating an amalgam of form and image.

Figure 4: Picture of LED stick, a product based on previous experiments
Figure 5: Picture of LED stick, a product based on previous experiments

Video or image input

In the following I want to demonstrate how a fixed or moving image, as a possible form of input, could be altered to a three-dimensional surface within the zone. Concentrating on the modus operandi of digitally generated movies, the research is interested in the condition of each single pixel within a single frame of a movie clip. Each movie will contain a number of frames, and each single frame has, depending on the resolution, a certain number of pixels. Each pixel then stores particular colour information. So during the movie the colour information of a pixel in a particular position will change. The array of many pixels and their constant change in colour is what generates the movement in a movie. This movement was then used to generate movement in the zone, by equating the RGB value of the pixel with a spatial location. The resolution of the frame will therefore determine the size of the surface in the X and Y directions, and the RGB value of each pixel within the frame the Z value of the surface in a three-dimensional grid. To demonstrate this, a movie clip of white clouds moving over a blue sky has been used (see figure 6) and translated into a 3D zone. As the system is still a prototype, the effects have been simulated with a visualization program which allows a projection of the installation in a virtual environment (see figure 7). At this stage the code to generate surface movement can only use pre-recorded videos, due to the need to convert the images into text files manually, but in future developments this will happen automatically.
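The mapping just described, with frame resolution giving the X/Y extent of the surface and per-pixel colour giving the Z value, can be sketched as follows. Using average RGB brightness as the depth value is an assumption; the paper does not specify the exact colour-to-depth formula used in the prototype:

```python
# Sketch: turn one video frame into a 3D surface for the zone.
# Each pixel's position gives X and Y; its colour gives Z.
# Mapping average RGB brightness to depth is an illustrative assumption.

def frame_to_surface(frame, max_depth=1.0):
    """frame: 2D list of (r, g, b) tuples, one row per scanline.
    Returns a list of (x, y, z) points with z scaled to [0, max_depth]."""
    surface = []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            brightness = (r + g + b) / (3 * 255)  # 0.0 (black) to 1.0 (white)
            surface.append((x, y, brightness * max_depth))
    return surface

# A white cloud pixel sits at full depth; a blue sky pixel sits nearer the wall.
frame = [[(255, 255, 255), (60, 120, 255)]]
print(frame_to_surface(frame))
```

Run once per frame, this yields a surface that moves as the clouds move, which is exactly the behaviour simulated in figure 7.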
Pre-recorded content would not allow interaction between the beholder and the zone, but the intention is that a direct response of the zone to an event occurring in front of the surface, or in a remote location, would be possible when using the principle discussed above. Here the behavior of the environment is recorded and then translated into a surface movement based on the change of colours, as described before.

Figure 6: Test movie clip, clouds on a blue sky
Figure 7: Translation of movement in a movie into a spatial movement caused by change of colour information (detailed view)

Significance of the system for architecture

In the following section I want to explain three characteristics of space when using a 3D light-point matrix to deliver a content-driven zone in real-time. A space defined in the zone of a 3D light-point matrix delivers a visual surface but not a haptic surface; but what kind of space is it? The system creates a virtual space only in the sense of the Latin virtualis, that which exists potentially but not actually, and not in the sense of the idea of virtual reality as it is used in current popular culture and mass media. It will be incorporeal.

Anomaly of surfaces in an incorporeal zone

The incorporeal presence of the surface, and its anomalous nature, will open up two new aspects which have not been explored in architecture: firstly, the creation of a multilayered surface, and secondly, the achievement of a decay function. A multilayered surface is possible as a result of seeing through more than one surface: any number of different surfaces can be arranged in layers, one surface behind another, with all of them visible. The number of layered surfaces is only limited by the dimensions of the zone. But not only layering is possible; several surfaces could be interwoven with each other to define a multilayered surface. A decay function of the surface could be achieved by writing a script that incorporates a decay factor on each light point. The light point will not simply be switched on and off; it will decay, to create an after-effect of the façade that has just been, or of the one that will be, as the zone modifies itself through movement from its present state to a future one.

Movement and speed

Movement itself will be the main force in how one perceives this space. Traditionally in architecture the beholder moves through a static space, and the speed of the beholder then has an influence on how that space is perceived. The focus here will turn to a discussion of speed and space, conditioned by the incorporeal nature of the zone. A delivered surface, when not flat, will create a multilayered surface when viewed from a certain angle. When this surface is moved through a digital input, the zone will invert the relation of vision and movement.
The system would, as Brian Massumi said about the Blur Building (http://www.intelligentagent.com/archive/vol5_no2_massumi_markussen+birch.htm, 2006), rather than addressing vision first and using vision to guide movement, as is usually the case, frustrate vision in order to address movement first. Multilayered surfaces have a complexity in which the visual expectation presented by an architectural style no longer exists. Vision will become vague, but processes of cross-modal interaction will become visible through movement.

Design process

Cross-modal interaction will extend the design process when designing space. When designing any kind of object, design allows for any number of possibilities. Each of these variations could be built at some stage, so the designer has to stop the design process at one point and bring the dynamic process of design into a static form, where the ideas freeze into an unchangeable shape. The design process in the architecture of a spatially dynamic media system is divided into two sections. The first section is similar to conventional design: a core is defined and stays static, but it is the core of the building designed for the end-users, and the skin of the core is only designed as a substructure. The substructure, made of LED sticks, hosts the zone where the design can be changed in real time and can be defined by processes of cross-modal interaction or by content. With the possibilities of a spatially dynamic media system, where space will be defined through a 3D light-point matrix, not only could the existing gap in the discussion of architecture when shifting from an autoplastic to an alloplastic substance be filled, but the system has shown that, with attributes of the zone such as those described above, a new dimension for architecture could be introduced: a dimension whereby architecture could pick up on the complexity of private and public, and display these complexities through sophisticated forms taken not from a pre-existing repertoire but from the interactions themselves.
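The decay function mentioned earlier, where a light point fades out rather than switching off and so leaves an after-image of the zone's previous state, could be scripted along these lines. The decay factor of 0.8 is an illustrative assumption:

```python
# Sketch: exponential decay of light-point brightness. Instead of
# switching an LED straight off, each update multiplies its brightness
# by a decay factor, leaving a fading trace of the previous surface.
# The factor 0.8 per update step is an illustrative assumption.

def decay_step(brightness, target, factor=0.8):
    """One update for a single light point.
    target is the brightness the new surface demands (0.0 to 1.0);
    the point never drops below its decayed previous value."""
    return max(target, brightness * factor)

# An LED that was fully lit but is no longer part of the surface:
levels = []
b = 1.0
for _ in range(5):
    b = decay_step(b, target=0.0)
    levels.append(round(b, 3))
print(levels)  # fades out: [0.8, 0.64, 0.512, 0.41, 0.328]
```

Applied across the whole matrix on every frame, this produces the after-effect of the façade that has just been, while points the new surface demands light up immediately.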

Conclusion

The developed system has demonstrated a way to display information and affect at the same time, and the end use of this system could lie in both fields. The main interest of this paper is the feasibility of a system which, through use of a 3D light-point matrix, could deliver a content-driven zone in real-time. One possible way of feeding in content, the translation of a movie into a moving surface, has been presented. When one considers the variety of movie clips that could serve as content to be translated into the surface, and reflects that video input is only one of many possible forms of input, alongside a range of sensors, instruments and other ways of feeding in data, one can begin to understand the potential of such a system. The implications are beyond the scope of this paper, but the system would allow the participation of the public and could show affect as an artistic installation and as a spatial representation of information, one after another or as a mixture of all of them at the same time. In other words, architects along with artists can take the next logical step to consider the invisible space of electronic data flows as substance rather than just as void: something that needs a structure, a politics and a poetics (Lev Manovich, 2006).

Acknowledgements

I would like to acknowledge my PhD supervisor, Professor of Innovation Mark Burry, SIAL / RMIT University, Melbourne, for his help during my research, as well as Dr. Juliette Peers for giving me feedback on the paper. Also Markus Bachmann / SAMBA Photography, Stuttgart, for taking the pictures of the LED stick prototype model in figures 4 and 5.

References

Addington, M. and Schodek, D.: 2005, Smart Materials and Technologies, Architectural Press, London.
Goulthorpe, M.: 1999, Aegis Hypo-surface: Autoplastic to Alloplastic, Architectural Design: Hypersurface Architecture II, Vol. 69, No. 9-10.
Manovich, L.: 2006, The Poetics of Urban Media Surfaces, First Monday, peer-reviewed journal on the internet, http://www.firstmonday.org/issues/special11_2/manovich/index.html.
Massumi, B.: 2006, Transforming Digital Architecture from Virtual to Neuro, an interview with Brian Massumi by Thomas Markussen & Thomas Birch, published online at http://www.intelligentagent.com/archive/vol5_no2_massumi_markussen+birch.htm.
