19th World Conference on Non-Destructive Testing 2016

Smaller Than the Eye Can See: Vibration Analysis with Video Cameras

Oral BUYUKOZTURK 1, Justin G. CHEN 1, Neal WADHWA 1, Abe DAVIS 1, Frédo DURAND 1, William T. FREEMAN 1
1 Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02139, USA
Contact e-mail: obuyuk@mit.edu

Abstract. In the field of non-destructive testing, the tools that we use to conduct inspections have evolved over the years. Originally, inspectors used their senses of sight, hearing, and touch to evaluate structures; now they use equipment such as borescopes, dye penetrant, microphones, and ultrasonic transducers, which augment the human senses and/or make much finer quantitative measurements beyond any human capability. Recent developments in computer vision have created a motion microscope, in which small, imperceptible motions in videos can be magnified and easily seen; this technique is called motion magnification. The method also enables quantitative analysis of structures from video, as the information acquired for motion magnification provides displacement signals that serve as a basis for vibration analysis. In this paper, we present applications of motion magnification to vibration analysis in non-destructive testing. We describe the methodology and theory behind motion magnification, the workflow from video to modal information, and strategies for processing the numerous signals collected from video data. We demonstrate video measurements of small structures and rotating machinery in controlled conditions, and in near real time, extracting displacements from video and performing simple modal and frequency analysis. We also present results of outdoor measurements of civil infrastructure and discuss future applications of the methodology in non-destructive testing.

Introduction

The origins of non-destructive testing (NDT) are as simple as visual and acoustic inspection of objects by one's own eyes and ears. As technology has improved, inspectors have made use of a wide range of measurement tools to augment their senses and quantify the presence or effects of damage on a structure. Developments in computer vision have made it possible to qualitatively and quantitatively analyze small, imperceptible motion in camera videos. This capability allows for passive, non-contact measurement of vibrations and displacements in structures, potentially at low cost, as cameras are less expensive than laser-based, radar, or radiographic techniques. This can help with modal analysis and vibration-based NDT techniques, or with qualitative inspection looking for parts of a structure that vibrate excessively.

License: http://creativecommons.org/licenses/by-nd/3.0/
More info about this article: http://ndt.net/?id=19318

In this paper, we present applications of motion magnification to vibration analysis in NDT and structural health monitoring (SHM). Section 1 describes the methodology behind motion magnification, specifically how videos are processed to measure the vibrations of objects in the video. Several applications of video measurements are presented in Section 2, for small structures, rotating machinery, and civil infrastructure. Section 3 discusses uses and limitations of the methodology, and potential future improvements for NDT applications.

1. Video Processing for Vibration Measurement

This section is a brief summary of how video processing is accomplished for vibration measurement, both for motion magnification to visualize imperceptible motions in videos and for quantitative measurement of displacements and vibrations. Full descriptions of the processing procedures can be found in the cited papers for readers requiring more detail [1-6].

1.1 Motion Magnification

Motion magnification is an algorithm for the amplification of small motions in videos, acting as a sort of motion microscope. The basic principle is to obtain a representation of the video in which signals representing the motions of objects can be processed and reconstructed into a new video where the apparent motion of objects is amplified within a chosen frequency band. The phase-based motion magnification algorithm [1] decomposes the video into local spatial amplitude and phase using a complex-valued steerable pyramid filter bank [7]. The local spatial phase signals, which represent motion, are Fourier decomposed into a series of sinusoids representing harmonic motion in the time domain. The phase signals are then temporally bandpass filtered, amplified, and recombined to form a motion-magnified video. The result is a video in which motion in the specified band of temporal frequencies is amplified and made more visible. With a slightly different processing procedure, this can be run in real time for reasonably sized videos and frame rates [2].

1.2 Displacement Measurement

This procedure can be short-circuited so that the phase information from the video decomposition is used directly to provide displacement information for structures. The local spatial phase can be used to calculate the displacement of objects [8, 9]. Only regions with sufficient local contrast provide accurate motion information, as textureless regions do not provide any information about the motion of the object. Pixels with sufficient local contrast are identified, and a set of representative motion signals is determined for the video. This set of displacement time series can then be used for modal analysis or vibration analysis of an object in the video.
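As an illustration of Sections 1.1 and 1.2, the sketch below shows one simplified way to recover displacement signals from local phase. It is not the authors' released implementation: a single-scale, single-orientation complex Gabor filter stands in for the full complex steerable pyramid [1, 7], only horizontal motion is considered, and names and parameter values such as extract_displacement and wavelength_px are illustrative. For motion magnification itself, the same phase signals would additionally be temporally bandpass filtered, amplified, and used to reconstruct the video; that reconstruction step is omitted here.

import numpy as np

def gabor_kernel(wavelength_px=8, sigma=4, half_width=12):
    # 1D complex Gabor filter tuned to a spatial wavelength (in pixels).
    x = np.arange(-half_width, half_width + 1)
    omega = 2 * np.pi / wavelength_px           # spatial frequency (rad/pixel)
    kernel = np.exp(1j * omega * x) * np.exp(-x**2 / (2 * sigma**2))
    return kernel, omega

def extract_displacement(frames, wavelength_px=8, contrast_quantile=0.9):
    # frames: NumPy array of grayscale frames, shape (T, H, W).
    # Returns one horizontal displacement signal per frame (in pixels),
    # averaged over high-contrast pixels, relative to the first frame.
    kernel, omega = gabor_kernel(wavelength_px)
    T, H, W = frames.shape
    resp = np.stack([
        np.stack([np.convolve(frames[t, y], kernel, mode='same') for y in range(H)])
        for t in range(T)
    ])                                          # complex filter response, (T, H, W)
    amplitude = np.abs(resp)
    phase = np.angle(resp)
    # Keep only pixels with strong local contrast; textureless regions carry
    # no reliable motion information (Section 1.2).
    mask = amplitude[0] > np.quantile(amplitude[0], contrast_quantile)
    # Phase change over time, unwrapped along the time axis, converted to
    # displacement via delta_x = -delta_phi / omega.
    dphi = np.unwrap(phase - phase[0], axis=0)
    displacement = -dphi / omega
    return displacement[:, mask].mean(axis=1)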

2. Example Applications

In this section we discuss several example applications of our methodology, including a lightweight structure, rotating machinery, civil infrastructure, and a material test.

2.1 Lightweight Beams

Lightweight structures are a particularly relevant application for non-contact measurement methods, because the attachment of traditional sensors such as accelerometers or strain gauges may alter the system dynamics by adding mass or stiffness. As part of a separate experimental study, we made measurements of a series of lightweight cantilever beams made of different plastic materials to look for differences in the material properties, quantified by the frequency of the first resonant modes. An example measurement is presented here, with the desktop experimental setup shown in Fig. 1a and a screenshot of the recorded video shown in Fig. 1b.

Fig. 1. Experimental setup and screenshot from the recorded video

The input video had a resolution of 1200 x 512 pixels, with 1000 frames recorded at 161.4 frames per second (fps). From this video, the first two resonant modes of the beam were measured at 11.95 Hz and 78.94 Hz, as shown in Fig. 2a, and the corresponding frequency spectrum is shown in orange in Fig. 2b. Measurements with an accelerometer were also made, with the frequency spectra shown in blue and red in Fig. 2b for two different locations of the accelerometer on the beam. They show a significant shift in the frequency of the first resonant mode, because the 16 gram accelerometer is comparable in mass to the 10 gram beam itself.

Fig. 2. Measured mode shapes and frequency spectra (plot: "Comparison of Measurements of a Lightweight Beam", normalized acceleration versus frequency, 0-50 Hz; curves: accelerometer at the end, accelerometer 3/4 towards the end, camera integrated to acceleration)
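The camera curve in Fig. 2b is plotted on an acceleration scale, while the camera itself measures displacement; a displacement spectrum can be placed on an acceleration scale by multiplying each amplitude by (2*pi*f)^2. Whether the authors used exactly this rescaling is not stated, so the following is only a minimal sketch under that assumption, taking as input a displacement signal such as the output of the extraction sketch in Section 1.2 and the 161.4 fps frame rate reported above; function names are illustrative.

import numpy as np

def displacement_spectrum(disp_px, fs=161.4):
    # One-sided amplitude spectrum of a displacement time series (pixels).
    disp_px = np.asarray(disp_px, dtype=float)
    disp_px = disp_px - disp_px.mean()
    n = len(disp_px)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.abs(np.fft.rfft(disp_px)) * 2.0 / n
    return freqs, amp

def as_acceleration(freqs, disp_amp):
    # Displacement spectrum to acceleration spectrum: |a(f)| = (2*pi*f)^2 * |x(f)|.
    return (2 * np.pi * freqs) ** 2 * disp_amp

# Peaks in the resulting spectrum near 11.95 Hz and 78.94 Hz would correspond
# to the first two resonant modes reported for the beam above.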

2.2 MIT Facilities Measurement

A camera measurement was made of a motor that runs a water pump in the physical facilities of the Massachusetts Institute of Technology (MIT). A picture of the experimental setup is shown in Fig. 3a, with a screenshot of the recorded video, 1280 x 720 pixels at 2000 fps, shown in Fig. 3b. Two regions of interest were chosen for extraction of displacements from the video and calculation of the measured frequency spectra. Additionally, a laser vibrometer was used to separately measure the motions of the motor and of the ground, as a reference for the vibrations in the environment itself, caused by the motor and other machinery in the area.

Fig. 3. Camera measurement of a motor running a pump: experimental setup and screenshot from video with regions of interest 1 and 2

The frequency spectra extracted from regions of interest 1 and 2 in Fig. 3b are shown in blue in Fig. 4a and 4b, respectively. At lower frequencies and at some of the resonant frequencies, they reasonably match the spectra measured by the laser vibrometer, shown in red. There are some issues for a camera measurement in an environment with many sources of vibration, as evidenced by the laser vibrometer measurement of the ground: vibrations in the environment shake the camera, giving the appearance of motion of the object. Reference background objects included in the video frame can be used to measure, and rule out, apparent motion due to global motion of the camera.

Fig. 4. Camera-measured frequency spectra compared with the laser vibrometer measurement, for horizontal motion at location 1 and vertical motion at location 2 (panels: "MIT Power Plant - Pump Horizontal Motion" and "MIT Power Plant - Pump Vertical Motion"; curves: camera average FFT of pixels, laser vibrometer displacement, laser vibrometer ground; amplitude versus frequency, 0-1000 Hz)
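The "average FFT of pixels" curves in Fig. 4, and the idea of using a static background region to separate object motion from camera motion, could be outlined as below. This is an illustrative sketch under assumptions, not the authors' implementation: pixel_displacements is assumed to be a (frames x pixels) array of per-pixel displacement signals for a region of interest, such as the masked output of the earlier extraction sketch, and the camera-motion flag is a simple heuristic with an arbitrary threshold.

import numpy as np

def averaged_pixel_spectrum(pixel_displacements, fs):
    # Average the one-sided amplitude spectra of many per-pixel displacement
    # signals (shape: frames x pixels).
    x = pixel_displacements - pixel_displacements.mean(axis=0, keepdims=True)
    n = x.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(x, axis=0)) * 2.0 / n
    return freqs, spectra.mean(axis=1)

def flag_camera_motion(freqs, roi_spectrum, background_spectrum, ratio=2.0):
    # Flag frequencies at which a supposedly static background region moves
    # nearly as much as the region of interest: such peaks are more likely
    # caused by camera shake than by motion of the object itself.
    suspect = background_spectrum * ratio > roi_spectrum
    return freqs[suspect]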

2.3 Taipei 101 Earthquake

In this application, we demonstrate the approach for measurement of civil infrastructure on a video from YouTube that was not recorded by the researchers. The example video is from a fixed camera on the observation deck of the Taipei 101 tower in Taipei, Taiwan, during a M6.4 earthquake that occurred 120.85 km to the southeast on 2015/04/20 at 01:42:58 UTC [10]. The source video, at a normal frame rate of 30 fps, starts at 01:43:01 UTC, 3 seconds after the earthquake begins, and continues for 2 minutes. The Taipei 101 tower is unique in that the structure's tuned mass damper, a massive steel pendulum, is plainly visible in the indoor observation deck, and in this video. A screenshot of the video is shown in Fig. 5a. The displacement of the tuned mass damper is extracted from a cropped video of size 640 x 130 pixels, shown in Fig. 5b. The mask of pixels with satisfactory displacement information is shown in Fig. 5c.

Fig. 5. (a) Screenshot from source video, (b) video cropped to the tuned mass damper, and (c) pixels with valid displacements

The displacement extracted from the tuned mass damper in the video is shown in Fig. 6. At 43 seconds after the video starts, there is clearly a large amount of motion, also visible in the video, which likely corresponds to the arrival of S-waves from the earthquake. At 28 seconds, there is a much smaller disturbance, not visible in the video but revealed by the processing, which corresponds to the arrival of P-waves. This behavior is standard for earthquakes, as P-waves travel faster than S-waves.

Fig. 6. Extracted motion of the tuned mass damper from video (displacement in pixels versus time after UTC 01:42:58; P-wave arrival at 28 s, S-wave arrival at 43 s)
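Purely as an illustration of how a disturbance too small to see in the video can stand out in the extracted signal (the paper identifies the P- and S-wave arrivals by inspection of Fig. 6, not by the method below), a short-time RMS envelope of the displacement trace is one simple presentation; the window length and the 30 fps frame rate are assumptions.

import numpy as np

def short_time_rms(displacement_px, fps=30.0, window_s=2.0):
    # Root-mean-square envelope of a displacement signal over a sliding window,
    # which can make a small early disturbance visible next to much larger
    # later motion, for example when plotted on a logarithmic scale.
    win = max(1, int(window_s * fps))
    x2 = np.asarray(displacement_px, dtype=float) ** 2
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(x2, kernel, mode='same'))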

2.4 MIT Green Building Antenna

This application is a measurement of the Green Building at the Massachusetts Institute of Technology over a long distance, originally presented at the NDT-CE 2015 conference [6]. The video measurement of the Green Building was made from a distance of approximately 175 m (by land), on a terrace of the Stata Center, another building at MIT, as shown in Fig. 7a. A picture of the experimental setup and the view from the measurement location is shown in Fig. 7b. A camera was used to record a 150 second video at 10 fps and a resolution of 1200 x 1920, where 1 pixel corresponds to about 3.65 cm at the depth of the structure. A screenshot from the recorded video is shown in Fig. 7c, with the Green Building filling most of the frame and the crow's nest, or antenna tower, visible as the long thin structure on the roof of the building.

Fig. 7. (a) Satellite view of the measurement location, (b) experimental setup for measurement of MIT's Green Building, and (c) recorded video screenshot with the crow's nest seen at top

The video was processed using the previously described procedure. Even though the building's motion was too subtle to recover from this distance, vibrations of the crow's nest on top of the building were strong enough to generate a recoverable signal. Shown in Fig. 8a is the cropped video of the crow's nest, with Fig. 8b showing the high-contrast pixels from which displacements were extracted. These displacements were averaged to obtain a single signal for the structure, and an FFT was taken to determine the frequency spectrum shown in Fig. 8c. The frequency spectrum has a resonant peak at 2.433 Hz, which suggests that the crow's nest atop the building has a resonant mode at that frequency. The amplitude of the resonant peak works out to 0.21 mm. The noise floor for the measurement is approximately 0.07 mm, giving a signal-to-noise ratio of 3. For these measurement parameters, any structural motion above the noise floor would be measurable. Improvements can be made by using a more telephoto zoom lens in addition to a higher resolution camera.
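As a small worked check of the numbers above, the spectral peak read from Fig. 8c (in pixels) converts to physical displacement through the 3.65 cm per pixel scale, and the signal-to-noise ratio follows directly; variable names are illustrative.

MM_PER_PIXEL = 36.5              # 1 pixel corresponds to about 3.65 cm at the structure

peak_px = 0.005886               # amplitude of the 2.433 Hz peak in Fig. 8c, in pixels
peak_mm = peak_px * MM_PER_PIXEL # ~0.21 mm, as reported above
noise_floor_mm = 0.07            # approximate noise floor of the spectrum
snr = peak_mm / noise_floor_mm   # ~3

print(f"peak: {peak_mm:.2f} mm, SNR: {snr:.1f}")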

Fig. 8. (a) Screenshot of video cropped to the crow's nest, (b) pixel mask of valid displacements, and (c) results from the camera measurement of the Green Building crow's nest (FFT of the average displacement signal, amplitude in pixels versus frequency, 0-5 Hz; peak at 2.433 Hz with amplitude 0.005886 pixels)

As verification of the resonant frequency of the crow's nest measured by the camera, a laser vibrometer was used to measure the frequency response on a day with similar weather conditions. Fig. 9a shows the measurement setup on the roof of the Green Building, measuring the crow's nest vibrations from close range. A measurement was also made of the ground next to the crow's nest as a reference, so that any vibrations of the tripod itself could be discounted. The laser vibrometer measurement is shown in Fig. 9b, with the crow's nest signal shown in blue and the reference signal in red. Two potential resonant peaks are seen in the crow's nest measurement; however, the peak at around 1 Hz is also present in the reference measurement and is thus not a resonance of the crow's nest. The other peak occurs at 2.474 Hz, close to the 2.433 Hz measured by the camera; this lends credibility to the camera measurement.

Fig. 9. Laser vibrometer verification of the MIT Green Building crow's nest resonant frequency: measurement setup and result with static reference for comparison (amplitude versus frequency, 0-10 Hz; curves: crow's nest measurement, reference; crow's nest peak at 2.474 Hz, amplitude 0.0007632)

2.5 Granite Fracture Test

This application involves a high-speed video of a piece of granite with manufactured defects undergoing a standard compression test, where the quantitative values of interest are the displacements and directions of motion at different locations in the specimen. This is in contrast to many of the previous applications, where the measurement is aimed at determining the frequency spectrum of the object in the video.

The video was recorded at 14,000 fps with a resolution of 640 x 792 pixels over 4828 frames; a screenshot of the video is shown in Fig. 10a.

Fig. 10. (a) Screenshot of source video (brightened for visibility), (b) frame from the generated video visualizing motion, and (c) color representation of direction and magnitude of motion, e.g. red for left, teal for right

The final 1329 frames were processed to determine the displacement at every pixel in the frame, with the granite providing sufficient visual texture to properly measure the displacements. To visualize these displacements, they were converted into a video representation with the hue (color) signifying the direction and the brightness representing the magnitude of the displacement, as shown in Fig. 10c. To fit within the range of values for a video, the displacements were normalized and the fourth root of the values was taken to magnify smaller values. A frame from the visualization just after fracture occurs is shown in Fig. 10b. A sample horizontal displacement measured for the pixel located at (213,115) in the video is shown in Fig. 11a on a screenshot of the video, with Fig. 11b clearly showing the large amount of movement at fracture and the relaxation afterwards, in units of pixels.

Fig. 11. Screenshot of source video with the selected pixel marked, and measured displacement time series of the selected pixel, amplitude in units of pixels
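The hue-and-brightness encoding with fourth-root scaling described above could be implemented along the following lines. This is a minimal sketch under assumptions, not the authors' exact color mapping (the specific hue assignment, e.g. red for left and teal for right, follows Fig. 10c only approximately): dx and dy are assumed to be per-pixel displacement fields for one frame, and OpenCV is used only for the HSV-to-BGR conversion.

import numpy as np
import cv2

def displacement_to_color(dx, dy):
    # dx, dy: per-pixel displacement fields (H x W), in pixels, for one frame.
    magnitude = np.sqrt(dx**2 + dy**2)
    angle = np.arctan2(dy, dx)                       # direction of motion, radians
    # Normalize the magnitude to [0, 1], then take the fourth root so that
    # small displacements remain visible next to the largest ones.
    value = (magnitude / (magnitude.max() + 1e-12)) ** 0.25
    hsv = np.zeros(dx.shape + (3,), dtype=np.uint8)
    hsv[..., 0] = ((angle + np.pi) / (2 * np.pi) * 179).astype(np.uint8)  # hue encodes direction
    hsv[..., 1] = 255                                                     # full saturation
    hsv[..., 2] = (value * 255).astype(np.uint8)                          # brightness encodes magnitude
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)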

3. Discussion

In this section we discuss the capabilities, limitations, and potential future work and applications for camera measurement of structural motions and vibrations.

3.1 Capabilities and Limitations

Motion magnification makes it very simple to visualize the vibrations and operational mode shapes of an object, something that may be cumbersome with traditional sensors if there isn't a preexisting model for the object. The camera can also be used in a non-contact way to measure the displacements of structures without altering the dynamics of the system. In addition, the camera measures displacements, which may otherwise be difficult to measure, since most current vibration sensors measure acceleration, velocity, or strain.

There are several assumptions and limitations of the current measurement and processing methodology. Small motion is a necessary requirement of the processing. This generally means that the motion should be less than one pixel in amplitude; however, the video can be downsampled to change the scale of the motion and make it small enough. A stationary camera is ideal, as any motion of the camera will cause apparent motion of the objects in the scene. Typically this means that video needs to be filmed with a sturdy tripod, although even then a mechanically noisy environment may corrupt the result. Currently, video filmed from a moving platform is generally too unstable for motion magnification or displacement measurement. A further limitation is that motion signals are only accurate in areas with good local contrast, as it is not possible to determine the motion of textureless regions in a video. This means displacements are most easily found on the edges of objects that contrast with the background, or on objects painted with a speckle pattern.

3.2 Future Work

Future work involves improving the processing procedure to run even faster, so that higher frame rate and larger videos can be handled in real time. For handling camera motion, work is being done to determine and subtract the motion of known non-moving reference objects in the video, and also to use sensors on the camera itself to measure and correct for its motion. For even longer distance measurements, atmospheric turbulence is an issue that can also cause apparent motion in the video. Work will need to be done to remove or correct those effects, potentially with a stereo or multiple-camera setup.

4. Conclusion

Motion magnification refers to a collection of computer vision algorithms that are used to qualitatively visualize and quantitatively measure small motions of objects in videos. We summarized the video processing procedure and gave example applications for a lightweight beam, rotating machinery, measurement of civil infrastructure both from an indoor security video and from an outdoor long-range measurement, and a material compression test. The unique capability of a camera for non-contact measurement of displacement, with potentially low-cost equipment, opens up a wide range of opportunities in NDT and SHM applications.

Acknowledgements

This research was supported by Shell, and we thank chief scientists Sergio Kapusta and Dirk Smit for their support. We also thank Maryse Vachon for providing the lightweight beams, Sean Kinderman for access to and assistance with the MIT facilities measurement, and Stephen Morgan and Bing Li for the granite video.

References

[1] Wadhwa, N., Rubinstein, M., Durand, F., & Freeman, W. T. (2013). Phase-based video motion processing. ACM Transactions on Graphics (TOG), 32(4), 80.
[2] Wadhwa, N., Rubinstein, M., Durand, F., & Freeman, W. T. (2014, May). Riesz pyramids for fast phase-based video magnification. In Computational Photography (ICCP), 2014 IEEE International Conference on (pp. 1-10). IEEE.
[3] Chen, J. G., Wadhwa, N., Cha, Y. J., Durand, F., Freeman, W. T., & Buyukozturk, O. (2015). Modal identification of simple structures with high-speed video using motion magnification. Journal of Sound and Vibration, 345, 58-71.
[4] Chen, J. G., Wadhwa, N., Durand, F., Freeman, W. T., & Buyukozturk, O. (2015). Developments with motion magnification for structural modal identification through camera video. In Dynamics of Civil Structures, Volume 2 (pp. 49-57). Springer International Publishing.
[5] Davis, A., Bouman, K. L., Chen, J. G., Rubinstein, M., Durand, F., & Freeman, W. T. (2015, June). Visual vibrometry: Estimating material properties from small motions in video. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on (pp. 5335-5343). IEEE.
[6] Chen, J. G., Davis, A., Wadhwa, N., Durand, F., Freeman, W. T., & Buyukozturk, O. (2015). Video camera-based vibration measurement for condition assessment of civil infrastructure. NDT-CE International Symposium Non-Destructive Testing in Civil Engineering, Berlin, Germany, Sept. 15-17, 2015.
[7] Simoncelli, E. P., & Freeman, W. T. (1995, October). The steerable pyramid: A flexible architecture for multi-scale derivative computation. In ICIP (p. 3444). IEEE.
[8] Fleet, D. J., & Jepson, A. D. (1990). Computation of component image velocity from local phase information. International Journal of Computer Vision, 5(1), 77-104.
[9] Gautama, T., & Van Hulle, M. M. (2002). A phase-based approach to the estimation of the optical flow field using spatial filtering. IEEE Transactions on Neural Networks, 13(5), 1127-1136.
[10] YouTube video. (2015, April 15). https://youtu.be/la-zcewcoic