Pattern Discovery Techniques for Music Audio

Roger B. Dannenberg and Ning Hu
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
{rbd,ninghu}@cs.cmu.edu

ABSTRACT

Human listeners are able to recognize structure in music through the perception of repetition and other relationships within a piece of music. This work aims to automate the task of music analysis. Music is explained in terms of embedded relationships, especially repetition of segments or phrases. The steps in this process are the transcription of audio into a representation with a similarity or distance metric, the search for similar segments, forming clusters of similar segments, and explaining the music in terms of these clusters. Several transcription methods are considered: monophonic pitch estimation, chroma (spectral) representation, and polyphonic transcription followed by harmonic analysis. Also, several algorithms that search for similar segments are described. These techniques can be used to perform an analysis of musical structure, as illustrated by examples.

1. INTRODUCTION

Digital sound recordings of music can be considered the lowest level of music representation. These audio representations offer nothing in the way of musical or sonic structure, which is problematic for many tasks such as music analysis, music search, and music classification. Given the current state of the art, virtually any technique that reveals structure in an audio recording is interesting. Techniques such as beat detection, key detection, chord identification, monophonic and polyphonic transcription, melody and bass line detection, source separation, speech recognition, and instrument identification all derive some higher-level information from music audio. There is some hope that by continuing to develop these techniques and combine them, we will be better able to reason about, search, and classify music, starting from an audio representation.

In this work, we examine ways to discover patterns in music audio and to translate them into a structural analysis. The main idea is quite simple: musical structure is signaled by repetition. Of course, repetition means similarity at some level of abstraction above that of audio samples. We must process sound to obtain a higher-level representation before comparisons are made, and we must allow approximate matching to accommodate variations in performance, orchestration, lyrics, etc. In a number of cases, our techniques have been able to describe the main structure of music compositions.

We have explored several representations for comparing music. Monophonic transcription can be used for music where a single voice predominates (even in a polyphonic recording). Spectral frames can be used for more polyphonic material. We have also experimented with a polyphonic transcription system. For each of these representations, we have developed heuristic algorithms to search for similar segments of music. We identify pairs of similar segments. Then, we attempt to simplify the potentially large set of pairs to a smaller set of clusters. These clusters identify components of the music. We can then construct an explanation or analysis of the music in terms of these components.
The goal is to derive structural descriptions such as AABA. We believe that the recognition of repetition is a fundamental activity of music listening. In this view, the structure created by repetition and transformation is as essential to music as the patterns themselves. In other words, the structure AABA is important regardless of what A and B represent. At the risk of oversimplification, the first two A's establish a pattern, the B generates tension and expectation, and the final A confirms the expectation and brings resolution. Structure is clearly important to music listening. Structure can also contribute expectations or prior probabilities for other analysis techniques, such as transcription and beat detection, where knowledge of pattern and form might help to improve accuracy. It follows that the analysis of structure is relevant to music classification, music retrieval, and other automated processing tasks.

2. RELATED WORK

It is well known that music commonly contains patterns and repetition. Any music theory book will discuss musical form and introduce notation, such as AABA, for describing musical structures. Many researchers in computer music have investigated techniques for pattern discovery and pattern search. Cope [4] searches for signatures that are characteristic of composers, and Rolland and Ganascia describe search techniques [21]. Interactive systems have been constructed to identify and look for patterns [24], and much of the work on melodic similarity [9] is relevant to the analysis of music structure. Simon and Sumner wrote an early paper on music listening and its relationship to pattern formation and memory [23], proposing that we encode melody by referencing patterns and transformations. This has some close relationships to data compression, which has also inspired work in music analysis and generation [11]. Narmour describes a variety of transformative processes that operate in music to create structures that listeners perceive [18].

A fundamental idea in this work is to compare every point of a music recording with every other point. This naturally leads to a matrix representation in which row i, column j corresponds to the similarity of time points i and j. A two-dimensional grid to compute and display self-similarity has been used by Wakefield and Bartsch [1] and by Foote and Cooper [8]. Mont-Reynaud and Goldstein proposed rhythmic pattern discovery as a way to improve music transcription [16]. Conklin and Anagnostopoulou examine a technique for finding significant, exactly identical patterns in a body of music [3]. A different approach is taken by Meek and Birmingham to search for commonly occurring melodies or other sequences [14].
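To make the matrix representation concrete, the following minimal sketch (our illustration, not code from any of the cited systems) builds such a self-similarity matrix from a sequence of feature vectors; the random feature frames and the Euclidean distance are placeholders.

```python
import numpy as np

def self_similarity(frames):
    """Build an n-by-n matrix whose (i, j) entry is the distance
    between feature frames i and j; small values mark repetition."""
    n = len(frames)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = np.linalg.norm(frames[i] - frames[j])
    return S

# 12-dimensional frames (e.g., chroma vectors) stand in for real features.
frames = np.random.rand(200, 12)
S = self_similarity(frames)
```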
3. PATTERN SEARCH

In this section, we describe the general problem of searching for similar sections of music. We assume that music is represented as a sequence s_i, i = 0, …, n−1. A segment of music is denoted by a starting and an ending point: (i, k), 0 ≤ i ≤ k < n. A pair of similar sections consists of two segments: ((i, k), (j, l)), 0 ≤ i ≤ k < j ≤ l < n. For convenience, we do not allow overlapped segments¹, hence k < j. There are O(n^4) possible pairs of segments. To compute a similarity function of two segments, one would probably use a dynamic programming algorithm with a cost proportional to the product of the lengths of the two segments. This increases the cost to O(n^6) if each pair of segments is evaluated independently. However, given a pair of starting points i, j, the dynamic programming alignment step can be used to evaluate all possible pairs of segment endpoints. There are O(n^2) starting points and the average cost of the full alignment computation is also O(n^2), so the total cost is then O(n^4). Using frame sizes from 0.1 to 0.25 seconds and music durations of several minutes, we can expect n to range from about 200 up into the thousands. This implies that a brute-force search of the entire segment-pair space will take hours or even days. This has led us to pursue heuristic algorithms.

In our work, we assume a distance function between elements of the sequence s_i. To compute the distance between two segments, we use an algorithm for sequence alignment based on dynamic programming. A by-product of the alignment is a sum of distances between corresponding sequence elements. This measure generally increases with length, even though longer patterns are generally desirable. Therefore, we divide distance by length to get an overall distance rating.

Typically there are many overlapping candidates for similar segments. Extending or shifting a matching segment by a frame or two will still result in a good rating. Therefore, the problem is not so much to find all pairs of similar segments as to find the locally best matches. In practice, all of our algorithms work by extending promising matches incrementally to find the best match. This approach reduces the computation time considerably, but it introduces heuristics that make formal descriptions difficult. Nevertheless, we hope this introduction helps to explain the following solutions.
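As an illustration of how one dynamic programming pass anchored at a pair of starting points rates every pair of endpoints, and of the distance-divided-by-length rating, consider the following sketch (our own; the per-element distance function and the choice of "length" as the longer of the two segments are assumptions, not the authors' exact formulation).

```python
import numpy as np

def alignment_from(start_i, start_j, frames, dist):
    """One dynamic programming pass anchored at starting points
    (start_i, start_j).  Cell (a, b) holds the accumulated alignment
    distance between segments (start_i, start_i + a) and
    (start_j, start_j + b), so a single pass rates every pair of
    endpoints for this pair of starting points."""
    n = len(frames)
    rows, cols = n - start_i, n - start_j
    D = np.full((rows, cols), np.inf)
    for a in range(rows):
        for b in range(cols):
            d = dist(frames[start_i + a], frames[start_j + b])
            if a == 0 and b == 0:
                D[a, b] = d
            else:
                D[a, b] = d + min(
                    D[a - 1, b] if a > 0 else np.inf,
                    D[a, b - 1] if b > 0 else np.inf,
                    D[a - 1, b - 1] if a > 0 and b > 0 else np.inf)
    return D

def rating(D, a, b):
    """Distance rating: accumulated distance divided by length, so that
    long matches are not penalized simply for being long."""
    return D[a, b] / (max(a, b) + 1)

# Rate endpoint pairs for the starting points (10, 50).
frames = np.random.rand(200, 12)
euclid = lambda x, y: float(np.linalg.norm(x - y))
D = alignment_from(10, 50, frames, euclid)
print(rating(D, 30, 32))   # segments (10, 40) vs. (50, 82)
```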
4. MONOPHONIC ANALYSIS

Our first approach is based on monophonic pitch estimation, which is used to transcribe music into a note-based representation. Notes are represented as a pitch (on a continuous rather than a quantized scale), a starting time, and a duration (in seconds). The pitch estimation is performed using autocorrelation [20] and some heuristics for rejecting false peaks and outliers, as described in an earlier paper [5]. We worked with a saxophone solo, Naima, written and performed by John Coltrane [2] with a jazz quartet (sax, piano, bass, and drums). To find matching segments in the transcription, we construct a matrix M where M_{i,j} is the length of a segment² starting at note i and matching a segment starting at note j.

4.1 Algorithm 1

The search algorithm in this case is a straightforward search of every combination of i, j such that i < j. For n notes, there are n(n−1)/2 pairs. The search proceeds only if there is a close match between pitch i and pitch j. Although we could use dynamic programming for note alignment [9, 22], we elected to try a simple iterative algorithm that attempts to extend the current pair of similar segments with additional matching notes to find the longest similar segments starting at i and j. The rules for extending segments consider note pitch and duration, and allow for (short) note deletions and consolidation [15]. If segment (i, k) matches (j, l), then in many cases (i+1, k) will match (j+1, l), and so on. To eliminate the redundant pairs, we make a pass through the elements of M, clearing cells contained by longer similar segments. For example, if (i, k) matches (j, l), we clear all elements of the rectangular submatrix M_{i..k, j..l} except for M_{i,j}. Finally, we can read off pairs of similar segments and their durations by making another pass over the matrix M.

Although this approach works well if there is a good transcription, it is not generally possible to obtain a useful melodic transcription from polyphonic audio. In the next section, we consider an alternative representation.

5. SPECTRUM-BASED ANALYSIS

When transcription is not possible, a lower-level abstraction based on the spectrum can be used. We chose to use Wakefield's chroma representation because it seemed to do a good job of identifying similar segments in an earlier study where the goal was to find the chorus of a pop song [1]. The chroma is a 12-element vector where each element represents the energy associated with one of the 12 pitch classes. Essentially, the spectrum wraps around at each octave and bins are combined to form the chroma vector. Distance is then defined as the Euclidean distance between vectors normalized to have a mean of zero and a standard deviation of one.

The most important feature of a chroma representation is that the music is divided into equal-duration frames rather than notes. Typically, there will be hundreds or thousands of frames as opposed to tens or hundreds of notes. Matching will tend to be more ambiguous because the data is not segmented into discrete notes. Therefore, we need to use more robust (and expensive) sequence alignment techniques, and in turn more clever algorithms.

5.1 Brute-Force Approach

At first thought, observing that dynamic programming computes a global solution from incremental and local properties, one might try to reuse local computations to form solutions to our similar-segments problem. A typical dynamic programming step computes the distance at cell (i, j) in terms of the cells to the left (i, j−1), above (i−1, j), and diagonal (i−1, j−1). The value at (i, j) is:

M_{i,j} = d_{i,j} + min(M_{i,j−1}, M_{i−1,j}, M_{i−1,j−1})

In terms of edit distances, we use d_{i,j}, the distance from frame i to frame j, as either a replacement cost, insertion cost, or deletion cost, although many alternative cost/distance functions are possible within the dynamic programming framework [10]. Unfortunately, even if we precompute the full matrix, it does not help us in computing the distance between two segments because of initial boundary conditions, which change for every combination of i and j. As mentioned in the introduction, the best we can do is to compute a submatrix starting at (i, j) for every 0 ≤ i < j < n. This leaves us with an O(n^4) algorithm to compute the distance for every pair ((i, k), (j, l)). To avoid very long computation times, we developed a faster, heuristic search.
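Before turning to the heuristic search, here is a minimal sketch of the chroma computation and normalized distance described at the start of this section (our own illustration; the sample rate, frame size, and exact pitch-class binning are assumptions and differ in detail from Wakefield's formulation).

```python
import numpy as np

def chroma(frame, sample_rate=22050):
    """Fold the magnitude spectrum of one audio frame into a
    12-element pitch-class (chroma) vector."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    c = np.zeros(12)
    for f, mag in zip(freqs[1:], spectrum[1:]):      # skip the DC bin
        midi = 69 + 12 * np.log2(f / 440.0)          # frequency -> MIDI pitch
        c[int(round(midi)) % 12] += mag              # wrap into one octave
    return c

def chroma_distance(a, b):
    """Euclidean distance between chroma vectors normalized to
    zero mean and unit standard deviation."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.linalg.norm(a - b))

# Two 0.25-second frames of audio (noise stands in for real signal).
x, y = np.random.randn(5512), np.random.randn(5512)
print(chroma_distance(chroma(x), chroma(y)))
```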
5.2 Heuristic Search

We compute the distance between two segments by finding a path from (i, j) to (k, l) that minimizes the distance function. Each step of the path moves one cell to the right, downward, or diagonally. In practice, similar segments are characterized by paths that consist mainly of diagonal segments, because tempo variation is typically small. Thus we do not need to compute a full rectangular array to find good alignments. Alternatively, we can compute several or even all paths with a single pass through the active matrix. This method is described next.

5.3 Algorithm 2

The main idea of this algorithm is to identify path beginnings and to follow paths diagonally across a matrix until the path rating falls below some threshold. The algorithm uses three matrices, which we will call distance (D), path (P), and length (L). D and L hold real (floating-point) values, and P holds integers. P is initialized to zero so that we can determine which cells have been computed: if P_{i,j} = 0, we say cell (i, j) is uninitialized. The algorithm scans the matrix along diagonals of constant i+j as shown in Figure 1, filling in corresponding cells of D, P, and L. A cell is computed in terms of the cells to the left, above, and diagonal. First, compute distances and lengths as follows:

d_h = if P_{i,j−1} ≠ 0 then D_{i,j−1} + d_{i,j}, else ∞;   l_h = L_{i,j−1} + √2/2
d_v = if P_{i−1,j} ≠ 0 then D_{i−1,j} + d_{i,j}, else ∞;   l_v = L_{i−1,j} + √2/2
d_d = if P_{i−1,j−1} ≠ 0 then D_{i−1,j−1} + d_{i,j}, else ∞;   l_d = L_{i−1,j−1} + 1

The purpose of the infinity (∞) values is to disregard distances computed from uninitialized cells, as indicated by P. Now let c = min(d_h/l_h, d_v/l_v, d_d/l_d). If c is greater than a threshold, the cell at (i, j) is left uninitialized. Otherwise, we define D_{i,j} = c, L_{i,j} = l_m, and P_{i,j} = P_m, where the subscript m represents the cell that produced the minimum value for c, either (i, j−1), (i−1, j), or (i−1, j−1).

Figure 1. In Algorithm 2, the similarity matrix is computed along diagonals as shown.

As described so far, this computation will propagate paths once they are started, but how is a path started? When P_{i,j} is left uninitialized by the computation described in the previous paragraph and d_{i,j} is below a threshold (the same one used to cut off paths), P_{i,j} is set to a new integer value to denote the beginning of a new path. We also define D_{i,j} = d_{i,j} and L_{i,j} = 1 at the beginning of the path. After this computation, regions of P are partitioned according to path names. Every point with the same name is a candidate endpoint for the same starting point.

We still need to decide where paths end. We can compute endpoints by reversing the sequence of chroma frames, so that endpoints become starting points. Recall that starting points are uninitialized cells where d_{i,j} is below a threshold. To locate endpoints, scan the matrix in reverse from the original order (Figure 1). Whenever a new path name is encountered and the distance d_{i,j} is below threshold, find the starting point and output the path. An array can keep track of which path names have been output and where paths begin.
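The following is a compact sketch of this computation as we read it (our interpretation, not the authors' code; the threshold, the frame-distance matrix d, and the handling of ties are our assumptions). Path endpoints could then be located by running the same scan over the reversed frame sequence, as described above.

```python
import numpy as np

ROOT2_HALF = np.sqrt(2) / 2  # length credit for a horizontal or vertical step

def algorithm2(d, threshold):
    """Sketch of Algorithm 2: follow low-distance alignment paths through
    the frame-distance matrix d by scanning diagonals of constant i + j.
    D accumulates distance along a path, L accumulates path length, and
    P holds an integer path name (0 = uninitialized).  Note: the text
    defines D[i, j] as the rating c itself; here we keep the accumulated
    distance so the recurrence stays well defined (an interpretation)."""
    n = d.shape[0]
    D = np.zeros((n, n))
    L = np.zeros((n, n))
    P = np.zeros((n, n), dtype=int)
    next_name = 1
    for s in range(2 * n - 1):                      # diagonals of constant i + j
        for i in range(max(0, s - n + 1), min(s, n - 1) + 1):
            j = s - i
            if j <= i:                              # stay above the main diagonal
                continue
            # candidate extensions from the left, above, and diagonal neighbors
            cands = []
            if P[i, j - 1]:
                cands.append((D[i, j - 1] + d[i, j], L[i, j - 1] + ROOT2_HALF, P[i, j - 1]))
            if i > 0 and P[i - 1, j]:
                cands.append((D[i - 1, j] + d[i, j], L[i - 1, j] + ROOT2_HALF, P[i - 1, j]))
            if i > 0 and P[i - 1, j - 1]:
                cands.append((D[i - 1, j - 1] + d[i, j], L[i - 1, j - 1] + 1.0, P[i - 1, j - 1]))
            best = min(cands, key=lambda c: c[0] / c[1]) if cands else None
            if best is not None and best[0] / best[1] <= threshold:
                D[i, j], L[i, j], P[i, j] = best[0], best[1], best[2]
            elif d[i, j] <= threshold:              # no usable neighbor: start a new path
                D[i, j], L[i, j], P[i, j] = d[i, j], 1.0, next_name
                next_name += 1
    return D, L, P

# d[i, j] is the distance between chroma frames i and j (see Section 5).
frames = np.random.rand(300, 12)
d = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=2)
D, L, P = algorithm2(d, threshold=0.8)
```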
6. POLYPHONIC TRANSCRIPTION

Polyphonic transcription offers another approach to similarity. Although automatic polyphonic transcription has rather high error rates, it is still possible to recover a significant amount of musical information. We use Marolt's SONIC transcription program [12], which transcribes audio files to MIDI files. SONIC does not attempt to perform source separation, so the resulting MIDI data combines all notes into a single track. Although SONIC was intended for piano transcription, we get surprisingly good results with arbitrary music sources. Transcriptions inevitably have spurious notes, so we reduce the transcriptions to a chord progression using the Harman program by Pardo [19]. Harman is able to ignore passing tones and other non-chord tones, so in principle, Harman can help to reduce the noise introduced by transcription errors. After computing chords with Harman, we generate a sequence of frames s_i, 0 ≤ i < n, where each frame represents an equal interval of time and s_i is a set of pitch classes corresponding to the chord label assigned by Harman.

In our experiments with polyphonic transcription, we developed yet another algorithm for searching for similar segments. This algorithm is based on an adaptation of dynamic programming for computer accompaniment [6]. In this accompaniment algorithm, a score is matched to a performance not by computing a full n × m matrix but by computing only a diagonal band swept out by a moving window, which is adaptively centered on the best current score position.

6.1 Algorithm 3

To find similar segments, we sweep a window diagonally from upper left to lower right as shown in Figure 2. When a match is found, indicated by good match scores, the window is moved to follow the best path. We need to decide where paths begin and end. For this purpose, we compute similarity (rather than distance) such that similarity scores increase where segments match, and decrease where segments do not match.

Figure 2. In Algorithm 3, the similarity matrix is computed in diagonal bands swept out along the path shown. The shaded area shows a partially completed computation.

An example function for similarity of chords is to count the number of notes in common minus the number of notes not in common. For chords A and B (sets of pitch classes), the similarity is:

σ(A, B) = |A ∩ B| − |(A ∪ B) − (A ∩ B)|

where |X| is the number of elements in (the cardinality of) set X. We write σ_{i,j} to denote σ(s_i, s_j), the similarity between the chords at frames i and j. When we compute the matrix, we initialize cells to zero and store only positive values. A path begins when a window element becomes positive and ends when the window becomes zero again. The computation for a matrix cell is:

M_{i,j} = max(M_{i,j−1} − p, M_{i−1,j} − p, M_{i−1,j−1}) + σ_{i,j} − c

where p is a penalty for insertions and deletions, and c is a bias constant, chosen so that matching segments generate increasing values along the alignment path, and non-matching segments quickly decrease to zero.

The computation of M proceeds as shown by the shaded area in Figure 2. This evaluation order is intended to find locally similar segments and follow their alignment path. The reason for computing in narrow diagonal bands is that, if M were computed entire row by entire row, all paths would converge to the main diagonal where all frames match perfectly. At each iteration, cells are computed along one row to the left and right of the current path, spanning data that represents a couple of seconds of time. Because of the limited width of the path, references will be made to uninitialized cells in M. These cells and their values are ignored in the maximum-value computation.

This algorithm can be further refined. The score along an alignment path will be high at the end of the similar segments, after which the score will decrease to zero. Thus, the algorithm will tend to compute alignment paths that are too long. We can improve on the results by iteratively trimming a frame from either end of the path as long as the similarity/length quotient increases. This does not always work well because of local maxima. Another heuristic we use is to trim the final part of a path where the slope is substantially off-diagonal, as shown in Figure 3.

Figure 3. The encircled portion of the alignment path is trimmed because it represents an extreme difference in tempo. The remainder determines a pair of similar segments.

Because the window has a constant size, this algorithm runs in O(n^2) time and, by storing only the portion of the matrix swept by the window, O(n) space. The algorithm is quite efficient in practice.
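To make the chord similarity and the cell update concrete, here is a small sketch (our illustration, not the authors' implementation; for brevity it fills the whole upper triangle rather than the moving diagonal band, and the values of p and c are arbitrary).

```python
def chord_similarity(a, b):
    """sigma(A, B): pitch classes in common minus those not in common."""
    a, b = set(a), set(b)
    return len(a & b) - len(a ^ b)

def similarity_matrix(chords, p=1.0, c=0.5):
    """Accumulate Algorithm 3's cell values over chord frames, clamping
    at zero so only positive path scores are stored.  This simplified
    sketch evaluates every cell above the diagonal instead of the
    moving diagonal band described in the text."""
    n = len(chords)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            left = M[i][j - 1] - p
            up = (M[i - 1][j] - p) if i > 0 else -p
            diag = M[i - 1][j - 1] if i > 0 else 0.0
            M[i][j] = max(0.0, max(left, up, diag)
                          + chord_similarity(chords[i], chords[j]) - c)
    return M

# A chord is a set of pitch classes (0 = C, ..., 11 = B): C, Am, F, C.
chords = [{0, 4, 7}, {9, 0, 4}, {5, 9, 0}, {0, 4, 7}]
M = similarity_matrix(chords)
print(M[0][3])   # the two C chords reinforce the path score
```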
7. CLUSTERING

After computing pairs of similar segments with any of the three previously described algorithms, we need to form clusters to identify where segments occur more than twice in the music. Essentially, we are just applying the transitivity property of similarity to locate sets of similar segments. However, similarity is not strictly transitive, and the boundaries of segments are imprecise. We must allow approximate matches and somewhat inconsistent data.

We have implemented clustering using a simple algorithm. Start with a set of similar pairs, as computed by Algorithms 1, 2, and 3. Remove any pair from the set to form the first cluster. Then search the set for pairs (a, b) such that either a or b (or both) is an approximate match to a segment in the cluster. If a (or b) is not already in the cluster, add it to the cluster. Continue extending the cluster in this way until there are no more similar segments in the set of pairs. Now, repeat this process to form the next cluster, and so on, until the set of pairs is empty.

Sometimes, a segment in a cluster will correspond to a subsegment of a pair; e.g., (10, 20) overlaps half of the first segment of the pair ((10, 30), (50, 70)). We do not want to add (10, 30) or (50, 70) to the cluster because these have length 20, whereas the cluster element (10, 20) only has length 10. However, it seems clear that there is a segment similar to (10, 20) starting at 50. In this situation, we split the pair proportionally to synthesize a matching pair. In this case, we would create the pair ((10, 20), (50, 60)) and add (50, 60) to the cluster.
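A minimal sketch of this clustering procedure follows (our illustration; the approximate-match test and its 0.8 overlap threshold are assumptions, and the proportional splitting of partially overlapping pairs is omitted).

```python
def overlap_fraction(a, b):
    """Fraction of the shorter segment covered by the overlap of a and b."""
    (s1, e1), (s2, e2) = a, b
    inter = max(0, min(e1, e2) - max(s1, s2))
    return inter / max(1, min(e1 - s1, e2 - s2))

def cluster_pairs(pairs, min_overlap=0.8):
    """Greedy clustering of similar-segment pairs: seed a cluster with
    one pair, then repeatedly pull in pairs whose segments approximately
    match something already in the cluster."""
    pairs = list(pairs)
    clusters = []
    while pairs:
        a, b = pairs.pop(0)
        cluster = [a, b]
        changed = True
        while changed:
            changed = False
            for pair in pairs[:]:
                if any(overlap_fraction(seg, member) >= min_overlap
                       for seg in pair for member in cluster):
                    pairs.remove(pair)
                    for seg in pair:
                        if all(overlap_fraction(seg, m) < min_overlap for m in cluster):
                            cluster.append(seg)
                    changed = True
        clusters.append(cluster)
    return clusters

# Segments are (start, end) index pairs as in Section 3.
pairs = [((0, 40), (80, 120)), ((80, 120), (160, 200)), ((300, 330), (400, 430))]
print(cluster_pairs(pairs))
```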
8. ANALYSIS AS EXPLANATION

The final step is to produce an analysis of the musical structure implied by the clusters. We like to view this as an explanation process: for each section of music, we explain the music in terms of its relationship to other sections. If we could determine relationships of transposition, augmentation, and other forms of variation, these relationships would be part of the explanation. With only similarity, the explanation amounts to labeling music with clusters.

To build an explanation, recall that music is represented by a sequence s_i, 0 ≤ i < n. Our goal is to fill in an array E_i, 0 ≤ i < n, initially nil, with cluster names, indicating which cluster (if any) contains a note or frame of music. The explanation E serves to describe the music as a structure based on the repetition and organization of patterns. Recall that a cluster is a set of intervals. For each i in some member of the cluster, we set E_i to the name of the cluster. (Names are arbitrary, e.g. A, B, C, etc.) We then continue searching for the next i such that E_i = nil and i is in some new cluster, and we label additional points of E with this new cluster. However, once a label is set, we do not replace it. This gives priority to musical material that is introduced earliest, which seems to be a reasonable heuristic for resolving conflicts when clusters overlap.
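The labeling step can be sketched as follows (a sketch under our assumptions: clusters are supplied in order of first appearance, and letters serve as the arbitrary names).

```python
import string

def explain(n, clusters):
    """Label each frame 0..n-1 with the name of the first cluster that
    covers it; earlier labels are never overwritten, so material that is
    introduced earliest keeps priority when clusters overlap."""
    E = [None] * n
    names = iter(string.ascii_uppercase)
    for cluster in clusters:                  # clusters as produced above
        name = next(names)
        for start, end in cluster:
            for i in range(start, end):
                if E[i] is None:
                    E[i] = name
    return E

# Example: two clusters over 12 frames yield an AABA-like labeling.
print(explain(12, [[(0, 3), (3, 6), (9, 12)], [(6, 9)]]))
```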
9. EXAMPLES

9.1 Transcription and Algorithm 1

Figure 4 illustrates an analysis of Naima using monophonic transcription and Algorithm 1 to find similar segments. Clusters are shown as heavy lines, which show the location of segments, connected by thin lines. The analysis is shown at the bottom of the figure. The simple textbook analysis of this piece would be a presentation of the theme with structure AABA, followed by a piano solo. The saxophone returns to play BA, followed by a short coda. In the computer analysis, further structure is discovered within the B part (the bridge), so the computer analysis might be written as AABBCA, where BBC forms the bridge. The transcription failed to detect more than a few notes of the piano solo, and there are a few spurious matching segments there. After the solo, the analysis shows a repetition of the bridge and the A part: BBCA. This is followed by the coda, in which there is some repetition. Aside from the solo section, the computer analysis corresponds quite closely to the textbook analysis.

It can be seen that the A section is half the duration of the B part, which is atypical for an AABA song form. If the program had additional knowledge of standard forms, it might easily guess that this is a slow ballad and uncover additional structure such as the tempo, number of measures, etc. Note, for example, that once the piece is subdivided into segments, further subdivisions are apparent in the RMS amplitude of the audio signal, indicating a duple meter. Additional examples are presented in another paper [7].

Figure 4. Analysis of Naima. Audio is shown at top. Below that is a transcription shown in piano-roll notation. Next is a diagram of clusters. At bottom is the analysis; similar segments are shaded in the same pattern. The form is AABA, where the B part has additional structure that appears as two solid black rectangles and one filled with a /// pattern. The middle section is a piano solo. The saxophone reenters at the B section, repeats the A part, and ends with a coda consisting of another repetition.

9.2 Chroma and Algorithm 2

Figure 5 illustrates an analysis of Beethoven's Minuet in G (performed on piano) using the chroma representation and Algorithm 2 for finding similar segments. Because the repetitions are literal and the composition does not involve improvisation, the analysis is definitive, revealing that the structure is AABBCCDDAB.

Figure 5. Analysis of Beethoven's Minuet in G performed on piano. The structure, shown at the bottom, is clearly AABBCCDDAB.

Figure 6 applies the same techniques to a pop song [17] with considerable repetition. Not all of the song structure was recovered because the repetitions are only approximate; however, the analysis shows a structure that is clearly different from the earlier pieces by Coltrane and Beethoven.

Figure 6. Analysis of a pop song, showing many repetitions of a single segment. Similar segments exist around 20s and 60s, but this similarity was not detected.
9.3 Polyphonic Transcription and Algorithm 3

So far, polyphonic transcription has not yielded results as good as anticipated. Recall that we first transcribe a piece of music and then construct a harmonic analysis, so the final representation is a sequence of frames, where each frame is a chord. When we listen to the transcriptions, we can hear the original notes and harmony clearly even though many errors are apparent. Similarly, the harmonic analysis of the transcription seems to retain the harmonic structure of the original music. However, the resulting representation does not seem to have clear patterns that are detectable using Algorithm 3. On the other hand, using synthetic data, Algorithm 3 successfully finds matching segments.

The observed problems are probably due to many factors. The analysis often reports different chords when the music is similar; for example, an A minor chord in one segment and C major in the other. Since these chords have 2 pitch classes in common and 2 that are different, σ(Amin, Cmaj) = 0, whereas σ(Cmaj, Cmaj) = 3. Perhaps there is a better similarity function that gives less penalty for plausible chord substitutions. In addition, chord progressions in tonal music tend to use common tones and are based on the 7-note diatonic scale. This tends to make any two chords chosen at random from a given piece of music more similar, leading to false positives. Sometimes Algorithm 3 identifies two segments that have the same single chord, even though the segments are not otherwise similar. A better similarity metric that requires more context might help here. Also, there is a fine line between similar and dissimilar segments, so finding a good value for the bias constant c is difficult. Finally, the harmonic analysis may be removing useful information along with the noise.

To get a better idea of the information content of this representation, Figure 7 is based on an analysis of Let It Be performed by the Beatles [13], using polyphonic analysis and chord labeling. After a piano introduction, the vocal melody starts at about 13.5s and finishes the first 4 measures at about 27s. This phrase is repeated throughout the song, so it is interesting to match this known segment against the entire song. Starting at every possible offset, we can search for the best alignment with the score and plot the distance (negative similarity). The distance is zero at 13.5s because the segment matches itself perfectly. The segment repeats almost exactly at about 27s, which appears as a downward spike at 27s. From the graph, it is apparent that the segment also appears with the repetition several other times, as indicated by pairs of downward spikes in the figure.

Figure 7. Correlation with the first 4 bars (distance vs. time offset in seconds). The segment from 13.5s to 27s is aligned at every point in the score and the distance is plotted. Downward spikes indicate similar segments, of which there are several.

Figure 7 gives a clear indication that the representation contains information and in fact is finding structure within the music; otherwise, the figure would appear random. Further work is required to use this information to reliably detect similar segments.
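The Figure 7 experiment can be approximated with a simple scan (a sketch under our own assumptions: a plain frame-by-frame distance stands in for the full alignment used in the paper, and the frame rate is invented for the example).

```python
import numpy as np

def segment_scan(frames, start, end):
    """Compare the segment frames[start:end] against every offset in the
    piece and return one distance per offset; downward spikes mark
    likely repetitions."""
    template = frames[start:end]
    length = len(template)
    distances = []
    for offset in range(len(frames) - length + 1):
        window = frames[offset:offset + length]
        distances.append(float(np.mean(np.linalg.norm(window - template, axis=1))))
    return distances

frames = np.random.rand(500, 12)          # placeholder chord/chroma frames
d = segment_scan(frames, 54, 108)         # e.g., the 13.5s-27s phrase at an assumed 4 frames/s
print(int(np.argmin(d)))                  # 54: the segment matches itself exactly
```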
10. SUMMARY AND CONCLUSIONS

Music audio presents very difficult problems for music analysis and processing because it contains virtually no structure that is immediately accessible to computers. Unless we solve the complete problem of auditory perception and human intelligence, we must consider more focused efforts to derive structure from audio. In this work, we constructed programs that listen to music, recognize repeated patterns, and explain the music in terms of these patterns.

Several techniques can be used to derive a music representation that allows similarity comparison. Monophonic transcription works well if the music consists primarily of one monophonic instrument. Chroma is a simplification of the spectrum and applies to polyphonic material. Polyphonic transcription simplified by harmonic analysis offers another, higher-level representation. Three algorithms for efficiently searching for similar patterns were presented. One of these works with note-level representations from monophonic transcriptions, and two work with frame-based representations. We demonstrate through examples that the monophonic and chroma analysis techniques recover a significant, and in some cases essentially complete, top-level structure from audio input.

We find it encouraging that these techniques apply to a range of music, including jazz, classical, and popular recordings. Of course, not all music will work as well as our examples. In particular, through-composed music that develops and transforms musical material rather than simply repeating it cannot be analyzed with our systems. This includes improvised jazz and rock soloing, many vocal styles, and most art music. In spite of these difficulties, we believe the premise that listening is based on recognition of repetition and transformation is still valid. The challenge is to recognize repetition and transformation even when it is not so obvious.

Several areas remain for future work. We are working to better understand the polyphonic transcription data and harmonic analysis, which offer great promise for finding similarity in the face of musical variations. It would be nice to have a formal model that could help to resolve structural ambiguities. For example, a model could enable us to search for patterns and clusters that give the best global explanation for observed similarities. Another enhancement to our work would be the use of hierarchy in explanations. This would, for example, support a two-level explanation of the bridge in Naima. It would be interesting to combine data from beat tracking, key analysis, and other techniques to obtain a more accurate view of music structure. Finally, it would be interesting to find relationships other than repetition. Transposition of small phrases is a common relationship within melodies, but we do not presently detect anything other than repetition. Transposition often occurs in very short sequences, so a good model of musical sequence comparison that incorporates rhythm, harmony, and pitch seems to be necessary to separate random matches from intentional ones.

In conclusion, we offer a set of new techniques and our experience using them to analyze music audio, obtaining structural descriptions. These descriptions are based entirely on the music and its internal structure of similar patterns. Our results suggest this approach is promising for a variety of music processing tasks, including music search, where programs must derive high-level structures and features directly from audio representations.
11. ACKNOWLEDGMENTS

This work was supported by an award from the National Science Foundation. Ann Lewis assisted in the preparation and processing of data. Matija Marolt offered the use of his SONIC transcription software, which enabled us to explore the use of polyphonic transcription for music analysis. We would also like to thank Bryan Pardo for his Harman program and assistance using it. Finally, we thank our other colleagues at the University of Michigan for their collaboration and many stimulating conversations.

12. REFERENCES

[1] Birmingham, W.P., Dannenberg, R.B., Wakefield, G.H., Bartsch, M., Bykowski, D., Mazzoni, D., Meek, C., Mellody, M., and Rand, W. MUSART: Music Retrieval Via Aural Queries. In International Symposium on Music Information Retrieval (Bloomington, Indiana, 2001).
[2] Coltrane, J. Naima. On Giant Steps, Atlantic Records.
[3] Conklin, D. and Anagnostopoulou, C. Representation and Discovery of Multiple Viewpoint Patterns. In Proceedings of the 2001 International Computer Music Conference (2001), International Computer Music Association.
[4] Cope, D. Experiments in Musical Intelligence. A-R Editions, Inc., Madison, Wisconsin.
[5] Dannenberg, R.B. Listening to "Naima": An Automated Structural Analysis from Recorded Audio, 2002 (in review).
[6] Dannenberg, R.B. An On-Line Algorithm for Real-Time Accompaniment. In Proceedings of the 1984 International Computer Music Conference (Paris, 1984), International Computer Music Association.
[7] Dannenberg, R.B. and Hu, N. Discovering Musical Structure in Audio Recordings, 2002 (in review).
[8] Foote, J. and Cooper, M. Visualizing Musical Structure and Rhythm via Self-Similarity. In Proceedings of the 2001 International Computer Music Conference (Havana, Cuba, 2001), International Computer Music Association.
[9] Hewlett, W. and Selfridge-Field, E. (eds.). Melodic Similarity: Concepts, Procedures, and Applications. MIT Press, Cambridge.
[10] Hu, N. and Dannenberg, R.B. A Comparison of Melodic Database Retrieval Techniques Using Sung Queries. In Joint Conference on Digital Libraries (2002), Association for Computing Machinery.
[11] Lartillot, O., Dubnov, S., Assayag, G., and Bejerano, G. Automatic Modeling of Musical Style. In Proceedings of the 2001 International Computer Music Conference (2001), International Computer Music Association.
[12] Marolt, M. SONIC: Transcription of Polyphonic Piano Music With Neural Networks. In Workshop on Current Research Directions in Computer Music (Barcelona, 2001), Audiovisual Institute, Pompeu Fabra University.
[13] McCartney, P. Let It Be. On Let It Be, Apple Records.
[14] Meek, C. and Birmingham, W.P. Thematic Extractor. In 2nd Annual International Symposium on Music Information Retrieval (Bloomington, Indiana, 2001), Indiana University.
[15] Mongeau, M. and Sankoff, D. Comparison of Musical Sequences. In Hewlett, W. and Selfridge-Field, E. (eds.), Melodic Similarity: Concepts, Procedures, and Applications, MIT Press, Cambridge.
[16] Mont-Reynaud, B. and Goldstein, M. On Finding Rhythmic Patterns in Musical Lines. In Proceedings of the International Computer Music Conference 1985 (Vancouver, 1985), International Computer Music Association.
[17] Mumba, S. Baby Come On Over. On Baby Come On Over (CD Single), Polydor.
[18] Narmour, E. Music Expectation by Cognitive Rule-Mapping. Music Perception, 17 (3).
[19] Pardo, B. Algorithms for Chordal Analysis. Computer Music Journal, 26 (2) (in press).
[20] Roads, C. Autocorrelation Pitch Detection. In The Computer Music Tutorial, MIT Press, 1996.
[21] Rolland, P.-Y. and Ganascia, J.-G. Musical Pattern Extraction and Similarity Assessment. In Miranda, E. (ed.), Readings in Music and Artificial Intelligence, Harwood Academic Publishers, 2000.
[22] Sankoff, D. and Kruskal, J.B. Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison. Addison-Wesley, Reading, MA.
[23] Simon, H.A. and Sumner, R.K. Pattern in Music. In Kleinmuntz, B. (ed.), Formal Representation of Human Judgment, Wiley, New York.
[24] Stammen, D. and Pennycook, B. Real-Time Recognition of Melodic Fragments Using the Dynamic Timewarp Algorithm. In Proceedings of the 1993 International Computer Music Conference (Tokyo, 1993), International Computer Music Association.

Notes

1. To understand why, assume there are similar segments ((i, k), (j, l)) that overlap, i.e. 0 ≤ i < j ≤ k < l < n. Then there is some subsegment of (i, k), which we will call (i, m), m < k, corresponding to the overlapping region (j, k), and some subsegment of (j, l), which we will call (p, l), p > j, also corresponding to (j, k). Thus, there are three similar segments (i, m), (j, k), and (p, l) that provide an alternate structure to the original overlapping pair. In general, a shorter, more frequent pattern is preferable, so we do not search for overlapping patterns.

2. An implementation note: for each pair of similar segments, the starting points are implied by the coordinates i, j, but we need to store durations. Since we only search half of the matrix due to symmetry, we store one duration at location (i, j) and the other at (j, i).
Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay
Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 17 Shannon-Fano-Elias Coding and Introduction to Arithmetic Coding
Silver Burdett Making Music
A Correlation of Silver Burdett Making Music Model Content Standards for Music INTRODUCTION This document shows how meets the Model Content Standards for Music. Page references are Teacher s Edition. Lessons
Automatic Transcription: An Enabling Technology for Music Analysis
Automatic Transcription: An Enabling Technology for Music Analysis Simon Dixon [email protected] Centre for Digital Music School of Electronic Engineering and Computer Science Queen Mary University
A Sound Analysis and Synthesis System for Generating an Instrumental Piri Song
, pp.347-354 http://dx.doi.org/10.14257/ijmue.2014.9.8.32 A Sound Analysis and Synthesis System for Generating an Instrumental Piri Song Myeongsu Kang and Jong-Myon Kim School of Electrical Engineering,
Rameau: A System for Automatic Harmonic Analysis
Rameau: A System for Automatic Harmonic Analysis Genos Computer Music Research Group Federal University of Bahia, Brazil ICMC 2008 1 How the system works The input: LilyPond format \score {
Assessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall
Automatic Photo Quality Assessment Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Estimating i the photorealism of images: Distinguishing i i paintings from photographs h Florin
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
C Major F Major G Major A Minor
For this task you will create a 16 bar composition with a Ground Bass Accompaniment. REMINDER A chord is built on the notes 1 3 5 of a scale. e.g. chord of C would have the notes C E G. The first note
COMS 4115 Programming Languages and Translators Fall 2013 Professor Edwards. Lullabyte
COMS 4115 Programming Languages and Translators Fall 2013 Professor Edwards Lullabyte Stanley Chang (cc3527), Louis Croce (ljc2154), Nathan Hayes-Roth (nbh2113), Andrew Langdon (arl2178), Ben Nappier (ben2113),
Tutorial for proteome data analysis using the Perseus software platform
Tutorial for proteome data analysis using the Perseus software platform Laboratory of Mass Spectrometry, LNBio, CNPEM Tutorial version 1.0, January 2014. Note: This tutorial was written based on the information
UNIT DESCRIPTIONS: BACHELOR OF MUSIC (CMP) STAGE 1: FOUNDATION STAGE (TRIMESTER 1)
STAGE 1: FOUNDATION STAGE (TRIMESTER 1) COMPOSITION & MUSIC PRODUCTION 1 (A2CM1) Develops foundation concepts and skills for organising musical ideas, and standard production techniques for converting
Lesson Plans: Stage 3 - Module One
Lesson Plans: Stage 3 - Module One TM Music Completes the Child 1 Stage Three Module One Contents Week One Music Time Song 2 Concept Development Focus: Tempo 4 Song Clap Your Hands 5 George the Giant Pitch
