iOS Audio Programming Guide

A Guide to Developing Applications for Real-time Audio Processing and Playback

iZotope, Inc.
This document is meant as a guide to developing applications for real-time audio processing and playback in iOS, using Core Audio services. This guide presents a broad overview of audio in iOS, as well as specific details regarding the implementation of various audio programming strategies. Important terms to be familiar with are shown in bold and defined in the glossary at the back of this document. Code samples and names of files containing code are set off from the surrounding text. Code is taken from Apple's sample iOS applications as well as the iZotope audio effect sample application, distributed with each iZotope iOS SDK.

Who is this guide for?

This guide is meant to be a resource for developers who are new to Core Audio on iOS. It will be most useful to programmers who are experienced in iOS development but not in audio technology, or to audio programmers who are not experienced in iOS development. As such, it is highly recommended that the reader have an understanding of the basic concepts of both iOS development and digital signal processing before consulting this guide. For more information on DSP, or on iOS development and audio programming, take a look at the Appendix - Introductory DSP Tutorial and Additional Help and Resources sections of this guide to find useful information, as well as links to other resources on the subject.

Introduction

Audio processing in iOS is done through Core Audio. Within Core Audio, there are two major strategies that allow an application to process audio data directly and play it back through the iPhone or iPad hardware, and both will be covered in detail in the main body of this document.

1. Audio queues are consecutive buffers of audio that are passed to the hardware and then reused. There are typically three buffers in a queue, and processing is done via a callback function, which is handed a buffer of audio that needs to be filled in time for it to be output to the iPhone/iPad hardware.
2. Audio units are components like mixers, equalizers, and input/output units, which take audio input and produce audio output. In iOS, built-in audio units are connected in audio processing graphs, wired together in the same way hardware components are connected. An audio input is provided to the processing graph, which then processes the audio before handing it off to the hardware.

The above methods do all the work of interacting with hardware and audio drivers so that an app's code can use a simple interface to handle audio, and both methods have several qualities in common.

1. Working with digital audio on any system requires working knowledge of the basic topics of digital signal processing. For more information on DSP, take a look at the Appendix - Introductory DSP Tutorial section of this document, which covers most of the basics.

2. Audio format information is also crucial. The channel count, sampling rate, bit depth, and other formatting information are all contained in audio files along with the audio data itself. When working with audio data in iOS, this format information is critical for processing and playing sound accurately. Apple provides a structure for communicating this information, the AudioStreamBasicDescription struct, and the strategies described in this guide rely heavily on it (a sketch of filling one out appears at the end of this introduction). Much of the work that goes into handling audio in iOS is in designing code that can correctly deal with a variety of audio formats.

3. Processing digital audio requires working with buffers. A buffer is just a short segment of samples whose length is convenient for processing, usually a power of two. Buffers are time efficient because processing functions are called only when a buffer is needed, and they're space efficient because they don't occupy a huge amount of memory. The strategies for audio processing in iOS involve gathering buffers of audio from audio files or microphone input, and providing individual buffers to objects that will pass them off to the hardware.
4. An application provides buffers whenever they are requested, using a callback. Callbacks for audio input take a few arguments, including a pointer to a buffer to be filled and an object or struct containing additional information that is useful for processing. This is where the digital audio is handed off, to be passed on to the audio hardware. Because the callback has direct access to a buffer of audio, it is possible to perform sound synthesis or processing at the individual sample level. This is also where any iZotope audio processing will take place.

This document explains audio queues and audio units by detailing the implementation of an audio playback application with both strategies. Other strategies for audio in iOS do not allow direct manipulation of audio at the sample level, so they won't be covered here. These strategies include simple frameworks for playing and recording sounds, like AVAudioPlayer and MPMediaPlayer. There is also a more complex framework for rendering 3D sound environments, called OpenAL, which is likewise not covered in detail within this document.
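As promised above, here is a minimal sketch of filling out an AudioStreamBasicDescription for 44.1 kHz, 16-bit, stereo, interleaved linear PCM. This example is not from the sample applications; the field names and constants are from CoreAudioTypes.h.

    #include <AudioToolbox/AudioToolbox.h>

    AudioStreamBasicDescription format = { 0 };
    format.mSampleRate       = 44100.0;                 // frames per second
    format.mFormatID         = kAudioFormatLinearPCM;   // uncompressed PCM
    format.mFormatFlags      = kAudioFormatFlagIsSignedInteger
                             | kAudioFormatFlagIsPacked;
    format.mChannelsPerFrame = 2;                       // stereo
    format.mBitsPerChannel   = 16;                      // bit depth
    format.mBytesPerFrame    = 2 * sizeof( SInt16 );    // interleaved: both channels in each frame
    format.mFramesPerPacket  = 1;                       // always 1 for linear PCM
    format.mBytesPerPacket   = format.mBytesPerFrame;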
Contents

1. Audio Queues
   i. Introduction to Audio Queue Services
   ii. Designing a Helper Class
   iii. Initialization
   iv. Setup and Cleanup
   v. Audio Queue Callback Functionality
   vi. Running the Audio Queue
2. Audio Units
   i. Introduction to Audio Units
   ii. The MixerHostAudio Class
   iii. Initialization
   iv. Audio Unit Input Callback Functionality
   v. Setting Audio Unit Parameters
   vi. Running the Audio Processing Graph
   vii. aurioTouch
Glossary
Additional Help and Resources
Appendix - Introductory DSP Tutorial
1. Audio Queues

i. Introduction to Audio Queue Services

An audio queue is an object used in Mac OS X and iOS for recording and playing audio. It is a short queue of buffers that are constantly being filled with audio, passed to the audio hardware, and then filled again. An audio queue fills its buffers one at a time and then does the work of turning that data into sound. Audio queues handle interaction with codecs and hardware so that an app's code can interact with audio at a high level.

The best resource for understanding audio queue programming is the Audio Queue Services Programming Guide in the Mac OS X Developer Library. Reading that document for a full understanding of Audio Queue Services is highly recommended for anyone planning to use audio in an iOS application. Additionally, documentation of specific functions can be found in the Audio Queue Services Reference, and a list of further references can be found in the last section of this chapter.

Audio queues are defined in AudioQueue.h in the AudioToolbox framework. Most of the functions explained here are defined in AudioQueue.h or in AudioFile.h. Functions in AudioQueue.h have the prefix AudioQueue, and functions in AudioFile.h have the prefix AudioFile.

Why Use Audio Queue Services?

Audio queues have a number of benefits that make them a good choice for audio processing.

1. They are the highest-level structure that allows direct processing of audio.
2. They are well documented and straightforward to use.
3. They allow precise scheduling and synchronized playback of sounds.

However, they have higher latency and are not as versatile as audio units.
ii. Designing a Helper Class

The easiest way to work with Audio Queue Services in iOS is to create an Objective-C, C++, or Objective-C++ class in a project that performs all interactions with audio queues. Such an object needs several components to work properly, including interface functions, member variables, initialization functions, and a callback function. This example uses a C++ class, AQPlayer, to handle all the audio queue functionality of its application. Bear in mind that this class is simply one way to get up and running with Audio Queue Services, not the only way. However, it is a good starting point for anyone who has never worked with audio queues before to begin developing powerful audio applications.

Interface

The AQPlayer class has simple public member functions for interaction with the rest of the application.

1. The class creates and destroys AQPlayer objects with a constructor AQPlayer() and a destructor ~AQPlayer().
2. To initialize an audio queue, there is a function CreateQueueForFile() that takes a path to an audio file in the form of a CFURLRef.
3. To start, stop, and pause audio queues, there are the functions StartQueue(), StopQueue(), and PauseQueue().
4. To dispose of an audio queue, there is the DisposeQueue() function.
5. There are also various functions to set and return the values of member variables, which keep track of the state of the audio queue.

Member Variables

To keep track of state in AQPlayer, several variables are required. Apple suggests using a custom struct that contains these, but in this example it is easier to just make them member variables of the C++ class AQPlayer. For clarity, all member variables start with m. Variables of the following types will likely be needed for an app dealing with audio data:
AudioStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[kNumberBuffers];
AudioFileID mAudioFile;
SInt64 mCurrentPacket;
UInt32 mNumPacketsToRead;
AudioStreamPacketDescription *mPacketDescs;
bool mIsDone;

Here's a brief description of each:

AudioStreamBasicDescription mDataFormat - The audio data format of the file being played. This includes information such as sample rate, number of channels, and so on.

AudioQueueRef mQueue - A reference to the audio queue being used for playback.

AudioQueueBufferRef mBuffers[kNumberBuffers] - The buffers used by the audio queue. The standard value for kNumberBuffers is 3, so that one buffer can be filled as another is in use, with an extra in case there's some lagging. In the callback, the audio queue passes the individual buffer it wants filled; mBuffers is just used to allocate and prime the buffers. This underlines that an audio queue really is just a queue of buffers that get filled up and wait in line to be output to the hardware.

AudioFileID mAudioFile - The file being played from.

SInt64 mCurrentPacket - The index of the next packet of audio in the audio file. This is incremented each time the callback reads from the file.

UInt32 mNumPacketsToRead - The number of packets to read each time the callback is called. This is also calculated after the audio queue is created.

AudioStreamPacketDescription *mPacketDescs - For variable bit rate (VBR) data, this is an array of packet descriptions for the audio file. For constant bit rate files, this can be NULL.
bool mIsDone - Whether the audio queue is currently running.

It may be helpful to keep track of other information, such as whether the queue is initialized, whether it should loop playback, and so on. The only thing the constructor, AQPlayer(), needs to do is initialize these member variables to appropriate values.

Audio Queue Setup

Specific code is required to open audio files and to create and set up audio queues. The general process consists of:

1. Opening an audio file to determine its format.
2. Creating an audio queue.
3. Calculating buffer size.
4. Allocating buffers.
5. Some additional setup for the audio queue, like setting magic cookies, channel layout, and volume.

The details of the audio queue setup are covered in the Initialization section of this document, and other setup is described in the Setup and Cleanup section.

Audio Queue Callback

The final requirement is an audio queue callback function that the audio queue will call when it needs a buffer filled. The callback is where audio is handed off from the application to the audio queue. To keep things simple, it makes sense to implement it as a member function of the C++ AQPlayer class, but in general the callback can be any C function. One of the arguments to the callback is a void* which can be used to point to any struct or object that holds the variables for managing state. In this example, the void* argument will be a pointer to the AQPlayer object to allow access to its member variables. The details of what the callback does are covered in the Audio Queue Callback Functionality section of this document.
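Putting the pieces together, one possible declaration for the AQPlayer class looks like the sketch below. This is an illustration, not the exact header from the sample application; the mIsRunning member anticipates the property listener described later.

    #include <AudioToolbox/AudioToolbox.h>

    const int kNumberBuffers = 3;   // one filling, one playing, one spare

    class AQPlayer {
    public:
        AQPlayer();
        ~AQPlayer();

        void CreateQueueForFile( CFURLRef inFileURL );
        void StartQueue();
        void StopQueue();
        void PauseQueue();
        void DisposeQueue( bool inDisposeFile );
        bool IsDone() const { return mIsDone; }

    private:
        static void AQBufferCallback( void* inAQObject, AudioQueueRef inAQ,
                                      AudioQueueBufferRef inBuffer );
        static void isRunningProc( void* inAQObject, AudioQueueRef inAQ,
                                   AudioQueuePropertyID inID );
        void SetupNewQueue();
        void CalculateBytesForTime( AudioStreamBasicDescription &inDesc,
                                    UInt32 inMaxPacketSize, Float64 inSeconds,
                                    UInt32 *outBufferSize,
                                    UInt32 *outNumPacketsToRead );

        AudioStreamBasicDescription   mDataFormat;
        AudioQueueRef                 mQueue;
        AudioQueueBufferRef           mBuffers[kNumberBuffers];
        AudioFileID                   mAudioFile;
        SInt64                        mCurrentPacket;
        UInt32                        mNumPacketsToRead;
        AudioStreamPacketDescription* mPacketDescs;
        bool                          mIsDone;
        bool                          mIsRunning;   // updated by isRunningProc
    };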
iii. Initialization

Initialization functions are needed to create an audio queue for a given audio file. In this case, initialization is performed in a member function of the AQPlayer class, CreateQueueForFile().

Opening an Audio File

Before working with an audio queue, an audio file needs to be opened so its format can be provided when the audio queue is created. To get the format, do the following:

1. Create a CFURLRef that points to the file. One way to do this is with CFURLCreateFromFileSystemRepresentation(). A typical call looks like this:

CFURLRef audioFileURL = CFURLCreateFromFileSystemRepresentation( NULL, filePath, strlen( filePath ), false );

Variable descriptions:

NULL - The function will use the default memory allocator.
filePath - A char* with the full path of the audio file.
strlen( filePath ) - The length of the path.
false - The path is not for a directory, but for a single file.

In this example, a CFURLRef is created outside the AQPlayer's member functions, and then passed to CreateQueueForFile().

2. Open the file using AudioFileOpenURL(). Here is a typical call:

OSStatus result = AudioFileOpenURL( audioFileURL, fsRdPerm, 0, &mAudioFile );
Variable descriptions:

audioFileURL - The CFURLRef that was just created.
fsRdPerm - Requests read permission (rather than write, read/write, etc.) for the file.
0 - Don't use a file type hint.
&mAudioFile - A pointer to an AudioFileID that will represent the file.

After finishing with the file's CFURLRef, it is important to release it using CFRelease( audioFileURL ) to prevent a memory leak.

3. Get the file's audio data format using AudioFileGetProperty(). Here is a typical call:

UInt32 dataFormatSize = sizeof( mDataFormat );
AudioFileGetProperty( mAudioFile, kAudioFilePropertyDataFormat, &dataFormatSize, &mDataFormat );

Variable descriptions:

mAudioFile - The AudioFileID for the file, obtained from AudioFileOpenURL().
kAudioFilePropertyDataFormat - The desired property is the data format.
&dataFormatSize - A pointer to an integer containing the size of the data format.
&mDataFormat - A pointer to an AudioStreamBasicDescription. This has information like the sample rate, number of channels, and so on.
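Assembled into one helper, the three steps above might look like the following sketch, with the error checks the text omits for brevity. The helper name OpenAudioFile() is an illustration, not part of the sample code.

    // Open an audio file and fetch its data format; returns noErr on success.
    OSStatus OpenAudioFile( const char* filePath, AudioFileID* outFile,
                            AudioStreamBasicDescription* outFormat )
    {
        CFURLRef url = CFURLCreateFromFileSystemRepresentation(
            NULL, (const UInt8*)filePath, strlen( filePath ), false );
        if( url == NULL ) return kAudioFileUnspecifiedError;

        OSStatus result = AudioFileOpenURL( url, fsRdPerm, 0, outFile );
        CFRelease( url );                       // release the URL either way
        if( result != noErr ) return result;

        UInt32 size = sizeof( *outFormat );
        return AudioFileGetProperty( *outFile, kAudioFilePropertyDataFormat,
                                     &size, outFormat );
    }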
Creating the Audio Queue

Once the audio file's format is known, it's possible to create an audio queue. For playback, AudioQueueNewOutput() is used (a different function is used with audio queues for recording). Here's what a call looks like:

AudioQueueNewOutput( &mDataFormat, AQBufferCallback, this, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &mQueue );

Variable descriptions:

&mDataFormat - A pointer to the data format of the file being played. The audio queue will be initialized with the right file type, number of channels, and so on.
AQBufferCallback - The callback function. After the function is registered here and the audio queue is started, the queue will begin to call the callback.
this - The AQPlayer object. It is passed here so that it will be passed as the void* argument to our callback. Note that this is a C++ keyword; with an Objective-C object, self would be passed here, and with a C struct, a pointer to the struct.
CFRunLoopGetCurrent() - Gets the run loop from the current thread so that the audio queue can use it for its callbacks and any listener functions.
kCFRunLoopCommonModes - Allows the callback to be invoked in normal run loop modes.
0 - Reserved by Apple. Must be 0.
&mQueue - A pointer to the audio queue being initialized.
Setup

After creating the audio queue, the CreateQueueForFile() function calls SetupNewQueue() to perform some additional setup, as explained in the Setup and Cleanup section.

iv. Setup and Cleanup

Before using the audio queue, some setup is required. Again, a member function of the AQPlayer class, SetupNewQueue(), can be used, along with a few helper functions.

Setting the Buffer Size

Start by calculating the buffer size, in bytes, desired for each audio queue buffer, along with the corresponding number of packets to read from the audio file each time the callback is called. Buffers are just arrays of audio samples, and buffer size is an important value in determining the efficiency of an audio processing application. For live audio processing, a small buffer size is typically used (around 256 to 2048 bytes). This minimizes latency, the time delay caused by processing each buffer. On the other hand, the shorter the buffer, the more frequently the callback will need to be called, so a balance must be struck.

To find a good buffer size, an approximate maximum packet size can be obtained from the audio file using AudioFileGetProperty(), and then a function to calculate the buffer size for a given duration can be created. Write a function similar to CalculateBytesForTime() to calculate the buffer size. It is possible to just set a buffer size regardless of file format, but it makes sense to use a buffer size based on the packet size if the audio file uses packets. This will be another member function of the AQPlayer class. Here's what the definition looks like:
void AQPlayer::CalculateBytesForTime( AudioStreamBasicDescription &inDesc, UInt32 inMaxPacketSize, Float64 inSeconds, UInt32 *outBufferSize, UInt32 *outNumPacketsToRead ) { }

Variable descriptions:

&inDesc - A reference to the file's audio format description.
inMaxPacketSize - An upper bound on the packet size.
inSeconds - The duration in seconds for the buffer size to approximate.
*outBufferSize - The buffer size being returned.
*outNumPacketsToRead - The number of packets to read each time the callback is called, which will also be calculated.

Here's what is done in the body of the function.

1. First, a reasonable maximum and minimum buffer size is chosen. For live audio processing, small values like 2048 and 256 bytes are used:

static const int maxBufferSize = 0x800;
static const int minBufferSize = 0x100;

2. Next, see if the audio format has a fixed number of frames per packet and set the buffer size appropriately if it does. If it doesn't, just use a default value for buffer size from the maximum buffer size or maximum packet size:
if( inDesc.mFramesPerPacket != 0 ) {
    Float64 numPacketsForTime = inDesc.mSampleRate / inDesc.mFramesPerPacket * inSeconds;
    *outBufferSize = numPacketsForTime * inMaxPacketSize;
} else {
    *outBufferSize = maxBufferSize > inMaxPacketSize ? maxBufferSize : inMaxPacketSize;
}

3. Then limit the buffer size to the maximum and minimum:

if( *outBufferSize > maxBufferSize && *outBufferSize > inMaxPacketSize ) {
    *outBufferSize = maxBufferSize;
} else {
    if( *outBufferSize < minBufferSize )
        *outBufferSize = minBufferSize;
}

4. Finally, calculate the number of packets to read in each callback:

*outNumPacketsToRead = *outBufferSize / inMaxPacketSize;

Next, get the upper bound for packet size using AudioFileGetProperty(). The call looks like this:

AudioFileGetProperty( mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize );

Variable descriptions:

mAudioFile - The AudioFileID for the file.
kAudioFilePropertyPacketSizeUpperBound - The desired property is the packet size upper bound, a conservative estimate for the maximum packet size.
&size - A pointer to an integer with the size of the desired property (set to sizeof( maxPacketSize )).
&maxPacketSize - A pointer to an integer where the maximum packet size will be stored on output.

Use CalculateBytesForTime() to set the buffer size and number of packets to read. Here's what the call looks like:

UInt32 bufferByteSize;
CalculateBytesForTime( mDataFormat, maxPacketSize, kBufferDurationSeconds, &bufferByteSize, &mNumPacketsToRead );

Variable descriptions:

mDataFormat - The audio data format of the file being played.
maxPacketSize - The maximum packet size found with AudioFileGetProperty().
kBufferDurationSeconds - A set duration for the buffer to match, approximately. In this example, the duration of the maximum buffer size is used (the buffer size in bytes divided by the sample rate).
&bufferByteSize - The buffer size in bytes to be set.
&mNumPacketsToRead - The member variable where the number of packets for the callback function to read from the audio file is stored.

Allocating Buffers

The audio queue's buffers are allocated manually. First, it is determined whether the file is in a variable bit rate format, in which case the packet descriptions in the buffers will be needed.
bool isFormatVBR = ( mDataFormat.mBytesPerPacket == 0 || mDataFormat.mFramesPerPacket == 0 );
UInt32 numPacketDescs = isFormatVBR ? mNumPacketsToRead : 0;

Then the audio queue's buffers are looped through and each one is allocated using AudioQueueAllocateBufferWithPacketDescriptions(), as follows:

for( int iBuf = 0; iBuf < kNumberBuffers; ++iBuf ) {
    AudioQueueAllocateBufferWithPacketDescriptions( mQueue, bufferByteSize, numPacketDescs, &mBuffers[iBuf] );
}

Variable descriptions:

mQueue - The audio queue.
bufferByteSize - The buffer size in bytes calculated above.
numPacketDescs - 0 for CBR (constant bit rate) data, otherwise the number of packets per call to the callback.
&mBuffers[iBuf] - A pointer to the current audio queue buffer. kNumberBuffers is the number of audio queue buffers, which has already been set to 3.

If VBR (variable bit rate) data won't be used, it is possible to just use AudioQueueAllocateBuffer(), which takes all the same arguments except for numPacketDescs.

Audio File Properties

Some audio files have additional properties that provide necessary information to an audio queue, such as magic cookies and channel layout.
Setting a magic cookie

Some compressed audio formats store information about the file in a structure called a magic cookie. If the file being played has a magic cookie, it needs to be provided to the audio queue. The following code checks whether there is a magic cookie and gives it to the audio queue if there is.

First, check whether the audio file has a magic cookie property:

size = sizeof( UInt32 );
OSStatus result = AudioFileGetPropertyInfo( mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL );

If it does, get the property from the audio file and set it in the audio queue:

if( result == noErr && size > 0 ) {
    char* cookie = new char [size];
    AudioFileGetProperty( mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie );
    AudioQueueSetProperty( mQueue, kAudioQueueProperty_MagicCookie, cookie, size );
    delete [] cookie;
}

Variable descriptions:

mAudioFile - The audio file.
kAudioFilePropertyMagicCookieData - The desired property is magic cookie data.
&size - Pointer to an integer with the size of the desired property.
cookie - Temporary storage for the magic cookie data.
mQueue - The audio queue.
kAudioQueueProperty_MagicCookie - The desired property to set is the queue's magic cookie.
Setting a channel layout

If an audio file has a channel layout property, the channel layout of the audio queue needs to be set. The channel layout property is normally used only for audio with more than two channels.

First, check whether the audio file has a channel layout property:

size = sizeof( UInt32 );
result = AudioFileGetPropertyInfo( mAudioFile, kAudioFilePropertyChannelLayout, &size, NULL );

If it does, get the property from the audio file and set it in the audio queue:

if( result == noErr && size > 0 ) {
    AudioChannelLayout* acl = static_cast<AudioChannelLayout*>( malloc( size ) );
    AudioFileGetProperty( mAudioFile, kAudioFilePropertyChannelLayout, &size, acl );
    AudioQueueSetProperty( mQueue, kAudioQueueProperty_ChannelLayout, acl, size );
    free( acl );
}

Variable descriptions:

mAudioFile - The audio file.
kAudioFilePropertyChannelLayout - The desired property is the channel layout.
&size - Pointer to an integer with the size of the desired property.
acl - Temporary storage for the channel layout data.
mQueue - The audio queue.
kAudioQueueProperty_ChannelLayout - The desired property to set is the queue's channel layout.
Property Listeners

A property listener is a function that the audio queue will call when one of its properties changes. Property listeners can be useful for tracking the state of the audio queue. For example, in this example there is a member function isRunningProc() that is called when the audio queue's IsRunning property changes. The function definition looks like this:

void AQPlayer::isRunningProc( void* inAQObject, AudioQueueRef inAQ, AudioQueuePropertyID inID ) { }

Variable descriptions:

void* inAQObject - A pointer to the AQPlayer object.
AudioQueueRef inAQ - The audio queue calling the property listener.
AudioQueuePropertyID inID - A value specifying the property that has changed.

In the body of the function, the value of the AQPlayer's mIsRunning member is updated. A property listener can take any action that would be appropriate when a specific property changes. As done in the callback, cast inAQObject as a pointer to an AQPlayer:

AQPlayer* THIS = static_cast<AQPlayer*>( inAQObject );

Then update the mIsRunning member using AudioQueueGetProperty():

UInt32 size = sizeof( THIS->mIsRunning );
OSStatus result = AudioQueueGetProperty( inAQ, kAudioQueueProperty_IsRunning, &THIS->mIsRunning, &size );
To add the property listener to the audio queue so that it will be called, use AudioQueueAddPropertyListener(). That is done in the SetupNewQueue() function. Here's what the call looks like:

AudioQueueAddPropertyListener( mQueue, kAudioQueueProperty_IsRunning, isRunningProc, this );

Variable descriptions:

mQueue - The audio queue.
kAudioQueueProperty_IsRunning - The desired property to listen for is IsRunning.
isRunningProc - The property listener function created above.
this - The AQPlayer object, which the audio queue will pass as the inAQObject argument to isRunningProc.

To remove a property listener, use AudioQueueRemovePropertyListener(), which takes the same arguments.

Setting Volume

Volume is an adjustable parameter of an audio queue (in fact, it is the only adjustable parameter), and to set it, use the AudioQueueSetParameter() function as follows:

AudioQueueSetParameter( mQueue, kAudioQueueParam_Volume, 1.0 );

Variable descriptions:

mQueue - The audio queue.
kAudioQueueParam_Volume - The parameter to set is volume.

The third argument is the volume as a floating point value from 0.0 to 1.0.
Cleanup

When finished with the audio queue, dispose of the queue and close the audio file. Disposing of an audio queue also disposes of its buffers. This is done in the member function DisposeQueue().

AudioQueueDispose( mQueue, true );

Variable descriptions:

mQueue - The audio queue.
true - Dispose immediately, rather than playing all queued buffers first. (This is a 'synchronous' action.)

The DisposeQueue() function takes a boolean inDisposeFile specifying whether to dispose of the audio file along with the queue. To close the audio file, use:

AudioFileClose( mAudioFile );

The destructor, ~AQPlayer(), just needs to call DisposeQueue( true ) to dispose of the audio queue and audio file.

v. Audio Queue Callback Functionality

To provide audio data to an audio queue, a playback audio queue callback function is created. The audio queue will call this function every time it needs a buffer of audio, and it's up to the callback to provide it. Normally the callback will read directly from a file. If an application performs its own processing or synthesis, the callback will also use previously created functions to provide audio. In this example, the callback is a member function of the AQPlayer class. A callback declaration looks like this:

void AQPlayer::AQBufferCallback( void* inAQObject, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer ) { }
Variable descriptions:

void* inAQObject - A pointer to any struct or object used to manage state. Typically this will be either a custom C struct or an Objective-C or C++ object handling the audio queue. In these examples, this is a pointer to the C++ AQPlayer object, which handles audio queues and stores all the necessary information in member variables.
AudioQueueRef inAQ - The audio queue calling the callback. The callback will place a buffer of audio on this queue.
AudioQueueBufferRef inBuffer - The buffer the audio queue is requesting be filled. Usually the callback will read from a file into the buffer and then enqueue the buffer.

Getting the Object

The first thing the app's callback should do is get a pointer to the struct or object that has all the state variables. In this case, just cast the void* inAQObject as a pointer to an AQPlayer using a static cast:

AQPlayer* THIS = static_cast<AQPlayer*>( inAQObject );

Now, to access the AQPlayer's member variables, just use the arrow operator: THIS->mAudioFile.

Reading From a File

Now the audio queue buffer should be filled with audio data from a file. To read from an audio file into an audio queue buffer, use the AudioFileReadPackets() function. Here's what a call looks like:

AudioFileReadPackets( THIS->mAudioFile, false, &numBytes, inBuffer->mPacketDescriptions, THIS->mCurrentPacket, &nPackets, inBuffer->mAudioData );
Variable descriptions:

THIS->mAudioFile - The audio file being played from.
false - Don't cache data.
&numBytes - Pointer to an integer that will give the number of bytes read.
inBuffer->mPacketDescriptions - The packet descriptions in the current buffer.
THIS->mCurrentPacket - The index of the next packet of audio.
&nPackets - Pointer to the number of packets to read. On output, it will tell how many were actually read.
inBuffer->mAudioData - The audio buffer that will be filled.

Processing

After filling an audio buffer from a file, it is possible to process the audio directly by manipulating the values in the buffer. And if the application uses synthesis, not playback, it is possible to fill the buffer directly without using a file. This is also where any iZotope effects would be applied. Details regarding the implementation of iZotope effects are distributed along with our effect libraries.

Enqueueing a Buffer

After filling a buffer with audio data, check whether any packets were actually read by looking at the value of nPackets. If no packets were read, the file has ended, so stop the audio queue and set the mIsDone variable to true. Otherwise, place the filled buffer in the audio queue using the AudioQueueEnqueueBuffer() function. A call to this function has the following form:

AudioQueueEnqueueBuffer( inAQ, inBuffer, numPackets, THIS->mPacketDescs );
Variable descriptions:

inAQ - The audio queue to place the buffer on.
inBuffer - The buffer to enqueue.
numPackets - The number of packets in the buffer. For constant bit rate audio formats, which do not need packet descriptions, this value should be 0.
THIS->mPacketDescs - The packet descriptions for the packets in the buffer. For constant bit rate audio formats, this value should be NULL.

Ending the Callback

The callback should also stop the audio queue when the audio file ends. To do this, check the value of nPackets after calling AudioFileReadPackets(). If nPackets is 0, set the mIsDone variable to true and stop the audio queue using the AudioQueueStop() function as follows:

THIS->mIsDone = true;
AudioQueueStop( inAQ, false );

Variable descriptions:

inAQ - The audio queue being used.
false - Do not stop immediately; instead, play all queued buffers and then stop. (This is an 'asynchronous' action.)

(This example also checks the mIsLooping member variable and, if it is true, resets the current packet index so that the file will loop.)
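Assembled, the whole callback might look like the sketch below. It follows the pattern in Apple's Audio Queue Services Programming Guide, reading packet descriptions into mPacketDescs; the early-out on mIsDone and the comments are illustrative additions.

    void AQPlayer::AQBufferCallback( void* inAQObject, AudioQueueRef inAQ,
                                     AudioQueueBufferRef inBuffer )
    {
        AQPlayer* THIS = static_cast<AQPlayer*>( inAQObject );
        if( THIS->mIsDone ) return;

        UInt32 numBytes = 0;
        UInt32 nPackets = THIS->mNumPacketsToRead;
        AudioFileReadPackets( THIS->mAudioFile, false, &numBytes,
                              THIS->mPacketDescs, THIS->mCurrentPacket,
                              &nPackets, inBuffer->mAudioData );
        if( nPackets > 0 ) {
            inBuffer->mAudioDataByteSize = numBytes;  // tell the queue how much was written
            // ... any sample-level processing of inBuffer->mAudioData goes here ...
            AudioQueueEnqueueBuffer( inAQ, inBuffer,
                                     THIS->mPacketDescs ? nPackets : 0,
                                     THIS->mPacketDescs );
            THIS->mCurrentPacket += nPackets;         // advance through the file
        } else {
            // End of file: let the queued buffers drain, then stop.
            THIS->mIsDone = true;
            AudioQueueStop( inAQ, false );
        }
    }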
vi. Running the Audio Queue

Starting the Queue

Now that all the necessary information is stored in variables within the AQPlayer class, along with proper initialization and a callback function, it is possible to actually start the audio queue. This is done in the member function StartQueue(). StartQueue() also checks whether a new audio queue needs to be created for the current audio file.

First, set the current packet index to 0 so playback starts at the beginning of the file:

mCurrentPacket = 0;

Next, prime the queue's buffers so that there is audio data in them when the queue starts. Otherwise, data from uninitialized buffers would briefly be heard instead of audio.

for( int iBuf = 0; iBuf < kNumberBuffers; ++iBuf ) {
    AQBufferCallback( this, mQueue, mBuffers[iBuf] );
}

Then start the queue using AudioQueueStart():

AudioQueueStart( mQueue, NULL );

Variable descriptions:

mQueue - The audio queue.
NULL - Start immediately. It is also possible to specify a start time with an AudioTimeStamp here.
Pausing the Queue

It is possible to pause the audio queue by using AudioQueuePause():

AudioQueuePause( mQueue );

This function pauses the audio queue provided to it. This is the only thing the PauseQueue() function in the AQPlayer does. The queue will start again when AudioQueueStart() is called.

Stopping the Queue

It is possible to stop the audio queue at any time with AudioQueueStop(), which is done in the StopQueue() function. The sample application simply stops the audio queue at the end of the audio file.

AudioQueueStop( mQueue, false );

Variable descriptions:

mQueue - The audio queue.
false - Do not stop immediately; instead, play all queued buffers and then stop. (This is an 'asynchronous' action.)
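To see how the pieces fit together from the application's side, here is a hypothetical usage sketch (Objective-C++). The file name loop.caf and the surrounding context are assumptions, not part of the sample application.

    NSString* path = [[NSBundle mainBundle] pathForResource: @"loop" ofType: @"caf"];
    CFURLRef url = CFURLCreateFromFileSystemRepresentation(
        NULL, (const UInt8*)[path fileSystemRepresentation],
        strlen( [path fileSystemRepresentation] ), false );

    AQPlayer* player = new AQPlayer();
    player->CreateQueueForFile( url );  // open the file, create and set up the queue
    CFRelease( url );

    player->StartQueue();               // primes the buffers, then starts playback
    // ... later, in response to the UI:
    player->PauseQueue();
    player->StartQueue();               // resume
    player->StopQueue();

    delete player;                      // ~AQPlayer() disposes of the queue and file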
2. Audio Units

i. Introduction to Audio Units

Audio units are a service used for audio processing at a lower level in iOS. An individual audio unit takes audio input and produces audio output, and units may be connected into an audio processing graph, in which each unit passes its output to the next. For example, it is possible to pass audio to an EQ unit, the output of which is passed to a mixer unit, whose output is passed to an I/O (input/output) unit, which gives it to the hardware to turn into sound. The graph structure is similar to the layout of an audio hardware setup, which makes it easy to visualize the flow of audio in an application. Alternatively, it is possible to use just a single audio unit, or to connect units without using a processing graph.

In Mac OS X, audio unit programming is a much broader topic because it is possible to create audio units. iOS programming, however, simply makes use of audio units that already exist. Since the audio units provide access to the low-level audio data, it is possible to process that audio in any desired way. In iOS 4, there are seven audio units available, which handle EQ, mixing, I/O, voice processing, and format conversion. Normally an app will have a very simple audio processing graph with just a few audio units.

A great resource for understanding audio unit programming for iOS is the Audio Unit Hosting Guide in the iOS Developer Library. This document covers the details of using audio units for iOS. (The Audio Unit Programming Guide in the Mac OS X Developer Library does not have as much useful information for iOS programming.) Documentation of specific functions can be found in the Audio Unit Framework Reference. Apple's MixerHost sample application is a great example of using audio units. This guide will focus on that sample code along with some discussion of other sample code like aurioTouch. For an example like MixerHost but with a more complex audio processing graph, take a look at the iPhoneMixerEQGraphTest example.

Audio units and other structures necessary for using them are defined in AudioComponent.h, AUGraph.h and other headers included from AudioToolbox.h in the AudioToolbox framework. Similarly, necessary
functions for handling audio sessions are found in AVFoundation.h in the AVFoundation framework (an application can just include AudioToolbox/AudioToolbox.h and AVFoundation/AVFoundation.h). It should be noted that many of the functions and structures used by audio units have the AU prefix.

Why use Audio Units?

Audio units have several benefits.

1. They are the lowest-latency audio processing strategy in iOS.
2. They have built-in support for EQ, mixing, and other processing.
3. They allow versatile, real-time manipulation of a flexible processing graph.

The drawbacks to audio units are the subtleties and complexities of their setup and use. In general they are not as well documented as audio queues, and they have a steeper learning curve.

ii. The MixerHostAudio Class

Just as with audio queues, the easiest way to work with audio units in iOS is to create an Objective-C, C++, or Objective-C++ class in a project that performs all interactions with audio units. Again, the class needs interface functions, member variables, initialization functions, and a callback function, similar to the callback used by an audio queue. This example examines the Objective-C class MixerHostAudio from the MixerHost sample application, which handles all the audio unit functionality of the application. Again, this class is simply one way to use audio units; it is also possible to handle the audio unit interaction with a C++ class, for example.

The MixerHostAudio class uses just two audio units in a very simple audio processing graph: mono or stereo audio is passed through a Multichannel Mixer audio unit to a Remote I/O audio unit, which provides its output to the device's audio hardware.
Interface

The MixerHostAudio class is a subclass of NSObject. So, as expected, it implements the init and dealloc methods, which handle proper initialization and destruction. It also has several instance methods, some for setup and some for interfacing with the rest of the application. The methods that are important when using a MixerHostAudio object, as can be seen from how it's used by the MixerHostViewController, are for initialization (init), starting and stopping the audio processing graph (startAUGraph and stopAUGraph), and setting the state and parameters of the mixer unit (enableMixerInput:isOn:, setMixerOutputGain:, and setMixerInput:gain:).

The other methods are used by the MixerHostAudio object itself, some for initialization:

- obtainSoundFileURLs
- setupAudioSession
- setupStereoStreamFormat
- setupMonoStreamFormat
- readAudioFilesIntoMemory
- configureAndInitializeAudioProcessingGraph

and some for logging messages conveniently:

- printASBD:
- printErrorMessage:

So the interface for the MixerHostAudio class is simpler than it seems at first glance: initialize it, start and stop its processing graph, and set the parameters for its mixer audio unit.

Member Variables

To keep track of state in MixerHostAudio, there are several member variables. First, there is a custom struct (soundStruct) defined, which contains necessary information to pass to the callback function:
typedef struct {
    BOOL isStereo;
    UInt32 frameCount;
    UInt32 sampleNumber;
    AudioUnitSampleType *audioDataLeft;
    AudioUnitSampleType *audioDataRight;
} soundStruct, *soundStructPtr;

Variable descriptions:

isStereo - Whether the input data has a right channel.
frameCount - Total frames in the input data.
sampleNumber - The index of the current sample in the input data.
audioDataLeft - The audio data from the left channel (or the only channel, for mono data).
audioDataRight - The audio data from the right channel (or NULL for mono data).

The member variables of MixerHostAudio are:

Float64 graphSampleRate;
CFURLRef sourceURLArray[NUM_FILES];
soundStruct soundStructArray[NUM_FILES];
AudioStreamBasicDescription stereoStreamFormat;
AudioStreamBasicDescription monoStreamFormat;
AUGraph processingGraph;
BOOL playing;
BOOL interruptedDuringPlayback;
AudioUnit mixerUnit;

Here's a brief description of each:

Float64 graphSampleRate - The sample rate in Hz used by the audio processing graph.

CFURLRef sourceURLArray[NUM_FILES] - An array of URLs for the input audio files used for playback.

soundStruct soundStructArray[NUM_FILES] - An array of soundStructs, which are defined above. One soundStruct is kept for each file in order to keep track of all the necessary data separately.

AudioStreamBasicDescription stereoStreamFormat - The audio data format for a stereo file. Along with the number of channels, this includes information like sample rate, bit depth, and so on.

AudioStreamBasicDescription monoStreamFormat - The audio data format for a mono file.

AUGraph processingGraph - The audio processing graph that contains the Multichannel Mixer audio unit and the Remote I/O unit. The graph does all the audio processing.

BOOL playing - Whether audio is currently playing.

BOOL interruptedDuringPlayback - Whether the audio session has been interrupted (by a phone call, for example). This flag allows the audio session to be restarted after returning from an interruption.
AudioUnit mixerUnit - The Multichannel Mixer audio unit. A reference to it is kept so that its parameters can be changed later, unlike the Remote I/O unit, which doesn't need to be referenced after it is added to the processing graph.

Audio Unit Setup

Some substantial code is required to create audio units and set up the audio processing graph. These are the steps that MixerHostAudio uses for initialization:

1. Set up an active audio session.
2. Obtain files, set up their formats, and read them into memory.
3. Create an audio processing graph.
4. Create and initialize Multichannel Mixer and Remote I/O audio units.
5. Add the audio units to the graph and connect them.
6. Start processing audio with the graph.

The details of the audio unit setup are covered in the Audio Unit Initialization section of this document.

Audio Unit Callback

The last thing needed is a callback function that the Multichannel Mixer audio unit will call when it needs a buffer filled. The callback is where audio is handed off from the application to the mixer unit. After that point, the audio doesn't need to be dealt with again; it will pass through the Multichannel Mixer unit, through the Remote I/O unit, and to the audio hardware. The callback is implemented as a static C function in MixerHostAudio.m, and is not a method of the MixerHostAudio class. It must be a C function, not a method: Apple warns against making Objective-C calls from a callback, in order to keep calculations fast and efficient during audio processing. So the callback doesn't actually know anything about the MixerHostAudio object; it just takes in the data sent to it and adds samples to the buffer that the mixer unit asks it to fill.

Just as in an audio queue callback, one of the arguments to the callback is a void* which can be used to point to any struct or object that holds the variables for managing state. In this case, the void* argument is a pointer
to the member variable soundStructArray, so the necessary information that's been stored there can be accessed. The details of what the callback does are covered in the Audio Unit Callback Functionality section of this document.

iii. Initialization

To initialize the MixerHostAudio object and its audio processing graph, it is necessary to do several things. Initialization is grouped into several methods in MixerHostAudio to keep things organized. (For brevity, the code shown here leaves out the error checking that the sample code implements.)

Starting an Audio Session

Start an audio session using the method setupAudioSession. The session is an AVAudioSession object shared throughout the application. First get the session object, set the MixerHostAudio object as its delegate to receive interruption messages, and set the category of the session to Playback.

AVAudioSession *mySession = [AVAudioSession sharedInstance];
[mySession setDelegate: self];
NSError *audioSessionError = nil;
[mySession setCategory: AVAudioSessionCategoryPlayback error: &audioSessionError];

Next, request the desired sample rate in the audio session. Note that it's impossible to set this value directly; all that can be done is to request it and check what the actual value is. There is never a guarantee that the preferred sample rate will be used. (The sample application requests 44.1 kHz.)

self.graphSampleRate = 44100.0;
[mySession setPreferredHardwareSampleRate: graphSampleRate error: &audioSessionError];
Now activate the audio session and then match the graph sample rate to the one the hardware is actually using.

[mySession setActive: YES error: &audioSessionError];
self.graphSampleRate = [mySession currentHardwareSampleRate];

Finally, register a property listener, just as was done with an audio queue. Here it is possible to detect audio route changes (like plugging in headphones) so that playback can be stopped when that happens.

AudioSessionAddPropertyListener( kAudioSessionProperty_AudioRouteChange, audioRouteChangeListenerCallback, self );

Variable descriptions:

kAudioSessionProperty_AudioRouteChange - The property of interest, in this case the audio route change property.
audioRouteChangeListenerCallback - A function that has been defined and will be called when the property changes.
self - A pointer to any data for the callback to take as an argument, in this case the MixerHostAudio object.

Property listeners

Just as with audio queues, an audio session's property listeners will be called when some property of the session changes. The function that was just registered, audioRouteChangeListenerCallback(), is called when the audio session's AudioRouteChange property changes, e.g. when headphones are plugged into the device. The function definition looks like this:

void audioRouteChangeListenerCallback( void *inUserData, AudioSessionPropertyID inPropertyID, UInt32 inPropertyValueSize, const void *inPropertyValue ) { }
Variable descriptions:

void* inUserData - A pointer to any struct or object containing the desired information, in this case the MixerHostAudio object.
AudioSessionPropertyID inPropertyID - A value specifying the property that has changed.
UInt32 inPropertyValueSize - The size of the property value.
const void *inPropertyValue - A pointer to the property value.

In the body of the function, check whether playback should stop based on what type of audio route change occurred. First, make sure this is a route change message, then get the MixerHostAudio object with a C-style cast of inUserData.

if (inPropertyID != kAudioSessionProperty_AudioRouteChange) return;
MixerHostAudio *audioObject = (MixerHostAudio *) inUserData;

If sound is not playing, do nothing. Otherwise, check the reason for the route change and stop playback if necessary by posting a notification that the view controller will get. There's some fairly complicated stuff going on with types here, but it won't be explored in detail.
if (NO == audioObject.isPlaying) {
    return;
} else {
    CFDictionaryRef routeChangeDictionary = inPropertyValue;
    CFNumberRef routeChangeReasonRef =
        CFDictionaryGetValue( routeChangeDictionary,
                              CFSTR (kAudioSession_AudioRouteChangeKey_Reason) );
    SInt32 routeChangeReason;
    CFNumberGetValue( routeChangeReasonRef, kCFNumberSInt32Type, &routeChangeReason );

    if (routeChangeReason == kAudioSessionRouteChangeReason_OldDeviceUnavailable) {
        NSString *MixerHostAudioObjectPlaybackStateDidChangeNotification =
            @"MixerHostAudioObjectPlaybackStateDidChangeNotification";
        [[NSNotificationCenter defaultCenter]
            postNotificationName: MixerHostAudioObjectPlaybackStateDidChangeNotification
            object: audioObject];
    }
}

Obtaining Sound File URLs

Next, use the method obtainSoundFileURLs to get the URLs to the sound files wanted for playback. All that's needed is to get an NSURL* using the path and then store it as a CFURLRef in an array for later use (retain is used so the reference won't be autoreleased).

NSURL *guitarLoop = [[NSBundle mainBundle] URLForResource: @"guitarStereo" withExtension: @"caf"];
NSURL *beatsLoop = [[NSBundle mainBundle] URLForResource: @"beatsMono" withExtension: @"caf"];
sourceURLArray[0] = (CFURLRef) [guitarLoop retain];
sourceURLArray[1] = (CFURLRef) [beatsLoop retain];
Setting Up Mono and Stereo Stream Formats

Now set up the AudioStreamBasicDescription member variables for each desired playback format, in the methods setupStereoStreamFormat and setupMonoStreamFormat. Notice that these methods are identical except for what they set as mChannelsPerFrame and what they log. It would be easy to collapse them into a single method (a sketch of one appears after the field descriptions below), but presumably they are kept separate for the sake of clarity. It is also possible to take the audio data formats from the files themselves, as was done in the audio queue example, if accepting different formats is desired; but this sample application is only designed to work with the files it has built in.

Take a look at setupStereoStreamFormat. First, get the size in bytes of AudioUnitSampleType. This type is defined to be a signed 32-bit integer in CoreAudioTypes.h, so it has 4 bytes.

size_t bytesPerSample = sizeof (AudioUnitSampleType);

Then set all the values in the AudioStreamBasicDescription.

stereoStreamFormat.mFormatID         = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
stereoStreamFormat.mBytesPerPacket   = bytesPerSample;
stereoStreamFormat.mFramesPerPacket  = 1;
stereoStreamFormat.mBytesPerFrame    = bytesPerSample;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mBitsPerChannel   = 8 * bytesPerSample;
stereoStreamFormat.mSampleRate       = graphSampleRate;

Variable descriptions:

mFormatID - An identifier tag for the format. Linear PCM (pulse-code modulation) is very common; it's the format for .wav and .aif files, for example. Here it is used for the .caf files Apple has included.
mFormatFlags - Here it is possible to specify some flags that hold additional details about the format. In this case, the canonical audio unit sample format is used to avoid extraneous conversion.

mBytesPerPacket - The number of bytes in each packet of audio. A packet is just a small chunk of data whose size is determined by the format and is the smallest 'readable' piece of data. For uncompressed linear PCM data (which is the data type here), each packet has just one sample, so this is set to the number of bytes per sample.

mFramesPerPacket - The number of frames in each packet of audio. A frame is just a group of samples that happen at the same instant, so in stereo there are two samples in each frame. Again, for uncompressed audio this value is one: frames are read one at a time from the file.

mBytesPerFrame - The number of bytes in each frame. Even though each frame has two samples in stereo, this is again the number of bytes per sample. This is because the data is noninterleaved: each channel is held in a separate array, rather than having a single array in which samples alternate between the two channels.

mChannelsPerFrame - The number of channels in the audio: one for mono, two for stereo, etc.

mBitsPerChannel - The bit depth of each channel: how many bits represent a single sample. Using AudioUnitSampleType, just convert from bytes to bits by multiplying by 8.

mSampleRate - The sample rate of the format in frames per second (Hz). This is the value obtained from the hardware, which may or may not be the preferred sample rate.
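As suggested above, here is a sketch of the two methods collapsed into one. The method name and its channel-count argument are illustrative assumptions, not part of MixerHost.

    - (void) setupStreamFormat: (AudioStreamBasicDescription *) format
              channelsPerFrame: (UInt32) channels
    {
        size_t bytesPerSample = sizeof (AudioUnitSampleType);

        format->mFormatID         = kAudioFormatLinearPCM;
        format->mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
        format->mBytesPerPacket   = bytesPerSample;   // noninterleaved: one sample per packet
        format->mFramesPerPacket  = 1;                // uncompressed PCM
        format->mBytesPerFrame    = bytesPerSample;
        format->mChannelsPerFrame = channels;         // 1 = mono, 2 = stereo
        format->mBitsPerChannel   = 8 * bytesPerSample;
        format->mSampleRate       = graphSampleRate;
    }

A call would then look like [self setupStreamFormat: &stereoStreamFormat channelsPerFrame: 2];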
Reading Audio Files Into Memory

Next, read the audio files into memory so that the audio data can be used from within the callback, with the readAudioFilesIntoMemory method. Note that this is a very different strategy than in the audio queue example, in which the callback opened and read directly from a file each time it needed to fill a buffer. The two methods have benefits and drawbacks. Reading the files into memory means there isn't a need to keep opening the file, which makes the callback more efficient; but it means space needs to be allocated for the audio data, which will use up a lot of memory if the files are long.

The same process is looped through for each audio file, so all the code in this method is inside this for loop:

for (int audioFile = 0; audioFile < NUM_FILES; ++audioFile) {
    ...
}

Next, open a file. The audio queue example used AudioFile.h. Here ExtendedAudioFile.h is used, which just handles additional encoding and decoding. First create an ExtAudioFileRef and load the file into it from the URL obtained earlier.

ExtAudioFileRef audioFileObject = 0;
OSStatus result = ExtAudioFileOpenURL ( sourceURLArray[audioFile], &audioFileObject );

Getting properties

1. First get the number of frames in the file so that memory can be properly allocated for the data, and so it's possible to keep track of location when reading it. Store the value in the soundStruct that corresponds with the file.
UInt64 totalFramesInFile = 0;
UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);
result = ExtAudioFileGetProperty ( audioFileObject, kExtAudioFileProperty_FileLengthFrames, &frameLengthPropertySize, &totalFramesInFile );
soundStructArray[audioFile].frameCount = totalFramesInFile;

Variable descriptions:

audioFileObject - The file, the ExtAudioFileRef from above.
kExtAudioFileProperty_FileLengthFrames - The property of interest, in this case the file length in frames.
&frameLengthPropertySize - Pointer to the size of the property of interest.
&totalFramesInFile - Pointer to the variable where this value should be stored.

2. Now get the channel count in the file for correct playback. Store the value in a variable channelCount, which will be used shortly.

AudioStreamBasicDescription fileAudioFormat = {0};
UInt32 formatPropertySize = sizeof (fileAudioFormat);
result = ExtAudioFileGetProperty ( audioFileObject, kExtAudioFileProperty_FileDataFormat, &formatPropertySize, &fileAudioFormat );
UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;
Variable descriptions:

audioFileObject - The file, the ExtAudioFileRef from above.
kExtAudioFileProperty_FileDataFormat - The property of interest, in this case the data format description.
&formatPropertySize - Pointer to the size of the property of interest.
&fileAudioFormat - Pointer to the variable where this value should be stored, here an AudioStreamBasicDescription.

Allocating memory

Now that the channel count is known, memory can be allocated to store the audio data.

1. First, allocate the left channel, whose size is just the number of samples times the size of each sample.

soundStructArray[audioFile].audioDataLeft = (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

2. Next, if the file is stereo, allocate the right channel in the same way. Also set the isStereo value in the soundStruct and store the appropriate audio data format in a variable importFormat, which will be used below.

AudioStreamBasicDescription importFormat = {0};
if (2 == channelCount) {
    soundStructArray[audioFile].isStereo = YES;
    soundStructArray[audioFile].audioDataRight = (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
    importFormat = stereoStreamFormat;
} else if (1 == channelCount) {
    soundStructArray[audioFile].isStereo = NO;
    importFormat = monoStreamFormat;
}
3. Use the importFormat variable to set the data format of the audioFileObject so that the correct format will be used when the audio data is copied into each file's soundStruct.

result = ExtAudioFileSetProperty ( audioFileObject, kExtAudioFileProperty_ClientDataFormat, sizeof (importFormat), &importFormat );

Setting up a buffer list

To read the audio files into memory, first set up an AudioBufferList object. As Apple's comments explain, this object gives the ExtAudioFileRead function the correct configuration for adding data to the buffer, and it points to the memory that should be written to, which was allocated in the soundStruct.

1. First, allocate the buffer list and set its channel count.

AudioBufferList *bufferList;
bufferList = (AudioBufferList *) malloc ( sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1) );
bufferList->mNumberBuffers = channelCount;

2. Create an empty buffer as a placeholder and place it at each index in the buffer array.

AudioBuffer emptyBuffer = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
    bufferList->mBuffers[arrayIndex] = emptyBuffer;
}
3. Finally, set the properties of each buffer: the channel count, the size of the data, and where the data will actually be stored.

bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = totalFramesInFile * sizeof (AudioUnitSampleType);
bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;

if (2 == channelCount) {
    bufferList->mBuffers[1].mNumberChannels = 1;
    bufferList->mBuffers[1].mDataByteSize = totalFramesInFile * sizeof (AudioUnitSampleType);
    bufferList->mBuffers[1].mData = soundStructArray[audioFile].audioDataRight;
}

Reading the data

With the buffers all set up and ready to be filled, the audio data can be read in from the audio files using ExtAudioFileRead.

UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;
result = ExtAudioFileRead ( audioFileObject, &numberOfPacketsToRead, bufferList );
free (bufferList);

After that, along with error checking, all that needs to be done is to set the sample index to 0 and get rid of audioFileObject.

soundStructArray[audioFile].sampleNumber = 0;
ExtAudioFileDispose (audioFileObject);
Setting Up an Audio Processing Graph
The rest of the initialization will take place in the configureAndInitializeAudioProcessingGraph method. Here an audio processing graph will be created and audio units will be added to it. (Again, error-checking code is not reproduced here.)

1. First create an AUGraph:

result = NewAUGraph (&processingGraph);

2. Next, set up individual audio units for use in the graph. Set up a component description for each one, the Remote I/O unit and the Multichannel Mixer unit.

AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;

AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags = 0;
MixerUnitDescription.componentFlagsMask = 0;

Variable descriptions:
componentType The general type of the audio unit. The project has an output unit and a mixer unit.
componentSubType The specific audio unit subtype. The project has a Remote I/O unit and a Multichannel Mixer unit.
componentManufacturer The manufacturer of the audio unit. All the audio units usable in iOS are made by Apple.
componentFlags Always 0.
componentFlagsMask Always 0.

3. Then add the nodes to the processing graph. AUGraphAddNode takes the processing graph, a pointer to the description of a node, and a pointer to an AUNode in which to store the added node.

AUNode iONode;
AUNode mixerNode;
result = AUGraphAddNode (
            processingGraph,
            &iOUnitDescription,
            &iONode
         );
result = AUGraphAddNode (
            processingGraph,
            &MixerUnitDescription,
            &mixerNode
         );

4. Open the graph to prepare it for processing. Once it's been opened, the audio units have been instantiated, so it's possible to retrieve the Multichannel Mixer unit using AUGraphNodeInfo.

result = AUGraphOpen (processingGraph);
result = AUGraphNodeInfo (
            processingGraph,
            mixerNode,
            NULL,
            &mixerUnit
         );
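Once the graph is open, it can be handy to dump its state while debugging. This call is not part of MixerHost, but AudioToolbox's CAShow() prints a description of an AUGraph's nodes and connections to the console:

// Not from the sample app: print the graph's nodes and connections
// to the console for debugging. CAShow is declared in AudioToolbox.
CAShow (processingGraph);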
Setting up the mixer unit
Some setup needs to be done with the mixer unit.

1. Set the number of buses and set properties separately for the guitar and beats buses.

UInt32 busCount = 2;
UInt32 guitarBus = 0;
UInt32 beatsBus = 1;
result = AudioUnitSetProperty (
            mixerUnit,
            kAudioUnitProperty_ElementCount,
            kAudioUnitScope_Input,
            0,
            &busCount,
            sizeof (busCount)
         );

Variable descriptions:
mixerUnit The audio unit a property is being set in.
kAudioUnitProperty_ElementCount The desired property to set is the number of elements.
kAudioUnitScope_Input The scope of the property. Audio units normally have input scope, output scope, and global scope. Here there should be two input buses, so input scope is used.
0 The element to set the property in.
&busCount Pointer to the value being set.
sizeof (busCount) Size of the value being set.

2. Now increase the maximum frames per slice so that larger slices can be used when the screen is locked.

UInt32 maximumFramesPerSlice = 4096;
result = AudioUnitSetProperty (
            mixerUnit,
            kAudioUnitProperty_MaximumFramesPerSlice,
            kAudioUnitScope_Global,
            0,
            &maximumFramesPerSlice,
            sizeof (maximumFramesPerSlice)
         );
3. Next attach the input callback to each input bus in the mixer node. The input buses on the mixer are the only place where audio will enter the processing graph.

for (UInt16 busNumber = 0; busNumber < busCount; ++busNumber) {
    AURenderCallbackStruct inputCallbackStruct;
    inputCallbackStruct.inputProc = &inputRenderCallback;
    inputCallbackStruct.inputProcRefCon = soundStructArray;
    result = AUGraphSetNodeInputCallback (
                processingGraph,
                mixerNode,
                busNumber,
                &inputCallbackStruct
             );
}

The inputProcRefCon variable is what will be passed to the callback; it can be a pointer to anything. Here it is set to soundStructArray because that's where the data the callback will need is being kept. inputRenderCallback is a static C function which will be defined in MixerHostAudio.m.

4. Now set the stream formats for the two input buses.

result = AudioUnitSetProperty (
            mixerUnit,
            kAudioUnitProperty_StreamFormat,
            kAudioUnitScope_Input,
            guitarBus,
            &stereoStreamFormat,
            sizeof (stereoStreamFormat)
         );
result = AudioUnitSetProperty (
            mixerUnit,
            kAudioUnitProperty_StreamFormat,
            kAudioUnitScope_Input,
            beatsBus,
            &monoStreamFormat,
            sizeof (monoStreamFormat)
         );
5. Finally, set the sample rate of the mixer's output stream.

result = AudioUnitSetProperty (
            mixerUnit,
            kAudioUnitProperty_SampleRate,
            kAudioUnitScope_Output,
            0,
            &graphSampleRate,
            sizeof (graphSampleRate)
         );

Connecting the nodes and initializing the graph
Now that the nodes are good to go, it's possible to connect them and initialize the graph.

result = AUGraphConnectNodeInput (
            processingGraph,
            mixerNode,
            0,
            iONode,
            0
         );
result = AUGraphInitialize (processingGraph);

Variable descriptions:
processingGraph The audio processing graph.
mixerNode The mixer unit node, which is the source node.
0 The output bus number of the source node. The mixer unit has only one output bus, bus 0.
iONode The I/O unit node, which is the destination node.
0 The input bus number of the destination node. The I/O unit has only one input bus, bus 0.

iv. Audio Unit Input Callback Functionality
To provide audio data to the audio processing graph, there is an audio unit input callback function. The graph will call this function every time it needs a buffer of audio, the same way an audio queue callback is used. Here the callback uses data that is stored in the soundStruct array rather than reading from a file each time. The callback is a static C function defined within MixerHostAudio.m, and its declaration looks like this:
static OSStatus inputRenderCallback (
    void                        *inRefCon,
    AudioUnitRenderActionFlags  *ioActionFlags,
    const AudioTimeStamp        *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData
) {
}

Variable descriptions:
void *inRefCon A pointer to any struct or object used to manage state. Typically this will be either a custom C struct or an Objective-C or C++ object. In this case, it will be a pointer to the soundStruct array, which has the necessary data for the callback.
AudioUnitRenderActionFlags *ioActionFlags Unused here. This can be used to mark areas of silence when generating sound.
const AudioTimeStamp *inTimeStamp Unused here.
UInt32 inBusNumber The number of the bus which is calling the callback.
UInt32 inNumberFrames The number of frames being requested.
AudioBufferList *ioData The buffer list which has the buffer to fill.

Getting the soundStruct Array
The first thing the callback should do is get a pointer to the struct or object that has all the state variables. In this case, just cast the void *inRefCon to a soundStructPtr using a C-style cast:

soundStructPtr soundStructPointerArray = (soundStructPtr) inRefCon;
Also grab some useful data from the sound struct: the total frame count and whether the data is stereo.

UInt32 frameTotalForSound = soundStructPointerArray[inBusNumber].frameCount;
BOOL isStereo = soundStructPointerArray[inBusNumber].isStereo;

Filling a Buffer
Now to fill the buffer with the audio data stored in the soundStruct array. First, set up pointers to the input data and the output buffers.

AudioUnitSampleType *dataInLeft;
AudioUnitSampleType *dataInRight;
dataInLeft = soundStructPointerArray[inBusNumber].audioDataLeft;
dataInRight = soundStructPointerArray[inBusNumber].audioDataRight;

AudioUnitSampleType *outSamplesChannelLeft;
AudioUnitSampleType *outSamplesChannelRight;
outSamplesChannelLeft = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
outSamplesChannelRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData;

Next, get the sample number to start reading from, which is stored in the soundStruct array, and copy the data from the input buffers into the output buffers. When the sample number reaches the end of the file, set it to 0 so that playback will just loop.
UInt32 sampleNumber = soundStructPointerArray[inBusNumber].sampleNumber;
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
    outSamplesChannelLeft[frameNumber] = dataInLeft[sampleNumber];
    if (isStereo)
        outSamplesChannelRight[frameNumber] = dataInRight[sampleNumber];
    sampleNumber++;
    if (sampleNumber >= frameTotalForSound)
        sampleNumber = 0;
}
soundStructPointerArray[inBusNumber].sampleNumber = sampleNumber;

That's all the callback needs to do: it just fills a buffer whenever it's called. The processing graph will call the callback at the right times to keep the audio playing smoothly.

Processing
After filling the audio buffer, just as in an audio queue callback, it's possible to process the audio directly by manipulating the values in the buffer. And if the application uses synthesis rather than playback, the buffer can be filled directly without using a file. This is also where any iZotope effects would be applied. Details regarding the implementation of iZotope effects are distributed along with our effect libraries.

v. Setting Audio Unit Parameters
The Multichannel Mixer audio unit doesn't do anything interesting unless its parameters are changed. That functionality can be put in the MixerHostAudio class within a few methods: enableMixerInput:isOn:, setMixerInput:gain:, and setMixerOutputGain:.
Turning Inputs On and Off
To enable and disable the two input buses of the mixer, use AudioUnitSetParameter().

OSStatus result = AudioUnitSetParameter (
                     mixerUnit,
                     kMultiChannelMixerParam_Enable,
                     kAudioUnitScope_Input,
                     inputBus,
                     isOnValue,
                     0
                  );

Variable descriptions:
mixerUnit The desired audio unit to set a parameter in.
kMultiChannelMixerParam_Enable The parameter to be set, in this case the enable/disable parameter.
kAudioUnitScope_Input The scope of the parameter. This involves enabling and disabling input buses, so it falls in the mixer's input scope.
inputBus The number of the bus to enable or disable, which is an argument to the method.
isOnValue Whether to enable or disable the bus, also an argument to the method.
0 An offset in frames, which Apple recommends always be set to 0.

Then make sure the sample index in both buses matches so that they stay in sync during playback after being disabled.

if (0 == inputBus && 1 == isOnValue) {
    soundStructArray[0].sampleNumber = soundStructArray[1].sampleNumber;
}
if (1 == inputBus && 1 == isOnValue) {
    soundStructArray[1].sampleNumber = soundStructArray[0].sampleNumber;
}
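For context, this method is typically driven from the UI. A hypothetical switch handler in the view controller might look like the following sketch; the action name and the audioObject property are assumptions for illustration, not taken from the guide.

// Hypothetical UI handler: enable or disable mixer bus 0 when a
// UISwitch is flipped. Names here are illustrative.
- (IBAction) guitarSwitchTouched: (UISwitch *) sender {
    [self.audioObject enableMixerInput: 0 isOn: sender.isOn];
}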
Setting Gain
To set the input gain in the two input buses of the mixer, again use AudioUnitSetParameter(). The arguments are identical except for the parameter ID, which is now kMultiChannelMixerParam_Volume, and the parameter value to set, which is an argument to this method, newGain.

OSStatus result = AudioUnitSetParameter (
                     mixerUnit,
                     kMultiChannelMixerParam_Volume,
                     kAudioUnitScope_Input,
                     inputBus,
                     newGain,
                     0
                  );

AudioUnitSetParameter() is also used to set the output gain of the mixer. The arguments are similar to those for input gain, but the scope is now kAudioUnitScope_Output and the element number is 0, because there is only one output element while there are two input buses.

OSStatus result = AudioUnitSetParameter (
                     mixerUnit,
                     kMultiChannelMixerParam_Volume,
                     kAudioUnitScope_Output,
                     0,
                     newGain,
                     0
                  );

vi. Running the Audio Processing Graph
Starting and Stopping
There are also simple interface methods to start and stop the processing graph: startAUGraph and stopAUGraph. All that needs to be done is call the appropriate functions on the processing graph (and set the playing member variable, which is used to handle interruptions). Starting the graph (startAUGraph) is done as follows:

OSStatus result = AUGraphStart (processingGraph);
self.playing = YES;
And to stop it, stopAUGraph first checks whether the graph is running, and stops it if it is.

Boolean isRunning = false;
OSStatus result = AUGraphIsRunning (processingGraph, &isRunning);
if (isRunning) {
    result = AUGraphStop (processingGraph);
    self.playing = NO;
}

Handling Interruptions
Because it is declared as implementing the <AVAudioSessionDelegate> protocol, MixerHostAudio has methods to handle interruptions to the audio stream, like phone calls and alarms. It is possible to keep track of whether the audio session has been interrupted while playing in a boolean member variable, interruptedDuringPlayback.

In beginInterruption, it's possible to check whether sound is currently playing. If it is, set interruptedDuringPlayback and post a notification that the view controller will see.

if (playing) {
    self.interruptedDuringPlayback = YES;
    NSString *MixerHostAudioObjectPlaybackStateDidChangeNotification =
        @"MixerHostAudioObjectPlaybackStateDidChangeNotification";
    [[NSNotificationCenter defaultCenter]
        postNotificationName: MixerHostAudioObjectPlaybackStateDidChangeNotification
        object: self];
}

The MixerHostViewController has registered to receive these notifications in the registerForAudioObjectNotifications method. When it receives such a notification, it will stop playback.

When the audio session resumes from an interruption, endInterruptionWithFlags: is called. The flags value is a series of bits
in which each bit can be a single flag, so it can be checked against AVAudioSessionInterruptionFlags_ShouldResume with a bitwise and, &.

if (flags & AVAudioSessionInterruptionFlags_ShouldResume) {
}

If the session should resume, reactivate the audio session:

[[AVAudioSession sharedInstance] setActive: YES error: &endInterruptionError];

Then update the interruptedDuringPlayback value and post a notification to resume playback if necessary.

if (interruptedDuringPlayback) {
    self.interruptedDuringPlayback = NO;
    NSString *MixerHostAudioObjectPlaybackStateDidChangeNotification =
        @"MixerHostAudioObjectPlaybackStateDidChangeNotification";
    [[NSNotificationCenter defaultCenter]
        postNotificationName: MixerHostAudioObjectPlaybackStateDidChangeNotification
        object: self];
}

vii. aurioTouch
The aurioTouch sample application also uses audio units, but in a different way. First of all, it uses the Remote I/O unit for input as well as output, and this is the only unit that is used. (The aurioTouch project is also not quite as neat and well-commented as MixerHost, so it may be a little harder to understand.) This document will cover the structure of how aurioTouch works but won't dive too deep into its complexities.
Structure
Most of the audio functionality of aurioTouch is in aurioTouchAppDelegate.mm. This is where an interruption listener rioInterruptionListener(), a property listener propListener(), and an audio unit callback PerformThru() can be found. Most audio and graphics initialization takes place in applicationDidFinishLaunching:, while setup for the Remote I/O unit takes place in aurio_helper.cpp, in the SetupRemoteIO function. Most of the other code in the app is for its fast Fourier transform (FFT) functionality, and for graphics.

Initializing the Audio Session
In aurioTouch, applicationDidFinishLaunching: initializes an audio session similar to the way MixerHost does. First, the callback is stored in an AURenderCallbackStruct named inputProc for use with the I/O unit. inputProcRefCon is a pointer to desired data to retrieve when the callback is called, so here it's just a reference to the aurioTouchAppDelegate object. (In the three examples in this document, it's been used as a reference to a C++ object, a C struct, and an Objective-C object.)

inputProc.inputProc = PerformThru;
inputProc.inputProcRefCon = self;

Then it initializes a SystemSoundID named buttonPressSound, which is functionality that hasn't been seen before. Through Audio Services, it's possible to play simple sounds without handling the audio directly at the sample level (again, the error-checking code is left out for clarity here).

url = CFURLCreateWithFileSystemPath (
         kCFAllocatorDefault,
         (CFStringRef) [[NSBundle mainBundle] pathForResource: @"button_press" ofType: @"caf"],
         kCFURLPOSIXPathStyle,
         false);
AudioServicesCreateSystemSoundID (url, &buttonPressSound);
CFRelease (url);
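Once registered, playing the sound later takes a single Audio Services call, presumably from the button's touch handler:

// Play the short system sound registered above.
AudioServicesPlaySystemSound (buttonPressSound);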
Now it initializes the audio session, sets some desired properties, and activates it. Unlike MixerHost, the session category is play and record because input audio is being used. Just as was done in MixerHost, there is a request for a preferred buffer duration, and the actual hardware properties (here the sample rate) must be retrieved afterward because there's no guarantee the preferred values are honored.

AudioSessionInitialize (NULL, NULL, rioInterruptionListener, self);

UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty (
    kAudioSessionProperty_AudioCategory,
    sizeof (audioCategory),
    &audioCategory);

AudioSessionAddPropertyListener (
    kAudioSessionProperty_AudioRouteChange,
    propListener,
    self);

Float32 preferredBufferSize = .005;
AudioSessionSetProperty (
    kAudioSessionProperty_PreferredHardwareIOBufferDuration,
    sizeof (preferredBufferSize),
    &preferredBufferSize);

UInt32 size = sizeof (hwSampleRate);
AudioSessionGetProperty (
    kAudioSessionProperty_CurrentHardwareSampleRate,
    &size,
    &hwSampleRate);

AudioSessionSetActive (true);

Next it initializes the Remote I/O unit using SetupRemoteIO(), which will be covered in the next section. Hand it the audio unit, the callback, and the audio data format.

SetupRemoteIO (rioUnit, inputProc, thruFormat);

Then aurioTouch sets up custom objects for DC filtering and FFT processing, which won't be examined in detail here. Next it starts the Remote I/O unit and sets thruFormat to match the audio unit's output format.
AudioOutputUnitStart (rioUnit);
AudioUnitGetProperty (rioUnit,
    kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Output,
    1,
    &thruFormat,
    &size);

Initializing the Remote I/O Unit
Set up the Remote I/O unit in SetupRemoteIO() in aurio_helper.cpp. This is similar to what was done in MixerHost. First create a description of the I/O unit component.

AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;

Because there is only one audio unit in aurioTouch, there is no audio processing graph. So instead of adding a node to a graph with this description, just explicitly search for a component that matches this description and create a new instance of it. The only component that will match this description is the Remote I/O unit.

AudioComponent comp = AudioComponentFindNext (NULL, &desc);
AudioComponentInstanceNew (comp, &inRemoteIOUnit);

Now enable input from the I/O unit so that it will provide audio, and then register the callback as the render callback.

UInt32 one = 1;
AudioUnitSetProperty (inRemoteIOUnit,
    kAudioOutputUnitProperty_EnableIO,
    kAudioUnitScope_Input,
    1,
    &one,
    sizeof (one));
AudioUnitSetProperty (inRemoteIOUnit,
    kAudioUnitProperty_SetRenderCallback,
    kAudioUnitScope_Input,
    0,
    &inRenderProc,
    sizeof (inRenderProc));
Finally set the format for both input and output and initialize the unit. SetAUCanonical() is a function from CAStreamBasicDescription.h which creates a canonical AU format with the given number of channels. The false argument means the format won't be interleaved.

outFormat.SetAUCanonical (2, false);
outFormat.mSampleRate = 44100;
AudioUnitSetProperty (inRemoteIOUnit,
    kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Input,
    0,
    &outFormat,
    sizeof (outFormat));
AudioUnitSetProperty (inRemoteIOUnit,
    kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Output,
    1,
    &outFormat,
    sizeof (outFormat));
AudioUnitInitialize (inRemoteIOUnit);

The Audio Unit Render Callback
Take a look at the render callback PerformThru in aurioTouchAppDelegate.mm. Most of the code here is for preparing the data so that its values can be shown in an oscilloscope or FFT display, so that won't be covered in great detail. Notice that the first thing done in the callback is to actually render the audio using the Remote I/O unit:

aurioTouchAppDelegate *THIS = (aurioTouchAppDelegate *) inRefCon;
OSStatus err = AudioUnitRender (THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);

Next the buffer is processed using a DC filter. A DC filter removes the DC component of an audio signal, which is, roughly speaking, the part of the signal that is a constant positive or negative voltage. In other words, it normalizes the signal so that its mean value is 0. This is an example of how input audio can be processed in a callback as well as audio for playback.
for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
    THIS->dcFilter[i].InplaceFilter ((SInt32 *) (ioData->mBuffers[i].mData), inNumberFrames, 1);

Then pass the buffer of audio to drawing functions depending on which drawing mode is enabled. These functions will not alter the audio data, just read it to draw in the oscilloscope display or the FFT.

if (THIS->displayMode == aurioTouchDisplayModeOscilloscopeWaveform) {
}
else if ((THIS->displayMode == aurioTouchDisplayModeSpectrum) ||
         (THIS->displayMode == aurioTouchDisplayModeOscilloscopeFFT)) {
}

Finally, check whether the mute member variable is on. If it is, silence the data in the buffer using the SilenceData() function, which just zeroes out the buffers. The buffer is passed through the Remote I/O unit and is played, whether it is silence or normal audio.

if (THIS->mute == YES) {
    SilenceData (ioData);
}
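Zeroing out a buffer list amounts to one memset per buffer. A minimal sketch of a function like SilenceData() (the actual implementation lives in aurio_helper.cpp and is equivalent in effect):

// A minimal sketch: zero every buffer in an AudioBufferList.
void SilenceData (AudioBufferList *inData)
{
    for (UInt32 i = 0; i < inData->mNumberBuffers; i++)
        memset (inData->mBuffers[i].mData, 0, inData->mBuffers[i].mDataByteSize);
}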
3. Glossary

ADC An analog-to-digital converter, a device that will sample an analog sound at a fixed sampling rate and convert it into a digital signal.

amplitude The strength of vibration of a sound wave; roughly speaking, the 'volume' of a sound at a particular moment.

analog A continuous 'real-world' signal rather than discrete numbers. An analog signal must be sampled through an ADC in order to be converted to a digital signal that may be manipulated through processing in a computer.

audio queue A Core Audio service for working with audio at a high level. Consists of a short list of buffers that are passed to the hardware and then reused. The audio queue object will call the callback whenever it needs a buffer filled.

audio unit A Core Audio service for working with audio at a low level. An individual audio unit is a component, like a mixer, an equalizer, or an I/O unit, which takes audio input and produces audio output. Audio units are 'plugged' into one another, creating a chain of audio processing components that provide real-time audio processing.

bit A binary digit. The smallest computational unit provided by a computer. A bit has only two states, on or off (1 or 0), and represents a single piece of binary data.

bit depth The total possible size in bits of each sample in a signal. For example, an individual sample might consist of a signed 16-bit integer, where 32,767 (2^15 - 1) is the maximum amplitude, and -32,768 is the minimum.

buffer A short list of samples whose length is convenient for processing. For live audio processing, buffers are typically small, a few hundred to a few thousand samples (usually in powers of two), in order to minimize latency. For offline processing, buffers may be larger. Buffers are time efficient because processing functions are only called every time a buffer is needed, and they're space efficient because they don't occupy a huge amount of memory. Typical buffer sizes are 256, 512, 1024 and 2048 samples, though they may vary depending on the application.
callback A function you write that the system calls. When working with audio, callbacks are used to provide buffers whenever they are requested. Callbacks for audio input will have a few arguments, including a pointer to a buffer that should be filled and an object or struct containing additional information. The callback is where an audio buffer is handed off to an iZotope effect for processing; upon return, that audio is passed to the audio hardware for output.

channel An individual audio signal in a stream. Channels are independent of one another. There may be one or many channels in an audio stream or file; for example, a mono sound has one channel and a stereo sound has two channels.

channel count The number of individual channels in an input stream.

Core Audio The general API for audio in Mac OS and iOS, which includes audio queues and audio units along with a large base of other functionality for working with audio.

DAC A digital-to-analog converter. A device which reconstructs a continuous signal from individual samples by interpolating what should be between the samples.

deinterleaved See interleaved.

digital Consisting of discrete numbers rather than a continuous signal.

fixed-point A system for representing numbers that are essentially integers. This data type has a fixed number of digits after the decimal point (radix point), so every value carries the same number of fractional digits.

floating-point A system for representing numbers that include a decimal place. Floating point refers to the fact that the decimal place can be placed anywhere relative to the significant digits of a number. Floating-point numbers can typically represent a larger range of values than their fixed-point counterparts, at the expense of additional memory.
frame A set of samples that occur at the same time. For example, stereo audio will have two samples in each frame. In a buffer or packet of interleaved stereo audio, each L R pair is one frame:

L R | L R | L R | L R
frame  frame  frame  frame

frequency How often an event occurs. For sounds, this refers to the frequency of oscillation of the sound wave. Frequency is measured in Hertz.

Hertz or Hz A unit of frequency: cycles per second.

interleaved and deinterleaved These terms refer to the way samples are organized by channel in audio data. Interleaved audio data is data in which samples are stored in a single-dimensional array. It's called interleaved because the audio alternates between channels, providing the first sample of the first channel followed by the first sample of the second channel and so on. So, where L is a sample from the left channel and R is a sample from the right, it's organized as follows:

L R L R L R L R L R L R L R L R

In contrast, a deinterleaved (or noninterleaved) audio stream is one in which the channels are separated from one another and each channel is represented as its own contiguous group of samples. A deinterleaved audio stream may be stored in a two-dimensional array with each dimension consisting of a single channel (a short deinterleaving sketch follows this glossary):

L L L L L L L L
R R R R R R R R
latency Time delay due to processing. This refers to the amount of time that passes between when audio is sent to a processing function and when that audio is output. Any audio processing system will have some amount of inherent delay due to the amount of time it takes to process the audio. Minimizing the amount of latency is desirable for real-time audio applications.

noninterleaved See interleaved.

packet A small chunk of audio that is read from a file.

sample A single number indicating the amplitude of a sound at a particular moment. Digital audio is represented as a series of samples at some sampling rate, such as 44,100 or 48,000 Hz, and with some bit depth, such as 16 bits.

sampling rate The frequency with which samples of a sound are taken. Typical values are 22,050, 44,100 or 48,000 samples per second, or Hz. Your trusty old compact disc player hums along at a sampling rate of 44.1 kHz, or 44,100 Hz.
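To make the interleaved/deinterleaved distinction concrete, here is a short sketch (not from the guide) that splits an interleaved stereo buffer into two contiguous channel arrays:

// A minimal sketch: deinterleave a stereo buffer. "interleaved" holds
// frameCount L R pairs; "left" and "right" must each have room for
// frameCount samples.
void Deinterleave (const SInt16 *interleaved, SInt16 *left, SInt16 *right, UInt32 frameCount)
{
    for (UInt32 frame = 0; frame < frameCount; frame++) {
        left[frame]  = interleaved[2 * frame];      // channel 0
        right[frame] = interleaved[2 * frame + 1];  // channel 1
    }
}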
4. Additional Help and Resources

The Apple Developer documentation, sample applications and forums are great places to find additional information on how to develop audio applications for iOS. Here are some useful links:

Audio Queues
The Audio Queue Services Programming Guide from the Mac OS X Developer Library: MusicAudio/Conceptual/AudioQueueProgrammingGuide/Introduction/Introduction.html
The Audio Queue Services Reference from the Mac OS X Developer Library: MusicAudio/Reference/AudioQueueReference/Reference/reference.html
The SpeakHere sample application: #samplecode/SpeakHere/Introduction/Intro.html

Audio Units
The Audio Unit Hosting Guide for iOS from the iOS Developer Library: Conceptual/AudioUnitHostingGuide_iOS/Introduction/Introduction.html
The MixerHost sample application: #samplecode/MixerHost/Introduction/Intro.html
The aurioTouch sample application: #samplecode/aurioTouch/Introduction/Intro.html
The iPhoneMixerEQGraphTest sample application: developer.apple.com/library/ios/#samplecode/iPhoneMixerEQGraphTest/Introduction/Intro.html

General iOS Help
The iOS Developer Library: navigation/
The Apple Developer Forums:
5. Appendix - Introductory DSP Tutorial

This appendix is meant to serve as a brief introduction to digital signal processing for beginners. To work with audio in iOS, the reader should be familiar with the ideas described here.

Sound is vibrating air, and at its simplest a sound is just a single sine wave. A sine wave has a frequency (the number of times it oscillates per second) and an amplitude (the strength of the vibration). The frequency of a sound determines its perceived pitch (whether it is high or low), and the amplitude determines its perceived volume (whether it is loud or soft). Typically we measure frequency in oscillations per second, or Hertz (Hz). Amplitude can be measured in a few ways; however, the typical unit is decibels, or dB. Decibels are on a logarithmic scale that generally replicates how we perceive the loudness of a signal. In theory, any sound can be represented as a sum of sine waves with different frequencies and amplitudes.

The strength of the air vibration caused by a sound is called the amplitude, and may be thought of as the volume. Sounds heard in this fashion are analog: continuous in both frequency and amplitude. Analog sound that is recorded through a microphone is turned into an electrical signal whose voltage varies with the amplitude of the sound. This voltage is then sampled very rapidly by an analog-to-digital converter, or ADC, at some sampling rate, with typical values being 44,100 and 48,000 samples per second, or Hz. The numbers obtained from this process are then stored in a digital device as a series of samples. The signal is digital because those samples are discrete and are represented as numbers with a set amount of digits. The possible size of those numbers in bits is called the bit depth; for example, samples might be fixed-point signed 16-bit integers, where 32,767 (2^15 - 1) is the maximum amplitude, and -32,768 is the minimum. Or samples may be floating-point values, normally on a scale from -1.0 to 1.0.

The amplitude of a digital signal may also be expressed in decibels (dB), but unlike the dB values for real-world sounds, the values are placed on a scale where 0.0 dB is the maximum amplitude a system can produce. Thus decibel values in digital systems fall in the negative range. The typical person can hear a signal between 0.0 dB (very loud) and -60.0 dB (very quiet) from a digital system.
Once the signal is digital, a computer can manipulate it in a variety of ways through digital signal processing. DSP is simply the mathematics used for effects, noise reduction, metering, and a wide array of other audio applications. After the audio is processed through DSP algorithms, it must be converted back to sound through a digital-to-analog converter, or DAC. The DAC reconstructs a continuous waveform by interpolating the values it expects should be between the samples and playing that audio back through speakers or headphones.
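To tie these ideas together, here is a small sketch (not part of the original guide) that synthesizes one second of a 440 Hz sine wave as signed 16-bit samples at a 44,100 Hz sampling rate, with its amplitude derived from a decibel level:

#include <math.h>
#include <stdint.h>

#define SAMPLE_RATE 44100     /* samples per second */
#define FREQUENCY   440.0     /* Hz; determines the perceived pitch */
#define LEVEL_DB    -6.0      /* dB relative to full scale (0 dB = maximum) */

/* Fill one second of 16-bit audio with a sine wave. */
void synthesizeSine (int16_t *samples)
{
    /* Convert the dB level to a linear amplitude. */
    double amplitude = pow (10.0, LEVEL_DB / 20.0);
    for (int n = 0; n < SAMPLE_RATE; n++) {
        double t = (double) n / SAMPLE_RATE;               /* time in seconds */
        double value = amplitude * sin (2.0 * M_PI * FREQUENCY * t);
        samples[n] = (int16_t) (value * 32767.0);          /* scale to the 16-bit range */
    }
}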