Home / iPhone Tips and Tutorials

Creating and Playing Audio Recordings

In the second part of the tutorial, we'll be adding audio recording and playback to the application. Unlike the movie player, we'll be using classes within the AV Foundation framework to implement these features. As you'll learn, very little coding needs to be done to make this work!

For the recorder, we'll use the AVAudioRecorder class and these methods:

  • initWithURL:settings:error: Provided with an NSURL instance pointing to a local file and NSDictionary containing a few settings, this method returns an instance of a recorder, ready to use.
  • record: Begins recording.
  • stop: Ends the recording session.

Not coincidentally, the playback feature, an instance of AVAudioPlayer, uses some very similar methods:

  • initWithContentsOfURL:error: Creates an audio player object that can be used to play back the contents of the file pointed to by an NSURL object.
  • play: Plays back the audio.

When you were entering the contents of the MediaPlayground.h file a bit earlier, you may have noticed that we slipped in a protocol: AVAudioPlayerDelegate. By conforming to this protocol, we can implement the method audioPlayerDidFinishPlaying:successfully:, which will automatically be invoked when our audio player finishes playing back the recording. No notifications needed this time around!
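To make that concrete, here is a minimal sketch of how playback and the delegate callback might fit together. The action name playAudio: and the soundPlayer instance variable are hypothetical placeholders (they aren't part of the project setup covered so far); the example also assumes manual reference counting, as was standard at the time:

```objc
// Hypothetical playAudio: action; assumes a soundPlayer instance
// variable and the soundRecorder set up in viewDidLoad.
- (IBAction)playAudio:(id)sender {
    // Point the player at the same file the recorder wrote.
    soundPlayer = [[AVAudioPlayer alloc]
                      initWithContentsOfURL:[soundRecorder url]
                                      error:nil];
    soundPlayer.delegate = self; // enables the "did finish" callback
    [soundPlayer play];
}

// Invoked automatically when playback ends, because the view
// controller conforms to AVAudioPlayerDelegate.
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player
                       successfully:(BOOL)flag {
    [player release]; // manual reference counting, pre-ARC
}
```

Because the delegate method is called for us, there is no need to poll the player or register for a notification, which is exactly the convenience the protocol buys us.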

Adding the AV Foundation Framework

To use the AVAudioPlayer and AVAudioRecorder classes, we must add the AV Foundation framework to the project. Right-click the Frameworks folder icon in the Xcode project, and choose Add, Existing Frameworks. Select AVFoundation.framework, and then click Add.

Remember, the framework also requires a corresponding import line in your header file (#import <AVFoundation/AVFoundation.h>) to access the classes and methods. We added this earlier when setting up the project.

Implementing Audio Recording

To add audio recording, we need to create the recordAudio: method, but before we do, let's think through this a bit. What happens when we initiate a recording? In this application, recording will continue until we press the button again.
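One possible shape for that toggle is sketched below. It assumes recordAudio: is the action connected to the record button (consistent with the method name used in this section) and relies on AVAudioRecorder's own recording property, so no extra state flag is needed:

```objc
// Toggle recording on each button press. The soundRecorder instance
// variable is configured in viewDidLoad (see Listing-4).
- (IBAction)recordAudio:(id)sender {
    if ([soundRecorder isRecording]) {
        [soundRecorder stop];   // second press: end the session
    } else {
        [soundRecorder record]; // first press: begin recording
    }
}
```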

To implement this functionality, the "recorder" object itself must persist between calls to the recordAudio: method. We'll make sure this happens by using the soundRecorder instance variable in the MediaPlaygroundViewController class (declared in the project setup) to hold the AVAudioRecorder object. By setting the object up in the viewDidLoad method, it will be available anywhere and anytime we need it. Edit MediaPlaygroundViewController.m and add the code in Listing-4 to viewDidLoad.

LISTING-4
1: - (void)viewDidLoad {
2:     NSString *tempDir;
3:     NSURL *soundFile;
4:     NSDictionary *soundSetting;
5:
6:     tempDir=NSTemporaryDirectory();
7:     soundFile=[NSURL fileURLWithPath:
8: 		[tempDir stringByAppendingPathComponent:@"sound.caf"]];
9:
10:     soundSetting = [NSDictionary dictionaryWithObjectsAndKeys:
11:       [NSNumber numberWithFloat: 44100.0],AVSampleRateKey,
12:       [NSNumber numberWithInt: kAudioFormatMPEG4AAC],AVFormatIDKey,
13:       [NSNumber numberWithInt: 2],AVNumberOfChannelsKey,
14:       [NSNumber numberWithInt: AVAudioQualityHigh],AVEncoderAudioQualityKey,
15:        nil];
16:
17:     soundRecorder = [[AVAudioRecorder alloc] initWithURL: soundFile
18: 						settings: soundSetting
19: 							error: nil];
20:
21:     [super viewDidLoad];
22: }

Beginning with the basics, lines 2-4 declare the variables we'll need: tempDir, a string that will hold the path to the iPhone's temporary directory (where we'll store the sound recording); soundFile, a URL that will point to the sound file itself; and soundSetting, a dictionary that will hold several settings telling the recorder how it should record.

In line 6, we use NSTemporaryDirectory() to grab and store the temporary directory path where your application can store its sound file.

Lines 7-8 concatenate "sound.caf" onto the end of the temporary directory. This string is then used to initialize a new instance of NSURL, which is stored in the soundFile variable.

Lines 10-15 create an NSDictionary object that contains keys and values for configuring the format of the sound being recorded. Unless you're familiar with audio recording, many of these settings might sound pretty foreign. Here's the 30-second summary:

  • AVSampleRateKey: The number of audio samples the recorder will take per second.
  • AVFormatIDKey: The recording format for the audio.
  • AVNumberOfChannelsKey: The number of audio channels in the recording. Stereo audio, for example, has two channels.
  • AVEncoderAudioQualityKey: A quality setting for the encoder.
To learn more about the different settings, what they mean, and what the possible options are, read the AVAudioRecorder Class Reference (scroll to the "Constants" section) in the Xcode developer documentation utility.

The audio format specified in the settings is defined in the CoreAudioTypes.h file. Because the settings reference an audio type by name, you must import this file (#import <CoreAudio/CoreAudioTypes.h>). Again, this was completed in the initial project setup, so there's no need to make any changes now.

In lines 17-19, the audio recorder, soundRecorder, is initialized with the soundFile URL and the settings stored in the soundSetting dictionary. We pass nil to the error parameter because we don't (for this example) care whether an error occurs. If we did, we could pass the address of an NSError pointer here and inspect it after the call.
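For reference, here is what that error handling might look like if we did care. This is an optional variant of lines 17-19, not something you need to add now:

```objc
// Variant of the initialization that checks for failure by passing
// the address of an NSError pointer instead of nil.
NSError *error = nil;
soundRecorder = [[AVAudioRecorder alloc] initWithURL:soundFile
                                            settings:soundSetting
                                               error:&error];
if (error != nil) {
    NSLog(@"Couldn't create the recorder: %@",
          [error localizedDescription]);
}
```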

[Previous] [Contents] [Next]