<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Pranjal Satija - AppCoda]]></title><description><![CDATA[AppCoda is one of the leading iOS programming communities. Our goal is to empower everyone to create apps through easy-to-understand tutorials. Learn by doing is the heart of our learning materials. ]]></description><link>https://www.appcoda.com/</link><image><url>https://www.appcoda.com/favicon.png</url><title>Pranjal Satija - AppCoda</title><link>https://www.appcoda.com/</link></image><generator>Ghost 5.83</generator><lastBuildDate>Fri, 10 Apr 2026 21:59:26 GMT</lastBuildDate><atom:link href="https://www.appcoda.com/author/pranjalsatija/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Building a Full Screen Camera App Using AVFoundation]]></title><description><![CDATA[<!--kg-card-begin: html-->
<p>Today, we&#x2019;ll be learning how to use AV Foundation, an Apple system framework that exists on macOS and iOS, along with watchOS and tvOS. The goal of this tutorial will be to help you build a fully functional iOS app that&#x2019;s capable of capturing photos and</p>]]></description><link>https://www.appcoda.com/avfoundation-swift-guide/</link><guid isPermaLink="false">66612a0f166d3c03cf011429</guid><category><![CDATA[iOS Programming]]></category><category><![CDATA[Swift]]></category><dc:creator><![CDATA[Pranjal Satija]]></dc:creator><pubDate>Tue, 30 May 2017 00:01:12 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2017/05/full-screen-camera.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<img src="https://www.appcoda.com/content/images/wordpress/2017/05/full-screen-camera.jpg" alt="Building a Full Screen Camera App Using AVFoundation"><p>Today, we&#x2019;ll be learning how to use AV Foundation, an Apple system framework that exists on macOS and iOS, along with watchOS and tvOS. The goal of this tutorial will be to help you build a fully functional iOS app that&#x2019;s capable of capturing photos and videos using the device&#x2019;s cameras. We&#x2019;ll also be following the principles of good object-oriented programming and designing a utility class that can be reused and extended in all your projects.</p>
<div class="alert green"><strong>Note:</strong> This tutorial requires a physical iOS device, <em>not</em> the simulator. You won&#x2019;t be able to run the demo app on the simulator. This tutorial also assumes that you have a relatively strong knowledge of basic UIKit concepts such as actions, Interface Builder, and Storyboards, along with a working knowledge of Swift.</div>
<h2>What is AV Foundation?</h2>
<blockquote><p>
 AV Foundation is the full featured framework for working with time-based audiovisual media on iOS, macOS, watchOS and tvOS. Using AV Foundation, you can easily play, create, and edit QuickTime movies and MPEG-4 files, play HLS streams, and build powerful media functionality into your apps. &#x2013; Apple
</p></blockquote>
<p>So, there you have it. AV Foundation is a framework for capturing, processing, and editing audio and video on Apple devices. In this tutorial, we&#x2019;ll specifically be using it to capture photos and videos, complete with multiple camera support, front and rear flash, and audio for videos.</p>
<h2>Do I need AV Foundation?</h2>
<p>Before you embark on this journey, remember that AV Foundation is a complex and intricate tool. In many instances, using Apple&#x2019;s default APIs such as <code>UIImagePickerController</code> will suffice. Make sure you actually need to use AV Foundation before you begin this tutorial.</p>
<h3>Sessions, Devices, Inputs, and Outputs</h3>
<p>At the core of capturing photos and videos with AV Foundation is the <em>capture session</em>. According to Apple, the capture session is &#x201C;an object that manages capture activity and coordinates the flow of data from input devices to capture outputs.&#x201D; In AV Foundation, capture sessions are managed by the <code>AVCaptureSession</code> object.</p>
<p>Additionally, the <em>capture device</em> is used to access the physical audio and video capture hardware available on an iOS device. To use AVFoundation, you take capture devices, use them to create capture inputs, provide the session with those inputs, and then save the result in capture outputs. Here&#x2019;s a diagram that I made that depicts this relationship:</p>
<p><img loading="lazy" decoding="async" src="http://www.appcoda.com/wp-content/uploads/2017/05/Flowchart.jpeg" alt="Building a Full Screen Camera App Using AVFoundation" width="1024" height="768" class="aligncenter size-full wp-image-10071" srcset="https://www.appcoda.com/content/images/wordpress/2017/05/Flowchart.jpeg 1024w, https://www.appcoda.com/content/images/wordpress/2017/05/Flowchart-200x150.jpeg 200w, https://www.appcoda.com/content/images/wordpress/2017/05/Flowchart-400x300.jpeg 400w, https://www.appcoda.com/content/images/wordpress/2017/05/Flowchart-768x576.jpeg 768w, https://www.appcoda.com/content/images/wordpress/2017/05/Flowchart-860x645.jpeg 860w, https://www.appcoda.com/content/images/wordpress/2017/05/Flowchart-680x510.jpeg 680w, https://www.appcoda.com/content/images/wordpress/2017/05/Flowchart-50x38.jpeg 50w" sizes="(max-width: 1024px) 100vw, 1024px"></p>
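<p>If it helps to see that relationship in code first, here&#x2019;s a toy sketch of the pipeline. These are hypothetical stand-in types, <em>not</em> the real AVFoundation classes &#x2014; just plain Swift structs that mirror the shape of the diagram:</p>
<pre lang="swift">
// Toy model of the capture pipeline (hypothetical types, not AVFoundation):
// devices are wrapped in inputs, and a session routes data from inputs to outputs.
struct CaptureDevice { let position: String }       // stands in for AVCaptureDevice
struct CaptureInput { let device: CaptureDevice }   // stands in for AVCaptureDeviceInput
struct CaptureOutput { let name: String }           // stands in for AVCapturePhotoOutput

class CaptureSession {                              // stands in for AVCaptureSession
    private(set) var inputs = [CaptureInput]()
    private(set) var outputs = [CaptureOutput]()
    func addInput(_ input: CaptureInput) { inputs.append(input) }
    func addOutput(_ output: CaptureOutput) { outputs.append(output) }
}

// Wire it up: device, then input, then session, then output.
let session = CaptureSession()
session.addInput(CaptureInput(device: CaptureDevice(position: &quot;back&quot;)))
session.addOutput(CaptureOutput(name: &quot;photo&quot;))
print(&quot;\(session.inputs.count) input(s), \(session.outputs.count) output(s)&quot;)
</pre>
<p>The real API follows the same arrangement, with <code>AVCaptureSession</code> doing the routing between inputs and outputs.</p>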
<h2>Example Project</h2>
<p>As always, we want you to explore the framework by getting your hands dirty. You&#x2019;ll work on an example project, but to let us focus on the discussion of the AVFoundation framework, this tutorial comes with a starter project. Before you move on, <a href="https://github.com/appcoda/FullScreenCamera/raw/master/CameraDemoStarter.zip?ref=appcoda.com">download the starter project here</a> and take a quick look.</p>
<p>The example project is rather basic. It contains:</p>
<ul>
<li>An <code>Assets.xcassets</code> file that contains all of the necessary iconography for our project. Credit goes to Google&#x2019;s Material Design team for these icons. You can find them, along with hundreds of others, available for <strong>free</strong> at <a href="http://material.io/icons?ref=appcoda.com">material.io/icons</a>.</li>
<li>A Storyboard file with one view controller. This view controller will be used to handle all photo and video capture within our app. It contains:
<ul>
<li>A capture button to initiate photo / video capture.</li>
<li>A capture preview view so that you can see what the camera sees in real time.</li>
<li>The necessary controls for switching cameras and toggling the flash.</li>
</ul>
</li>
<li>A <code>ViewController.swift</code> file that&#x2019;s responsible for managing the view controller mentioned above. It contains:
<ul>
<li>All of the necessary outlets that connect the UI controls mentioned above to our code.</li>
<li>A computed property to hide the status bar.</li>
<li>A setup function that styles the capture button appropriately.</li>
</ul>
</li>
</ul>
<p>Build and run the project, and you should see something like this:</p>
<p><img decoding="async" src="http://www.appcoda.com/wp-content/uploads/2017/05/Screenshot-1.png" alt="Building a Full Screen Camera App Using AVFoundation" width="320" class="aligncenter size-full wp-image-10072" srcset="https://www.appcoda.com/content/images/wordpress/2017/05/Screenshot-1.png 1242w, https://www.appcoda.com/content/images/wordpress/2017/05/Screenshot-1-200x356.png 200w, https://www.appcoda.com/content/images/wordpress/2017/05/Screenshot-1-169x300.png 169w, https://www.appcoda.com/content/images/wordpress/2017/05/Screenshot-1-768x1365.png 768w, https://www.appcoda.com/content/images/wordpress/2017/05/Screenshot-1-576x1024.png 576w, https://www.appcoda.com/content/images/wordpress/2017/05/Screenshot-1-1240x2204.png 1240w, https://www.appcoda.com/content/images/wordpress/2017/05/Screenshot-1-860x1529.png 860w, https://www.appcoda.com/content/images/wordpress/2017/05/Screenshot-1-680x1209.png 680w, https://www.appcoda.com/content/images/wordpress/2017/05/Screenshot-1-400x711.png 400w, https://www.appcoda.com/content/images/wordpress/2017/05/Screenshot-1-50x89.png 50w" sizes="(max-width: 1242px) 100vw, 1242px"></p>
<p>Cool! Let&#x2019;s get started!</p>
<h2>Working with AVFoundation</h2>
<p>In this tutorial, we&#x2019;re going to design a class called <code>CameraController</code>, that will be responsible for doing the heavy lifting related to photo and video capture. Our view controller will use <code>CameraController</code> and bind it to our user interface.</p>
<p>To get started, create a new Swift file in your project and call it <code>CameraController.swift</code>. Import <code>AVFoundation</code> and declare an empty class, like this:</p>
<pre lang="swift">
import AVFoundation

class CameraController { }
</pre>
<h3>Photo Capture</h3>
<p>To begin, we&#x2019;re going to implement photo capture with the rear camera. This will be our baseline functionality; we&#x2019;ll build the ability to switch cameras, use the flash, and record videos on top of it. Since configuring and starting a capture session is a relatively intensive procedure, we&#x2019;re going to decouple it from <code>init</code> and create a function called <code>prepare</code> that prepares our capture session for use and calls a completion handler when it&#x2019;s done. Add a <code>prepare</code> function to your <code>CameraController</code> class:</p>
<pre lang="swift">
func prepare(completionHandler: @escaping (Error?) -&gt; Void) { }
</pre>
<p>This function will handle the creation and configuration of a new capture session. Remember, setting up the capture session consists of 4 steps:</p>
<ol>
<li>Creating a capture session.</li>
<li>Obtaining and configuring the necessary capture devices.</li>
<li>Creating inputs using the capture devices.</li>
<li>Configuring a photo output object to process captured images.</li>
</ol>
<p>We&#x2019;ll use Swift&#x2019;s nested functions to encapsulate our code in a manageable way. Start by declaring 4 empty functions within <code>prepare</code> and then calling them:</p>
<pre lang="swift">
func prepare(completionHandler: @escaping (Error?) -&gt; Void) {
    func createCaptureSession() { }
    func configureCaptureDevices() throws { }
    func configureDeviceInputs() throws { }
    func configurePhotoOutput() throws { }
    
    DispatchQueue(label: &quot;prepare&quot;).async {
        do {
            createCaptureSession()
            try configureCaptureDevices()
            try configureDeviceInputs()
            try configurePhotoOutput()
        }
            
        catch {
            DispatchQueue.main.async {
                completionHandler(error)
            }
            
            return
        }
        
        DispatchQueue.main.async {
            completionHandler(nil)
        }
    }
}
</pre>
<p>In the above code listing, we&#x2019;ve created boilerplate functions for performing the 4 key steps in preparing an <code>AVCaptureSession</code> for photo capture. We&#x2019;ve also set up an asynchronously executing block that calls the four functions, catches any errors if necessary, and then calls the completion handler. All we have left to do is implement the four functions! Let&#x2019;s start with <code>createCaptureSession</code>.</p>
<h4>Create Capture Session</h4>
<p>Before configuring a given <code>AVCaptureSession</code>, we need to create it! Add the following property to your <code>CameraController.swift</code> file:</p>
<pre lang="swift">
var captureSession: AVCaptureSession?
</pre>
<p>Next, add the following to the body of your <code>createCaptureSession</code> function that&#x2019;s nested within <code>prepare</code>:</p>
<pre lang="swift">
self.captureSession = AVCaptureSession()
</pre>
<p>This code simply creates a new <code>AVCaptureSession</code> and stores it in the <code>captureSession</code> property.</p>
<h4>Configure Capture Devices</h4>
<p>Now that we&#x2019;ve created an <code>AVCaptureSession</code>, we need to create the <code>AVCaptureDevice</code> objects that represent the actual iOS device&#x2019;s cameras. Go ahead and add the following properties to your <code>CameraController</code> class. We&#x2019;re adding both the <code>frontCamera</code> and <code>rearCamera</code> properties now because we&#x2019;ll set up the basics of multi-camera capture here and implement the ability to switch cameras later.</p>
<pre lang="swift">
var frontCamera: AVCaptureDevice?
var rearCamera: AVCaptureDevice?
</pre>
<p>Next, declare an embedded type within <code>CameraController.swift</code>. We&#x2019;ll be using this embedded type to manage the various errors we might encounter while creating a capture session:</p>
<pre lang="swift">
enum CameraControllerError: Swift.Error {
    case captureSessionAlreadyRunning
    case captureSessionIsMissing
    case inputsAreInvalid
    case invalidOperation
    case noCamerasAvailable
    case unknown
}
</pre>
<p>You&#x2019;ll notice that there are various error types in this enum. Just add them for now; we&#x2019;re going to use them later.</p>
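<p>As a quick, standalone sketch of how these errors will be consumed later: since <code>prepare</code> hands any thrown error to its completion handler as a plain <code>Error</code>, the caller can pattern match against <code>CameraControllerError</code> to decide what to tell the user. The <code>message(for:)</code> helper below is hypothetical, purely for illustration; the enum is the one we just declared:</p>
<pre lang="swift">
// Hypothetical illustration: mapping CameraControllerError cases to user-facing
// messages, the way a view controller might inside prepare&apos;s completion handler.
enum CameraControllerError: Swift.Error {
    case captureSessionAlreadyRunning
    case captureSessionIsMissing
    case inputsAreInvalid
    case invalidOperation
    case noCamerasAvailable
    case unknown
}

func message(for error: Error) -&gt; String {
    switch error {
    case CameraControllerError.noCamerasAvailable:
        return &quot;No cameras are available on this device.&quot;
    case CameraControllerError.captureSessionIsMissing:
        return &quot;The capture session hasn&apos;t been set up yet.&quot;
    default:
        return &quot;An unknown error occurred: \(error)&quot;
    }
}

print(message(for: CameraControllerError.noCamerasAvailable))
</pre>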
<p>Now comes the fun part! Let&#x2019;s find the cameras available on the device. We can do this with <code>AVCaptureDeviceDiscoverySession</code>. Add the following to <code>configureCaptureDevices</code>:</p>
<pre lang="swift">
//1
let session = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .unspecified)
guard let cameras = (session?.devices.flatMap { $0 }), !cameras.isEmpty else { throw CameraControllerError.noCamerasAvailable }

//2
for camera in cameras {
    if camera.position == .front {
        self.frontCamera = camera
    }

    if camera.position == .back {
        self.rearCamera = camera

        try camera.lockForConfiguration()
        camera.focusMode = .continuousAutoFocus
        camera.unlockForConfiguration()
    }
}
</pre>
<p>Here&#x2019;s what we just did:</p>
<ol>
<li>These 2 lines of code use <code>AVCaptureDeviceDiscoverySession</code> to find all of the wide angle cameras available on the current device and convert them into an array of non-optional <code>AVCaptureDevice</code> instances. If no cameras are available, we throw an error.</li>
<li>This loop iterates through the available cameras found in code segment 1 and determines which is the front camera and which is the rear camera. It also configures the rear camera for continuous autofocus, throwing any errors encountered along the way.</li>
</ol>
<p>Cool! We used <code>AVCaptureDeviceDiscoverySession</code> to find the available cameras on the device and configure them to meet our specifications. Let&#x2019;s connect them to our capture session.</p>
<h4>Configure Device Inputs</h4>
<p>Now we can create capture device inputs, which take capture devices and connect them to our capture session. Before we do this, add the following properties to <code>CameraController</code> to ensure that we can store our inputs:</p>
<pre lang="swift">
var currentCameraPosition: CameraPosition?
var frontCameraInput: AVCaptureDeviceInput?
var rearCameraInput: AVCaptureDeviceInput?
</pre>
<p>Our code won&#x2019;t compile in this state, because <code>CameraPosition</code> is undefined. Let&#x2019;s define it. Add this as an embedded type within <code>CameraController</code>:</p>
<pre lang="swift">
public enum CameraPosition {
    case front
    case rear
}
</pre>
<p>Great. Now we have all the necessary properties for storing and managing our capture device inputs. Let&#x2019;s implement <code>configureDeviceInputs</code>:</p>
<pre lang="swift">
func configureDeviceInputs() throws {
    //3
    guard let captureSession = self.captureSession else { throw CameraControllerError.captureSessionIsMissing }

    //4
    if let rearCamera = self.rearCamera {
        self.rearCameraInput = try AVCaptureDeviceInput(device: rearCamera)

        if captureSession.canAddInput(self.rearCameraInput!) { captureSession.addInput(self.rearCameraInput!) }

        self.currentCameraPosition = .rear
    }

    else if let frontCamera = self.frontCamera {
        self.frontCameraInput = try AVCaptureDeviceInput(device: frontCamera)

        if captureSession.canAddInput(self.frontCameraInput!) { captureSession.addInput(self.frontCameraInput!) }
        else { throw CameraControllerError.inputsAreInvalid }

        self.currentCameraPosition = .front
    }

    else { throw CameraControllerError.noCamerasAvailable }
}
</pre>
<p>Here&#x2019;s what we did:</p>
<ol start="3">
<li>This line simply ensures that <code>captureSession</code> exists. If not, we throw an error.</li>
<li>These <code>if</code> statements are responsible for creating the necessary capture device input to support photo capture. <code>AVFoundation</code> only allows one camera-based input per capture session at a time. Since the rear camera is traditionally the default, we attempt to create an input from it and add it to the capture session. If that fails, we fall back on the front camera. If that fails as well, we throw an error.</li>
</ol>
<h4>Configure Photo Output</h4>
<p>Up until this point, we&#x2019;ve added all the necessary inputs to <code>captureSession</code>. Now we just need a way to get the necessary data <em>out</em> of our capture session. Luckily, we have <code>AVCapturePhotoOutput</code>. Add one more property to <code>CameraController</code>:</p>
<pre lang="swift">
var photoOutput: AVCapturePhotoOutput?
</pre>
<p>Now, let&#x2019;s implement <code>configurePhotoOutput</code> like this:</p>
<pre lang="swift">
func configurePhotoOutput() throws {
    guard let captureSession = self.captureSession else { throw CameraControllerError.captureSessionIsMissing }

    self.photoOutput = AVCapturePhotoOutput()
    self.photoOutput!.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey : AVVideoCodecJPEG])], completionHandler: nil)

    if captureSession.canAddOutput(self.photoOutput!) { captureSession.addOutput(self.photoOutput!) }

    captureSession.startRunning()
}
</pre>
<p>This is a simple implementation. It just configures <code>photoOutput</code>, telling it to use the JPEG file format for its video codec. Then, it adds <code>photoOutput</code> to <code>captureSession</code>. Finally, it starts <code>captureSession</code>.</p>
<p>We&#x2019;re almost done! Your <code>CameraController.swift</code> file should now look something like this:</p>
<pre lang="swift">
import AVFoundation

class CameraController {
    var captureSession: AVCaptureSession?

    var currentCameraPosition: CameraPosition?

    var frontCamera: AVCaptureDevice?
    var frontCameraInput: AVCaptureDeviceInput?

    var photoOutput: AVCapturePhotoOutput?

    var rearCamera: AVCaptureDevice?
    var rearCameraInput: AVCaptureDeviceInput?
}

extension CameraController {
    func prepare(completionHandler: @escaping (Error?) -&gt; Void) {
        func createCaptureSession() {
            self.captureSession = AVCaptureSession()
        }

        func configureCaptureDevices() throws {
            let session = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .unspecified)
            guard let cameras = (session?.devices.flatMap { $0 }), !cameras.isEmpty else { throw CameraControllerError.noCamerasAvailable }

            for camera in cameras {
                if camera.position == .front {
                    self.frontCamera = camera
                }

                if camera.position == .back {
                    self.rearCamera = camera

                    try camera.lockForConfiguration()
                    camera.focusMode = .continuousAutoFocus
                    camera.unlockForConfiguration()
                }
            }
        }

        func configureDeviceInputs() throws {
            guard let captureSession = self.captureSession else { throw CameraControllerError.captureSessionIsMissing }

            if let rearCamera = self.rearCamera {
                self.rearCameraInput = try AVCaptureDeviceInput(device: rearCamera)

                if captureSession.canAddInput(self.rearCameraInput!) { captureSession.addInput(self.rearCameraInput!) }

                self.currentCameraPosition = .rear
            }

            else if let frontCamera = self.frontCamera {
                self.frontCameraInput = try AVCaptureDeviceInput(device: frontCamera)

                if captureSession.canAddInput(self.frontCameraInput!) { captureSession.addInput(self.frontCameraInput!) }
                else { throw CameraControllerError.inputsAreInvalid }

                self.currentCameraPosition = .front
            }

            else { throw CameraControllerError.noCamerasAvailable }
        }

        func configurePhotoOutput() throws {
            guard let captureSession = self.captureSession else { throw CameraControllerError.captureSessionIsMissing }

            self.photoOutput = AVCapturePhotoOutput()
            self.photoOutput!.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey : AVVideoCodecJPEG])], completionHandler: nil)

            if captureSession.canAddOutput(self.photoOutput!) { captureSession.addOutput(self.photoOutput!) }
            captureSession.startRunning()
        }

        DispatchQueue(label: &quot;prepare&quot;).async {
            do {
                createCaptureSession()
                try configureCaptureDevices()
                try configureDeviceInputs()
                try configurePhotoOutput()
            }

            catch {
                DispatchQueue.main.async {
                    completionHandler(error)
                }

                return
            }

            DispatchQueue.main.async {
                completionHandler(nil)
            }
        }
    }
}

extension CameraController {
    enum CameraControllerError: Swift.Error {
        case captureSessionAlreadyRunning
        case captureSessionIsMissing
        case inputsAreInvalid
        case invalidOperation
        case noCamerasAvailable
        case unknown
    }

    public enum CameraPosition {
        case front
        case rear
    }
}
</pre>
<div class="alert gray"><strong>Note:</strong> I used extensions to segment the code appropriately. You don&#x2019;t have to do this, but I think it&#x2019;s good practice, because it makes your code easier to read and write.</div>
<h4>Display Preview</h4>
<p>Now that we have the camera device ready, it&#x2019;s time to show what it captures on screen. Add <em>another</em> function to <code>CameraController</code> (outside of <code>prepare</code>) called <code>displayPreview</code>. It should have the following signature:</p>
<pre lang="swift">
func displayPreview(on view: UIView) throws { }
</pre>
<p>Additionally, <code>import UIKit</code> in your <code>CameraController.swift</code> file. We&#x2019;ll need it to work with <code>UIView</code>.</p>
<p>As its name suggests, this function will be responsible for creating a capture preview and displaying it on the provided view. Let&#x2019;s add a property to <code>CameraController</code> to support this function:</p>
<pre lang="swift">
var previewLayer: AVCaptureVideoPreviewLayer?
</pre>
<p>This property will hold the preview layer that displays the output of <code>captureSession</code>. Let&#x2019;s implement the method:</p>
<pre lang="swift">
func displayPreview(on view: UIView) throws {
    guard let captureSession = self.captureSession, captureSession.isRunning else { throw CameraControllerError.captureSessionIsMissing }

    self.previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    self.previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.previewLayer?.connection?.videoOrientation = .portrait

    view.layer.insertSublayer(self.previewLayer!, at: 0)
    self.previewLayer?.frame = view.frame
}
</pre>
<p>This function creates an <code>AVCaptureVideoPreviewLayer</code> using <code>captureSession</code>, sets it to the portrait orientation, and adds it to the provided view.</p>
<h4>Wiring It Up</h4>
<p>Cool! Now, let&#x2019;s try connecting all this to our view controller. Head on over to <code>ViewController.swift</code>. First, add a property to <code>ViewController.swift</code>:</p>
<pre lang="swift">
let cameraController = CameraController()
</pre>
<p>Then, add a nested function in <code>viewDidLoad()</code>:</p>
<pre lang="swift">
func configureCameraController() {
    cameraController.prepare {(error) in
        if let error = error {
            print(error)
        }

        try? self.cameraController.displayPreview(on: self.capturePreviewView)
    }
}

configureCameraController()
</pre>
<p>This function simply prepares our camera controller as we designed it to.</p>
<p>Unfortunately, we still have one more step. This is a privacy requirement enforced by Apple: you have to provide a reason explaining why your app needs to use the camera. Open <code>Info.plist</code> and insert a row with the key <code>NSCameraUsageDescription</code> (displayed in Xcode as &#x201C;Privacy &#x2013; Camera Usage Description&#x201D;):</p>
<p><img loading="lazy" decoding="async" src="http://www.appcoda.com/wp-content/uploads/2017/05/privacy-info-list-camera.png" alt="Building a Full Screen Camera App Using AVFoundation" width="1756" height="292" class="aligncenter size-full wp-image-10074" srcset="https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera.png 1756w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera-200x33.png 200w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera-600x100.png 600w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera-768x128.png 768w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera-1024x170.png 1024w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera-1680x279.png 1680w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera-1240x206.png 1240w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera-860x143.png 860w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera-680x113.png 680w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera-400x67.png 400w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-camera-50x8.png 50w" sizes="(max-width: 1756px) 100vw, 1756px"></p>
<p>This key tells the user why you&#x2019;re using the camera; the string you provide is shown when iOS asks for the necessary permissions.</p>
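<p>If you prefer editing the plist source directly (right click <code>Info.plist</code> and choose Open As &gt; Source Code), the row corresponds to the <code>NSCameraUsageDescription</code> key. The description string below is just a placeholder; use whatever wording fits your app:</p>
<pre lang="xml">
&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;This app needs access to the camera to capture photos and videos.&lt;/string&gt;
</pre>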
<p>Your <code>ViewController.swift</code> file should now look like this:</p>
<pre lang="swift">
import UIKit

class ViewController: UIViewController {
    let cameraController = CameraController()

    @IBOutlet fileprivate var captureButton: UIButton!

    ///Displays a preview of the video output generated by the device&apos;s cameras.
    @IBOutlet fileprivate var capturePreviewView: UIView!

    ///Allows the user to put the camera in photo mode.
    @IBOutlet fileprivate var photoModeButton: UIButton!
    @IBOutlet fileprivate var toggleCameraButton: UIButton!
    @IBOutlet fileprivate var toggleFlashButton: UIButton!

    ///Allows the user to put the camera in video mode.
    @IBOutlet fileprivate var videoModeButton: UIButton!

    override var prefersStatusBarHidden: Bool { return true }
}

extension ViewController {
    override func viewDidLoad() {
        func configureCameraController() {
            cameraController.prepare {(error) in
                if let error = error {
                    print(error)
                }

                try? self.cameraController.displayPreview(on: self.capturePreviewView)
            }
        }

        func styleCaptureButton() {
            captureButton.layer.borderColor = UIColor.black.cgColor
            captureButton.layer.borderWidth = 2

            captureButton.layer.cornerRadius = min(captureButton.frame.width, captureButton.frame.height) / 2
        }

        styleCaptureButton()
        configureCameraController()
    }
}
</pre>
<p>Build and run your project, tap OK when the device asks for camera permission, and HOORAY! You should have a working capture preview. If not, recheck your code and leave a comment if you need help.</p>
<p><img decoding="async" src="http://www.appcoda.com/wp-content/uploads/2017/05/IMG_2299.png" alt="Building a Full Screen Camera App Using AVFoundation" width="320" class="aligncenter size-full wp-image-10076" srcset="https://www.appcoda.com/content/images/wordpress/2017/05/IMG_2299.png 640w, https://www.appcoda.com/content/images/wordpress/2017/05/IMG_2299-200x356.png 200w, https://www.appcoda.com/content/images/wordpress/2017/05/IMG_2299-169x300.png 169w, https://www.appcoda.com/content/images/wordpress/2017/05/IMG_2299-576x1024.png 576w, https://www.appcoda.com/content/images/wordpress/2017/05/IMG_2299-400x711.png 400w, https://www.appcoda.com/content/images/wordpress/2017/05/IMG_2299-50x89.png 50w" sizes="(max-width: 640px) 100vw, 640px"></p>
<h4>Toggling the Flash / Switching Cameras</h4>
<p>Now that we have a working preview, let&#x2019;s add some more functionality to it. Most camera apps allow their users to switch cameras and enable or disable the flash. Let&#x2019;s make ours do that as well. After we do this, we&#x2019;ll add the ability to capture images and save them to the camera roll.</p>
<p>To start, we&#x2019;re going to enable the ability to toggle the flash. Add this property to <code>CameraController</code>:</p>
<pre lang="swift">
var flashMode = AVCaptureFlashMode.off
</pre>
<p>Now, head over to <code>ViewController</code>. Add an <code>@IBAction func</code> to toggle the flash:</p>
<pre lang="swift">
@IBAction func toggleFlash(_ sender: UIButton) {
    if cameraController.flashMode == .on {
        cameraController.flashMode = .off
        toggleFlashButton.setImage(#imageLiteral(resourceName: &quot;Flash Off Icon&quot;), for: .normal)
    }

    else {
        cameraController.flashMode = .on
        toggleFlashButton.setImage(#imageLiteral(resourceName: &quot;Flash On Icon&quot;), for: .normal)
    }
}
</pre>
<p>For now, this is all we have to do. Our <code>CameraController</code> class will handle the flash when we capture an image. Let&#x2019;s move on to switching cameras.</p>
<p>Switching cameras in AV Foundation is a pretty easy task. We just need to remove the capture input for the existing camera and add a new capture input for the camera we want to switch to. Let&#x2019;s add another function to our <code>CameraController</code> class for switching cameras:</p>
<pre lang="swift">
func switchCameras() throws { }
</pre>
<p>When we switch cameras, we&#x2019;ll either be switching to the front camera or to the rear camera. So, let&#x2019;s declare 2 nested functions within <code>switchCameras</code>:</p>
<pre lang="swift">
func switchToFrontCamera() throws { }
func switchToRearCamera() throws { }
</pre>
<p>Now, add the following to <code>switchCameras()</code>:</p>
<pre lang="swift">
//5
guard let currentCameraPosition = currentCameraPosition, let captureSession = self.captureSession, captureSession.isRunning else { throw CameraControllerError.captureSessionIsMissing }

//6
captureSession.beginConfiguration()

func switchToFrontCamera() throws { }
func switchToRearCamera() throws { }

//7
switch currentCameraPosition {
case .front:
    try switchToRearCamera()

case .rear:
    try switchToFrontCamera()
}

//8
captureSession.commitConfiguration()
</pre>
<p>Here&#x2019;s what we just did:</p>
<ol start="5">
<li>This <code>guard</code> statement ensures that we have a valid, running capture session before attempting to switch cameras. It also verifies that there is a camera that&#x2019;s currently active.</li>
<li>This line tells the capture session to begin configuration.</li>
<li>This <code>switch</code> statement calls either <code>switchToRearCamera</code> or <code>switchToFrontCamera</code>, depending on which camera is currently active.</li>
<li>This line commits, or saves, our capture session after configuring it.</li>
</ol>
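<p>One thing to watch: if <code>switchToFrontCamera</code> or <code>switchToRearCamera</code> throws, the error propagates out of <code>switchCameras</code> before <code>commitConfiguration()</code> runs, leaving the session mid-configuration. A defensive variation (a sketch, not part of the tutorial&#x2019;s code) uses <code>defer</code> to guarantee the commit:</p>
<pre lang="swift">
func switchCameras() throws {
    guard let currentCameraPosition = currentCameraPosition,
        let captureSession = self.captureSession, captureSession.isRunning else { throw CameraControllerError.captureSessionIsMissing }

    captureSession.beginConfiguration()
    //This defer runs even if a nested switch function throws below.
    defer { captureSession.commitConfiguration() }

    func switchToFrontCamera() throws { }
    func switchToRearCamera() throws { }

    switch currentCameraPosition {
    case .front:
        try switchToRearCamera()

    case .rear:
        try switchToFrontCamera()
    }
}
</pre>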
<p>Great! All we have to do now is implement <code>switchToFrontCamera</code> and <code>switchToRearCamera</code>:</p>
<pre lang="swift">
func switchToFrontCamera() throws {
    guard let inputs = captureSession.inputs as? [AVCaptureInput], let rearCameraInput = self.rearCameraInput, inputs.contains(rearCameraInput),
        let frontCamera = self.frontCamera else { throw CameraControllerError.invalidOperation }

    self.frontCameraInput = try AVCaptureDeviceInput(device: frontCamera)

    captureSession.removeInput(rearCameraInput)

    if captureSession.canAddInput(self.frontCameraInput!) {
        captureSession.addInput(self.frontCameraInput!)

        self.currentCameraPosition = .front
    }

    else { throw CameraControllerError.invalidOperation }
}

func switchToRearCamera() throws {
    guard let inputs = captureSession.inputs as? [AVCaptureInput], let frontCameraInput = self.frontCameraInput, inputs.contains(frontCameraInput),
        let rearCamera = self.rearCamera else { throw CameraControllerError.invalidOperation }

    self.rearCameraInput = try AVCaptureDeviceInput(device: rearCamera)

    captureSession.removeInput(frontCameraInput)

    if captureSession.canAddInput(self.rearCameraInput!) {
        captureSession.addInput(self.rearCameraInput!)

        self.currentCameraPosition = .rear
    }

    else { throw CameraControllerError.invalidOperation }
}
</pre>
<p>Both functions have extremely similar implementations. They start by getting an array of all the inputs in the capture session and ensuring that it&#x2019;s possible to switch to the requested camera. Next, they create the necessary input device, remove the old one, and add the new one. Finally, they set <code>currentCameraPosition</code> so that the <code>CameraController</code> class is aware of the change. Easy! Go back to <code>ViewController.swift</code> so that we can add a function to switch cameras:</p>
<pre lang="swift">
@IBAction func switchCameras(_ sender: UIButton) {
    do {
        try cameraController.switchCameras()
    }

    catch {
        print(error)
    }

    switch cameraController.currentCameraPosition {
    case .some(.front):
        toggleCameraButton.setImage(#imageLiteral(resourceName: &quot;Front Camera Icon&quot;), for: .normal)

    case .some(.rear):
        toggleCameraButton.setImage(#imageLiteral(resourceName: &quot;Rear Camera Icon&quot;), for: .normal)

    case .none:
        return
    }
}
</pre>
<p>Great! Open up your storyboard, connect the necessary outlets, and build and run the app. You should be able to freely switch cameras. Now we get to implement the most important feature: image capture!</p>
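<p>For reference, the outlet declarations in <code>ViewController</code> look roughly like the following. <code>toggleCameraButton</code> and <code>toggleFlashButton</code> are the names used in the code above; the other names are assumed from earlier in the tutorial, so adjust them to match your own storyboard:</p>
<pre lang="swift">
@IBOutlet fileprivate var captureButton: UIButton!
@IBOutlet fileprivate var capturePreviewView: UIView!
@IBOutlet fileprivate var toggleCameraButton: UIButton!
@IBOutlet fileprivate var toggleFlashButton: UIButton!
</pre>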
<h3>Implementing Image Capture</h3>
<p>Now we can implement the feature we&#x2019;ve been waiting for this whole time: <strong>image capture</strong>. Before we get into it, let&#x2019;s have a quick recap of everything we&#x2019;ve done so far:</p>
<ul>
<li>Designed a working utility class that can be used to easily hide the complexities of AV Foundation.</li>
<li>Implemented functionality within this class to allow us to create a capture session, use the flash, switch cameras, and get a working preview.</li>
<li>Connected our class to a <code>UIViewController</code> and built a lightweight camera app.</li>
</ul>
<p>All we have left to do is actually capture the images!</p>
<p>Open up <code>CameraController.swift</code> and let&#x2019;s get to work. Add a <code>captureImage</code> function with this signature:</p>
<pre lang="swift">
func captureImage(completion: @escaping (UIImage?, Error?) -&gt; Void) {

}
</pre>
<p>This function, as its name suggests, will capture an image for us using the camera controller we&#x2019;ve built. Let&#x2019;s implement it:</p>
<pre lang="swift">
func captureImage(completion: @escaping (UIImage?, Error?) -&gt; Void) {
    guard let captureSession = captureSession, captureSession.isRunning else { completion(nil, CameraControllerError.captureSessionIsMissing); return }

    let settings = AVCapturePhotoSettings()
    settings.flashMode = self.flashMode

    self.photoOutput?.capturePhoto(with: settings, delegate: self)
    self.photoCaptureCompletionBlock = completion
}
</pre>
<p>Great! It&#x2019;s not a complicated implementation, but our code won&#x2019;t compile yet, because we haven&#x2019;t defined <code>photoCaptureCompletionBlock</code> and <code>CameraController</code> doesn&#x2019;t conform to <code>AVCapturePhotoCaptureDelegate</code>. First, let&#x2019;s add a property, <code>photoCaptureCompletionBlock</code> to <code>CameraController</code>:</p>
<pre lang="swift">
var photoCaptureCompletionBlock: ((UIImage?, Error?) -&gt; Void)?
</pre>
<p>And now, let&#x2019;s extend <code>CameraController</code> to conform to <code>AVCapturePhotoCaptureDelegate</code>:</p>
<pre lang="swift">
extension CameraController: AVCapturePhotoCaptureDelegate {
    public func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?,
                        resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Swift.Error?) {
        if let error = error { self.photoCaptureCompletionBlock?(nil, error) }
            
        else if let buffer = photoSampleBuffer, let data = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buffer, previewPhotoSampleBuffer: nil),
            let image = UIImage(data: data) {
            
            self.photoCaptureCompletionBlock?(image, nil)
        }
            
        else {
            self.photoCaptureCompletionBlock?(nil, CameraControllerError.unknown)
        }
    }
}
</pre>
<p>Great. Now the compiler is raising one more issue:</p>
<pre>
Type &apos;CameraController&apos; does not conform to protocol &apos;NSObjectProtocol&apos;. 
</pre>
<p>We just need to make <code>CameraController</code> inherit from <code>NSObject</code> to fix this, so let&#x2019;s do so now. Change the class declaration for <code>CameraController</code> to <code>class CameraController: NSObject</code> and we&#x2019;ll be set!</p>
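<p>A quick note: the delegate method above is the iOS 10 / Swift 3 signature. If you&#x2019;re targeting iOS 11 or later, AVFoundation deprecates it in favor of <code>photoOutput(_:didFinishProcessingPhoto:error:)</code>. A sketch of the newer form, assuming the same <code>photoCaptureCompletionBlock</code> property:</p>
<pre lang="swift">
extension CameraController: AVCapturePhotoCaptureDelegate {
    //iOS 11+ variant: AVCapturePhoto replaces the raw sample buffers.
    public func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        if let error = error { self.photoCaptureCompletionBlock?(nil, error) }

        else if let data = photo.fileDataRepresentation(), let image = UIImage(data: data) {
            self.photoCaptureCompletionBlock?(image, nil)
        }

        else { self.photoCaptureCompletionBlock?(nil, CameraControllerError.unknown) }
    }
}
</pre>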
<p>Now, head back to <code>ViewController</code> one more time. First, import the <code>Photos</code> framework since we will use the built-in APIs to save the photo.</p>
<pre lang="swift">
import Photos
</pre>
<p>And then insert the following function:</p>
<pre lang="swift">
@IBAction func captureImage(_ sender: UIButton) {
    cameraController.captureImage {(image, error) in
        guard let image = image else {
            print(error ?? &quot;Image capture error&quot;)
            return
        }
        
        try? PHPhotoLibrary.shared().performChangesAndWait {
            PHAssetChangeRequest.creationRequestForAsset(from: image)
        }
    }
}
</pre>
<p>We simply call the <code>captureImage</code> method of the camera controller to take a photo, and then use the <code>PHPhotoLibrary</code> class to save the image to the built-in photo library.</p>
<p>Lastly, connect the <code>@IBAction func</code> to the capture button in the Storyboard, and head over to <code>Info.plist</code> to insert a row:</p>
<p><img loading="lazy" decoding="async" src="http://www.appcoda.com/wp-content/uploads/2017/05/privacy-info-list-photolib.png" alt="Building a Full Screen Camera App Using AVFoundation" width="1756" height="630" class="aligncenter size-full wp-image-10078" srcset="https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib.png 1756w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib-200x72.png 200w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib-600x215.png 600w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib-768x276.png 768w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib-1024x367.png 1024w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib-1680x603.png 1680w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib-1240x445.png 1240w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib-860x309.png 860w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib-680x244.png 680w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib-400x144.png 400w, https://www.appcoda.com/content/images/wordpress/2017/05/privacy-info-list-photolib-50x18.png 50w" sizes="(max-width: 1756px) 100vw, 1756px"></p>
<p>This is a privacy requirement introduced in iOS 10. You have to specify the reason why your app needs to access the photo library.</p>
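<p>In XML form, the row shown above is the <code>NSPhotoLibraryUsageDescription</code> key (displayed as &#x201C;Privacy &#x2013; Photo Library Usage Description&#x201D; in Xcode); the description string is up to you:</p>
<pre lang="xml">
&lt;key&gt;NSPhotoLibraryUsageDescription&lt;/key&gt;
&lt;string&gt;This app saves the photos you capture to your photo library.&lt;/string&gt;
</pre>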
<p>Now build and run the app to capture a photo! After that, open your photo library. You should see the photo you just captured. Congrats, you now know how to use AV Foundation in your apps! Good luck, and stay tuned for the second part of this tutorial, where we&#x2019;ll learn how to capture videos.</p>
<p>For the complete project, you can <a href="https://github.com/appcoda/FullScreenCamera?ref=appcoda.com">download it from GitHub</a>.</p>

<!--kg-card-end: html-->
]]></content:encoded></item></channel></rss>