<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Sai Kambampati - AppCoda]]></title><description><![CDATA[AppCoda is one of the leading iOS programming communities. Our goal is to empower everyone to create apps through easy-to-understand tutorials. Learn by doing is the heart of our learning materials. ]]></description><link>https://www.appcoda.com/</link><image><url>https://www.appcoda.com/favicon.png</url><title>Sai Kambampati - AppCoda</title><link>https://www.appcoda.com/</link></image><generator>Ghost 5.83</generator><lastBuildDate>Sat, 14 Mar 2026 09:02:30 GMT</lastBuildDate><atom:link href="https://www.appcoda.com/author/saikambampati/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Using MusicKit and Apple Music API to Build a Music Player]]></title><description><![CDATA[<!--kg-card-begin: html-->

<p>Hey everyone and welcome back to the second and final part of this tutorial series where we explore the intricacies of Apple&#x2019;s <strong>MusicKit</strong> by building our very own music player in <strong>SwiftUI</strong> that can stream songs from our Apple Music account. If you haven&#x2019;t read Part</p>]]></description><link>https://www.appcoda.com/musickit-music-api/</link><guid isPermaLink="false">66612a0f166d3c03cf0114d6</guid><category><![CDATA[iOS Programming]]></category><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Fri, 24 Jul 2020 12:48:55 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2020/07/hu-ul54pfqi.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->

<img src="https://www.appcoda.com/content/images/wordpress/2020/07/hu-ul54pfqi.jpg" alt="Using MusicKit and Apple Music API to Build a Music Player"><p>Hey everyone and welcome back to the second and final part of this tutorial series where we explore the intricacies of Apple&#x2019;s <strong>MusicKit</strong> by building our very own music player in <strong>SwiftUI</strong> that can stream songs from our Apple Music account. If you haven&#x2019;t read Part 1, you can do that <a href="https://www.appcoda.com/musickit-music-player-swiftui/">here</a>.</p>



<p>In the last tutorial, we created a MusicKit identifier for our Apple Developer account, generated a JSON Web Token private key, and successfully made web requests to the Apple Music API. At the end of the tutorial, we also built the UI of our music player, which looked like the screenshot below.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18313" width="1473" height="1434" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1.png 2946w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-308x300.png 308w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-1024x997.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-200x195.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-768x747.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-1536x1495.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-2048x1993.png 2048w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-1680x1635.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-1240x1207.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-860x837.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-680x662.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-400x389.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-1-50x49.png 50w" sizes="(max-width: 1473px) 100vw, 1473px"></figure>



<p>In this tutorial, we&#x2019;ll start making calls to the Apple Music API to populate our app with real data. We&#x2019;ll also look at how to control media playback with the <code>MediaPlayer</code> framework and enhance the app so it looks like this:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="489" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-1024x489.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18459" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-1024x489.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-600x287.png 600w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-200x96.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-768x367.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-1536x734.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-1680x802.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-1240x592.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-860x411.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-680x325.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-400x191.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-50x24.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Cool, right? Let&#x2019;s get started.</p>



<p><em><strong>Note</strong>: This tutorial was made using Xcode 11.4 and Swift 5.1. An active Apple Music subscription will be needed in order to test the app out. You will also need to run your app on a real device because as of this time, the Simulator does not support Apple Music playback.</em></p>



<h2 class="wp-block-heading">Apple Music API vs iTunes Search API</h2>



<p>If you&#x2019;ve worked with podcasts, TV shows, or any other Apple-provided media content, you&#x2019;ve probably come across the iTunes Search API. The <a href="https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/iTuneSearchAPI/index.html?ref=appcoda.com" class="rank-math-link">iTunes Search API</a> lets you search for content in the iTunes Store, App Store, and iBooks Store. It gives you access to information about apps, books, movies, podcasts, music, music videos, audiobooks, and TV shows. The best part about this API is that it&#x2019;s completely free, so you can get up-to-date information, on the fly, at no additional cost.</p>



<p>But why do we use the <a href="https://developer.apple.com/documentation/applemusicapi?ref=appcoda.com" class="rank-math-link">Apple Music API</a> instead of the iTunes Search API? This is because of how much information and power the Apple Music API can deliver. While the Apple Music API provides the same functionality as the iTunes Search API (fetching information about music and music videos), the Apple Music API takes it to the next level by accessing a user&#x2019;s personal library and recommendations. </p>



<p>This means your app can leverage a user&#x2019;s playlists, songs, or ratings for their content. This is why your user is required to have an Apple Music account if you will be using the API. Another important reason the Apple Music API is better than the iTunes Search API is that it is built to work harmoniously with the <code>MediaPlayer</code> framework. As you&#x2019;ll see, streaming audio is a breeze with the Apple Music API. Let&#x2019;s get started!</p>



<h2 class="wp-block-heading">Creating our Classes</h2>



<p>Due to the nature of SwiftUI, making URL calls within our <code>structs</code> can get really complicated, and so can populating our UI with information gathered from different songs. This is why we&#x2019;ll create an <code>AppleMusicAPI</code> class that contains the functions we&#x2019;ll frequently use in our app, and a <code>Song</code> structure to make it easier to populate our UI.</p>



<p>Before we begin, download <a href="https://github.com/SwiftyJSON/SwiftyJSON/blob/master/Source/SwiftyJSON/SwiftyJSON.swift?ref=appcoda.com" class="rank-math-link">SwiftyJSON.swift</a> and add the file to your project. This file contains the code for the popular library <code>SwiftyJSON</code>. This is a tremendous tool when making HTTP requests and receiving and parsing JSON responses. You can find more information about <code>SwiftyJSON</code> <a href="https://github.com/SwiftyJSON/SwiftyJSON?ref=appcoda.com" class="rank-math-link">here</a>.</p>



<p>To begin, choose File &gt; New &gt; File. From the popup that appears, select <strong>Swift file</strong>. Name this file <em>AppleMusicAPI.swift</em> and save it in the default location. You should now have a template file as shown below.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18316" width="1792" height="1078" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2.png 3584w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-2048x1232.png 2048w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-music-player-2-50x30.png 50w" sizes="(max-width: 1792px) 100vw, 1792px"></figure>



<p>Similarly, let&#x2019;s create a <code>struct</code> for our songs. Go to File &gt; New &gt; File and select <strong>Swift file</strong>. Name this file <em>Song.swift</em> and save it. We&#x2019;ll begin by creating our <code>Song</code> structure.</p>



<p>Add the following code below <code>import Foundation</code>:</p>



<pre class="wp-block-code"><code>struct Song {
    var id: String
    var name: String
    var artistName: String
    var artworkURL: String

    init(id: String, name: String, artistName: String, artworkURL: String) {
        self.id = id
        self.name = name
        self.artworkURL = artworkURL
        self.artistName = artistName
    }
}</code></pre>



<p>The code above creates a simple structure for <code>Song</code>. If you&#x2019;re not familiar with structures, don&#x2019;t worry. A structure is a value type that, much like a class, can define properties, methods, and initializers to set up its initial state, and it often makes code dealing with structured data more concise. That is what we are doing with our <code>init</code> method: we provide a default initializer for the <code>Song</code> structure that lets a caller supply the ID, name, artist&#x2019;s name, and album artwork&#x2019;s URL.</p>
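<p>To see the structure in action, here is a quick sketch that builds a hypothetical <code>Song</code> (the ID and URL below are made-up placeholder values, not real Apple Music data):</p>

```swift
// The Song struct from above, repeated here so this snippet compiles on its own.
struct Song {
    var id: String
    var name: String
    var artistName: String
    var artworkURL: String

    init(id: String, name: String, artistName: String, artworkURL: String) {
        self.id = id
        self.name = name
        self.artworkURL = artworkURL
        self.artistName = artistName
    }
}

// A hypothetical song; the ID and artwork URL are placeholders for illustration.
let song = Song(id: "1440857781",
                name: "Example Song",
                artistName: "Example Artist",
                artworkURL: "https://example.com/artwork.jpg")

print("\(song.name) by \(song.artistName)") // prints "Example Song by Example Artist"
```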



<p>Now switch over to the <strong>AppleMusicAPI.swift</strong> file. Here, we&#x2019;ll be implementing a class called <code>AppleMusicAPI</code> that stores our developer token and a bunch of methods that will help when playing our song through our app.</p>



<p>Remove <code>import Foundation</code> and add the following code to <strong>AppleMusicAPI.swift</strong>:</p>



<pre class="wp-block-code"><code>// 1
import StoreKit

// 2
class AppleMusicAPI {
    // 3
    let developerToken = &quot;YOUR DEVELOPER TOKEN FROM PART 1&quot;

    // 4
    func getUserToken() -&gt; String {
        var userToken = String()

        return userToken
    }
}</code></pre>



<p>The above code is pretty self-explanatory, but don&#x2019;t worry if it&#x2019;s a little unclear. Here&#x2019;s what it does.</p>



<ol><li>First, we import the <code>StoreKit</code> framework. This gives us access to a lot of built-in methods that can communicate with the Apple Music API.</li><li>Here, you&#x2019;ll notice we&#x2019;re defining a <strong>class</strong> called <code>AppleMusicAPI</code>. This makes it easier to manage multiple instances of the class and reference its methods from other views in our app.</li><li>Here we&#x2019;re defining a constant <code>developerToken</code> containing the developer token we created in Part 1 of this tutorial series. This makes it easier to reference the token when communicating with the API.</li><li>Finally, we define our first method in this class: <code>getUserToken()</code>. As mentioned earlier, the Apple Music API can access a user&#x2019;s library and playlists. This is only possible if we obtain a token that identifies that particular user.</li></ol>



<p>Let&#x2019;s finish implementing the rest of the <code>getUserToken</code> method.</p>



<pre class="wp-block-code"><code>func getUserToken() -&gt; String {
    var userToken = String()

    // 1
    let lock = DispatchSemaphore(value: 0)

    // 2
    SKCloudServiceController().requestUserToken(forDeveloperToken: developerToken) { (receivedToken, error) in
        // 3
        guard error == nil else { return }
        if let token = receivedToken {
            userToken = token
            lock.signal()
        }
    }

    // 4
    lock.wait()
    return userToken
}</code></pre>



<p>You might encounter some new code in the above snippet. Don&#x2019;t worry, as most of it is explained below.</p>



<ol><li>First, we define a <code>lock</code> of type <code>DispatchSemaphore</code>. What is a <code>DispatchSemaphore</code>? A dispatch semaphore is an efficient way of halting a thread until a particular signal has been given. This &#x201C;locks&#x201D; a thread from executing further code until that signal arrives.</li><li>We call the <code>SKCloudServiceController().requestUserToken()</code> method to get a token that authenticates the user in personalized Apple Music API requests. Notice how we use our constant <code>developerToken</code> in this method. It&#x2019;s definitely easier than typing the long string again and again.</li><li>Here, we check for errors in what the <code>requestUserToken()</code> function returns. If <code>receivedToken</code> is not <code>nil</code>, we set our <code>userToken</code> variable from above equal to it. Notice that afterwards, we call <code>lock.signal()</code>. This lets the dispatch semaphore we created earlier know that it&#x2019;s OK to continue executing the remaining code.</li><li>Since this method executes asynchronously, it&#x2019;s possible that <code>getUserToken()</code> could reach the line <code>return userToken</code> before a token is received from the <code>SKCloudServiceController</code>. By adding the <code>lock.wait()</code> line, we halt execution until a signal is given (as we implemented in Step 3).</li></ol>
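<p>The wait/signal pattern can be seen in isolation with a plain background task. Below is a minimal, self-contained sketch that simulates an asynchronous callback in place of <code>requestUserToken(forDeveloperToken:)</code> (the function name and token string are made up for illustration):</p>

```swift
import Foundation
import Dispatch

// A minimal sketch of the semaphore pattern used in getUserToken():
// block the caller until an asynchronous callback signals completion.
func fetchValueSynchronously() -> String {
    var value = String()
    let lock = DispatchSemaphore(value: 0)

    // Simulates an asynchronous callback like requestUserToken(forDeveloperToken:).
    DispatchQueue.global().async {
        value = "received-token"
        lock.signal() // let the waiting thread continue
    }

    lock.wait() // halt here until signal() is called
    return value
}

print(fetchValueSynchronously()) // prints "received-token"
```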



<p><strong>Note</strong>: Dispatch semaphores must be used with caution. They force asynchronous code to execute synchronously, which can slow down an app or delay UI updates. In fact, if <code>lock.signal()</code> is never called, the app will remain stuck until the user restarts it. Thus, it&#x2019;s not advisable to use them in big production apps. However, for our purposes, it&#x2019;s completely acceptable.</p>



<p>Now, let&#x2019;s implement our next method: <code>fetchStorefrontID()</code>.</p>



<p>A storefront is an object that represents the iTunes Store territory the content is available in. When we perform a search using the Apple Music API, we&#x2019;d like to show results relevant to our user&#x2019;s location. Beneath <code>getUserToken()</code>, add the following method:</p>



<pre class="wp-block-code"><code>func fetchStorefrontID() -&gt; String {
    // 1
    let lock = DispatchSemaphore(value: 0)
    var storefrontID: String!

    // 2
    let musicURL = URL(string: &quot;https://api.music.apple.com/v1/me/storefront&quot;)!
    var musicRequest = URLRequest(url: musicURL)
    musicRequest.httpMethod = &quot;GET&quot;
    musicRequest.addValue(&quot;Bearer \(developerToken)&quot;, forHTTPHeaderField: &quot;Authorization&quot;)
    musicRequest.addValue(getUserToken(), forHTTPHeaderField: &quot;Music-User-Token&quot;)

    // 3
    URLSession.shared.dataTask(with: musicRequest) { (data, response, error) in
        guard error == nil else { return }

        // 4
        if let json = try? JSON(data: data!) {
            print(json.rawString())
        }
    }.resume()

    // 5
    lock.wait()
    return storefrontID
}</code></pre>



<ol><li>Just like earlier, we create a dispatch semaphore called <code>lock</code> to make sure that the function returns a <code>storefrontID</code> only after the data has been received from our URL request.</li><li>We create a <code>URLRequest</code> using the Apple Music API URL. A lot of the details about handling requests and responses from the Apple Music API can be found <a href="https://developer.apple.com/documentation/applemusicapi/handling_requests_and_responses?ref=appcoda.com" class="rank-math-link">here</a>, but here&#x2019;s the gist. To compose a request to the API, first specify the root path, <em>https://api.music.apple.com/v1/</em>, and then the component you want; here, that is <em>storefront</em>. We then create a GET request, adding our developer and user tokens to the headers.</li><li>Next, we use <code>URLSession</code> to send <code>musicRequest</code>. This is very similar to performing any other networking request for images or data. The <code>.dataTask</code> completion handler hands us three values: <code>data</code> (the data the network response sends back), <code>response</code> (details about the response), and <code>error</code> (which is <code>nil</code> if there is no error). After checking to make sure there is no error, we move to Step 4.</li><li>The data we get is a JSON payload. A payload is the part of transmitted data that is the actual intended message. Using the <code>SwiftyJSON</code> library we added earlier, we&#x2019;ll print the raw JSON output before parsing it. Notice that we&#x2019;re not calling <code>lock.signal()</code> here because we&#x2019;re not setting <code>storefrontID</code> yet. If we did call <code>lock.signal()</code>, our app would crash when testing it. We&#x2019;ll add the signal when we parse our JSON result.</li><li>Finally, we ask the dispatch semaphore to wait before returning the storefront&#x2019;s ID.</li></ol>



<p>This was a lot of code, so now is a good time to make sure everything is working properly. We&#x2019;ll switch to <strong>ContentView.swift</strong> and add an <code>onAppear</code> modifier to our <code>TabView</code>. Whatever code we place inside this modifier will be executed when the <code>TabView</code> is displayed to the user. Our code will ask the user for permission to access their media library and, upon authorization, call the <code>fetchStorefrontID()</code> method, which will print the JSON payload.</p>



<p>Let&#x2019;s do it! At the top of the file, add <code>import StoreKit</code> and then make your <code>TabView</code> look like this:</p>



<pre class="wp-block-code"><code>TabView(selection: $selection) {
// Previous Code
    ...
}
.accentColor(.pink)
.onAppear() {
    SKCloudServiceController.requestAuthorization { (status) in
        if status == .authorized {
            print(AppleMusicAPI().fetchStorefrontID())
        }
    }
}</code></pre>



<p>These few lines of code are easy to understand. We ask the <code>SKCloudServiceController</code> (remember, this is an object provided by <code>StoreKit</code> that determines the current capabilities of the user&#x2019;s music library) for authorization to access the user&#x2019;s media library. If the user grants this access, we run <code>print(AppleMusicAPI().fetchStorefrontID())</code>, which prints the JSON payload of the request.</p>



<p>Notice how we call our <code>AppleMusicAPI</code> class. Similar to <code>ContentView</code> in SwiftUI or <code>ViewController</code> in storyboard-based Swift, adding the pair of parentheses after the class&#x2019;s name initializes it, which gives us access to all its constants and methods, such as the <code>fetchStorefrontID()</code> method.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18436" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-6.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Now, before we run the app to make sure everything is functioning as expected, we need to make a slight addition to our <strong>Info.plist</strong>: a property that gives the user a description of why we need access to their music library. The key is <code>Privacy - Media Library Usage Description</code> and the value can be any message that adequately informs users what the data will be used for.</p>
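<p>If you prefer editing the plist source directly, the raw name behind <code>Privacy - Media Library Usage Description</code> is <code>NSAppleMusicUsageDescription</code>. The description string below is only an example; use any message you like:</p>

```xml
<key>NSAppleMusicUsageDescription</key>
<string>We need access to your music library to stream songs from Apple Music.</string>
```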



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18437" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-7.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Now we can run our app!</p>



<p><em>You&#x2019;ll need to run the app on a physical device with an active Apple Music subscription. Unfortunately, the Simulator does not have the Music app, so accessing a user&#x2019;s Apple Music account there is nearly impossible. Having an active Apple Music subscription lets you test the features of your app on your own account.</em></p>



<p>After authorizing the app to access your Apple Music account, you should wait for some time and expect an output that looks like the one below.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18438" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-8.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Don&#x2019;t worry about the error messages stating something along the lines of &#x201C;Unable to get the local account&#x201D;. Beneath that, you&#x2019;ll see a JSON string printed out, which shows that our code is working. Let&#x2019;s go back to <strong>AppleMusicAPI.swift</strong> and finish the rest of the <code>fetchStorefrontID()</code> method. Delete the line <code>print(json.rawString())</code> and replace it with the following:</p>



<pre class="wp-block-code"><code>// 1
let result = (json[&quot;data&quot;]).array!
let id = (result[0].dictionaryValue)[&quot;id&quot;]!

// 2
storefrontID = id.stringValue

// 3
lock.signal()</code></pre>



<ol><li>We set <code>result</code> to the array the JSON payload provides under the <code>data</code> key. <code>id</code> is simply the value provided under the <code>id</code> key in the dictionary inside <code>result</code>. This can sound a little confusing; the names of the keys and values can be derived from the JSON string we printed earlier, and a tutorial on reading and parsing JSON can help if you want to learn more.</li><li>We set <code>storefrontID</code> to the string value of the <code>id</code> constant from Step 1.</li><li>Just like earlier, we signal the dispatch semaphore that it&#x2019;s okay to execute the remaining code and free up the thread.</li></ol>
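<p>If you&#x2019;d like to see the shape of this parsing without SwiftyJSON, here is a minimal sketch using Foundation&#x2019;s <code>JSONSerialization</code> against a sample payload. The payload below is an illustrative shape trimmed to just the keys we read, not a captured API response:</p>

```swift
import Foundation

// An illustrative storefront-style payload, trimmed to the keys we actually read.
let sampleResponse = """
{ "data": [ { "id": "us", "type": "storefronts" } ] }
""".data(using: .utf8)!

var storefrontID: String?
if let object = try? JSONSerialization.jsonObject(with: sampleResponse),
   let json = object as? [String: Any],              // top-level dictionary
   let result = json["data"] as? [[String: Any]],    // array under the "data" key
   let id = result.first?["id"] as? String {         // "id" key of the first entry
    storefrontID = id
}

print(storefrontID ?? "none") // prints "us"
```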



<p>Your entire <code>fetchStorefrontID()</code> method should look like this.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18439" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-9.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Run the app again and you&#x2019;ll see that instead of a full JSON payload, only two characters are printed: the storefront ID of your Apple Music account. For me, in the United States, a simple &#x201C;us&#x201D; is printed out.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18440" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-10.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>If everything works, give yourself a pat on the back! We have one last method to implement in our <code>AppleMusicAPI</code> class and this one is pretty big. This method will allow us to search Apple Music&#x2019;s entire library of 40 million+ songs based on the keywords given. Now&#x2019;s a good time to take a break!</p>



<h2 class="wp-block-heading">Music Search</h2>



<p>Underneath our <code>fetchStorefrontID()</code> method, let&#x2019;s add a new method called <code>searchAppleMusic(_:)</code> that lets us search Apple Music&#x2019;s library for a given term. Here&#x2019;s some boilerplate code to get you started. You&#x2019;ll notice it&#x2019;s very similar to our <code>fetchStorefrontID()</code> method.</p>



<pre class="wp-block-code"><code>func searchAppleMusic(_ searchTerm: String) -&gt; [Song] {
    let lock = DispatchSemaphore(value: 0)
    var songs = [Song]()

    let musicURL = URL(string: &quot;https://api.music.apple.com/v1/catalog/\(fetchStorefrontID())/search?term=\(searchTerm.replacingOccurrences(of: &quot; &quot;, with: &quot;+&quot;))&amp;types=songs&amp;limit=25&quot;)!
    var musicRequest = URLRequest(url: musicURL)
    musicRequest.httpMethod = &quot;GET&quot;
    musicRequest.addValue(&quot;Bearer \(developerToken)&quot;, forHTTPHeaderField: &quot;Authorization&quot;)
    musicRequest.addValue(getUserToken(), forHTTPHeaderField: &quot;Music-User-Token&quot;)

    URLSession.shared.dataTask(with: musicRequest) { (data, response, error) in
        guard error == nil else {
            // Signal on failure too, or lock.wait() below would block forever.
            lock.signal()
            return
        }

    }.resume()

    lock.wait()
    return songs
}</code></pre>



<p>In terms of variables, we have our routine dispatch semaphore and a new <code>songs</code> array holding the <code>Song</code> type we defined earlier. We&#x2019;ll store the matching songs in this array and return it to populate our table. You&#x2019;ll notice that <code>searchTerm</code> is a <code>String</code> provided as an input to this function. We also replace any whitespace in <code>searchTerm</code> with the <strong>+</strong> symbol, since URLs can&#x2019;t contain spaces.</p>
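<p>Replacing spaces with <strong>+</strong> handles the common case, but other characters a user might type (an ampersand in a band name, for example) can still produce a malformed query. As an optional hardening, not part of the tutorial&#x2019;s code, here&#x2019;s a sketch of how Foundation&#x2019;s percent-encoding could be used instead; the character-set trimming is my own suggestion:</p>

<pre class="wp-block-code"><code>// Characters safe inside a single query *value*: start from
// urlQueryAllowed and remove the ones that delimit query items.
var allowed = CharacterSet.urlQueryAllowed
allowed.remove(charactersIn: &quot;&amp;=+&quot;)

let rawTerm = &quot;Simon &amp; Garfunkel&quot;
// Spaces become %20 and the ampersand is escaped, so the term
// can no longer be confused with a query-item separator.
let encoded = rawTerm.addingPercentEncoding(withAllowedCharacters: allowed) ?? rawTerm</code></pre>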



<p>We then create the <code>URLRequest</code> using the Apple Music API URL. Notice that, like before, it uses the root path <em>https://api.music.apple.com/v1/</em>. This time, however, the path component is <em>catalog</em>, and we pass the search term as a URL parameter. Apart from these changes, we again use a GET request, create the request, and add our developer and user tokens to the headers.</p>



<p>Now comes the important part, which is parsing the data returned from our <code>URLSession.shared.dataTask</code>. Let&#x2019;s think about what we need. We need to parse the data for an array of songs. We then need to isolate the different components of each song in the array, such as title, artist, and album artwork. After isolating these components, we can create an object of type <code>Song</code> and add it to our <code>songs</code> array. Here&#x2019;s how it can be accomplished.</p>



<p>Type the following code inside the <code>URLSession.shared.dataTask()</code> completion handler, underneath the <code>guard</code> statement:</p>



<pre class="wp-block-code"><code>// 1
if let json = try? JSON(data: data!) {
    // 2
    let result = (json[&quot;results&quot;][&quot;songs&quot;][&quot;data&quot;]).array!
    // 3
    for song in result {
        // 4
        let attributes = song[&quot;attributes&quot;]
        let currentSong = Song(id: attributes[&quot;playParams&quot;][&quot;id&quot;].string!, name: attributes[&quot;name&quot;].string!, artistName: attributes[&quot;artistName&quot;].string!, artworkURL: attributes[&quot;artwork&quot;][&quot;url&quot;].string!)
        songs.append(currentSong)
    }
    // 5
    lock.signal()
} else {
    // 6
    lock.signal()
}</code></pre>



<p>This is very similar to the code we wrote above to fetch a user&#x2019;s storefront ID. The only difference is that there&#x2019;s a lot more JSON parsing, and we create a <code>Song</code> object to add to our array.</p>



<ol><li>First, we check whether the data returned is valid JSON. If it is, we create a constant called <code>json</code> containing all this data in JSON format.</li><li>We parse <code>json</code> for the songs array nested within <code>results</code> -&gt; <code>songs</code> -&gt; <code>data</code>. I don&#x2019;t want to burden you with more JSON formatting, but this path was derived by printing the raw string of <code>json</code> and noticing how each component was nested. If you want, you can print <code>json</code> to see how <code>result</code> is nested within the JSON data. <code>result</code> will now contain an array of our songs.</li><li>Next, we loop through each song in <code>result</code> with our trusty for loop.</li><li>For each song, we define a constant called <code>attributes</code>, a dictionary of all the attributes of the current song. These attributes contain lots of information, but we&#x2019;re interested in the id, name, artist&#x2019;s name, and artwork URL. We then create a <code>currentSong</code> constant of type <code>Song</code>, populated with the values from the <code>attributes</code> dictionary, and append it to our <code>songs</code> array.</li><li>Last but not least, we signal our dispatch semaphore through <code>lock</code> so the blocked thread can continue running the remaining code.</li><li>This is more of a safety check: if the data returned is not valid JSON, we don&#x2019;t want our app to stay stuck forever. That&#x2019;s why we add an <code>else</code> clause that also signals the dispatch semaphore.</li></ol>
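<p>If you&#x2019;d rather not depend on SwiftyJSON, the same response can be decoded with Foundation&#x2019;s built-in <code>Codable</code>. This is only an alternative sketch, not the tutorial&#x2019;s code: the nested property names mirror the fields we read above (<code>results</code> -&gt; <code>songs</code> -&gt; <code>data</code> -&gt; <code>attributes</code>), but verify them against the actual Apple Music response before relying on them.</p>

<pre class="wp-block-code"><code>// A hypothetical Codable mirror of just the pieces of the
// search response that this tutorial uses.
struct SearchResponse: Decodable {
    struct Results: Decodable { let songs: SongData? }
    struct SongData: Decodable { let data: [CatalogSong] }
    struct CatalogSong: Decodable { let attributes: Attributes }
    struct Attributes: Decodable {
        let name: String
        let artistName: String
        let artwork: Artwork
        let playParams: PlayParams
    }
    struct Artwork: Decodable { let url: String }
    struct PlayParams: Decodable { let id: String }
    let results: Results
}

// Inside the dataTask closure, instead of the SwiftyJSON block:
if let data = data,
   let response = try? JSONDecoder().decode(SearchResponse.self, from: data) {
    for item in response.results.songs?.data ?? [] {
        let a = item.attributes
        songs.append(Song(id: a.playParams.id, name: a.name,
                          artistName: a.artistName, artworkURL: a.artwork.url))
    }
}
lock.signal()</code></pre>

<p>The upside is no third-party dependency and no force unwraps; the downside is a little more boilerplate up front.</p>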



<p>Now we need to make sure our function works. Head over to <strong>ContentView.swift</strong>. In the <code>.onAppear()</code> declaration, we need to make a very minor change to the statement where we print <code>AppleMusicAPI().fetchStorefrontID()</code>. All we need to do is replace <code>.fetchStorefrontID()</code> with <code>.searchAppleMusic(&quot;Taylor Swift&quot;)</code>. If all goes well, when we run the app, this will print an array of <code>Song</code> objects containing all the songs Apple Music returns when its catalog is searched for <strong>Taylor Swift</strong>. (Of course, you can replace this with any artist, album, song, or word of your choice).</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18441" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-12.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Run the app and observe the console. After a few seconds, you should see an output similar to the following:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18442" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-13.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>As you can see, we get a whole array of songs based on your search term, with each song&#x2019;s artist name, id, title, and a URL to its album artwork. Now that we know our code works, let&#x2019;s delete the entire <code>.onAppear()</code> declaration so we don&#x2019;t make unnecessary calls to the API when our app loads.</p>



<p>Now comes the second part of this section which is populating our search table view with the results.</p>



<h3 class="wp-block-heading">Populating the Table</h3>



<p>Go to <strong>SearchView.swift</strong>. We&#x2019;ll be making all the changes here to populate our search table. There are many changes we have to make so follow carefully.</p>



<p>First, we need to replace the <code>songs</code> array containing some temporary strings. Delete that line and replace it with the following line:</p>



<pre class="wp-block-code"><code>@State private var searchResults = [Song]()</code></pre>



<p>This creates a new empty array called <code>searchResults</code> whose elements are our <code>Song</code> type. We also mark it with the <code>@State</code> property wrapper so that any UI using this variable is updated whenever the array changes.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1155" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18443" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14.png 1920w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-14-50x30.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></figure>



<p>Now, as expected, you&#x2019;ll get some errors since we removed our old <code>songs</code> array. Let&#x2019;s start at the <code>ForEach</code> within our <code>List</code> SwiftUI component. The error displayed here is that we&#x2019;re still trying to iterate over a variable <code>songs</code> that doesn&#x2019;t exist. Replace the <code>ForEach</code> line with the following:</p>



<pre class="wp-block-code"><code>ForEach(searchResults, id:\.id) { song in
    ...</code></pre>



<p>There are three changes we&#x2019;ve made here.</p>



<ol><li>The first change is that we&#x2019;ve replaced <code>songs</code> with <code>searchResults</code>.</li><li>Since our <code>Song</code> type doesn&#x2019;t conform to the <code>Hashable</code> protocol, we can&#x2019;t use <code>\.self</code> as an <code>id</code>. Good news though! Our <code>Song</code> objects have an <code>id</code> property that identifies each element, which is why we replace <code>\.self</code> with <code>\.id</code>.</li><li>Finally, and this is really minor, we rename the loop variable <code>songTitle</code> to <code>song</code>. <code>songTitle</code> referred to the strings in our old array, which isn&#x2019;t representative of the array we now iterate over.</li></ol>
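<p>As an aside, assuming <code>Song</code> is a struct you control, another option is to conform it to SwiftUI&#x2019;s <code>Identifiable</code> protocol, which lets <code>ForEach</code> infer the <code>id</code> automatically. A quick sketch (this is an alternative, not something the tutorial requires):</p>

<pre class="wp-block-code"><code>// Because Song already exposes an `id` property,
// the conformance needs no extra code.
extension Song: Identifiable {}

// ForEach then no longer needs an explicit key path:
// ForEach(searchResults) { song in ... }</code></pre>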



<p>While this error is removed, we now have errors wherever we used the variable <code>songTitle</code>. This is a simple fix as we replace <code>songTitle</code> with <code>song.name</code>. Modify the code after the <code>Image</code> object within your <code>HStack</code> to look like the following:</p>



<pre class="wp-block-code"><code>VStack(alignment: .leading) {
    // 1
    Text(song.name)
        .font(.headline)
    // 2
    Text(song.artistName)
        .font(.caption)
        .foregroundColor(.secondary)
}
Spacer()
Button(action: {
    // 3
    print(&quot;Playing \(song.name)&quot;)
}) {
    Image(systemName: &quot;play.fill&quot;)
        .foregroundColor(.pink)
}</code></pre>



<ol><li>The first change we made was to replace <code>songTitle</code> with <code>song.name</code>. This uses the name of the song as the string value for this <code>Text</code> object.</li><li>The second change was to replace the <strong>&#x201C;Artist Name&#x201D;</strong> placeholder with the actual artist&#x2019;s name. Just like <code>song.name</code>, we use <code>song.artistName</code> to populate this <code>Text</code> object.</li><li>Finally, the last change was a slight modification to our <code>Button</code> object, replacing <code>songTitle</code> with <code>song.name</code> just like in Step 1.</li></ol>



<p>Now, you can build and run the app. Everything should compile and work as normal but there&#x2019;s a catch. You&#x2019;ll notice that when you search for a song and press enter, nothing happens. This is because we still haven&#x2019;t called the <code>searchAppleMusic</code> function from our <code>AppleMusicAPI()</code> class. To do this, we need to make some changes to our <code>TextField</code>. Before this, at the top of the file underneath <code>import SwiftUI</code>, add the line <code>import StoreKit</code>.</p>



<p>Now, make the following changes to your <code>TextField</code>:</p>



<pre class="wp-block-code"><code>TextField(&quot;Search Songs&quot;, text: $searchText, onCommit: {
    // 1
    UIApplication.shared.resignFirstResponder()
    if self.searchText.isEmpty {
        // 2
        self.searchResults = []
    } else {
        // 3
        SKCloudServiceController.requestAuthorization { (status) in
            if status == .authorized {
                // 4
                self.searchResults = AppleMusicAPI().searchAppleMusic(self.searchText)
            }
        }
    }
})
.textFieldStyle(RoundedBorderTextFieldStyle())
.padding(.horizontal, 16)
.accentColor(.pink)</code></pre>



<p>We&#x2019;re at the last stretch here, and this code shouldn&#x2019;t be too complicated to read. Here&#x2019;s the gist of what it does:</p>



<ol><li>First, we&#x2019;d like to dismiss the keyboard when the user presses <strong>Search</strong> on the keyboard. We do this by asking the current first responder (the text field) to resign.</li><li>Next, we&#x2019;d like to make sure that if the user doesn&#x2019;t enter anything, our <code>searchResults</code> array is empty. This is why we set it to an empty array when <code>searchText</code> is empty.</li><li>If the <code>searchText</code> variable is not empty, we&#x2019;d like to call our <code>searchAppleMusic</code> function. Before we do, we make sure that <code>SKCloudServiceController</code> has the necessary authorization to access the user&#x2019;s media library. If the user authorizes this access, we can continue to the next line of code.</li><li>Finally, we set our <code>searchResults</code> array equal to the result of <code>AppleMusicAPI().searchAppleMusic(self.searchText)</code>. This passes the search term to our <code>searchAppleMusic</code> function and updates our <code>searchResults</code> array with whatever Apple Music returns!</li></ol>
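<p>One caveat worth flagging: <code>requestAuthorization</code> may invoke its completion handler off the main thread, and <code>@State</code> should only be mutated on the main thread. A more defensive sketch of step 3&#x2013;4 (same names as above, wrapped in a main-queue dispatch) could look like this:</p>

<pre class="wp-block-code"><code>SKCloudServiceController.requestAuthorization { (status) in
    guard status == .authorized else { return }
    // The blocking search still runs on the callback&#x2019;s thread...
    let results = AppleMusicAPI().searchAppleMusic(self.searchText)
    DispatchQueue.main.async {
        // ...but the UI state is updated on the main queue.
        self.searchResults = results
    }
}</code></pre>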



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18444" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-16.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>We&#x2019;re finally ready to run our app. Build and run the app and head over to the Search page. Enter any term, wait a couple of seconds as your app makes the network requests you&#x2019;ve implemented, and watch with delight as the table view is populated with the results of your search term.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="689" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18445" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17.png 1920w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-600x215.png 600w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-1024x367.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-200x72.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-768x276.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-1536x551.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-1680x603.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-1240x445.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-860x309.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-680x244.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-400x144.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-17-50x18.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></figure>



<p>Congratulations! You&#x2019;ve mastered the basics of the Apple Music API! As you can see, it&#x2019;s very repetitive, in that the steps can be summed up as the following:</p>



<ol><li>Make a network call to the Apple Music API</li><li>Parse the JSON to see where the data you need can be found</li><li>Create an object and populate it with the data you&#x2019;ve parsed</li></ol>



<p>For the remainder of the tutorial, we&#x2019;ll be focusing on the underappreciated <code>MediaPlayer</code> framework and its very important object, <code>MPMusicPlayerController</code>. Also, don&#x2019;t worry about the album artwork not displaying for now. I&#x2019;ll show you how we can use <strong>Swift Package Manager</strong> to successfully load and display images from a URL.</p>



<h2 class="wp-block-heading">Implementing Music Play Back</h2>



<p>Get excited because now we&#x2019;ll be looking at the <code>MediaPlayer</code> framework and how to access and control your device&#x2019;s music player. Essentially, the <code>MediaPlayer</code> framework is a part of MusicKit and lets you control playback of the user&#x2019;s media from your app.</p>



<p>To play content using this framework, we have to instantiate one of the built-in <code>MPMusicPlayerController</code> objects. There are two types. Here is a brief description of what they are:</p>



<ol><li><strong>System Player</strong>: This media player is directly linked to the Music app on your device. If you choose this player, then when a user taps a song, it will open the Music app and start playing from there.</li><li><strong>Application Player</strong>: This media player is built directly into your app, so when a user taps a song, they won&#x2019;t be transported to the Music app; all the playback functionality happens inside your app. Of course, this means that you&#x2019;ll have to add play/pause and skip/rewind functionality yourself. We&#x2019;ll be using this type of player for our app.</li></ol>
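<p>Both players are obtained from the same <code>MPMusicPlayerController</code> class and expose the same playback API; only the ownership of playback differs. The two options above look like this in code:</p>

<pre class="wp-block-code"><code>import MediaPlayer

// 1. System player: playback is handed to the Music app and
//    continues even after your app terminates.
let systemPlayer = MPMusicPlayerController.systemMusicPlayer

// 2. Application player: playback lives inside your app and
//    stops when your app is terminated. This is the one we use.
let appPlayer = MPMusicPlayerController.applicationMusicPlayer</code></pre>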



<p>Let&#x2019;s add the application player to our app. We want to modify this object from both screens of our app: in our <strong>Player View</strong>, we want to control playback of the currently playing song, and in our <strong>Search View</strong>, we want to be able to play songs directly from that page. As such, we&#x2019;ll be using the <code>@State</code> and <code>@Binding</code> property wrappers to help with data flow.</p>



<p>Head over to <strong>ContentView.swift</strong>. At the top of the file, type <code>import MediaPlayer</code> underneath <code>import StoreKit</code>. Next, add the following line where you declare your <code>selection</code> variable:</p>



<pre class="wp-block-code"><code>@State private var musicPlayer = MPMusicPlayerController.applicationMusicPlayer</code></pre>



<p>We&#x2019;re creating a variable called <code>musicPlayer</code> and assigning it the <code>applicationMusicPlayer</code> instance we discussed above. We&#x2019;re also adding the <code>@State</code> property wrapper so we can pass it to other views. Let&#x2019;s do that right now!</p>



<p>Go to <strong>PlayerView.swift</strong> and, at the top of the file, import the <code>MediaPlayer</code> framework with:</p>



<pre class="wp-block-code"><code>import MediaPlayer</code></pre>



<p>Next, at the top of the <code>struct</code>, add the following line:</p>



<pre class="wp-block-code"><code>@Binding var musicPlayer: MPMusicPlayerController</code></pre>



<p>This creates a <code>Binding</code> variable for our <strong>Player View</strong>. Since we will be binding this variable to the <code>musicPlayer</code> we created in <strong>ContentView.swift</strong>, anytime we make a change in <strong>Player View</strong> to this object, it will be reflected in <strong>Content View</strong>.</p>



<p>Since we can&#x2019;t pass this binding variable to our <code>PlayerView_Previews</code> and SwiftUI previews have no support for MusicKit, there&#x2019;s no point in having this in our file anymore. Delete the entire <code>PlayerView_Previews</code> structure from this file.</p>



<p>Let&#x2019;s do the same thing in <strong>SearchView.swift</strong>. You should already have the <code>MediaPlayer</code> framework implemented. Just like before, add the following line to the top of your <code>SearchView</code> structure below the initialization of the <code>searchResults</code> array.</p>



<pre class="wp-block-code"><code>@Binding var musicPlayer: MPMusicPlayerController</code></pre>



<p>Similar to <strong>PlayerView.swift</strong>, delete the entire <code>SearchView_Previews</code> structure from this file.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18446" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-20.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Now, head back to <strong>ContentView.swift</strong>. You should see some errors for <code>PlayerView()</code> and <code>SearchView()</code>. This is simply because we haven&#x2019;t added the argument for the parameter <code>musicPlayer</code> when we call it. You can fix this by changing <code>PlayerView()</code> to <code>PlayerView(musicPlayer: self.$musicPlayer)</code> and <code>SearchView()</code> to <code>SearchView(musicPlayer: self.$musicPlayer)</code>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18447" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-21.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Build the app! It should compile without errors. We now have a built-in media player that is accessible to all the views in our app! Now when we implement the play and pause methods, we can easily update our player and reflect those changes across all the views in our app!</p>



<h3 class="wp-block-heading">Playing Music</h3>



<p>When our user taps a song in the search table, we&#x2019;d like our app to play the selected song. This means we have to add a song to our <code>musicPlayer</code>&#x2019;s queue and tell the queue to play. Watch how we implement this in two lines!</p>



<p>Go to <strong>SearchView.swift</strong>. Find the <code>Button</code> structure that should be inside the <code>List</code> structure. As of right now, it only prints the name of the song it is playing to the console. Delete that line and replace it with the following:</p>



<pre class="wp-block-code"><code>self.musicPlayer.setQueue(with: [song.id])
self.musicPlayer.play()</code></pre>



<p>That&#x2019;s it! In the first line, we set the queue of our <code>musicPlayer</code>. The queue in this case is an array of strings containing the store IDs of the songs. This is why the <code>MediaPlayer</code> framework and the Apple Music API work so harmoniously together: the framework automatically resolves which song to play from the ID given. The final line simply instructs our <code>musicPlayer</code> to play whatever is in the queue!</p>



<p>Build and run the app! You should be able to search for a song, and when you tap on it, your music player should start playing the song!</p>
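<p>Since <code>setQueue(with:)</code> takes an array of store IDs, nothing stops you from queueing more than one song. For example, assuming <code>searchResults</code> is in scope as in our <strong>SearchView.swift</strong>, a sketch like the following would queue every search result, with playback starting at the first ID in the array:</p>

<pre class="wp-block-code"><code>// Queue all results instead of just the tapped song.
self.musicPlayer.setQueue(with: self.searchResults.map { $0.id })
self.musicPlayer.play()</code></pre>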



<p>If you&#x2019;ve got everything working, congratulations! This is one of the most underappreciated, little-known functionalities you can build using Swift! We&#x2019;re not done yet though! While we can play music, we&#x2019;d also like to control the playback state of the song, and we want to make sure that the <strong>Player View</strong> shows us the name of the song and the artist&#x2019;s name. This can be achieved relatively simply as well!</p>



<p>Go to <strong>PlayerView.swift</strong>. Let&#x2019;s start at the top of the file and make the respective changes as we work our way down!</p>



<p>Find the <code>VStack</code> structure where we define our <code>Text</code> objects that will contain the name of the song and the artist&#x2019;s name. Replace it with the following:</p>



<pre class="wp-block-code"><code>VStack(spacing: 8) {
    Text(self.musicPlayer.nowPlayingItem?.title ?? &quot;Not Playing&quot;)
        .font(Font.system(.title).bold())
    Text(self.musicPlayer.nowPlayingItem?.artist ?? &quot;&quot;)
        .font(.system(.headline))
}</code></pre>



<p>The changes we&#x2019;ve made here replace &#x201C;Song Title&#x201D; and &#x201C;Artist Name&#x201D; with the actual name of the currently playing song and the artist who performs it. If no song is playing, we leave the artist&#x2019;s name empty and set the title to <strong>&#x201C;Not Playing&#x201D;</strong>.</p>



<p>Before we go further, I&#x2019;d like to explain a little bit about this <code>.nowPlayingItem</code> object we&#x2019;re calling. It is of type <code>MPMediaItem</code>, provided by the <code>MediaPlayer</code> framework, and it bundles the collection of properties that represents what you could call a &#x201C;song&#x201D; in a media library. More specifically, an <code>MPMediaItem</code> can be any playable object with song-like metadata, such as a song or a podcast episode. The properties we&#x2019;re using here are <code>title</code>, which is the name of the song, and <code>artist</code>, which is the name of the artist. There are many more properties you can use, which can be found <a href="https://developer.apple.com/documentation/mediaplayer/mpmediaitem?ref=appcoda.com" class="rank-math-link">here</a>.</p>
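<p>To give a feel for the rest of the metadata, here&#x2019;s a small sketch reading a few more <code>MPMediaItem</code> properties from the player (the string properties are optional, while the duration is a plain <code>TimeInterval</code>):</p>

<pre class="wp-block-code"><code>if let item = self.musicPlayer.nowPlayingItem {
    let album = item.albumTitle ?? &quot;Unknown Album&quot;
    let length = item.playbackDuration // in seconds
    print(&quot;\(album) - \(length)s&quot;)
}</code></pre>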



<p>Build and run your app. You can see that when you start to play a song, the metadata is displayed where it&#x2019;s supposed to be!</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1853" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18448" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24.png 1920w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-311x300.png 311w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-1024x988.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-200x193.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-768x741.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-1536x1482.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-1680x1621.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-1240x1197.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-860x830.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-680x656.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-400x386.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-24-50x48.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></figure>



<p>Now, let&#x2019;s add some play and pause functionality. Scroll down in <strong>PlayerView.swift</strong> and go to the <code>Button</code> object which currently prints <strong>&#x201C;Pause&#x201D;</strong> to the console. Delete this line and replace it with the following:</p>



<pre class="wp-block-code"><code>if self.musicPlayer.playbackState == .paused || self.musicPlayer.playbackState == .stopped {
    self.musicPlayer.play()
} else {
    self.musicPlayer.pause()
}</code></pre>



<p>Our <code>musicPlayer</code> has a property called <code>playbackState</code> of type <code>MPMusicPlaybackState</code>. This property tracks whether the music player is playing a song, paused, or stopped. If the <code>playbackState</code> is equal to <code>.paused</code> or <code>.stopped</code>, then nothing is playing, and when the user taps this button we&#x2019;d like playback to start; this is why we call <code>self.musicPlayer.play()</code>. Similarly, in the else clause we call <code>self.musicPlayer.pause()</code> because we&#x2019;d like to pause the currently playing song.</p>
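<p>For context, <code>MPMusicPlaybackState</code> has more cases than the two we branch on. The helper below is my own sketch, not part of the tutorial&#x2019;s code, but it expresses the same decision as a <code>switch</code> and names the remaining cases:</p>

<pre class="wp-block-code"><code>import MediaPlayer

// Sketch: should the play/pause button start playback for a given state?
// Besides .playing, .paused, and .stopped, the enum also includes
// .interrupted, .seekingForward, and .seekingBackward.
func shouldStartPlayback(for state: MPMusicPlaybackState) -> Bool {
    switch state {
    case .paused, .stopped:
        return true   // nothing audible, so the button should play
    default:
        return false  // already playing (or seeking), so the button should pause
    }
}</code></pre>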



<p>You can build and run the app on your device. You should see that when you click on the pause button, the music pauses and when you press again, the music resumes. However, the icon of the button is not updating. This can be done by tracking the playback state of the currently playing item!</p>



<p>At the top of <strong>PlayerView.swift</strong>, underneath where you declared <code>musicPlayer</code>, type the following line:</p>



<pre class="wp-block-code"><code>@State private var isPlaying = false</code></pre>



<p>This is a Boolean variable marked with the <code>@State</code> property wrapper that will be updated based on the playback state of the currently playing song. First, go to the <code>Button</code> object we modified above and modify its action to look like the following:</p>



<pre class="wp-block-code"><code>if self.musicPlayer.playbackState == .paused || self.musicPlayer.playbackState == .stopped {
    self.musicPlayer.play()
    // 1
    self.isPlaying = true
} else {
    self.musicPlayer.pause()
    // 2
    self.isPlaying = false
}</code></pre>



<p>We&#x2019;ve added two lines that are fairly self-explanatory. When we signal the <code>musicPlayer</code> to play, we set the <code>isPlaying</code> variable to <code>true</code>. When we pause the <code>musicPlayer</code>, we set the <code>isPlaying</code> variable to <code>false</code>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18449" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-26.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Next, we need to use the <code>isPlaying</code> variable to update the image used as the button&#x2019;s icon. This is quite simple. Underneath the button&#x2019;s action, where we declare the UI of the button, modify the <code>ZStack</code> object to look like this:</p>



<pre class="wp-block-code"><code>ZStack {
    Circle()
        .frame(width: 80, height: 80)
        .accentColor(.pink)
        .shadow(radius: 10)
    Image(systemName: self.isPlaying ? &quot;pause.fill&quot; : &quot;play.fill&quot;)
        .foregroundColor(.white)
        .font(.system(.title))
}</code></pre>



<p>The change we made here is to choose the <code>systemName</code> of the <code>Image</code> object based on whether <code>self.isPlaying</code> is true or false. If it is true, we use the <strong>pause.fill</strong> SF Symbol. If not, we use the <strong>play.fill</strong> symbol.</p>



<p>Last but not least, we need a way to update <code>self.isPlaying</code> if a change is made in <strong>Search View</strong>. For example, if our player is paused but we play a song from <strong>Search View</strong>, we&#x2019;d like to set <code>self.isPlaying</code> to true. We could use the <code>@Binding</code> property wrapper, but I&#x2019;d like to show you an easier alternative: the <code>onAppear()</code> modifier!</p>



<p>At the very bottom of <strong>PlayerView.swift</strong>, find the second-to-last <strong>}</strong> closing curly bracket. This is the bracket that closes our <code>GeometryReader</code>. Attach the <code>.onAppear()</code> modifier below it as shown:</p>



<pre class="wp-block-code"><code>.onAppear() {
    if self.musicPlayer.playbackState == .playing {
        self.isPlaying = true
    } else {
        self.isPlaying = false
    }
}</code></pre>



<p>What are we doing in this modifier? Well, remember that our <code>musicPlayer</code> is already bound to all the views in our app, so any update to this object is immediately reflected in every view. When we play a song from <strong>Search View</strong>, the <code>musicPlayer</code> sets its <code>playbackState</code> to <code>.playing</code>. This is why, in <code>.onAppear()</code>, we check whether the <code>playbackState</code> is in fact <code>.playing</code>; if so, we set <code>isPlaying</code> to true, and if not, we set it to false!</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18450" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-28.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>That&#x2019;s all for playing and pausing! Build and run your app! You should see that the play/pause button accurately changes based on whether the song is currently playing or is paused!</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1853" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18451" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29.png 1920w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-311x300.png 311w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-1024x988.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-200x193.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-768x741.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-1536x1482.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-1680x1621.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-1240x1197.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-860x830.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-680x656.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-400x386.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-29-50x48.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></figure>



<h2 class="wp-block-heading">Implementing Skip &amp; Rewind</h2>



<p>Now let&#x2019;s implement the logic for skip and rewind buttons. This won&#x2019;t take too long. Let&#x2019;s start with the skip button.</p>



<p>Navigate to <strong>PlayerView.swift</strong> and locate the <code>Button</code> object that we&#x2019;re using as a temporary skip button. It should print <strong>&#x201C;Skip&#x201D;</strong> to the console when tapped.</p>



<p>Replace the line <code>print(&quot;Skip&quot;)</code> with <code>self.musicPlayer.skipToNextItem()</code> and that&#x2019;s all! Build and run your app. When you press the skip button, the player jumps to the next song in the queue. Since our queue contains only one song, there is no next item, so the player simply stops!</p>
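<p>For reference, the finished skip button might look something like this. The label styling below (the <strong>forward.fill</strong> symbol and pink tint) is an assumption on my part, so keep whatever UI your button already has:</p>

<pre class="wp-block-code"><code>Button(action: {
    // Advance to the next item in the playback queue.
    self.musicPlayer.skipToNextItem()
}) {
    // Assumed styling; keep your existing button label.
    Image(systemName: &quot;forward.fill&quot;)
        .foregroundColor(.pink)
        .font(.system(.title))
}</code></pre>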



<p>You can see how <code>MediaPlayer</code> comes with a list of functions that make it easy for us to control playback! With one line of code, we were able to implement a &#x201C;skip song&#x201D; functionality into our music player app. The &#x201C;rewind&#x201D; button is a little more challenging.</p>



<p>Think about your favorite music player. When you press the rewind button, what happens? If the song is more than 5 seconds through its playback, it skips to the beginning of the song. If the current playback is still within the first 5 seconds, then it goes to the previous song. With a simple <code>if-else</code> statement, watch how we can add the same functionality to our app.</p>



<p>Scroll back up to the <code>Button</code> object that currently prints <strong>&#x201C;Rewind&#x201D;</strong> to the console. Delete that line and replace it with the following:</p>



<pre class="wp-block-code"><code>if self.musicPlayer.currentPlaybackTime &lt; 5 {
    self.musicPlayer.skipToPreviousItem()
} else {
    self.musicPlayer.skipToBeginning()
}</code></pre>



<p>You can see that we use a new property here: <code>currentPlaybackTime</code>. This is the number of seconds of the current song that our player has played so far. If fewer than 5 seconds have been played, we call a new method: <code>skipToPreviousItem()</code>. This method, provided by the <code>MediaPlayer</code> framework, jumps back in the queue and starts playing the song before the current one. Since our queue currently contains only one song, this will simply restart the current song from the beginning.</p>



<p>If, on the other hand, <code>currentPlaybackTime</code> is greater than or equal to 5, we call <code>skipToBeginning()</code>, which asks our <code>musicPlayer</code> to start playing the current song from the beginning.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18452" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-31.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Build and run your app! It should be able to handle the skip and rewind buttons as you implemented! As you can see from this section, a lot of the methods we use to control playback are already implemented in the <code>MediaPlayer</code> framework. With a little research, you can continue to build upon all these features.</p>



<h2 class="wp-block-heading">Implementing Album Artwork</h2>



<p>Finally, let&#x2019;s replace the ugly &#x201C;A&#x201D; SF Symbol with the album artwork of the currently playing song. If you remember from earlier, our <code>Song</code> object has a property called <code>artworkURL</code> that contains the URL string for the album artwork.</p>



<p>Normally, we&#x2019;d have to write a lot of code to process the URL, fetch the image data, and cache the image so we wouldn&#x2019;t make redundant calls. Instead, I&#x2019;d like to show you a nifty Swift package called <a href="https://github.com/SDWebImage/SDWebImageSwiftUI?ref=appcoda.com" class="rank-math-link">SDWebImageSwiftUI</a>. This package provides a framework built for SwiftUI that helps with image loading, including async image loading, memory/disk caching, animated image playback, and more. If you ever need to load images from a URL in your apps, I highly recommend this framework since it automatically handles image loading and caching, thereby speeding up your app.</p>



<p>Here&#x2019;s how we can install it. We&#x2019;ll be using <strong>Swift Package Manager</strong> to link this framework with our project. In Xcode, go to the menu bar and click on File &gt; Swift Packages &gt; Add Package Dependency.</p>






<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="640" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-1024x640.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18453" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-1536x960.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-1680x1050.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32-50x31.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-32.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>A popup will show up and ask you to enter a package repository URL. The URL we will be using is <a href="https://github.com/SDWebImage/SDWebImageSwiftUI?ref=appcoda.com" class="rank-math-link">https://github.com/SDWebImage/SDWebImageSwiftUI</a>. Enter this URL and press <strong>Next</strong>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18454" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-33.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>After some loading, Xcode will ask you which version to use. Don&#x2019;t make any changes and simply press <strong>Next</strong>. Xcode will take some time to fetch the repository and create a package to add to your project. After a minute, Xcode will show a popup asking you to confirm the package products and target. Make sure it looks like the screenshot below.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18455" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-34.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>The important thing to check here is that <strong>MusicPlayer</strong> is selected as the target. Click <strong>Finish</strong> and Xcode will take you to the main project page, which shows all your Swift Packages. This means the package has been successfully added to your project and you can start using the framework!</p>



<p>Go to <strong>SearchView.swift</strong> and at the top of the file, type <code>import SDWebImageSwiftUI</code>. This gives us access to all the objects and methods in this framework.</p>



<p>In our <strong>List</strong> object, locate the <strong>Image</strong> object that is currently coded as such:</p>



<pre class="wp-block-code"><code>Image(systemName: &quot;rectangle.stack.fill&quot;)
    .resizable()
    .frame(width: 40, height: 40)
    .cornerRadius(5)
    .shadow(radius: 2)</code></pre>



<p>Making sure that all its modifiers are still in place, replace it with the following:</p>



<pre class="wp-block-code"><code>WebImage(url: URL(string: song.artworkURL.replacingOccurrences(of: &quot;{w}&quot;, with: &quot;80&quot;).replacingOccurrences(of: &quot;{h}&quot;, with: &quot;80&quot;)))
    .resizable()
    .frame(width: 40, height: 40)
    .cornerRadius(5)
    .shadow(radius: 2)</code></pre>



<p><code>SDWebImageSwiftUI</code> provides an object called <code>WebImage</code>. It is very similar to our <code>Image</code> object but has some really neat additions. For one, it takes an argument called <code>url</code> which, when provided, automatically loads the image from that URL.</p>



<p>We create a <code>URL</code> object from the string <code>song.artworkURL</code>. Notice that, once again, we perform some string manipulation here. If you look at the URL, you&#x2019;ll see that it contains two placeholders: <strong>{w}</strong> and <strong>{h}</strong>, which stand for width and height, respectively. The Apple Music API provides these placeholders on purpose so that we can request the image size best suited to our needs. I chose a width and height of 80, since this is 2x the size of our <code>WebImage</code> frame, so the loaded image will look sharp. Therefore, we call <code>replacingOccurrences</code> on both <strong>{w}</strong> and <strong>{h}</strong> and replace each with 80.</p>
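<p>Since we&#x2019;ll repeat this substitution later in <strong>Player View</strong>, it could be factored into a small helper. The function below, <code>artworkURL(from:pixelSize:)</code>, is a hypothetical name of my own and is not defined anywhere in the tutorial&#x2019;s project:</p>

<pre class="wp-block-code"><code>import Foundation

// Hypothetical helper: fill in the {w}/{h} placeholders that the
// Apple Music API leaves in its artwork URL templates.
func artworkURL(from template: String, pixelSize: Int) -> URL? {
    let resolved = template
        .replacingOccurrences(of: &quot;{w}&quot;, with: &quot;\(pixelSize)&quot;)
        .replacingOccurrences(of: &quot;{h}&quot;, with: &quot;\(pixelSize)&quot;)
    return URL(string: resolved)
}

// Example with a made-up template string:
// artworkURL(from: &quot;https://example.com/{w}x{h}bb.jpeg&quot;, pixelSize: 80)
// yields https://example.com/80x80bb.jpeg</code></pre>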



<p>Everything else should remain the same! Make sure your code looks like the screenshot below:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18456" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-36.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Build and run your app! Now, when you search for a song, you can actually see the album cover in the search results. The best part too is that <code>SDWebImageSwiftUI</code> caches the image so if you quit the app and search for the same term again, you can notice that there is a huge speed increase in loading the image.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1227" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18457" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37.png 1920w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-469x300.png 469w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-1024x654.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-200x128.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-768x491.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-1536x982.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-1680x1074.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-1240x792.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-860x550.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-680x435.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-400x256.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-37-50x32.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></figure>



<p>Our last step is to change the album artwork in <strong>PlayerView.swift</strong>. Unfortunately, this is not as simple. In <strong>SearchView.swift</strong> we had a list of <code>Song</code> objects, each containing a URL to the album cover. Since we didn&#x2019;t pass this list along, our <strong>Player View</strong> can&#x2019;t access the album artwork URL. Furthermore, while <code>MediaPlayer</code> provides many built-in objects and methods, it does not expose a URL to a particular song&#x2019;s album artwork.</p>



<p>In order to fix this, we need to rely on <code>State</code> and <code>Binding</code> once again. Head to <strong>ContentView.swift</strong> and at the top of the file, underneath where you created your <code>musicPlayer</code> variable, add the following line:</p>



<pre class="wp-block-code"><code>@State private var currentSong = Song(id: &quot;&quot;, name: &quot;&quot;, artistName: &quot;&quot;, artworkURL: &quot;&quot;)</code></pre>



<p>We are creating a new shared object called <code>currentSong</code> of type <code>Song</code> that will be updated whenever we click on a new song from our <strong>Search View</strong>.</p>



<p>On that note, go to <strong>SearchView.swift</strong> and add the following <code>Binding</code> property at the top of the file, below where you declared your <code>musicPlayer</code> variable.</p>



<pre class="wp-block-code"><code>@Binding var currentSong: Song</code></pre>



<p>Scroll to the bottom of the file and you&#x2019;ll find the <code>Button</code> object that currently sets the queue of the <code>musicPlayer</code> and tells it to start playing. Here, we want to set <code>currentSong</code> to the song that was tapped.</p>



<p>Modify the action of the <code>Button</code> as shown.</p>



<pre class="wp-block-code"><code>Button(action: {
    self.currentSong = song
    self.musicPlayer.setQueue(with: [song.id])
    self.musicPlayer.play()
})</code></pre>



<p>All we&#x2019;re adding here is setting the <code>currentSong</code> to the <code>song</code> that was tapped on.</p>



<p>Let&#x2019;s do the same for <strong>PlayerView.swift</strong>. Navigate to that file and at the very top, add <code>import SDWebImageSwiftUI</code>. This will let us use the <code>WebImage</code> object as we did above.</p>



<p>Next, add the <code>Binding</code> variable <code>currentSong</code> to the structure, right below where we declared the <code>isPlaying</code> variable.</p>



<pre class="wp-block-code"><code>@Binding var currentSong: Song</code></pre>



<p>Last but not least, let&#x2019;s replace our current <code>Image</code> object which holds the SF Symbol <strong>a.square</strong> with the following code. Notice that all the modifiers remain the same.</p>



<pre class="wp-block-code"><code>WebImage(url: URL(string: self.currentSong.artworkURL.replacingOccurrences(of: &quot;{w}&quot;, with: &quot;\(Int(geometry.size.width - 24) * 2)&quot;).replacingOccurrences(of: &quot;{h}&quot;, with: &quot;\(Int(geometry.size.width - 24) * 2)&quot;)))
    .resizable()
    .frame(width: geometry.size.width - 24, height: geometry.size.width - 24)
    .cornerRadius(20)
    .shadow(radius: 10)</code></pre>



<p>Just like before, we pass a URL to <code>WebImage</code>, only this time the URL string is quite lengthy. As before, we replace <strong>{w}</strong> and <strong>{h}</strong> with the width and height we want. It&#x2019;s not as simple this time, however, since our width and height depend on the width of the device. This is why we replace <strong>{w}</strong> and <strong>{h}</strong> with <code>geometry.size.width - 24</code>. This value is of type <strong>CGFloat</strong>, so to convert it to an integer we wrap it in <code>Int()</code>. Finally, we multiply both dimensions by 2 to get a sharper image.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-1024x616.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18458" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-40.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Last but not least, we still need to pass the <code>currentSong</code> from our <strong>Content View</strong> to both the <strong>Player View</strong> and <strong>Search View</strong>. Head over to <strong>ContentView.swift</strong> and make the following changes to where you declare both <code>PlayerView()</code> and <code>SearchView()</code>.</p>



<pre class="wp-block-code"><code>PlayerView(musicPlayer: self.$musicPlayer, currentSong: self.$currentSong)
...
SearchView(musicPlayer: self.$musicPlayer, currentSong: self.$currentSong)</code></pre>



<p>With these changes, we are passing the <code>currentSong</code> that was created in <code>ContentView.swift</code> down to the remaining views.</p>



<p>Congratulations! This is the end of this lengthy tutorial! Build and run your app and watch with delight as it makes calls to the Apple Music API, parses JSON to display a list of songs, uses the device&#x2019;s media player capabilities to play songs and control playback, and harnesses an external library to load images from remote sources, all using SwiftUI, a technology that is only a year old!</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="917" src="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42.png" alt="Using MusicKit and Apple Music API to Build a Music Player" class="wp-image-18459" srcset="https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42.png 1920w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-600x287.png 600w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-1024x489.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-200x96.png 200w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-768x367.png 768w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-1536x734.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-1680x802.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-1240x592.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-860x411.png 860w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-680x325.png 680w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-400x191.png 400w, https://www.appcoda.com/content/images/wordpress/2020/07/musickit-player-42-50x24.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></figure>



<h2 class="wp-block-heading">Conclusion</h2>



<p>If you&#x2019;ve reached the end successfully, congratulations! This two-part tutorial was not your average intermediate tutorial. Despite its length, I hope you learned something new from all the technologies we&#x2019;ve used. I&#x2019;ve compiled a list of resources below that can help you if you choose to continue building upon this app! You can <a href="https://github.com/appcoda/MusicKitPlayer?ref=appcoda.com" class="rank-math-link">download the completed project</a> on GitHub.</p>



<p>For reference, you can check out the following resources related to this tutorial:</p>



<p><strong>Apple Documentation and Videos</strong></p>



<ul><li><a href="https://developer.apple.com/documentation/applemusicapi/?ref=appcoda.com" class="rank-math-link">Apple Music API Documentation</a></li><li><a href="https://developer.apple.com/documentation/storekit/?ref=appcoda.com" class="rank-math-link">StoreKit Documentation</a></li><li><a href="https://developer.apple.com/documentation/mediaplayer/?ref=appcoda.com" class="rank-math-link">MediaPlayer Documentation</a></li><li><a href="https://developer.apple.com/musickit/?ref=appcoda.com" class="rank-math-link">MusicKit on Android and the Web</a></li><li><a href="https://developer.apple.com/videos/play/wwdc2017/502/?ref=appcoda.com" class="rank-math-link">WWDC 2017 Video: Introducing MusicKit</a></li><li><a href="https://developer.apple.com/videos/play/wwdc2018/506/?ref=appcoda.com" class="rank-math-link">WWDC 2018 Video: MusicKit on the Web</a></li></ul>



<p><strong>More on the Technology</strong></p>



<ul><li><a href="https://www.json.org/json-en.html?ref=appcoda.com" class="rank-math-link">Introducing JSON</a></li><li><a href="https://www.appcoda.com/json-codable-swift/" class="rank-math-link">Working with JSON and Codable in Swift 5</a></li><li><a href="https://www.appcoda.com/learnswiftui/swiftui-state.html" class="rank-math-link">More on @State and @Binding in SwiftUI</a></li></ul>



<p>As always, if you have any doubts, feel free to leave a comment below or reach me on Twitter <a href="https://twitter.com/HeySaiK?ref=appcoda.com" class="rank-math-link">@HeySaiK</a>. I can&#x2019;t wait to see what you&#x2019;ll make with MusicKit!</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Introduction to MusicKit: Building a Music Player in SwiftUI]]></title><description><![CDATA[<!--kg-card-begin: html-->

<p>At WWDC 2017, Apple announced <code><a href="https://developer.apple.com/musickit/?ref=appcoda.com">MusicKit</a></code>, a framework to help developers build apps that allow users to play Apple Music and their local music library. Unlike most frameworks like <code>ARKit</code> or <code>CoreML</code>, <code>MusicKit</code> cannot be added to your code with a simple <code>import</code> function. Rather, it&#x2019;s a combination</p>]]></description><link>https://www.appcoda.com/musickit-music-player-swiftui/</link><guid isPermaLink="false">66612a0f166d3c03cf0114c4</guid><category><![CDATA[iOS Programming]]></category><category><![CDATA[Swift]]></category><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Fri, 03 Apr 2020 17:38:51 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2020/04/uy-9dyz8ppm.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->

<img src="https://www.appcoda.com/content/images/wordpress/2020/04/uy-9dyz8ppm.jpg" alt="Introduction to MusicKit: Building a Music Player in SwiftUI"><p>At WWDC 2017, Apple announced <code><a href="https://developer.apple.com/musickit/?ref=appcoda.com">MusicKit</a></code>, a framework to help developers build apps that allow users to play Apple Music and their local music library. Unlike most frameworks, such as <code>ARKit</code> or <code>CoreML</code>, <code>MusicKit</code> cannot be added to your code with a simple <code>import</code> statement. Rather, it&#x2019;s a combination of the Apple Music API, the <code>StoreKit</code> framework, the <code>MediaPlayer</code> framework, and some other web-based technologies.</p>



<p>You may be wondering why this framework is so difficult to integrate into your applications compared to other APIs and frameworks. This is because MusicKit was built to work not only on iOS devices, but in Android and web applications as well. Just like <code>CloudKit</code> or <code>Sign in with Apple</code>, you can access a user&#x2019;s Apple Music account through Android and web apps. However, for the purposes of this tutorial, we will focus on building a music player for iOS using SwiftUI.</p>



<p><em><strong>Editor&#x2019;s note</strong>: This is a two part series of our MusicKit tutorials. This is the first part and the second one will come next week.</em> If you are new to SwiftUI, you can check out <a href="https://www.appcoda.com/swiftui-first-look/">this tutorial</a> and <a href="https://www.appcoda.com/learnswiftui/swiftui-basics.html">this sample chapter</a> of our <a href="https://www.appcoda.com/swiftui">Mastering SwiftUI</a> book.</p>



<h2 class="wp-block-heading">The MusicKit Demo App</h2>



<p>In this tutorial series, we&#x2019;ll be building a very simple music player that searches using the Apple Music API for a song, grabs the relevant details of the song like artist name and album cover, and plays it on the device with the help of <code>MediaPlayer</code>&#x2019;s <code>MPMusicPlayerController</code>. We&#x2019;ll also see how to use the <code>MediaPlayer</code> framework to play, rewind, and skip songs. </p>



<p>Before we can do this, we first need to communicate with the Apple Music service to see if the user has a valid Apple Music subscription. This requires us to create a MusicKit identifier and a private key to sign our developer tokens, using Certificates, Identifiers &amp; Profiles in our Apple Developer Program account. Since this can be very daunting for a lot of iOS developers, a majority of this tutorial will focus on creating these keys and identifiers. The next part of this tutorial will focus more on the <code>MediaPlayer</code> framework.</p>



<p>Our final app will look like this:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="997" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-1024x997.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17919" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-1024x997.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-308x300.png 308w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-200x195.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-768x747.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-1536x1495.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-2048x1993.png 2048w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-1680x1635.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-1240x1207.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-860x837.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-680x662.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-400x389.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-1-50x49.png 50w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>This tutorial was made using Xcode 11.3 and Swift 5.1. An active <a href="https://www.apple.com/apple-music/?ref=appcoda.com">Apple Music</a> subscription will be needed in order to test the app. You may also need an active Apple Developer account in order to create some of the required identifiers and keys. You will also need to run your app on a real device because, as of this writing, the Simulator does not support Apple Music playback. We will also be running some Python code in order to generate a developer token. If you don&#x2019;t have <code>pip</code> installed, don&#x2019;t worry; I will explain how to install it later in the tutorial.</p>



<h2 class="wp-block-heading">Creating a MusicKit Identifier</h2>



<p>The first step in communicating with the Apple Music API is to have a valid MusicKit identifier in your Apple Developer account. This identifier will be linked to our app and will let the Apple Music service know that our app is valid to access its services. Since our identifier is linked to our app&#x2019;s Bundle Identifier, let&#x2019;s quickly create the Xcode project first.</p>



<p>Open Xcode and click on <strong>Create a new Xcode Project</strong>. Under <strong>iOS</strong>, make sure that <strong>Tabbed App</strong> is the option selected before clicking on the <strong>Next</strong> button.</p>



<p>Next, name the project <strong>MusicPlayer</strong>, but feel free to change the name if you like. Make sure that <strong>User Interface</strong> is set to <strong>SwiftUI</strong>. Finally, take note of the <strong>Bundle Identifier</strong> as displayed below. For example, my bundle identifier is &#x201C;com.appcoda.MusicPlayer&#x201D;, but you should use your own.</p>



<p>Click on <strong>Next</strong> and choose where you would like to save your project. That&#x2019;s it! You&#x2019;ve created the project and just for reference, you should see your Bundle Identifier again on your Project Dashboard.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="617" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-1024x617.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17920" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-1024x617.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-498x300.png 498w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-1536x925.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-2048x1233.png 2048w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-1680x1012.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-1240x747.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-860x518.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-4-50x30.png 50w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Now we can create our MusicKit Identifier. Head to your <a href="https://developer.apple.com/account">Apple Developer account</a>. Once you log in, you should be greeted with a screen that looks like this. Make sure you click on <strong>Certificates, Identifiers, &amp; Profiles</strong>.</p>



<p>Once the page loads, click on <strong>Identifiers</strong> on the left side of the page. You&#x2019;ll get a list of all the App IDs your Apple Developer account is associated with.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-1024x616.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17921" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-2048x1232.png 2048w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-6-50x30.png 50w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Now, let&#x2019;s create a new MusicKit Identifier. Click on the blue plus button next to the <strong>Identifiers</strong> title. A new page will load titled <strong>Register a New Identifier</strong>. Make sure that you select the <strong>Music IDs</strong> option before pressing <strong>Continue</strong>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-1024x616.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17922" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-2048x1232.png 2048w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-7-50x30.png 50w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Next, you&#x2019;ll be asked to quickly fill out a description and identifier for your Music ID. You can see what I filled out below. For your identifier, make sure that it is in the following format: <code>music.&lt;YOUR-BUNDLE-ID&gt;</code>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-1024x616.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17923" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-2048x1232.png 2048w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-8-50x30.png 50w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Click <strong>Continue</strong> and make sure all your details are correct, click on <strong>Register</strong>.</p>



<p>That&#x2019;s it! If all worked out, you should see a list of all your Music IDs and one of them being the one you just created!</p>



<p>Next, we&#x2019;ll create a private key that is associated with the identifier we just created. This key can be used to sign a secret developer token that allows us to communicate with MusicKit.</p>



<h2 class="wp-block-heading">Creating a Private Key</h2>



<p>Generally speaking, private keys are important because they allow you to access and authenticate communication with some app services &#x2014; such as Push Notifications, MusicKit, and DeviceCheck. We&#x2019;ll need to create a private key in order to sign something called a <strong>developer token</strong>. This token will be used when sending requests to the Apple Music API in order to let them know that this request is coming from a verified developer. You&#x2019;ll use the private key to create the token in the form of a JSON web token (JWT). You can learn more about JWT <a href="https://jwt.io/introduction/?ref=appcoda.com">here</a>.</p>
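<p>Before we generate the real thing, it helps to see what the token looks like. The following sketch (in Python, like the script we&#x2019;ll use later, but with hypothetical function names and placeholder IDs) builds only the unsigned header and payload segments of a JWT; a real developer token appends a third segment, an ES256 signature computed with your <code>.p8</code> private key:</p>

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64URL: URL-safe Base64 with the '=' padding stripped."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def unsigned_developer_token(key_id: str, team_id: str,
                             issued_at: int, expiry: int) -> str:
    """Build the '<header>.<payload>' portion of a JWT.
    A real token has a third, ES256-signed segment appended."""
    header = {"alg": "ES256", "kid": key_id}        # kid = your Key ID
    payload = {"iss": team_id,                      # iss = your Team ID
               "iat": issued_at, "exp": expiry}     # Unix timestamps
    return b64url(json.dumps(header).encode()) + "." + \
           b64url(json.dumps(payload).encode())
```

<p>This is exactly the structure the Python script later in this tutorial produces; the <code>pyjwt</code> and <code>cryptography</code> packages handle the signing step that this sketch omits.</p>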



<p>Still in the <strong>Certificates, Identifiers, and Profiles</strong> page, click on the <strong>Keys</strong> tab to the left side of the page. You&#x2019;ll see a list of all the keys associated with your developer account. If you&#x2019;ve ever used Push Notifications in your apps, you&#x2019;ll find the key you used for that service.</p>



<p>Let&#x2019;s create a new key for our app. Just like before, click on the plus button next to <strong>Keys</strong>. Name your key (I&#x2019;m using &#x201C;AppCoda MusicPlayer&#x201D;) and select MusicKit from the list of capabilities shown below.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="463" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-1024x463.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17924" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-1024x463.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-600x272.png 600w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-200x91.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-768x348.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-1536x695.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-1680x760.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-1240x561.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-860x389.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-680x308.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-400x181.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11-50x23.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-11.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Before you select <strong>Continue</strong>, we&#x2019;ll need to configure this capability. Click on <strong>Configure</strong> and you&#x2019;ll be shown a page where you can associate this private key with the MusicKit Identifier we created earlier. Select the Music ID you created and then click <strong>Save</strong>.</p>



<p>You&#x2019;ll return to the previous page, where you can finally click on <strong>Continue</strong>. After a quick check to make sure everything has been entered correctly, click on <strong>Register</strong>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="343" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-1024x343.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17925" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-1024x343.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-600x201.png 600w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-200x67.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-768x257.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-1536x514.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-1680x563.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-1240x415.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-860x288.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-680x228.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-400x134.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13-50x17.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-13.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>If all goes well, you&#x2019;ll see a page like the one above asking you to download your key. Make sure you download the key and save it in a safe place as you will not be able to download your key again. Also, make sure you take note of your Key ID as we&#x2019;ll need it soon.</p>



<p>Now that we have the private key downloaded, we&#x2019;ll need to use it to create the developer token in the JWT format. However, if you look at the file you downloaded, you&#x2019;ll see that it&#x2019;s in a <code>.p8</code> file format and you probably can&#x2019;t open the file. Luckily, there&#x2019;s a way to view the private key from this file.</p>



<p>Open a new tab in your browser and head to <a href="https://filext.com/file-extension/P8?ref=appcoda.com">https://filext.com/file-extension/P8</a>. This website will read your <code>.p8</code> file and display the private key to you. Drop your file into the placeholder and wait for a couple seconds.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-1024x616.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17926" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-14.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>You can see that the private key is given to you. I&#x2019;ve hidden it from my screenshot above. You should also keep this key secret, since it is directly linked to your developer account. Copy the private key and save it; we&#x2019;ll also need it in a bit.</p>



<h3 class="wp-block-heading">Python Script</h3>



<p>Now comes the fun, but slightly complicated, part. You&#x2019;ll need to create the developer token using a Python script. So, first <a href="https://github.com/appcoda/MusicKitPlayer/blob/master/musictoken.py?ref=appcoda.com">download the script file here</a>. In order for the script to work, you&#x2019;ll need to install some Python packages. macOS already comes with Python installed, so you don&#x2019;t have to do any extra work there; all you need to do is install the packages.</p>



<p>First, let&#x2019;s download <code>pip</code>. If you&#x2019;re not familiar with Python, <code>pip</code> is a very simple and easy-to-use package management system used to install and manage software packages written in Python. You can think of it as CocoaPods or Carthage for Python. Open <strong>Terminal</strong>. Type the following command:</p>



<pre class="wp-block-code"><code>sudo easy_install pip</code></pre>



<p>After entering your password, <code>pip</code> will be installed. If you have <code>pip</code> installed already, you&#x2019;ll probably see the following screen, which means we can move on to the next part.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="716" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15-1024x716.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17927" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15-1024x716.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15-429x300.png 429w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15-200x140.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15-768x537.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15-1240x867.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15-860x601.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15-680x476.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15-400x280.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15-50x35.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-15.png 1364w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Now we need two packages in order for our script to work: <code>pyjwt</code> and <code>cryptography</code>. Together, they sign the sensitive information and encode it in the JWT format. Just like before, enter the following two commands in Terminal:</p>



<pre class="wp-block-code"><code>sudo pip install pyjwt
sudo pip install cryptography</code></pre>



<p>As long as you don&#x2019;t get any errors (usually in red text), you should be in the clear. We&#x2019;re almost done. Now we need to edit the script in order to populate it with 3 pieces of information. </p>



<ol><li>The MusicKit private key (copied from the website which displayed our <code>.p8</code> file)</li><li>The Key ID from the private key we created in the Apple Developer website</li><li>Your Apple ID Team ID</li></ol>



<p>You can find your Team ID by going to <a href="https://developer.apple.com/account?ref=appcoda.com">https://developer.apple.com/account</a> and clicking on the Membership tab to the left. </p>



<p>Now, open the script you downloaded earlier. You&#x2019;ll be able to see where to enter your own details in the script.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="616" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-1024x616.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17928" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-1024x616.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-499x300.png 499w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-1536x924.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-1680x1011.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-1240x746.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-860x517.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-19.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Some notes about modifying the script:</p>



<ul><li>Copy and paste your MusicKit private key exactly as it&#x2019;s displayed. This means there should be 4 lines between the <strong>BEGIN PRIVATE KEY</strong> and <strong>END PRIVATE KEY</strong> markers (which should also be included).</li><li>In case you get lost in all the strings we&#x2019;ve seen, your Key ID is a 10-character alphanumeric string.</li></ul>
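<p>For the curious, here is a rough sketch of what such a script assembles. The developer token is a JSON Web Token signed with the <strong>ES256</strong> algorithm: a header naming the algorithm and your Key ID, plus a claims payload carrying your Team ID as the issuer and an expiry date. The values below are illustrative placeholders (including the roughly six-month lifetime), and the actual signing step, which requires the <code>.p8</code> private key and a library such as PyJWT, is omitted since the downloaded script handles it for you.</p>



<pre class="wp-block-code"><code>import base64
import json
import time

# Hypothetical placeholder values -- substitute your own.
TEAM_ID = "ABCDE12345"  # your Apple Developer Team ID
KEY_ID = "XYZ9876543"   # the 10-character Key ID of your MusicKit key

def b64url(data):
    # Base64url-encode without padding, as the JWT format requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# The header names the signing algorithm and identifies the key.
header = {"alg": "ES256", "kid": KEY_ID}

# The claims name the issuer (your team) and bound the token's lifetime.
now = int(time.time())
payload = {"iss": TEAM_ID, "iat": now, "exp": now + 15777000}

# The string that actually gets signed with your .p8 key using ES256:
signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())</code></pre>



<p>Seeing the pieces laid out this way also makes debugging easier: if the API later rejects your token, the first things to check are the <code>kid</code>, the <code>iss</code>, and the expiry.</p>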



<p>When you&#x2019;re done entering all your information, save the script and head back to Terminal. Type the following command:</p>



<pre class="wp-block-code"><code>cd PATH/TO/WHERE/MUSICTOKEN.PY/IS/LOCATED</code></pre>



<p>Now, for the magic. Type the following command into Terminal:</p>



<pre class="wp-block-code"><code>python musictoken.py</code></pre>



<p>If you&#x2019;ve done everything right, you&#x2019;ll see your token and a sample <code>CURL</code> request. Your Terminal screen will look like mine below.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="716" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21-1024x716.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17929" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21-1024x716.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21-429x300.png 429w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21-200x140.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21-768x537.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21-1240x867.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21-860x601.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21-680x476.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21-400x280.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21-50x35.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-21.png 1364w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>If you face any errors, make sure you go back and see if you edited the script correctly. You may also want to make sure that you have the right Key ID, MusicKit private key, and Team ID. If none of these work, leave a comment below and I&#x2019;ll be happy to help.</p>



<p>If your Terminal screen looks like mine, congratulations! You&#x2019;ve successfully completed the hardest part of this tutorial. Copy your developer token and save it as we&#x2019;ll be using it in the next part of the tutorial. For the rest of the tutorial, let&#x2019;s go back to Swift and build the base layout of our app.</p>
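<p>Before moving on, it helps to see how the token is actually used. Every Apple Music API request simply carries the developer token in an <code>Authorization: Bearer</code> header, which is what the sample <code>CURL</code> request printed by the script does. The sketch below assembles the same kind of request in Python; the token string and the search parameters are placeholders, and the network call itself is left commented out.</p>



<pre class="wp-block-code"><code>import urllib.request

# Hypothetical placeholder -- paste the developer token printed by musictoken.py.
developer_token = "YOUR_DEVELOPER_TOKEN"

# Apple Music API requests authenticate with a Bearer token header.
req = urllib.request.Request(
    "https://api.music.apple.com/v1/catalog/us/search?term=hello&types=songs&limit=1",
    headers={"Authorization": "Bearer " + developer_token},
)

# Uncomment to actually perform the request:
# print(urllib.request.urlopen(req).status)</code></pre>



<p>If the request succeeds with an HTTP 200 and JSON results, your developer token is working.</p>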



<h2 class="wp-block-heading">Creating the UI of the Music Player app</h2>



<p>Open the Xcode project we created earlier in the tutorial. You can see that we already have our <code>Tab View</code> created for us in <code>ContentView.swift</code>. Let&#x2019;s create a new <strong>SwiftUI</strong> view for our player view. Go to <strong>File &gt; New &gt; File</strong> and make sure you have <strong>SwiftUI View</strong> selected as your preset. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="739" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22-1024x739.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17930" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22-1024x739.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22-416x300.png 416w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22-200x144.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22-768x554.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22-1240x895.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22-860x621.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22-680x491.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22-400x289.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22-50x36.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-22.png 1445w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Save this file as <code>PlayerView.swift</code>. You should see the basic <strong>SwiftUI</strong> template view.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="617" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-1024x617.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17931" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-1024x617.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-498x300.png 498w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-1536x925.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-1680x1012.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-1240x747.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-860x518.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-23.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Let&#x2019;s change this up. Command-click on the <code>Text</code> view and choose <strong>Embed in VStack</strong>. This will be the view that displays our song&#x2019;s title. Let&#x2019;s add another <code>Text</code> view inside the <strong>VStack</strong>. This can be the view that displays the artist&#x2019;s name. Update the string values of both <code>Text</code> views to match their new roles.</p>



<pre class="wp-block-code"><code>VStack {
    Text(&quot;Song Title&quot;)
    Text(&quot;Artist Name&quot;)
}</code></pre>



<p>In order to offer a clearer distinction between both <code>Text</code> views, I&#x2019;ll change the font and add some spacing between them. This is purely for design purposes. You can modify these values however you see fit. Here is the code:</p>



<pre class="wp-block-code"><code>VStack(spacing: 8) {
    Text(&quot;Song Title&quot;)
        .font(Font.system(.title).bold())
    Text(&quot;Artist Name&quot;)
        .font(.system(.headline))
}</code></pre>



<p>That&#x2019;s much better! The users of our app can clearly distinguish between our song&#x2019;s title and its artist. Now let&#x2019;s add an <code>Image</code> view so we can show the album cover to our users! Before we do that, let&#x2019;s copy all the code we have so far and paste it into a <code>GeometryReader</code>. This is a container view that defines its content as a function of its own size and coordinate space. It helps with resizing views for different device sizes.</p>



<p>Click on the Plus button in the top right corner of Xcode and search for <code>Geometry Reader</code>. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="590" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-1024x590.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17932" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-1024x590.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-521x300.png 521w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-200x115.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-768x442.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-1536x885.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-1680x968.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-1240x714.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-860x495.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-680x392.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-400x230.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26-50x29.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-26.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Select this and drag it to paste the code above where we declare our <code>VStack</code>. Now copy the <code>VStack</code> struct and paste it inside the <code>GeometryReader</code>&#x2019;s placeholder. While our preview will still look the same, our code should now look like this.</p>



<pre class="wp-block-code"><code>GeometryReader { geometry in
    VStack(spacing: 8) {
        Text(&quot;Song Title&quot;)
            .font(Font.system(.title).bold())
        Text(&quot;Artist Name&quot;)
            .font(.system(.headline))
    }
}</code></pre>



<p>Now we can add an <code>Image</code> view that will function as a place where we can display the album cover. Modify your code to look like this.</p>



<pre class="wp-block-code"><code>GeometryReader { geometry in
    // 1
    VStack(spacing: 24) {
        // 2
        Image(systemName: &quot;a.square&quot;)
            .resizable() // 3
            .frame(width: geometry.size.width - 24, height: geometry.size.width - 24) // 4
            .cornerRadius(20)
            .shadow(radius: 10)

        VStack(spacing: 8) {
            Text(&quot;Song Title&quot;)
                .font(Font.system(.title).bold())
            Text(&quot;Artist Name&quot;)
                .font(.system(.headline))
        }
    }
}</code></pre>



<ol><li>First, I take the existing <code>VStack</code> and wrap it inside another <code>VStack</code>. This offers some spacing between the <code>Image</code> view and the <code>Text</code> views.</li><li>Next, I created the <code>Image</code> view. Since the app has no data right now, I&#x2019;m using an <strong>SF Symbol</strong> as a placeholder image.</li><li>In order to resize the <code>Image</code>, it&#x2019;s important to add the <code>.resizable()</code> modifier.</li><li>Finally, I define the frame of my <code>Image</code> view. <code>geometry.size.width</code> is the width of the containing view, which here spans the full width of the device. You can think of it as a <strong>SwiftUI</strong> counterpart to <code>self.view.frame.size.width</code>. I also subtract 24 in order to create some margins.</li></ol>



<p>Your preview should look a little like this now.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="617" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-1024x617.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17933" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-1024x617.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-498x300.png 498w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-1536x925.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-1680x1012.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-1240x747.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-860x518.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-27.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Now we need to create our play, rewind, and skip buttons. Click on the plus button like before and drag a button right below the <code>VStack</code> where we define our song&#x2019;s information.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="617" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-1024x617.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17934" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-1024x617.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-498x300.png 498w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-1536x925.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-1680x1012.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-1240x747.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-860x518.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-28.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Let&#x2019;s modify this button so it functions as our rewind button. Change the <code>Button</code> view code to look like this.</p>



<pre class="wp-block-code"><code>Button(action: {
    // 1
    print(&quot;Rewind&quot;)
}) {
    // 2
    ZStack {
        // 3
        Circle()
            .frame(width: 80, height: 80)
            .accentColor(.pink)
            .shadow(radius: 10)
        Image(systemName: &quot;backward.fill&quot;)
            .foregroundColor(.white)
            .font(.system(.title))
    }
}</code></pre>



<ol><li>Currently, since we haven&#x2019;t hooked up the <code>MediaPlayer</code> framework yet, we&#x2019;ll just print a statement when the button is tapped.</li><li>Next, we define the look of our button. I decided to use a <code>ZStack</code> so we can use an <strong>SF Symbol</strong> as an image and use the <code>Circle</code> view as its background.</li><li>I create the <code>Circle</code> view first and give it a frame, color, and shadow for design purposes. I then use an <strong>SF Symbol</strong> to provide an image for the button.</li></ol>



<p>We need two more of these buttons next to each other. Command-click on the <code>Button</code> view and choose <strong>Embed in HStack</strong>. Copy the button code and paste it inside the <code>HStack</code> struct two more times.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="617" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-1024x617.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17936" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-1024x617.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-498x300.png 498w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-1536x925.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-1680x1012.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-1240x747.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-860x518.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-30.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Obviously, we don&#x2019;t want our player to look like this, so let&#x2019;s modify the <code>HStack</code> code.</p>



<pre class="wp-block-code"><code>HStack(spacing: 40) {
    Button(action: {
        print(&quot;Rewind&quot;)
    }) {
        ZStack {
            Circle()
                .frame(width: 80, height: 80)
                .accentColor(.pink)
                .shadow(radius: 10)
            Image(systemName: &quot;backward.fill&quot;)
                .foregroundColor(.white)
                .font(.system(.title))
        }
    }

    Button(action: {
        print(&quot;Pause&quot;)
    }) {
        ZStack {
            Circle()
                .frame(width: 80, height: 80)
                .accentColor(.pink)
                .shadow(radius: 10)
            Image(systemName: &quot;pause.fill&quot;)
                .foregroundColor(.white)
                .font(.system(.title))
        }
    }

    Button(action: {
        print(&quot;Skip&quot;)
    }) {
        ZStack {
            Circle()
                .frame(width: 80, height: 80)
                .accentColor(.pink)
                .shadow(radius: 10)
            Image(systemName: &quot;forward.fill&quot;)
                .foregroundColor(.white)
                .font(.system(.title))
        }
    }
}</code></pre>



<p>In the code above, I&#x2019;m simply adding a spacing of 40 between my buttons. All of them are similar to the button we created earlier. The only difference is that each button prints a different statement regarding its function and has a different image.</p>



<p>Your player view should look like this now:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="617" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-1024x617.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17937" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-1024x617.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-498x300.png 498w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-1536x925.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-1680x1012.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-1240x747.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-860x518.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-31.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Next, let&#x2019;s build the Search View.</p>



<h3 class="wp-block-heading">Search View</h3>



<p>Just like before, create a new <strong>SwiftUI</strong> view and name this file <code>SearchView.swift</code>.  You&#x2019;ll be greeted with the familiar template view.</p>



<p>Our search view will be very simple. It just needs a <code>TextField</code> view for searching songs and a <code>List</code> view for displaying the results. We&#x2019;ll also populate the list with some placeholder data, since the app isn&#x2019;t connected to the Apple Music API yet. At the top of the file, right before you declare <code>body</code>, type the following:</p>



<pre class="wp-block-code"><code>@State private var searchText = &quot;&quot;
let songs = [&quot;Blinding Lights&quot;, &quot;That Way&quot;, &quot;This Is Me&quot;]

var body: some View {
    Text(&quot;Hello, World!&quot;)
}</code></pre>



<p>The <code>searchText</code> variable is the state variable we&#x2019;ll bind our <code>TextField</code> view to. We define the <code>songs</code> array to populate our list with some temporary data.</p>



<p>Let&#x2019;s build the <code>TextField</code> view first. Remove the <code>Text</code> view and replace it with the following code:</p>



<pre class="wp-block-code"><code>VStack {
    TextField(&quot;Search Songs&quot;, text: $searchText, onCommit: {
        print(self.searchText)
    })
    .textFieldStyle(RoundedBorderTextFieldStyle())
    .padding(.horizontal, 16)
    .accentColor(.pink)
}</code></pre>



<p>I&#x2019;ve created the <code>TextField</code> view with a placeholder text of &#x201C;Search Songs&#x201D; and bound any text typed into this view to the <code>searchText</code> variable. The <code>onCommit</code> closure is called when the user presses <strong>Return</strong> on their keyboard. In the next part of the tutorial, we&#x2019;ll implement our search functionality within this closure.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="617" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-1024x617.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17938" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-1024x617.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-498x300.png 498w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-1536x925.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-1680x1012.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-1240x747.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-860x518.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-33.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Next, I&#x2019;ll create the <code>List</code> view and populate it with the data from above. Underneath the <code>TextField</code>, paste the following code.</p>



<pre class="wp-block-code"><code>List {
    // 1
    ForEach(songs, id:\.self) { songTitle in
        // 2
        HStack {
            // 3
            Image(systemName: &quot;rectangle.stack.fill&quot;)
                .resizable()
                .frame(width: 40, height: 40)
                .cornerRadius(5)
                .shadow(radius: 2)

            // 4
            VStack(alignment: .leading) {
                Text(songTitle)
                    .font(.headline)
                Text(&quot;Artist Name&quot;)
                    .font(.caption)
                    .foregroundColor(.secondary)
            }
            Spacer()
            // 5
            Button(action: {
                print(&quot;Playing \(songTitle)&quot;)
            }) {
                Image(systemName: &quot;play.fill&quot;)
                    .foregroundColor(.pink)
            }
        }
    }
}
.accentColor(.pink)</code></pre>



<p>If you&#x2019;ve worked with the <code>List</code> view or the <code>ForEach</code> structure before, this should look familiar. If not, I&#x2019;ll walk you through the code.</p>



<ol><li>Inside my <code>List</code> view, I initialize the <code>ForEach</code> structure. This generates a row view for each element of its data without my having to write each one explicitly. As my data, I pass the <code>songs</code> array we created earlier. <code>ForEach</code> hands each element of that array to the closure as a variable called <code>songTitle</code>, so we can easily populate our cells without worrying about array indexes.</li><li>Next, to emulate the look of a <code>UITableViewCell</code>, I create an <code>HStack</code>. I&#x2019;ll be placing the album cover, song title, song artist, and play button within each cell inside the <code>HStack</code>.</li><li>Next is the <code>Image</code> view. As before, I&#x2019;m using an <strong>SF Symbol</strong> as a placeholder image.</li><li>Next come our <code>Text</code> views. I place them inside a <code>VStack</code> so the song title stacks neatly above the artist name. You&#x2019;ll notice that the string value of our first <code>Text</code> view is the <code>songTitle</code> variable we created earlier.</li><li>Finally, I add a play button which indicates to the user that our music player will play this song. Currently, it only prints to the console. In the next part of the tutorial, we&#x2019;ll handle adding songs to the queue inside this action.</li></ol>



<p>That&#x2019;s all! Take a look at your Canvas and make sure it looks something like this:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="617" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-1024x617.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17939" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-1024x617.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-498x300.png 498w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-1536x925.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-1680x1012.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-1240x747.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-860x518.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-34.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<h3 class="wp-block-heading">Putting it all together</h3>



<p>We&#x2019;re almost done! All that&#x2019;s left is going back to <code>ContentView.swift</code> and modifying the inside of our <code>TabView</code> to make sure it displays the two views we have created. Delete all the code inside <code>TabView</code> and replace it with the following.</p>



<pre class="wp-block-code"><code>TabView(selection: $selection) {
    PlayerView()
        .tag(0)
        .tabItem {
            VStack {
                Image(systemName: &quot;music.note&quot;)
                Text(&quot;Player&quot;)
            }
        }
    SearchView()
        .tag(1)
        .tabItem {
            VStack {
                Image(systemName: &quot;magnifyingglass&quot;)
                Text(&quot;Search&quot;)
            }
        }
}
.accentColor(.pink)</code></pre>



<p>In the above code, I&#x2019;m adding both <code>PlayerView()</code> and <code>SearchView()</code> as the two views in our <code>TabView</code>. As such, we&#x2019;ll only have two tabs. I&#x2019;ve also defined the look of those tabs with a simple <code>Image</code> and <code>Text</code>.</p>



<p>By now you must have noticed that I&#x2019;ve been using <code>Color.pink</code> inside all my buttons and <code>.accentColors</code>. This provides the app with a unique, music-themed design. You&#x2019;re free to change the colors, fonts, and shapes however you wish.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="617" src="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-1024x617.png" alt="Introduction to MusicKit: Building a Music Player in SwiftUI" class="wp-image-17940" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-1024x617.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-498x300.png 498w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-1536x925.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-1680x1012.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-1240x747.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-860x518.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/musickit-player-35.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<h2 class="wp-block-heading">What&#x2019;s Next</h2>



<p>Congratulations! You&#x2019;ve done a lot in this tutorial by venturing beyond Swift into other technologies like Python, JSON Web Tokens, encryption, and API handling. It was a lot of ground to cover, but if you&#x2019;ve made it to the end, well done! The hardest part about using MusicKit is generating your keys and developer tokens. You can download the final project <a href="https://github.com/appcoda/MusicKitPlayer/releases/tag/0.5?ref=appcoda.com" class="rank-math-link">here</a>.</p>



<p>In the next part of this tutorial series, we&#x2019;ll complete our app. I&#x2019;ll show you how to access the Apple Music API and make web requests to the API. Furthermore, I&#x2019;ll teach you how to use <code>MediaPlayer</code> to play music and control playback from your device. If you have any doubts, feel free to leave a comment. Stay tuned!</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Vision Framework: Working with Text and Image Recognition in iOS]]></title><description><![CDATA[<!--kg-card-begin: html-->

<p>2 years ago, at WWDC 2017, Apple released the <em>Vision</em> framework, an amazing, intuitive framework that would make it easy for developers to add computer vision to their apps. Everything from text detection to facial detection to barcode scanners to integration with Core ML was covered in this framework. </p>



<p>This</p>]]></description><link>https://www.appcoda.com/animal-recognition-vision-framework/</link><guid isPermaLink="false">66612a0f166d3c03cf0114ae</guid><category><![CDATA[AI]]></category><category><![CDATA[iOS Programming]]></category><category><![CDATA[Swift]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Thu, 12 Sep 2019 01:30:34 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2019/09/mzgrja-kyla.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->

<img src="https://www.appcoda.com/content/images/wordpress/2019/09/mzgrja-kyla.jpg" alt="Vision Framework: Working with Text and Image Recognition in iOS"><p>2 years ago, at WWDC 2017, Apple released the <em>Vision</em> framework, an amazing, intuitive framework that would make it easy for developers to add computer vision to their apps. Everything from text detection to facial detection to barcode scanners to integration with Core ML was covered in this framework. </p>



<p>This year, at WWDC 2019, Apple released several more new features to this framework that really push the field of computer vision. That&#x2019;s what we&#x2019;ll be looking at in this tutorial.</p>



<h2 class="wp-block-heading">What We&#x2019;ll be Building in this Tutorial</h2>



<p>For many years now, Snapchat has reigned as the most popular social media app among teens. With its simple UI and great AR features, high schoolers around the world love placing cat and dog filters on themselves. <strong>Let&#x2019;s flip the script!</strong></p>



<p>In this tutorial, we&#x2019;ll be building <em>Snapcat</em>, the Snapchat for Cats. Using <code>Vision</code>&#x2019;s new animal detector, we&#x2019;ll be able to detect cats, place the human filter on them, and take pictures of them. After taking pictures of our cats, we&#x2019;ll want to scan their business cards. Using a brand new framework, <code>VisionKit</code>, we&#x2019;ll be able to scan them just like the default Notes app on iOS does.</p>



<p>That&#x2019;s not all! If you remember my tutorial from 2 years ago on using <a href="https://www.appcoda.com/vision-framework-introduction/">Vision for text detection</a>, I ended it by saying that even though you could detect text, you would still need to integrate the code with a <code>Core ML</code> model to recognize each character. Now Apple has finally released a new class under <code>Vision</code> to recognize the text it detects. We&#x2019;ll use this new class to grab the information from the scanned cards and assign it to our cat. Let&#x2019;s get started!</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="489" src="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-1024x489.jpg" alt="Vision Framework: Working with Text and Image Recognition in iOS" class="wp-image-17193" srcset="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-1024x489.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-200x96.jpg 200w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-600x287.jpg 600w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-768x367.jpg 768w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-1680x802.jpg 1680w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-1240x592.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-860x411.jpg 860w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-680x325.jpg 680w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-400x191.jpg 400w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework-50x24.jpg 50w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-demo-vision-framework.jpg 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>The project in this tutorial was built using Swift 5.1 in Xcode 11 Beta 7 running on macOS Catalina. If you face any errors when going through the tutorial, try updating your Xcode to the latest version or leave a comment below.</p>



<h2 class="wp-block-heading">Getting the Starter Project</h2>



<p>To begin, download the starter project <a href="https://github.com/appcoda/ImageRecognitionDemo/raw/master/starter.zip?ref=appcoda.com">here</a>. Build and run it on your device. What we have is the iconic Snapchat camera view. To create this, I added an <code>ARSCNView</code>, a scene view capable of displaying AR content, and overlaid it with a button. </p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="988" src="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-1024x988.jpg" alt="Vision Framework: Working with Text and Image Recognition in iOS" class="wp-image-17195" srcset="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-1024x988.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-200x193.jpg 200w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-311x300.jpg 311w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-768x741.jpg 768w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-1680x1621.jpg 1680w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-1240x1197.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-860x830.jpg 860w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-680x656.jpg 680w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-400x386.jpg 400w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2-50x48.jpg 50w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-2.jpg 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>When the shutter button is pressed, the app takes a snapshot of the current frame of this scene view and passes it along the navigation stack to the Cat Profile View, where it&#x2019;s loaded as the profile image. In this view, we have empty fields for the data: <em>Name</em>, <em>Number</em>, and <em>Email</em>. This is because we haven&#x2019;t scanned the business card yet! Tapping the button has no effect yet because this is where we&#x2019;ll implement the document scanning via <code>VisionKit</code>. </p>



<h2 class="wp-block-heading">Image Recognition &#x2013; Detecting a Cat Using VNRecognizeAnimalsRequest</h2>



<p>Our first step is to detect cats in our camera view. Head over to <code>CameraViewController</code> and at the top of the file, type <code>import Vision</code>. This imports the Vision framework into this file, giving us access to all the classes and functions. </p>



<p>Now we want our app to automatically place the human filter (&#x201C;the human emoji&#x201D;) on our cat with no input from us whatsoever. This means we need to scan the frames in our camera view every half second. To do this, modify your code to look like below:</p>



<pre class="wp-block-code"><code>@IBOutlet var previewView: ARSCNView!
var timer: Timer?

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    let configuration = ARWorldTrackingConfiguration()
    previewView.session.run(configuration)
    timer = Timer.scheduledTimer(timeInterval: 0.5, target: self, selector: #selector(self.detectCat), userInfo: nil, repeats: true)

}</code></pre>



<p>We create a <code>timer</code> property and schedule it in our <code>viewWillAppear</code> function so that it calls the <code>detectCat()</code> function every half second.</p>



<p>However, you&#x2019;ll see an error because we haven&#x2019;t created the <code>detectCat</code> function yet. This is a simple fix. Insert the following code in the <code>CameraViewController</code> class:</p>



<pre class="wp-block-code"><code>@objc func detectCat() {
    print(&quot;Detected Cat!&quot;)
}</code></pre>



<p>Let&#x2019;s build and run the project so far. The moment our <code>CameraViewController</code> is initialized, we should see the text &#x201C;Detected Cat!&#x201D; printed to the console every half second.</p>



<p>Now here comes to the fun part where we&#x2019;ll use the new <code>VNRecognizeAnimalsRequest</code>, a built-in class that lets you detect cats and dogs.</p>



<p>Modify the <code>detectCat()</code> function to look like this:</p>



<pre class="wp-block-code"><code>@objc func detectCat() {
    guard let currentFrameBuffer = self.previewView.session.currentFrame?.capturedImage else { return }
    let image = CIImage(cvPixelBuffer: currentFrameBuffer)
    let detectAnimalRequest = VNRecognizeAnimalsRequest { (request, error) in
        DispatchQueue.main.async {
            if let result = request.results?.first as? VNRecognizedObjectObservation {
                let cats = result.labels.filter({$0.identifier == &quot;Cat&quot;})
                for cat in cats {
                    print(&quot;Found a cat!!&quot;)
                }
            }
        }
    }

    DispatchQueue.global().async {
        try? VNImageRequestHandler(ciImage: image).perform([detectAnimalRequest])
    }
}</code></pre>



<p>There&#x2019;s quite a bit going on here, but I&#x2019;ll go through it line by line. First, we use a <code>guard</code> statement to make sure the current frame in our preview view has an image. If it does, we create a <code>CIImage</code> from the pixel buffer of that frame&#x2019;s image.</p>



<p>Here&#x2019;s some terminology! The <code>CIImage</code> class we&#x2019;re using is Core Image&#x2019;s representation of an image. This type of image is great when you want to analyze its pixels. A <em>pixel buffer</em> stores image data in main memory.</p>



<p>Before explaining the next few lines of code, it&#x2019;ll help to understand how Vision works in case you&#x2019;re unfamiliar. Basically, there are 3 steps to implement Vision in your app, which are:</p>



<ol><li><strong>Requests</strong> &#x2013; A request describes what you want the framework to detect for you.</li><li><strong>Handlers</strong> &#x2013; A handler performs, or &#x201C;handles&#x201D;, the request on a given input image.</li><li><strong>Observations</strong> &#x2013; Observations are the results the framework hands back, containing the detected data for you to work with.</li></ol>
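<p>As a framework-free analogy, the three-step flow can be sketched in plain Swift. None of the types below are real Vision APIs; they are made up purely to illustrate how a request, a handler, and observations relate to each other:</p>

```swift
import Foundation

// 3. Observations: the results handed back by the framework.
struct Observation {
    let label: String
}

// 1. A request describes what to detect and what to do with the results.
struct Request {
    let completion: ([Observation]) -> Void
}

// 2. A handler performs requests against one specific input "image".
struct ImageHandler {
    let image: String
    func perform(_ requests: [Request]) {
        // Pretend each word in the "image" is a detected object.
        let observations = image.split(separator: " ").map { Observation(label: String($0)) }
        requests.forEach { $0.completion(observations) }
    }
}

var detected: [String] = []
let request = Request { observations in
    detected = observations.map { $0.label }
}
ImageHandler(image: "cat dog").perform([request])
// detected now holds ["cat", "dog"]
```

<p>In real Vision code the same roles are played by <code>VNRecognizeAnimalsRequest</code>, <code>VNImageRequestHandler</code>, and <code>VNRecognizedObjectObservation</code>.</p>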



<p>So first, we create a <code>VNRecognizeAnimalsRequest</code> to detect animals. This is a new image-based request that was introduced in iOS 13 specifically for animal detection. </p>



<p>Next, within the completion handler of this request, we take the first result and filter its labels. Why are we doing this? Because <code>VNRecognizeAnimalsRequest</code> can detect both cats and dogs. For our use, we only want cats, which is why we define an array called <code>cats</code> containing the labels whose identifier is equal to &#x201C;Cat&#x201D;. Finally, for each <code>cat</code> label, we print to the console that we found a cat. This is just to confirm that Vision is detecting cats.</p>
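<p>To see the filtering logic in isolation, here is a plain-Swift sketch using hypothetical stand-in types (<code>AnimalLabel</code> and <code>AnimalObservation</code> are made up for illustration; in real Vision code the labels are <code>VNClassificationObservation</code> values inside a <code>VNRecognizedObjectObservation</code>). It also shows how you might optionally add a minimum-confidence threshold:</p>

```swift
import Foundation

// Hypothetical stand-ins for Vision's observation types.
struct AnimalLabel {
    let identifier: String
    let confidence: Double
}

struct AnimalObservation {
    let labels: [AnimalLabel]
}

// Keep only labels identifying a cat, optionally above a minimum
// confidence, mirroring the filter in detectCat().
func catLabels(in observation: AnimalObservation,
               minimumConfidence: Double = 0.0) -> [AnimalLabel] {
    return observation.labels.filter {
        $0.identifier == "Cat" && $0.confidence >= minimumConfidence
    }
}

let animalObservation = AnimalObservation(labels: [
    AnimalLabel(identifier: "Cat", confidence: 0.92),
    AnimalLabel(identifier: "Dog", confidence: 0.40),
    AnimalLabel(identifier: "Cat", confidence: 0.10)
])

// Only the high-confidence cat label survives the filter.
let filteredCats = catLabels(in: animalObservation, minimumConfidence: 0.5)
```

<p>A confidence threshold like this can help avoid flickering detections in a live camera feed, though the tutorial itself filters on the identifier alone.</p>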



<p>Remember, we&#x2019;ve only defined the request. Now we need to ask Vision to perform it. After defining the request, we ask a <code>VNImageRequestHandler</code> to perform the request on the <code>CIImage</code> we defined earlier. A <code>VNImageRequestHandler</code> is an object that processes one or more image analysis requests on a single image.</p>



<p>Build and run the project on a device. I don&#x2019;t have a cat, so I&#x2019;ll be using some images from the web. If Vision correctly detects the cat, we should see the message printed to the console.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="613" src="https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-1024x613.jpg" alt="Vision Framework: Working with Text and Image Recognition in iOS" class="wp-image-17196" srcset="https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-1024x613.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-200x120.jpg 200w, https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-501x300.jpg 501w, https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-768x460.jpg 768w, https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-1680x1005.jpg 1680w, https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-1240x742.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-860x515.jpg 860w, https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-680x407.jpg 680w, https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-400x239.jpg 400w, https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat-50x30.jpg 50w, https://www.appcoda.com/content/images/wordpress/2019/09/animal-recognition-cat.jpg 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>It works!! But how should we let users know that the app detects a cat? It&#x2019;s time to code again. </p>



<p>At the top of the class where we initialized the <code>timer</code> object, type the line: <code>var rectangleView = UIView()</code>. This initializes a <code>UIView</code> which we can modify to be a box. We&#x2019;ll place this box around the coordinates where our Vision function can detect the cat.</p>



<p>Inside the <code>detectCat()</code> method, insert <code>self.rectangleView.removeFromSuperview()</code> at the beginning. Then modify the <code>for</code> loop to look like this:</p>



<pre class="wp-block-code"><code>for cat in cats {
    self.rectangleView = UIView(frame: CGRect(x: result.boundingBox.minX * self.previewView.frame.width, y: result.boundingBox.minY * self.previewView.frame.height, width: result.boundingBox.width * self.previewView.frame.width, height: result.boundingBox.height * self.previewView.frame.height))

    self.rectangleView.backgroundColor = .clear
    self.rectangleView.layer.borderColor = UIColor.red.cgColor
    self.rectangleView.layer.borderWidth = 3
    self.previewView.insertSubview(self.rectangleView, at: 0)
}</code></pre>



<p>Here&#x2019;s what we&#x2019;re doing. We are defining our <code>rectangleView</code> to have the bounds of the <code>result</code>&#x2019;s bounding box. However, there&#x2019;s something worth mentioning here.</p>



<p>In Vision, the result&#x2019;s bounding box is expressed in <em>normalized</em> coordinates: each component is a value between 0 and 1, relative to the size of the analyzed image. That&#x2019;s why, when we create the frame for our <code>rectangleView</code>, we multiply each component by the width or height of the <code>previewView</code> to convert it into the view&#x2019;s coordinate space.</p>
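<p>The conversion is just a per-component multiplication. Here is a small, self-contained helper that mirrors the math used for <code>rectangleView</code>&#x2019;s frame (the view size below is a made-up example):</p>

```swift
import Foundation

// Convert a Vision-style normalized bounding box (components in 0...1)
// into the coordinate space of a view with the given size.
func denormalize(_ boundingBox: CGRect, in viewSize: CGSize) -> CGRect {
    return CGRect(x: boundingBox.minX * viewSize.width,
                  y: boundingBox.minY * viewSize.height,
                  width: boundingBox.width * viewSize.width,
                  height: boundingBox.height * viewSize.height)
}

// Example: a box covering the middle of a 400x800 preview view.
let normalizedBox = CGRect(x: 0.25, y: 0.5, width: 0.5, height: 0.25)
let viewFrame = denormalize(normalizedBox, in: CGSize(width: 400, height: 800))
// viewFrame is (x: 100, y: 400, width: 200, height: 200)
```

<p>One caveat worth knowing: Vision&#x2019;s normalized coordinates use a bottom-left origin, while UIKit uses a top-left origin, so for precise placement you may also need to flip the y-coordinate (e.g. <code>y = (1 - minY - height) * viewHeight</code>).</p>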



<p>After defining the frame, we give the view a transparent background and a red border of width 3, then add it to our <code>previewView</code>. Now, you may be wondering why we included the line <code>self.rectangleView.removeFromSuperview()</code> at the beginning of the function. Remember, our Vision code runs every half second. If we don&#x2019;t remove the previous <code>rectangleView</code>, we&#x2019;ll end up with <strong>a lot</strong> of boxes on our view.</p>



<p>Ok! Build and run the app! Are you seeing the boxes?</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="400" height="790" src="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-9.png" alt="Vision Framework: Working with Text and Image Recognition in iOS" class="wp-image-17197" srcset="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-9.png 400w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-9-200x395.png 200w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-9-152x300.png 152w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-9-50x99.png 50w" sizes="(max-width: 400px) 100vw, 400px"></figure>



<p>Yes! It&#x2019;s working. Now we have one last thing to do: place a human emoji near the cat. This is really simple. Here&#x2019;s what we do. </p>



<p>At the top of the class, underneath where we define our <code>rectangleView</code>, add the line: <code>var humanLabel = UILabel()</code>. </p>



<p>Inside the <code>for</code> loop of our <code>detectCat()</code> function, we need to add this label to our view. Here&#x2019;s how to do that.</p>



<pre class="wp-block-code"><code>for cat in cats {
    self.rectangleView = UIView(frame: CGRect(x: result.boundingBox.minX * self.previewView.frame.width, y: result.boundingBox.minY * self.previewView.frame.height, width: result.boundingBox.width * self.previewView.frame.width, height: result.boundingBox.height * self.previewView.frame.height))

    self.humanLabel.text = &quot;&#x1F466;&quot;
    self.humanLabel.font = UIFont.systemFont(ofSize: 70)
    self.humanLabel.frame = CGRect(x: 0, y: 0, width: self.rectangleView.frame.width, height: self.rectangleView.frame.height)

    self.rectangleView.addSubview(self.humanLabel)
    self.rectangleView.backgroundColor = .clear
    self.previewView.insertSubview(self.rectangleView, at: 0)
} </code></pre>



<p>This is easy to follow. We keep the same frame for our <code>rectangleView</code>. We set the label&#x2019;s text to the emoji and give it a font size of 70. Since we want the label to cover the entire box, we set its width and height to match the <code>rectangleView</code>. Finally, we remove the code that drew a border and instead add the label to our box, then add the box to the <code>previewView</code>. </p>



<p>Build and run the app. Is the code working? Can you see the human emoji on the cat?</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="400" height="790" src="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-11.png" alt="Vision Framework: Working with Text and Image Recognition in iOS" class="wp-image-17198" srcset="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-11.png 400w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-11-200x395.png 200w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-11-152x300.png 152w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-11-50x99.png 50w" sizes="(max-width: 400px) 100vw, 400px"></figure>



<p>It works! We&#x2019;ve got a human emoji placed on the cat. Now, of course, it&#x2019;s not perfect because Vision only detects a cat object, not the facial features of the cat. Hopefully Apple will add that soon and we can further refine the filter.</p>



<h2 class="wp-block-heading">Scanning the Business Card Using VNDocumentCameraViewController</h2>



<p>Whew! We made it through the first part. From here on, it gets a little easier. When you snap the picture of your cat, you automatically get pushed to the Cat Profile view where you can see the image you&#x2019;ve taken, the details of the cat, and a button that says &#x201C;Scan For Details&#x201D;. We&#x2019;ll pull up the new document scanner when this button is tapped.</p>



<p>Navigate to <code>CatProfileViewController.swift</code>. At the top of the file, underneath <code>import UIKit</code>, type <code>import VisionKit</code>. The difference between <code>Vision</code> and <code>VisionKit</code> is that while <code>Vision</code> lets you perform computer vision analyses on an image, <code>VisionKit</code> is a small framework that lets your app use the system&#x2019;s document scanner.</p>



<p>Now within the <code>scanDocument()</code> function, all you need to add are three lines of code.</p>



<pre class="wp-block-code"><code>@IBAction func scanDocument(_ sender: Any) {
    let documentCameraViewController = VNDocumentCameraViewController()
    documentCameraViewController.delegate = self
    self.present(documentCameraViewController, animated: true, completion: nil)
}</code></pre>



<p>The <code>VNDocumentCameraViewController</code> is a special view controller created by Apple for scanning documents. We set its delegate to our class and present the view controller. </p>



<p>Next, you need to adopt the <code>VNDocumentCameraViewControllerDelegate</code> protocol in the class. Change the line where we define our <code>CatProfileViewController</code> class to look like this:</p>



<pre class="wp-block-code"><code>class CatProfileViewController: UIViewController, VNDocumentCameraViewControllerDelegate {
    ....
}</code></pre>



<p>Build and run the app. After taking a picture, when you press &#x201C;Scan for Details&#x201D;, the device&#x2019;s document scanner should pop up.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="988" src="https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-1024x988.jpg" alt="Vision Framework: Working with Text and Image Recognition in iOS" class="wp-image-17202" srcset="https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-1024x988.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-200x193.jpg 200w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-311x300.jpg 311w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-768x741.jpg 768w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-1680x1621.jpg 1680w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-1240x1197.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-860x830.jpg 860w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-680x656.jpg 680w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-400x386.jpg 400w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-50x48.jpg 50w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition.jpg 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Looks like it&#x2019;s scanning! However, when we press &#x201C;Save&#x201D;, nothing happens. This is because we haven&#x2019;t implemented any delegate methods yet. Underneath the <code>scanDocument()</code> function, type the following. </p>



<pre class="wp-block-code"><code>func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
    let image = scan.imageOfPage(at: 0)
    self.catImageView.image = image
    controller.dismiss(animated: true)
}</code></pre>



<p>The <code>documentCameraViewController(_:didFinishWith:)</code> method runs when the Save button is tapped. We take the first scanned page as an image and set the <code>catImageView</code> to display it. We&#x2019;re doing this to make sure the scanner is working well. </p>



<p>Build and run the app. Tap &#x201C;Scan for Details&#x201D;, scan any business card, and when you tap &#x201C;Save&#x201D;, the document scanner should dismiss and the scanned image should appear in the <code>catImageView</code>. </p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="988" src="https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-1024x988.jpg" alt="Vision Framework: Working with Text and Image Recognition in iOS" class="wp-image-17203" srcset="https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-1024x988.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-200x193.jpg 200w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-311x300.jpg 311w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-768x741.jpg 768w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-1680x1621.jpg 1680w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-1240x1197.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-860x830.jpg 860w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-680x656.jpg 680w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-400x386.jpg 400w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2-50x48.jpg 50w, https://www.appcoda.com/content/images/wordpress/2019/09/business-card-recognition-2.jpg 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Now that the app is able to detect the business card, let&#x2019;s further implement it to extract the textual data from the card.</p>



<h2 class="wp-block-heading">Text Recognition Using VNRecognizeTextRequest</h2>



<p>With the new <code>VNRecognizeTextRequest</code> API introduced in iOS 13, it&#x2019;s pretty easy to find and recognize text in an image.</p>



<p>First, <code>import Vision</code> in the file, since <code>VNRecognizeTextRequest</code> lives in the Vision framework (<code>VisionKit</code> only provides the document scanner). Next, before the <code>viewDidLoad()</code> function, type the following:</p>



<pre class="wp-block-code"><code>var textRecognitionRequest = VNRecognizeTextRequest()
var recognizedText = &quot;&quot; </code></pre>



<p>The <code>VNRecognizeTextRequest</code> works just like the <code>VNRecognizeAnimalsRequest</code> class we used earlier. This request tells Vision to run text recognition on whatever image we pass to it. Later, when Vision detects text in an image, we&#x2019;ll assign it to the variable <code>recognizedText</code>. </p>



<p>Head over to the <code>viewDidLoad()</code> function and update the code like below:</p>



<pre class="wp-block-code"><code>override func viewDidLoad() {
    super.viewDidLoad()
    self.navigationController?.setNavigationBarHidden(false, animated: true)
    self.title = &quot;New Cat&quot;
    catImageView.image = catImage

    textRecognitionRequest = VNRecognizeTextRequest(completionHandler: { (request, error) in
        if let requestResults = request.results as? [VNRecognizedTextObservation], !requestResults.isEmpty {
            self.recognizedText = &quot;&quot;
            for observation in requestResults {
                guard let candidate = observation.topCandidates(1).first else { continue }
                self.recognizedText += candidate.string
                self.recognizedText += &quot;\n&quot;
            }
            self.catDetailsTextView.text = self.recognizedText
        }
    })
}</code></pre>



<p>In the code, we initiate a <code>VNRecognizeTextRequest</code>. Vision returns the result of this request in a&#xA0;<a href="https://developer.apple.com/documentation/vision/vnrecognizedtextobservation?ref=appcoda.com"><code>VNRecognizedTextObservation</code></a>&#xA0;object. This type of observation contains information about both the location and content of text and glyphs that Vision recognized in the input image. </p>



<p>For every item in the <code>requestResults</code> array, we choose the first entry returned by <code>topCandidates(1)</code>. What is this? For every observation Vision makes, it outputs a list of potential candidates for the detected text, ranked by confidence. In some cases, it may be useful to consider all the possibilities, but for our case we only care about the most likely prediction.</p>



<p>Finally, we take the <code>candidate.string</code> and add it to our <code>recognizedText</code>. After going through every observation in the <code>requestResults</code> array, we set the text of the <code>catDetailsTextView</code> to our <code>recognizedText</code>.</p>
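<p>The concatenation loop is easy to test outside of Vision with a stand-in type (<code>TextObservation</code> here is hypothetical; the real observations come from <code>VNRecognizeTextRequest</code>). Observations without a candidate are simply skipped:</p>

```swift
import Foundation

// A stand-in for what observation.topCandidates(1).first?.string yields.
struct TextObservation {
    let topCandidate: String?
}

// Build one newline-terminated line per observation, mirroring the
// loop inside the request's completion handler.
func joinedText(from observations: [TextObservation]) -> String {
    var recognizedText = ""
    for observation in observations {
        guard let candidate = observation.topCandidate else { continue }
        recognizedText += candidate
        recognizedText += "\n"
    }
    return recognizedText
}

// A mock "business card" scan with one unreadable region.
let scannedCard = [
    TextObservation(topCandidate: "Tom the Cat"),
    TextObservation(topCandidate: nil),
    TextObservation(topCandidate: "tom@icloud.com")
]
let cardText = joinedText(from: scannedCard)
// cardText is "Tom the Cat\ntom@icloud.com\n"
```

<p>Each recognized line ends up on its own row of the text view, which is why we append <code>&quot;\n&quot;</code> after every candidate.</p>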



<p>Your code should look a little like the image below:</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="613" src="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-1024x613.png" alt="Vision Framework: Working with Text and Image Recognition in iOS" class="wp-image-17204" srcset="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-1024x613.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-501x300.png 501w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-768x460.png 768w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-1680x1005.png 1680w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-1240x742.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-860x515.png 860w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-680x407.png 680w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-400x239.png 400w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-16.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>You can build and run the app now, but nothing will happen. Why? Because we still haven&#x2019;t defined a handler to perform the request. This can be accomplished in a couple lines of code. Head back to the <code>documentCameraViewController(_:didFinishWith:)</code> function and modify it like this:</p>



<pre class="wp-block-code"><code>func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
    let image = scan.imageOfPage(at: 0)
    let handler = VNImageRequestHandler(cgImage: image.cgImage!, options: [:])
    do {
        try handler.perform([textRecognitionRequest])
    } catch {
        print(error)
    }
    controller.dismiss(animated: true)
}</code></pre>



<p>The code above should look familiar, as we used it in the first section of the tutorial. We define a <code>handler</code> of type <code>VNImageRequestHandler</code> and pass it the <code>cgImage</code> of our scan.</p>



<p>A <code>VNImageRequestHandler</code> is an object that processes one or more image analysis requests pertaining to a single image. By specifying the image, we can begin executing the Vision request. In the <code>do</code> block, we ask the handler to perform the Vision request we defined earlier.</p>



<p>Build and run the app. Is it working?</p>



<div class="wp-block-image"><figure class="aligncenter"><img loading="lazy" decoding="async" width="400" height="790" src="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-18.png" alt="Vision Framework: Working with Text and Image Recognition in iOS" class="wp-image-17205" srcset="https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-18.png 400w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-18-200x395.png 200w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-18-152x300.png 152w, https://www.appcoda.com/content/images/wordpress/2019/09/image-recognition-18-50x99.png 50w" sizes="(max-width: 400px) 100vw, 400px"></figure></div>



<p>It seems to be working quite well! We&#x2019;re able to gather all the text from the scanned card and place it onto the text view. </p>



<h2 class="wp-block-heading">Fine Tuning the Text Recognition</h2>



<p>Now in my image above, the text recognition worked great. However, I&#x2019;ll try to help you fine-tune our <code>VNRecognizeTextRequest</code> in case it didn&#x2019;t work so well for you. </p>



<p>At the end of our <code>viewDidLoad()</code> function, type the following:</p>



<pre class="wp-block-code"><code>textRecognitionRequest.recognitionLevel = .accurate
textRecognitionRequest.usesLanguageCorrection = false
textRecognitionRequest.customWords = [&quot;@gmail.com&quot;, &quot;@outlook.com&quot;, &quot;@yahoo.com&quot;, &quot;@icloud.com&quot;] </code></pre>



<p>These are some of the values that the <code>VNRecognizeTextRequest</code> has. You can modify them to fine tune your application. Let&#x2019;s go through these.</p>



<ol><li><code>.recognitionLevel</code>: You have two options here: <code>.fast</code> and <code>.accurate</code>. Apple advises using <code>.accurate</code> because, even though it takes slightly more time, it works well for recognizing text in custom font shapes and sizes. The advantage of <code>.fast</code> is that it takes up less memory on the device.</li><li><code>.usesLanguageCorrection</code>: Here&#x2019;s how the Vision text recognition request works: after grabbing the text from each observation, it doesn&#x2019;t just output that text to you. It runs the text through another layer of natural language processing to correct any misspelled words. Think of it like a spelling and grammar check. By changing this Boolean value, you can turn this behavior on or off. When should you set it to true? If your application recognizes text from a book or a document, you should enable this setting. However, in our scenario, we&#x2019;re using it to detect names, numbers, and email addresses. If we enable this setting, the request may mistake the number <em>1</em> for the lowercase letter <em>l</em>, or correct the last name <em>He</em> to <em>Hey</em> or <em>Hi</em>. </li><li><code>.customWords</code>: This value is an array of specific words. In our scenario, we&#x2019;re detecting email addresses, which generally end in <em>@email.com</em>. While this isn&#x2019;t a word on its own, we wouldn&#x2019;t want Vision predicting these characters to be something else. By offering custom words, the text recognition request analyzes the image a second time to see if any of those words appear so it can recognize them properly.</li></ol>



<p>Build and run the app. Is there any improvement? Did the recognition system get better or worse? By fine-tuning these parameters, you can get the best out of Vision&#x2019;s text recognition system in your app.</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>As you can see in this tutorial, it&#x2019;s easy to take advantage of all the new APIs in Vision to perform object recognition. You learned how to use the brand new animal detector to recognize cats and dogs in an image. You should also understand how the new framework, <code>VisionKit</code>, makes scanning documents a breeze. Finally, the coveted text recognition APIs became available in Vision. With easy-to-use APIs and lots of parameters for fine-tuning the system, you can easily implement text recognition in your apps.</p>



<p>However, this isn&#x2019;t all that was announced at WWDC 2019. Vision also introduced APIs to classify images into categories, analyze saliency, and detect human rectangles. I&#x2019;ve attached some great links if you want to continue learning more about Vision.</p>



<ul><li><a href="https://developer.apple.com/documentation/vision?ref=appcoda.com">Vision Documentation</a></li><li><a href="https://developer.apple.com/videos/play/wwdc2019/222/?ref=appcoda.com">WWDC 2019 Session: Understanding Images in Vision Framework</a></li><li><a href="https://developer.apple.com/videos/play/wwdc2019/234?ref=appcoda.com">WWDC 2019 Session: Text Recognition in Vision Framework</a></li><li><a href="https://www.appcoda.com/vision-framework-introduction/">AppCoda Tutorial: Using Vision Framework for Text Detection in iOS 11</a></li></ul>



<p>I&#x2019;d love to know what you&#x2019;re building with Vision! Leave a comment below if you have a question or want to share what you&#x2019;re working on. You can <a href="https://github.com/appcoda/ImageRecognitionDemo?ref=appcoda.com">download the full tutorial here</a>. Thanks for reading! Catch you in the next article!</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[How to Detect and Track the User's Face Using ARKit]]></title><description><![CDATA[<!--kg-card-begin: html-->

<p>One of the most innovative inventions Apple has come up with in the past year is its True Depth camera. An ode to hardware and software engineers, the <a href="https://www.extremetech.com/mobile/255771-apple-iphone-x-truedepth-camera-works?ref=appcoda.com">True Depth camera</a> is what powers its secure facial recognition system, FaceID. As developers, the True Depth camera opens up a world</p>]]></description><link>https://www.appcoda.com/arkit-face-tracking/</link><guid isPermaLink="false">66612a0f166d3c03cf0114a7</guid><category><![CDATA[ARKit]]></category><category><![CDATA[Swift]]></category><category><![CDATA[iOS Programming]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Wed, 31 Jul 2019 11:07:53 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1.png" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->

<img src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1.png" alt="How to Detect and Track the User&apos;s Face Using ARKit"><p>One of the most innovative inventions Apple has come up with in the past year is its True Depth camera. An ode to hardware and software engineers, the <a href="https://www.extremetech.com/mobile/255771-apple-iphone-x-truedepth-camera-works?ref=appcoda.com">True Depth camera</a> is what powers its secure facial recognition system, FaceID. As developers, the True Depth camera opens up a world of possibilities for us, especially in the field of face-base interactions.</p>



<p>Before we begin this ARKit tutorial, let me quickly brief you on the different parts of the camera. Like most iPhone/iPad front cameras, the True Depth camera comes with a microphone, a 7-megapixel camera, an ambient light sensor, a proximity sensor, and a speaker. What sets the True Depth camera apart is the addition of a dot projector, a flood illuminator, and an infrared camera.</p>



<p>The dot projector projects more than 30,000 invisible dots onto your face to build a local map (you&#x2019;ll see this later in the tutorial). The infrared camera reads the dot pattern, captures an infrared image, then sends the data to the Secure&#xA0;Enclave in the A12&#xA0;Bionic chip to confirm a&#xA0;match. Finally, the flood illuminator emits invisible infrared light to help identify your face even when it&#x2019;s dark. </p>



<p>These parts come together to create some magical experiences like Animojis and Memojis. Special effects that require a 3D model of the user&#x2019;s face and head can rely on the True Depth Camera.</p>



<h2 class="wp-block-heading">Introduction to the Demo Project</h2>



<p>I believe that it&#x2019;s important for developers to learn how to utilize the True Depth camera so they can perform face tracking and create amazing face-based experiences for users. In this tutorial, I will show you how we can use the 30,000 dots to recognize different facial movements using <code>ARFaceTrackingConfiguration</code>, which comes with the ARKit framework.</p>



<p>The final result will look like this:</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="461" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-1024x461.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16989" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-1024x461.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-200x90.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-600x270.png 600w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-768x345.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-1680x756.png 1680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-1240x558.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-860x387.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-680x306.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-400x180.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-1-50x22.png 50w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Let&#x2019;s get started!</p>



<p>You will need to run this project on an iPhone X, XS, or XR, or an iPad Pro (3rd gen), as these are the only devices that have the True Depth camera. We will also be using Swift 5 and Xcode 10.2.</p>



<p><strong>Editor&#x2019;s Note</strong>: If you&#x2019;re new to ARKit, you can refer to <a href="https://www.appcoda.com/arkit-3d-object/">our ARKit tutorials</a>.</p>



<h2 class="wp-block-heading">Creating an ARKit Demo for Face Tracking</h2>



<p>First, open Xcode and create a new Xcode project. Under templates, make sure to choose Augmented Reality App under iOS.</p>



<p>Next, set the name of your project. I simply named mine <em>True Depth</em>. Make sure the language is set to <em>Swift</em> and Content Technology to <em>SceneKit</em>.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3-1024x607.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16990" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-3.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Head over to <code>Main.storyboard</code>. There should be a single view with an <code>ARSCNView</code> already connected to an outlet in your code.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4-1024x607.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16991" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-4.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>What we have to do is really simple. All we need to do is add a <code>UIView</code> and a <code>UILabel</code> inside that view. This label will inform the user of the facial expressions they are making.</p>



<p>Drag and drop a <code>UIView</code> into the <code>ARSCNView</code>. Now, let&#x2019;s set the constraints. Set the width to 240pt and height to 120pt. Set the left and bottom constraints to 20pt.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5-1024x607.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16992" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-5.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>For design purposes, let&#x2019;s set the alpha of the view to <em>0.8</em>. Now, drag a <code>UILabel</code> into the view you just added. Set the constraints to <em>8 points</em> all around as shown below.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="576" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6-1024x576.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16993" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6-1024x576.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6-533x300.png 533w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6-768x432.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6-1240x698.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6-860x484.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6-400x225.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6-50x28.png 50w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-6.png 1440w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Finally, set the label&#x2019;s text alignment to center. Your final storyboard should look like this. </p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7-1024x607.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16994" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-7.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Now, let&#x2019;s connect the <code>IBOutlets</code> to our <code>ViewController.swift</code> file. Switch to the Assistant editor, then Control-drag from the <code>UIView</code> and the <code>UILabel</code> over to <code>ViewController.swift</code> to create the <code>IBOutlets</code>.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8-1024x607.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16995" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-8.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>You should create two outlets: <code>faceLabel</code> and <code>labelView</code>. </p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9-1024x607.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16996" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-9.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<h2 class="wp-block-heading">Creating a Face Mesh</h2>



<p>Let&#x2019;s clean up the code a little bit. Because we chose the Augmented Reality App as our template, there&#x2019;s some code which we don&#x2019;t need. Change your <code>viewDidLoad</code> function to this:</p>



<pre class="wp-block-code"><code>override func viewDidLoad() {
    super.viewDidLoad()

    // 1 
    labelView.layer.cornerRadius = 10

    sceneView.delegate = self
    sceneView.showsStatistics = true

    // 2
    guard ARFaceTrackingConfiguration.isSupported else {
        fatalError(&quot;Face tracking is not supported on this device&quot;)
    }
}</code></pre>



<p>With the template, our code loads a 3D scene. However, we don&#x2019;t need this scene, so we delete it. At this point, you can delete the <code>art.scnassets</code> folder in the project navigator. Finally, we add two pieces of code to our <code>viewDidLoad</code> method.</p>



<ol><li>First, we round the corners of the <code>labelView</code>. This is more of a design choice. </li><li>Next, we check to see if the device supports the <code>ARFaceTrackingConfiguration</code>. This is the AR tracking setting we&#x2019;ll be using to create a face mesh. If we don&#x2019;t check to see if a device supports this, our app will crash. If the device does not support the configuration, then we will present an error.</li></ol>



<p>Next, we&#x2019;ll change one line in our <code>viewWillAppear</code> function. Change the constant <code>configuration</code> to <code>ARFaceTrackingConfiguration()</code>. Your code should look like this now.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10-1024x607.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16997" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-10.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Next, we need to add the <code>ARSCNViewDelegate</code> methods. Add the following code below <code>// MARK: - ARSCNViewDelegate</code>. </p>



<pre class="wp-block-code"><code>func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -&gt; SCNNode? {
    let faceMesh = ARSCNFaceGeometry(device: sceneView.device!)
    let node = SCNNode(geometry: faceMesh)
    node.geometry?.firstMaterial?.fillMode = .lines
    return node
}</code></pre>



<p>This code runs when the <code>ARSCNView</code> is rendered. First, we create a face geometry using the <code>sceneView</code>&#x2019;s Metal device and set it to the constant <code>faceMesh</code>. Then, we assign this geometry to an <code>SCNNode</code>. Finally, we set the material of the <code>node</code>. For most 3D objects, the material is usually the color or texture of the object.</p>



<p>For the face mesh, you can use one of two materials: a fill material or a lines material. I prefer the lines, which is why I set <code>fillMode = .lines</code>, but you can use whichever you prefer. Your code should look like this now.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11-1024x607.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16998" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-11.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>If we run the app, you should see something like this.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="743" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-1024x743.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-16999" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-1024x743.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-200x145.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-414x300.png 414w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-768x557.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-1680x1219.png 1680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-1240x900.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-860x624.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-680x493.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-400x290.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-12-50x36.png 50w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<h2 class="wp-block-heading">Updating the Face Mesh</h2>



<p>You may notice that the mesh does not update when you change your facial features (blinking, smiling, yawning, etc.). This is because we need to add the <code>renderer(_:didUpdate:for:)</code> method under the <code>renderer(_:nodeFor:)</code> method.</p>



<pre class="wp-block-code"><code>func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let faceAnchor = anchor as? ARFaceAnchor, let faceGeometry = node.geometry as? ARSCNFaceGeometry {
        faceGeometry.update(from: faceAnchor.geometry)
    }
}</code></pre>



<p>This code runs every time the <code>sceneView</code> updates. First, we define <code>faceAnchor</code> as the anchor for the face detected in the <code>sceneView</code>. The anchor holds information about the pose, topology, and expression of a face detected in the face-tracking AR session. We also define the constant <code>faceGeometry</code>, which is a topology of the detected face. Using these two constants, we update the <code>faceGeometry</code> on every frame.</p>



<p>Run the code again. Now, you&#x2019;ll see the mesh updating every time you change your facial features, all running at 60 fps.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="694" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-1024x694.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-17000" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-1024x694.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-200x136.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-442x300.png 442w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-768x521.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-1680x1139.png 1680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-1240x841.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-860x583.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-680x461.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-400x271.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-14-50x34.png 50w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<h2 class="wp-block-heading">Analyzing the Features</h2>



<p>First, let&#x2019;s create a variable at the top of the file.</p>



<pre class="wp-block-code"><code>var analysis = &quot;&quot;</code></pre>



<p>Next, type the following function at the end of the file.</p>



<pre class="wp-block-code"><code>func expression(anchor: ARFaceAnchor) {
    // 1
    let smileLeft = anchor.blendShapes[.mouthSmileLeft]
    let smileRight = anchor.blendShapes[.mouthSmileRight]
    let cheekPuff = anchor.blendShapes[.cheekPuff]
    let tongue = anchor.blendShapes[.tongueOut]
    self.analysis = &quot;&quot;

    // 2    
    if ((smileLeft?.decimalValue ?? 0.0) + (smileRight?.decimalValue ?? 0.0)) &gt; 0.9 {
        self.analysis += &quot;You are smiling. &quot;
    }

    if cheekPuff?.decimalValue ?? 0.0 &gt; 0.1 {
        self.analysis += &quot;Your cheeks are puffed. &quot;
    }

    if tongue?.decimalValue ?? 0.0 &gt; 0.1 {
        self.analysis += &quot;Don&apos;t stick your tongue out! &quot;
    }
}</code></pre>



<p>The above function takes an <code>ARFaceAnchor</code> as a parameter. </p>



<ol><li>The <code>blendShapes</code> are a dictionary of named coefficients representing the detected facial expression in terms of the movement of specific facial features. Apple provides over 50 coefficients that track various facial features. For our purposes, we&#x2019;re using only four: <code>mouthSmileLeft</code>, <code>mouthSmileRight</code>, <code>cheekPuff</code>, and <code>tongueOut</code>.</li><li>We take the coefficients and check the probability of the face performing these facial movements. For detecting a smile, we add the probabilities of both the right and left sides of the mouth. I found that thresholds of 0.9 for the smile and 0.1 for the cheeks and tongue work best.</li></ol>



<p>Whenever a coefficient crosses its threshold, we append the corresponding text to the <code>analysis</code> string.</p>
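<p>For experimentation, the same thresholding logic can be pulled out into a plain function that takes the raw coefficient values. This is just a sketch (the function name and parameters are our own, not part of the project), but it makes the cutoffs easy to tweak and test in a playground:</p>

<pre class="wp-block-code"><code>// Hypothetical helper mirroring the thresholds used above
func expressionAnalysis(smileLeft: Double, smileRight: Double,
                        cheekPuff: Double, tongueOut: Double) -&gt; String {
    var analysis = &quot;&quot;
    // A smile registers when both mouth corners together exceed 0.9
    if smileLeft + smileRight &gt; 0.9 {
        analysis += &quot;You are smiling. &quot;
    }
    // Cheek puff and tongue use the lower 0.1 cutoff
    if cheekPuff &gt; 0.1 {
        analysis += &quot;Your cheeks are puffed. &quot;
    }
    if tongueOut &gt; 0.1 {
        analysis += &quot;Don&apos;t stick your tongue out! &quot;
    }
    return analysis
}</code></pre>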



<p>Now that we have our function created, let&#x2019;s update our <code>renderer(_didUpdate:)</code> method.</p>



<pre class="wp-block-code"><code>func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let faceAnchor = anchor as? ARFaceAnchor, let faceGeometry = node.geometry as? ARSCNFaceGeometry {
        faceGeometry.update(from: faceAnchor.geometry)
        expression(anchor: faceAnchor)
        
        DispatchQueue.main.async {
            self.faceLabel.text = self.analysis
        }
        
    }
}</code></pre>



<p>We run the <code>expression</code> method every time the <code>sceneView</code> updates. Since that function sets the <code>analysis</code> string, all that remains is assigning it to the <code>faceLabel</code>. Note that this renderer callback runs on a background thread, which is why we dispatch the label update to the main queue.</p>



<p>Now, we&#x2019;re all done coding! Run the code and you should get the same result as we saw in the beginning.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="743" src="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-1024x743.png" alt="How to Detect and Track the User&apos;s Face Using ARKit" class="wp-image-17001" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-1024x743.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-200x145.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-414x300.png 414w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-768x557.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-1680x1219.png 1680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-1240x900.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-860x624.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-680x493.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-400x290.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/facial-recognition-17-50x36.png 50w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<h2 class="wp-block-heading">Conclusion</h2>



<p>There is a lot of potential behind developing face-based experiences using ARKit. Games and apps can utilize the True Depth camera for a variety of purposes. One of my favorite apps is <a href="https://www.usehawkeye.com/?ref=appcoda.com">Hawkeye Access</a>, a browser you can control using your eyes.</p>



<p>For more information on the True Depth camera, you can check out Apple&#x2019;s video <a href="https://developer.apple.com/videos/play/tech-talks/601/?ref=appcoda.com">Face Tracking with ARKit</a>. You can download the final project <a href="https://github.com/appcoda/Face-Mesh?ref=appcoda.com">here</a>.</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[What's New in Natural Language APIs in iOS]]></title><description><![CDATA[<!--kg-card-begin: html-->

<p>Last year, Apple debuted a new framework called <code>NaturalLanguage</code>. This was the next step in making text-based machine learning much more accessible to developers all around. This year, the progress has not stopped as Apple has continued to make advancements in this framework.</p>



<p>In this tutorial, we&#x2019;ll see</p>]]></description><link>https://www.appcoda.com/natural-language-apis-ios-13/</link><guid isPermaLink="false">66612a0f166d3c03cf0114a3</guid><category><![CDATA[AI]]></category><category><![CDATA[iOS Programming]]></category><category><![CDATA[Swift]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Tue, 09 Jul 2019 17:06:23 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2019/07/iusj25iyu1c.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->

<img src="https://www.appcoda.com/content/images/wordpress/2019/07/iusj25iyu1c.jpg" alt="What&apos;s New in Natural Language APIs in iOS"><p>Last year, Apple debuted a new framework called <code>NaturalLanguage</code>. This was the next step in making text-based machine learning much more accessible to developers all around. This year, the progress has not stopped as Apple has continued to make advancements in this framework.</p>



<p>In this tutorial, we&#x2019;ll look at some of the new APIs in this framework, as well as what else is possible with the use of <code>NaturalLanguage</code>.</p>



<p>You&#x2019;ll need Xcode 11 and Swift 5.1 in order to follow the tutorial and build the sample app. At the time of writing, this software is still in beta, so some functionality may change after the official release.</p>



<p>I have enabled the <code>Mac</code> device in order to test <a href="https://techcrunch.com/2019/06/03/ios-apps-will-run-on-macos-with-project-catalyst/?ref=appcoda.com">Project Catalyst</a> while working on this sample app, so you&#x2019;ll find that some of the screenshots below are from the Mac app.</p>



<p><div class="alert "><strong>Note:</strong> To recap your knowledge of Natural Language Processing, it&#x2019;s recommended you take a look at <a href="https://www.appcoda.com/natural-language-processing-swift/">this introductory tutorial</a>.</div></p>



<h2 class="wp-block-heading">What&#x2019;s Natural Language Processing?</h2>



<p>What exactly is Natural Language Processing (NLP)? Simply put, this framework gives apps the ability to analyze natural language text and understand parts of it. This framework can perform a variety of tasks on text by assigning <strong>tag schemes</strong> to the text. </p>



<p>So, what are tag schemes? Well, basically tag schemes are the constants used to identify the pieces of information we want from the text. You can think of them as a set of tasks we ask a <strong>tagger</strong> to apply to the text. Some of the most common tag schemes we ask the tagger to look for are the language, name type, lemma, etc.</p>
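<p>To make this concrete, here is a small example of asking an <code>NLTagger</code> for the lexical class (part of speech) of each word in a sentence. This is a sketch using the existing framework API; the sample sentence is our own:</p>

<pre class="wp-block-code"><code>import NaturalLanguage

let text = &quot;Learn by doing&quot;
let tagger = NLTagger(tagSchemes: [.lexicalClass])
tagger.string = text

tagger.enumerateTags(in: text.startIndex..&lt;text.endIndex,
                     unit: .word,
                     scheme: .lexicalClass,
                     options: [.omitWhitespace]) { tag, range in
    if let tag = tag {
        // e.g. &quot;Learn: Verb&quot;
        print(&quot;\(text[range]): \(tag.rawValue)&quot;)
    }
    return true
}</code></pre>

<p>Note that this requires an Apple platform; the tagger loads its models from the operating system.</p>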



<p>All NLP tasks can be split into 2 sections: <em>Text Classification</em> and <em>Word Tagging</em>. At WWDC 2019, Apple announced improvements in both these sections of <code>NaturalLanguage</code>. We&#x2019;ll go through them one by one to see what&#x2019;s new!</p>



<h2 class="wp-block-heading">The Starter Project</h2>



<p>Before you continue reading the tutorial, please first <a href="https://github.com/appcoda/NLP-Sentiment-Analysis/raw/master/starter.zip?ref=appcoda.com">download the starter project</a>.</p>



<p>In our starter project, you can see that we have an app called <code>Text+</code>. This is a tabbed application that is used to separate the new APIs we&#x2019;ll work with. By the end of the project, we should have an app that runs Sentiment Analysis in one tab and Word Embedding in the other. I&#x2019;ll explain what these are in more depth later on. Here&#x2019;s what you&#x2019;ll be building.</p>



<ul class="wp-block-gallery columns-2 is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex"><li class="blocks-gallery-item"><figure><img loading="lazy" decoding="async" width="901" height="704" src="https://www.appcoda.com/content/images/wordpress/2019/07/Image-1.png" alt="What&apos;s New in Natural Language APIs in iOS" data-id="16892" data-link="https://www.appcoda.com/?attachment_id=16892" class="wp-image-16892" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/Image-1.png 901w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-1-200x156.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-1-384x300.png 384w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-1-768x600.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-1-860x672.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-1-680x531.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-1-400x313.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-1-50x39.png 50w" sizes="(max-width: 901px) 100vw, 901px"></figure></li><li class="blocks-gallery-item"><figure><img loading="lazy" decoding="async" width="901" height="704" src="https://www.appcoda.com/content/images/wordpress/2019/07/Image-12.png" alt="What&apos;s New in Natural Language APIs in iOS" data-id="16893" data-link="https://www.appcoda.com/?attachment_id=16893" class="wp-image-16893" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/Image-12.png 901w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-12-200x156.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-12-384x300.png 384w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-12-768x600.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-12-860x672.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-12-680x531.png 680w, 
https://www.appcoda.com/content/images/wordpress/2019/07/Image-12-400x313.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-12-50x39.png 50w" sizes="(max-width: 901px) 100vw, 901px"></figure></li></ul>



<h2 class="wp-block-heading">Text Classification</h2>



<p>Text Classification is the process of assigning a label to a group of text, such as a sentence, a paragraph, or even a whole document. These labels can be anything you choose: a topic label, a sentiment label, or any label that helps you classify it.</p>



<p>New in <code>NaturalLanguage</code> this year is the built-in API for <strong>Sentiment Analysis</strong>. Sentiment Analysis is the task of classifying a block of text by its mood. From a score of -1.0 to 1.0, we can determine how positive or negative a group of text is.</p>



<p>In the demo application, choose the <code>SentimentViewController.swift</code> file, and you should see that we have a function called <code>analyzeText()</code>. This function will be called when the <code>analyzeButton</code> is tapped.</p>



<p>The user is free to type any message in the text field. What we want to do is perform sentiment analysis on that message and change the color of the button accordingly:</p>



<ol><li><em>green</em> if the message is <em>positive</em>,</li><li><em>red</em> if the message is <em>negative</em>,</li><li>Or <em>blue</em> if the message is <em>neutral</em>. </li></ol>



<p>Let&#x2019;s start to implement the change. Underneath where we declare our <code>IBOutlets</code>, let&#x2019;s declare our tagger.</p>



<pre class="wp-block-code"><code>import UIKit
import NaturalLanguage

class SentimentViewController: UIViewController {
    @IBOutlet var analyzeButton: UIButton!
    @IBOutlet var messageTextField: UITextField!
    let tagger = NLTagger(tagSchemes: [.sentimentScore])

    ...
}</code></pre>



<p>Our tagger is an <code>NLTagger</code> where we ask it to observe the scheme of <code>sentimentScore</code>. This means that when we assign this tagger to our text, it will run the process of looking for a sentiment score.</p>



<p>Our next step is to add the logic to our <code>analyzeText()</code> function. Here&#x2019;s how you do it:</p>



<pre class="wp-block-code"><code>@IBAction func analyzeText() {
    tagger.string = messageTextField.text
    let (sentiment, _) = tagger.tag(at: messageTextField.text!.startIndex, unit: .paragraph, scheme: .sentimentScore)
    print(sentiment!.rawValue)
}</code></pre>



<p>If you&#x2019;ve worked with <code>NaturalLanguage</code> before, this small block of code should be self-explanatory. If not, here&#x2019;s what it means.</p>



<p>First, we assign the text in the <code>messageTextField</code> to our tagger by setting it in the <em>string</em> property. Next, we call the <code>tag</code> method of the tagger. This method will find a tag based on the given text and the specified scheme. Here the scheme we used is <code>.sentimentScore</code>. This scheme will score the text as positive, negative, or neutral based on its sentiment polarity.</p>



<p>The <code>tag</code> method also requires you to specify the linguistic unit. Since the user may type a paragraph of text, we specify the unit to <code>.paragraph</code>.</p>



<p>The call returns two values: the tag and the range in which the sentiment was detected. Since we don&#x2019;t need the range, we discard it with <code>_</code> in the code above. Finally, we print the value of <code>sentiment</code>.</p>



<p>You can now run the code and see the score printed to the console.</p>



<ul class="wp-block-gallery columns-2 is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex"><li class="blocks-gallery-item"><figure><img loading="lazy" decoding="async" width="901" height="704" src="https://www.appcoda.com/content/images/wordpress/2019/07/Image-4.png" alt="What&apos;s New in Natural Language APIs in iOS" data-id="16898" data-link="https://www.appcoda.com/?attachment_id=16898" class="wp-image-16898" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/Image-4.png 901w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-4-200x156.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-4-384x300.png 384w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-4-768x600.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-4-860x672.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-4-680x531.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-4-400x313.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-4-50x39.png 50w" sizes="(max-width: 901px) 100vw, 901px"></figure></li><li class="blocks-gallery-item"><figure><img loading="lazy" decoding="async" width="1024" height="610" src="https://www.appcoda.com/content/images/wordpress/2019/07/Image-5-1024x610.png" alt="What&apos;s New in Natural Language APIs in iOS" data-id="16899" data-link="https://www.appcoda.com/?attachment_id=16899" class="wp-image-16899" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/Image-5-1024x610.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-5-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-5-504x300.png 504w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-5-768x457.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-5-1240x738.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-5-860x512.png 
860w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-5-680x405.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-5-400x238.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-5-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-5.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure></li></ul>



<p>You can see that for my sentence, &#x201C;I am feeling very happy that I was able to code this!&#x201D;, I got a score of 0.6 which is mostly positive!</p>



<p>We&#x2019;re done with the NLP part! Next, I challenge you to change the background color of <code>analyzeButton</code> based on the score we get. If the score is greater than 0, the message is positive, so we&#x2019;d like the background color to be green. If the score is less than 0, the message is negative, so we&#x2019;d like the background color to be red. Finally, if the score is exactly 0, the message is neutral, so we&#x2019;d like the background color to be blue.</p>



<p>Were you able to do it? If not, no worries. Here&#x2019;s how it can be done. Add the following lines of code in our <code>analyzeText()</code> function.</p>



<pre class="wp-block-code"><code>let score = Double(sentiment!.rawValue)!
if score &lt; 0 {
    self.analyzeButton.backgroundColor = .systemRed
} else if score &gt; 0 {
    self.analyzeButton.backgroundColor = .systemGreen
} else {
    self.analyzeButton.backgroundColor = .systemBlue
}</code></pre>



<p>This code is pretty straightforward. We convert the raw value of our <code>sentiment</code> constant to a <code>Double</code>. Then, using an if-else statement, we change the background color of the button.</p>



<p>If possible, try to use <code>.system</code> colors in your apps from now on, because they&#x2019;re universal across Apple&#x2019;s operating systems and automatically adjust for features like Dark Mode and Accessibility without you having to do anything.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="901" height="704" src="https://www.appcoda.com/content/images/wordpress/2019/07/Image-8.png" alt="What&apos;s New in Natural Language APIs in iOS" class="wp-image-16901" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/Image-8.png 901w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-8-200x156.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-8-384x300.png 384w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-8-768x600.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-8-860x672.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-8-680x531.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-8-400x313.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-8-50x39.png 50w" sizes="(max-width: 901px) 100vw, 901px"></figure>



<p>Congratulations! With just a few lines of code, you were able to add a full Sentiment Analysis feature to your app. Previously, you would have had to rely on a <code>CoreML</code> model, but with <code>NaturalLanguage</code>, this just became much easier! Next, let&#x2019;s see what&#x2019;s new with Word Tagging.</p>



<h2 class="wp-block-heading">Word Tagging</h2>



<p>Word Tagging is slightly different from Text Classification. In this process, given a sequence of words (or <strong>tokens</strong>, as they are commonly called in NLP), we want to assign a label to every single token. This could mean assigning each token its part of speech, performing named entity recognition, or applying any other tagging system that labels each token.</p>



<p>One important NLP task which falls under Word Tagging is <strong>Word Embedding</strong>. Word embedding is a mapping of objects into a vector representation, where a vector is nothing more than a sequence of numbers. What does this mean, and why is it important? By mapping objects (or tokens, in our case) to vectors, we get a way to <strong>quantitatively organize</strong> a group of objects: when you plot these vectors, similar objects are clustered together. This helps when we want to build something that can surface objects similar to a given one. Apart from words, embeddings can also be used for images, phrases, and more!</p>
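<p>The &#x201C;distance&#x201D; between two of these vectors is typically measured as cosine distance, where 0 means the vectors point in the same direction. As a rough illustration with made-up three-dimensional vectors (real word embeddings have far more dimensions), here is how that distance is computed:</p>

<pre class="wp-block-code"><code>// Cosine distance: 1 - dot(a, b) / (|a| * |b|)
func cosineDistance(_ a: [Double], _ b: [Double]) -&gt; Double {
    let dot = zip(a, b).map { $0 * $1 }.reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return 1 - dot / (magA * magB)
}

// Hypothetical embeddings: similar words get nearby vectors
let cat = [0.9, 0.1, 0.3]
let kitten = [0.8, 0.2, 0.3]
let car = [0.1, 0.9, 0.8]

print(cosineDistance(cat, kitten)) // small: the words are similar
print(cosineDistance(cat, car))    // larger: the words are unrelated</code></pre>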



<p>In the second part of our app, we&#x2019;ll be building a suggestion system for someone going shopping with the help of word embedding. Here&#x2019;s how we can achieve it. Looking at our storyboard, you can see that we have a text field for entering the item we&#x2019;re looking for, a button to suggest items for us, and a label which will show us those suggestions. Since all of our UI is hooked up, all we need to do is add the following lines of code to our <code>suggest()</code> function.</p>



<pre class="wp-block-code"><code>@IBAction func suggest(_ sender: Any) {
    //1
    suggestionLabel.text = &quot;You may be interested in:\n&quot;
    let embedding = NLEmbedding.wordEmbedding(for: .english)

    //2
    embedding?.enumerateNeighbors(for: itemTextField.text!.lowercased(), maximumCount: 5) { (string, distance) -&gt; Bool in
        //3
        print(&quot;\(string) - \(distance)&quot;)
        suggestionLabel.text! += (string.capitalized + &quot;\n&quot;)
        return true
    }
}</code></pre>



<p>Let me explain the code above line by line:</p>



<ol><li>First, we reset the <code>suggestionLabel</code> so the previous suggestions won&#x2019;t be there. You use an&#xA0;<code>NLEmbedding</code>&#xA0;to find similar strings. The framework provides some built-in word embeddings that we can use on the fly. Here, by calling its <code>wordEmbedding</code> method with our preferred language, we get back an&#xA0;<code>NLEmbedding</code> for finding similar words in English.</li><li>With the <code>NLEmbedding</code>, we call the <code>enumerateNeighbors</code> method to find similar words for the input text. I have configured it to return only the 5 closest neighbors, but you can change that number.</li><li>Finally, we print each neighboring string and its distance from our input string to the console. This distance indicates the similarity of the two words: the smaller the distance, the higher the similarity.</li></ol>



<p>Build and run the project! You should see everything work flawlessly!</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="901" height="704" src="https://www.appcoda.com/content/images/wordpress/2019/07/Image-11.png" alt="What&apos;s New in Natural Language APIs in iOS" class="wp-image-16902" srcset="https://www.appcoda.com/content/images/wordpress/2019/07/Image-11.png 901w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-11-200x156.png 200w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-11-384x300.png 384w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-11-768x600.png 768w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-11-860x672.png 860w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-11-680x531.png 680w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-11-400x313.png 400w, https://www.appcoda.com/content/images/wordpress/2019/07/Image-11-50x39.png 50w" sizes="(max-width: 901px) 100vw, 901px"></figure>



<p>Now, you may notice that this isn&#x2019;t the ideal experience, since word embeddings only surface words that are close in meaning, not necessarily items a shopper would consider similar. For example, if I ask a word embedding for the closest words to &#x201C;Meat&#x201D;, one of them would be &#x201C;Vegetarian&#x201D;. While this is a related word, it definitely is not something a shopper would be looking for. For our app, a <strong>Recommender System</strong> would be more apt. This also requires machine learning and will be covered in an upcoming tutorial on <code>CreateML</code>.</p>



<p>Where else can word embedding be used? In the Photos application, when you search for &#x201C;sea&#x201D;, it also shows you pictures of the beach, surfboard, and other sea-related pictures. This is possible through word embedding. </p>



<p>Another application of word embedding is something called <strong>Fuzzy Searching</strong>. Has it ever happened to you that you searched for something, either a song or a book, misspelled a few letters, but still got the right result? This is because Fuzzy Searching uses word embedding to calculate the distance between your incorrect input and the correct entries in the database, which allows it to surface the right phrase.</p>
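<p>With <code>NLEmbedding</code>, you can also query the distance between two specific words directly, which is the building block for this kind of matching. Here is a sketch (the word list and helper function are made up for illustration; real fuzzy search engines combine this with other signals, such as spelling distance):</p>

<pre class="wp-block-code"><code>import NaturalLanguage

// Returns the catalog entry whose embedding is closest to the query
func closestMatch(to query: String, in catalog: [String]) -&gt; String? {
    guard let embedding = NLEmbedding.wordEmbedding(for: .english) else {
        return nil
    }
    return catalog.min { lhs, rhs in
        embedding.distance(between: query, and: lhs) &lt;
            embedding.distance(between: query, and: rhs)
    }
}

let match = closestMatch(to: &quot;ocean&quot;, in: [&quot;sea&quot;, &quot;laptop&quot;, &quot;guitar&quot;])</code></pre>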



<h2 class="wp-block-heading">What else is new</h2>



<p>Sentiment Analysis and Word Embedding were the top two APIs in this year&#x2019;s update of <code>NaturalLanguage</code>. On top of that, two more features were released as well. While going into them in depth would make the tutorial too long, I&#x2019;ll quickly cover them here.</p>



<h3 class="wp-block-heading">Custom Word Embeddings</h3>



<p>Earlier, we used a general word embedding model that is built into all of Apple&#x2019;s operating systems. However, sometimes we may want a word embedding model that is based specifically on one domain, such as financial or medical vocabulary. Apple has thought of that, and <code>NaturalLanguage</code> also supports the use of custom word embeddings.</p>



<p>When we create recommender systems in Create ML, under the hood it builds a custom word embedding using the data provided. However, if you&#x2019;d like control over the algorithm, Apple allows that as well. You can easily build your own word embedding using <code>CreateML</code>:</p>



<pre class="wp-block-code"><code>import CreateML

let vectors = [&quot;Object A&quot;: [0.112, 0.324, -2.24, ...],
                &quot;Object B&quot;: [0.112, 0.324, -2.24, ...],
                &quot;Object C&quot;: [0.112, 0.324, -2.24, ...],
                  ...
              ]
let embedding = try MLWordEmbedding(dictionary: vectors)
try embedding.write(to: url)</code></pre>



<p>You can obtain the vectors for these words through a custom neural network built with either TensorFlow or PyTorch. This is beyond the scope of the tutorial, but if you&#x2019;re interested, I highly suggest reading up on it, because this is an active research area in NLP and can be quite interesting.</p>



<h3 class="wp-block-heading">Text Catalog</h3>



<p>We briefly mentioned how you can create your own word tagger with <code>MLWordTagger</code>. However, it would take a lot of effort to create the word tagging model.</p>



<p>New this year is the addition of <code>MLGazetteer</code>. A <strong>gazetteer</strong> is a text catalog, or a dictionary, filled with names and labels. <code>CreateML</code> then takes this gazetteer and transforms it into a tagger that can be used in your app. For example, if you wanted a tagger that tags electronic devices by type, you would use a file that looks like this:</p>



<pre class="wp-block-code"><code>[&quot;smartphone&quot;: [&quot;iPhone XS&quot;, &quot;Samsung S10&quot;, &quot;Google Pixel 3a&quot;, ...],
&quot;laptop&quot;: [&quot;MacBook Pro&quot;, &quot;Surface Laptop&quot;, &quot;Chromebook&quot;, ...],
&quot;smartwatch&quot;: [&quot;Apple Watch&quot;, &quot;Samsung Galaxy Watch&quot;, &quot;Fitbit Versa&quot;, ...],
...
]</code></pre>



<p>Ideally, in this JSON file, you would have thousands of entities. Then, you&#x2019;d use <code>CreateML</code> to create your gazetteer like this:</p>



<pre class="wp-block-code"><code>import CreateML

let entities = [&quot;smartphone&quot;: [&quot;iPhone XS&quot;, &quot;Samsung S10&quot;, &quot;Google Pixel 3a&quot;, ...],
&quot;laptop&quot;: [&quot;MacBook Pro&quot;, &quot;Surface Laptop&quot;, &quot;Chromebook&quot;, ...],
&quot;smartwatch&quot;: [&quot;Apple Watch&quot;, &quot;Samsung Galaxy Watch&quot;, &quot;Fitbit Versa&quot;, ...],
...
]
let gazetteer = try MLGazetteer(dictionary: entities)
try gazetteer.write(to: url)</code></pre>



<p>Then you would use it with the <code>NaturalLanguage</code> framework as such:</p>



<pre class="wp-block-code"><code>import NaturalLanguage

let gazetteer = try! NLGazetteer(contentsOf: url)
let tagger = NLTagger(tagSchemes: [.nameTypeOrLexicalClass])
tagger.setGazetteers([gazetteer], for: .nameTypeOrLexicalClass)</code></pre>



<p>If you&#x2019;d like to see a second part of this tutorial with the above features explained more in depth, leave a comment below!</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>As you can see, the field of Natural Language Processing is quickly expanding, and Apple is doing its best to make these high-level technologies much more accessible to developers. I suggest you take a look at some of the resources below, such as Apple&#x2019;s documentation and the WWDC 2019 session on this framework.</p>



<ul><li><a href="https://developer.apple.com/documentation/naturallanguage/?ref=appcoda.com">Natural Language Documentation</a></li><li><a href="https://developer.apple.com/wwdc19/232?ref=appcoda.com">WWDC 2019 &#x2013; Advancements in Natural Language Framework</a></li></ul>



<p>With powerful tools like Sentiment Analysis and Word Embedding, you can create a whole range of apps in all domains that can leverage the power of machine learning. If you have any questions, feel free to ask in the comments below. </p>



<p>For reference, you can download the completed project <a href="https://github.com/appcoda/NLP-Sentiment-Analysis?ref=appcoda.com">here</a>.</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[SwiftUI First Look: Building a Simple Table View App]]></title><description><![CDATA[<!--kg-card-begin: html-->

<p>WWDC 2019 was one of the more exciting keynotes in terms of advancements in developer tools. One of the biggest and the best announcements was the release of <strong>SwiftUI</strong>. <code>SwiftUI</code> is a brand new framework that allows you to design and develop user interfaces with way less code and in</p>]]></description><link>https://www.appcoda.com/swiftui-first-look/</link><guid isPermaLink="false">66612a0f166d3c03cf01149f</guid><category><![CDATA[SwiftUI]]></category><category><![CDATA[iOS Programming]]></category><category><![CDATA[Swift]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Fri, 07 Jun 2019 22:09:02 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2019/06/swiftui-demo-featured.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->

<img src="https://www.appcoda.com/content/images/wordpress/2019/06/swiftui-demo-featured.jpg" alt="SwiftUI First Look: Building a Simple Table View App"><p>WWDC 2019 was one of the more exciting keynotes in terms of advancements in developer tools. One of the biggest and best announcements was the release of <strong>SwiftUI</strong>. <code>SwiftUI</code> is a brand new framework that allows you to design and develop user interfaces in a declarative way, with far less code.</p>



<p>Unlike <code>UIKit</code>, which was commonly used in conjunction with <a href="https://www.appcoda.com/uitableview-tutorial-storyboard-xcode5/">storyboards</a>, <code>SwiftUI</code> is completely based on code. However, the syntax is very easy to understand, and a view can quickly be previewed with Automatic Preview.</p>



<p>Since <code>SwiftUI</code> is built with Swift, it allows you to create apps of the same complexity with much less code. What&#x2019;s more, using <code>SwiftUI</code> automatically enables your app to take advantage of features like Dynamic Type, <a href="https://www.appcoda.com/dark-mode-preview/">Dark Mode</a>, Localization, and Accessibility. Furthermore, <code>SwiftUI</code> is available on all platforms, including macOS, iOS, iPadOS, watchOS, and tvOS, so your UI code can be shared across all platforms, giving you more time to focus on the small amount of platform-specific code.</p>



<p><em><strong>Editor&#x2019;s note</strong>: This tutorial has been updated for Xcode 11.4 and Swift 5.2. If you want to dive deeper into SwiftUI, you can check out our <a href="https://www.appcoda.com/swiftui">Mastering SwiftUI</a> book. </em></p>



<h2 class="wp-block-heading">About this tutorial</h2>



<p>It&#x2019;s important that developers learn early how to use <code>SwiftUI</code> because Apple will eventually migrate much of their focus to this framework. In this tutorial, we&#x2019;ll look at the basics of <code>SwiftUI</code> and explore how to create navigation views, images, texts, and lists by building a simple contact list that shows all our tutorial team members. When a member is clicked, the app proceeds to the detail view displaying their picture along with a short bio about them. Let&#x2019;s get started!</p>



<p>This tutorial requires Xcode 11 (or up), and we will be using Swift 5. While advanced knowledge of Swift is not necessary to follow the tutorial, it&#x2019;s recommended that you understand the basics.</p>



<h2 class="wp-block-heading">Setting Up Your Project with SwiftUI</h2>



<p>Let&#x2019;s start from scratch so you can see how to get a <code>SwiftUI</code> app up and running immediately. First, open Xcode and click <strong>Create a new Xcode project</strong>. Under <strong>iOS</strong>, choose <em>Single View App</em>. Name your app and fill out the text fields, but make sure that the <strong><em>Use SwiftUI</em></strong> option at the bottom is checked. If you do not check this option, Xcode will generate a storyboard file instead.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="725" height="513" src="https://www.appcoda.com/content/images/wordpress/2019/06/image.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16634" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image.png 725w, https://www.appcoda.com/content/images/wordpress/2019/06/image-200x142.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-424x300.png 424w, https://www.appcoda.com/content/images/wordpress/2019/06/image-680x481.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-400x283.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-50x35.png 50w" sizes="(max-width: 725px) 100vw, 725px"></figure>



<p>Now Xcode will automatically create a file for you called <code>ContentView.swift</code>, and the amazing part is that it shows a live preview of your code on the right-hand side, as shown below.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-1-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16635" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-1-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-1-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-1-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-1-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-1-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-1-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-1-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-1-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-1-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-1.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>If you don&#x2019;t see the preview, you will need to hit the Resume button in the preview canvas. It&#x2019;ll take some time to build the project. Just be patient.</p>



<p>Now let&#x2019;s start seeing how we can modify these files to create our app.</p>



<h2 class="wp-block-heading">Building the List View</h2>



<p>Building the list view involves three parts. The first is creating the rows of the list; you may recognize the design as similar to a <code>UITableView</code>. To do this, we create a <code>ContactRow</code>. The second is connecting our data to the list. The data is already coded, and it takes just a few modifications to hook the list up to it. The final part is adding a navigation bar and embedding our list in a navigation view. Let&#x2019;s see how to implement all of this in <code>SwiftUI</code>.</p>



<h3 class="wp-block-heading">Creating the Tutor List</h3>



<p>First, let&#x2019;s create the list view for displaying a list of all the tutorial team members including their profile photos and description. Let&#x2019;s see how this can be done.</p>



<p>As you can see in the generated code, we already have a <code>Text</code> component with its value set to &#x201C;Hello World&#x201D;. In the code editor, change the text to &#x201C;Simon Ng&#x201D;. </p>



<pre class="wp-block-code"><code>struct ContentView: View {
    var body: some View {
        Text(&quot;Simon Ng&quot;)
    }
}</code></pre>



<p>If all works, you should see your canvas on the right update automatically. That&#x2019;s the power of instant preview that we&#x2019;ve all anticipated for so long.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-2-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16637" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-2-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-2-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-2-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-2-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-2-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-2-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-2-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-2-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-2-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-2.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Let&#x2019;s add another <code>Text</code> view to our app. This will be the short description of the member. To add a new UI element to our app, press the <code>+</code> button in the top right corner. A new window will appear with a list of different views. Drag the view titled <code>Text</code> and drop it underneath our initial <code>Text</code> view as shown below.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-3-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16638" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-3-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-3-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-3-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-3-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-3-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-3-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-3-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-3-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-3-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-3.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Notice the code on the left:</p>



<pre class="wp-block-code"><code>struct ContentView: View {
    var body: some View {
        VStack {
            Text(&quot;Simon Ng&quot;)
            Text(&quot;Placeholder&quot;)
        }
    }
}</code></pre>



<p>You can see that a new <code>Text</code> view was added underneath our <code>Simon Ng</code> text view. What&#x2019;s different is that both are now wrapped in something called a <code>VStack</code>. A <code>VStack</code>, short for vertical stack, is <code>SwiftUI</code>&#x2019;s replacement for Auto Layout. If you&#x2019;ve developed for <code>watchOS</code> before, you know that there are no constraints; instead, everything is placed into groups. With a vertical stack, all of our views are arranged vertically.</p>



<p>Now, change the text of &#x201C;Placeholder&#x201D; to &#x201C;Founder of AppCoda&#x201D;.</p>



<p>Next, let&#x2019;s add an image to the left of this text. Since we want to arrange a view <em>horizontally</em> alongside the existing views, we need to wrap the <code>VStack</code> in an <code>HStack</code>. To do this, <code>CMD+Click</code> on the <code>VStack</code> in your code, then choose <code>Embed in HStack</code>. Take a look at it below:</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-4-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16639" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-4-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-4-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-4-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-4-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-4-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-4-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-4-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-4-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-4-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-4.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Your code should now look like this:</p>



<pre class="wp-block-code"><code>struct ContentView: View {
    var body: some View {
        HStack {
            VStack {
                Text(&quot;Simon Ng&quot;)
                Text(&quot;Founder of AppCoda&quot;)
            }
        }
    }
}</code></pre>



<p>There are no visible changes in the automatic preview yet, but let&#x2019;s add an image now. Change your code to look like this:</p>



<pre class="wp-block-code"><code>struct ContentView: View {
    var body: some View {
        HStack {
            Image(systemName: &quot;photo&quot;)
            VStack {
                Text(&quot;Simon Ng&quot;)
                Text(&quot;Founder of AppCoda&quot;)
            }
        }
    }
}</code></pre>



<p>Starting from iOS 13, Apple introduced a new feature called <strong>SF Symbols</strong>. Designed by Apple, SF Symbols is a set of over 1,500 symbols you can use in your apps. Because they integrate seamlessly with the San Francisco system font, the symbols automatically ensure optical vertical alignment with text for all weights and sizes. Since we don&#x2019;t have the images of our tutors yet, we use a placeholder symbol here. </p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-5-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16640" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-5-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-5-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-5-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-5-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-5-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-5-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-5-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-5-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-5-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-5.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>
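<p>Because SF Symbols behave like text, you can style them with the same font modifiers you would apply to a <code>Text</code> view. Here is a minimal sketch; the symbol names below, such as <code>star.fill</code>, are just illustrative examples:</p>

```swift
import SwiftUI

// SF Symbols scale with the system font, so the same modifiers
// you apply to Text also work on symbol images.
struct SymbolDemo: View {
    var body: some View {
        HStack {
            Image(systemName: "photo")
                .font(.largeTitle)  // symbol matches the large-title text size
            Image(systemName: "star.fill")
                .font(.system(size: 24, weight: .bold))
                .foregroundColor(.gray)
        }
    }
}
```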



<p>Now, let&#x2019;s focus on some minor design issues. Since we want to emulate the look and feel of a <code>UITableRow</code>, let&#x2019;s align the text to the left (i.e. leading). To do that, <code>CMD+Click</code> on the <code>VStack</code> and click on <code>Inspect</code>. Select the left alignment icon as shown below:</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-6-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16641" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-6-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-6-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-6-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-6-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-6-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-6-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-6-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-6-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-6-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-6.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>You&#x2019;ll see the code change to the following. And, you&#x2019;ll also see your live preview change to reflect the new changes.</p>



<pre class="wp-block-code"><code>VStack(alignment: .leading) {
    ...
}</code></pre>



<p>Now since our second text view is a headline, let&#x2019;s change the font to reflect this. Just like before, <code>CMD+Click</code> on the &#x201C;Founder of AppCoda&#x201D; text view in the live preview and select <code>Inspect</code>. Change the font to &#x201C;Subheadline&#x201D; and watch both the live preview and your code change.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-7-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16642" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-7-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-7-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-7-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-7-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-7-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-7-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-7-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-7-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-7-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-7.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Let&#x2019;s change the color too and set it to &#x201C;Gray&#x201D;. Your code should now look like this:</p>



<pre class="wp-block-code"><code>struct ContentView: View {
    var body: some View {
        HStack {
            Image(systemName: &quot;photo&quot;)
            VStack(alignment: .leading) {
                Text(&quot;Simon Ng&quot;)
                Text(&quot;Founder of AppCoda&quot;)
                    .font(.subheadline)
                    .foregroundColor(.gray)
            }
        }
    }
}</code></pre>



<p>Now that we&#x2019;re done designing the sample row, here comes the magic part. Watch how easy it is to create a list. <code>CMD+Click</code> on our <code>HStack</code> and click <code>Embed in List</code>. Voila! Watch how your code automatically changes and the canvas reflects 5 beautiful new rows, each showing Simon Ng as the team member.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-8-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16643" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-8-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-8-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-8-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-8-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-8-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-8-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-8-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-8-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-8-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-8.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Also, be sure to note how the <code>List</code> was created in the code. By replacing the <code>HStack</code> with a <code>List</code> that iterates over its rows, we were able to create a table view. Now think about all the time and lines of code you saved by avoiding <code>UITableViewDataSource</code>, <code>UITableViewDelegate</code>, Auto Layout, a Dark Mode implementation, etc. This alone shows the extent and power of <code>SwiftUI</code>. However, we&#x2019;re far from done. Let&#x2019;s add some real data to our new list.</p>



<h3 class="wp-block-heading">Connecting our data to the list</h3>



<p>The data we need is a list of the tutorial team members and their bios along with a folder of all their images. You can download the files you need <a href="https://github.com/appcoda/SwiftUIDemo/raw/master/Resources.zip?ref=appcoda.com">here</a>. You should find 2 files named <code>Tutor.swift</code> and <code>Tutor.xcassets</code>.</p>



<p>Once downloaded, import both the Swift file and the asset folder into your Xcode project. To import them, simply drag them to the project navigator.</p>



<p>In <code>Tutor.swift</code>, we declare a <code>struct</code> called <code>Tutor</code> and have it conform to the <code>Identifiable</code> protocol. You&#x2019;ll see why this is important later. We also define its variables as <code>id</code>, <code>name</code>, <code>headline</code>, <code>bio</code>, and <code>imageName</code>. Finally, we include some test data that will be used in our app. In <code>Tutor.xcassets</code>, we have images of all our team members.</p>
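<p>For reference, here is a plausible sketch of what <code>Tutor.swift</code> declares, based on the description above. The sample values are illustrative; the downloaded file contains the real bios and image names:</p>

```swift
// A sketch of Tutor.swift based on the description above.
// Conforming to Identifiable lets SwiftUI's List tell rows apart
// by their id without any extra configuration.
struct Tutor: Identifiable {
    var id: Int
    var name: String
    var headline: String
    var bio: String
    var imageName: String
}

// Illustrative sample data in the style of the bundled testData.
let testData = [
    Tutor(id: 0, name: "Simon Ng", headline: "Founder of AppCoda",
          bio: "Founder of AppCoda.", imageName: "Simon Ng"),
    Tutor(id: 1, name: "Sai Kambampati", headline: "iOS Developer",
          bio: "Writes tutorials about iOS development.", imageName: "Sai Kambampati")
]
```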



<p>Go back to <code>ContentView.swift</code> and modify your code to look like this:</p>



<pre class="wp-block-code"><code>struct ContentView: View {
    //1
    var tutors: [Tutor] = []

    var body: some View {
        List(0..&lt;5) { item in
            Image(systemName: &quot;photo&quot;)
            VStack(alignment: .leading) {
                Text(&quot;Simon Ng&quot;)
                Text(&quot;Founder of AppCoda&quot;)
                    .font(.subheadline)
                    .foregroundColor(.gray)
            }
        }
    }
}

#if DEBUG
struct ContentView_Previews : PreviewProvider {
    static var previews: some View {
        //2
        ContentView(tutors: testData)
    }
}
#endif</code></pre>



<p>What we&#x2019;re doing here is quite simple:</p>



<ol><li>We&#x2019;re defining a new variable named <code>tutors</code> which is an empty array of the structure <code>Tutor</code>. </li><li>Since we&#x2019;re defining a new variable to our structure <code>ContentView</code>, we need to change the <code>ContentView_Previews</code> as well to reflect this change. We set the parameter <code>tutors</code> to our <code>testData</code>.</li></ol>



<p>There won&#x2019;t be any change in our live preview because we haven&#x2019;t used our test data. To display the test data, modify your code like this:</p>



<pre class="wp-block-code"><code>struct ContentView: View {
    var tutors: [Tutor] = []

    var body: some View {
        List(tutors) { tutor in
            Image(tutor.imageName)
            VStack(alignment: .leading) {
                Text(tutor.name)
                Text(tutor.headline)
                    .font(.subheadline)
                    .foregroundColor(.gray)
            }
        }
    }
}</code></pre>



<p>Here, we change <code>ContentView</code> to use the <code>tutors</code> array to display the data on screen.</p>



<p>Ta-da! Look at how your live preview changes in the canvas.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-9-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16644" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-9-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-9-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-9-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-9-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-9-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-9-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-9-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-9-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-9-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-9.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>The images are square. We&#x2019;d like them to have a rounded, circular look. Let&#x2019;s see how to make a rounded-corner image. In the top right corner, press the <code>+</code> button and click the second tab. This shows a list of layout modifiers that you can add to the views.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="727" height="542" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-10.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16646" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-10.png 727w, https://www.appcoda.com/content/images/wordpress/2019/06/image-10-200x149.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-10-402x300.png 402w, https://www.appcoda.com/content/images/wordpress/2019/06/image-10-680x507.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-10-400x298.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-10-50x37.png 50w" sizes="(max-width: 727px) 100vw, 727px"></figure>



<p>Search for &#x201C;Corner Radius&#x201D;, drag it over to our live preview and drop it on the image. You should see the code and the live preview change to something like this.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-11-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16647" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-11-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-11-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-11-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-11-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-11-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-11-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-11-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-11-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-11-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-11.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>However, a corner radius of <code>3</code> is a little too small to notice. So, change it to <code>40</code>. This way, you&#x2019;ll achieve a nice and circular profile picture.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-12-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16648" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-12-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-12-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-12-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-12-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-12-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-12-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-12-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-12-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-12-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-12.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>
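<p>For reference, the change amounts to a single modifier on the image; <code>ContentView</code> should now read roughly like this:</p>

```swift
import SwiftUI

// The list after applying the Corner Radius modifier to the image
// and raising the value from 3 to 40.
struct ContentView: View {
    var tutors: [Tutor] = []

    var body: some View {
        List(tutors) { tutor in
            Image(tutor.imageName)
                .cornerRadius(40)   // 40pt radius gives a circular look
            VStack(alignment: .leading) {
                Text(tutor.name)
                Text(tutor.headline)
                    .font(.subheadline)
                    .foregroundColor(.gray)
            }
        }
    }
}
```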



<p>The cell and the list are all done now! Give yourself a pat on the back for accomplishing this. What we need to do next is present a detail view when a user taps on a cell. Let&#x2019;s start building the navigation.</p>



<h3 class="wp-block-heading">Building the Navigation</h3>



<p>A navigation view wraps the view we already have in a navigation bar and navigation controller. Assuming you&#x2019;ve used Storyboard before, you should know that it&#x2019;s pretty easy to embed a view in a navigation interface. All you need to do is just a few clicks.</p>



<p>For SwiftUI, wrapping the <code>List</code> view in a <code>NavigationView</code> is also very easy. All you need to do is to change your code like this:</p>



<pre class="wp-block-code"><code>...
var body : some View {
    NavigationView {
        List(tutors) { tutor in 
            ...
        }
    }
}
...</code></pre>



<p>You just need to wrap the <code>List</code> code in a <code>NavigationView</code>. By default, the navigation bar doesn&#x2019;t have a title, so your preview shows the list pushed down, leaving a large gap at the top. This is because we haven&#x2019;t set the navigation bar&#x2019;s title. To fix that, set the title by adding the <code>.navigationBarTitle</code> modifier:</p>



<pre class="wp-block-code"><code>...
var body : some View {
    NavigationView {
        List(tutors) { tutor in 
            ...
        }
        .navigationBarTitle(Text(&quot;Tutors&quot;))
    }
}
...</code></pre>



<p>Your screen should now look similar to this:</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-13-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16649" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-13-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-13-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-13-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-13-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-13-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-13-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-13-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-13-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-13-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-13.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Next, let&#x2019;s set up the navigation link. A <code>NavigationLink</code> presents a <code>destination</code> which is a view that presents on the navigation stack. Just like how we wrapped the <code>List</code> in a <code>NavigationView</code>, we need to wrap the content of the <code>List</code> with a <code>NavigationLink</code> as shown below:</p>



<pre class="wp-block-code"><code>...
var body : some View {
    NavigationView {
        List(tutors) { tutor in 
            NavigationLink(destination: Text(tutor.name)) {
                Image(tutor.imageName)
                VStack(alignment: .leading) {
                    Text(tutor.name)
                    Text(tutor.headline)
                        .font(.subheadline)
                        .foregroundColor(.gray)
                }
            }
        }
        .navigationBarTitle(Text(&quot;Tutors&quot;))
    }
}
...</code></pre>



<p>For now, we simply show the name of the team member in the detail view. Time to test it out.</p>



<p>In the current preview mode, you can&#x2019;t interact with the view. Normally when you click on the automatic preview, it just highlights the code. In order to test and interact with the UI, you need to press the play button at the bottom right corner. </p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-14-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16650" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-14-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-14-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-14-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-14-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-14-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-14-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-14-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-14-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-14-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-14.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>The view will go dark and you may need to wait for a couple of seconds for the whole simulator to load before you can actually interact with the views.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-15-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16651" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-15-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-15-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-15-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-15-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-15-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-15-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-15-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-15-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-15-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-15.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Once it finishes loading, you should be able to click on a cell and it will navigate to a new view in the stack that displays the name of the selected cell.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="607" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-16-1024x607.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16652" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-16-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-16-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-16-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2019/06/image-16-768x455.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-16-1240x735.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-16-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-16-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-16-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-16-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-16.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Before moving on to the implementation of the detail view, let me show you a nifty trick that will help make your code more legible. <code>CMD+Click</code> the <code>NavigationLink</code> and choose &#x201C;Extract Subview&#x201D;.</p>



<p>Boom! You can see that the entire code in <code>NavigationLink</code> has been extracted into a brand new <code>struct</code>, which makes it much more legible. Rename <code>ExtractedView</code> to <code>TutorCell</code>.</p>



<p>You may now get an error in <code>TutorCell</code>. This is because the new structure doesn&#x2019;t have a <code>tutor</code> property to work with yet. The error is easy to fix: add a new constant to your <code>TutorCell</code> struct like this:</p>



<pre class="wp-block-code"><code>struct TutorCell: View {
    let tutor: Tutor
    var body: some View {
        ...
    }
}</code></pre>



<p>And, in the <code>ContentView</code>, add the missing parameter by changing the line to:</p>



<pre class="wp-block-code"><code>...
List(tutors) { tutor in 
    TutorCell(tutor: tutor)
}.navigationBarTitle(Text(&quot;Tutors&quot;))
...</code></pre>



<p>That&#x2019;s it! We have our list and cells all well designed and laid out! Next, we&#x2019;re going to build a detail view that will show all the information of the tutor.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="561" src="https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-1024x561.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-17956" srcset="https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-1024x561.png 1024w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-548x300.png 548w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-200x110.png 200w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-768x421.png 768w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-1536x842.png 1536w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-1680x921.png 1680w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-1240x679.png 1240w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-860x471.png 860w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-680x373.png 680w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-400x219.png 400w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link-50x27.png 50w, https://www.appcoda.com/content/images/wordpress/2020/04/swiftui-navigation-link.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>
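<p>For reference, after extracting <code>TutorCell</code> and wiring up the <code>tutor</code> parameter, the body of <code>ContentView</code> should look roughly like this. This is just a sketch based on the code so far; your other property declarations stay as they were:</p>

<pre class="wp-block-code"><code>struct ContentView: View {
    var body: some View {
        NavigationView {
            // Each row of the list is now rendered by the extracted subview
            List(tutors) { tutor in
                TutorCell(tutor: tutor)
            }.navigationBarTitle(Text(&quot;Tutors&quot;))
        }
    }
}</code></pre>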



<h2 class="wp-block-heading">Building the Detail View</h2>



<p>Let&#x2019;s create a new file by going to File &gt; New &gt; File. Under iOS, select <code>SwiftUI View</code> and name this file <code>TutorDetail</code>. </p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="606" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-18-1024x606.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16654" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-18-1024x606.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-18-200x118.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-18-507x300.png 507w, https://www.appcoda.com/content/images/wordpress/2019/06/image-18-768x454.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-18-1240x733.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-18-860x509.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-18-680x402.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-18-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-18-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-18.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>You can see in the automatic preview that our basic view has been created. Let&#x2019;s change this. First, click on the <code>+</code> button and drop an image above the <code>Text</code> view already built in. Set the name of the image to &#x201C;Simon Ng&#x201D;. Simon&#x2019;s picture should show up. Now modify your code to look like below:</p>



<pre class="wp-block-code"><code>struct TutorDetail: View {
    var body: some View {
        //1
        VStack {
            //2
            Image(&quot;Simon Ng&quot;)
                .clipShape(Circle())
                .overlay(
                    Circle().stroke(Color.orange, lineWidth: 4)
                )
                .shadow(radius: 10)
            //3
            Text(&quot;Simon Ng&quot;)
                .font(.title)
        }
    }
}</code></pre>



<p>You should be able to follow the code, but in case you can&#x2019;t, don&#x2019;t worry. Here&#x2019;s what it&#x2019;s basically doing:</p>



<ol><li>First, we wrap all our views in a vertical stack. This is crucial to the layout of the design we&#x2019;ll be building.</li><li>Next, we take that image of Simon and spice it up. First, we clip the image to the shape of a circle. This is much more effective than setting its <code>cornerRadius</code> because it adapts to different image sizes. We then add an overlay of a circle stroked in orange, which provides a beautiful border. Finally, we add a light shadow to provide some depth.</li><li>Our last line of code sets the font of the tutor&#x2019;s name to the <code>title</code> font.</li></ol>
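<p>One thing worth knowing is that the order of these modifiers matters, because each modifier wraps the view returned by the one before it. As a quick illustration (not a change you need to make to the project), applying <code>.shadow</code> before <code>.clipShape</code> would clip the shadow away:</p>

<pre class="wp-block-code"><code>// Clip first, then stroke, then shadow: the shadow falls
// outside the circle, as in the tutorial.
Image(&quot;Simon Ng&quot;)
    .clipShape(Circle())
    .overlay(Circle().stroke(Color.orange, lineWidth: 4))
    .shadow(radius: 10)

// Shadow first, then clip: the clip cuts the shadow away,
// so no shadow is visible.
Image(&quot;Simon Ng&quot;)
    .shadow(radius: 10)
    .clipShape(Circle())
    .overlay(Circle().stroke(Color.orange, lineWidth: 4))</code></pre>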



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="606" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-19-1024x606.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16655" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-19-1024x606.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-19-200x118.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-19-507x300.png 507w, https://www.appcoda.com/content/images/wordpress/2019/06/image-19-768x454.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-19-1240x733.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-19-860x509.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-19-680x402.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-19-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-19-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-19.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>We also need to add two more text views: <em>headline</em> and <em>bio</em>. Drag in two text views below the tutor name text view and let&#x2019;s edit them:</p>



<pre class="wp-block-code"><code>struct TutorDetail: View {
    var body: some View {
        VStack {
            Image(&quot;Simon Ng&quot;)
                .clipShape(Circle())
                .overlay(
                    Circle().stroke(Color.orange, lineWidth: 4)
                )
                .shadow(radius: 10)
            Text(&quot;Simon Ng&quot;)
                .font(.title)
            Text(&quot;Founder of AppCoda&quot;)
            Text(&quot;Founder of AppCoda. Author of multiple iOS programming books including Beginning iOS 12 Programming with Swift and Intermediate iOS 12 Programming with Swift. iOS Developer and Blogger.&quot;)
        }
    }
}</code></pre>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="606" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-20-1024x606.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16656" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-20-1024x606.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-20-200x118.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-20-507x300.png 507w, https://www.appcoda.com/content/images/wordpress/2019/06/image-20-768x454.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-20-1240x733.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-20-860x509.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-20-680x402.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-20-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-20-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-20.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>The good news is that we have our text views present. The bad news is it looks bad and doesn&#x2019;t really show the difference between a headline and a biographical description. Furthermore, the bio text view doesn&#x2019;t display the whole text. Let&#x2019;s fix these issues one by one.</p>



<p>Update the code like this:</p>



<pre class="wp-block-code"><code>struct TutorDetail: View {    
    var body: some View {
        VStack {
            Image(&quot;Simon Ng&quot;)
                .clipShape(Circle())
                .overlay(
                    Circle().stroke(Color.orange, lineWidth: 4)
                )
                .shadow(radius: 10)
            Text(&quot;Simon Ng&quot;)
                .font(.title)
            //1
            Text(&quot;Founder of AppCoda&quot;)
                .font(.subheadline)
            //2
            Text(&quot;Founder of AppCoda. Author of multiple iOS programming books including Beginning iOS 12 Programming with Swift and Intermediate iOS 12 Programming with Swift. iOS Developer and Blogger.&quot;)
                .font(.headline)
                .multilineTextAlignment(.center)

        }
    }
}</code></pre>



<ol><li>First, we set the font of the &#x201C;Founder of AppCoda&#x201D; text to the <code>subheadline</code> text style.</li><li>Similarly, we set the bio text view&#x2019;s font to the <code>headline</code> text style. We also center the text with the line <code>.multilineTextAlignment(.center)</code>.</li></ol>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="606" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-21-1024x606.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16657" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-21-1024x606.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-21-200x118.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-21-507x300.png 507w, https://www.appcoda.com/content/images/wordpress/2019/06/image-21-768x454.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-21-1240x733.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-21-860x509.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-21-680x402.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-21-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-21-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-21.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Let&#x2019;s fix the next problem. We need to display the entire text of the bio text view. This can be easily done by adding a new line of code:</p>



<pre class="wp-block-code"><code>...
Text(&quot;Founder of AppCoda. Author of multiple iOS programming books including Beginning iOS 12 Programming with Swift and Intermediate iOS 12 Programming with Swift. iOS Developer and Blogger.&quot;)
        .font(.headline)
        .multilineTextAlignment(.center)
        .lineLimit(50)
...</code></pre>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="606" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-22-1024x606.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16658" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-22-1024x606.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-22-200x118.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-22-507x300.png 507w, https://www.appcoda.com/content/images/wordpress/2019/06/image-22-768x454.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-22-1240x733.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-22-860x509.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-22-680x402.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-22-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-22-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-22.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Everything looks good. There&#x2019;s one last design change I want to make. The headline and the bio text views are a little too close to each other, so I&#x2019;d like to have some space between these two <code>Text</code> views. I&#x2019;d also like to add some padding to all the views so that they are not hugging the edges of the device. Make sure you change the code like this:</p>



<pre class="wp-block-code"><code>struct TutorDetail: View {
    var body: some View {
        VStack {
            Image(&quot;Simon Ng&quot;)
                .clipShape(Circle())
                .overlay(
                    Circle().stroke(Color.orange, lineWidth: 4)
                )
                .shadow(radius: 10)
            Text(&quot;Simon Ng&quot;)
                .font(.title)
            Text(&quot;Founder of AppCoda&quot;)
                .font(.subheadline)
            //1
            Divider()

            Text(&quot;Founder of AppCoda. Author of multiple iOS programming books including Beginning iOS 12 Programming with Swift and Intermediate iOS 12 Programming with Swift. iOS Developer and Blogger.&quot;)
                .font(.headline)
                .multilineTextAlignment(.center)
                .lineLimit(50)
        //2
        }.padding()
    }
}</code></pre>



<p>We&#x2019;ve made a couple of changes here:</p>



<ol><li>Adding a divider is just as simple as calling <code>Divider()</code>.</li><li>To add padding to the entire vertical stack, we just have to call <code>.padding()</code> at the end of the <code>VStack</code> declaration.</li></ol>
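<p>As a side note, <code>.padding()</code> also accepts an edge set and an amount if you ever want finer control than the system default. For example (just an illustration of the API, not a change to the tutorial project):</p>

<pre class="wp-block-code"><code>// Default system padding on every edge:
VStack { ... }.padding()

// 24 points on the leading and trailing edges only:
VStack { ... }.padding(.horizontal, 24)

// 10 points on the top edge only:
VStack { ... }.padding(.top, 10)</code></pre>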



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="606" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-23-1024x606.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16659" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-23-1024x606.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-23-200x118.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-23-507x300.png 507w, https://www.appcoda.com/content/images/wordpress/2019/06/image-23-768x454.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-23-1240x733.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-23-860x509.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-23-680x402.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-23-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-23-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-23.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>That&#x2019;s all! Congrats! You finally completed the detail view. What&#x2019;s left is connecting our list to the detail view, which is quite simple.</p>



<h2 class="wp-block-heading">Passing Data </h2>



<p>To pass data, we need to declare some parameters in our <code>TutorDetail</code> struct. Before declaring the variable <code>body</code>, add the following variables:</p>



<pre class="wp-block-code"><code>var name: String
var headline: String
var bio: String
var body: some View {
    ...
}</code></pre>



<p>These are the parameters we&#x2019;ll pass from our <code>ContentView</code>. Replace the text occurrences with these parameters:</p>



<pre class="wp-block-code"><code>...
var body: some View {
    VStack {
        // 1
        Image(name)
            .clipShape(Circle())
            .overlay(
                Circle().stroke(Color.orange, lineWidth: 4)
            )
            .shadow(radius: 10)
        //2
        Text(name)
            .font(.title)
        //3
        Text(headline)
            .font(.subheadline)
        Divider()
        //4
        Text(bio)
            .font(.headline)
            .multilineTextAlignment(.center)
            .lineLimit(50)
        //5
    }.padding().navigationBarTitle(Text(name), displayMode: .inline)
}
...</code></pre>



<ol><li>We replace the name of the image with our variable <code>name</code>.</li><li>We also replace the name of the tutor with our variable <code>name</code>.</li><li>We replace our headline text with our variable <code>headline</code>.</li><li>Finally, we also replace our long paragraph of text with our variable <code>bio</code>.</li><li>We also add a new line of code that sets the title of the navigation bar to the name of our tutor.</li></ol>



<p>Last but not least, we need to add the missing parameters to our struct <code>TutorDetail_Previews</code>.</p>



<pre class="wp-block-code"><code>#if DEBUG
struct TutorDetail_Previews : PreviewProvider {
    static var previews: some View {
        TutorDetail(name: &quot;Simon Ng&quot;, headline: &quot;Founder of AppCoda&quot;, bio: &quot;Founder of AppCoda. Author of multiple iOS programming books including Beginning iOS 12 Programming with Swift and Intermediate iOS 12 Programming with Swift. iOS Developer and Blogger.&quot;)
    }
}    
#endif</code></pre>



<p>In the above code, we add the missing parameters and fill in the information with what we had earlier.</p>



<p>You may be wondering what&#x2019;s up with the <code>#if DEBUG/#endif</code> statements. Whatever code is wrapped between these directives is compiled only for debug builds, which is what drives the preview. It won&#x2019;t be included in your final app.</p>
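<p>If you haven&#x2019;t seen conditional compilation before, here is a minimal, standalone Swift sketch (unrelated to the tutorial project) showing how only one branch ends up in the compiled binary:</p>

<pre class="wp-block-code"><code>// Only one of these branches is compiled, depending on
// whether the DEBUG flag is set for the current build.
func environmentName() -&gt; String {
    #if DEBUG
    return &quot;debug&quot;
    #else
    return &quot;release&quot;
    #endif
}

print(environmentName())</code></pre>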



<p>Resume your automatic preview. Nothing should change since we still have the same information. But make sure that you are able to preview <code>TutorDetail</code> before moving on to the next step.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="640" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-24-1024x640.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16660" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-24-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-24-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-24-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2019/06/image-24-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-24-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-24-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-24-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-24-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-24-50x31.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-24.png 1440w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Our final step is to link this view to our list. Switch over to the <code>ContentView.swift</code> file. All you need to do is change one line of code in the <code>TutorCell</code> struct. Change the <code>NavigationLink</code> code to the following:</p>



<pre class="wp-block-code"><code>...
var body: some View {
    return NavigationLink(destination: TutorDetail(name: tutor.name, headline: tutor.headline, bio: tutor.bio)) {
        ...
    }
}
...</code></pre>



<p>Instead of presenting a <code>Text</code> view with the tutor name, we need to change the destination to that of <code>TutorDetail</code> while filling in the appropriate details. Your code should now look like this:</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="606" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-25-1024x606.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16661" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-25-1024x606.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-25-200x118.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-25-507x300.png 507w, https://www.appcoda.com/content/images/wordpress/2019/06/image-25-768x454.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-25-1240x733.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-25-860x509.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-25-680x402.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-25-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-25-50x30.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-25.png 1552w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>Click the play button on the live canvas and interact with the view. If all goes well, the app should behave exactly as expected.</p>



<p>Simply choose one of the tutors:</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="640" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-26-1024x640.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16662" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-26-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-26-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-26-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2019/06/image-26-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-26-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-26-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-26-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-26-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-26-50x31.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-26.png 1440w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<p>And then you should see the tutor&#x2019;s details in the detail view.</p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1024" height="640" src="https://www.appcoda.com/content/images/wordpress/2019/06/image-27-1024x640.png" alt="SwiftUI First Look: Building a Simple Table View App" class="wp-image-16663" srcset="https://www.appcoda.com/content/images/wordpress/2019/06/image-27-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/06/image-27-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2019/06/image-27-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2019/06/image-27-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2019/06/image-27-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/06/image-27-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2019/06/image-27-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2019/06/image-27-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2019/06/image-27-50x31.png 50w, https://www.appcoda.com/content/images/wordpress/2019/06/image-27.png 1440w" sizes="(max-width: 1024px) 100vw, 1024px"></figure>



<h2 class="wp-block-heading">Conclusion</h2>



<p>This was a pretty huge tutorial, but it covers the basics of <code>SwiftUI</code>. You should now be comfortable building simple apps, such as a to-do list. I suggest you take a look at some of the resources below, such as Apple&#x2019;s documentation and the WWDC 2019 sessions about this framework.</p>



<ul><li><a href="https://developer.apple.com/documentation/swiftui/?ref=appcoda.com">SwiftUI Documentation</a></li><li><a href="https://developer.apple.com/tutorials/swiftui?ref=appcoda.com">SwiftUI Tutorials</a></li><li><a href="https://developer.apple.com/wwdc19/204?ref=appcoda.com">Introducing SwiftUI: Building Your First App</a></li><li><a href="https://developer.apple.com/wwdc19/216?ref=appcoda.com">SwiftUI Essentials</a></li></ul>



<p>This framework is the future of Apple development so it&#x2019;s great to get that head start early on. <strong>Remember, if you&#x2019;re confused about the code, try to interact with the automatic preview and see if you can make UI changes directly to see how the code is built.&#xA0;</strong>If you have any questions, feel free to ask in the comments below. </p>



<p>For reference, you can download the completed project <a href="https://github.com/appcoda/SwiftUIDemo?ref=appcoda.com">here</a>. If you want to dive deeper into SwiftUI, check out our <a href="https://www.appcoda.com/swiftui-buttons/">next tutorial on SwiftUI button</a>.</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[How to Add Apple Pencil Support to your iPad Apps]]></title><description><![CDATA[<!--kg-card-begin: html-->
<p>In October 2018, Apple announced the brand new iPad Pro and the all-new Apple Pencil 2.0. Unlike the previous generation of the Apple Pencil, this utensil offers developers some extra fun APIs to play around with in order to enhance their app&#x2019;s functionality and UX. In this</p>]]></description><link>https://www.appcoda.com/apple-pencil-ipad/</link><guid isPermaLink="false">66612a0f166d3c03cf011499</guid><category><![CDATA[iOS Programming]]></category><category><![CDATA[Swift]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Thu, 02 May 2019 12:57:22 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-featured.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<img src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-featured.jpg" alt="How to Add Apple Pencil Support to your iPad Apps"><p>In October 2018, Apple announced the brand new iPad Pro and the all-new Apple Pencil 2.0. Unlike the previous generation of the Apple Pencil, this utensil offers developers some extra fun APIs to play around with in order to enhance their app&#x2019;s functionality and UX. In this tutorial, I will show you how to make your app support the Apple Pencil 2.</p>
<div class="alert green"><strong>Note:</strong> To test the demo app in this tutorial, you will need a real iPad Pro which is compatible with the second gen Apple Pencil. The simulator does not offer this functionality. We will be using <a href="https://www.appcoda.com/swift">Swift 5</a> and Xcode 10.2.</div>
<h2>Getting Started</h2>
<p>We will be building an app called Canvas, where users can look at hilarious invention ideas from the parody account <a href="https://twitter.com/boredelonmusk?ref=appcoda.com">Bored Elon Musk</a> every time they double-tap their Apple Pencil. First, open Xcode and select &#x201C;New Project&#x201D;. Choose &#x201C;Single View App&#x201D; and name your project whatever you like.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15139" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="1512" height="923" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1.png 1512w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1-200x122.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1-491x300.png 491w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1-768x469.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1-1024x625.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1-1240x757.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1-860x525.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1-680x415.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1-400x244.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-1-50x31.png 50w" sizes="(max-width: 1512px) 100vw, 1512px"></p>
<p>Before we begin, we need to do something <strong>important</strong>. Since the Apple Pencil is supported only on the iPad, we need to make sure that our project targets iPad only, not Universal.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15140" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="1512" height="923" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2.png 1512w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2-200x122.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2-491x300.png 491w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2-768x469.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2-1024x625.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2-1240x757.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2-860x525.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2-680x415.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2-400x244.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-2-50x31.png 50w" sizes="(max-width: 1512px) 100vw, 1512px"></p>
<h2>Setting up the User Interface</h2>
<div class="alert green"><strong>Editor&#x2019;s note:</strong> You can skip this section and jump over to the next section if you just want to check out the code for Apple Pencil support.</div>
<p>First, we will design the user interface of the app. Navigate to <code>Main.storyboard</code> and drag a label into the view controller. Set its font size to 30, style to <strong>bold</strong>, and lines to 0.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15141" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="1552" height="923" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3.png 1552w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3-504x300.png 504w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3-768x457.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3-1024x609.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3-1240x737.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3-860x511.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3-680x404.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3-400x238.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-3-50x30.png 50w" sizes="(max-width: 1552px) 100vw, 1552px"></p>
<p>Next, we have to set the constraints. Click on the <code>Align</code> button on the bottom right and checkmark &#x201C;Horizontally in Container&#x201D; and &#x201C;Vertically in Container&#x201D;.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15142" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="1552" height="923" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4.png 1552w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4-504x300.png 504w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4-768x457.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4-1024x609.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4-1240x737.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4-860x511.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4-680x404.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4-400x238.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-4-50x30.png 50w" sizes="(max-width: 1552px) 100vw, 1552px"></p>
<p>We are almost done. Next, drag in a button and set its title to &#x201C;Next Idea!&#x201D; and font to &#x201C;Semibold 25&#x201D;.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15143" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="1552" height="923" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5.png 1552w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5-504x300.png 504w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5-768x457.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5-1024x609.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5-1240x737.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5-860x511.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5-680x404.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5-400x238.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-5-50x30.png 50w" sizes="(max-width: 1552px) 100vw, 1552px"></p>
<p>Finally, click on the <code>Align</code> button and check both &#x201C;Horizontally in Container&#x201D; and &#x201C;Vertically in Container&#x201D;. For &#x201C;Vertically in Container&#x201D;, set the value to 100 so the button sits below the label rather than overlapping it. Also, set the button&#x2019;s alpha to 0 so it is invisible by default; I will explain why later.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15144" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="1552" height="923" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6.png 1552w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6-504x300.png 504w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6-768x457.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6-1024x609.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6-1240x737.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6-860x511.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6-680x404.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6-400x238.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-6-50x30.png 50w" sizes="(max-width: 1552px) 100vw, 1552px"></p>
<p>Let&#x2019;s connect our UI elements to our code. Create two outlets in <code>ViewController.swift</code> and connect them to the UI elements.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15146" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="1552" height="923" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8.png 1552w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8-504x300.png 504w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8-768x457.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8-1024x609.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8-1240x737.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8-860x511.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8-680x404.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8-400x238.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-8-50x30.png 50w" sizes="(max-width: 1552px) 100vw, 1552px"></p>
<p>I have two outlets titled <code>ideaLabel</code> and <code>ideaButton</code>. The <code>ideaLabel</code> will display our invention idea. In the event that the user does not have an Apple Pencil or has turned off Apple Pencil support, we will display the <code>ideaButton</code> so they can still use the app.</p>
<p>That&#x2019;s the basics of our UI. Let&#x2019;s jump to the coding part!</p>
<h2>Coding the Support for Apple Pencil 2</h2>
<p>First, we declare an array to hold all of our invention ideas. Insert the following code in the <code>ViewController</code> class, above <code>viewDidLoad</code>, so the array can also be accessed from the delegate method we will add later:</p>
<pre class="swift">let ideas = [&quot;Email app that anonymously pings all your coworkers after New Year&#x2019;s&quot;, &quot;Self-imposed ads reminding you of your to-do list&quot;, &quot;Anti-bacterial gel dispensing doorknobs.&quot;, &quot;Vending machines that can break a $20&quot;, &quot;Routers that work.&quot;]
</pre>
<p>I&#x2019;ve added a few ideas but you&#x2019;re free to add as many as you want!</p>
<p>Next, we have to change the text of the label when the view loads. We will randomly select an idea and display it on the label. This can be done with a single line of code. Update the <code>viewDidLoad</code> method like this:</p>
<pre class="swift">override func viewDidLoad() {
    super.viewDidLoad()
    ideaLabel.text = ideas.randomElement()!
}
</pre>
<p>This sets the text of our <code>ideaLabel</code> to a random string from our array.</p>
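<p>One note on the force-unwrap: <code>randomElement()</code> returns an optional, which is <code>nil</code> only when the array is empty. If your ideas list could ever be empty, a nil-coalescing fallback is safer. Here is a minimal, framework-free sketch of that variant (the fallback string is just an example):</p>

```swift
// Stand-alone sketch of the random-idea logic (no UIKit needed).
let ideas = [
    "Email app that anonymously pings all your coworkers after New Year's",
    "Self-imposed ads reminding you of your to-do list",
    "Anti-bacterial gel dispensing doorknobs",
    "Vending machines that can break a $20",
    "Routers that work"
]

// randomElement() returns nil only for an empty array; the
// nil-coalescing fallback avoids a force-unwrap crash.
let idea = ideas.randomElement() ?? "No ideas yet!"
print(idea)
```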
<p>Now let&#x2019;s run the app to have a test. If all goes well, the label should be changing every time you open the app.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15149" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="2388" height="1668" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11.png 2388w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11-200x140.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11-429x300.png 429w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11-768x536.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11-1024x715.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11-1680x1173.png 1680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11-1240x866.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11-860x601.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11-680x475.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11-400x279.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-11-50x35.png 50w" sizes="(max-width: 2388px) 100vw, 2388px"></p>
<p>Finally, here comes the exciting part! We are now going to interact with the Apple Pencil to capture the double-tap action. To make it work, all you need to do is set up the Pencil interaction delegate. In the <code>viewDidLoad</code> method, insert the following lines of code:</p>
<pre class="swift">let interaction = UIPencilInteraction()
interaction.delegate = self
view.addInteraction(interaction)
</pre>
<p>Basically, we create a constant <code>interaction</code> of type <code>UIPencilInteraction</code> and set its delegate to <code>self</code>. Then we add this interaction to the view. However, as soon as you insert the code, you&#x2019;ll notice an error in Xcode. This is because the <code>ViewController</code> class does not conform to <code>UIPencilInteractionDelegate</code>. To fix that, let&#x2019;s adopt the protocol in the <code>ViewController</code> class.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15151" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="1552" height="923" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13.png 1552w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13-504x300.png 504w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13-768x457.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13-1024x609.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13-1240x737.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13-860x511.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13-680x404.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13-400x238.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-13-50x30.png 50w" sizes="(max-width: 1552px) 100vw, 1552px"></p>
<p>If we run our app, you will notice that nothing changes. This is because we still have not written the code for the double-tap action. Now, create the <code>pencilInteractionDidTap</code> method in the <code>ViewController</code> class:</p>
<pre class="swift">func pencilInteractionDidTap(_ interaction: UIPencilInteraction) {
    switch UIPencilInteraction.preferredTapAction {
    case .ignore:
        // The user opted out of tap actions; reveal the fallback button.
        ideaButton.alpha = 1
    case .showColorPalette, .switchEraser, .switchPrevious:
        ideaLabel.text = ideas.randomElement()!
    default:
        break
    }
}
</pre>
<p>When an Apple Pencil 2 is paired with an iPad Pro, the user can choose in Settings what they would like a double tap on the Pencil to do, including:</p>
<ol>
<li>Ignore the tap</li>
<li>Show the color palette</li>
<li>Switch to an eraser</li>
<li>Switch to the last used tool</li>
</ol>
<p>These options were designed with drawing apps in mind, since that is where the Apple Pencil is used most. However, you can have the double tap perform other actions as well. That said, it&#x2019;s important to let your users know what the tap does.</p>
<p>In the code above, unless the Apple Pencil&#x2019;s configuration is set to <code>ignore</code>, we change the text of <code>ideaLabel</code> when the app captures the tap action. If it is set to <code>ignore</code>, we can deduce that the user won&#x2019;t be using the Apple Pencil as much. In this case, we will make the <code>ideaButton</code> visible by setting its alpha to 1.</p>
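<p>Since <code>UIPencilInteraction</code> only exists in UIKit on a real device, the dispatch logic above can also be modeled in plain Swift with a stand-in enum, which makes it easy to unit-test. The enum and function names below are hypothetical, for illustration only:</p>

```swift
// Stand-in for UIKit's UIPencilInteraction.PreferredTapAction so the
// branching logic can be exercised without an iPad.
enum TapAction {
    case ignore, showColorPalette, switchEraser, switchPrevious
}

// Returns the new label text for a double tap, or nil when the tap is
// ignored (in which case the app reveals the fallback button instead).
func ideaText(for action: TapAction, from ideas: [String]) -> String? {
    switch action {
    case .ignore:
        return nil
    case .showColorPalette, .switchEraser, .switchPrevious:
        return ideas.randomElement()
    }
}

let sampleIdeas = ["Routers that work", "Vending machines that can break a $20"]
print(ideaText(for: .switchEraser, from: sampleIdeas) ?? "button shown")
```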
<p>If you run the code, everything should work as expected. The only thing that doesn&#x2019;t work yet is the <code>Next Idea!</code> button, because we haven&#x2019;t linked it to an <code>IBAction</code>. Link them up and the button will work.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15153" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="1552" height="923" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15.png 1552w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15-504x300.png 504w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15-768x457.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15-1024x609.png 1024w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15-1240x737.png 1240w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15-860x511.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15-680x404.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15-400x238.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-15-50x30.png 50w" sizes="(max-width: 1552px) 100vw, 1552px"></p>
<p>After the changes, run the app to test again and everything should work flawlessly!</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-15155" src="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-17.png" alt="How to Add Apple Pencil Support to your iPad Apps" width="893" height="677" srcset="https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-17.png 893w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-17-200x152.png 200w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-17-396x300.png 396w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-17-768x582.png 768w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-17-860x652.png 860w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-17-680x516.png 680w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-17-400x303.png 400w, https://www.appcoda.com/content/images/wordpress/2019/05/apple-pencil-swift-17-50x38.png 50w" sizes="(max-width: 893px) 100vw, 893px"></p>
<p>It&#x2019;s hard to show the functionality of an app using screenshots. I shot a video to show you how the app works in action. Don&#x2019;t forget to take a look!</p>
<p><iframe loading="lazy" title="Apple Pencil 2.0 Tutorial Example" width="500" height="281" src="https://www.youtube.com/embed/BdboScOQONU?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2>Conclusion</h2>
<p>The Apple Pencil is truly a fascinating accessory for iPad Pro users. For developers, it&#x2019;s much more than a simple stylus. While this demo app is simple, the objective is to show you how to add Apple Pencil support to your iPad apps. As you can see, it&#x2019;s pretty easy to do. The key thing to keep in mind when developing for the Apple Pencil and iPad Pro is to adopt the <code>UIPencilInteractionDelegate</code> protocol.</p>
<p>I can&#x2019;t wait to see what you&#x2019;ll come up with using this technology. If you want to learn more about developing for the iPad Pro and Apple Pencil, I suggest you take a look at these videos:</p>
<ul>
<li><a href="https://developer.apple.com/videos/play/tech-talks/804/?ref=appcoda.com">Designing for iPad Pro and Apple Pencil</a></li>
<li><a href="https://developer.apple.com/videos/play/tech-talks/209/?ref=appcoda.com">Bringing Your Apps to the New iPad Pro</a></li>
</ul>
<p>For reference, you can <a href="https://github.com/appcoda/ApplePencilDemo?ref=appcoda.com">check out the full Xcode project</a> on GitHub.</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Creating a Prisma-like App with Core ML, Style Transfer and Turi Create]]></title><description><![CDATA[<!--kg-card-begin: html-->
<p>If you&#x2019;ve been following Apple&#x2019;s announcements from the past year, you know that they are heavily invested in machine learning. Ever since they introduced <a href="www.developer.apple.com/machine-learning">Core ML</a> last year at WWDC 2017, there are tons of apps which have sprung up which harness the power of machine</p>]]></description><link>https://www.appcoda.com/coreml-turi-create/</link><guid isPermaLink="false">66612a0f166d3c03cf011474</guid><category><![CDATA[AI]]></category><category><![CDATA[Swift]]></category><category><![CDATA[iOS Programming]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Mon, 30 Jul 2018 00:17:25 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2018/07/drew-hays-29234-unsplash.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<img src="https://www.appcoda.com/content/images/wordpress/2018/07/drew-hays-29234-unsplash.jpg" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create"><p>If you&#x2019;ve been following Apple&#x2019;s announcements from the past year, you know that they are heavily invested in machine learning. Ever since they introduced <a href="https://developer.apple.com/machine-learning/">Core ML</a> last year at WWDC 2017, tons of apps have sprung up that harness the power of machine learning.</p>
<p>However, one challenge developers always faced was how to create the models in the first place. Luckily, Apple addressed this last winter when they released <a href="https://github.com/apple/turicreate?ref=appcoda.com">Turi Create</a>, a tool that came out of their acquisition of Turi (formerly GraphLab). Turi Create simplifies the creation of your own custom machine learning models.</p>
<h2>A Quick Introduction to Turi Create</h2>
<p>If you&#x2019;ve been following the other machine learning tutorials, you&#x2019;re probably wondering, <em>&#x201C;Didn&#x2019;t Apple announce <a href="https://www.appcoda.com/create-ml/">Create ML</a> this year? What are the advantages of Turi Create over Create ML?&#x201D;</em></p>
<p>While Create ML is a great tool for people just getting started with ML, it is severely limited in terms of usage. With Create ML, you are limited to text or image data. While this covers a majority of projects, it falls short for slightly more complex ML applications (like style transfer!).</p>
<p>With Turi Create, you can create all the same Core ML models as you can with Create ML, and more! Since Turi Create is much more capable than Create ML, it also integrates with other ML tools like <a href="https://keras.io/?ref=appcoda.com">Keras</a> and <a href="https://www.tensorflow.org/?ref=appcoda.com">TensorFlow</a>. In our <a href="https://www.appcoda.com/create-ml/">tutorial on Create ML</a>, you saw the types of Core ML models we could make with Create ML. Here are the types of models you can build with Turi Create:</p>
<ul>
<li><a href="https://github.com/apple/turicreate/blob/master/userguide/recommender/README.md?ref=appcoda.com">Recommender systems</a></li>
<li><a href="https://github.com/apple/turicreate/blob/master/userguide/image_classifier/README.md?ref=appcoda.com">Image classification</a></li>
<li><a href="https://github.com/apple/turicreate/blob/master/userguide/image_similarity/README.md?ref=appcoda.com">Image similarity</a></li>
<li><a href="https://github.com/apple/turicreate/blob/master/userguide/object_detection/README.md?ref=appcoda.com">Object detection</a></li>
<li><a href="https://github.com/apple/turicreate/blob/master/userguide/activity_classifier/README.md?ref=appcoda.com">Activity classifier</a></li>
<li><a href="https://github.com/apple/turicreate/blob/master/userguide/text_classifier/README.md?ref=appcoda.com">Text classifier</a></li>
</ul>
<p>You can see that some of these tasks can also be accomplished with Create ML, but most of them are only possible with Turi Create. This is why Turi Create is preferred by more experienced data scientists: it offers a level of customizability simply not available in Create ML.</p>
<h2>What is Style Transfer?</h2>
<p>Now that you have a fair understanding of Turi Create, let&#x2019;s look at what style transfer is. Style transfer is the technique of recomposing one image in the style of another. What do I mean by this? Take a look at the image below, created using Prisma:</p>
<p><img decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer.jpg" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create"></p>
<p>As you can see, the image of the breakfast plate above is transformed into the style of a comic book. Style Transfer began when Gatys et al. published a <a href="https://arxiv.org/abs/1508.06576?ref=appcoda.com">paper</a> on how it was possible to use convolutional neural networks to transfer artistic style from one image to another.</p>
<blockquote><p>
Convolutional Neural Networks (CNNs) are a type of neural network commonly used in machine learning for image recognition and classification. CNNs have been successful in computer vision problems like identifying faces, objects, and more. These are fairly complex ideas, so don&#x2019;t worry too much about the details.</p></blockquote>
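<p>For the curious, the core idea of the Gatys et al. paper can be stated in one equation: the generated image <em>x</em> is optimized to jointly minimize a content loss, measured against the content photo <em>p</em>, and a style loss, measured against the style artwork <em>a</em>, with two weighting coefficients controlling the trade-off:</p>
<p><em>L</em><sub>total</sub>(<em>p</em>, <em>a</em>, <em>x</em>) = &#x3B1; &#xB7; <em>L</em><sub>content</sub>(<em>p</em>, <em>x</em>) + &#x3B2; &#xB7; <em>L</em><sub>style</sub>(<em>a</em>, <em>x</em>)</p>
<p>Raising &#x3B2; relative to &#x3B1; yields more heavily stylized output. Tools like Turi Create handle this trade-off internally, so you won&#x2019;t set these weights by hand in this tutorial.</p>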
<h2>Building our own Style Transfer Demo</h2>
<p>Now that you understand the tools and concepts we&#x2019;ll examine in this tutorial, it is finally time to get started! We&#x2019;ll build our own style transfer model using Turi Create and import it into a sample iOS project to see how it works!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-1.jpg" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="1200" height="676" class="aligncenter size-full wp-image-13785" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-1.jpg 1200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-1-200x113.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-1-533x300.jpg 533w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-1-768x433.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-1-1024x577.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-1-860x484.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-1-680x383.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-1-400x225.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-1-50x28.jpg 50w" sizes="(max-width: 1200px) 100vw, 1200px"></p>
<p>First, download the <a href="https://github.com/appcoda/CoreMLStyleTransfer/raw/master/starter.zip?ref=appcoda.com">starter project here</a>. In this tutorial, we&#x2019;ll be using Python 2, Jupyter Notebook, and Xcode 9.</p>
<div class="alert gray">At the time of writing, some of the software is still in beta stage. Keep this in mind when you begin. Do make sure to use Xcode 9 because Xcode 10 beta has some bugs with Core ML. This project will be using Swift 4.1.</div>
<h2>Training the Style Transfer Model</h2>
<p>Turi Create is a Python package, but it is not built into macOS, so let me quickly go over how to install it. macOS should already have Python installed; in case you don&#x2019;t have <code>Python</code> or <code>pip</code> on your device, you can learn the installation procedures over <a href="https://www.appcoda.com/core-ml-tools-conversion/">here</a>.</p>
<h3>Installing Turi Create and Jupyter</h3>
<p>Open Terminal and type the following command:</p>
<pre class="bash">pip install turicreate==5.0b2
</pre>
<p>Wait for a minute or two for the Python package to install. In the meantime, download <a href="http://jupyter.org/index.html?ref=appcoda.com">Jupyter Notebook</a>. Jupyter Notebook is an interactive coding environment that supports many languages and is popular with developers for its rich, interactive output visualization. Since Turi Create only supports Python 2, enter the following commands in Terminal to install Jupyter Notebook for Python 2.</p>
<pre class="bash">python -m pip install --upgrade pip
python -m pip install jupyter
</pre>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-terminal-message.png" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="893" height="404" class="aligncenter size-full wp-image-13787" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-terminal-message.png 893w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-terminal-message-200x90.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-terminal-message-600x271.png 600w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-terminal-message-768x347.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-terminal-message-860x389.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-terminal-message-680x308.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-terminal-message-400x181.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-terminal-message-50x23.png 50w" sizes="(max-width: 893px) 100vw, 893px"></p>
<p>Once you have all the packages installed, it&#x2019;s time to start creating our algorithm!</p>
<h3>Coding with Turi Create</h3>
<p>The style transfer model we&#x2019;ll be creating is based on Vincent van Gogh&#x2019;s <em>Starry Night</em>. Basically, we&#x2019;ll create a model which can render any image in the style of <em>Starry Night</em>.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280.jpg" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="1280" height="800" class="aligncenter size-full wp-image-13801" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280.jpg 1280w, https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280-200x125.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280-480x300.jpg 480w, https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280-768x480.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280-1024x640.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280-1240x775.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280-860x538.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280-680x425.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280-400x250.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/07/starry-night-1093721_1280-50x31.jpg 50w" sizes="(max-width: 1280px) 100vw, 1280px"></p>
<p>First, download the <a href="https://github.com/appcoda/CoreMLStyleTransfer/raw/master/trainingdata.zip?ref=appcoda.com">training data</a> and unzip it. One folder should be named <code>content</code> and the other should be named <code>style</code>. If you open <code>content</code>, you&#x2019;ll see roughly 70 images with different subjects. This folder tells our algorithm what kinds of images to apply the style transfer to; since we want the transformation to work on any image, it contains a wide variety of subjects.</p>
<p>The <code>style</code> folder, on the other hand, contains just one image: <em>StarryNight.jpg</em>. This is the image we want to transfer the artistic style from.</p>
<p>Now, let&#x2019;s start our coding session by opening <code>Jupyter Notebook</code>. Enter the following into Terminal.</p>
<pre class="bash">jupyter notebook
</pre>
<p>This will open up Safari with the page like below.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4.png" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="1440" height="393" class="aligncenter size-full wp-image-13788" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4-200x55.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4-600x164.png 600w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4-768x210.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4-1024x279.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4-1240x338.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4-860x235.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4-680x186.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4-400x109.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-4-50x14.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Select the <code>New</code> button and click on Python 2!</p>
<div class="alert green"><strong>Note:</strong> It&#x2019;s important to make sure that the notebook you&#x2019;re using is running Python 2, since Turi Create doesn&#x2019;t support Python 3.</div>
<p>Once you click on that button, a new screen should open. This is where we will create our model. Click on the first cell and begin by importing the Turi Create package:</p>
<pre lang="python">import turicreate as tc
</pre>
<p>Press SHIFT+Enter to run the code in that cell. Wait until the package is imported. Next, let&#x2019;s create a reference to the folders which contain our images. Please make sure you change the parameters to your own folder paths.</p>
<pre lang="python">style = tc.load_images(&apos;/Path/To/Folder/style&apos;)
content = tc.load_images(&apos;/Path/To/Folder/content&apos;)
</pre>
<p>Run the code in the cell and you should receive an output like this:</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6.png" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="1440" height="456" class="aligncenter size-full wp-image-13790" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6-200x63.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6-600x190.png 600w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6-768x243.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6-1024x324.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6-1240x393.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6-860x272.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6-680x215.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6-400x127.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-6-50x16.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Don&#x2019;t worry too much about the warnings. Next, we&#x2019;ll type in the command to create the style transfer model. <em>It is highly advised that you run the following code on a Mac with a very powerful GPU! This would mean most of the latest MacBook Pros as well as the iMacs. If you choose to run the code on a MacBook Air, for example, the computations would run on the CPU and could take days!</em></p>
<pre lang="python">model = tc.style_transfer.create(style, content)
</pre>
<p>Run the code. This could take a very long time to finish based on the device you are running this on. On my MacBook Air, it took 3.5 days since the computations were running on the CPU! If you don&#x2019;t have enough time, no worries. You can download the final Core ML model <a href="https://github.com/appcoda/CoreMLStyleTransfer/raw/master/Style%20Transfer/Style%20Transfer/StarryStyle.mlmodel?ref=appcoda.com">here</a>. However, you can always let the whole function run to get a feel for what it is like!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7.png" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="1440" height="716" class="aligncenter size-full wp-image-13791" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7-200x99.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7-600x298.png 600w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7-768x382.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7-1024x509.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7-1240x617.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7-860x428.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7-680x338.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7-400x199.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-7-50x25.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>The table you see contains three columns: <em>Iteration</em>, <em>Loss</em>, and <em>Elapsed Time</em>. During training, the model repeatedly runs a forward pass to measure how far its output is from the target; this measure is called the <strong>loss</strong> (also known as the <strong>cost</strong>). It then runs a backward pass to work out how to tweak its parameters to reduce that loss. Each forward-and-backward cycle counts as one iteration. The goal is to drive the loss down to a small number, and as the training progresses you can see the loss slowly decrease. Elapsed time refers to how long the training has been running.</p>
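<p>To make this concrete, here is a tiny, hypothetical Python sketch (not Turi Create code) of what a training loop conceptually does: each iteration runs a forward step to measure the loss, then tweaks a parameter to reduce it.</p>

```python
def train(target, steps=20, lr=0.3):
    """Minimize the squared-error loss (w - target)^2 by gradient descent."""
    w = 0.0            # start from an arbitrary parameter value
    losses = []
    for _ in range(steps):
        loss = (w - target) ** 2   # forward step: how wrong are we?
        grad = 2 * (w - target)    # which direction reduces the loss?
        w -= lr * grad             # tweak the parameter
        losses.append(loss)
    return w, losses

w, losses = train(target=5.0)
# like the Loss column in the table, each entry is smaller than the last
```

<p>Real style transfer training adjusts millions of parameters rather than one, but the loop has the same shape.</p>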
<p>When the model has finished training, all that&#x2019;s left is saving it! This can be achieved with a simple line of code!</p>
<pre lang="python">model.export_coreml(&quot;StarryStyle.mlmodel&quot;)
</pre>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-8.png" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="1132" height="424" class="aligncenter size-full wp-image-13792" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-8.png 1132w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-8-200x75.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-8-600x225.png 600w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-8-768x288.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-8-1024x384.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-8-860x322.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-8-680x255.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-8-400x150.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-8-50x19.png 50w" sizes="(max-width: 1132px) 100vw, 1132px"></p>
<p>That&#x2019;s all! Head over to your Library to view the final model!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-9.png" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="777" height="590" class="aligncenter size-full wp-image-13793" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-9.png 777w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-9-200x152.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-9-395x300.png 395w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-9-768x583.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-9-680x516.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-9-400x304.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-9-50x38.png 50w" sizes="(max-width: 777px) 100vw, 777px"></p>
<h2>A Quick Look at the Xcode Project</h2>
<p>Now that we have our model, all that&#x2019;s left is importing it to our Xcode project. Open Xcode 9 and take a look at the project.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10.png" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="1440" height="814" class="aligncenter size-full wp-image-13794" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10-531x300.png 531w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10-768x434.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10-1024x579.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10-1240x701.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10-860x486.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10-680x384.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10-400x226.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-10-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Build and run the project to make sure it compiles. The app doesn&#x2019;t do anything yet: when we press the <code>Van Gogh!</code> button, nothing happens! It&#x2019;s up to us to write the code. Let&#x2019;s get started!</p>
<h2>Implementing Machine Learning</h2>
<p>The first step is to drag and drop the model file (i.e. <code>StarryStyle.mlmodel</code>) into the project. Make sure that <code>Copy Items If Needed</code> is checked and the project target is selected.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12.png" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="1440" height="316" class="aligncenter size-full wp-image-13795" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12-200x44.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12-600x132.png 600w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12-768x169.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12-1024x225.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12-1240x272.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12-860x189.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12-680x149.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12-400x88.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-12-50x11.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Next, we have to add the code to process the machine learning in <code>ViewController.swift</code>. Most of the code will be written in our <code>transformImage()</code> function. Let&#x2019;s begin by importing the Core ML package and calling the model.</p>
<pre lang="swift">import CoreML
...
@IBAction func transformImage(_ sender: Any) {
    // Style Transfer Here
    let model = StarryStyle()
}
</pre>
<p>This line instantiates our Core ML model and assigns it to a constant called <code>model</code>.</p>
<h3>Converting the Image</h3>
<p>Next, we have to convert the image a user chooses into data the model can read. If you look into the <code>StarryStyle.mlmodel</code> file again, you should find that it takes in an image of size 256&#xD7;256. Therefore, we have to perform the conversion ourselves. Right below our <code>transformImage()</code> function, add a new function.</p>
<pre lang="swift">func pixelBuffer(from image: UIImage) -&gt; CVPixelBuffer? {
    // 1
    UIGraphicsBeginImageContextWithOptions(CGSize(width: 256, height: 256), true, 2.0)
    image.draw(in: CGRect(x: 0, y: 0, width: 256, height: 256))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()

    // 2
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer : CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, 256, 256, kCVPixelFormatType_32ARGB, attrs, &amp;pixelBuffer)
    guard (status == kCVReturnSuccess) else {
        return nil
    }
       
    // 3   
    CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
       
    // 4     
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: pixelData, width: 256, height: 256, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
       
    // 5
    context?.translateBy(x: 0, y: 256)
    context?.scaleBy(x: 1.0, y: -1.0)
    
    // 6 
    UIGraphicsPushContext(context!)
    image.draw(in: CGRect(x: 0, y: 0, width: 256, height: 256))
    UIGraphicsPopContext()
    CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        
    return pixelBuffer
}
</pre>
<p>This is a helper function, similar to the same function used in the <a href="https://appcoda.com/coreml-introduction/?ref=appcoda.com">earlier Core ML tutorial</a>. In case you don&#x2019;t remember, don&#x2019;t worry. Let me go step by step to explain what this function does.</p>
<ol>
<li>Since our model only accepts images with dimensions of <code>256 x 256</code>, we convert the image into a square. Then, we assign the square image to another constant <code>newImage</code>.</li>
<li>Now, we convert <code>newImage</code> into a <code>CVPixelBuffer</code>. If you&#x2019;re not familiar with <code>CVPixelBuffer</code>, it&#x2019;s basically an image buffer which holds the pixels in the main memory. You can find out more about <code>CVPixelBuffers</code> <a href="https://developer.apple.com/documentation/corevideo/cvpixelbuffer-q2e?ref=appcoda.com">here</a>.</li>
<li>We then take all the pixels present in the image and convert them into a device-dependent RGB color space. Then, by wrapping all this data in a <code>CGContext</code>, we can easily modify its underlying properties whenever we need to render (or transform) the image. This is what we do in the next two lines of code by translating and scaling the image.</li>
<li>Finally, we make the graphics context into the current context, render the image, and remove the context from the top stack. With all these changes made, we return our pixel buffer.</li>
</ol>
<p>This is really some advanced <code>Core Graphics</code> and <code>Core Video</code> code, which is out of the scope of this tutorial. Don&#x2019;t worry if you didn&#x2019;t understand most of it. The gist is that this function takes an image and extracts its data by turning it into a pixel buffer which can be read easily by Core ML.</p>
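<p>If pixel buffers are new to you, this small, hypothetical Python sketch (no Core ML involved) mimics what the helper produces: it flattens rows of (R, G, B) pixels into one contiguous ARGB byte buffer, the row-major, 32-bits-per-pixel layout that a <code>CVPixelBuffer</code> holds in memory.</p>

```python
def to_argb_buffer(rows):
    """Flatten rows of (r, g, b) pixels into one contiguous ARGB byte buffer."""
    buf = bytearray()
    for row in rows:
        for r, g, b in row:
            buf.extend((255, r, g, b))  # alpha first, then color: 4 bytes/pixel
    return bytes(buf)

# a 2x2 "image": red, green on the top row; blue, white on the bottom
pixels = [[(255, 0, 0), (0, 255, 0)],
          [(0, 0, 255), (255, 255, 255)]]
buffer = to_argb_buffer(pixels)
# 4 pixels x 4 bytes each = 16 bytes
```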
<h3>Applying Style Transfer to the Image</h3>
<p>Now that we have our Core ML helper function in place, let&#x2019;s go back to <code>transformImage()</code> and implement the code. Below the line where we declare our <code>model</code> constant, insert the following code:</p>
<pre lang="swift">let styleArray = try? MLMultiArray(shape: [1] as [NSNumber], dataType: .double)
styleArray?[0] = 1.0
</pre>
<p>Turi Create allows you to package more than one &#x201C;style&#x201D; into a model. For this project, we only have one style: <em>Starry Night</em>. However, if you wanted to add more styles, you could add more pictures to the <code>style</code> folder. We declare <code>styleArray</code> as an <a href="https://developer.apple.com/documentation/coreml/mlmultiarray?ref=appcoda.com"><code>MLMultiArray</code></a>. This is a type of array used by Core ML for either an input or an output of a model. Since we only have one style, the array holds a single element, and we set it to <code>1.0</code> to select that style.</p>
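<p>Conceptually, the style array acts as a selector over the styles packaged into the model: one slot per style, with a <code>1.0</code> in the slot you want applied. Here is a hypothetical Python sketch of that idea (the function name is made up for illustration; it is not part of Core ML or Turi Create).</p>

```python
def style_selector(num_styles, chosen):
    """Build a one-hot selector list, one slot per packaged style."""
    if not 0 <= chosen < num_styles:
        raise IndexError("style index out of range")
    return [1.0 if i == chosen else 0.0 for i in range(num_styles)]

# with a single packaged style, the array is just [1.0], as in our Swift code
single = style_selector(1, 0)
# with, say, three styles you would flip on the one you want
third = style_selector(3, 2)
```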
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-14.png" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="904" height="119" class="aligncenter size-full wp-image-13796" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-14.png 904w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-14-200x26.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-14-600x79.png 600w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-14-768x101.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-14-860x113.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-14-680x90.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-14-400x53.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-14-50x7.png 50w" sizes="(max-width: 904px) 100vw, 904px"></p>
<p>Finally, all that&#x2019;s left is making a prediction using our model and setting it to the <code>imageView</code>.</p>
<pre lang="swift">if let image = pixelBuffer(from: imageView.image!) {
    do {
        let predictionOutput = try model.prediction(image: image, index: styleArray!)
                
        let ciImage = CIImage(cvPixelBuffer: predictionOutput.stylizedImage)
        let tempContext = CIContext(options: nil)
        let tempImage = tempContext.createCGImage(ciImage, from: CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(predictionOutput.stylizedImage), height: CVPixelBufferGetHeight(predictionOutput.stylizedImage)))
        imageView.image = UIImage(cgImage: tempImage!)
    } catch let error as NSError {
        print(&quot;CoreML Model Error: \(error)&quot;)
    }
}
</pre>
<p>This code first checks that there is an image in <code>imageView</code>. It then calls the model&#x2019;s <code>prediction</code> method with the user&#x2019;s image and the style array, saving the result in <code>predictionOutput</code>. The predicted result is a pixel buffer. However, we can&#x2019;t assign a pixel buffer directly to a <code>UIImageView</code>, so we take a few extra steps to convert it.</p>
<p>First, we wrap the pixel buffer, <code>predictionOutput.stylizedImage</code>, in a <code>CIImage</code>. Then, we create a <code>CIContext</code> instance called <code>tempContext</code> and call its built-in <code>createCGImage</code> method, which generates a <code>CGImage</code> from <code>ciImage</code>. Finally, we set <code>imageView</code>&#x2019;s image to a <code>UIImage</code> created from <code>tempImage</code>. That&#x2019;s all! If there is an error, we handle it gracefully by printing it.</p>
<p>Build and run your project. Choose an image from your photo library and test how the app works!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16.jpg" alt="Creating a Prisma-like App with Core ML, Style Transfer and Turi Create" width="1920" height="1080" class="aligncenter size-full wp-image-13797" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16.jpg 1920w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16-200x113.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16-533x300.jpg 533w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16-768x432.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16-1024x576.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16-1680x945.jpg 1680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16-1240x698.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16-860x484.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16-680x383.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16-400x225.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/07/coreml-turi-create-16-50x28.jpg 50w" sizes="(max-width: 1920px) 100vw, 1920px"></p>
<p>You may notice that the output doesn&#x2019;t look very close to <em>Starry Night</em>, and this can be due to a variety of reasons. Maybe we need more training data? Or perhaps we need to train the model for a higher (or lower) number of iterations? I highly encourage you to go back and play around with the numbers until you get a satisfactory result!</p>
<h2>Conclusion</h2>
<p>This sums up the tutorial! I introduced you to Turi Create, and you created your own Style Transfer model, a feat that would have been nearly impossible for a single person just 5 years ago. You also learned how to import this Core ML model into an iOS app and use it for creative purposes!</p>
<p>However, Style Transfer is just the beginning. As I mentioned earlier, Turi Create can be used for a variety of applications. Here are some great resources on where to go next:</p>
<ul>
<li><a href="https://apple.github.io/turicreate/docs/userguide/applications/?ref=appcoda.com">Apple&#x2019;s Gitbook on Turi Create Applications</a></li>
<li><a href="https://developer.apple.com/videos/play/wwdc2018/712/?ref=appcoda.com">A Guide to Turi Create &#x2013; WWDC 2018</a></li>
<li><a href="https://github.com/apple/turicreate?ref=appcoda.com">Turi Create Repository</a></li>
</ul>
<p>For the full project, please download it from <a href="https://github.com/appcoda/CoreMLStyleTransfer?ref=appcoda.com">GitHub</a>. If you have any comments or feedback, please leave a comment and share your thoughts below.</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[An Introduction to AR Quick Look in iOS]]></title><description><![CDATA[<!--kg-card-begin: html-->
<p>At WWDC 2018, Apple released ARKit 2.0 with a slew of brand new APIs and features for Augmented Reality Development. One of these features was an addition to their Quick Look APIs. If you&#x2019;re not familiar with what Quick Look is, it&#x2019;s basically a framework</p>]]></description><link>https://www.appcoda.com/arkit-quick-look/</link><guid isPermaLink="false">66612a0f166d3c03cf011472</guid><category><![CDATA[ARKit]]></category><category><![CDATA[Swift]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Wed, 18 Jul 2018 00:11:50 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-feature.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<img src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-feature.jpg" alt="An Introduction to AR Quick Look in iOS"><p>At WWDC 2018, Apple released ARKit 2.0 with a slew of brand new APIs and features for Augmented Reality Development. One of these features was an addition to their Quick Look APIs. If you&#x2019;re not familiar with what Quick Look is, it&#x2019;s basically a framework that allows users to preview a whole range of file formats such as PDFs, images, and more! For example, the Mail application in iOS uses Quick Look to preview attachments.</p>
<p>An advantage of using Quick Look in your apps is that all you need to do is specify which file you would like to preview. The framework handles the UI and UX, which makes it easy to integrate. Before going on, I would suggest skimming over <a href="https://www.appcoda.com/quick-look-framework/">this tutorial</a> on using the Quick Look framework to preview documents.</p>
<p>This year, for iOS 12, Apple has introduced Quick Look for Augmented Reality objects. This means that you can share <code>.usdz</code> (more on that later) files in Mail, Messages, or any application that supports this type of Quick Look. The recipient can open it up and view the object without having to download an additional app.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-demo.jpg" alt="An Introduction to AR Quick Look in iOS" width="320" height="569" class="aligncenter size-full wp-image-13731" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-demo.jpg 320w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-demo-200x356.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-demo-169x300.jpg 169w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-demo-50x89.jpg 50w" sizes="(max-width: 320px) 100vw, 320px"></p>
<p>In the above image, you can see a teapot being previewed in AR without the help of a separate app. However, AR Quick Look isn&#x2019;t limited to apps: you can integrate it into websites as well! In this tutorial, I&#x2019;ll walk you through integrating AR Quick Look into an iOS application, as well as building a very basic HTML website using GitHub Pages to see how we can add AR Quick Look to websites. For more examples, check out Apple&#x2019;s <a href="https://developer.apple.com/arkit/gallery/?ref=appcoda.com">AR Gallery</a> on any device running iOS 12!</p>
<div class="alert green"><strong>Note:</strong> For this tutorial, we&#x2019;ll be using Xcode 10 which is still in beta at the time of writing. To actually see AR Quick Look in action, you&#x2019;ll need to run the app on a device running iOS 12. Keep this in mind throughout the development of your app.</div>
<h2>What is USDZ?</h2>
<p>Before we start coding, it&#x2019;s important to understand what USDZ is. If you&#x2019;ve worked with 3D models before, you&#x2019;re probably familiar with formats like <code>.OBJ</code>, <code>.DAE</code>, or <code>.sketchup</code>. USDZ, which stands for <strong>Universal Scene Description Zip</strong>, is a file format created through a collaboration between Pixar and Apple.</p>
<p>At its core, a USDZ file is nothing more than a <code>.zip</code> archive that packages the model and its textures into a single file. This single-file packaging is why USDZ, rather than any other 3D model format, is used for AR Quick Look.</p>
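<p>You can see the idea with Python&#x2019;s standard <code>zipfile</code> module. The sketch below builds a mock archive in memory the way a <code>.usdz</code> bundles its assets; the entry names are made up for illustration, and real USDZ files store their entries uncompressed, which is why <code>ZIP_STORED</code> is used here.</p>

```python
import io
import zipfile

# build a mock single-file package in memory
archive = io.BytesIO()
with zipfile.ZipFile(archive, "w", zipfile.ZIP_STORED) as z:
    z.writestr("model.usdc", b"binary scene description")
    z.writestr("textures/diffuse.png", b"png bytes")

# reading it back shows the model and its textures travel together
with zipfile.ZipFile(archive) as z:
    names = z.namelist()
# names -> ['model.usdc', 'textures/diffuse.png']
```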
<p>Now you&#x2019;re probably wondering, &#x201C;How do I create a USDZ file?&#x201D; Well, the way it works is that you create your own 3D model with your favorite 3D modelling software (AutoCAD, Blender, Maya, etc.) and then use Xcode Command Line Tools to convert this file to a <code>.usdz</code> file format.</p>
<p>Let&#x2019;s try converting our own model into the USDZ file format!</p>
<h2>Converting a 3D model to a USDZ file format</h2>
<p>Converting a model to USDZ is quite simple and requires only one line of code! We&#x2019;ll be converting a 3D Object model of an egg that I created. You can download the model <a href="https://github.com/appcoda/AR-Quick-Look-Demo/raw/master/egg.obj?ref=appcoda.com">here</a>.</p>
<p>When you download the model, you&#x2019;ll see that it&#x2019;s simply an egg. Now, let&#x2019;s try converting the model into a USDZ file format. Open Terminal and type the following line:</p>
<pre class="bash">xcrun usdz_converter /Users/You/PATH/TO/egg.obj /Users/You/CHOOSE/A/PATH/TO/SAVE/egg.usdz
</pre>
<p>That&#x2019;s all! Here&#x2019;s what my terminal looks like:</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-3.png" alt="An Introduction to AR Quick Look in iOS" width="930" height="586" class="aligncenter size-full wp-image-13695" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-3.png 930w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-3-200x126.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-3-476x300.png 476w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-3-768x484.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-3-860x542.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-3-680x428.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-3-400x252.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-3-50x32.png 50w" sizes="(max-width: 930px) 100vw, 930px"></p>
<p>Press enter. Within a few seconds, you will see the <code>.usdz</code> file saved to the path you chose. Press the spacebar to Quick Look the file.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-4.png" alt="An Introduction to AR Quick Look in iOS" width="949" height="718" class="aligncenter size-full wp-image-13696" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-4.png 949w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-4-200x151.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-4-397x300.png 397w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-4-768x581.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-4-860x651.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-4-680x514.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-4-400x303.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-4-50x38.png 50w" sizes="(max-width: 949px) 100vw, 949px"></p>
<p>Don&#x2019;t worry if it&#x2019;s black. I&#x2019;m not sure if this is a bug on Apple&#x2019;s part or if they meant it to be like this. Either way, when you preview it, all the colors will be restored.</p>
<p>And that&#x2019;s all! Now that we have our own <code>USDZ</code> file, let&#x2019;s begin with integrating it into our project.</p>
<h2>Adding AR Quick Look in your apps</h2>
<p>Let&#x2019;s begin by downloading the starter project <a href="https://github.com/appcoda/AR-Quick-Look-Demo/raw/master/ARQuickLookStarter.zip?ref=appcoda.com">here</a>. Take a look around. You&#x2019;ll see that there is already a collection view in place.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5.png" alt="An Introduction to AR Quick Look in iOS" width="1440" height="900" class="aligncenter size-full wp-image-13697" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-5-50x31.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Run the app. You&#x2019;ll see a list of models, but when you tap on them, nothing happens. It&#x2019;s up to us to make sure that users can quick look the model!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-6.png" alt="An Introduction to AR Quick Look in iOS" width="499" height="795" class="aligncenter size-full wp-image-13698" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-6.png 499w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-6-200x319.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-6-188x300.png 188w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-6-400x637.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-6-50x80.png 50w" sizes="(max-width: 499px) 100vw, 499px"></p>
<p>First, let&#x2019;s add our Egg model to the <code>Models</code> folder. Drag <code>egg.usdz</code> to the models folder. Make sure that when you drop it into the folder, you check the target box as shown below.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7.png" alt="An Introduction to AR Quick Look in iOS" width="1440" height="813" class="aligncenter size-full wp-image-13699" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7-531x300.png 531w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7-768x434.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7-1024x578.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7-1240x700.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7-860x486.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7-680x384.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7-400x226.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-7-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Next, head over to <code>ViewController.swift</code> and add <code>egg</code> to the <code>models</code> array. This way when we run our app, the model will show up in the list. Just to be sure, run the app again.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8.png" alt="An Introduction to AR Quick Look in iOS" width="1440" height="812" class="aligncenter size-full wp-image-13700" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8-532x300.png 532w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8-768x433.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8-1024x577.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8-1240x699.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8-860x485.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8-400x226.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-8-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>It works! Now all that&#x2019;s left is adding the code to Quick Look these models. First, import the <code>QuickLook</code> framework. Just as we adopt the data source and delegate protocols of a <code>UICollectionView</code> to supply its data, we adopt the corresponding Quick Look protocols. Modify your code to look like below.</p>
<pre lang="swift">import UIKit
import Foundation
import QuickLook

class ViewController: UIViewController, UICollectionViewDelegate, UICollectionViewDataSource, QLPreviewControllerDelegate, QLPreviewControllerDataSource
</pre>
<p>There are two methods we need to add in order to conform to the protocols we just added: <code>numberOfPreviewItems(in:)</code> and <code>previewController(_:previewItemAt:)</code>. These should look familiar if you&#x2019;ve worked with <code>UITableView</code> or <code>UICollectionView</code>. Add the following code towards the bottom of the class, below <code>collectionView(didSelectItemAt)</code>.</p>
<pre lang="swift">func numberOfPreviewItems(in controller: QLPreviewController) -&gt; Int {
    return 1
}
    
func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -&gt; QLPreviewItem {
    let url = Bundle.main.url(forResource: models[thumbnailIndex], withExtension: &quot;usdz&quot;)!
    return url as QLPreviewItem
}
</pre>
<ol>
<li>The first method asks how many items to preview at a time. Since we want to preview one 3D model at a time, we return 1.</li>
<li>The second method asks which file to preview when the item at a given <code>index</code> is selected. We define a constant called <code>url</code> holding the path to our <code>.usdz</code> file, then return it as a <code>QLPreviewItem</code>.</li>
</ol>
<div class="alert gray"><strong>Note</strong>: Notice how we use <em>thumbnailIndex</em> to specify which model we use. We set the number of <em>thumbnailIndex</em> when the user taps on the collection view cell as handled in the <em>collectionView(didSelectItemAt)</em> method.</div>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10.png" alt="An Introduction to AR Quick Look in iOS" width="1440" height="810" class="aligncenter size-full wp-image-13702" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10-533x300.png 533w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10-768x432.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10-1024x576.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10-1240x698.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10-860x484.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10-400x225.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-10-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>If you run the code now, nothing will happen. Why? Because we never added the logic to present the <code>QLPreviewController</code>. Navigate to the <code>collectionView(didSelectItemAt)</code> method and modify it to look like the following:</p>
<pre lang="swift">func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) {
    thumbnailIndex = indexPath.item

    let previewController = QLPreviewController()
    previewController.dataSource = self
    previewController.delegate = self
    present(previewController, animated: true)
}
</pre>
<p>Just as I mentioned earlier, we set <code>thumbnailIndex</code> to the index of the cell the user taps. This tells the Quick Look data source methods which model to use. If you use Quick Look in your apps for any file type, you will always present it in a <code>QLPreviewController</code>. Whether it is a document, an image, or, in our case, a 3D model, the <code>QuickLook</code> framework requires you to present these files in a <code>QLPreviewController</code>. We set the <code>previewController</code>&#x2019;s data source and delegate to <code>self</code> and then present it!</p>
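<p>One caveat: the force unwrap in <code>previewController(_:previewItemAt:)</code> will crash if a name in <code>models</code> has no matching <code>.usdz</code> file in the bundle. Here is a Foundation-only sketch of a safer lookup; the helper name and fallback message are my own, not part of the tutorial.</p>
<pre lang="swift">import Foundation

// Returns the bundle URL for a .usdz model, or nil if the file is missing.
func modelURL(named name: String, in bundle: Bundle = .main) -&gt; URL? {
    return bundle.url(forResource: name, withExtension: &quot;usdz&quot;)
}

if let url = modelURL(named: &quot;egg&quot;) {
    print(&quot;Previewing \(url.lastPathComponent)&quot;)
} else {
    print(&quot;egg.usdz is not in the app bundle&quot;)
}
</pre>
<p>In the data source method you could then <code>guard let</code> the URL and show an alert instead of crashing when a model file is missing.</p>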
<p>Here is what all the Quick Look code should look like:</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11.png" alt="An Introduction to AR Quick Look in iOS" width="1435" height="810" class="aligncenter size-full wp-image-13704" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11.png 1435w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11-531x300.png 531w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11-768x434.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11-1024x578.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11-1240x700.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11-860x485.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11-680x384.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11-400x226.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-11-50x28.png 50w" sizes="(max-width: 1435px) 100vw, 1435px"></p>
<p>Build and run your app. Make sure to try the app on a real device running iOS 12. Running the app on a simulator won&#x2019;t present the Quick Look preview.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12.png" alt="An Introduction to AR Quick Look in iOS" width="1920" height="1080" class="aligncenter size-full wp-image-13705" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12.png 1920w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12-533x300.png 533w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12-768x432.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12-1024x576.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12-1680x945.png 1680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12-1240x698.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12-860x484.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12-400x225.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-12-50x28.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></p>
<p>It works as expected! You should now know how to integrate AR Quick Look into your apps. But that&#x2019;s not all: AR Quick Look also offers web support! In the next section, I&#x2019;ll guide you through building a website with HTML and AR Quick Look.</p>
<h2>Adding AR Quick Look to Websites in HTML</h2>
<div class="alert green"><strong>Editor&#x2019;s note:</strong> If you are familiar with HTML and web development, you can skip over to the end of the tutorial to check out the demo. However, if you have no idea about how to build a website using GitHub Page, don&#x2019;t miss this section to learn building your first website!</div>
<p>Now that we have an iOS app working, let&#x2019;s build a similar feature on a website using HTML. If you&#x2019;ve never worked with HTML before, don&#x2019;t worry! I&#x2019;ll guide you through building a very simple website. Let&#x2019;s get started!</p>
<p>To begin, open any text editor on your Mac. This can be TextEdit or any other similar application. I will be using <a href="https://macromates.com/download?ref=appcoda.com">TextMate</a>. Type the following:</p>
<pre class="html">&lt;!DOCTYPE html&gt;
&lt;html&gt;

&lt;/html&gt;
</pre>
<p>This is the way you begin all HTML websites. The <code>&lt;!DOCTYPE html&gt;</code> is an instruction to the web browser about what HTML version the page is written in. We are using HTML 5.</p>
<p>The angled brackets are known as <strong>tags</strong>. Similar to how we declare all our code in a <code>class</code> in Swift, all HTML code must be declared in between the <code>&lt;html&gt;</code> and <code>&lt;/html&gt;</code> tags. The <code>&lt;html&gt;</code> tag marks the start of the HTML code, whereas the <code>&lt;/html&gt;</code> tag signifies the end.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-13.png" alt="An Introduction to AR Quick Look in iOS" width="940" height="625" class="aligncenter size-full wp-image-13706" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-13.png 940w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-13-200x133.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-13-451x300.png 451w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-13-768x511.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-13-860x572.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-13-680x452.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-13-400x266.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-13-50x33.png 50w" sizes="(max-width: 940px) 100vw, 940px"></p>
<p>Every time you visit a website, you&#x2019;ll see the title of that website in the tab.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13707" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-14-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>Let&#x2019;s see how to write this in HTML! In between the HTML tags, type the following code:</p>
<pre class="html">&lt;head&gt;
    &lt;title&gt;AR Library&lt;/title&gt;
&lt;/head&gt;
</pre>
<p>The <code>&lt;head&gt;</code> tag is where all the metadata about a website is stored. Examples of metadata include links to stylesheets, scripts, favicons, or, in our case, the title.</p>
<p>Since we&#x2019;re defining the title of our website, we put the title in between the <code>&lt;title&gt;</code> and <code>&lt;/title&gt;</code> tags.</p>
<blockquote><p>
One thing you&#x2019;ll notice about HTML is that, unlike string literals in Swift, text content doesn&#x2019;t require quotation marks around it. This is one of my favorite aspects of HTML.</p></blockquote>
<p>Your text file should look like this:</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-15.png" alt="An Introduction to AR Quick Look in iOS" width="1000" height="702" class="aligncenter size-full wp-image-13708" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-15.png 1000w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-15-200x140.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-15-427x300.png 427w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-15-768x539.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-15-860x604.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-15-680x477.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-15-400x281.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-15-50x35.png 50w" sizes="(max-width: 1000px) 100vw, 1000px"></p>
<p>Now, we have to define the body of our website. This means all the text, button, and images you&#x2019;ll see in a website. Like before, we declare these in the <code>&lt;body&gt;</code> tag. Right under the <code>&lt;/head&gt;</code> line, we&#x2019;ll add the following code:</p>
<pre class="html">&lt;body&gt;
    &lt;h1&gt;Welcome to the AR Library&lt;/h1&gt;
    &lt;p&gt;Welcome to the AR Library website. I created this website in order to view AR objects from the web on any device running iOS 12. Coincidentally, this is the first time I made a website with HTML! It&apos;s a lot of fun!&lt;/p&gt;
&lt;/body&gt;
</pre>
<p>Inside our <code>&lt;body&gt;</code> tag, you should see two new tags: <code>&lt;h1&gt;</code> and <code>&lt;p&gt;</code>. <code>H1</code> stands for <strong>Header 1</strong>, usually used for the title of a section. <code>P</code> stands for <strong>paragraph</strong>, used when you want to write a longer body of text. Remember, you can change the title and paragraph text to whatever you want!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-16.png" alt="An Introduction to AR Quick Look in iOS" width="1000" height="702" class="aligncenter size-full wp-image-13709" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-16.png 1000w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-16-200x140.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-16-427x300.png 427w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-16-768x539.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-16-860x604.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-16-680x477.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-16-400x281.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-16-50x35.png 50w" sizes="(max-width: 1000px) 100vw, 1000px"></p>
<p>Save your file. Make sure that when you do, you use the <code>.html</code> extension.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-17.png" alt="An Introduction to AR Quick Look in iOS" width="1000" height="702" class="aligncenter size-full wp-image-13710" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-17.png 1000w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-17-200x140.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-17-427x300.png 427w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-17-768x539.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-17-860x604.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-17-680x477.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-17-400x281.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-17-50x35.png 50w" sizes="(max-width: 1000px) 100vw, 1000px"></p>
<p>Click on the file you saved and it should open in Safari (or your default browser)!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13711" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-18-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>Congratulations! You just built your first website in HTML!</p>
<blockquote><p>
You&#x2019;re probably wondering if you can change the font and font size. This is possible through CSS. Currently that is beyond the scope of the tutorial but you can find a great article <a href="https://www.w3schools.com/html/html_css.asp?ref=appcoda.com">here</a>.</p></blockquote>
<h3>Adding AR Buttons</h3>
<p>Now that we have some text for our website, let&#x2019;s add the buttons for users to launch the AR Quick Look view in the website. Since we are making a button, this will still be inside the <code>&lt;body&gt;</code> tag of our code. Below the <code>&lt;p&gt;</code> tag, type the following.</p>
<pre class="html">&lt;a href=&quot;egg.usdz&quot; rel=&quot;ar&quot;&gt;
    &lt;img src=&quot;egg.png&quot; width=200&gt;
&lt;/a&gt;
</pre>
<p>This is the <code>&lt;a&gt;</code> tag, which defines a hyperlink. The <code>&lt;a&gt;</code> tag has several customizable attributes, which we set above.</p>
<ol>
<li>The first attribute is <code>href</code>. This is basically a path to the document we want to navigate to when our button is clicked on. The &#x201C;document&#x201D; is our 3D model, so I put the name of a <code>.usdz</code> file there.</li>
<li>The second is <code>rel</code>. This specifies the relationship between the current page and the linked resource. I set it to <code>ar</code> to tell Safari that <code>egg.usdz</code> is an AR model, which is what triggers AR Quick Look.</li>
<li>Now we have our button defined, but we haven&#x2019;t defined what it should look like. By using the <code>&lt;img&gt;</code> tag, I&#x2019;m defining the image our button should show. This way, when users click on the image, they&#x2019;ll be directed to the AR Quick Look view. I also set the <code>width</code> of my image so that it&#x2019;s not too big. The image I&#x2019;m using is the one we already have in our Xcode project.</li>
</ol>
<p>That&#x2019;s it! You can add the other buttons in a similar manner.</p>
<blockquote><p>
When referencing your image source and USDZ files, make sure that they are in the same folder as your HTML file.</p></blockquote>
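<p>For reference, here is what the assembled file might look like once the header, body text, and AR link are combined. (The file names match the ones used above; add more <code>&lt;a&gt;</code> blocks in the same way for your other models.)</p>
<pre class="html">&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
    &lt;title&gt;AR Library&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
    &lt;h1&gt;Welcome to the AR Library&lt;/h1&gt;
    &lt;p&gt;Welcome to the AR Library website.&lt;/p&gt;
    &lt;!-- Safari on iOS 12 sees rel=&quot;ar&quot; and opens the linked USDZ in AR Quick Look. --&gt;
    &lt;a href=&quot;egg.usdz&quot; rel=&quot;ar&quot;&gt;
        &lt;img src=&quot;egg.png&quot; width=200&gt;
    &lt;/a&gt;
&lt;/body&gt;
&lt;/html&gt;
</pre>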
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-19.png" alt="An Introduction to AR Quick Look in iOS" width="1205" height="809" class="aligncenter size-full wp-image-13712" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-19.png 1205w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-19-200x134.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-19-447x300.png 447w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-19-768x516.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-19-1024x687.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-19-860x577.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-19-680x457.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-19-400x269.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-19-50x34.png 50w" sizes="(max-width: 1205px) 100vw, 1205px"></p>
<p>Open the file in your web browser. Take a look: your first HTML website hosting a powerful AR feature!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13713" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-20-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>However, when you click on an image, it only directs you to the actual folder on your device. Plus, there&#x2019;s no way to view it on an iPhone or iPad. This is where GitHub Pages comes in!</p>
<h3>Uploading to GitHub Pages</h3>
<p>GitHub Pages is a great way to host static websites. Many people use GitHub Pages as a way to showcase either a r&#xE9;sum&#xE9; or an About page for a project or organization.</p>
<p>One of the advantages of GitHub Pages is that it can be edited from a repository on your account. This makes it a great place to store files (such as images and AR models) and reference them in your website! Let&#x2019;s explore how this can be done! If you don&#x2019;t have a GitHub account already, create one <a href="https://github.com/join?source=header-home">here</a>.</p>
<p>Once you have an account, go to the home page and click on the Plus button in the upper right corner. Click on <code>New repository</code>.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13714" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-21-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>The way GitHub Pages works is that you&#x2019;re given only one domain: <em>username</em>.github.io. Any pages you create live under that URL. Therefore, name your repository <em>username</em>.github.io. In the image below, you can see that I named my repository <code>aidev1065.github.io</code>, since <code>aidev1065</code> is my username. You can leave the rest of the settings as they are and click on <code>Create Repository</code>.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13715" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-22-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>When you&#x2019;re presented with the repository page, navigate to the Settings tab and scroll down until you come to the GitHub Pages section.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13716" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-23-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>Click on <code>Choose a theme</code> under &#x201C;Theme Chooser&#x201D;. This sets a theme for our page. There are a variety of themes; choose whichever suits you, but I&#x2019;m going with Cayman!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13717" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-24-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>After you click on <code>Select theme</code>, you&#x2019;ll be presented with a Markdown file. This is used to present information on our website. If you&#x2019;re not familiar with Markdown syntax, don&#x2019;t worry. It&#x2019;s not that big of a deal. Just delete everything in the <code>index.md</code> file and type the following:</p>
<pre># AR Library
This is a website for an AR Library! You can view it [here](Website.html)!
</pre>
<p>What&#x2019;s important to understand here is that inside the parentheses, you put the name of the HTML file you created earlier!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13718" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-25-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>Scroll to the bottom and click on <code>Commit changes</code>. Now you have your Markdown file ready! If you go to any web browser and type in &#x201C;<em>username</em>.github.io&#x201D;, you&#x2019;ll be directed to your own GitHub Page!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13719" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-26-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>However, when we click on &#x201C;here&#x201D;, we get an invalid page error. This is because we have not uploaded the HTML file, USDZ files, and images! Let&#x2019;s do that now!</p>
<p>Head on back to the repository page and click on the button <code>Upload files</code>.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13720" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-27-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>Now all that&#x2019;s left is uploading the HTML files, the USDZ models, and the images! There should be 19 files in total: <em>1 HTML file, 9 USDZ models, and 9 images</em>.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28.png" alt="An Introduction to AR Quick Look in iOS" width="1551" height="920" class="aligncenter size-full wp-image-13721" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28.png 1551w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28-200x119.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28-506x300.png 506w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28-768x456.png 768w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28-1024x607.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28-1240x736.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28-860x510.png 860w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28-680x403.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28-400x237.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-28-50x30.png 50w" sizes="(max-width: 1551px) 100vw, 1551px"></p>
<p>Scroll to the bottom and click on <code>Commit changes</code>. This will take a few minutes. When everything is done, go back to your GitHub Page: &#x201C;<em>username</em>.github.io&#x201D;. Now, when you click on &#x201C;here&#x201D;, you&#x2019;ll see the HTML website you created earlier.</p>
<p>Also, when you open the website on a device running iOS 12, you&#x2019;ll see the ARKit logo in the top right of the image. This means that you can Quick Look the model!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-29.png" alt="An Introduction to AR Quick Look in iOS" width="662" height="1194" class="aligncenter size-full wp-image-13722" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-29.png 662w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-29-200x361.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-29-166x300.png 166w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-29-568x1024.png 568w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-29-400x721.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-29-50x90.png 50w" sizes="(max-width: 662px) 100vw, 662px"></p>
<p>When you click any of the images, you&#x2019;re presented with the same viewer as that of the iOS app!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-30.png" alt="An Introduction to AR Quick Look in iOS" width="750" height="1334" class="aligncenter size-full wp-image-13723" srcset="https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-30.png 750w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-30-200x356.png 200w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-30-169x300.png 169w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-30-576x1024.png 576w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-30-680x1209.png 680w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-30-400x711.png 400w, https://www.appcoda.com/content/images/wordpress/2018/07/ar-quick-look-30-50x89.png 50w" sizes="(max-width: 750px) 100vw, 750px"></p>
<h2>Conclusion</h2>
<p>Congratulations on making it this far! I hope you have enjoyed and learned something valuable from my tutorial. You should now understand how to convert 3D models to USDZ files, integrate them in your apps using <code>QLPreviewController</code>, use HTML to build a basic website, and use GitHub Pages to host your files. Feel free to share this tutorial on your social networks so that your friends can learn too!</p>
<p>Here are some resources related to the tutorial that can help you expand this project:</p>
<ul>
<li><a href="https://developer.apple.com/videos/play/wwdc2018/603/?ref=appcoda.com">Integrating Apps and Content with AR Quick Look &#x2013; WWDC 2018</a></li>
<li><a href="https://developer.apple.com/documentation/quicklook?ref=appcoda.com">Quick Look Framework Documentation</a></li>
<li><a href="https://www.w3schools.com/htmL/?ref=appcoda.com">HTML Tutorial</a></li>
<li><a href="https://www.w3schools.com/Css/?ref=appcoda.com">CSS Tutorial</a></li>
<li><a href="https://pages.github.com/?ref=appcoda.com">Official GitHub Pages Page</a></li>
</ul>
<p>For reference, you can download the complete Xcode project on GitHub <a href>here</a> and visit the repository I created in the tutorial <a href="https://github.com/aidev1065/aidev1065.github.io?ref=appcoda.com">here</a>.</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[What's New in Core ML 2]]></title><description><![CDATA[<!--kg-card-begin: html-->
<p>Core ML is Apple&#x2019;s Machine Learning framework. Released just a year ago, Core ML offers developers a way to integrate powerful and smart machine learning capabilities into their apps with just a few lines of code! This year, at WWDC 2018, Apple released Core ML 2.0- the</p>]]></description><link>https://www.appcoda.com/coreml2/</link><guid isPermaLink="false">66612a0f166d3c03cf01146e</guid><category><![CDATA[AI]]></category><category><![CDATA[Swift]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Tue, 26 Jun 2018 13:47:23 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2018/06/andras-vas-655226-unsplash.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<img src="https://www.appcoda.com/content/images/wordpress/2018/06/andras-vas-655226-unsplash.jpg" alt="What&apos;s New in Core ML 2"><p>Core ML is Apple&#x2019;s Machine Learning framework. Released just a year ago, Core ML offers developers a way to integrate powerful and smart machine learning capabilities into their apps with just a few lines of code! This year, at WWDC 2018, Apple released Core ML 2.0, the next version of Core ML, centered around streamlining the process by optimizing model size, improving performance, and giving developers the ability to customize their own Core ML models.</p>
<p>In this tutorial, I&#x2019;ll catch you up on all of the new features introduced in Core ML 2.0 and how you can apply them to your Machine Learning app! If you are new to Core ML, I would recommend getting familiar with it through this <a href="https://www.appcoda.com/coreml-introduction/">tutorial</a>. If you are already familiar, let&#x2019;s get started!</p>
<div class="alert green"><strong>Note:</strong> To try these techniques out, you will need to have macOS Mojave installed on your Mac. Also, you should have the <em>coremltools beta</em> installed. If not, don&#x2019;t worry. I&#x2019;ll explain how to download this later in the tutorial.</div>
<h2>A Quick Recap</h2>
<p>There are lots of great apps in the App Store with the ability to perform powerful tasks. For example, you could find an app which understands text. Or maybe an app which knows what workout you&#x2019;re doing based on the motion of your device. Even further, there are apps which apply filters to images based on a previous image. These apps have one thing in common: <em>they are all examples of machine learning and all of them can be created with a Core ML model.</em></p>
<p><img decoding="async" src="https://docs-assets.developer.apple.com/published/7e05fb5a2e/4b0ecf58-a51a-4bfa-a361-eb77e59ed76e.png" alt="What&apos;s New in Core ML 2"><br>
Source: Apple</p>
<p>Core ML makes it simple for developers to integrate machine learning models into their apps. You can create an app which understands context in a conversation or can recognize different audio. Furthermore, Apple has made it possible for developers to take the extra step with real-time image analysis and natural language understanding through two of their frameworks: <a href="https://developer.apple.com/documentation/vision?ref=appcoda.com">Vision</a> and <a href="https://developer.apple.com/documentation/naturallanguage?ref=appcoda.com">Natural Language</a>.</p>
<p>With the <code>VNCoreMLRequest</code> API and the <code>NLModel</code> API, you can heavily increase your app&#x2019;s ML capabilities since Vision and Natural Language are built upon Core ML!</p>
<ul>
<li><a href="https://www.appcoda.com/vision-framework-introduction/">Vision Tutorial</a></li>
<li><a href="https://www.appcoda.com/natural-language-processing-swift/">Natural Language Tutorial</a></li>
</ul>
<p>This year, Apple focused on three main points to help Core ML developers.</p>
<ol>
<li>The model size</li>
<li>The performance of a model</li>
<li>Customizing a model</li>
</ol>
<p>Let&#x2019;s explore these three points!</p>
<h2>Model Size</h2>
<p>One huge advantage of Core ML is that everything is done on-device. This way, a user&#x2019;s privacy is always safe and the computation can run anywhere. However, more accurate machine learning models tend to be larger, and importing these models into your apps can take up a large amount of space on a user&#x2019;s device.</p>
<p>Apple decided to give developers the tools to <strong>quantize</strong> their Core ML models. Quantizing a model refers to techniques for storing and calculating numbers in a more compact form. At its core, any machine learning model is just a machine computing numbers. If we reduce those numbers, or store them in a form that takes less space, we can drastically reduce the size of a model. This can lead to reduced runtime memory usage and faster calculations!</p>
<p>There are 3 main parts to a machine learning model:</p>
<ul>
<li>The number of models</li>
<li>The number of weights</li>
<li><strong>The size of the weights</strong></li>
</ul>
<p>When we quantize a model, we are reducing the <strong>size of the weights</strong>! In iOS 11, Core ML models were stored with 32-bit weights. With iOS 12 and Core ML 2, Apple has given us the ability to store models with 16-bit and even 8-bit weights! This is what we&#x2019;ll be looking at in this tutorial!</p>
<blockquote><p>In case you aren&#x2019;t familiar with what weights are, here&#x2019;s a really good analogy. Say that you&#x2019;re going from your house to the supermarket. The first time, you may take a certain path. The second time, you&#x2019;ll try to find a shorter path to the supermarket, since you already know your way to the market. And the third time, you&#x2019;ll take an even shorter route because you have the knowledge of the previous two paths. Each time you go to the market, you&#x2019;ll keep taking a shorter path as you <strong>learn</strong> over time! This knowledge of knowing which route to take is known as the weights. Hence, the most accurate path is the one with the most weights!</p></blockquote>
<p>Let&#x2019;s put this into practice. Time for some code!</p>
<h3>Weight Quantization</h3>
<p>As an example, let&#x2019;s use a popular machine learning model called Inception v3 for this demo. You can download the model in the Core ML format <a href="https://docs-assets.developer.apple.com/coreml/models/Inceptionv3.mlmodel?ref=appcoda.com">here</a>.</p>
<p>Opening this model, you can see that it takes up quite a bit of space at 94.7 MB.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12507" src="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-1.jpg" alt="What&apos;s New in Core ML 2" width="927" height="803" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-1.jpg 927w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-1-200x173.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-1-346x300.jpg 346w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-1-768x665.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-1-860x745.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-1-680x589.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-1-400x346.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-1-50x43.jpg 50w" sizes="(max-width: 927px) 100vw, 927px"></p>
<p>We will use the Python package <code>coremltools</code> to quantize this model. Let&#x2019;s see how!</p>
<p><em>In case you don&#x2019;t have <code>python</code> or <code>pip</code> installed on your device, you can learn the installation procedures over <a href="https://www.appcoda.com/core-ml-tools-conversion/">here</a>.</em></p>
<p>First, you need to make sure to install the beta version of <code>coremltools</code>. Open Terminal and type in the following:</p>
<pre>pip install coremltools==2.0b1
</pre>
<p>You should see an output like this.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12508" src="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-2.png" alt="What&apos;s New in Core ML 2" width="1016" height="637" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-2.png 1016w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-2-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-2-478x300.png 478w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-2-768x482.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-2-860x539.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-2-680x426.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-2-400x251.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-2-50x31.png 50w" sizes="(max-width: 1016px) 100vw, 1016px"></p>
<p>This command installs beta 1 of Core ML Tools. The next few steps require some Python. No worries, though; it&#x2019;s really simple and doesn&#x2019;t require too much code! Open up a Python editor of your choice or follow along in the Terminal. First, let&#x2019;s import the <code>coremltools</code> package. Type <code>python</code> in the Terminal and then type the following once the editor shows up:</p>
<pre lang="python">import coremltools
from coremltools.models.neural_network.quantization_utils import *
</pre>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12509" src="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-3.png" alt="What&apos;s New in Core ML 2" width="842" height="589" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-3.png 842w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-3-200x140.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-3-429x300.png 429w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-3-768x537.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-3-680x476.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-3-400x280.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-3-50x35.png 50w" sizes="(max-width: 842px) 100vw, 842px"></p>
<p>This imports the Core ML Tools package as well as all the quantization utilities. Next, let&#x2019;s define a variable <code>model</code> and point it at the <code>Inceptionv3.mlmodel</code> file you just downloaded.</p>
<pre lang="python">model = coremltools.models.MLModel(&apos;/PATH/TO/Inceptionv3.mlmodel&apos;)
</pre>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12510" src="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-4.jpg" alt="What&apos;s New in Core ML 2" width="836" height="574" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-4.jpg 836w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-4-200x137.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-4-437x300.jpg 437w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-4-768x527.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-4-680x467.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-4-400x275.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-4-50x34.jpg 50w" sizes="(max-width: 836px) 100vw, 836px"></p>
<p>Before we quantize the model (it takes only 2 lines!), let me give you some background on neural networks.</p>
<p>A neural network is composed of different layers. These layers are nothing but math functions with many parameters. These parameters are known as weights.</p>
<p><img decoding="async" src="https://cdn-images-1.medium.com/max/1318/1*Gh5PS4R_A5drl5ebd_gNrg@2x.png" alt="What&apos;s New in Core ML 2"><br>
Source: Towards Data Science</p>
<p>When we quantize weights, we take the minimum and maximum values of the weights and map them to a smaller range. There are many ways to map them, but the most commonly used are linear and lookup. In <em>Linear Quantization</em>, you map the weights evenly across the reduced range. In <em>Lookup Table Quantization</em>, the model constructs a table, groups the weights based on similarity, and reduces them to the table entries.</p>
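<p>To make the idea concrete, here is a toy, pure-Python sketch of linear quantization. This is illustrative only; the actual <code>coremltools</code> implementation is more sophisticated, and the helper name <code>linear_quantize</code> is made up for this example. Each weight gets stored as a small integer index into evenly spaced levels between the minimum and maximum weight, and an approximate weight is reconstructed from that index.</p>

```python
# Toy sketch of linear weight quantization (illustrative only, NOT the
# actual coremltools implementation). Each float weight is mapped to one
# of 2**num_bits evenly spaced levels between the min and max weight, so
# it can be stored as a small integer instead of a 32-bit float.

def linear_quantize(weights, num_bits):
    lo, hi = min(weights), max(weights)
    levels = 2 ** num_bits - 1            # number of quantization steps
    step = (hi - lo) / levels
    # Store each weight as a small integer index...
    indices = [round((w - lo) / step) for w in weights]
    # ...and reconstruct an approximate weight from the index.
    return [lo + i * step for i in indices]

weights = [0.12, -0.53, 0.98, 0.07, -0.81, 0.44]   # a toy "layer"
approx = linear_quantize(weights, num_bits=4)       # 16 levels
```
<p>Every reconstructed weight lands within half a quantization step of the original, which is why fewer bits (a larger step) means a less accurate model.</p>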
<p>If this sounds complicated, don&#x2019;t worry. All we need to do is choose the number of bits we want our model to be represented by and the algorithm to use. First, let&#x2019;s see what happens if we linearly quantize a model.</p>
<pre lang="python">lin_quant_model = quantize_weights(model, 16, &quot;linear&quot;)
</pre>
<p>In the above code, we quantize the weights of the Inceptionv3 model to 16 bits and use linear quantization. Running the code should give you a long list of every layer the program is quantizing.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12511" src="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5.png" alt="What&apos;s New in Core ML 2" width="1440" height="900" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-5-50x31.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Let&#x2019;s save the model and see how it compares to the original model. Choose a path to save it to and type the following.</p>
<pre lang="python">lin_quant_model.save(&apos;Path/To/Save/QuantizedInceptionv3.mlmodel&apos;)
</pre>
<p>Now open both models and compare the sizes.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12512" src="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6.jpg" alt="What&apos;s New in Core ML 2" width="1440" height="900" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6.jpg 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6-200x125.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6-480x300.jpg 480w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6-768x480.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6-1024x640.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6-1240x775.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6-860x538.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6-680x425.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6-400x250.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-6-50x31.jpg 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>When we represent the <em>Inceptionv3</em> model in a 16-bit format, it takes up less space!</p>
<p>However, it is important to remember what weight quantization really is. Earlier, in my analogy, I said that more weights yield more accuracy. When we quantize a model, we reduce its accuracy along with its size. Quantization is an <strong>accuracy tradeoff</strong>. Quantized weights are approximations of the original weights, so it is always important to run your quantized models and see how they perform.</p>
<p>Ideally, we want to quantize our models while retaining the highest possible accuracy. This can be done by finding the right quantization algorithm. In the previous example, we used <em>Linear Quantization</em>. Now let&#x2019;s try <em>Lookup Quantization</em> and see what happens. Just like before, type the following into Terminal:</p>
<pre lang="python">lut_quant_model = quantize_weights(model, 16, &quot;kmeans&quot;)
</pre>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12513" src="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7.jpg" alt="What&apos;s New in Core ML 2" width="1440" height="900" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7.jpg 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7-200x125.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7-480x300.jpg 480w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7-768x480.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7-1024x640.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7-1240x775.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7-860x538.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7-680x425.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7-400x250.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-7-50x31.jpg 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
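<p>To build some intuition for what the <code>&quot;kmeans&quot;</code> option does, here is a toy, pure-Python sketch of lookup-table quantization (again illustrative only, and the helper name <code>lookup_quantize</code> is made up): a simple one-dimensional k-means groups similar weights together, and the model then stores one small lookup table of group centers plus a tiny index per weight.</p>

```python
# Toy sketch of lookup-table ("kmeans") quantization, illustrative only.
# Instead of evenly spaced levels, cluster the weights by similarity and
# store a lookup table of cluster centers plus a small index per weight.

def lookup_quantize(weights, num_bits, iterations=10):
    k = min(2 ** num_bits, len(set(weights)))  # at most 2**num_bits centers
    # Initialise centers evenly across the sorted distinct weights.
    ordered = sorted(set(weights))
    centers = [ordered[i * (len(ordered) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iterations):
        # Assign each weight to its nearest center...
        clusters = [[] for _ in range(k)]
        for w in weights:
            nearest = min(range(k), key=lambda i: abs(w - centers[i]))
            clusters[nearest].append(w)
        # ...then move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    indices = [min(range(k), key=lambda i: abs(w - centers[i])) for w in weights]
    return centers, indices

weights = [0.10, 0.11, 0.12, -0.90, -0.88, 0.95]
table, indices = lookup_quantize(weights, num_bits=2)  # at most 4 table entries
approx = [table[i] for i in indices]
```
<p>Because the centers follow where the weights actually cluster, a lookup table can approximate unevenly distributed weights more closely than evenly spaced linear levels at the same bit width.</p>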
<p>When the model is done quantizing, we need to compare both our <code>lin_quant_model</code> and our <code>lut_quant_model</code> to the original model by running them through some sample data. This way, we can find out which quantized model is most similar to the original model. Download a folder of sample images <a href="https://github.com/appcoda/ML-Kit-Demo/raw/master/sampleimages.zip?ref=appcoda.com">here</a>. Type the following line and we can see how the linearly quantized model performed!</p>
<pre lang="python">compare_models(model, lin_quant_model, &apos;/PATH/TO/SampleImages&apos;)
</pre>
<p>This may take a while but after both models are done processing, you will receive an output which looks like this:</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12514" src="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-8.png" alt="What&apos;s New in Core ML 2" width="822" height="551" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-8.png 822w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-8-200x134.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-8-448x300.png 448w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-8-768x515.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-8-680x456.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-8-400x268.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-8-50x34.png 50w" sizes="(max-width: 822px) 100vw, 822px"></p>
<p>We are interested in the Top 1 Agreement. It shows 100%, which means the quantized model&#x2019;s top predictions match the original model&#x2019;s on all the sample images! This is great for us, as we now have a quantized model which takes less space and has approximately the same accuracy as our original model. We could import this into a project now, but let&#x2019;s compare the Lookup Table Quantized model as well!</p>
<pre lang="python">compare_models(model, lut_quant_model, &apos;/PATH/TO/SampleImages&apos;)
</pre>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12515" src="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-9.png" alt="What&apos;s New in Core ML 2" width="787" height="544" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-9.png 787w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-9-200x138.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-9-434x300.png 434w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-9-768x531.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-9-680x470.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-9-400x276.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-9-50x35.png 50w" sizes="(max-width: 787px) 100vw, 787px"></p>
<p>We receive an output of 100% as well, so both quantized models hold up! I encourage you to play around with quantizing different models. In the above example, we quantized the <code>Inceptionv3</code> model down to 16 bits. See if you can continue to quantize the model to an 8-bit and even a 4-bit representation and compare it with the sample data! How did it perform?</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12516" src="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10.jpg" alt="What&apos;s New in Core ML 2" width="1440" height="900" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10.jpg 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10-200x125.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10-480x300.jpg 480w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10-768x480.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10-1024x640.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10-1240x775.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10-860x538.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10-680x425.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10-400x250.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/06/coreml-2-10-50x31.jpg 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>The above image depicts what happened when I quantized the <code>Inceptionv3</code> model to a 1-bit representation using the linear algorithm! As you can see, the model size is drastically reduced, but so is the accuracy. In fact, the model becomes completely useless, with 0% accuracy. Play around with quantization to try to find the happy medium, and always remember to test your quantized models to make sure they perform accurately!</p>
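<p>You can see the same effect in a rough, pure-Python sketch (illustrative only; the toy <code>linear_quantize</code> helper is made up for this example, not part of <code>coremltools</code>): quantize the same toy weights at several bit widths and watch the worst-case reconstruction error grow as the number of levels shrinks.</p>

```python
# Rough sketch of why very low bit widths hurt accuracy (illustrative
# only, not coremltools): linearly quantize the same toy weights at
# several bit widths and compare the worst-case reconstruction error.

def linear_quantize(weights, num_bits):
    lo, hi = min(weights), max(weights)
    levels = 2 ** num_bits - 1
    step = (hi - lo) / levels
    # Snap each weight to the nearest of the evenly spaced levels.
    return [lo + round((w - lo) / step) * step for w in weights]

weights = [(-1) ** i * (i / 10) for i in range(11)]  # a toy "layer"
for bits in (16, 8, 4, 1):
    approx = linear_quantize(weights, bits)
    worst = max(abs(w - a) for w, a in zip(weights, approx))
    print(f"{bits:>2}-bit: worst-case error {worst:.6f}")
```
<p>At 16 bits the error is negligible; at 1 bit every weight collapses onto one of just two values, which is exactly why the 1-bit Inceptionv3 above scored 0%.</p>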
<h2>Performance</h2>
<p>The next area Apple focused on with Core ML 2 was performance. Since we run the ML computations on-device, we want them to be both fast and accurate, which can be pretty complicated. Luckily, Apple has provided us with a way to improve the performance of our Core ML models. Let&#x2019;s walk through an example.</p>
<p>Style Transfer is a machine learning application which transforms one image into the style of another. If you&#x2019;ve used the Prisma app before, you&#x2019;ve seen a sample use of Style Transfer.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12520" src="https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer.jpg" alt="What&apos;s New in Core ML 2" width="1280" height="878" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer.jpg 1280w, https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer-200x137.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer-437x300.jpg 437w, https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer-768x527.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer-1024x702.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer-1240x851.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer-860x590.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer-680x466.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer-400x274.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/06/core-ml-prisma-style-transfer-50x34.jpg 50w" sizes="(max-width: 1280px) 100vw, 1280px"></p>
<p>If we were to look into the neural network behind style transfer, this is what we would notice: the algorithm takes a certain set of inputs, and each layer in the network applies a certain transformation to the original image. This means that the model has to take in <strong>every</strong> input, map it to an output, and make a prediction from it, one at a time. What would this look like in code?</p>
<pre lang="swift">// Loop over inputs
for i in 0..&lt;modelInputs.count {
    modelOutputs[i] = model.prediction(from: modelInputs[i], options: options) 
}
</pre>
<p>In the above code, you see that for each input, we are asking the model to generate a prediction and yield an output based on some <code>options</code>. However, iterating over every input can take a long time.</p>
<p>To combat this, Apple has introduced a brand new Batch API! Instead of looping, a batch feeds all the inputs to the model at once, letting Core ML schedule the work efficiently and return all the predictions together. This can take much less time and, more importantly, much less code!</p>
<p>Here is the above for-loop code written with the new Batch Predict API!</p>
<pre lang="swift">modelOutputs = model.prediction(from: modelInputs, options: options)
</pre>
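<p>For reference, here is a slightly fuller sketch of what that one-liner looks like against the actual iOS 12 API, where the inputs are wrapped in an <code>MLArrayBatchProvider</code>. Treat it as a sketch: it assumes you already have a loaded <code>MLModel</code> and an array of <code>MLFeatureProvider</code> inputs.</p>

```swift
import CoreML

// Batch prediction on iOS 12+: wrap all inputs in an MLArrayBatchProvider
// and hand the whole batch to Core ML in a single call.
func batchPredict(model: MLModel,
                  inputs: [MLFeatureProvider]) throws -> MLBatchProvider {
    let batch = MLArrayBatchProvider(array: inputs)
    return try model.predictions(from: batch, options: MLPredictionOptions())
}
```

<p>This runs on-device only, so there is no self-contained way to demo it without a compiled .mlmodel file in the bundle.</p>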
<p>That&#x2019;s all! Just one line of code is all it takes! You&#x2019;re probably wondering, &#x201C;Wait! I&#x2019;ve never done this before. This sounds complicated. Where would I even use this?&#x201D; That brings me to my last point, which is Customization.</p>
<h2>Customization</h2>
<p>When you open the hood of a neural network, you see that it is comprised of many layers. Sometimes you need to convert a neural network from TensorFlow to Core ML, or perhaps a pipeline from Keras to Core ML. In the occasional case, Core ML simply does not have the tools to convert the model correctly! What do I mean by this? Let&#x2019;s take another example.</p>
<p>Image recognition models are built using Convolutional Neural Networks (CNNs). CNNs consist of a series of highly optimized layers. When you convert a neural network from another format to Core ML, you are transforming each of these layers. However, in the rare scenario where Core ML does not provide the tools to convert a layer, you used to be stuck. With iOS 12, the Apple engineers have introduced the <code>MLCustomLayer</code> protocol, which allows developers to create their own layers in Swift. With <code>MLCustomLayer</code>, you can define the behavior of your own neural network layers in Core ML models. <strong>However, it&#x2019;s worth noting that custom layers only work for neural network models.</strong></p>
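<p>To make this concrete, here is a hedged skeleton of an <code>MLCustomLayer</code> conformance. The layer name and the swish activation (x times sigmoid(x)) are illustrative choices; a real custom layer must match the class name referenced by the corresponding layer in your .mlmodel file.</p>

```swift
import CoreML
import Foundation

// Illustrative custom layer computing swish: f(x) = x * sigmoid(x).
@objc(SwishLayer) class SwishLayer: NSObject, MLCustomLayer {
    required init(parameters: [String: Any]) throws {
        super.init()  // read any layer parameters from the model here
    }

    func setWeightData(_ weights: [Data]) throws {
        // swish has no weights; a layer with weights would store them here
    }

    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        return inputShapes  // an elementwise op leaves shapes unchanged
    }

    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        for (input, output) in zip(inputs, outputs) {
            for i in 0..<input.count {
                let x = input[i].doubleValue
                output[i] = NSNumber(value: x / (1.0 + exp(-x)))
            }
        }
    }
}
```

<p>Core ML instantiates the class by name at load time, so the skeleton only runs as part of a model that declares a custom layer; it is not meant to be called directly.</p>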
<p>Now if this sounds very complicated, don&#x2019;t worry. It usually takes a skilled data scientist or machine learning engineer to understand all the intricacies of a neural network and write a custom layer. This is beyond the scope of the tutorial, so we won&#x2019;t delve into it.</p>
<h2>Conclusion</h2>
<p>That sums up all the new changes in Core ML 2.0. Core ML 2.0 aims to make models smaller, faster, and more customizable. We saw how to reduce the size of our Core ML models through weight quantization, improve the performance of our model through the new Batch API, and examples where we might need to write custom layers for our model. As developers, I predict (see what I did there) that you&#x2019;ll be using weight quantization more than the other two techniques (Batch API and Custom Layers).</p>
<p>If you are interested in exploring Core ML 2.0 even more, here are some excellent resources to look over!</p>
<ul>
<li><a href="https://developer.apple.com/videos/play/wwdc2018/708?ref=appcoda.com">What&#x2019;s New in Core ML, Part 1 &#x2013; WWDC 2018</a></li>
<li><a href="https://developer.apple.com/videos/play/wwdc2018/709?ref=appcoda.com">What&#x2019;s New in Core ML, Part 2 &#x2013; WWDC 2018</a></li>
<li><a href="https://developer.apple.com/videos/play/wwdc2018/717?ref=appcoda.com">Vision with Core ML &#x2013; WWDC 2018</a></li>
<li><a href="https://developer.apple.com/videos/play/wwdc2018/713?ref=appcoda.com">Introducing Natural Language Framework &#x2013; WWDC 2018</a></li>
<li><a href="https://developer.apple.com/documentation/coreml?ref=appcoda.com">Core ML Documentation</a></li>
<li><a href="https://developer.apple.com/machine-learning/?ref=appcoda.com">Apple&#x2019;s Machine Learning Page</a></li>
</ul>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More]]></title><description><![CDATA[<!--kg-card-begin: html-->
<p>Just as Apple does a lot for its developer community, another company which goes to great lengths to create amazing tools and services for its developers is Google. In recent years, Google has released and improved its services such as Google Cloud, Firebase, TensorFlow, etc. to give more power to</p>]]></description><link>https://www.appcoda.com/mlkit/</link><guid isPermaLink="false">66612a0f166d3c03cf01146d</guid><category><![CDATA[AI]]></category><category><![CDATA[Swift]]></category><category><![CDATA[iOS Programming]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Fri, 15 Jun 2018 13:34:25 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-logo.png" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<img src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-logo.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More"><p>Just as Apple does a lot for its developer community, another company which goes to great lengths to create amazing tools and services for its developers is Google. In recent years, Google has released and improved its services such as Google Cloud, Firebase, TensorFlow, etc. to give more power to both iOS and Android developers.</p>
<p>This year at Google I/O 2018, Google released a brand new toolkit called <a href="https://developers.google.com/ml-kit/?ref=appcoda.com">ML Kit</a> for its developers. Google has been at the front of the race towards Artificial Intelligence, and by opening up its ML Kit models, it has put a lot of power into its developers&#x2019; hands.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-logo.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="688" height="387" class="aligncenter size-full wp-image-12474" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-logo.png 688w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-logo-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-logo-533x300.png 533w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-logo-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-logo-400x225.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-logo-50x28.png 50w" sizes="(max-width: 688px) 100vw, 688px"></p>
<p>With ML Kit, you can perform a variety of machine learning tasks with very little code. One core difference between Core ML and ML Kit is that in Core ML you are required to supply your own models, while in ML Kit you can either rely on the models Google provides or run your own. For this tutorial, we will be relying on the models Google provides, since adding your own ML models requires TensorFlow and an adequate understanding of Python.</p>
<div class="alert gray"><strong>Editor&#x2019;s note:</strong> With the announcement of WWDC 18, you can now <a href="https://www.appcoda.com/create-ml/">use CreateML to create your own ML model</a> with Xcode 10&#x2019;s Playgrounds.</div>
<p>Another difference is that if your models are large, you have the ability to put your ML model in Firebase and have your app make calls to the server. In CoreML, you can only run machine learning on-device. Here&#x2019;s a list of everything you can do with ML Kit:</p>
<ul>
<li>Barcode Scanning</li>
<li>Face Detection</li>
<li>Image Labelling</li>
<li>Text Recognition</li>
<li>Landmark Recognition</li>
<li>Smart Reply (not yet released, but coming soon)</li>
</ul>
<p>In this tutorial, I&#x2019;ll show you how to create a new project in Firebase, use Cocoapods to download the required packages, and integrate ML Kit into our app! Let&#x2019;s get started!</p>
<h2>Creating a Firebase Project</h2>
<p>The first step is to go to the <a href="https://console.firebase.google.com">Firebase Console</a>. Here you will be prompted to log in to your Google Account. After doing so, you should be greeted with a page that looks like this.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="799" class="aligncenter size-full wp-image-12444" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1-200x111.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1-541x300.png 541w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1-768x426.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1-1024x568.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1-1240x688.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1-860x477.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1-680x377.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1-400x222.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-1-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Click on <em>Add Project</em> and name your project. For this scenario, let&#x2019;s name this project <code>ML Kit Introduction</code>. Leave the Project ID as is and change the <em>Country/Region</em> as you see fit. Then, click on the <em>Create Project</em> button. This should take about a minute.</p>
<div class="alert gray"><strong>Note:</strong> There are only a certain number of Firebase projects you can create before reaching the quota. Create your projects sparingly.</div>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="800" class="aligncenter size-full wp-image-12445" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2-200x111.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2-540x300.png 540w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2-768x427.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2-1024x569.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2-1240x689.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2-860x478.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2-680x378.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2-400x222.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-2-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>When everything is all done, your page should look like this!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="796" class="aligncenter size-full wp-image-12446" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3-200x111.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3-543x300.png 543w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3-768x425.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3-1024x566.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3-1240x685.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3-860x475.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3-680x376.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3-400x221.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-3-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>This is your Project Overview page and you&#x2019;ll be able to manipulate a wide variety of Firebase controls from this console. Congratulations! You just created your first Firebase project! Leave this page as is and we&#x2019;ll shift gears for a second and look at the iOS project. Download the starter project <a href="https://github.com/appcoda/ML-Kit-Demo/raw/master/mlkit-starter.zip?ref=appcoda.com">here</a>.</p>
<h2>A Quick Glance At The Starter Project</h2>
<p>Open the starter project. You&#x2019;ll see that most of the UI has been designed for you already. Build and run the app. You can see a <code>UITableView</code> with the different ML Kit options which lead to different pages.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="803" class="aligncenter size-full wp-image-12447" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4-200x112.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4-538x300.png 538w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4-768x428.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4-1024x571.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4-1240x691.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4-860x480.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4-680x379.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4-400x223.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-4-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>If you click on the <em>Choose Image</em> button, a <code>UIImagePickerView</code> pops up and choosing an image will change the empty placeholder. However, nothing happens. It&#x2019;s up to us to integrate ML Kit and perform its machine learning tasks on the image.</p>
<h2>Linking Firebase to the App</h2>
<p>Go back to the Firebase console of your project. Click on the button which says &#x201C;Add Firebase to your iOS App&#x201D;.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="796" class="aligncenter size-full wp-image-12448" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5-200x111.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5-543x300.png 543w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5-768x425.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5-1024x566.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5-1240x685.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5-860x475.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5-680x376.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5-400x221.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-5-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>You should now see a popup with instructions on how to link Firebase. The first thing to do is to enter your iOS Bundle ID, which can be found in the General tab of your project settings in Xcode.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="801" class="aligncenter size-full wp-image-12449" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6-200x111.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6-539x300.png 539w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6-768x427.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6-1024x570.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6-1240x690.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6-860x478.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6-680x378.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6-400x223.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-6-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Enter that into the field and click the button titled <code>Register App</code>. You don&#x2019;t need to enter anything into the optional text fields as this app is not going to go on the App Store. You&#x2019;ll be guided to Step 2 where you are asked to download a <code>GoogleService-Info.plist</code>. This is an important file that you will add to your project. Click on the <code>Download</code> button to download the file.</p>
<p>Drag the file into the sidebar as shown on the Firebase website. Make sure that the <code>Copy items if needed</code> checkbox is checked. If you&#x2019;ve added everything, click on the <code>Next</code> button and proceed to Step 3.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="796" class="aligncenter size-full wp-image-12450" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7-200x111.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7-543x300.png 543w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7-768x425.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7-1024x566.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7-1240x685.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7-860x475.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7-680x376.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7-400x221.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-7-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<h3>Installing Firebase Libraries Using Cocoapods</h3>
<p>This next step will introduce the idea of Cocoapods. Cocoapods is a dependency manager that lets you import packages into your project in an easy manner. However, a slight mistake in the setup can have disastrous consequences, so follow along carefully. First, close all windows in Xcode and quit the application.</p>
<div class="alert gray"><strong>Note:</strong> Make sure that you already have Cocoapods installed on your device. If not, here&#x2019;s a <a href="https://www.appcoda.com/cocoapods/">great tutorial about Cocoapods</a> and adding it to your Mac.</div>
<p>Open Terminal on your Mac and enter the following command:</p>
<pre lang="bash">cd &lt;Path to your Xcode Project&gt;
</pre>
<div class="alert green"><strong>Tip:</strong> To get the path to your Xcode project, click on the file holding your Xcode project and press <em>CMD+C</em>. The head to terminal and type <em>cd</em> and paste.</div>
<p>You are now in that directory.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-9.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1093" height="634" class="aligncenter size-full wp-image-12452" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-9.png 1093w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-9-200x116.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-9-517x300.png 517w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-9-768x445.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-9-1024x594.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-9-860x499.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-9-680x394.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-9-400x232.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-9-50x29.png 50w" sizes="(max-width: 1093px) 100vw, 1093px"></p>
<p>Creating a Podfile is quite simple. Enter the command:</p>
<pre>pod init
</pre>
<p>Wait for a second and then your Terminal should look like this. Just a simple line was added.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-10.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1018" height="582" class="aligncenter size-full wp-image-12453" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-10.png 1018w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-10-200x114.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-10-525x300.png 525w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-10-768x439.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-10-860x492.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-10-680x389.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-10-400x229.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-10-50x29.png 50w" sizes="(max-width: 1018px) 100vw, 1018px"></p>
<p>Now, let&#x2019;s add all the packages we need to our Podfile. Enter the command in terminal and wait for Xcode to open up:</p>
<pre>open -a Xcode podfile
</pre>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="880" class="aligncenter size-full wp-image-12454" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11-200x122.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11-491x300.png 491w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11-768x469.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11-1024x626.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11-1240x758.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11-860x526.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11-680x416.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11-400x244.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-11-50x31.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Underneath where it says <code># Pods for ML Kit Starter Project</code>, type the following lines of code:</p>
<pre lang="swift">pod &apos;Firebase/Core&apos;
pod &apos;Firebase/MLVision&apos;
pod &apos;Firebase/MLVisionTextModel&apos;
pod &apos;Firebase/MLVisionFaceModel&apos;
pod &apos;Firebase/MLVisionBarcodeModel&apos;
pod &apos;Firebase/MLVision&apos;
pod &apos;Firebase/MLVisionLabelModel&apos;
</pre>
<p>Your Podfile should look like this.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="900" class="aligncenter size-full wp-image-12455" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-12-50x31.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Now, there&#x2019;s only one thing remaining. Head back to Terminal and type:</p>
<pre>pod install
</pre>
<p>This will take a couple of minutes, so feel free to take a break. In the meantime, Cocoapods is downloading the packages which we will be using. When everything is done and you head back over to the folder where your Xcode project is, you&#x2019;ll notice a new file: a .xcworkspace file.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-14.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1166" height="702" class="aligncenter size-full wp-image-12457" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-14.png 1166w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-14-200x120.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-14-498x300.png 498w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-14-768x462.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-14-1024x617.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-14-860x518.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-14-680x409.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-14-400x241.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-14-50x30.png 50w" sizes="(max-width: 1166px) 100vw, 1166px"></p>
<p>This is where most developers mess up: YOU SHOULD NEVER AGAIN OPEN THE .XCODEPROJ FILE! If you do and edit content there, the two files will fall out of sync and you may have to set the project up all over again. From now on, you should always open the .xcworkspace file.</p>
<p>Go back to the Firebase webpage and you&#x2019;ll notice that we have finished Step 3. Click on the next button and head over to Step 4.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1439" height="795" class="aligncenter size-full wp-image-12458" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15.png 1439w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15-200x110.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15-543x300.png 543w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15-768x424.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15-1024x566.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15-1240x685.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15-860x475.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15-680x376.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15-400x221.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-15-50x28.png 50w" sizes="(max-width: 1439px) 100vw, 1439px"></p>
<p>We are now being asked to open our workspace and add a few lines of code to our <code>AppDelegate.swift</code>. Open the .xcworkspace (again, not the .xcodeproj; I cannot overstate how important this is) and go to <code>AppDelegate.swift</code>.</p>
<p>Once we&#x2019;re in <code>AppDelegate.swift</code>, all we need to do is add two lines of code.</p>
<pre lang="swift">import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -&gt; Bool {
        // Override point for customization after application launch.
        FirebaseApp.configure()
        return true
    }
}
</pre>
<p>All we are doing is importing the Firebase package and configuring it based on the <code>GoogleService-Info.plist</code> file we added earlier. You may get an error saying that Xcode was not able to build the <code>Firebase</code> module; just press CMD+SHIFT+K to clean the project and then CMD+B to build it again.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="804" class="aligncenter size-full wp-image-12459" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16-200x112.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16-537x300.png 537w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16-768x429.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16-1024x572.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16-1240x692.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16-860x480.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16-680x380.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16-400x223.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-16-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>If the error persists, go to the <code>Build Settings</code> tab of the project and search for <code>Bitcode</code>. You&#x2019;ll see an option called <code>Enable Bitcode</code> under <code>Build Options</code>. Set that to No and build again. You should be successful now!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1438" height="804" class="aligncenter size-full wp-image-12460" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17.png 1438w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17-200x112.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17-537x300.png 537w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17-768x429.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17-1024x573.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17-1240x693.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17-860x481.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17-680x380.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17-400x224.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-17-50x28.png 50w" sizes="(max-width: 1438px) 100vw, 1438px"></p>
<p>Press the <code>Next</code> button on the Firebase console to go to Step 5. Now, all you need to do is run your app on a device and Step 5 will automatically be completed! You should then be redirected back to your Project Overview page.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="799" class="aligncenter size-full wp-image-12461" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18-200x111.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18-541x300.png 541w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18-768x426.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18-1024x568.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18-1240x688.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18-860x477.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18-680x377.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18-400x222.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-18-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Congratulations! You are done with the most challenging part of this tutorial! All that&#x2019;s left is adding the ML Kit code in Swift. Now would be a perfect time to take a break; from here on, it&#x2019;s smooth cruising in familiar code!</p>
<h2>Barcode Scanning</h2>
<p>The first implementation will be Barcode Scanning. This is really simple to add to your app. Head to <code>BarcodeViewController</code>. You&#x2019;ll see the code that runs when the <code>Choose Image</code> button is tapped. To get access to all the ML Kit protocols, we have to import Firebase.</p>
<pre lang="swift">import UIKit
import Firebase
</pre>
<p>Next, we need to define some variables which will be used in our Barcode Scanning function.</p>
<pre lang="swift">let options = VisionBarcodeDetectorOptions(formats: .all)
lazy var vision = Vision.vision()
</pre>
<p>The <code>options</code> variable tells the <code>BarcodeDetector</code> what types of barcodes to recognize. ML Kit can recognize most common barcode formats, like Codabar, Code 39, Code 93, UPC-A, UPC-E, Aztec, PDF417, QR Code, etc. For our purpose, we&#x2019;ll ask the detector to recognize all formats. The <code>vision</code> variable returns an instance of the Firebase Vision service. It is through this variable that we perform most of our computations.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="802" class="aligncenter size-full wp-image-12462" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19-200x111.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19-539x300.png 539w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19-768x428.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19-1024x570.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19-1240x691.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19-860x479.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19-680x379.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19-400x223.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-19-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Next, we need to handle the recognition logic. We will implement all of this in the <code>imagePickerController:didFinishPickingMediaWithInfo</code> function. This function runs when we have picked an image. Currently, the function only sets the <code>imageView</code> to the image we choose. Add the following lines of code below the line <code>imageView.image = pickedImage</code>.</p>
<pre lang="swift">// 1
let barcodeDetector = vision.barcodeDetector(options: options)
let visionImage = VisionImage(image: pickedImage)

//2
barcodeDetector.detect(in: visionImage) { (barcodes, error) in
    //3
    guard error == nil, let barcodes = barcodes, !barcodes.isEmpty else {
        self.dismiss(animated: true, completion: nil)
        self.resultView.text = &quot;No Barcode Detected&quot;
        return
    }
    
    //4
    for barcode in barcodes {
        let rawValue = barcode.rawValue!
        let valueType = barcode.valueType
        
        //5
        switch valueType {
        case .URL:
            self.resultView.text = &quot;URL: \(rawValue)&quot;
        case .phone:
            self.resultView.text = &quot;Phone number: \(rawValue)&quot;
        default:
            self.resultView.text = rawValue
        }
    }
}
</pre>
<p>Here&#x2019;s a quick rundown of everything that went down. It may seem like a lot but it&#x2019;s quite simple. This will be the basic format for everything else we do for the rest of the tutorial.</p>
<ol>
<li>The first thing we do is define 2 variables: <code>barcodeDetector</code> which is a barcode detecting object of the Firebase Vision service. We set it to detect all types of barcodes. Then we define an image called <code>visionImage</code> which is the same image as the one we picked.</li>
<li>We call the <code>detect</code> method of <code>barcodeDetector</code> and run this method on our <code>visionImage</code>. We define two objects: <code>barcodes</code> and <code>error</code>.</li>
<li>First, we handle the error. If there is an error or no barcodes are recognized, we dismiss the Image Picker View Controller and set the <code>resultView</code> to say &#x201C;No Barcode Detected&#x201D;. Then we <code>return</code> so the rest of the closure will not run.</li>
<li>If there is a barcode detected, we use a for-loop to run the same code on each barcode recognized. We define 2 constants: a <code>rawValue</code> and a <code>valueType</code>. The raw value of a barcode contains the data it holds. This can either be some text, a number, an image, etc. The value type of a barcode states what type of information it is: an email, a contact, a link, etc.</li>
<li>Now we could simply print the raw value but this wouldn&#x2019;t provide a great user experience. Instead, we&#x2019;ll provide custom messages based on the value type. We check what type it is and set the text of the <code>resultView</code> to something based on that. For example, in the case of a URL, we have the <code>resultView</code> text say &#x201C;URL: &#x201D; and we show the URL.</li>
</ol>
<p>Build and run the app. It should work amazingly fast! What&#x2019;s cool is that since <code>resultView</code> is a <code>UITextView</code>, you can interact with it and select any of the detected data, such as numbers, links, and emails.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1920" height="1080" class="aligncenter size-full wp-image-12464" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21.png 1920w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21-533x300.png 533w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21-768x432.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21-1024x576.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21-1680x945.png 1680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21-1240x698.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21-860x484.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21-400x225.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-21-50x28.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></p>
<h2>Face Detection</h2>
<p>Next, we&#x2019;ll be taking a look at Face Detection. Rather than draw boxes around faces, let&#x2019;s take it a step further and see how we can report if a person is smiling, whether their eyes are open, etc.</p>
<p>Just like before, we need to import Firebase and define some constants.</p>
<pre lang="swift">import UIKit
import Firebase
...
let options = VisionFaceDetectorOptions()
lazy var vision = Vision.vision()
</pre>
<p>The only difference here is that we initialize the default FaceDetectorOptions. And, we configure these options in the <code>viewDidLoad</code> method.</p>
<pre lang="swift">override func viewDidLoad() {
    super.viewDidLoad()
    imagePicker.delegate = self
    
    // Do any additional setup after loading the view.
    options.modeType = .accurate
    options.landmarkType = .all
    options.classificationType = .all
    options.minFaceSize = CGFloat(0.1)
}
</pre>
<p>We define the specific options of our detector in <code>viewDidLoad</code>. First, we choose which mode to use. There are two modes: accurate and fast. Since this is just a demo app, we&#x2019;ll use the accurate mode, but in a situation where speed is important, it may be wiser to set the mode to <code>.fast</code>.</p>
<p>Next, we ask the detector to find all landmarks and classifications. What&#x2019;s the difference between the two? A landmark is a certain part of the face such as the right cheek, left cheek, base of the nose, eyebrow, and more! A classification is kind of like an event to detect. At the time of this writing, ML Kit Vision can detect only if the left/right eye is open and if the person is smiling. For our purpose, we&#x2019;ll be dealing with classifications only (smiles and eyes opened).</p>
<div class="alert gray"><strong>Update:</strong> Google is enhancing ML Kit&#x2019;s Face Detection API with <a href="https://medium.com/r/?url=https%3A%2F%2Ffirebase.google.com%2Fdocs%2Fml-kit%2Fface-contours&amp;ref=appcoda.com">face contours</a> (beta), which allows developers to detect over 100 detailed points in and around a user&#x2019;s face.</div>
<p>The last option we configure is the minimum face size. When we say <code>options.minFaceSize = CGFloat(0.1)</code>, we are asking for the smallest desired face size. The size is expressed as a proportion of the width of the head to the image width. A value of 0.1 is asking the detector to search for the smallest face which is roughly 10% of the width of the image being searched.</p>
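<p>To make the proportion concrete, here is a quick back-of-the-envelope calculation in plain Swift (independent of Firebase) showing the smallest face width the detector will look for at a couple of image widths:</p>

```swift
// minFaceSize is a proportion of the image width, so the smallest
// detectable face width in pixels is simply minFaceSize * imageWidth.
let minFaceSize: Double = 0.1

func smallestFaceWidth(imageWidth: Double) -> Double {
    return minFaceSize * imageWidth
}

// In a 1000 px-wide image, faces narrower than about 100 px are ignored.
print(smallestFaceWidth(imageWidth: 1000))
// In a 640 px-wide image, the cutoff is about 64 px.
print(smallestFaceWidth(imageWidth: 640))
```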
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1440" height="801" class="aligncenter size-full wp-image-12465" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22-200x111.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22-539x300.png 539w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22-768x427.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22-1024x570.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22-1240x690.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22-860x478.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22-680x378.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22-400x223.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-22-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Next, we handle the logic inside the <code>imagePickerController:didFinishPickingMediaWithInfo</code> method. Right below the line <code>imageView.image = pickedImage</code>, type the following:</p>
<pre lang="swift">let faceDetector = vision.faceDetector(options: options)
let visionImage = VisionImage(image: pickedImage)
self.resultView.text = &quot;&quot;
</pre>
<p>This simply sets ML Kit Vision service to be a face detector with the options we defined earlier. We also define the <code>visionImage</code> to be the one we chose. Since we may be running this several times, we&#x2019;ll want to clear the <code>resultView</code>.</p>
<p>Next, we call the <code>faceDetector</code>&#x2019;s detect function to perform the detection.</p>
<pre lang="swift">//1
faceDetector.detect(in: visionImage) { (faces, error) in
    //2
    guard error == nil, let faces = faces, !faces.isEmpty else {
        self.dismiss(animated: true, completion: nil)
        self.resultView.text = &quot;No Face Detected&quot;
        return
    }
    //3
    self.resultView.text = self.resultView.text + &quot;I see \(faces.count) face(s).\n\n&quot;
    
    for face in faces {
        //4
        if face.hasLeftEyeOpenProbability {
            if face.leftEyeOpenProbability &lt; 0.4 {
                self.resultView.text = self.resultView.text + &quot;The left eye is not open!\n&quot;
            } else {
                self.resultView.text = self.resultView.text + &quot;The left eye is open!\n&quot;
            }
        }
        
        if face.hasRightEyeOpenProbability {
            if face.rightEyeOpenProbability &lt; 0.4 {
                self.resultView.text = self.resultView.text + &quot;The right eye is not open!\n&quot;
            } else {
                self.resultView.text = self.resultView.text + &quot;The right eye is open!\n&quot;
            }
        }
        
        //5
        if face.hasSmilingProbability {
            if face.smilingProbability &lt; 0.3 {
                self.resultView.text = self.resultView.text + &quot;This person is not smiling.\n\n&quot;
            } else {
                self.resultView.text = self.resultView.text + &quot;This person is smiling.\n\n&quot;
            }
        }
    }
}
</pre>
<p>This should look very similar to the Barcode Detection function we wrote earlier. Here&#x2019;s everything that happens:</p>
<ol>
<li>We call the <code>detect</code> function on our <code>visionImage</code> looking for <code>faces</code> and <code>errors</code>.</li>
<li>If there is an error or no faces are detected, we set the <code>resultView</code> text to &#x201C;No Face Detected&#x201D; and return.</li>
<li>If faces are detected, the first statement we print to the <code>resultView</code> is the number of faces we see. You&#x2019;ll be seeing <code>\n</code> a lot within the strings of this tutorial. This signifies a new line.</li>
<li>Then we deep dive into the specifics. If a <code>face</code> has a probability for the left eye being opened, we check to see what the probability is. In this case, I have set it to say that the left eye is closed if the probability is less than 0.4. The same goes for the right eye. You can change it to whatever value you wish.</li>
<li>Similarly, I check the smile probability and if it is less than 0.3, the person is most likely not smiling, otherwise, the face is smiling.</li>
</ol>
<div class="alert gray"><strong>Note:</strong> I chose these values based on what I felt would recognize it the best. Since smiles are slightly harder to detect than open eyes, I decreased the probability value so there is a higher chance it will guess it right.</div>
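<p>The thresholding logic above can be factored into a small helper. This is just an illustrative sketch in plain Swift (the function name and threshold values are my own choices, not part of the ML Kit API):</p>

```swift
// Maps a probability to a human-readable verdict using a cutoff.
// Probabilities below the threshold are treated as "no".
func verdict(probability: Double, threshold: Double,
             yes: String, no: String) -> String {
    return probability < threshold ? no : yes
}

let smileText = verdict(probability: 0.72, threshold: 0.3,
                        yes: "This person is smiling.",
                        no: "This person is not smiling.")
print(smileText)
```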
<p>Build and run your code, see how it performs! Feel free to tweak the values and play around!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1920" height="1080" class="aligncenter size-full wp-image-12467" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24.png 1920w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24-533x300.png 533w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24-768x432.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24-1024x576.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24-1680x945.png 1680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24-1240x698.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24-860x484.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24-400x225.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-24-50x28.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></p>
<h2>Image Labelling</h2>
<p>Next, we&#x2019;ll check out labelling images. This is much easier than face detection. With Image Labelling and Text Recognition, you actually have two options: you can either have all of the machine learning done on-device (which is what Apple would prefer, since the data stays with the user, it works offline, and no calls are made to Firebase), or you can use Google&#x2019;s Cloud Vision. The advantage of the cloud is that the model is updated automatically and tends to be more accurate, since it is easier to host a larger, more accurate model in the cloud than on the device. For our purposes, though, we&#x2019;ll continue as before and implement the on-device version.</p>
<p>See if you can implement it on your own! It&#x2019;s quite similar to the previous two scenarios. If not, that&#x2019;s alright! Here&#x2019;s what we should do!</p>
<pre lang="swift">import UIKit
import Firebase
...
lazy var vision = Vision.vision()
</pre>
<p>Unlike before, we&#x2019;ll be using the default settings for the label detector so we only need to define one constant. Everything else, like before, will be in the <code>imagePickerController:didFinishPickingMediaWithInfo</code> method. Under <code>imageView.image = pickedImage</code>, insert the following lines of code:</p>
<pre lang="swift">//1
let labelDetector = vision.labelDetector()
let visionImage = VisionImage(image: pickedImage)
self.resultView.text = &quot;&quot;

//2
labelDetector.detect(in: visionImage) { (labels, error) in
    //3
    guard error == nil, let labels = labels, !labels.isEmpty else {
        self.resultView.text = &quot;Could not label this image&quot;
        self.dismiss(animated: true, completion: nil)
        return
    }
    
    //4
    for label in labels {
        self.resultView.text = self.resultView.text + &quot;\(label.label) - \(label.confidence * 100.0)%\n&quot;
    }
}
</pre>
<p>This should start to look familiar. Let me briefly walk you over the code:</p>
<ol>
<li>We define <code>labelDetector</code> which is telling ML Kit&#x2019;s Vision service to detect labels from images. We define <code>visionImage</code> to be the image we chose. We clear the <code>resultView</code> in case we use this function more than once.</li>
<li>We call the <code>detect</code> function on <code>visionImage</code> and look for <code>labels</code> and <code>errors</code>.</li>
<li>If there is an error or ML Kit wasn&#x2019;t able to label an image, we return the function telling the user that we couldn&#x2019;t label the image.</li>
<li>If everything works, we set the text of <code>resultView</code> to be the label of the image and how confident ML Kit is with labelling that image.</li>
</ol>
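<p>A small note on the confidence string: <code>label.confidence</code> is a value between 0 and 1, so multiplying by 100 gives a percentage. If you&#x2019;d rather not show a long decimal tail, a format string trims it down. A plain-Swift sketch with a made-up confidence value:</p>

```swift
import Foundation

// A confidence of 0.87342 rendered as "87.3%" instead of "87.342%".
let confidence: Double = 0.87342
let line = String(format: "%.1f%%", confidence * 100.0)
print(line)
```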
<p>Simple, right? Build and run your code! How accurate (or crazy) were the labels?</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1920" height="1080" class="aligncenter size-full wp-image-12469" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26.png 1920w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26-533x300.png 533w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26-768x432.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26-1024x576.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26-1680x945.png 1680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26-1240x698.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26-860x484.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26-400x225.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-26-50x28.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></p>
<h2>Text Recognition</h2>
<p>We&#x2019;re almost done! Optical Character Recognition, or OCR, has become immensely popular within the last two years in terms of mobile apps. With ML Kit, it&#x2019;s become so much easier to implement this in your apps. Let&#x2019;s see how!</p>
<p>Similar to image labelling, text recognition can be done via Google Cloud and through calls to the model in the cloud. However, we&#x2019;ll work with the on-device API for now.</p>
<pre lang="swift">import UIKit
import Firebase
...
lazy var vision = Vision.vision()
var textDetector: VisionTextDetector?
</pre>
<p>Same as before, we call the Vision service and define a <code>textDetector</code>. We can set the <code>textDetector</code> variable to <code>vision</code>&#x2019;s text detector in the <code>viewDidLoad</code> method.</p>
<pre lang="swift">override func viewDidLoad() {
    super.viewDidLoad()
    imagePicker.delegate = self
    textDetector = vision.textDetector()
}
</pre>
<p>Next, we just handle everything in <code>imagePickerController:didFinishPickingMediaWithInfo</code>. As usual, we&#x2019;ll insert the following lines of code below the line <code>imageView.image = pickedImage</code>.</p>
<pre lang="swift">//1
let visionImage = VisionImage(image: pickedImage)
textDetector?.detect(in: visionImage, completion: { (features, error) in
    //2
    guard error == nil, let features = features, !features.isEmpty else {
        self.resultView.text = &quot;Could not recognize any text&quot;
        self.dismiss(animated: true, completion: nil)
        return
    }
    
    //3
    self.resultView.text = &quot;Detected Text Has \(features.count) Blocks:\n\n&quot;
    for block in features {
        //4
        self.resultView.text = self.resultView.text + &quot;\(block.text)\n\n&quot;
    }
})
</pre>
<ol>
<li>We set <code>visionImage</code> to be the image we chose. We run the <code>detect</code> function on this image looking for <code>features</code> and <code>errors</code>.</li>
<li>If there is an error or no text, we tell the user &#x201C;Could not recognize any text&#x201D; and return the function.</li>
<li>The first piece of information we&#x2019;ll give our users is how many blocks of text were detected.</li>
<li>Finally, we&#x2019;ll set the text of <code>resultView</code> to the text of each block leaving a space in between with <code>\n\n</code> (2 new lines).</li>
</ol>
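<p>Instead of appending to <code>resultView.text</code> block by block, you could also build the whole string at once with <code>joined(separator:)</code>. A plain-Swift sketch using hypothetical block texts standing in for the <code>features</code> array:</p>

```swift
// Simulated text blocks, standing in for the detected `features`.
let blockTexts = ["First block", "Second block", "Third block"]

let header = "Detected Text Has \(blockTexts.count) Blocks:\n\n"
// joined(separator:) inserts "\n\n" between blocks (no trailing blank line).
let body = blockTexts.joined(separator: "\n\n")
let result = header + body
print(result)
```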
<p>Test it out! Try putting it through different types of fonts and colors. My results have shown that it works flawlessly on printed text but has a really hard time with handwritten text.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1920" height="1080" class="aligncenter size-full wp-image-12471" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28.png 1920w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28-533x300.png 533w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28-768x432.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28-1024x576.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28-1680x945.png 1680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28-1240x698.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28-860x484.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28-400x225.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-28-50x28.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></p>
<h2>Landmark Recognition</h2>
<p>Landmark recognition can be implemented just like the other 4 categories. Unfortunately, ML Kit does not support Landmark Recognition on-device at the time of this writing. To perform landmark recognition, you will need to change the plan of your project to &#x201C;Blaze&#x201D; and activate the Google Cloud Vision API.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29.png" alt="Integrating Google ML Kit in iOS for Face Detection, Text Recognition and Many More" width="1439" height="798" class="aligncenter size-full wp-image-12472" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29.png 1439w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29-200x111.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29-541x300.png 541w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29-768x426.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29-1024x568.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29-1240x688.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29-860x477.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29-680x377.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29-400x222.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/mlkit-29-50x28.png 50w" sizes="(max-width: 1439px) 100vw, 1439px"></p>
<p>However, this is beyond the scope of this tutorial. If you do feel up to the challenge, you can implement the code using the documentation found <a href="https://firebase.google.com/docs/ml-kit/ios/recognize-landmarks?ref=appcoda.com">here</a>. The code is also included in the final project.</p>
<h2>Conclusion</h2>
<p>This was a really big tutorial, so feel free to scroll back up and review anything you may not have understood. If you have any questions, feel free to leave me a comment below.</p>
<p>With ML Kit, you saw how easy it is to implement smart machine learning features into your app. The scope of apps which can be created is large, so here&#x2019;s some ideas for you to try out:</p>
<ul>
<li>An app which recognizes text and reads them back to users with visual disabilities</li>
<li>Face Tracking for an app like <a href="https://itunes.apple.com/us/app/try-not-to-smile/id1307338693?mt=8&amp;ref=appcoda.com">Try Not to Smile by Roland Horvath</a></li>
<li>Barcode Scanner</li>
<li>Search for pictures based on labels</li>
</ul>
<p>The possibilities are endless; it&#x2019;s all based on what you can imagine and how you want to help your users. The final project can be downloaded here. You can learn more about the ML Kit APIs by checking out their <a href="https://medium.com/r/?url=https%3A%2F%2Ffirebase.google.com%2Fdocs%2Fml-kit%2F&amp;ref=appcoda.com">documentation</a>. Hope you learned something new in this tutorial, and let me know how it went in the comments below!</p>
<p>For the full project, you can <a href="https://medium.com/r/?url=https%3A%2F%2Fgithub.com%2Fappcoda%2FML-Kit-Demo&amp;ref=appcoda.com">check it out on GitHub</a>.</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode]]></title><description><![CDATA[<!--kg-card-begin: html-->
<p>In case you weren&#x2019;t aware, Apple&#x2019;s Worldwide Developers Conference happened this week! It was a big event with a lot of improvements to both the software and the frameworks Apple currently offers. One of these frameworks is Create ML.</p>
<p>Last year, Apple introduced <a href="https://developer.apple.com/documentation/coreml?ref=appcoda.com">Core ML</a></p>]]></description><link>https://www.appcoda.com/create-ml/</link><guid isPermaLink="false">66612a0f166d3c03cf01146b</guid><category><![CDATA[AI]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Thu, 07 Jun 2018 00:52:46 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-featured.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<img src="https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-featured.jpg" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode"><p>In case you weren&#x2019;t aware, Apple&#x2019;s Worldwide Developers Conference happened this week! It was a big event with a lot of improvements to both the software and the frameworks Apple currently offers. One of these frameworks is Create ML.</p>
<p>Last year, Apple introduced <a href="https://developer.apple.com/documentation/coreml?ref=appcoda.com">Core ML</a>: a quick way for you to import pre-trained machine learning models into your app with as little code as possible! This year, with <strong>Create ML</strong>, Apple is giving us developers the ability to create our own machine learning models right inside Xcode Playgrounds! All we need is some data and we&#x2019;re good to go! As of right now, Create ML accepts text, images, and tables as data. Since this covers most ML applications, it should serve your purposes well! I&#x2019;ll show you how to create an ML model with all three of these types of data.</p>
<p><img decoding="async" src="https://docs-assets.developer.apple.com/published/e6ad1efd6a/d926fc62-3dea-4447-86fc-920d4d6c4781.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" class="aligncenter"><br>
[Image source: Apple]</p>
<div class="alert green"><strong>Editor&#x2019;s note:</strong> This tutorial was built on Xcode 10 beta and macOS Mojave beta. Please make sure you upgrade your Xcode and macOS in order to follow the tutorial.</div>
<h2>Why Create ML</h2>
<p>You&#x2019;re probably wondering: why prefer Create ML? The answer lies in what it&#x2019;s capable of. Create ML harnesses the machine learning infrastructure built into the operating system. When you download iOS 12 or macOS Mojave, you are also downloading the machine learning frameworks that come with them. That way, when you create your own ML model, it can take up far less space, since most of what it needs is already on the user&#x2019;s device.</p>
<p>Another reason Create ML is so popular is its ease of use. All you need to do is gather an extensive dataset (text, images, or a table), write just a few lines of code, and run the playground! This is far simpler than popular tools like TensorFlow and Caffe, which require lots of code and don&#x2019;t have a friendly visual interface. Create ML is built right into Xcode Playgrounds, so you get the familiarity and, best of all, it&#x2019;s all done in Swift!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1844" height="956" class="aligncenter size-full wp-image-12390" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow.png 1844w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow-200x104.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow-579x300.png 579w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow-768x398.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow-1024x531.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow-1680x871.png 1680w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow-1240x643.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow-860x446.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow-680x353.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow-400x207.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-workflow-50x26.png 50w" sizes="(max-width: 1844px) 100vw, 1844px"></p>
<h2>Prerequisites</h2>
<p>In this tutorial, I will only be showing you how to create your own ML model using Create ML. If you would like to learn how to import a Core ML model into your iOS app, you can find the tutorial <a href="https://appcoda.com/coreml-introduction/?ref=appcoda.com">here</a>.</p>
<p>At the time of writing, iOS 12 and macOS Mojave are still in beta. To follow this tutorial successfully, you will need to be running macOS Mojave (10.14) and the Xcode 10 beta. Let&#x2019;s get started!</p>
<h2>The Image Classifier Model</h2>
<h3>The Data</h3>
<p>We&#x2019;ll first get started on building an image classifier model. We can add as many images with as many labels as we want, but for simplicity, we&#x2019;ll be building an image classifier that recognizes fruits as apples or bananas. You can download the images <a href="https://github.com/appcoda/CreateMLQuickDemo/raw/master/resources/FruitImages.zip?ref=appcoda.com">here</a>.</p>
<p>When you open the folder, you&#x2019;ll notice two more folders: <em>Training Data</em> and <em>Testing Data</em>. Each folder contains a mix of pictures of apples and bananas: approximately 20 images of apples and 20 images of bananas in <em>Testing Data</em>, and 80 images of each in <em>Training Data</em>. We will use the images in <em>Training Data</em> to train our classifier and then use <em>Testing Data</em> to determine its accuracy.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-1.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="983" height="580" class="aligncenter size-full wp-image-12364" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-1.png 983w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-1-200x118.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-1-508x300.png 508w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-1-768x453.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-1-860x507.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-1-680x401.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-1-400x236.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-1-50x30.png 50w" sizes="(max-width: 983px) 100vw, 983px"></p>
<p>If you want to build your own image classifier, it is important that you split your dataset 80-20: approximately 80% of your images go into <em>Training Data</em> and the rest into <em>Testing Data</em>, so your classifier has plenty of data to train on. In each of these folders, sort the images into subfolders, and name each subfolder after the category label of the images it contains.</p>
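<p>As a rough illustration, here&#x2019;s how such an 80-20 split might look in plain Swift (the file names below are made up; you would run something like this over your own image list):</p>

```swift
import Foundation

// A rough sketch of the 80-20 split described above, in plain Swift.
// The file names are made up for illustration only.
func split(_ items: [String], trainingFraction: Double = 0.8) -> (training: [String], testing: [String]) {
    let shuffled = items.shuffled() // shuffle so the split is random
    let cut = Int(Double(shuffled.count) * trainingFraction)
    return (Array(shuffled.prefix(cut)), Array(shuffled.dropFirst(cut)))
}

let images = (1...100).map { "apple_\($0).jpg" }
let (training, testing) = split(images)
print(training.count, testing.count) // 80 20
```

<p>Of course, for this tutorial the images have already been split for you.</p>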
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-2.jpg" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1102" height="662" class="aligncenter size-full wp-image-12365" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-2.jpg 1102w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-2-200x120.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-2-499x300.jpg 499w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-2-768x461.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-2-1024x615.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-2-860x517.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-2-680x408.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-2-400x240.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-2-50x30.jpg 50w" sizes="(max-width: 1102px) 100vw, 1102px"></p>
<p>Now, let&#x2019;s open Xcode and click on <em>Get Started with a Playground</em>. When you do this, a new window opens up. This is the important part: under macOS, select the <code>Blank</code> template as shown below.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1440" height="900" class="aligncenter size-full wp-image-12366" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-3-50x31.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>It is crucial that you select the Blank template under macOS and not iOS because the framework <code>CreateML</code> isn&#x2019;t supported for iOS Playgrounds.</p>
<p>Name your playground and save it to wherever you want to. Let&#x2019;s get coding now!</p>
<h3>The Code</h3>
<p>Now what I&#x2019;m about to show you will blow your mind. All you need are 3 lines of code! Let me show you! Delete everything in the playground and type the following:</p>
<pre lang="swift">
import CreateMLUI

let builder = MLImageClassifierBuilder()
builder.showInLiveView()
</pre>
<p>And that&#x2019;s it! Make sure you enable the Live View feature in Xcode Playgrounds and you&#x2019;ll be able to see the visual interface!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1440" height="900" class="aligncenter size-full wp-image-12367" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-4-50x31.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p><code>CreateMLUI</code> is a framework just like <code>CreateML</code>, but with a user interface. As of now, <code>CreateMLUI</code> can only be used for image classification. Now, let&#x2019;s see how we can interact with the UI! You&#x2019;ll see it&#x2019;s quite simple!</p>
<h3>The User Interface</h3>
<p>In the Live View, you&#x2019;ll see that we need to drop images to begin! This is quite simple. Take the <em>Training Data</em> folder, and drop the entire folder into the area.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5.jpg" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1440" height="900" class="aligncenter size-full wp-image-12368" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5.jpg 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5-200x125.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5-480x300.jpg 480w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5-768x480.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5-1024x640.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5-1240x775.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5-860x538.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5-680x425.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5-400x250.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-5-50x31.jpg 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>The moment you drop the folder, you&#x2019;ll see the playground start to train the image classifier! In the console, you&#x2019;ll see how many images were processed, how long it took, and what percentage of your data has been trained!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1440" height="900" class="aligncenter size-full wp-image-12369" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-6-50x31.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>This should take around 30 seconds (depending on your device). When everything is done processing, you should see something like this:</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1438" height="810" class="aligncenter size-full wp-image-12371" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8.png 1438w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8-533x300.png 533w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8-768x433.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8-1024x577.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8-1240x698.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8-860x484.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8-400x225.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-8-50x28.png 50w" sizes="(max-width: 1438px) 100vw, 1438px"></p>
<p>You&#x2019;ll see a card with three labels: Training, Validation, and Evaluation. Training refers to the percentage of training data Xcode was successfully able to train on. This should read 100%.</p>
<p>While training, Xcode splits the training data 80-20. After training on 80% of the training data, Xcode runs the classifier on the remaining 20%. This is what Validation refers to: the percentage of those held-out training images the classifier got right. This number can vary from run to run because Xcode may not split the data the same way each time. In my case, Xcode reported 88% validation accuracy. I wouldn&#x2019;t worry too much about this. Evaluation is empty because we did not give the classifier any testing data. Let&#x2019;s do that now!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1440" height="900" class="aligncenter size-full wp-image-12372" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-9-50x31.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Drop the <em>Testing Data</em> folder into the Live View, just as you did before. This should happen pretty quickly. When everything is finished, your evaluation score should read 100%. This means that the classifier labelled all the test images correctly!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1440" height="812" class="aligncenter size-full wp-image-12373" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10-532x300.png 532w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10-768x433.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10-1024x577.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10-1240x699.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10-860x485.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10-400x226.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-10-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>If you&#x2019;re satisfied with your results, all that&#x2019;s left is saving the file! Click on the arrow next to the Image Classifier title. A dropdown menu should appear displaying all the metadata. Edit the metadata however you like and save the model wherever you want!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-11.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="719" height="628" class="aligncenter size-full wp-image-12374" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-11.png 719w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-11-200x175.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-11-343x300.png 343w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-11-680x594.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-11-400x349.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-11-50x44.png 50w" sizes="(max-width: 719px) 100vw, 719px"></p>
<p>Open the Core ML model and view the metadata. It has everything you filled out! Congratulations! You are the author of your own image classifier model that&#x2019;s super powerful and takes up only 17 KB!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-12.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="973" height="806" class="aligncenter size-full wp-image-12375" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-12.png 973w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-12-200x166.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-12-362x300.png 362w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-12-768x636.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-12-860x712.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-12-680x563.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-12-400x331.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-12-50x41.png 50w" sizes="(max-width: 973px) 100vw, 973px"></p>
<p>You can import it into your iOS app and see how it runs! Next, let&#x2019;s check out how to create our own Text Classifier. This requires a little more code!</p>
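<p>Once the model is in your Xcode project, classifying an image takes only a few lines with the Vision framework. Here&#x2019;s a quick sketch; note that <code>FruitClassifier</code> is a hypothetical name, since Xcode generates a Swift class named after whatever you called your .mlmodel file:</p>

```swift
import CoreML
import Vision

// A rough sketch of running the model in an app. "FruitClassifier" is a
// hypothetical name: Xcode generates a class named after your .mlmodel file.
func classifyFruit(_ image: CGImage, completion: @escaping (String) -> Void) {
    guard let model = try? VNCoreMLModel(for: FruitClassifier().model) else { return }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // The top result carries the predicted label (apple or banana).
        if let best = request.results?.first as? VNClassificationObservation {
            completion(best.identifier)
        }
    }
    try? VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```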
<h2>The Text Classifier Model</h2>
<h3>The Data</h3>
<p>Next, we&#x2019;ll be building a Spam Detector model with Create ML. This is a type of model which determines if a message is either spam or ham (ham being not spam). Just like all machine learning applications, we&#x2019;ll need some data. Download the sample JSON file <a href="https://github.com/appcoda/CreateMLQuickDemo/blob/master/resources/spam.json?ref=appcoda.com">here</a>.</p>
<p>Opening it, you can see that it is a JSON table containing lots of messages, each labelled either spam or ham. The amount of data in this sample is very minimal compared to what you might want in your application.</p>
<div class="alert green"><strong>Remember:</strong> More data yields more accuracy! However, make sure that your data is valid for your ML task: corrupted or irrelevant data will skew the results of your classifier.</div>
<h3>The Code</h3>
<p>Now, we have to ask Xcode to train a model on the data. While we don&#x2019;t have a nice and simple UI this time, the code we use is not too difficult. Type the following:</p>
<pre lang="swift">
import CreateML
import Foundation
//1
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: &quot;/Users/Path/To/spam.json&quot;))
let (trainingData, testingData) = data.randomSplit(by: 0.8, seed: 5)
let spamClassifier = try MLTextClassifier(trainingData: trainingData, textColumn: &quot;text&quot;, labelColumn: &quot;label&quot;)
//2
let trainingAccuracy = (1.0 - spamClassifier.trainingMetrics.classificationError) * 100
let validationAccuracy = (1.0 - spamClassifier.validationMetrics.classificationError) * 100
//3
let evaluationMetrics = spamClassifier.evaluation(on: testingData)
let evaluationAccuracy = (1.0 - evaluationMetrics.classificationError) * 100
//4
let metadata = MLModelMetadata(author: &quot;Sai Kambampati&quot;, shortDescription: &quot;A model trained to classify spam messages&quot;, version: &quot;1.0&quot;)
try spamClassifier.write(to: URL(fileURLWithPath: &quot;/Users/Path/To/Save/SpamDetector.mlmodel&quot;), metadata: metadata)
</pre>
<p>Let me explain what&#x2019;s happening. Most of the code should be fairly simple!</p>
<ol>
<li>First, we create a constant called <code>data</code>, an <a href="https://developer.apple.com/documentation/create_ml/mldatatable?ref=appcoda.com"><code>MLDataTable</code></a> loaded from our <code>spam.json</code> file. <code>MLDataTable</code> is a brand new type that holds a table of data used to train or evaluate an ML model. We split our data into <code>trainingData</code> and <code>testingData</code>. Like before, the ratio is 80-20, and the seed is 5. The seed initializes the random number generator used for the split, so the same seed always produces the same split. Then we define an <code>MLTextClassifier</code> called <code>spamClassifier</code> with our training data, specifying which column of the data holds the text and which holds the labels.</li>
<li>We create two constants, <code>trainingAccuracy</code> and <code>validationAccuracy</code>, to determine how accurate our classifier is. You&#x2019;ll be able to see the percentages in the side pane.</li>
<li>We also check how the evaluation performed. (Remember that evaluation measures how accurately the classifier labels text it has never seen before.)</li>
<li>Finally, we create some metadata for the ML model like the author, description, and version. We use the <code>write()</code> function to save the model to the location of our choice! In the image below, you&#x2019;ll see that I chose the Desktop!</li>
</ol>
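<p>The accuracy math in steps 2 and 3 is simply one minus the classification error, scaled to a percentage. Here&#x2019;s a standalone sketch (the 0.25 error value is made up for illustration):</p>

```swift
// Create ML reports a classificationError in the range [0, 1]; the
// tutorial converts it to a percentage. The 0.25 is a made-up error.
func accuracyPercentage(fromClassificationError error: Double) -> Double {
    return (1.0 - error) * 100
}

let validationError = 0.25
print(accuracyPercentage(fromClassificationError: validationError)) // 75.0
```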
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-13.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1110" height="717" class="aligncenter size-full wp-image-12376" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-13.png 1110w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-13-200x129.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-13-464x300.png 464w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-13-768x496.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-13-1024x661.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-13-860x556.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-13-680x439.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-13-400x258.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-13-50x32.png 50w" sizes="(max-width: 1110px) 100vw, 1110px"></p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-15.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="970" height="790" class="aligncenter size-full wp-image-12378" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-15.png 970w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-15-200x163.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-15-368x300.png 368w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-15-768x625.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-15-860x700.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-15-680x554.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-15-400x326.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-15-50x41.png 50w" sizes="(max-width: 970px) 100vw, 970px"></p>
<p>Run the playground. You can see the iterations in the console and the accuracy in the right hand bar! When all is done, the Core ML model is saved! You can view the model and see the metadata!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16.jpg" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="2086" height="752" class="aligncenter size-full wp-image-12382" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16.jpg 2086w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16-200x72.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16-600x216.jpg 600w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16-768x277.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16-1024x369.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16-1680x606.jpg 1680w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16-1240x447.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16-860x310.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16-680x245.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16-400x144.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/06/create-ml-15-16-50x18.jpg 50w" sizes="(max-width: 2086px) 100vw, 2086px"></p>
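<p>If you want to try the model out in an app, the <code>NaturalLanguage</code> framework introduced this year can wrap it for you. Here&#x2019;s a quick sketch; <code>SpamDetector</code> is a hypothetical name, as Xcode generates a class named after your .mlmodel file:</p>

```swift
import NaturalLanguage

// A rough sketch of using the saved model in an app. "SpamDetector" is a
// hypothetical name: Xcode generates a class named after your .mlmodel file.
let spamModel = try NLModel(mlModel: SpamDetector().model)
if let label = spamModel.predictedLabel(for: "You have won a free cruise!") {
    print(label) // "spam" or "ham"
}
```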
<h2>Tabular Classification</h2>
<h3>The Data</h3>
<p>Tabular data is one of the most advanced and interesting features of Create ML. By observing a bunch of features in a table, Create ML can detect patterns and build a model to predict the target value you want.</p>
<p>In this case, let&#x2019;s deal with one of the most popular datasets in the world of machine learning: house prices! And to make this more interesting, the dataset is not in the JSON format but rather the CSV format! Download the dataset <a href="https://github.com/appcoda/CreateMLQuickDemo/raw/master/resources/HouseData.csv?ref=appcoda.com">here</a>.</p>
<p>This dataset is a modified version of the Boston housing dataset found on the <a href="https://archive.ics.uci.edu/ml/datasets/Housing?ref=appcoda.com">UCI Machine Learning Repository</a>. Opening the file you can see that there is a huge table filled with numbers and 4 abbreviations. Here&#x2019;s what they mean:</p>
<ul>
<li><strong>RM</strong>: The average number of rooms per dwelling</li>
<li><strong>LSTAT</strong>: The percentage of population considered lower status</li>
<li><strong>PTRATIO</strong>: The pupil-teacher ratio by town</li>
<li><strong>MEDV</strong>: The median value of owner-occupied homes</li>
</ul>
<p>As you can guess, we&#x2019;ll be using the 3 features (RM, LSTAT, PTRATIO) to predict the final price (MEDV)!</p>
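<p>To get a feel for what each row holds, here&#x2019;s a small pure-Swift sketch that parses one made-up CSV row into the three features and the target (when you load the real file, <code>MLDataTable</code> does all of this parsing for you):</p>

```swift
import Foundation

// What one row of HouseData.csv represents. The sample values below are
// made up for illustration; MLDataTable handles this for the real file.
struct House {
    let rm: Double      // average rooms per dwelling
    let lstat: Double   // percentage of lower-status population
    let ptratio: Double // pupil-teacher ratio
    let medv: Double    // median home value (the target)
}

func parseRow(_ line: String) -> House? {
    let parts = line.split(separator: ",").compactMap { Double(String($0)) }
    guard parts.count == 4 else { return nil }
    return House(rm: parts[0], lstat: parts[1], ptratio: parts[2], medv: parts[3])
}

let house = parseRow("6.575,4.98,15.3,504000")
```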
<h3>The Code</h3>
<p>Getting Xcode to read the table is quite simple! The following code should look really similar to the text classification code!</p>
<pre lang="swift">
//1
let houseData = try MLDataTable(contentsOf: URL(fileURLWithPath: &quot;/Users/Path/To/HouseData.csv&quot;))
let (trainingCSVData, testCSVData) = houseData.randomSplit(by: 0.8, seed: 0)
//2
let pricer = try MLRegressor(trainingData: trainingCSVData, targetColumn: &quot;MEDV&quot;)
//3
let csvMetadata = MLModelMetadata(author: &quot;Sai Kambampati&quot;, shortDescription: &quot;A model used to determine the price of a house based on some features.&quot;, version: &quot;1.0&quot;)
try pricer.write(to: URL(fileURLWithPath: &quot;/Users/Path/To/Write/HousePricer.mlmodel&quot;), metadata: csvMetadata)
</pre>
<p>If you weren&#x2019;t able to understand the above code, no problem! I&#x2019;ll go through it step by step!</p>
<ol>
<li>The first step is to reference our data in <code>HouseData.csv</code>. This is done through a simple call of <code>URL(fileURLWithPath:)</code>. Next, we split the data into training and testing portions. We&#x2019;ll split it 80-20 like always, this time with a <code>seed</code> of 0; any fixed seed makes the split reproducible.</li>
<li>Next, we define a regressor named <code>pricer</code> for our data using the brand new <a href="https://developer.apple.com/documentation/create_ml/mlregressor?ref=appcoda.com">MLRegressor</a> enumeration. This is one of the coolest parts of Create ML. There are many regression algorithms to choose from: linear, boosted tree, decision tree, and random forest, and these are just the most common ones. Unless you&#x2019;re an ML expert, it can be hard to determine which one best suits your data. This is where Create ML comes in to help. When you select <code>MLRegressor</code>, Create ML runs your data through all these regressors and chooses the best one for you. We pass in our training data and set the target column to <code>MEDV</code>, which is the median price.
<p>Here&#x2019;s some quick terminology. You may be wondering what the difference between a classifier and a regressor is. A classifier groups the output of your data into classes, or labels. A regressor, on the other hand, predicts a continuous output value from the training data; regressors don&#x2019;t have labels. Also, in machine learning, features are the variables in a dataset. In our case, the <strong>features</strong> are the average number of rooms, the percentage of lower-status population, and the pupil-teacher ratio. The <strong>target</strong> is also a column in our data: it is what we want the regression to predict. In this case, it&#x2019;s the median price of the houses.</p></li>
<li>Finally, we define some metadata for our model and save it to wherever we would like!</li>
</ol>
<p>As of this writing, Create ML does not support showing the accuracy of MLRegressors. It can only show the maximum error and the root mean square error, neither of which says much about how accurate the model is. Trust me, though, when I say that the model Xcode generates is fairly accurate.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17.jpg" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="1319" height="767" class="aligncenter size-full wp-image-12380" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17.jpg 1319w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17-200x116.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17-516x300.jpg 516w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17-768x447.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17-1024x595.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17-1240x721.jpg 1240w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17-860x500.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17-680x395.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17-400x233.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-17-50x29.jpg 50w" sizes="(max-width: 1319px) 100vw, 1319px"></p>
<p>After the playground is done running, observe the right-hand pane. It looks like Create ML has determined that Boosted Tree is the best regressor for our data! Isn&#x2019;t that amazing? I have saved my Core ML model to my Desktop. Open your Core ML model and observe the metadata.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-18.png" alt="Introduction to Create ML: How to Train Your Own Machine Learning Model in Xcode" width="991" height="794" class="aligncenter size-full wp-image-12381" srcset="https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-18.png 991w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-18-200x160.png 200w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-18-374x300.png 374w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-18-768x615.png 768w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-18-860x689.png 860w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-18-680x545.png 680w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-18-400x320.png 400w, https://www.appcoda.com/content/images/wordpress/2018/06/Create-ML-18-50x40.png 50w" sizes="(max-width: 991px) 100vw, 991px"></p>
<p>You can see that the model is a Pipeline Regressor and is about 10 KB. It takes in 3 features (just like we wanted) and outputs the final price!</p>
<h2>Conclusion</h2>
<p>In this tutorial, you saw how to create your own machine learning models using Apple&#x2019;s newest framework, Create ML! With just a few lines of code, you can create advanced, state-of-the-art machine learning models to process your data and give you the results you want!</p>
<p>You saw how to train images, text, and tabular data in both CSV and JSON formats. With <code>CreateMLUI</code>, it&#x2019;s super simple to train images, and while there is no UI for text and tabular data, you can write the training code in fewer than 10 lines.</p>
<p>To learn more about Create ML, you can watch Apple&#x2019;s video on Create ML from <a href="https://developer.apple.com/videos/play/wwdc2018/703/?ref=appcoda.com">WWDC 2018 here</a>. You can also check out Apple&#x2019;s documentation on Create ML <a href="https://developer.apple.com/documentation/create_ml?ref=appcoda.com">here</a>.</p>
<p>You can download the final playground <a href="https://github.com/appcoda/CreateMLQuickDemo?ref=appcoda.com">here</a>. Along with the project, you&#x2019;ll get access to the final Core ML models so you can see if your model matches up! Keep experimenting with Create ML and observe your results as you import them to your iOS app! Let me know how everything goes and share screenshots of your app using CoreML in the comments below!</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Building Your First Blockchain App in Swift]]></title><description><![CDATA[<!--kg-card-begin: html-->
<p>Blockchain is one of the many disruptive technologies that has just started to gain traction among many people. Why? This is because blockchain is the founding technology for many cryptocurrencies like Bitcoin, Ethereum, and Litecoin. How exactly does Blockchain work though? In this tutorial, I&#x2019;ll be covering everything</p>]]></description><link>https://www.appcoda.com/blockchain-introduction/</link><guid isPermaLink="false">66612a0f166d3c03cf011469</guid><category><![CDATA[iOS Programming]]></category><category><![CDATA[Swift]]></category><category><![CDATA[UIKit]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Thu, 31 May 2018 00:44:14 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-featured.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<img src="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-featured.jpg" alt="Building Your First Blockchain App in Swift"><p>Blockchain is one of the many disruptive technologies that has just started to gain traction among many people. Why? This is because blockchain is the founding technology for many cryptocurrencies like Bitcoin, Ethereum, and Litecoin. How exactly does Blockchain work though? In this tutorial, I&#x2019;ll be covering everything there is to know about the blockchain technology itself and how to make your own &#x201C;blockchain&#x201D; in Swift. Let&#x2019;s get started!</p>
<h2>Blockchain: How does it work?</h2>
<p>As the name implies, blockchain is a chain comprised of different blocks strung together. Each block contains 3 pieces of information: the data, a hash, and the previous block&#x2019;s hash.</p>
<ol>
<li><strong>Data</strong> &#x2013; The data stored in a block depends on the type of blockchain. For example, in the Bitcoin blockchain, a block stores the information relating to a transaction: the amount of money transferred and the information of the two people involved in the transaction.</li>
<li><strong>Hash</strong> &#x2013; You can think of a hash as a digital fingerprint. It is used to identify a block and its data. What&#x2019;s important about hashes is that each one is a <strong>unique</strong> alphanumeric code, usually about 64 characters long. When a block is created, so is its hash. When a block is modified, the hash changes too. In this way, hashes are vital for detecting any changes made to a block.</li>
<li><strong>Previous block&#x2019;s hash</strong> &#x2013; By storing the hash of the previous block, you can see how each block is linked to form a blockchain! This is what makes a blockchain so secure.</li>
</ol>
<p>Take a look at this picture:</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained.png" alt="Building Your First Blockchain App in Swift" width="2050" height="790" class="aligncenter size-full wp-image-12315" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained.png 2050w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained-200x77.png 200w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained-600x231.png 600w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained-768x296.png 768w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained-1024x395.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained-1680x647.png 1680w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained-1240x478.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained-860x331.png 860w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained-680x262.png 680w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained-400x154.png 400w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-explained-50x19.png 50w" sizes="(max-width: 2050px) 100vw, 2050px"></p>
<p>As you can see, each block consists of data (not shown), a hash, and the previous block&#x2019;s hash. For example, the yellow block contains its own hash, H7S6, and the red block&#x2019;s hash, 8SD9. This way, all the blocks form a chain. Now, let&#x2019;s say a malicious hacker comes along and tries to modify the red block. Remember, every time a block is modified in any way, its hash changes! So when the next block runs a check and sees that the previous hash it stored no longer matches, the tampered block is rejected: it has effectively cut itself off from the chain.</p>
<p>This is what makes blockchain so secure. It&#x2019;s close to impossible to go back and change any data. While hashes provide a strong integrity check, there are two more safeguards that keep a blockchain even more secure: proof-of-work and smart contracts. While I won&#x2019;t be going into the details, you can read more about them over <a href="https://hackernoon.com/what-on-earth-is-a-smart-contract-2c82e5d89d26?ref=appcoda.com">here</a>.</p>
<p>The last way a blockchain secures itself is through where it lives. Unlike most data, which is stored on central servers and databases, a blockchain lives on a peer-to-peer (P2P) network: a type of network that anyone can join and whose data is distributed to every participant.</p>
<p>When someone joins this network, they get a full copy of the blockchain. When someone creates a new block, it is sent to everyone on the network. Then each node runs a series of checks to determine whether the block has been tampered with before adding it to its copy of the chain. That way, the information is available for everyone, everywhere. This may sound familiar if you&#x2019;re a fan of <em>HBO&#x2019;s Silicon Valley</em>. In that TV show, the protagonist uses a similar technology to create his new internet.</p>
<p>Since every node has a copy of the blockchain, the nodes can form a consensus and determine which blocks are valid and which are not. Therefore, to tamper with one block, you&#x2019;d have to control more than 50% of the nodes on the network in order to pass your version along. This is why blockchain is perhaps one of the most secure technologies created in the past decade.</p>
<h2>About the Sample Application</h2>
<p>Now that you have an understanding of how blockchain works, let&#x2019;s get started with our sample application! Download the <a href="https://github.com/appcoda/BlockchainDemo/raw/master/BlockchainStarter.zip?ref=appcoda.com">starter project here</a>.</p>
<p>As you can see, we have two Bitcoin wallets. The first account, Account 1065, has 500 BTC available while the second account, 0217, has nothing. We send bitcoins to the other account using the Send button. To earn BTC, we can press the Mine button, which gives us a reward of 100 BTC. Essentially, we&#x2019;ll observe the transactions that take place between the two Bitcoin accounts by watching the console while the app is running.</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2.png" alt="Building Your First Blockchain App in Swift" width="1440" height="758" class="aligncenter size-full wp-image-12300" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2-200x105.png 200w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2-570x300.png 570w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2-768x404.png 768w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2-1024x539.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2-1240x653.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2-860x453.png 860w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2-680x358.png 680w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2-400x211.png 400w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-2-50x26.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>You&#x2019;ll notice that in the sidebar there are two important classes: <code>Block</code> and <code>Blockchain</code>. Opening these files, you&#x2019;ll see that they&#x2019;re empty. That&#x2019;s because I&#x2019;ll walk you through writing the logic for these classes. Let&#x2019;s get started!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3.png" alt="Building Your First Blockchain App in Swift" width="1398" height="729" class="aligncenter size-full wp-image-12301" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3.png 1398w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3-200x104.png 200w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3-575x300.png 575w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3-768x400.png 768w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3-1024x534.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3-1240x647.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3-860x448.png 860w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3-680x355.png 680w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3-400x209.png 400w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-3-50x26.png 50w" sizes="(max-width: 1398px) 100vw, 1398px"></p>
<h2>Defining the Block in Swift</h2>
<p>Head over to <code>Block.swift</code> and let&#x2019;s add the code that defines a Block. To begin, let&#x2019;s break down what a block is. We established earlier that a block consists of 3 pieces of data: <em>the hash</em>, <em>the actual data</em> to be recorded, and <em>the previous block&#x2019;s hash</em>. When we build our blockchain, we also need to know where each block sits in the chain, so we&#x2019;ll store an index too. We can easily define all of this in Swift. Add to the class:</p>
<pre lang="swift">var hash: String!
var data: String!
var previousHash: String!
var index: Int!
</pre>
<p>Now, one last important piece of code needs to be added. I mentioned earlier that every time a block is modified, its hash changes; this is one of the features that makes a blockchain secure. For our demo, we&#x2019;ll keep things simple and write a function that generates a hash filled with random letters and numbers. This function only requires a few lines of code:</p>
<pre lang="swift">func generateHash() -&gt; String {
    return NSUUID().uuidString.replacingOccurrences(of: &quot;-&quot;, with: &quot;&quot;)
}
</pre>
<p><code>NSUUID</code> is an object which represents a universally unique value and bridges to <code>UUID</code>. It&#x2019;s built right into Foundation and great for generating 32-character strings once the hyphens are removed. This function generates a UUID, erases the hyphens, and returns the <code>String</code>, which becomes the block&#x2019;s hash. <code>Block.swift</code> should now look like this:</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4.png" alt="Building Your First Blockchain App in Swift" width="1440" height="761" class="aligncenter size-full wp-image-12302" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4-200x106.png 200w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4-568x300.png 568w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4-768x406.png 768w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4-1024x541.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4-1240x655.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4-860x454.png 860w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4-680x359.png 680w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4-400x211.png 400w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-4-50x26.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
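<p>One caveat worth noting: because <code>generateHash()</code> returns a random UUID, the hash in our demo does not actually change when the block&#x2019;s data changes. A real blockchain derives the hash from the block&#x2019;s contents. Here&#x2019;s an illustrative sketch of a content-derived hash; <code>contentHash</code> and the simple djb2-style mixing below are hypothetical stand-ins, not what Bitcoin uses (Bitcoin uses SHA-256).</p>

```swift
// Hypothetical helper: derives a hash from the block's contents, so any
// modification to the data produces a different hash. Uses a simple
// djb2-style string hash for illustration; real blockchains use a
// cryptographic hash such as SHA-256.
func contentHash(index: Int, data: String, previousHash: String) -> String {
    let input = "\(index)|\(data)|\(previousHash)"
    var hash: UInt64 = 5381
    for byte in input.utf8 {
        hash = hash &* 33 &+ UInt64(byte)  // hash = hash * 33 + byte (wrapping)
    }
    return String(hash, radix: 16)
}

let original = contentHash(index: 1, data: "10 BTC to 0217", previousHash: "0000")
let tampered = contentHash(index: 1, data: "99 BTC to 0217", previousHash: "0000")
print(original == tampered)  // false: changing the data changes the hash
```

<p>With a content-derived hash, re-computing the hash after any edit immediately reveals tampering, which is the property a chain-validity check relies on.</p>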
<p>Now that we have defined the <code>Block</code> class, let&#x2019;s define our <code>Blockchain</code> class. Switch over to <code>Blockchain.swift</code> to begin.</p>
<h2>Defining the Blockchain in Swift</h2>
<p>Just like before, let&#x2019;s break a blockchain down to its fundamentals. In very basic terms, a blockchain is nothing but a chain of blocks strung together, or in other words, <strong>a list of items held together</strong>. Does that sound familiar? It should, because that&#x2019;s the definition of an array, and this array&#x2019;s elements are blocks! Let&#x2019;s add this to our code!</p>
<pre lang="swift">var chain = [Block]()
</pre>
<div class="alert gray"><strong>Quick tip:</strong> This applies to almost everything in the world of computer science. If you ever encounter a large problem, try to break it down into small components so you can build your way up to a solution, just like we did when figuring out how to model blocks and blockchains in Swift!</div>
<p>You&#x2019;ll notice that the array&#x2019;s element type is the <code>Block</code> class we defined earlier. That&#x2019;s the only stored property the blockchain needs. To finish up, we need to add two functions to the class. Try to answer this question based on what I explained earlier.</p>
<blockquote><p>
What are the two main functions in a blockchain?</p></blockquote>
<p>I hope you were able to answer this question! The two main functions of a blockchain are creating the genesis block (the initial block) and appending a new block to its end. Of course, I&#x2019;m not going into decentralizing the chain or adding smart contracts, but these are the basic functions! Add the following code to <code>Blockchain.swift</code>:</p>
<pre lang="swift">func createGenesisBlock(data:String) {
    let genesisBlock = Block()
    genesisBlock.hash = genesisBlock.generateHash()
    genesisBlock.data = data
    genesisBlock.previousHash = &quot;0000&quot;
    genesisBlock.index = 0
    chain.append(genesisBlock)
}

func createBlock(data:String) {
    let newBlock = Block()
    newBlock.hash = newBlock.generateHash()
    newBlock.data = data
    newBlock.previousHash = chain[chain.count-1].hash
    newBlock.index = chain.count
    chain.append(newBlock)
}
</pre>
<ol>
<li>The first function creates the genesis block. It takes the block&#x2019;s data as input. We define a constant named <code>genesisBlock</code> of type <code>Block</code>, so it has all the variables and functions we defined earlier in <code>Block.swift</code>. We set its hash to the result of <code>generateHash()</code> and its data to the input <code>data</code>. Since it&#x2019;s the first block, we set the previous block&#x2019;s hash to 0000 to mark it as the initial block. Finally, we set its index to 0 and append it to the blockchain&#x2019;s <code>chain</code>.</li>
<li>The next function applies to all the blocks after the <code>genesisBlock</code> and creates the rest of the blocks. You&#x2019;ll notice it&#x2019;s very similar to the previous function. The only difference is that we set <code>previousHash</code> to the previous block&#x2019;s hash and <code>index</code> to the block&#x2019;s place in the blockchain.</li>
</ol>
<p>And that&#x2019;s all! We are done defining our Blockchain! Your code should look like something below!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5.png" alt="Building Your First Blockchain App in Swift" width="1440" height="761" class="aligncenter size-full wp-image-12303" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5-200x106.png 200w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5-568x300.png 568w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5-768x406.png 768w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5-1024x541.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5-1240x655.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5-860x454.png 860w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5-680x359.png 680w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5-400x211.png 400w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-5-50x26.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Next, we&#x2019;ll connect all the pieces to our <code>ViewController.swift</code> file and see it running in action!</p>
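<p>Before we do, here&#x2019;s a quick playground-style check (not part of the starter project) that exercises the two functions we just wrote. The class definitions are repeated so the snippet runs on its own:</p>

```swift
import Foundation

// The Block and Blockchain classes exactly as defined in the tutorial.
class Block {
    var hash: String!
    var data: String!
    var previousHash: String!
    var index: Int!
    func generateHash() -> String {
        return NSUUID().uuidString.replacingOccurrences(of: "-", with: "")
    }
}

class Blockchain {
    var chain = [Block]()
    func createGenesisBlock(data: String) {
        let genesisBlock = Block()
        genesisBlock.hash = genesisBlock.generateHash()
        genesisBlock.data = data
        genesisBlock.previousHash = "0000"
        genesisBlock.index = 0
        chain.append(genesisBlock)
    }
    func createBlock(data: String) {
        let newBlock = Block()
        newBlock.hash = newBlock.generateHash()
        newBlock.data = data
        newBlock.previousHash = chain[chain.count - 1].hash
        newBlock.index = chain.count
        chain.append(newBlock)
    }
}

// Build a tiny two-block chain and confirm the blocks are linked.
let demoChain = Blockchain()
demoChain.createGenesisBlock(data: "First transaction")
demoChain.createBlock(data: "Second transaction")
print(demoChain.chain.count)                                       // 2
print(demoChain.chain[1].previousHash == demoChain.chain[0].hash)  // true
```

<p>The second print confirms the defining property of the chain: each new block stores the hash of the block before it.</p>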
<h2>The Wallet Backend</h2>
<p>Switching over to <code>ViewController.swift</code>, we can see that all our outlets are connected. All we need to do is handle the transaction and print them to the console.</p>
<p>Before we do that, however, we should explore the Bitcoin blockchain a little. Bitcoins come from an overall account; let&#x2019;s say its account number is 0000. When you mine BTC, you solve math problems and are issued a certain number of bitcoins as a reward. This provides a smart way to issue the currency and also creates an incentive for more people to mine. In our app, let&#x2019;s make the reward 100 BTC. First, let&#x2019;s add the variables we need to our view controller:</p>
<pre lang="swift">let firstAccount = 1065
let secondAccount = 0217
let bitcoinChain = Blockchain()
let reward = 100
var accounts: [String: Int] = [&quot;0000&quot;: 10000000]
let invalidAlert = UIAlertController(title: &quot;Invalid Transaction&quot;, message: &quot;Please check the details of your transaction as we were unable to process this.&quot;, preferredStyle: .alert)
</pre>
<p>We defined an account with number <code>1065</code> and another account with number <code>0217</code>. We also added a variable called <code>bitcoinChain</code> to be our blockchain and set the <code>reward</code> to 100. We need a master account from which all the bitcoins come: this is our genesis account, number <code>0000</code>, holding 10 million bitcoins. You can think of this account as a bank: for every reward, 100 bitcoins are taken out of it and moved into the rightful account. We also define an alert to show whenever a transaction cannot be completed.</p>
<p>Now, let&#x2019;s code some generic functions which will run. Can you guess what these functions are?</p>
<ol>
<li>The first function handles a transaction. We make sure the sender and recipient accounts are debited or credited the right amount, and this information is recorded in our blockchain.</li>
<li>The next function prints the entire record to the console: each block and the data it contains.</li>
<li>The final function verifies whether the blockchain is valid by making sure each block&#x2019;s stored previous hash matches the actual hash of the block before it. Since we won&#x2019;t be demonstrating any hacking methods in our demo, the chain will always be valid.</li>
</ol>
<h3>The Transaction Function</h3>
<p>Here&#x2019;s the generic transaction function we have. Enter the following code right underneath the place where we defined our variables.</p>
<pre lang="swift">func transaction(from: String, to: String, amount: Int, type: String) {
    // 1
    if accounts[from] == nil {
        self.present(invalidAlert, animated: true, completion: nil)
        return
    } else if accounts[from]!-amount &lt; 0 {
        self.present(invalidAlert, animated: true, completion: nil)
        return
    } else {
        accounts.updateValue(accounts[from]!-amount, forKey: from)
    }
    
    // 2
    if accounts[to] == nil {
        accounts.updateValue(amount, forKey: to)
    } else {
        accounts.updateValue(accounts[to]!+amount, forKey: to)
    }
    
    // 3
    if type == &quot;genesis&quot; {
        bitcoinChain.createGenesisBlock(data: &quot;From: \(from); To: \(to); Amount: \(amount)BTC&quot;)
    } else if type == &quot;normal&quot; {
        bitcoinChain.createBlock(data: &quot;From: \(from); To: \(to); Amount: \(amount)BTC&quot;)
    }
}
</pre>
<p>This may seem like a lot of code, but at its core it just defines some rules to follow for each transaction. At the top, the function takes 4 parameters: <em>from</em>, <em>to</em>, <em>amount</em>, and <em>type</em>. From, To, and Amount are self-explanatory; Type defines the kind of transaction. There are 2 types: normal and genesis. A normal transaction is one between accounts <code>1065</code> and <code>0217</code>, whereas a genesis transaction involves the account <code>0000</code>.</p>
<ol>
<li>The first <code>if-else</code> condition concerns the from account. If it doesn&#x2019;t exist or is short of money, we display the Invalid Transaction alert and return from the function. Otherwise, we deduct the amount from its balance.</li>
<li>The second <code>if-else</code> condition concerns the account we send to. If it doesn&#x2019;t exist yet, we create it with the transferred amount as its starting balance. Otherwise, we add the amount to its existing balance.</li>
<li>The third <code>if-else</code> statement deals with the transaction type. If the transaction involves the genesis block, we create a new genesis block; otherwise, we create a new block storing the data.</li>
</ol>
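<p>The three rules above boil down to plain dictionary bookkeeping. Here&#x2019;s a UIKit-free sketch you can run outside the app; <code>applyTransfer</code> and <code>ledger</code> are hypothetical names, and the function returns <code>false</code> instead of presenting an alert:</p>

```swift
// UIKit-free sketch of the transaction bookkeeping (illustrative names).
var ledger: [String: Int] = ["0000": 10_000_000]

// Returns false (instead of presenting an alert) when the sender is
// unknown or would go negative; otherwise moves the amount across.
func applyTransfer(from: String, to: String, amount: Int) -> Bool {
    guard let balance = ledger[from], balance - amount >= 0 else { return false }
    ledger.updateValue(balance - amount, forKey: from)
    ledger.updateValue((ledger[to] ?? 0) + amount, forKey: to)  // creates the account if needed
    return true
}

print(applyTransfer(from: "0000", to: "1065", amount: 50))   // true
print(ledger["1065"] ?? 0)                                   // 50
print(applyTransfer(from: "1065", to: "0217", amount: 999))  // false: insufficient funds
```

<p>Factoring the balance logic out like this makes the debit and credit rules easy to verify independently of the UI.</p>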
<h3>The Printing Function</h3>
<p>At the end of every transaction, we want to see a list of all the transactions to make sure we know everything that is going on. Here&#x2019;s what we type right underneath the <code>transaction</code> function.</p>
<pre lang="swift">func chainState() {
    for i in 0..&lt;bitcoinChain.chain.count {
        print(&quot;\tBlock: \(bitcoinChain.chain[i].index!)\n\tHash: \(bitcoinChain.chain[i].hash!)\n\tPreviousHash: \(bitcoinChain.chain[i].previousHash!)\n\tData: \(bitcoinChain.chain[i].data!)&quot;)
    }
    redLabel.text = &quot;Balance: \(accounts[String(describing: firstAccount)]!) BTC&quot;
    blueLabel.text = &quot;Balance: \(accounts[String(describing: secondAccount)]!) BTC&quot;
    print(accounts)
    print(chainValidity())
}
</pre>
<p>This is a simple for loop: for every block in our <code>bitcoinChain</code>, we print the block number, the hash, the previous block&#x2019;s hash, and the data it stores. We then update the labels in our UI so that they show the correct amount of BTC in each account. Finally, we print a list of the accounts (there should be 3) and check the validity of the chain.</p>
<p>Now you should be getting an error on the last line of that function. This is because we haven&#x2019;t defined our <code>chainValidity()</code> function yet so let&#x2019;s get to it!</p>
<h3>The Validity Function</h3>
<p>Remember that a chain is valid if each block&#x2019;s stored previous hash matches the actual hash of the block before it. We can easily check that with another for loop that iterates over every block:</p>
<pre lang="swift">func chainValidity() -&gt; String {
    var isChainValid = true
    for i in 1..&lt;bitcoinChain.chain.count {
        if bitcoinChain.chain[i].previousHash != bitcoinChain.chain[i-1].hash {
            isChainValid = false
        }
    }
    return &quot;Chain is valid: \(isChainValid)\n&quot;
}
</pre>
<p>Similar to before, we iterate over every block in our <code>bitcoinChain</code> and check whether each block&#x2019;s stored previous hash matches the hash of the block that precedes it.</p>
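<p>Since our demo never tampers with the chain, the check always returns <code>true</code>. To see it catch something, here&#x2019;s a self-contained sketch with a deliberately tampered chain; <code>SimpleBlock</code> is a hypothetical value-type stand-in for our <code>Block</code> class:</p>

```swift
// SimpleBlock is a hypothetical stand-in for the tutorial's Block class.
struct SimpleBlock { var hash: String; var previousHash: String }

// Same validity rule as chainValidity(): every block's stored
// previousHash must match the actual hash of the block before it.
func isChainValid(_ chain: [SimpleBlock]) -> Bool {
    for i in 1..<chain.count where chain[i].previousHash != chain[i - 1].hash {
        return false
    }
    return true
}

var chain = [
    SimpleBlock(hash: "A1", previousHash: "0000"),
    SimpleBlock(hash: "B2", previousHash: "A1"),
    SimpleBlock(hash: "C3", previousHash: "B2"),
]
print(isChainValid(chain))  // true

// A "hacker" rewrites the middle block, so its hash changes...
chain[1].hash = "HACKED"
print(isChainValid(chain))  // false: the next block's previousHash no longer matches
```

<p>The tampered block breaks the link to its successor, which is exactly the self-severing behavior described earlier.</p>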
<p>And that&#x2019;s it! We have defined our functions that will be used every time! Your <code>ViewController.swift</code> should now look like this:</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6.png" alt="Building Your First Blockchain App in Swift" width="1440" height="900" class="aligncenter size-full wp-image-12304" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6-200x125.png 200w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6-480x300.png 480w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6-768x480.png 768w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6-1024x640.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6-1240x775.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6-860x538.png 860w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6-680x425.png 680w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6-400x250.png 400w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-6-50x31.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>All that&#x2019;s left is to just link up the buttons to the functions. Let&#x2019;s begin the final phase now!</p>
<h2>Linking Everything Together</h2>
<p>When our app first starts, we want the genesis account <code>0000</code> to send 50 BTC to our first account. From there, we&#x2019;ll have the first account give 10 BTC to the second account. This can be done quite simply in a few lines of code. Change your <code>viewDidLoad</code> function to look like this:</p>
<pre lang="swift">override func viewDidLoad() {
    super.viewDidLoad()
    transaction(from: &quot;0000&quot;, to: &quot;\(firstAccount)&quot;, amount: 50, type: &quot;genesis&quot;)
    transaction(from: &quot;\(firstAccount)&quot;, to: &quot;\(secondAccount)&quot;, amount: 10, type: &quot;normal&quot;)
    chainState()
    self.invalidAlert.addAction(UIAlertAction(title: &quot;OK&quot;, style: .default, handler: nil))
}
</pre>
<p>We call the <code>transaction</code> function we defined earlier and then call <code>chainState()</code>. We also add an OK action to our Invalid Transaction alert.</p>
<p>Now let&#x2019;s see what to add in the remaining four functions: <code>redMine()</code>, <code>blueMine()</code>, <code>redSend()</code>, and <code>blueSend()</code>.</p>
<h3>The Mining Functions</h3>
<p>The mining functions are very easy and require only 3 lines of code each. Here&#x2019;s what to add:</p>
<pre lang="swift">@IBAction func redMine(_ sender: Any) {
    transaction(from: &quot;0000&quot;, to: &quot;\(firstAccount)&quot;, amount: 100, type: &quot;normal&quot;)
    print(&quot;New block mined by: \(firstAccount)&quot;)
    chainState()
}
    
@IBAction func blueMine(_ sender: Any) {
    transaction(from: &quot;0000&quot;, to: &quot;\(secondAccount)&quot;, amount: 100, type: &quot;normal&quot;)
    print(&quot;New block mined by: \(secondAccount)&quot;)
    chainState()
}
</pre>
<p>In the first mining function, we use our <code>transaction</code> function to send 100 BTC from the genesis account to our first account, print that a block was mined, and call <code>chainState()</code>. Similarly, we send 100 BTC to our second account in the <code>blueMine</code> function.</p>
<h3>The Sending Functions</h3>
<p>The sending functions follow a similar pattern.</p>
<pre lang="swift">@IBAction func redSend(_ sender: Any) {
    // Int(_:) returns nil for an empty or non-numeric entry.
    if let amount = Int(redAmount.text ?? &quot;&quot;) {
        transaction(from: &quot;\(firstAccount)&quot;, to: &quot;\(secondAccount)&quot;, amount: amount, type: &quot;normal&quot;)
        print(&quot;\(amount) BTC sent from \(firstAccount) to \(secondAccount)&quot;)
        chainState()
        redAmount.text = &quot;&quot;
    } else {
        present(invalidAlert, animated: true, completion: nil)
    }
}
    
@IBAction func blueSend(_ sender: Any) {
    if let amount = Int(blueAmount.text ?? &quot;&quot;) {
        transaction(from: &quot;\(secondAccount)&quot;, to: &quot;\(firstAccount)&quot;, amount: amount, type: &quot;normal&quot;)
        print(&quot;\(amount) BTC sent from \(secondAccount) to \(firstAccount)&quot;)
        chainState()
        blueAmount.text = &quot;&quot;
    } else {
        present(invalidAlert, animated: true, completion: nil)
    }
}
</pre>
<p>First, we check whether the <code>redAmount</code> or <code>blueAmount</code> text field is empty. If it is, we display the Invalid Transaction alert. Otherwise, we&#x2019;re good to go: we use our <code>transaction</code> function to send money from the first account to the second (or vice versa) with the amount entered, and we set the type to <code>normal</code>. We print how much has been sent and call our <code>chainState()</code> function. At the end, we clear the text field.</p>
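<p>One caveat worth noting: <code>Int(redAmount.text!)!</code> force-unwraps the conversion, so typing anything non-numeric (say, &quot;abc&quot;) will crash the app. A safer sketch, using a hypothetical helper of my own (not part of the tutorial&#x2019;s code), returns <code>nil</code> for anything that is not a positive integer, letting you reuse the Invalid Transaction alert for bad input as well as empty input:</p>

```swift
// Hypothetical helper: parse a text field's contents into a positive amount,
// returning nil for empty, non-numeric, zero, or negative input.
func parsedAmount(from text: String?) -> Int? {
    guard let text = text, let amount = Int(text), amount > 0 else { return nil }
    return amount
}
```

<p>In <code>redSend</code>, you could then guard on <code>parsedAmount(from: redAmount.text)</code> and present the alert whenever it returns <code>nil</code>.</p>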
<p>And we&#x2019;re all done! Check to see if your code matches the image below!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7.png" alt="Building Your First Blockchain App in Swift" width="1440" height="764" class="aligncenter size-full wp-image-12305" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7-200x106.png 200w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7-565x300.png 565w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7-768x407.png 768w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7-1024x543.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7-1240x658.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7-860x456.png 860w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7-680x361.png 680w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7-400x212.png 400w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-7-50x27.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<p>Run the app and try it out! From the front end, it looks like a normal transaction app, but you&#x2019;ll know what&#x2019;s going on behind the scenes! Play around with the app transferring BTC from one account to another, try to trick it, and have fun with it!</p>
<p><img loading="lazy" decoding="async" src="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8.png" alt="Building Your First Blockchain App in Swift" width="1920" height="1080" class="aligncenter size-full wp-image-12306" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8.png 1920w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8-200x113.png 200w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8-533x300.png 533w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8-768x432.png 768w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8-1024x576.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8-1680x945.png 1680w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8-1240x698.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8-860x484.png 860w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8-680x383.png 680w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8-400x225.png 400w, https://www.appcoda.com/content/images/wordpress/2018/05/blockchain-8-50x28.png 50w" sizes="(max-width: 1920px) 100vw, 1920px"></p>
<h2>Conclusion</h2>
<p>In this tutorial, you learned how to create a blockchain in Swift and made your own Bitcoin transactions. Do note that a real cryptocurrency&#x2019;s backend would look nothing like the above, as it would need to be decentralized and rely on smart contracts; this example is for learning purposes only.</p>
<p>In this example, we used a blockchain for cryptocurrency, but can you think of any other ways blockchain could be used? Let me know in the comments below! Hope you learned something new!</p>
<p>For reference, you can <a href="https://github.com/appcoda/BlockchainDemo?ref=appcoda.com">download the full project on GitHub</a>.</p>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Working with Drag and Drop APIs in iOS]]></title><description><![CDATA[<!--kg-card-begin: html-->
<p>Welcome to the first part of the Drag and Drop series! In this tutorial, you will learn how to implement the drag and drop functionality onto a <code>UIViewController</code>. In the next part of the series, you will learn how to use the Drag and Drop APIs with <code>UITableViewControllers</code> and <code>UICollectionViewControllers</code></p>]]></description><link>https://www.appcoda.com/drag-and-drop/</link><guid isPermaLink="false">66612a0f166d3c03cf011465</guid><category><![CDATA[iOS Programming]]></category><category><![CDATA[Swift]]></category><category><![CDATA[UIKit]]></category><dc:creator><![CDATA[Sai Kambampati]]></dc:creator><pubDate>Thu, 03 May 2018 09:37:50 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2018/05/taras-shypka-424932-unsplash.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<img src="https://www.appcoda.com/content/images/wordpress/2018/05/taras-shypka-424932-unsplash.jpg" alt="Working with Drag and Drop APIs in iOS"><p>Welcome to the first part of the Drag and Drop series! In this tutorial, you will learn how to implement the drag and drop functionality in a <code>UIViewController</code>. In the next part of the series, you will learn how to use the Drag and Drop APIs with <code>UITableViewControllers</code> and <code>UICollectionViewControllers</code>.</p>
<p>One of the most anticipated releases of iOS 11 was the announcement of several new Drag and Drop APIs. So for those of you who aren&#x2019;t familiar with Drag and Drop, it is a way to graphically move or copy data from one application to another or sometimes within the same application.</p>
<p>There are many places where you can implement Drag and Drop in your apps, and while there are numerous APIs for different scenarios, they are really easy to implement. I&#x2019;ll be teaching you how to add Drag and Drop to your apps, specifically within a <code>UIViewController</code>. Let&#x2019;s get started!</p>
<div class="alert green"><strong>Note:</strong> These Drag and Drop APIs require Swift 4 and iOS 11, so make sure you are running Xcode 9 or above.</div>
<h2>Introduction to Drag and Drop</h2>
<p>As mentioned earlier, Drag and Drop is a graphical way to move or copy data between two applications. Here&#x2019;s some quick terminology:</p>
<ol>
<li><strong>Source App</strong>: The app from which the item or information is dragged (copied)</li>
<li><strong>Destination App</strong>: The app onto which the item or information is dropped (pasted)</li>
<li><strong>Drag Activity</strong>: The dragging action, from start to finish.</li>
<li><strong>Drag Session</strong>: The items being dragged, managed by the system throughout the drag activity</li>
</ol>
<div class="alert gray"><strong>Note</strong>: Dragging and Dropping between apps is supported only on iPads. For iPhone, the source app is the same as the destination app.</div>
<p>A really neat fact about Drag and Drop is that while you are dragging items from one app to another, the source and destination apps continue processing as usual; the drag runs concurrently with normal app operation. You can keep operating an app with another finger or even start a new drag activity.</p>
<p>Also, there is no need to incorporate these APIs into <code>UITextView</code> or <code>UITextField</code>, as they automatically support drag and drop. You can configure <code>UICollectionView</code>, <code>UITableView</code>, and almost any other view to support Drag and Drop.</p>
<p>For this tutorial, we&#x2019;ll focus on adding Drag and Drop into <code>UIViewControllers</code>. Let&#x2019;s dive into the implementation.</p>
<h2>How to Implement Dropping</h2>
<p>First, download the starter project over <a href="https://github.com/appcoda/Drag-and-Drop-Demo/raw/master/StarterProject.zip?ref=appcoda.com">here</a>. As you can see, we have a simple <code>ViewController</code> with two <code>UIImageViews</code> that are already linked to the code. These two image views are designed for you to drop an image. Now all we have to do is start coding!</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12151" src="https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard.png" alt="Working with Drag and Drop APIs in iOS" width="1280" height="419" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard.png 1280w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard-200x65.png 200w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard-600x196.png 600w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard-768x251.png 768w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard-1024x335.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard-1240x406.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard-860x282.png 860w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard-680x223.png 680w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard-400x131.png 400w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-storyboard-50x16.png 50w" sizes="(max-width: 1280px) 100vw, 1280px"></p>
<p>Let&#x2019;s start with integrating the Dropping APIs. We can now implement the API in several short steps. Our first step is to add the <code>UIDropInteractionDelegate</code> to the ViewController. So adjust the class to look like this:</p>
<pre lang="swift">class ViewController: UIViewController, UIDropInteractionDelegate {

    @IBOutlet weak var firstImageView: UIImageView!
    @IBOutlet weak var secondImageView: UIImageView!    
        
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }
    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
</pre>
<p>Next, we have to add an interaction recognizer to our view. This can be implemented in our <code>viewDidLoad</code> method. After <code>super.viewDidLoad()</code>, type the following line:</p>
<pre class="swift">view.addInteraction(UIDropInteraction(delegate: self))
</pre>
<p>Now, to make our class conform to the <code>UIDropInteractionDelegate</code>, we have to implement 3 methods. I will explain each one to you.</p>
<ol>
<li><code>func dropInteraction(_ interaction: UIDropInteraction, performDrop session: UIDropSession)</code><br>
This method tells the delegate it can request the item provider data from the session&#x2019;s drag items.</li>
<li><code>func dropInteraction(_ interaction: UIDropInteraction, sessionDidUpdate session: UIDropSession) -&gt; UIDropProposal</code><br>
This method tells the delegate that the drop session has changed. In our case, we want to copy the items if the session has updated.</li>
<li><code>func dropInteraction(_ interaction: UIDropInteraction, canHandle session: UIDropSession) -&gt; Bool</code><br>
This method checks whether the view can handle the session&#x2019;s drag items. In our scenario, we want the view to accept images as drag items.</li>
</ol>
<p>Let&#x2019;s first implement the second and third methods like this:</p>
<pre class="swift">func dropInteraction(_ interaction: UIDropInteraction, sessionDidUpdate session: UIDropSession) -&gt; UIDropProposal {
    return UIDropProposal(operation: .copy)
}

func dropInteraction(_ interaction: UIDropInteraction, canHandle session: UIDropSession) -&gt; Bool {
    return session.canLoadObjects(ofClass: UIImage.self)
}
</pre>
<p>In the <code>dropInteraction(_:sessionDidUpdate:)</code> method, we return a <code>UIDropProposal</code> object and specify it&#x2019;s a copy operation. You must return a <code>UIDropProposal</code> object if a view&#x2019;s drop interaction delegate accepts dropped items. For the <code>dropInteraction(_:canHandle:)</code> method, the return value of the implementation indicates whether the drop is accepted. In the code above, we only accept drop activities that contain images.</p>
<p>We are almost done. We now have to add the code that will allow the app to perform the drop onto the view. Copy and paste the code below:</p>
<pre lang="swift">func dropInteraction(_ interaction: UIDropInteraction, performDrop session: UIDropSession) {
    // 1
    for dragItem in session.items {
        // 2
        dragItem.itemProvider.loadObject(ofClass: UIImage.self, completionHandler: { object, error in
            // 3
            guard error == nil else { return print(&quot;Failed to load our dragged item&quot;) }
            guard let draggedImage = object as? UIImage else { return }
            // 4
            DispatchQueue.main.async {
                let centerPoint = session.location(in: self.view)
                //5
                if session.location(in: self.view).y &lt;= self.firstImageView.frame.maxY {
                    self.firstImageView.image = draggedImage
                    self.firstImageView.center = centerPoint
                } else {
                    self.secondImageView.image = draggedImage
                    self.secondImageView.center = centerPoint
                }
            }
        })
    }
}
</pre>
<ol>
<li>Sometimes, when we drag an image onto our board, we may end up dragging more than one image. The session therefore exposes an array, <code>session.items</code>, and we run the following code for each <code>dragItem</code>.</li>
<li>We ask each drag item&#x2019;s item provider to load its object as a <code>UIImage</code>.</li>
<li>In case of an error, we use a <code>guard</code> statement to handle the error. If an error exists (say, the item doesn&#x2019;t conform to the <code>UIImage</code> class), then we print an error message.</li>
<li>If there is no error, we set the image of <code>imageView</code> to <code>draggedImage</code>.</li>
<li>We determine where the finger is placed to decide whether we set the image on the first <code>imageView</code> or the second <code>imageView</code>. Notice that the center of the image lands exactly where our finger is.</li>
</ol>
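<p>The geometry in step 5 boils down to a single comparison, which can be pulled out as a standalone function. The helper below is a hypothetical illustration of that rule, not code from the project:</p>

```swift
import Foundation

// A drop point whose y-coordinate is at or above the first image view's
// bottom edge (frame.maxY) targets the first view (index 0);
// anything below that edge targets the second view (index 1).
func targetViewIndex(dropY: CGFloat, firstViewMaxY: CGFloat) -> Int {
    return dropY <= firstViewMaxY ? 0 : 1
}
```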
<p>Now, we are done! Let&#x2019;s run the app and check to see how well it works! But, first make sure your code looks something like this:</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12141" src="https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2.png" alt="Working with Drag and Drop APIs in iOS" width="1440" height="803" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2.png 1440w, https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2-200x112.png 200w, https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2-538x300.png 538w, https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2-768x428.png 768w, https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2-1024x571.png 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2-1240x691.png 1240w, https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2-860x480.png 860w, https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2-680x379.png 680w, https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2-400x223.png 400w, https://www.appcoda.com/content/images/wordpress/2018/05/dropping-api-2-50x28.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<div class="alert gray"><strong>Editor&#x2019;s note:</strong> If you don&#x2019;t know how to use drag and drop in iOS 11, you can refer to <a href="https://www.macworld.co.uk/how-to/ipad/how-drag-drop-in-ios-11-3660811/?ref=appcoda.com">this guide</a>.</div>
<p>Dragging and Dropping between apps is supported only on iPads. So if you want to drag an image from Safari or Photos and drop it onto the view, you will need to run the sample app on an iPad. iPhone only supports dragging and dropping within the same app. On a simulator, it may take some time for the image to copy itself onto the board.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12150" src="https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo.jpg" alt="Working with Drag and Drop APIs in iOS" width="1024" height="1365" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-200x267.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-225x300.jpg 225w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-768x1024.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-860x1146.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-680x906.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-400x533.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-50x67.jpg 50w" sizes="(max-width: 1024px) 100vw, 1024px"></p>
<p>If you followed along correctly, your app should run as expected! But what if you made a mistake and want to move the image in the first image view into the second? Well, it&#x2019;s quite simple! Now, we have to implement the dragging API.</p>
<h2>Dragging an item</h2>
<p>Now, say that the image we had on our board was a mistake and we wanted to move it elsewhere. This would require us to drag the image to another location. To achieve this, we need to implement the dragging APIs. Let&#x2019;s see how it&#x2019;s done!</p>
<p>First, let&#x2019;s add the <code>UIDragInteractionDelegate</code>. To do this, we simply have to add <code>UIDragInteractionDelegate</code> to the list of protocols in our class:</p>
<pre lang="swift">class ViewController: UIViewController, UIDropInteractionDelegate, UIDragInteractionDelegate
</pre>
<p>Now you might get an error and that&#x2019;s because we have not implemented the required protocol stubs in our code. Unlike the <code>UIDropInteractionDelegate</code> where we needed 3 methods to adhere to the protocol, we need only one in this case. Add this function at the end of the code:</p>
<pre lang="swift">func dragInteraction(_ interaction: UIDragInteraction, itemsForBeginning session: UIDragSession) -&gt; [UIDragItem] {
}
</pre>
<p>This method provides the items to drag when a drag session begins; it is where we detect the type of object being dragged and decide how to handle it.</p>
<p>Before we implement this method, we need to modify our <code>viewDidLoad</code> code slightly. Since we&#x2019;ll be touching the image views, we need to enable the <code>isUserInteractionEnabled</code> property for both image views. Modify the <code>viewDidLoad</code> method to look like this:</p>
<pre lang="swift">override func viewDidLoad() {
    super.viewDidLoad()
    view.addInteraction(UIDropInteraction(delegate: self))
    view.addInteraction(UIDragInteraction(delegate: self))
    firstImageView.isUserInteractionEnabled = true
    secondImageView.isUserInteractionEnabled = true
}
</pre>
<p>Now, we&#x2019;re almost done. Modify your <code>dragInteraction(_:itemsForBeginning:)</code> function to this:</p>
<pre lang="swift">func dragInteraction(_ interaction: UIDragInteraction, itemsForBeginning session: UIDragSession) -&gt; [UIDragItem] {
    if session.location(in: self.view).y &lt;= self.firstImageView.frame.maxY {
        guard let image = firstImageView.image else { return [] }
        let provider = NSItemProvider(object: image)
        let item = UIDragItem(itemProvider: provider)
        return [item]
    } else {
        guard let image = secondImageView.image else { return [] }
        let provider = NSItemProvider(object: image)
        let item = UIDragItem(itemProvider: provider)
        return [item]
    }
}
</pre>
<p>And that&#x2019;s all! Run your code and see if you can drag and drop between images from your Photos albums or the web. Copy them from the first image view to the second image view! It&#x2019;s a drag fest! Check out the result below!</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-12156" src="https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-2.jpg" alt="Working with Drag and Drop APIs in iOS" width="1024" height="1365" srcset="https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-2.jpg 1024w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-2-200x267.jpg 200w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-2-225x300.jpg 225w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-2-768x1024.jpg 768w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-2-860x1146.jpg 860w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-2-680x906.jpg 680w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-2-400x533.jpg 400w, https://www.appcoda.com/content/images/wordpress/2018/05/drag-and-drop-demo-2-50x67.jpg 50w" sizes="(max-width: 1024px) 100vw, 1024px"></p>
<h2>What&#x2019;s Next</h2>
<p>As you can see, it&#x2019;s quite simple to add Drag and Drop to your apps, unlocking a whole new set of powerful APIs. In this tutorial, you learned how to drag and drop images, but this can be applied to text as well! What&#x2019;s more, drag and drop in views such as <code>UITableView</code> and <code>UICollectionView</code> can provide a really seamless experience for your users.</p>
<p>To download the complete project, you can do so from the Github repository <a href="https://github.com/appcoda/Drag-and-Drop-Demo?ref=appcoda.com">here</a>.</p>
<p>To learn more about Drag and Drop, I recommend checking out some of these videos from WWDC 2017!</p>
<ol>
<li><a href="https://developer.apple.com/videos/play/wwdc2017/203/?ref=appcoda.com">Introducing Drag and Drop</a></li>
<li><a href="https://developer.apple.com/videos/play/wwdc2017/213/?ref=appcoda.com">Mastering Drag and Drop</a></li>
<li><a href="https://developer.apple.com/videos/play/wwdc2017/223/?ref=appcoda.com">Drag and Drop with Collection and Table View</a></li>
<li><a href="https://developer.apple.com/videos/play/wwdc2017/227/?ref=appcoda.com">Data Delivery with Drag and Drop</a></li>
</ol>
<p>Finally, here is Apple&#x2019;s Official Documentation on <a href="https://developer.apple.com/documentation/appkit/drag_and_drop?ref=appcoda.com">Drag and Drop</a>.</p>
<p>Let me know what you think of the tutorial and whether or not you would like to see a series on Drag and Drop!</p>
<div class="alert gray"><strong>Editor&#x2019;s note</strong>: If you want to learn more about Drag and Drop API, here is <a href="https://www.appcoda.com/drag-and-drop-api/">a new tutorial</a> that shows you how to build a Trello-like application using the API.</div>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Introduction to Natural Language Processing in Swift]]></title><description><![CDATA[<!--kg-card-begin: html-->
<p>There are several underused and not-so-popular frameworks hidden in the iOS SDK. Some of them can be useful and time-saving tools. The natural language processing API is one of them. Available in both Swift and Objective-C, the <code>NSLinguisticTagger</code> class is used to analyze natural language text to tag parts of speech</p>
<!--kg-card-begin: html-->
<img src="https://www.appcoda.com/content/images/wordpress/2017/12/freestocks-org-65291.jpg" alt="Introduction to Natural Language Processing in Swift"><p>There are several underused and not-so-popular frameworks hidden in the iOS SDK. Some of them can be useful and time-saving tools. The natural language processing API is one of them. Available in both Swift and Objective-C, the <code>NSLinguisticTagger</code> class is used to analyze natural language text: it can tag parts of speech and lexical classes, identify names, perform lemmatization, and determine the language and script. As a result, it is used extensively in <a href="https://www.appcoda.com/tag/coreml/">machine learning</a> programs. What does this really mean? Well, that&#x2019;s what you&#x2019;ll find out!</p>
<div class="alert green"><strong>Note:</strong> This tutorial is developed in Swift 4 and has been tested on Xcode 9.2.</div>
<p>To begin, let&#x2019;s go to Xcode and create a new playground. Name the playground whatever you want and set the platform to <code>macOS</code>. Once the playground is created, select everything and delete it so you have a clean slate to work on. At the top of the playground, add the following import:</p>
<pre lang="swift">import Foundation
</pre>
<p>To experiment with the new NLP API, let&#x2019;s choose a big paragraph to mess around with. Here&#x2019;s the block of text we&#x2019;ll have our code analyze.</p>
<pre lang="swift">let quote = &quot;Here&apos;s to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They&apos;re not fond of rules. And they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. About the only thing you can&apos;t do is ignore them. Because they change things. They push the human race forward. And while some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do. - Steve Jobs (Founder of Apple Inc.)&quot;
</pre>
<p>The very first thing we need to do is create a tagger. In Natural Language Processing, a tagger is basically a piece of software that reads text and &#x201C;tags&#x201D; it with various information, such as parts of speech; it can also recognize names and languages, perform lemmatization, and more. We do this by using the <code>NSLinguisticTagger</code> class. In the Playground file, insert the following lines of code:</p>
<pre lang="swift">let tagger = NSLinguisticTagger(tagSchemes:[.tokenType, .language, .lexicalClass, .nameType, .lemma], options: 0) 
let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace, .joinNames]
</pre>
<p>What are these tag schemes? Well, basically tag schemes are the constants used to identify the pieces of information we want from the text. The tag schemes we ask the tagger to look for are the token type, language, lexical class, name type, and lemma. We&#x2019;ll be using these tag schemes in the rest of the tutorial. Here&#x2019;s what each one is:</p>
<ol>
<li><a href="https://developer.apple.com/documentation/foundation/nslinguistictagscheme/1411898-tokentype?ref=appcoda.com">Token Type</a>: A property which classifies each token as a word, punctuation, or whitespace.</li>
<li><a href="https://developer.apple.com/documentation/foundation/nslinguistictagscheme/1408597-language?ref=appcoda.com">Language</a>: Determines the language of the token</li>
<li><a href="https://developer.apple.com/documentation/foundation/nslinguistictagscheme/1415311-lexicalclass?ref=appcoda.com">Lexical Class</a>: A property which classifies each token according to its class. For example, it&#x2019;ll determine the part of speech for a word, the type of punctuation for a punctuation, or the type of whitespace for a whitespace.</li>
<li><a href="https://developer.apple.com/documentation/foundation/nslinguistictagscheme/1415135-nametype?ref=appcoda.com">Name Type</a>: This property looks for tokens which are part of a named entity. It&#x2019;ll look for a personal name, an organizational name, and a place name.</li>
<li><a href="https://developer.apple.com/documentation/foundation/nslinguistictagscheme/1416890-lemma?ref=appcoda.com">Lemma</a>: This basically returns the stem of a word token. I&#x2019;ll be going into more detail about this later on.</li>
</ol>
<p>The <code>options</code> constant tells the API how to split up the text. Here, we ask the analyzer to ignore any punctuation and any whitespace, and to join the tokens of a named entity together.</p>
<p>With the initial setup, now we are ready to begin writing code using NLP in Swift! Before we continue to add any code, please make sure your code looks something like this.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-11577 size-full" src="https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1.png" alt="Introduction to Natural Language Processing in Swift" width="2854" height="502" srcset="https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1.png 2854w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1-200x35.png 200w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1-600x106.png 600w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1-768x135.png 768w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1-1024x180.png 1024w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1-1680x296.png 1680w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1-1240x218.png 1240w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1-860x151.png 860w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1-680x120.png 680w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1-400x70.png 400w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-1-50x9.png 50w" sizes="(max-width: 2854px) 100vw, 2854px"></p>
<h2>Language Identification</h2>
<p>So now, let&#x2019;s begin by identifying what language this text is in. Obviously, we know that it&#x2019;s in English but our computer doesn&#x2019;t know that. Let&#x2019;s create a function to determine the language:</p>
<pre lang="swift">func determineLanguage(for text: String) {
    tagger.string = text
    let language = tagger.dominantLanguage
    print(&quot;The language is \(language!)&quot;)
}
</pre>
<p>This code should be fairly simple to understand, but if it isn&#x2019;t, don&#x2019;t worry; I&#x2019;ll break it down for you. We assign the string a user inputs to the tagger, define a constant <code>language</code> as the dominant language of that string, and print it.</p>
<div class="alert gray"><strong>Note:</strong> The dominant language is the most frequently occurring language in the string. If you had a sentence which had a mix of words from English, French, and Spanish, it would choose the most common language.</div>
<p>Now let&#x2019;s call the function with <code>determineLanguage(for: quote)</code>. You should get an output which reads:</p>
<pre class="sh">The language is en
</pre>
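<p>By the way, if all you need is the language, <code>NSLinguisticTagger</code> also provides a one-shot class method, so you don&#x2019;t even have to configure a tagger instance. Here&#x2019;s a quick sketch (the French sentence is just sample input of mine):</p>
<pre lang="swift">// One-shot language detection without setting up a tagger instance.
// The result is an ISO language code such as &quot;fr&quot;.
if let language = NSLinguisticTagger.dominantLanguage(for: &quot;Bonjour tout le monde&quot;) {
    print(&quot;The language is \(language)&quot;)
}
</pre>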
<h2>Tokenization</h2>
<p>The next step in parsing text is tokenization. Tokenization is the process of splitting sentences, paragraphs, or documents into smaller units of your choice, such as sentences or words. In this scenario, we&#x2019;ll be splitting the quote above into words. As before, let&#x2019;s create a function:</p>
<pre lang="swift">func tokenizeText(for text: String) {
    tagger.string = text
    let range = NSRange(location: 0, length: text.utf16.count)
    tagger.enumerateTags(in: range, unit: .word, scheme: .tokenType, options: options) { tag, tokenRange, stop in
        let word = (text as NSString).substring(with: tokenRange)
        print(word)
    }
}
</pre>
<p>Let&#x2019;s break down the code. Similar to what we&#x2019;ve done earlier, we set the text a user inputs to be the tagger&#x2019;s string. Next, we define a constant <code>range</code> covering the characters the API should tokenize. After that, we call the <code>tagger.enumerateTags</code> function to tokenize: we pass in the range, set the unit to <code>.word</code>, choose the <code>.tokenType</code> linguistic tag scheme, and hand over the <code>options</code> constant we made earlier (i.e. what to ignore and what to join).</p>
<p>Upon every word the function tokenizes, we ask the function to print the word to the console. Now insert the following line of code to call the function:</p>
<pre class="swift">tokenizeText(for: quote)
</pre>
<p>You should get a <strong>long</strong> list of all the words looking something like &#x201C;Here, &#x2018;s, to, the, &#x2026; , Founder, of, Apple Inc.&#x201D;</p>
<pre class="sh">Here
&apos;s
to
the
crazy
ones
The
misfits
The
rebels
The
troublemakers
The
round
pegs
in
the
square
holes
The
ones
who
see
things
differently
They
&apos;re
not
fond
of
rules
And
they
have
no
respect
for
the
status
quo
You
can
quote
them
disagree
with
them
glorify
or
vilify
them
About
the
only
thing
you
ca
n&apos;t
do
is
ignore
them
Because
they
change
things
They
push
the
human
race
forward
And
while
some
may
see
them
as
the
crazy
ones
we
see
genius
Because
the
people
who
are
crazy
enough
to
think
they
can
change
the
world
are
the
ones
who
do
Steve Jobs
Founder
of
Apple Inc.
</pre>
<div class="alert gray"><strong>Note</strong>: See how Apple Inc. was joined together. This is because the API recognized it&#x2019;s a named entity and we told it earlier in our <code>options</code> constant to join names together. Pretty cool, right?</div>
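<p>For reference, an <code>options</code> constant with this behavior looks something like the sketch below; your definition from earlier may differ slightly, but <code>.joinNames</code> is the flag that merges multi-word names into one token:</p>
<pre lang="swift">// One possible definition: skip punctuation and whitespace while
// tokenizing, and join multi-word names (e.g. &quot;Apple Inc.&quot;) together.
let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace, .joinNames]
</pre>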
<h2>Lemmatization</h2>
<p>Now that we have identified the language and dove in a little deeper by splitting up the quote into words, let&#x2019;s go even deeper by reducing each word to its base form. This is called <strong>Lemmatization</strong>. Take the word <em>run</em> for example. Running, ran, and runs are all forms of it. Since a word can appear in many forms, lemmatization breaks each one down into its most basic form, the lemma.</p>
<p>Let&#x2019;s implement the following function to lemmatize the words.</p>
<pre lang="swift">func lemmatization(for text: String) {
    tagger.string = text
    let range = NSRange(location:0, length: text.utf16.count)
    tagger.enumerateTags(in: range, unit: .word, scheme: .lemma, options: options) { tag, tokenRange, stop in
        if let lemma = tag?.rawValue {
            print(lemma)
        }
    }
}
</pre>
<p>This block of code is about 95% similar to our <code>tokenizeText</code> function. Instead of the <code>.tokenType</code> scheme, we use the <code>.lemma</code> scheme. Then, since the raw value of the tag is the lemma of the word, we print that lemma for every word. Now invoke the function and take a look:</p>
<pre class="swift">lemmatization(for: quote)
</pre>
<p>The list will look pretty similar to the list you got after tokenizing the quote, but there are a couple of differences. For example, notice how misfits, rebels, and troublemakers have all been returned in their singular form. In the phrase &#x201C;They are not fond of&#x2026;&#x201D;, see how <em>are</em> is returned to the console as <em>be</em>.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-11582" src="https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4.png" alt="Introduction to Natural Language Processing in Swift" width="1440" height="759" srcset="https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4.png 1440w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4-200x105.png 200w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4-569x300.png 569w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4-768x405.png 768w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4-1024x540.png 1024w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4-1240x654.png 1240w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4-860x453.png 860w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4-680x358.png 680w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4-400x211.png 400w, https://www.appcoda.com/content/images/wordpress/2017/12/nlp-playground-4-50x26.png 50w" sizes="(max-width: 1440px) 100vw, 1440px"></p>
<h2>Parts of Speech</h2>
<p>Diving a little deeper, let&#x2019;s take every word in the quote and identify its part of speech.</p>
<pre lang="swift">func partsOfSpeech(for text: String) {
    tagger.string = text
    let range = NSRange(location: 0, length: text.utf16.count)
    tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass, options: options) { tag, tokenRange, _ in
        if let tag = tag {
            let word = (text as NSString).substring(with: tokenRange)
            print(&quot;\(word): \(tag.rawValue)&quot;)
        }
    }
}
</pre>
<p>By now, the code should look really familiar. It&#x2019;s the same as our <code>tokenizeText</code> function; the only key difference is that we change the scheme to <code>.lexicalClass</code>.</p>
<pre lang="swift">partsOfSpeech(for: quote)
</pre>
<p>The console returns each word and its corresponding part of speech. You can see the verbs, nouns, prepositions, adjectives, etc. Here are some of the results:</p>
<pre class="sh">The: Determiner
troublemakers: Noun
The: Determiner
round: Noun
pegs: Noun
in: Preposition
the: Determiner
square: Adjective
holes: Noun
The: Determiner
ones: Noun
who: Pronoun
see: Verb
</pre>
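<p>To see why this is useful, here&#x2019;s a small sketch of a keyword extractor that keeps only the nouns. The helper name <code>keywords(in:)</code> is my own; it reuses the same tagger and <code>options</code> as before:</p>
<pre lang="swift">// Hypothetical helper: collect only the nouns from a piece of text,
// a crude but effective form of keyword extraction.
func keywords(in text: String) -&gt; [String] {
    var nouns: [String] = []
    tagger.string = text
    let range = NSRange(location: 0, length: text.utf16.count)
    tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass, options: options) { tag, tokenRange, _ in
        if tag == .noun {
            nouns.append((text as NSString).substring(with: tokenRange))
        }
    }
    return nouns
}
</pre>
<p>Calling <code>keywords(in: quote)</code> would hand you back words like <em>misfits</em>, <em>rebels</em>, and <em>troublemakers</em>.</p>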
<h2>Named Entity Recognition</h2>
<p>Finally, let&#x2019;s see if the tagger can recognize any names, organizations, or places in the quote. Here&#x2019;s the function below:</p>
<pre lang="swift">func namedEntityRecognition(for text: String) {
    tagger.string = text
    let range = NSRange(location: 0, length: text.utf16.count)
    let tags: [NSLinguisticTag] = [.personalName, .placeName, .organizationName]
    tagger.enumerateTags(in: range, unit: .word, scheme: .nameType, options: options) { tag, tokenRange, stop in
        if let tag = tag, tags.contains(tag) {
            let name = (text as NSString).substring(with: tokenRange)
            print(&quot;\(name): \(tag.rawValue)&quot;)
        }
    }
}

namedEntityRecognition(for: quote)
</pre>
<p>Notice how there&#x2019;s one extra line of code? The <code>tags</code> constant lists the tags we want our tagger to be on the lookout for: personal names, place names, and organization names. Then we change the scheme to <code>.nameType</code>, and the rest should be straightforward.</p>
<div class="alert gray"><strong>Note:</strong> You&#x2019;re probably wondering why it&#x2019;s important to search for named entities in the text. This is because it can lend a lot of insight into the <em>context</em> of the text.</div>
<p>As you probably expected, the function returns Steve Jobs as a Personal Name and Apple Inc. as an Organization Name.</p>
<pre class="sh">Steve Jobs: PersonalName
Apple Inc.: OrganizationName
</pre>
<h2>What&#x2019;s Next</h2>
<p>Hopefully by now you know how to use NLP in Swift. However, you&#x2019;re probably wondering how exactly you can use these techniques in your apps. One way to put the NLP API to work is in search. Suppose you had a photos app with captions for each photo, and one caption reads &#x201C;Hike on Mt. Shasta&#x201D;. Your users probably expect the same photo to show up when they search for hike, hiking, or hikes. This can be implemented through lemmatization, and it would definitely improve the user experience of your app.</p>
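<p>Here&#x2019;s a rough sketch of that idea. The helper names are hypothetical; the approach is simply to lemmatize both the caption and the search query, then match on any shared base form:</p>
<pre lang="swift">// Hypothetical sketch: lemmatize a string into a set of base forms,
// then treat a caption as a match when it shares a lemma with the query.
func lemmas(of text: String) -&gt; Set&lt;String&gt; {
    var result = Set&lt;String&gt;()
    tagger.string = text
    let range = NSRange(location: 0, length: text.utf16.count)
    tagger.enumerateTags(in: range, unit: .word, scheme: .lemma, options: options) { tag, _, _ in
        if let lemma = tag?.rawValue {
            result.insert(lemma.lowercased())
        }
    }
    return result
}

func caption(_ caption: String, matches query: String) -&gt; Bool {
    return !lemmas(of: caption).isDisjoint(with: lemmas(of: query))
}
</pre>
<p>With that in place, a search for &#x201C;hiking&#x201D; and a caption containing &#x201C;hike&#x201D; both reduce to the lemma <em>hike</em>, so the photo matches.</p>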
<p>For reference, you can refer to the <a href="https://github.com/appcoda/NaturalLanguageProcessing?ref=appcoda.com">complete Xcode playground on GitHub</a>.</p>
<p>To learn more about NLP in Swift, you can check out Apple&#x2019;s WWDC 2017 video <a href="https://developer.apple.com/videos/play/wwdc2017/208/?ref=appcoda.com">here</a>.</p>
<p>For more details about the <code>NSLinguisticTagger</code> class, you can refer to the <a href="https://developer.apple.com/documentation/foundation/nslinguistictagger?ref=appcoda.com">official documentation here</a>.</p>

<!--kg-card-end: html-->
]]></content:encoded></item></channel></rss>