SwiftUI

Building an AI Image Recognition App Using Google Gemini and SwiftUI


Previously, we provided a brief introduction to the Google Gemini APIs and demonstrated how to build a Q&A application using SwiftUI. As you saw, it is straightforward to integrate Google Gemini and enhance your apps with AI features. We also built a demo application to show how to construct a chatbot app using the AI APIs.

The gemini-pro model discussed in the previous tutorial is limited to generating text from text-based input. However, Google Gemini also offers a multimodal model called gemini-pro-vision, which can generate text descriptions from images. In other words, this model can detect and describe the objects in an image.

In this tutorial, we will demonstrate how to use Google Gemini APIs for image recognition. This simple app allows users to select an image from their photo library and uses Gemini to describe the contents of the photo.

[Image: google-gemini-image-recognition-demo]

Before proceeding with this tutorial, please visit Google AI Studio and create your own API key if you haven’t done so already.

Adding Google Generative AI Package in Xcode Projects

Assuming you’ve already created an app project in Xcode, the first step to using Gemini APIs is importing the SDK. To accomplish this, right-click on the project folder in the project navigator and select Add Package Dependencies. In the dialog box, input the following package URL:
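At the time of writing, the Swift SDK is hosted in Google's generative-ai-swift repository; verify the URL against Google's current documentation before adding it:

```
https://github.com/google/generative-ai-swift
```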

You can then click on the Add Package button to download and incorporate the GoogleGenerativeAI package into the project.

Next, to store the API key, create a property list file named GeneratedAI-Info.plist. In this file, add a key named API_KEY and enter your API key as its value.

[Image: Xcode-google-gemini-apikey]

To read the API key from the property file, create another Swift file named APIKey.swift. Add the following code to this file:
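A minimal sketch of APIKey.swift, assuming the GeneratedAI-Info.plist file created above with an API_KEY entry; the type and property names are illustrative:

```swift
import Foundation

enum APIKey {
    /// Reads the API key from the GeneratedAI-Info.plist file in the app bundle.
    static var `default`: String {
        guard let filePath = Bundle.main.path(forResource: "GeneratedAI-Info", ofType: "plist") else {
            fatalError("Couldn't find file 'GeneratedAI-Info.plist'.")
        }

        let plist = NSDictionary(contentsOfFile: filePath)
        guard let value = plist?.object(forKey: "API_KEY") as? String else {
            fatalError("Couldn't find key 'API_KEY' in 'GeneratedAI-Info.plist'.")
        }

        return value
    }
}
```

Failing fast with `fatalError` is a deliberate choice here: a missing API key is a developer configuration mistake, so crashing early during development is clearer than silently returning an empty string.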

Building the App UI

[Image: ai-image-recognition-app-ui]

The user interface is straightforward. It features a button at the bottom of the screen, allowing users to access the built-in Photo library. After a photo is selected, it appears in the image view.

To bring up the built-in Photos library, we use PhotosPicker, which is a native photo picker view for managing photo selections. When presenting the PhotosPicker view, it showcases the photo album in a separate sheet, rendered atop your app’s interface.

First, you need to import the PhotosUI framework in order to use the photo picker view:
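Add the import at the top of ContentView.swift:

```swift
import PhotosUI
```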

Next, update the ContentView struct like this to implement the user interface:
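A sketch of the updated ContentView under these assumptions: the layout, button label, and system image are illustrative, and the photo loading uses the completion-handler variant of loadTransferable discussed below.

```swift
import SwiftUI
import PhotosUI

struct ContentView: View {
    // The item picked in the photo library and the loaded image
    @State private var selectedItem: PhotosPickerItem?
    @State private var selectedImage: Image?

    var body: some View {
        VStack {
            if let selectedImage {
                selectedImage
                    .resizable()
                    .scaledToFit()
            }

            Spacer()

            // Brings up the built-in Photos library in a separate sheet
            PhotosPicker(selection: $selectedItem, matching: .images) {
                Label("Select Photo", systemImage: "photo")
            }
        }
        .padding()
        .onChange(of: selectedItem) { oldItem, newItem in
            // Load the chosen asset and store it in selectedImage
            newItem?.loadTransferable(type: Image.self) { result in
                DispatchQueue.main.async {
                    if case .success(let image?) = result {
                        selectedImage = image
                    }
                }
            }
        }
    }
}
```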

To use the PhotosPicker view, we declare a state variable to store the photo selection and then instantiate a PhotosPicker view by passing the binding to the state variable. The matching parameter allows you to specify the asset type to display.

When a photo is selected, the photo picker automatically closes, storing the chosen photo in the selectedItem variable of type PhotosPickerItem. The loadTransferable(type:completionHandler:) method can be used to load the image. By attaching the onChange modifier, you can monitor updates to the selectedItem variable. If there is a change, we invoke the loadTransferable method to load the asset data and save the image to the selectedImage variable.

Because selectedImage is a state variable, SwiftUI automatically detects when its content changes and displays the image on the screen.

Image Analysis and Object Recognition

Having selected an image, the next step is to use the Gemini APIs to perform image analysis and generate a text description from the image.

Before using the APIs, insert the following statement at the very beginning of ContentView.swift to import the framework:
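The import statement:

```swift
import GoogleGenerativeAI
```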

Next, declare a model property to hold the AI model:
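A sketch of the model property, assuming the APIKey helper created earlier for reading the key from the property list:

```swift
// The multimodal Gemini model used for image analysis
let model = GenerativeModel(name: "gemini-pro-vision", apiKey: APIKey.default)
```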

For image analysis, we utilize the gemini-pro-vision model provided by Google Gemini. Then, we declare two state variables: one for storing the generated text and another for tracking the analysis status.
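The two state variables might look like this; the names match those used in the rest of the tutorial:

```swift
// The text description returned by Gemini
@State private var analyzedResult: String?

// True while a request to Gemini is in flight
@State private var isAnalyzing = false
```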

Next, create a new function named analyze() to perform image analysis:
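A sketch of analyze(), assuming the selectedImage, analyzedResult, and isAnalyzing properties defined earlier; the exact prompt wording is illustrative:

```swift
@MainActor
func analyze() async {
    guard let selectedImage else { return }

    isAnalyzing = true
    analyzedResult = nil
    defer { isAnalyzing = false }

    // Convert the SwiftUI image view into a UIImage for the Gemini API
    guard let uiImage = ImageRenderer(content: selectedImage).uiImage else {
        return
    }

    do {
        // Ask Gemini to describe the image and identify the objects in it
        let response = try await model.generateContent("Describe the image and identify the objects in it.", uiImage)
        analyzedResult = response.text
    } catch {
        analyzedResult = error.localizedDescription
    }
}
```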

Before calling the model’s API, we need to convert the image view into a UIImage. We then invoke the generateContent method with the image and a predefined prompt, asking Google Gemini to describe the image and identify the objects within it.

When the response arrives, we extract the text description and assign it to the analyzedResult variable.

Next, insert the following code and place it above the Spacer() view:
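The scroll view might look like this; the font and padding are illustrative:

```swift
ScrollView {
    Text(analyzedResult ?? "")
        .font(.system(.title2, design: .rounded))
        .padding()
}
```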

This scroll view displays the text generated by Gemini. Optionally, you can add an overlay modifier to the selectedImage view. This will display a progress view while an image analysis is being performed.
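One possible overlay, assuming the isAnalyzing state variable; the dimming effect is an illustrative choice:

```swift
if let selectedImage {
    selectedImage
        .resizable()
        .scaledToFit()
        .overlay {
            // Shown while Gemini is analyzing the image
            if isAnalyzing {
                ProgressView()
                    .frame(maxWidth: .infinity, maxHeight: .infinity)
                    .background(.black.opacity(0.4))
            }
        }
}
```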

After implementing all the changes, the preview pane should display the newly designed user interface, which comprises the selected image, the image description area, and a button to select photos from the photo library.

[Image: google-gemini-demo-scrollview]

Finally, insert a line of code in the onChange modifier to call the analyze() method after the selectedImage variable is updated. That’s all! You can now test the app in the preview pane. Click the Select Photo button and choose a photo from the library. The app then sends the selected photo to Google Gemini for analysis and displays the generated text in the scroll view.
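The completed onChange modifier might look like this, with the analysis kicked off in a Task once the image has loaded:

```swift
.onChange(of: selectedItem) { oldItem, newItem in
    newItem?.loadTransferable(type: Image.self) { result in
        DispatchQueue.main.async {
            if case .success(let image?) = result {
                selectedImage = image
                // Send the newly selected photo to Gemini for analysis
                Task {
                    await analyze()
                }
            }
        }
    }
}
```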

[Image: ai-image-recognition-app-result]

Summary

This tutorial demonstrated how to build an AI image recognition app using the Google Gemini APIs and SwiftUI. The app allows users to select an image from their photo library and uses Gemini to describe the contents of the photo.

From the code we have just worked on, you can see that it only requires a few lines to prompt Google Gemini to generate text from an image. Although this demo illustrates the process using a single image, the API actually supports multiple images. For further details on how it functions, please refer to the official documentation.
