In this series of articles, we'll present a mobile image-to-image translation system based on Cycle-Consistent Adversarial Networks (CycleGANs). We'll build a CycleGAN that can perform unpaired image-to-image translation, as well as show you some entertaining yet academically deep examples. We'll also discuss how such a trained network, built with TensorFlow and Keras, can be converted to TensorFlow Lite and used as an app on mobile devices.
We assume that you are familiar with the concepts of Deep Learning, as well as with Jupyter Notebooks and TensorFlow. You are welcome to download the project code.
In the previous article, we implemented a CycleGAN using TensorFlow and Keras. In this article, we’ll show you how to convert that saved model to TensorFlow Lite, as a first step to run it on Android.
What is TensorFlow Lite?
TensorFlow Lite is the lightweight version of TensorFlow. This light version allows you to run models on mobile and embedded devices with low latency, while performing tasks such as classification, regression, and so on. Currently, TensorFlow Lite supports Android and iOS via a C++ API. Additionally, the TensorFlow Lite interpreter can use the Android Neural Networks API for hardware acceleration on Android devices that support it; on devices that don't, it falls back to CPU execution. In this article, we'll focus on deploying TensorFlow Lite in an Android app.
TensorFlow Lite is not designed to train models. Therefore, the usual practice is to train models on high-power machines via TensorFlow and then convert the trained model to TensorFlow Lite (the .tflite format). The .tflite model is then loaded into an interpreter as shown in the diagram below.
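For example, after training on a high-power machine, the model is saved with Keras so the converter can pick it up later. The sketch below uses a tiny stand-in network and a hypothetical filename, not the actual generator from the previous article:

```python
import tensorflow as tf

# Stand-in for a trained generator; any tf.keras model is saved the
# same way. The architecture here is only illustrative.
inp = tf.keras.Input(shape=(256, 256, 3))
out = tf.keras.layers.Conv2D(3, 3, padding='same', activation='tanh')(inp)
generator = tf.keras.Model(inp, out)

# Save the full model (architecture + weights) in HDF5 format,
# ready for the TensorFlow Lite converter.
generator.save('cyclegan_generator.h5')
```

The saved .h5 file is what the converter script in the next section loads.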
TensorFlow Lite Converter
The TensorFlow Lite converter converts a TensorFlow model (here, model.h5) to a TensorFlow Lite (.tflite) model that can be deployed in a mobile app. How the converter is invoked depends on how the model was saved. Our CycleGAN model was saved as a Keras model, so we'll convert it to TensorFlow Lite with the following script:
import tensorflow as tf
from tensorflow import keras

model = keras.models.load_model('path/to/location')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model to disk
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
The above script results in a .tflite model, which is almost ready to run on Android.
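Before moving to Android, it's worth sanity-checking the converted model on the desktop with the Python tf.lite.Interpreter. The snippet below performs a stand-in conversion of a tiny model so it runs on its own; in the real project, you would instead pass model_path='model.tflite' to the interpreter:

```python
import tensorflow as tf

# Stand-in conversion so the snippet is self-contained; in the real
# project, load the model.tflite file written by the script above.
inp = tf.keras.Input(shape=(256, 256, 3))
out = tf.keras.layers.Conv2D(3, 3, padding='same')(inp)
tflite_model = tf.lite.TFLiteConverter.from_keras_model(
    tf.keras.Model(inp, out)).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# Each entry reports the tensor shape and dtype the mobile app must match.
for detail in interpreter.get_input_details():
    print('input :', detail['shape'], detail['dtype'])
for detail in interpreter.get_output_details():
    print('output:', detail['shape'], detail['dtype'])
```

Knowing the exact input shape and dtype up front saves a lot of debugging once the model is embedded in the Android app.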
Running a TensorFlow Lite Model
A TensorFlow Lite model executed on a device performs inference, which aims to make predictions on the input data. To perform inference with our TensorFlow Lite model, we must run it through an interpreter. The key steps of TensorFlow Lite inference are:
- Load model: First, we must load our converted .tflite model.
- Transform data: If the input data does not match the expected input size and format, we may need preprocessing such as resizing the image or changing its format.
- Run inference: We execute the converted model using the TensorFlow Lite API. This involves building an interpreter and allocating tensors, which we'll discuss in detail in the next article.
- Interpret output: Lastly, we analyze the results obtained by the model inference.
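The four steps above can be sketched on the desktop with the Python interpreter before porting them to Android. Everything here is a minimal illustration: the stand-in model, image size, and the [-1, 1] scaling are assumptions, not the project's actual preprocessing; in the real project, the interpreter would load model.tflite instead:

```python
import numpy as np
import tensorflow as tf

# 1. Load model (a stand-in conversion keeps this sketch self-contained;
#    normally: tf.lite.Interpreter(model_path='model.tflite')).
inp = tf.keras.Input(shape=(256, 256, 3))
out = tf.keras.layers.Conv2D(3, 3, padding='same', activation='tanh')(inp)
tflite_model = tf.lite.TFLiteConverter.from_keras_model(
    tf.keras.Model(inp, out)).convert()
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# 2. Transform data: resize to the expected input and scale to [-1, 1].
image = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)
image = tf.image.resize(image, input_details['shape'][1:3]).numpy()
image = (image / 127.5 - 1.0).astype(np.float32)[np.newaxis, ...]

# 3. Run inference.
interpreter.set_tensor(input_details['index'], image)
interpreter.invoke()

# 4. Interpret output: map [-1, 1] back to displayable pixel values.
result = interpreter.get_tensor(output_details['index'])[0]
result = ((result + 1.0) * 127.5).astype(np.uint8)
```

On Android, the same four steps are expressed through the platform's TensorFlow Lite API rather than Python, but the structure is identical.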
TensorFlow Lite inference APIs are provided in multiple programming languages for the most common mobile and embedded platforms, such as Android, iOS, and Linux.
In this project, our aim is to run a mobile Image-to-Image translation model on the Android platform. Therefore, we’ll focus on loading and running our model on Android.
In the next article, we’ll show you how to set up an Android Studio environment that is suitable for loading and running our .tflite model. Stay tuned!