Coral Edge TPU Compiler Cannot Convert TFLite Model: Model Not Quantized
I am trying to deploy a simple test application with TensorFlow Lite. I want to use the Coral Edge TPU Stick on my device, so I have to perform quantization-aware training. I want
Solution 1:
As far as I know, you have to request quantization explicitly during the TFLite conversion. The example below uses post-training full-integer quantization with a representative dataset, which also produces a model the Edge TPU compiler accepts. Code example which quantizes a Keras model:
import numpy as np
import tensorflow as tf

dataset = tf.data.Dataset(...)  # your calibration dataset

def generator():
    # Yield one calibration sample at a time: a list with a single
    # batched float32 array per model input.
    for item in dataset:
        image = # get image from dataset item
        yield [np.array([image.astype(np.float32)])]

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Restrict to int8 builtin ops so the model is fully quantized,
# as required by the Edge TPU compiler.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = tf.lite.RepresentativeDataset(generator)
model = converter.convert()
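To clarify the format the representative dataset generator must produce, here is a self-contained sketch using NumPy only; the image shape (224, 224, 3) and the stand-in list of random "images" are assumptions for illustration:

```python
import numpy as np

# Stand-in for the real dataset: a few HxWxC uint8 "images"
# (shape and count here are illustrative assumptions).
dataset = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
           for _ in range(4)]

def generator():
    # Each yield is a list with one entry per model input; each entry
    # is a batched (1, H, W, C) float32 array, which is the format
    # tf.lite.RepresentativeDataset expects.
    for image in dataset:
        yield [np.array([image.astype(np.float32)])]

sample = next(iter(generator()))
print(sample[0].shape, sample[0].dtype)  # (1, 224, 224, 3) float32
```

Once `converter.convert()` succeeds, write the returned bytes to a `.tflite` file and pass it to the Edge TPU compiler, e.g. `edgetpu_compiler model_quant.tflite` (filename illustrative).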