Describe the bug
I'm doing transfer learning and would like to quantize my model at the end. The problem is that when I call the quantize_model() function (which is used successfully in numerous tutorials and videos), I get an error. How am I supposed to quantize a transfer-learning model, i.e. one that uses an already previously built model as a feature extractor?
System information
TensorFlow installed from (source or binary): pip
TensorFlow version: tf-nightly 2.2.0
TensorFlow Model Optimization version: 0.3.0
Python version: 3.7.7
Describe the expected behavior
I expect the model to be quantized successfully, with no error messages.
Describe the current behavior
I get the error: "ValueError: Quantizing a tf.keras Model inside another tf.keras Model is not supported."
Code to reproduce the issue
Can be found here
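For reference, a minimal sketch that should reproduce the same error (this is an assumed setup with a MobileNetV2 feature extractor, not the original linked code):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Pre-trained feature extractor (itself a tf.keras Model) -- assumed setup.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base_model.trainable = False

# Transfer-learning model: the base model is nested inside another Model.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Raises:
# ValueError: Quantizing a tf.keras Model inside another tf.keras Model is not supported.
quantized_model = tfmot.quantization.keras.quantize_model(model)
```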