QAT (quantization-aware training): Support quantizing models recursively #377


Description

@CRosero

Describe the bug
I'm doing transfer learning and would like to quantize my model at the end. The problem is that when I try to use the quantize_model() function (which is used successfully in numerous tutorials and videos), I get an error. How am I supposed to quantize a model built for transfer learning, i.e. one that uses a previously built model as a feature extractor?

System information

TensorFlow installed from (source or binary): pip

TensorFlow version: tf-nightly 2.2.0

TensorFlow Model Optimization version: 0.3.0

Python version: 3.7.7

Describe the expected behavior
I expect the model to be quantized successfully, with no error messages.

Describe the current behavior
I get the error: "ValueError: Quantizing a tf.keras Model inside another tf.keras Model is not supported."

Code to reproduce the issue
Can be found here
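
A minimal sketch of the setup that hits the error (the model choice, input shape, and head layers here are illustrative assumptions; my real model differs, but any tf.keras Model nested inside another model reproduces it):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Pre-trained feature extractor, frozen as in a typical transfer-learning setup.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base_model.trainable = False

# New classification head; the pre-trained model becomes a nested sub-model.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Raises:
#   ValueError: Quantizing a tf.keras Model inside another tf.keras Model
#   is not supported.
quantized_model = tfmot.quantization.keras.quantize_model(model)
```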

Metadata

Labels

feature request · technique:qat (Regarding tfmot.quantization.keras (for quantization-aware training) APIs and docs)
