Hi there,
I’m developing a React Native mobile app that should detect and recognize text from a live camera stream (using a TensorCamera).
I’m totally new to this and, from reading the documentation, my code would look something like this:
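(Roughly this: a minimal sketch assuming the expo-camera + @tensorflow/tfjs-react-native TensorCamera setup from the docs; the resize dimensions and camera texture sizes are placeholders.)

import React from 'react';
import * as tf from '@tensorflow/tfjs';
import { Camera } from 'expo-camera';
import { cameraWithTensors } from '@tensorflow/tfjs-react-native';

// Wrap the Expo camera so it yields tf.Tensor3D frames instead of raw images.
// Note: await tf.ready() somewhere before this component is mounted.
const TensorCamera = cameraWithTensors(Camera);

export default function App() {
  const handleCameraStream = (images: IterableIterator<tf.Tensor3D>) => {
    const loop = async () => {
      const frame = images.next().value;
      if (frame) {
        // TODO: run the text detection / recognition model on `frame` here
        tf.dispose(frame);
      }
      requestAnimationFrame(loop);
    };
    loop();
  };

  return (
    <TensorCamera
      style={{ flex: 1 }}
      type={Camera.Constants.Type.back}
      // Frames are resized to this shape before being handed to the callback.
      resizeWidth={600}
      resizeHeight={800}
      resizeDepth={3}
      onReady={handleCameraStream}
      autorender={true}
      useCustomShadersToResize={false}
      cameraTextureWidth={1920}
      cameraTextureHeight={1080}
    />
  );
}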
I found these models on the Hub:
They are all .tflite!!!
How can I use them in my code?
Thank you 
PS:
I’ve tried to convert it using:
tensorflowjs_converter --input_format=tf_hub 'https://tfhub.dev/tulasiram58827/lite-model/keras-ocr/float16/2' path/my/folder
but it raises a lot of errors:
If you want to generate a TensorFlow.js model, you can use the following procedure: dequantize the tflite model to generate ONNX, then generate a TensorFlow saved_model again, and convert from the saved_model to TFJS.
pip install -U tf2onnx
python -m tf2onnx.convert \
  --opset 11 \
  --tflite lite-model_craft-text-detector_dr_1.tflite \
  --dequantize \
  --output lite-model_craft-text-detector_dr_1.onnx

pip install onnx2tf
onnx2tf \
  -i lite-model_craft-text-detector_dr_1.onnx \
  -ois input:1,3,800,600 -osd

pip install protobuf==3.20
tensorflowjs_converter \
  --input_format tf_saved_model \
  --output_format tfjs_graph_model \
  saved_model \
  tfjs_model_float32

tensorflowjs_converter \
  --input_format tf_saved_model \
  --output_format tfjs_graph_model \
  --quantize_float16 "*" \
  saved_model \
  tfjs_model_float16
Thank you so much for your kind and very detailed response! 
I’ve tried it but, unfortunately, when running this command:
onnx2tf \
  -i lite-model_craft-text-detector_dr_1.onnx \
  -ois input:1,3,800,600 -osd
I get these errors:
TypeError: argument of type 'NoneType' is not iterable
ERROR: input_onnx_file_path: /Users/someuser/Downloads/myModel/lite-model_craft-text-detector_dr_1.onnx
ERROR: onnx_op_name: onnx_tf_prefix_MatMul_84;StatefulPartitionedCall/onnx_tf_prefix_MatMul_845
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
Any tips?
I performed the same procedure again, but unfortunately no such error occurs in my environment.
tf.nn.convolution_25 (TFOpLambda)   (1, 400, 300, 2)   0   ['tf.split_26[0][0]']
tf.math.add_46 (TFOpLambda)         (1, 400, 300, 2)   0   ['tf.nn.convolution_25[0][0]']
Identity (TFOpLambda)               (1, 400, 300, 2)   0   ['tf.math.add_46[0][0]']
============================================================================================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
____________________________________________________________________________________________________________________________________________
saved_model output started ==========================================================
saved_model output complete!
Float32 tflite output complete!
Float16 tflite output complete!
I expect this is probably due to an outdated version of some package in your environment's backend, etc. Therefore, I recommend trying the following. If you are a Windows user, running the commands inside WSL2 Ubuntu should go very smoothly.
docker login ghcr.io
Username (xxxx): {Enter}
Password: {Personal Access Token}
Login Succeeded

docker run --rm -it \
  -v `pwd`:/workdir \
  -w /workdir \
  ghcr.io/pinto0309/onnx2tf:1.13.3

wget -O lite-model_craft-text-detector_dr_1.tflite https://tfhub.dev/tulasiram58827/lite-model/craft-text-detector/dr/1?lite-format=tflite

pip install -U tf2onnx
python -m tf2onnx.convert \
  --opset 11 \
  --tflite lite-model_craft-text-detector_dr_1.tflite \
  --dequantize \
  --output lite-model_craft-text-detector_dr_1.onnx

onnx2tf \
  -i lite-model_craft-text-detector_dr_1.onnx \
  -ois input:1,3,800,600 -osd

pip install tensorflowjs
tensorflowjs_converter \
  --input_format tf_saved_model \
  --output_format tfjs_graph_model \
  saved_model \
  tfjs_model_float32

tensorflowjs_converter \
  --input_format tf_saved_model \
  --output_format tfjs_graph_model \
  --quantize_float16 "*" \
  saved_model \
  tfjs_model_float16
Now it kills itself:
...
Model optimizing complete!
Automatic generation of each OP name started ========================================
Automatic generation of each OP name complete!
Model loaded ========================================================================
Model convertion started ============================================================
INFO: input_op_name: input shape: [1, 3, 800, 600] dtype: float32
Killed
I used the Docker approach you suggested above.
My computer is a MacBook (macOS 13.3.1 (a)).
I am not going to advise you on issues specific to your environment. "Killed" is probably due to a lack of RAM or something similar. TensorFlow unnecessarily consumes a lot of RAM; it is not an onnx2tf problem.
Thank you again, you are very kind and patient with me. I’m learning a lot with your help! 
With Colab it works, and I was able to convert lite-model_craft-text-detector_dr_1.onnx easily.
I was trying another model from the Hub, rosetta, but with this command:
!onnx2tf -i lite-model_rosetta_dr_1.onnx -ois input:1,3,800,600 -osd
I get the following errors:
ERROR: input_onnx_file_path: lite-model_rosetta_dr_1.onnx
ERROR: onnx_op_name: transpose_1;StatefulPartitionedCall/transpose_11
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
Could you please suggest a way to fix it?
Thank you in advance 
PS: I’ve tried to use the model exported as json+bin in my code, but it gave me the following error:
Error: layer: Improper config format: { ... <- all json content here } 'className' and 'config' must set.
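(For reference: the conversion above was run with --output_format tfjs_graph_model, so the json+bin pair most likely describes a graph model, and loading it with tf.loadLayersModel fails with exactly this "'className' and 'config' must set" message. A minimal sketch of loading it as a graph model instead, with placeholder asset paths and shard file names:)

import * as tf from '@tensorflow/tfjs';
import { bundleResourceIO } from '@tensorflow/tfjs-react-native';

// Placeholder paths: model.json and its weight shard(s) bundled with the app.
const modelJson = require('./assets/tfjs_model_float16/model.json');
const modelWeights = require('./assets/tfjs_model_float16/group1-shard1of1.bin');

export async function loadDetector(): Promise<tf.GraphModel> {
  await tf.ready();
  // Graph models are loaded with loadGraphModel, not loadLayersModel.
  return tf.loadGraphModel(bundleResourceIO(modelJson, modelWeights));
}

// Usage (check model.inputs for the exact input shape/layout expected):
// const model = await loadDetector();
// const output = model.execute(tf.zeros([1, 800, 600, 3])) as tf.Tensor;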
There was an omission in onnx2tf's conversion of the Slice op, which will be fixed and released later.
https://github.com/PINTO0309/onnx2tf/pull/377
I fixed it. From today rosetta can also be converted. Note that I do not know how to implement React. You’ll have to rely on other engineers.
Your work is amazing!
Thank you so much. I’m going to try it soon 
I will definitely ask in another thread about other application’s errors.
Hello,
I was trying to convert the keras-ocr tflite using the previous Colab, but it crashes with this error:
ERROR - rewriter <function rewrite_cond at 0x7f207905cdc0>: exception make_sure failure: Cannot find node with output 'std.constant107' in graph 'main'
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
Any suggestions?