README.md (+17 -17)
@@ -34,8 +34,8 @@ OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
 - Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud


-This open-source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
-It supports pre-trained models from the [Open Model Zoo], along with 100+ open
+This open-source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics.
+It supports pre-trained models from [Open Model Zoo], along with 100+ open
 source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, Kaldi.

 ### Components
@@ -99,7 +99,7 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
 </tbody>
 </table>

-Also OpenVINO™ Toolkit contains several plugins which should simplify to load model on several hardware devices:
+OpenVINO™ Toolkit also contains several plugins which simplify loading models on several hardware devices:
 <table>
 <thead>
 <tr>
@@ -140,7 +140,7 @@ By contributing to the project, you agree to the license and copyright terms the
 ### User documentation

-The latest documentation for OpenVINO™ Toolkit is available [here](https://docs.openvino.ai/). This documentation contains detailed information about all OpenVINO components and provides all important information which could be needed if you create an application which is based on binary OpenVINO distribution or own OpenVINO version without source code modification.
+The latest documentation for OpenVINO™ Toolkit is available [here](https://docs.openvino.ai/). This documentation contains detailed information about all OpenVINO components and provides all the important information you may need to create an application based on a binary OpenVINO distribution or your own OpenVINO version without source code modification.

 ### Developer documentation
@@ -161,29 +161,29 @@ The list of OpenVINO tutorials:
 ## System requirements

-The full information about system requirements depends on platform and is available on dedicated pages:
-Please take a look to [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) to get more information about OpenVINO build process.
+See the [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) to get more information about the OpenVINO build process.

 ## How to contribute

 See [CONTRIBUTING](./CONTRIBUTING.md) for details. Thank you!

 ## Get a support

-Please report questions, issues and suggestions using:
 * [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
 * [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
 * [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
-* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - An alternative, web-based version of OpenVINO designed to make production of pretrained deep learning models significantly easier.
-* [Computer Vision Annotation Tool (CVAT)](https://github.com/openvinotoolkit/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
+* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - an alternative, web-based version of OpenVINO designed to facilitate optimization and compression of pre-trained deep learning models.
+* [Computer Vision Annotation Tool (CVAT)](https://github.com/opencv/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
 * [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets.

 ---

 \* Other names and brands may be claimed as the property of others.

 [Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
src/README.md (+1 -1)
@@ -35,7 +35,7 @@ flowchart LR
 ```

 * [core](./core/README.md) is responsible for model representation, contains a set of supported OpenVINO operations and base API for model modification.
-* [inference](./inference) provides the API for model inference on different accelerators.
+* [inference](./inference/README.md) provides the API for model inference on different accelerators.
 * Transformations:
   * [common transformations](../src/common/transformations) - a set of common transformations which are used for model optimization
   * [low precision transformations](../src/common/low_precision_transformations) - a set of transformations which are needed to optimize quantized models
+OpenVINO Inference is a part of the OpenVINO Runtime library.
+The component is responsible for model inference on a hardware device and provides an API for OpenVINO Plugin development.
+
+OpenVINO Inference uses [the common coding style rules](../../docs/dev/coding_style.md).
+
+## Key people
+
+Members of the [openvino-ie-maintainers](https://github.com/orgs/openvinotoolkit/teams/openvino-ie-maintainers) team can approve and merge PRs to the inference component. They can help with any questions about the component.
+
+## Components
+
+OpenVINO Inference has the following structure:
+* [dev_api](./dev_api) contains the developer API needed to develop OpenVINO Plugins. To use this API, link your component against `openvino::runtime::dev`.
+* [include](./include) contains the public API. Detailed information about the provided API can be found [here](./docs/api_details.md).
+* [src](./src) contains the sources of the component.
+
+OpenVINO Inference has unit and functional tests. Unit tests are located in [src/tests/unit/inference_engine](../tests/unit/inference_engine/), and functional tests in [src/tests/functional/inference_engine](../tests/functional/inference_engine/).
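
The `dev_api` bullet above tells in-tree components to link against `openvino::runtime::dev`. A minimal CMake sketch of what that could look like for a component inside the OpenVINO build tree; the target name `my_component` and source file name are hypothetical examples, not part of the repository:

```cmake
# Hypothetical in-tree component using the OpenVINO developer API.
# Target and source names (my_component, my_component.cpp) are examples only.
add_library(my_component STATIC my_component.cpp)

# Linking against openvino::runtime::dev makes the developer API headers
# from dev_api/ available, in addition to the public runtime API.
target_link_libraries(my_component PRIVATE openvino::runtime::dev)
```

Out-of-tree applications that only need the public API from `include/` would instead link against the regular runtime target.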