OpenVINO™ Workflow Consolidation Tool Tutorial


Last modified date: 2019-06-05

About

The OpenVINO™ Workflow Consolidation Tool (OWCT) is a deep learning tool for converting trained models into inference engines accelerated by the Intel® Distribution of OpenVINO™ toolkit. Inference engines allow you to verify the inference results of trained models.

The Intel® Distribution of OpenVINO™ toolkit accelerates convolutional neural network (CNN) workloads and extends them across Intel® hardware accelerators to maximize performance.

You can install the OWCT from App Center in QTS.
Note:

Container Station must be installed in order to use the OWCT.


Compatibility

NAS models:
  • TVS-x72XT
    Note: The TVS-x72XT does not support field-programmable gate array (FPGA) cards as hardware accelerators.
  • TVS-x72XU
  • TS-2888X

Note: Only Intel-based NAS models support the OpenVINO™ Workflow Consolidation Tool.

OS: QTS 4.4

Intel® Distribution of OpenVINO™ toolkit version: 2018 R5
For details, go to https://software.intel.com/en-us/articles/OpenVINO-RelNotes.

Hardware Accelerators

You can use hardware accelerators installed in your NAS to improve the performance of your inference engines.

Hardware accelerators installed in the NAS are displayed on the Home screen.

Hardware accelerators with the status Ready can be used when creating inference engines.

If the status displayed is Settings, go to Control Panel > System > Hardware > Graphics Card to set up the hardware accelerator.

Note:
  • To use field-programmable gate array (FPGA) cards on a QNAP NAS, virtual machine (VM) pass-through must be disabled.

  • Each FPGA resource can create one inference engine.

  • Each vision processing unit (VPU) resource can create one inference engine.
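For reference, the accelerator choices above correspond to the device names used by the underlying Intel® Distribution of OpenVINO™ toolkit. The following is a minimal sketch using the toolkit's 2018 R5 Python API (IEPlugin); the mapping reflects standard toolkit device names rather than OWCT internals, and the selected device string is only an example.

    # Minimal sketch: standard OpenVINO device names for the accelerators above.
    # Assumes the 2018 R5 Python API (IEPlugin); the mapping is illustrative.
    from openvino.inference_engine import IEPlugin

    DEVICE_NAMES = {
        "CPU": "CPU",
        "Integrated graphics": "GPU",
        "VPU (single Movidius device)": "MYRIAD",
        "VPU (HDDL card)": "HDDL",
        "FPGA with CPU fallback": "HETERO:FPGA,CPU",
    }

    # Create a plugin for the selected device; with the HETERO device string,
    # layers the FPGA cannot run fall back to the CPU.
    plugin = IEPlugin(device=DEVICE_NAMES["CPU"])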

Creating Inference Engines

Use the OWCT to create inference engines and configure inference parameters.

  1. Open the OWCT.
  2. Click + OpenVINO™ Engine.

    The Create New OpenVINO™ Inference Engine window appears.

  3. Select an inference model.
  4. Click Next.
  5. Configure the inference model.
    Table 1. Intel Pretrained Model
    • Name: Enter a name between 2 and 50 characters.
    • Inference Model: Displays the type of inference model.
    • Inference Type: Select the type of inference for your model.
    • Intel® pretrained model file: Select a pretrained model file.
      Tip: Click Guide to view information about the selected model.
    • Source Type: Select the type of source file to upload to the inference engine.
    • Accelerator/Precision: Select the hardware accelerator and precision rate that will influence the duration and accuracy of the inference.

    Table 2. Custom Model
    • Name: Enter a name between 2 and 50 characters.
    • Inference Model: Displays the type of inference model.
    • Inference Type: Select the type of inference for your model.
    • Source Type: Select the type of source file to upload to the inference engine.
    • Accelerator: Select the hardware accelerator. Different devices provide different processing speeds for the inference engine.
    • Precision: Select the precision rate, which determines the duration and accuracy of the inference.
    • Bitstream (FPGA only): Select a bitstream (computing) file for the FPGA device.
    • Framework: Select the framework for your inference model.
      Note: The OWCT currently supports TensorFlow and Caffe.
    • Model files: Select the model files for your inference model (a conversion sketch using these files follows this procedure).
      The required file types depend on the framework:
      • TensorFlow: .pb, .config
      • Caffe: .caffemodel, .prototxt
    • Label file (optional): Select a label file. Objects detected during the inference are displayed with the labels from this file.
    • Arguments (optional): Add arguments to your inference engine.

  6. Click Next.
  7. Review the summary.
  8. Click Create.

The OWCT creates the inference engine and displays it on the Inference Events screen.
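The model files listed in Table 2 match what the toolkit's Model Optimizer consumes when converting a trained model into an Intermediate Representation (IR). The sketch below shows such a conversion with the 2018 R5 toolkit scripts; the installation path, file names, and FP16 precision are assumptions for illustration, not values used by the OWCT.

    # Minimal sketch: converting custom model files to IR with the toolkit's
    # Model Optimizer (2018 R5). Paths and file names are assumptions.
    import subprocess

    MO_DIR = "/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer"

    # TensorFlow: a frozen graph (.pb); object detection models also take a
    # pipeline .config file via additional Model Optimizer options.
    subprocess.run([
        "python3", MO_DIR + "/mo_tf.py",
        "--input_model", "frozen_inference_graph.pb",
        "--data_type", "FP16",          # lower precision for GPU/VPU/FPGA targets
        "--output_dir", "ir_output",
    ], check=True)

    # Caffe: weights (.caffemodel) plus the network definition (.prototxt).
    subprocess.run([
        "python3", MO_DIR + "/mo_caffe.py",
        "--input_model", "model.caffemodel",
        "--input_proto", "deploy.prototxt",
        "--data_type", "FP16",
        "--output_dir", "ir_output",
    ], check=True)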

Using Computer Vision with Inference Engines

  1. Open the OWCT.
  2. Go to the Inference Events screen.
  3. Ensure that the status of the target inference engine is Ready.
  4. Click the link below the inference engine name.

    The Object Detection window opens in a new tab.

  5. Click Upload.
    The Select File window appears.
  6. Select a file.
  7. Click Upload.

    The object detection inference process starts.

  8. When the inference finishes, click Download.
    The Select Folder window appears.
  9. Select the folder where you want to save the inference result.
  10. Click Save.
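The downloaded result represents the objects detected in the uploaded file. As a point of reference, the sketch below shows how a typical SSD-style detection output is interpreted; the [1, 1, N, 7] output layout, file names, and confidence threshold are assumptions about the model in use, not a description of the OWCT's internal processing.

    # Illustrative sketch: interpreting an SSD-style detection blob of shape
    # [1, 1, N, 7], where each row is [image_id, class_id, confidence,
    # x_min, y_min, x_max, y_max] with normalized coordinates. All names and
    # values are assumptions for illustration.
    import cv2

    def draw_detections(image_path, detections, labels, threshold=0.5):
        frame = cv2.imread(image_path)
        height, width = frame.shape[:2]
        for _, class_id, confidence, x_min, y_min, x_max, y_max in detections[0][0]:
            if confidence < threshold:
                continue
            top_left = (int(x_min * width), int(y_min * height))
            bottom_right = (int(x_max * width), int(y_max * height))
            cv2.rectangle(frame, top_left, bottom_right, (0, 255, 0), 2)
            name = labels.get(int(class_id), str(int(class_id)))
            cv2.putText(frame, "%s %.2f" % (name, confidence), top_left,
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imwrite("result.jpg", frame)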

Managing Inference Engines

You can view and manage inference engines on the Inference Events screen.

Each inference engine provides buttons for the following actions:
  • Start the inference process.
  • Stop the inference process.
  • Display details and the log status of the inference engine.
  • Save the inference engine as an Intermediate Representation (IR) file for advanced applications (see the sketch below).
  • Delete the inference engine.
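An exported IR file can be loaded directly with the toolkit's Python API for advanced applications. The following is a minimal sketch based on the API shipped with 2018 R5 (IENetwork/IEPlugin; later toolkit releases replace IEPlugin with IECore); the file names, device, and test image are assumptions for illustration.

    # Minimal sketch: running an exported IR with the 2018 R5 Python API.
    # File names, device, and input image are assumptions.
    import cv2
    import numpy as np
    from openvino.inference_engine import IENetwork, IEPlugin

    net = IENetwork(model="exported_engine.xml", weights="exported_engine.bin")
    plugin = IEPlugin(device="CPU")            # or GPU, MYRIAD, HDDL, FPGA
    exec_net = plugin.load(network=net)

    input_blob = next(iter(net.inputs))
    n, c, h, w = net.inputs[input_blob].shape  # expected NCHW input layout

    # Read one image, resize it to the network input size, and reorder HWC -> NCHW.
    image = cv2.resize(cv2.imread("test.jpg"), (w, h)).transpose((2, 0, 1))
    result = exec_net.infer(inputs={input_blob: image[np.newaxis, ...]})
    print({name: blob.shape for name, blob in result.items()})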
