Introduction
1. What is Intel Cloud-Client AI Service Framework (CCAI)
2. How does CCAI work
2.1 The high level call flow of CCAI (1.3 release)
2.2 CCAI (1.3 release) stack architecture
3. Integrate and use CCAI runtime environment
3.1 How to install the pre-built runtime (if available) and verify it quickly
3.1.1 Prepare
3.1.2 Proxy setting
3.1.3 Container image preparation
3.1.4 Download and install service-framework packages/test cases/docker files on host
3.1.5 Start/Stop service-framework
3.2 Verify CCAI functions with samples or test cases
4. How to set up the development environment
4.1 Download and run development docker image
4.2 Enter development container environment
4.3 Set up the development environment directly on your machine
4.4 Set up the PulseAudio service
5. How to generate CCAI packages and container image
5.1 Build CCAI packages and generate CCAI container image from pre-built binaries
5.2 How to build from source
5.2.1 Download initial project - container
5.2.2 Build host packages
5.2.3 Install CCAI services and image on host
5.3 How to check all component versions
5.4 Generate CCAI OTA image
6. How to develop AI services for CCAI
6.1 CCAI service work mode
6.2 Preparation
6.2.1 Using OpenVINO as inference engine in CCAI
6.2.2 Using PyTorch as inference engine in CCAI
6.2.3 Using ONNX runtime as inference engine in CCAI
6.2.4 Using TensorFlow as inference engine in CCAI
6.2.5 Using PaddlePaddle as inference engine in CCAI
6.3 Develop services
6.3.1 Develop FCGI service
6.3.2 Develop gRPC service
6.4 Deploy services for CCAI
6.4.1 Deploy into container
6.4.2 Deploy on host
6.4.3 Specific to PyTorch service
6.4.4 Specific to ONNX service
6.4.5 Specific to TensorFlow service
6.4.6 Specific to PaddlePaddle service
6.5 Sample: Add a service for CCAI
6.5.1 Install packages
6.5.2 Compose the header file
6.5.3 Extract service runtime library from CCAI container
6.5.4 Write the main source code
6.5.5 Build the program
6.5.6 Write the configuration file
6.5.7 Build docker image
6.5.8 Test
7. How to use AI services provided by CCAI
7.1 Request serving via REST APIs
7.2 Request serving via gRPC APIs
7.3 Proxy setting
8. How to integrate new AI services with CCAI framework
8.1 Where to put those service files
8.2 Where to put the related neural network model files
8.3 How to enable services via API gateway
8.4 How to generate new container image
9. How to enable Encryption and Authentication for CCAI
9.1 Encryption
9.2 Enable authentication
10. How to enable DNS interception
11. APIs Reference List
11.1 FCGI APIs Manual
11.1.1 TTS API usage
11.1.2 ASR API usage (offline ASR case)
11.1.3 API in Speech sample
11.1.4 Policy API usage
11.1.5 Classification API usage
11.1.6 Face Detection API usage
11.1.7 Facial Landmark API usage
11.1.8 OCR API usage
11.1.9 formula API usage
11.1.10 handwritten API usage
11.1.11 ppocr API usage
11.1.12 segmentation API usage
11.1.13 super resolution API usage
11.1.14 digitalnote API usage
11.1.15 Video pipeline management (control) API usage
11.1.16 Live ASR API usage (online ASR case)
11.1.17 Pose estimation API usage
11.1.18 Capability API usage
11.2 gRPC APIs Manual
11.2.1 proto file
11.2.2 OCR method
11.2.3 ASR method
11.2.4 Classification method
11.2.5 FaceDetection method
11.2.6 FacialLandmark method
11.3 Low level APIs Manual
11.3.1 C++ APIs for OpenVINO Backend Engine (Version 0)
11.3.1.1 Return value (deprecated)
11.3.1.2 Server parameter
11.3.1.3 Policy configuration API
11.3.1.4 Image API (deprecated)
11.3.1.5 ASR API (deprecated)
11.3.1.6 Common API (deprecated)
11.3.1.7 Video API
11.3.1.8 Load OpenVINO Model from Buffer API
11.3.1.9 Configure a temporary inference device API
11.3.2 Python API
11.3.2.1 Image API (deprecated)
11.3.2.2 Image API
11.3.2.3 ASR API
11.3.2.4 Common API
11.3.2.5 Policy configuration API
11.3.2.6 Set temporary inference device API
11.3.3 C++ APIs for Different Backend Engines (Version 1)
11.3.3.1 Return Value
11.3.3.2 Inference Engines
11.3.3.3 Image API
11.3.3.4 Speech API
11.3.3.5 Common API
11.3.4 Video pipeline management (construct) APIs
11.4 How to extend video pipeline with video pipeline manager
11.4.1 Construct the plugin
11.4.2 Build the plugin
11.4.3 Install the plugin to destination
11.4.4 Test your plugin
11.5 Smart Photo Search
12. Test cases and packages installation
12.1 Enabled services for testing
12.2 High Level APIs test cases
12.2.1 For testing all provided APIs in one batch
12.2.2 For testing python implementation of related REST APIs
12.2.3 For testing C++ implementation of related REST APIs
12.2.4 For testing C++ implementation of related gRPC APIs
12.3 Health-monitor mechanism test case
12.3.1 Test case
12.3.2 How it works (in brief)
12.4 Deb package for host-installed application/service (if not installed yet)
12.5 Deb package for host-installed neural network models (if not installed yet)
Introduction
Intel Cloud-Client AI Service Framework (CCAI) Development Manual