# Accelerate Deep Learning Training and Inference for Model Zoo Workloads on Intel GPU

## Introduction

This example provides a guide to running Model Zoo workloads using the TensorFlow* framework on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.

## Quick Start Guide

### Run Models in the Docker Container

- For Intel® Data Center GPU Flex Series

  Refer to [AI Model Zoo Containers on Flex Series](https://www.intel.com/content/www/us/en/developer/articles/containers/model-zoo-flex-series-containers.html) to run optimized Deep Learning inference workloads.

- For Intel® Data Center GPU Max Series

  Refer to [AI Model Zoo Containers on Max Series](https://www.intel.com/content/www/us/en/developer/articles/containers/model-zoo-max-series-containers/model-zoo-max-series-containers.html) to run optimized Deep Learning training and inference workloads.

### Run Models on Bare Metal

Refer to [AI Model Zoo Examples on Intel® Data Center GPU](https://github.com/IntelAI/models/tree/master#intel-data-center-gpu-workloads) to run optimized Deep Learning training and inference workloads on bare metal.
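Before running any of the workloads above, it can help to confirm that the container and the GPU are visible to the framework. The commands below are a minimal sketch of that check; the image tag, device paths, and `XPU` device name are assumptions based on common Intel® Extension for TensorFlow* setups, not taken from this document — consult the linked container pages for the exact image names and run options for your GPU.

```shell
# Pull an Intel Extension for TensorFlow* GPU image
# (image tag is an assumption; see the linked container pages).
docker pull intel/intel-extension-for-tensorflow:xpu

# Run the container with the Intel GPU device nodes exposed,
# then list the devices TensorFlow* can see. Intel Extension for
# TensorFlow* typically registers Intel GPUs as "XPU" devices.
docker run -it --rm \
  --device /dev/dri \
  -v /dev/dri/by-path:/dev/dri/by-path \
  intel/intel-extension-for-tensorflow:xpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('XPU'))"
```

If the final command prints an empty list, the GPU is not visible inside the container; re-check the driver installation and the `--device` mapping before launching a workload.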