# Examples

A wide variety of examples are provided to demonstrate the usage of Intel® Extension for TensorFlow*.

|Name|Description|Hardware|
|-|-|-|
|[Quick Example](quick_example.html)|Quick example to verify Intel® Extension for TensorFlow* and the running environment; a minimal verification sketch follows this table.|CPU & GPU|
|[ResNet50 Inference](./infer_resnet50/README.html)|ResNet50 inference on Intel CPU or GPU without code changes.|CPU & GPU|
|[BERT Training for Classifying Text](./train_bert/README.html)|BERT training with Intel® Extension for TensorFlow* on Intel CPU or GPU, using the TensorFlow official example without code changes.|CPU & GPU|
|[Speed up Inference of Inception v4 by Advanced Automatic Mixed Precision](./infer_inception_v4_amp/README.html)|Tests and compares inference performance with FP32 and with Advanced Automatic Mixed Precision (AMP) (mixed BF16/FP16 and FP32), showing the acceleration of inference by Advanced AMP on Intel CPU and GPU.|CPU & GPU|
|[Accelerate AlexNet by Quantization with Intel® Extension for TensorFlow*](./accelerate_alexnet_by_quantization/README.html)|An end-to-end example showing a pipeline that builds a CNN model to recognize handwritten digits and speeds up the model with quantization by Intel® Neural Compressor and Intel® Extension for TensorFlow* on Intel CPU and GPU.|CPU & GPU|
|[Accelerate Deep Learning Inference for Model Zoo Workloads on Intel CPU and GPU](./model_zoo_example/README.html)|Examples of running Model Zoo workloads on Intel CPU and GPU with the optimizations from Intel® Extension for TensorFlow*, without any code changes.|CPU & GPU|
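As a quick sanity check before running the examples above, the sketch below verifies that the extension can be imported and whether an Intel GPU is visible. It assumes the standard pip package `intel-extension-for-tensorflow` is installed; the `itex.__version__` attribute and the `"XPU"` device type registered for Intel GPUs are assumptions based on a typical install, so refer to the linked Quick Example for the authoritative walkthrough.

```python
# Minimal environment check (assumes intel-extension-for-tensorflow is installed).
import tensorflow as tf
import intel_extension_for_tensorflow as itex  # import fails if the extension is missing

print("TensorFlow version:", tf.__version__)
print("Intel Extension for TensorFlow version:", itex.__version__)

# On Intel GPUs the extension typically registers an "XPU" device;
# on a CPU-only install this list is expected to be empty.
print("XPU devices:", tf.config.list_physical_devices("XPU"))
```

If the import succeeds and, on a GPU system, at least one XPU device is listed, the environment is ready for the examples in the table.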