System Stacks Whitepapers


Identify Galaxies Using the Deep Learning Reference Stack

This article describes how users can detect and classify galaxies by their morphology using image processing and computer vision algorithms. We used imaging data from the Sloan Digital Sky Survey and galaxy classifications from the Galaxy Zoo project, along with the Deep Learning Reference Stack, a software stack optimized for performance on Intel® Xeon® processors.

GitHub* Issue Classification Utilizing the End-to-End System Stacks

This article describes how to classify GitHub issues using the end-to-end system stacks from Intel. In this scenario, we auto-classify and tag issues using the Deep Learning Reference Stack for deep learning workloads and the Data Analytics Reference Stack for data processing.
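
The article's pipeline trains a deep model with DLRS; as a framework-free sketch of the classification step itself, the snippet below scores an issue's text against per-label keyword sets and picks the best match. The labels and keywords are invented for illustration.

```python
# Minimal bag-of-words issue tagger: count how many of each label's
# keywords appear in the issue text and return the best-scoring label.

from collections import Counter

LABEL_KEYWORDS = {
    "bug": {"crash", "error", "traceback", "fails", "segfault"},
    "feature": {"add", "support", "request", "would", "nice"},
    "docs": {"readme", "documentation", "typo", "docs", "example"},
}

def classify(issue_text):
    """Return the label whose keywords best match the issue text."""
    words = Counter(issue_text.lower().split())
    scores = {
        label: sum(words[w] for w in kws)
        for label, kws in LABEL_KEYWORDS.items()
    }
    return max(scores, key=scores.get)

print(classify("App fails with a traceback error on startup"))  # bug
print(classify("Fix typo in the README documentation"))         # docs
```

A learned model replaces the fixed keyword sets with representations trained on labeled issues, but the input/output contract — text in, tag out — is the same.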

Using AI to Help Save Lives

In health care, AI can provide insights into patient data, whether analyzing records or examining images. Despite all of the advancements in artificial intelligence, developers have been forced to integrate solutions and frameworks themselves, a difficult task that distracts from the development effort. In this paper, we focus on solving two problems facing the domains of medical diagnosis and artificial intelligence. First, we designed an AI training pipeline to detect intracranial hemorrhage (ICH), a serious condition often caused by traumatic brain injuries. ICH must be diagnosed and treated as quickly as possible to avoid disability or death of the patient. Second, we tackled the complexity of creating an AI pipeline with multiple software frameworks, configurations, and dependencies. Our solution was to use the System Stacks for Linux* OS, a purpose-built collection of containers that provide integrated and tuned AI frameworks.

State-of-the-art BERT Fine-tune Training and Inference on 3rd Gen Intel® Xeon® Scalable Processors with the Intel Deep Learning Reference Stack

Driven by real-life use cases ranging from medical diagnostics to financial fraud detection, deep learning's neural networks are growing in size and complexity as more and more data becomes available to be consumed. This influx of data allows for more accurate scoring, which can result in better AI models, but it presents challenges to compute performance. In this guide, we walk through a solution to set up your infrastructure and deploy a BERT fine-tune training and inference workload using the DLRS containers from Intel.
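
BERT's inputs are WordPiece tokens, so every fine-tuning or inference run starts with tokenization. The sketch below implements the greedy longest-match-first WordPiece scheme with a toy vocabulary; real BERT vocabularies hold roughly 30,000 pieces, with `##` marking word-internal pieces.

```python
# Greedy longest-match WordPiece tokenization for a single word.
# Repeatedly take the longest vocabulary piece that matches the
# remaining characters; pieces after the first are prefixed "##".

def wordpiece(word, vocab, unk="[UNK]"):
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        cur = None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:           # no piece matches: unknown word
            return [unk]
        pieces.append(cur)
        start = end
    return pieces

vocab = {"un", "##aff", "##able", "play", "##ing"}
print(wordpiece("unaffable", vocab))  # ['un', '##aff', '##able']
print(wordpiece("playing", vocab))    # ['play', '##ing']
```

Subword pieces let the model handle rare words without an unbounded vocabulary, which is part of why BERT fine-tunes well across domains such as medical and financial text.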

Pix2Pix: Utilizing the Deep Learning Reference Stack

This article describes how to perform image-to-image translation using the end-to-end system stacks from Intel. In this scenario, we used the Deep Learning Reference Stack, a highly performant stack optimized for Intel® Xeon® Scalable processors.

Next-Generation Hybrid Cloud Data Analytics Solution

Use the Hybrid Cloud Data Analytics Solution—optimized to run on the latest Intel® architecture—to quickly operationalize data analytics and AI.

Deploying Machine Learning Models with DLRS and TensorFlow* Serving

Use the Deep Learning Reference Stack with TensorFlow* Serving to create a servable for machine learning.
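
Once a servable is loaded, TensorFlow Serving exposes a REST endpoint of the form `/v1/models/<name>:predict` that accepts a JSON body with an `instances` list. The sketch below only builds such a request; the host, port, and model name are placeholders, and sending it requires a running TensorFlow Serving instance.

```python
# Build a TensorFlow Serving REST predict request with the stdlib.
# TF Serving's REST API defaults to port 8501 and expects
# {"instances": [...]} as the request body.

import json
import urllib.request

def predict_request(host, port, model, instances):
    """Return a ready-to-send urllib Request for the predict endpoint."""
    url = f"http://{host}:{port}/v1/models/{model}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

req = predict_request("localhost", 8501, "my_model", [[1.0, 2.0, 3.0]])
print(req.full_url)  # http://localhost:8501/v1/models/my_model:predict

# Against a live server, the response JSON carries a "predictions" list:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["predictions"])
```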

Performance Models in Runway ML with the Deep Learning Reference Stack

Learn how a collaboration between Runway and Intel used the Deep Learning Reference Stack to accelerate a person-segmentation model for machine learning.

Deep Learning Functions as a Service

We’ve integrated DLRS into Fn and OpenFaaS projects to showcase the advantages of managing event-driven independent functions running on an optimized stack for deep learning applications. Read More