See the Vitis™ AI Development Environment on amd.com.
The AI Engine Development Feature Tutorials highlight specific features and flows that help develop AI Engine applications.
The README of AI Engine Development contains important information, including tool versions and environment settings. It also has a table that describes the platform, operating system, and supported features or flows of each tutorial. Review those details before starting the AI Engine tutorials.
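The environment settings mentioned above generally amount to sourcing the Vitis tool setup script before building any tutorial. A minimal sketch, assuming a default install location and a 2024.1 tool version (the path, version, and platform directory below are assumptions; substitute the values from your installation and the tutorial's README):

```shell
# Source the Vitis tools (install path and version are assumptions;
# use the location and release of your own installation).
source /tools/Xilinx/Vitis/2024.1/settings64.sh

# Let the tools find Versal base platforms (example directory; check the
# tutorial README for the platform each design expects).
export PLATFORM_REPO_PATHS=/opt/xilinx/platforms
```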
| Tutorial | Description |
| --- | --- |
| AI Engine A-to-Z Flow for Linux | This tutorial introduces a platform-based approach to developing an adaptable subsystem that contains PL kernels and an AI Engine graph. It demonstrates how to validate the design in hardware emulation or on hardware using the base platform, then switch to a custom platform with minimal changes. |
| A to Z Bare-metal Flow | This tutorial walks through the steps to create a custom bare-metal platform. It also integrates a bare-metal host application along with an AI Engine graph and PL kernels. |
| Using GMIO with AIE | This tutorial introduces the use of global memory I/O (GMIO) for sharing data between the AI Engines and external DDR memory. |
| Runtime Parameter Reconfiguration | This tutorial shows how to dynamically update AI Engine runtime parameters. |
| Packet Switching | This tutorial shows how to use data packet switching with AI Engine designs to optimize efficiency. |
| AI Engine Versal™ Adaptive SoC Integration for Hardware Emulation and Hardware | This tutorial demonstrates how to create a system design running on the AI Engine, PS, and PL, and how to validate the design across these heterogeneous domains using hardware emulation. |
| Versal Adaptive SoC System Design Clocking | This tutorial demonstrates clocking concepts for the Vitis compiler by defining clocks for Adaptive Data Flow (ADF) graph PL kernels and PLIO kernels, using the clocking automation functionality. |
| Using Floating-Point in the AI Engine | These examples demonstrate floating-point vector computations in the AI Engine. |
| DSP Library Tutorial | This tutorial demonstrates how to use kernels provided by the DSP library for a filtering application and how to analyze the design results. It also shows how to use filter parameters to optimize the design's performance using simulation. |
| Debug Walk-through Tutorial | This tutorial demonstrates how to debug a multi-processor application on the Versal adaptive SoC AI Engines, using a beamformer example design. It shows both functional and performance-level debug techniques. |
| AI Engine DSP Library and Model Composer Tutorial | This tutorial shows how to design AI Engine applications using Vitis Model Composer, a set of blocksets for Simulink®. It demonstrates how easy it is to develop applications for AMD devices, integrating RTL/HLS blocks for the programmable logic and AI Engine blocks for the AI Engine array. |
| Versal Adaptive SoC Emulation Waveform Analysis | This tutorial demonstrates how to use the Vivado® logic simulator (XSIM) waveform GUI and the Vitis analyzer to debug and analyze your design for a Versal adaptive SoC. |
| AI Engine Performance and Deadlock Analysis Tutorial | This tutorial introduces you to performance analysis and optimization methods. It shows you how synchronization works in graph execution and demonstrates the analysis of a hang issue using an example. |
| Implementing an IIR Filter on the AI Engine | This multi-part tutorial describes how to implement an [infinite impulse response (IIR) filter](https://en.wikipedia.org/wiki/Infinite_impulse_response) on the AI Engine. |
| Post-Link Recompile of an AI Engine Application | This tutorial shows you how to modify an AI Engine application after you freeze the platform. It avoids a complete Vivado® tool run, which can take a long time if timing closure requires specific attention. The only limitation is that the hardware connection between the AI Engine array and the programmable logic (PL) must remain fixed. The tutorial demonstrates a Vitis IDE flow and a Makefile flow. |
| Using RTL IP with AI Engines | This tutorial demonstrates how to reuse any AXI-based IP you have created as an RTL IP. It shows how to control your platform and convert your RTL IP to an RTL kernel, allowing for a more streamlined design process. |
| AIE Compiler Features | This tutorial presents a variety of features useful for AI Engine / AI Engine-ML (AIE-ML) programming. These features help you write more readable and efficient code than was possible with early versions of the compiler. |
| Two Tone Filter on AIE Using DSP libraries and Vitis Model Composer | This tutorial demonstrates how to implement the same MATLAB® model design using the Vitis DSP libraries targeting the AI Engine. The MATLAB model design has a two-tone input signal. A finite impulse response (FIR) filter suppresses one tone from the two-tone input signal. The output of the FIR filter connects to an FFT block, which acts as a monitor to display a spectrum plot. This tutorial has four parts: part 1 uses a 400 MSPS sampling rate, part 2 uses 2000 MSPS, part 3 implements the part 1 design using the Vitis IDE, and part 4 implements the part 1 design using the Vitis Model Composer tool. |
| Compiling AIE Graphs for Independent Partitions | This tutorial demonstrates the flow for compiling AI Engine graphs for independent AI Engine partitions. The graphs reside in different partitions of the device and are verified independently with the AIE simulator, then integrated and packaged together by the v++ linker and v++ packager. The flow suits multiple teams working simultaneously on different parts of a system project, and it also supports integrating user-owned designs with vendor-provided (for example, AMD) IP cores. |
| RTL/AI Engine Interfacing Examples | This tutorial shows ways of interfacing custom RTL logic to the AI Engine using the Vitis acceleration flow. |
| AIE Kernel Optimization | This tutorial teaches how to diagnose and improve compute efficiency of algorithms implemented as AI Engine kernels by analyzing the generated microcode. It presents fundamentals of interpreting microcode and provides two example labs to encourage hands-on experience with optimizing AI Engine kernel performance. |
| Matrix Compute with Vitis Libraries | This tutorial explores how to use matrix multiplication/general matrix multiply (GEMM) from the Vitis DSP library. It examines various design requirements, configures the parameters accordingly, and finally migrates the design to the AIE-ML architecture to compare its performance with the AIE architecture. |
| A Gentle Introduction to AI Engine Kernel Programming | This tutorial guides you through getting data into and out of a kernel, using a simple contrived example. |
| System Timeline Tutorial | This tutorial demonstrates how to use the System Timeline, a new feature that traces all subsystems of the device (PL, PS, and the AI Engine array) and displays them in the Vitis Analyzer on the same graph with a synchronized timeline. |
Copyright © 2020–2026 Advanced Micro Devices, Inc.