Now that we have a VGA synchronization circuit, we can move on to designing a pixel generation circuit that specifies unique RGB data for certain pixels (i.e., an image). Before we actually go there, I thought I would separately talk a little bit about how to store image data on an FPGA. This discussion will focus mainly on using a Xilinx FPGA, more specifically the Basys 3, which uses 12-bit color.
Raw images are arrays of pixel data. Each pixel has a number of bits that specify the intensity of its red, green, and blue color components. Assuming an image is stored in 24-bit “True Color”, there are 8 bits for each color component. Since we are using the Basys 3 FPGA, we will need only the upper four bits of each color component (3 colors * 4 bits/color = 12 bits). So we need 12 bits per pixel to represent color, and y*x pixels in total, where y is the image height and x is the image width.
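As a quick sketch of that truncation (plain Python; the function name is mine, not part of the project), packing the upper four bits of each 8-bit channel into one 12-bit value looks like:

```python
def to_12_bit(r, g, b):
    # keep only the upper 4 bits of each 8-bit color channel
    return ((r >> 4) << 8) | ((g >> 4) << 4) | (b >> 4)

print(hex(to_12_bit(255, 255, 255)))     # 0xfff (white survives truncation)
print(hex(to_12_bit(0x12, 0x34, 0x56)))  # 0x135
```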
The image of Yoshi shown above is scaled up so that we can easily see each pixel as a block of color on our screen. This image has a height of 32 pixels and a width of 25 pixels. That means we have 32 * 25 = 800 pixels in total, each needing 12 bits to represent color, so 9600 bits altogether. There are a few ways to store all of these bits.
The first option is to use an external RAM chip, which is the best option in terms of having a lot of memory to work with. This option requires designing a memory controller in HDL, which is not a trivial task. Perhaps another time.
The second option is to use the FPGA’s distributed RAM. The FPGA logic cells have look-up tables (LUTs) that can be configured as memory. The Artix-7 FPGA used on the Basys 3 has 5200 slices with 4 LUTs per slice, and each LUT in a memory-capable (SLICEM) slice can act as 64 bits of RAM, giving a maximum of 400 Kb of distributed RAM on this device. While that is a useful amount of memory, it comes at the cost of using the logic cells we need to implement our logic circuits. Because of this trade-off, distributed RAM is generally reserved for smaller memories.
The third option is to use block RAM, or BRAM: dedicated memory modules embedded in the FPGA. The Basys 3 has 100 BRAM modules of 18 Kb each, for a total of 1800 Kb. BRAM can be configured as single- or dual-port RAM, as a ROM, or even as a FIFO. To store our image data we will use a ROM, or read-only memory.
One way to instantiate a ROM using BRAM is the Xilinx LogiCORE Block Memory Generator, which comes with ISE and Vivado. This tool instantiates device-specific Xilinx IP memory modules that can be initialized with image data using a coefficients (.coe) file.
Instead, we will use the Xilinx language templates to infer a ROM using BRAM. While this method is still Xilinx specific, it is at least somewhat device agnostic: the ROM we infer in HDL will work for both the Basys 2 and Basys 3, among others. The XST User Guide documents in detail the Verilog and VHDL language templates that can infer many different types of logic components.
Here is a general outline for how to infer a synchronous ROM with a 3-bit address that stores bytes:
The module is clocked, with an input port for the address to read from and an output port for the data at that address. The ROM is synchronous: the address register is updated on each clock cycle, which in turn updates the output data synchronously. The output data is routed by a case statement in an always block that maps address values to output values. The default case is necessary if not every address has a corresponding data output, as an incomplete case statement would otherwise infer a latch.
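A minimal Verilog sketch in the spirit of the XST language templates (the module name and data values here are placeholders of my choosing) could look like:

```verilog
// 8-entry x 8-bit synchronous ROM (placeholder contents)
module byte_rom
    (
        input wire clk,
        input wire [2:0] addr,
        output reg [7:0] data
    );

    reg [2:0] addr_reg;

    // registering the address makes the ROM synchronous
    always @(posedge clk)
        addr_reg <= addr;

    // address-to-data mapping
    always @*
        case (addr_reg)
            3'd0:    data = 8'h3A;
            3'd1:    data = 8'hC5;
            3'd2:    data = 8'h1F;
            // ... remaining addresses ...
            default: data = 8'h00;  // required so no latch is inferred
        endcase
endmodule
```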
With this basic template we can expand it to infer a synchronous ROM for our 32 x 25 image of Yoshi. While an image can be modeled as a 2D array, the actual inferred RAM is flat (i.e., addresses from 0 to 799), so we will have to combine the y, x (row, column) pixel addresses into a single address for the case statement. The minimum width of the row and column addresses is 5 bits each (log2(32) = 5), so we will have inputs for these and use a concatenation of the row and column addresses as the case expression. We will also need to widen the output port to 12 bits, because we are storing 12 bits per pixel.
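As a hedged sketch, the expanded module might look something like this (the port names and the handful of colors shown are my placeholders; the real ROM has one case entry per pixel):

```verilog
// 32 x 25 sprite ROM, 12-bit color (sketch; most cases omitted)
module yoshi_rom
    (
        input wire clk,
        input wire [4:0] row,
        input wire [4:0] col,
        output reg [11:0] color_data
    );

    reg [9:0] addr_reg;

    // combine the row and column into one flat 10-bit address
    always @(posedge clk)
        addr_reg <= {row, col};

    always @*
        case (addr_reg)
            10'd0:   color_data = 12'h0FF;  // placeholder colors
            10'd1:   color_data = 12'h0FF;
            // ... one case per pixel ...
            default: color_data = 12'h000;
        endcase
endmodule
```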
Above is the modified code for the Yoshi ROM, with most of the cases omitted for brevity. Clearly it would be a good idea to automate the creation of the Verilog HDL for a ROM, given an image. Below is a Python program I wrote that does this.
This Python code is written for Python 3, and needs to have numpy, scipy, and PIL installed. The generate function is called at the bottom, with the name of the image file to read in. The generate function has two optional arguments rem_x, and rem_y, which are passed on to another function. Inside the generate call, the image is read, and immediately the width and height are printed out for our reference.
Next, rom_12_bit() is called, which creates the Verilog file. This function has three optional arguments: mask, rem_x, and rem_y. If mask is set to True, instances of the color found at location (rem_x, rem_y) are removed from the output ROM and replaced with 0. If the defaults are left alone, the ROM will contain the exact image data. Later, when displaying the image in HDL, we can replace the removed cyan background with a background color of our choice. Because of this, it is important that the background color of the image we use doesn’t appear in the actual sprite of Yoshi.
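The original script is not reproduced here, but the following is a minimal pure-Python reconstruction of what the Verilog-emitting part of rom_12_bit() might look like. The function and argument names follow the description above; the internals are my own sketch, and it assumes the pixels have already been read into a list of RGB rows (e.g. via PIL's Image.open(...).convert("RGB")):

```python
def to_12_bit(r, g, b):
    # keep the upper 4 bits of each 8-bit channel
    return ((r >> 4) << 8) | ((g >> 4) << 4) | (b >> 4)

def rom_12_bit(pixels, name="yoshi_rom", mask=False, rem_x=0, rem_y=0):
    """Emit Verilog for a synchronous 12-bit ROM.

    pixels is a list of rows, each row a list of (r, g, b) tuples.
    If mask is True, every pixel matching the color at (rem_x, rem_y)
    is written out as 0 instead.
    """
    removed = to_12_bit(*pixels[rem_y][rem_x]) if mask else None
    lines = [
        f"module {name}",
        "    (",
        "        input wire clk,",
        "        input wire [4:0] row,",
        "        input wire [4:0] col,",
        "        output reg [11:0] color_data",
        "    );",
        "    reg [9:0] addr_reg;",
        "    always @(posedge clk)",
        "        addr_reg <= {row, col};",
        "    always @*",
        "        case (addr_reg)",
    ]
    for y, row in enumerate(pixels):
        for x, rgb in enumerate(row):
            color = to_12_bit(*rgb)
            if mask and color == removed:
                color = 0
            # flat address: 5 bits of row concatenated with 5 bits of column
            lines.append(f"            10'd{(y << 5) | x}: color_data = 12'h{color:03X};")
    lines += [
        "            default: color_data = 12'h000;",
        "        endcase",
        "endmodule",
    ]
    return "\n".join(lines)
```

A wrapper like the generate() function described above would then open the image file, print its dimensions, and write this string out to a .v file.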
The synthesis report from Vivado shows that the Verilog HDL inferred a 1024 x 12 ROM using block RAM. Note that the actual image data is 800 x 12, with 800 falling between 2^9 = 512 and 2^10 = 1024, which is why 1024 was used for that dimension of the ROM.
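The rounding up to a power of two can be checked in a couple of lines of Python (variable names are mine):

```python
import math

pixel_count = 32 * 25                          # 800 pixels in the image
addr_bits = math.ceil(math.log2(pixel_count))  # 800 pixels need 10 address bits
rom_depth = 2 ** addr_bits                     # depth rounds up to 1024
print(pixel_count, addr_bits, rom_depth)       # 800 10 1024
```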
The utilization report also shows that one 18kb block RAM module was used for our image data.
To see a complete FPGA video game project that utilizes block RAM to store sprites, click here.