This is an example usage of the Hub75 driver IP in this repository; it drives RGB panels using the iCEBreaker board along with the RGB Panel PMOD.
Default configuration is for a 64x64 panel using 1:32 multiplex. Note that some panels have the Red and Blue channels swapped, so you might have to adapt this ...
This example has 3 modes of operation, explained below. Each mode is selected by uncommenting the appropriate define at the top of the `top.v` file.
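As a sketch of that step, uncommenting a define can also be scripted. The define name `MODE_PATTERN` below is hypothetical (check the actual names at the top of `top.v` before using this):

```shell
# Uncomment a (hypothetical) mode define in top.v; adjust the
# define name and path to match the actual file contents.
sed -i -e 's|^//\(`define MODE_PATTERN\)|\1|' top.v
```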
This generates a Red & Blue gradient across the two axes, with some moving green lines on top. It is a very simple example of generating data directly on the FPGA itself, and it can also serve as a fairly reliable test that everything works.
In this mode, frames are read from the SPI flash and displayed in sequence.
For this to work, you need some video content to be preloaded into the flash.
You can use the special `make data-prog` target to load a default nyan cat animation.
To load your own animation into flash, check out the `ADDR_BASE` and `N_FRAMES` parameters that tell the module where to look in flash for the image data. The data needs to be raw frames; the pixel format is `RGB332`, `RGB565`, or `RGB888` depending on the `BITDEPTH` you selected (the default is 16 bits, i.e. `RGB565`).
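To see how much flash space the frames occupy, the arithmetic is straightforward. A minimal sketch for the default 64x64 panel at 16 bits per pixel (`RGB565`):

```shell
# One raw frame: 64 x 64 pixels, 2 bytes per pixel at BITDEPTH=16 (RGB565).
FRAME_BYTES=$((64 * 64 * 2))
# Byte offset of frame N inside the animation, relative to ADDR_BASE.
N=10
OFFSET=$((N * FRAME_BYTES))
echo "$FRAME_BYTES $OFFSET"   # 8192 81920
```

At `RGB888` (24 bits) the per-frame size grows to 12288 bytes, and at `RGB332` (8 bits) it shrinks to 4096 bytes.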
In this mode, video content is streamed from the host PC to the FPGA using SPI (through the FT2232H used for programming the FPGA).
A control software `stream.py` is provided in the `sw/` sub-directory. It requires Python 3.x and pyftdi. An example usage would be:
```shell
./stream.py --fps 10 --loop --input ../data/nyan_glitch_64x64x16.raw
```
See `--help` for the other available options.
To prepare content, you can use ffmpeg:

```shell
ffmpeg -i input.mp4 -filter_complex "[0:v]crop=540:540,scale=64:64" -pix_fmt rgb565 -f rawvideo output.raw
```
Obviously the numbers for the `crop` filter need to be adjusted for your source material to get a square image that selects the best region to show. Also, you can use Unix FIFOs to directly pipe content from ffmpeg to the `stream.py` application without the need for intermediate files.
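A minimal sketch of the FIFO approach, reusing the ffmpeg filter and `stream.py` options shown above (the FIFO path `/tmp/video.fifo` is just an example):

```shell
# Create a named FIFO; ffmpeg writes raw frames into it while
# stream.py reads them out the other end, with no temp file.
mkfifo /tmp/video.fifo
ffmpeg -i input.mp4 -filter_complex "[0:v]crop=540:540,scale=64:64" \
    -pix_fmt rgb565 -f rawvideo -y /tmp/video.fifo &
./stream.py --fps 10 --input /tmp/video.fifo
rm /tmp/video.fifo
```

Note the `-y`: ffmpeg otherwise refuses to write to the already-existing FIFO file, and the background `&` is needed because writing to a FIFO blocks until a reader opens it.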