
Lab 4: FPGA and Camera

Purpose:

The goal of this lab was to set up the Camera-FPGA system our robot will use to detect treasures during the competition. This involved configuring the settings of our OV7670 camera, storing the camera output data in our DE0-Nano FPGA, and using the FPGA to communicate the color of the camera image to our Arduino. One team, Greg and Michael, used the Arduino both to control the camera via I2C communication and to set up a protocol for receiving color and shape data about the camera image from the FPGA. The other team, Andrew and David, worked on setting up the FPGA to provide a clock signal to the camera; to receive, down-sample, and store the camera output in memory; and to process the image in memory to determine its color. Both teams then worked on integrating the two systems to accurately transmit images from the camera to the FPGA and the color of the image from the FPGA to the Arduino.

Team Arduino

Configuring the Camera

In order to get the camera to send the correct data, we first searched through the OV7670 datasheet for the settings we needed. Consulting the lab description, we found registers and values for the following settings:

Reset all registers –> COM7 is register 0x12 in hex, and setting it to 0x80 resets all registers on the camera
Enable scaling –> COM3 at 0x0C is set to 0x08
Use external clock as internal clock –> CLKRC at 0x11 is set to 0xC0
Pixel and resolution format –> COM7 at 0x12 is set to 0xC
Set the gain ceiling to something stable –> COM9 at 0x14 is set to 0x01
Set pixel format –> COM15 at 0x40 is set to 0xD0
Enable a color bar test –> COM17 at 0x42 is set to 0x8
Vertical and mirror flip –> MVFP at 0x1E is set to 0x30
Other parameters –> We set the camera to RGB444 since 565 wasn’t working (register 0x8C set to 0x2)

Once we decided on the values to start with, we wrote code for the OV7670_SETUP file, which first reads and prints the values stored at the register addresses, writes the values that we want, and then reads and prints them again to check that the setup took effect.

  read_key_registers();
  OV7670_write_register(0x12, 0x80);//reset all registers COM7
  OV7670_write_register(0x0C, 0x8); //COM3 enable scaling
  OV7670_write_register(0x11, 0xC0);//use external clock
  OV7670_write_register(0x12, 0xE);//set camera pixel format and enable color bar test with 0xE disable with 0xC
  OV7670_write_register(0x14, 0x01);//automated gain ceiling of 2x
  OV7670_write_register(0x40, 0xD0);//COM15 set for RGB 565 11010000 (208) D0
  OV7670_write_register(0x1E, 0x30); //mirror and flip
  OV7670_write_register(0x8C, 0x2); //RGB444
  OV7670_write_register(0x42, 0x8);//COM17 enable DSP color bar
  read_key_registers();
  set_color_matrix();
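
For reference, here is a minimal sketch of what the register-write helper might look like using the Arduino Wire library. The OV7670's SCCB write address is 0x42, i.e. a 7-bit I2C address of 0x21; the function name matches the helper used above, but the body is a reconstruction, not necessarily our exact implementation.

  #include <Wire.h>

  // Write [val] to camera register [reg] over I2C/SCCB.
  // 0x21 is the OV7670's 7-bit I2C address (write address 0x42).
  void OV7670_write_register(uint8_t reg, uint8_t val) {
    Wire.beginTransmission(0x21);
    Wire.write(reg); // register address
    Wire.write(val); // value to store
    Wire.endTransmission();
  }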

Then we set up the circuit as shown in the lab description (circuit image from https://cei-lab.github.io/ece3400-2018/lab4.html).

SDA and SCL from the camera were hooked up to A4 and A5 of the Arduino, respectively.

We then tried to write the registers of the camera, and the values we read back matched what we had written.

It took some significant debugging to make sure that these register values were correct, since incorrect settings only showed up as garbled communication with the FPGA and were hard to diagnose. After changing many of them around, we settled on the values listed above. Note that the register values displayed above are in decimal.

Setting up Arduino-FPGA Communication

For this lab, we decided to keep it simple and do a straightforward proof-of-concept for the FPGA–>Arduino communication scheme. The FPGA drives a few of its pins high or low based on color and shape, and the Arduino reads each of those pins individually to figure out whether the camera is seeing a certain shape or color.

// Read each flag the FPGA drives on its own pin (pin names illustrative)
int isBlue = digitalRead(bluePin);
int isRed = digitalRead(redPin);
int isTriangle = digitalRead(trianglePin);
int isSquare = digitalRead(squarePin);

// Light the matching indicator LED for each flag
if (isBlue) {digitalWrite(blueLED, HIGH);}
else {digitalWrite(blueLED, LOW);}
if (isRed) {digitalWrite(redLED, HIGH);}
else {digitalWrite(redLED, LOW);}
if (isTriangle) {digitalWrite(triangleLED, HIGH);}
else {digitalWrite(triangleLED, LOW);}
if (isSquare) {digitalWrite(squareLED, HIGH);}
else {digitalWrite(squareLED, LOW);}

The Arduino lights up LEDs based on these read values.

For the actual robot, this communication will be done over a serial data scheme, so that the Arduino need only read a message of a fixed size over a single input pin.
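
As a rough illustration of the idea (the pin names, message size, and clocking here are placeholders, not the scheme we ultimately used), the Arduino could shift in a message by sampling a data pin on each rising edge of a shared clock pin:

  // Illustrative sketch only: shift in an 8-bit message from the FPGA
  // over one data pin, sampling on each rising edge of a shared clock.
  // dataPin and clockPin are placeholder names, not our actual wiring.
  byte readMessage() {
    byte message = 0;
    for (int i = 0; i < 8; i++) {
      while (digitalRead(clockPin) == LOW) {}  // wait for rising edge
      message = (message << 1) | digitalRead(dataPin);
      while (digitalRead(clockPin) == HIGH) {} // wait for falling edge
    }
    return message;
  }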

For the final serial communication scheme we used on our robot, see Milestone 4.

Team FPGA

Setting Up PLL

To set up our PLL, we followed the instructions given to us precisely. We ended up with three clock signals at 24 MHz, 25 MHz, and 50 MHz. We then probed all three signals with an oscilloscope to confirm that we had the correct frequencies. Below are outputs from the oscilloscope for clk_c0 (24 MHz), clk_c1 (25 MHz), and clk_c2 (50 MHz), respectively.
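
For reference, the generated PLL is instantiated in the top-level module roughly as follows; the module and wire names are placeholders for whatever the Quartus MegaWizard actually generated, with CLOCK_50 assumed to be the DE0-Nano's 50 MHz board clock.

  wire c0_sig; // 24 MHz, used as the camera clock
  wire c1_sig; // 25 MHz
  wire c2_sig; // 50 MHz

  // Placeholder name for the MegaWizard-generated PLL module
  LAB4_PLL pll_inst (
    .inclk0(CLOCK_50), // 50 MHz board clock in
    .c0(c0_sig),
    .c1(c1_sig),
    .c2(c2_sig)
  );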

Reading and Writing Memory

We first tested the memory module without inputs from the camera to make sure we actually understood how it worked. In CONTROL_UNIT, we created two registers: X_ADDR, which increments every clock cycle, and Y_ADDR, which increments every time an entire row of data has been written into the memory. We then set w_en to 1 and output different colors according to the x-y coordinates. For example, to create our own color bar test, the output is defined as follows:

  if (X_ADDR < 20)
  begin
    input_data <= 8'b11111111;
  end
  else if (X_ADDR < 40)
  begin
    input_data <= 8'b00000010;
  end
  else if (X_ADDR < 60)
  begin
    input_data <= 8'b00000100;
  end
  else if (X_ADDR < 80)
  begin
    input_data <= 8'b00001000;
  end
  else if (X_ADDR < 100)
  begin
    input_data <= 8'b00010000;
  end
  else if (X_ADDR < 120)
  begin
    input_data <= 8'b00100000;
  end
  else if (X_ADDR < 140)
  begin
    input_data <= 8'b01000000;
  end
  else if (X_ADDR < 176)
  begin
    input_data <= 8'b10000000;
  end

Here are two other patterns we tested:

We then moved on to implementing our image processor.

Image Processor for Color Detection

We implemented our image processor without using inputs from the camera, as we felt we should test it in a more controlled environment and make sure it worked before connecting it to the camera input. We wrote the image processing module to detect the color of the image. Our algorithm was as follows:

Analyzing Each Pixel in the Image:

// Count strongly red and strongly blue pixels over each 176 x 144 frame
always @ (posedge CLK) begin
  // reset the counters at the start of each frame
  if (VGA_PIXEL_X == 0 && VGA_PIXEL_Y == 0) begin
    blue <= 0;
    red <= 0;
  end
  else begin
    if (VGA_PIXEL_X < 176 && VGA_PIXEL_Y < 144) begin
      // red pixel: top red bit set, blue and upper green bits clear
      if (PIXEL_IN[7] == 1 && PIXEL_IN[1:0] == 0 && PIXEL_IN[4:3] == 0) begin
        red <= red + 1;
      end
      else begin
        red <= red;
      end
      // blue pixel: top blue bit set, upper red and green bits clear
      if (PIXEL_IN[7:6] == 0 && PIXEL_IN[1] == 1 && PIXEL_IN[4:3] == 0) begin
        blue <= blue + 1;
      end
      else begin
        blue <= blue;
      end
    end
  end
end

Calculating the Total Color of the Image

// We are simply using 50% as the threshold: 12672 = (176 x 144) / 2 pixels.
// It will be changed when we do field tests.
always @ (*) begin
  if (red > 12672) begin
    RESULT = 8'b00000001;
  end
  else if (blue > 12672) begin
    RESULT = 8'b00000010;
  end
  else begin
    RESULT = 8'b0;
  end
end

We connected the output of color detection to the LEDs on the FPGA. The right 4 LEDs light up when the image is red and the left 4 LEDs light up when the image is blue.
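
The hookup to the LEDs is a single continuous assignment along these lines (a sketch assuming the DE0-Nano's 8-bit LED port and the RESULT encoding from the module above):

  // Drive the DE0-Nano LEDs from the color-detection result:
  // right four LEDs for red (RESULT == 1), left four for blue (RESULT == 2).
  assign LED = (RESULT == 8'b00000001) ? 8'b00001111 :
               (RESULT == 8'b00000010) ? 8'b11110000 :
                                         8'b00000000;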

Detecting Red:

Detecting Blue:

Detecting Red Background with Blue Cross:

Detecting Blue Background with Red Cross:

Detecting No Color (White):

Detecting No Color (Purple):

The Control Unit and Downsampler

We set up our CONTROL_UNIT module to read in a stream of pixel data, down-sample the data to an 8-bit format, and store it in memory. The module definition is shown below:

module CONTROL_UNIT (
  CLK,//the PCLK output from the camera, set using the 24 MHz clock (c0_sig) from the PLL. 
  HREF,//Input HREF from the camera to indicate when row data is being sent
  VSYNC,//Indicates frame reset
  input_data,//8-bit input from the camera[D7-D0]
  output_data,//8-bit output to be sent to memory
  X_ADDR,//pixel x-address where [output_data] should be stored in memory
  Y_ADDR,//pixel y-address where [output_data] should be stored in memory
  w_en//output that enables the M9K block to write [output_data] to the address indicated by [X_ADDR] and [Y_ADDR]
);

Because the camera sends 16 bits of data per pixel (when using RGB565, RGB555, or RGB444), we needed a way to read pixel data over two clock cycles and down-sample it to 8 bits for storage in memory. Our module alternates between writing data to the 8-bit registers part1 and part2, then combines these parts into the 16-bit value {part1,part2} to be down-sampled and written to memory once both parts have been sent by the camera.

always @ (posedge CLK) begin
  if (HREF)//row data is sent only when HREF is high
  begin
    if (write == 0)//write data to part1
    begin
      part1 <= input_data;
      write <= 1; 
      w_en <= 0;
      X_ADDR <= X_ADDR;
    end
    else//write data to part2 and store output to memory
    begin
      part2 <= input_data;
      write <= 0;
      w_en <= 1;//enable writing to memory
      X_ADDR <= X_ADDR+1;//update pixel x address
    end
  end
  else//no row data is being sent
  begin
     w_en <= 0;
     write <= 0;
     X_ADDR <= 0;
  end
end

We use the camera's HREF and VSYNC signals to determine when to read data from each pixel to store in memory. As shown above, HREF is high when row data is being sent, and HREF goes low between rows. Thus, we use the negative edge of HREF to update the pixel y-address Y_ADDR. VSYNC goes high to indicate the start of a new frame, so we use the positive edge of VSYNC to reset Y_ADDR.

always @ (posedge VSYNC, negedge HREF) begin
  if (VSYNC) begin
    Y_ADDR <= 0;
  end
  else begin
    Y_ADDR <= Y_ADDR + 1;
  end
end

To convert the 16-bit camera data to 8 bits for storage in memory, we put the data read from the camera, {part1, part2}, through a downsampler. This downsampler took the most significant bits of each color channel to construct an 8-bit RGB332 value (3 bits red, 3 bits green, 2 bits blue).
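
For RGB565 input, for example, the down-sampling is a simple bit-select; this is a sketch of the idea rather than our exact code:

  // Sketch: down-sample a 16-bit RGB565 pixel to 8-bit RGB332 by keeping
  // the most significant bits of each channel.
  // RGB565 layout: {R[4:0], G[5:0], B[4:0]} in pixel_16[15:0].
  wire [15:0] pixel_16 = {part1, part2};
  wire [7:0]  pixel_8  = {pixel_16[15:13], // top 3 bits of red
                          pixel_16[10:8],  // top 3 bits of green
                          pixel_16[4:3]};  // top 2 bits of blue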

OV7670 Color Formats:

Our Initial Downsampler:

We started off by writing some test images to memory, sending sample data from our simulator through our CONTROL_UNIT module for each pixel of the 176 x 144 image. Connecting our FPGA to the computer screen via our VGA adaptor, we were able to see the shapes we created, trying out various options:

Sampling From the Camera:

Arduino-Camera-FPGA Setup:


For a frustratingly long time, we attempted to read in RGB565 data from the camera. We were able to get an image from the camera, but the colors were all jumbled. Not good if you're trying to detect certain colors! We initially thought this was due to the camera sending us the wrong color format, but we found no camera setting that corrected the error. When we tried RGB444, however, we began to see an image with more correct color output. We noticed that the expected byte order was swapped (giving us GB,xR rather than xR,GB as expected), but this was easily corrected by switching part1 and part2 in the control unit.
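
With the byte order corrected, the RGB444 case is again a bit-select. The sketch below assumes the xRGB ordering from the datasheet, with the swap expressed by reversing the byte order; our actual code may have differed in the details:

  // Sketch: down-sample a 16-bit RGB444 pixel (xRGB ordering) to RGB332.
  // After the swap, the word is {4'bx, R[3:0], G[3:0], B[3:0]}.
  wire [15:0] pixel_16 = {part2, part1}; // byte order reversed vs. RGB565
  wire [7:0]  pixel_8  = {pixel_16[11:9], // top 3 bits of red
                          pixel_16[7:5],  // top 3 bits of green
                          pixel_16[3:2]}; // top 2 bits of blue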

Using the camera color bar test, we noticed that most of the colors were close to accurate except the last two:

Reference Color Bar

Actual Color Bar:


Notably, the second-to-last color was orange instead of dark red, and the last color bar was green when it should have been black. The color bar test suggested that we were receiving excess amounts of green and red in our image. Viewing the camera feed confirmed this suspicion, as the entire image was saturated with green. We found that the second-most significant green bit (G2) in particular seemed to trigger much more often than it should. Therefore, we removed it from the downsampler. After doing this, we still noticed a lot of red in the image, so we removed the second-most significant bit of red (R1) from the downsampler as well. The resulting image was dark, but we did start to see colors correctly.
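
In terms of the RGB444 sketch above, one plausible reading of this change (our exact bit choices are a reconstruction) is to drop the named bits and take the next lower bits of each channel instead:

  // Sketch of the modified downsampler, assuming the RGB444 layout above:
  // the noisy G2 and R1 bits are dropped in favor of lower bits.
  wire [7:0] pixel_8 = {pixel_16[11:10], pixel_16[8], // red: R3, R2, R0 (R1 dropped)
                        pixel_16[7], pixel_16[5:4],   // green: G3, G1, G0 (G2 dropped)
                        pixel_16[3:2]};               // blue: B3, B2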

Note: After finding an error in our data wiring from the camera to the FPGA, we were able to use the ideal down-sampling method to get a clearer image (see Milestone 4).

Red Saturation:


With Modified Downsampler:


With the resulting solution, we were able to easily distinguish red on a white background, and somewhat distinguish blue. The blue treasure must be directly illuminated in order to be visible on the camera feed, suggesting that the current camera setup may not be sensitive enough to blue.

Demonstration of Color Detection from Camera Feed: