Grid Eye

I am new here, and I want to ask about the Grid-EYE infrared array sensor evaluation kit and the Grid-EYE sensor.

What type of output will I receive from them? Can I build my own application on top of the sensor's output? Does it need a processor, or can it communicate directly with my application? Do you have any examples? Thanks.

Hello @chow,

Most of the Grid Eye sensors that I am familiar with have an I2C interface. If you are not familiar with it, I would recommend looking at the 1528-2409-ND from Adafruit. This uses the AMG8833, and Adafruit has some good documentation for getting it up and running.

Adafruit AMG8833 Getting Started Guide.

-Robert

Thanks for the reply. I would like to ask whether it is better to use the Adafruit AMG8833 8x8 Thermal Camera Sensor alone or the Grid-EYE Infrared (IR) Array Sensor Evaluation Kit.

In my case, I need to build further functionality into my own application.

But I have no idea which type of sensor is more suitable for me, or what output I will get from the sensor.

I hope someone can explain these two to me.

Thanks

Hello @chow,

The Adafruit board is nice for someone who is looking to get a board and use it for a project. If you are looking to evaluate the sensor to be implemented in a larger application, then I would suggest the manufacturer evaluation board, Panasonic part AMG8834EVAL.

The board functions in a standalone mode, or it can be connected to an Arduino host board if you want. We also have a highlight on this board here: https://www.digikey.com/catalog/en/partgroup/grid-eye-infrared-ir-array-sensor-evaluation-kit/76212?mpart=AMG8834EVAL&vendor=10

-Robert

Hi, I will try to study both of them. Am I right that both the Adafruit Grid Eye and the Grid-EYE® Infrared Array Sensor Evaluation Kit produce temperature output?

For the Grid-EYE® Infrared Array Sensor Evaluation Kit, I can use it to build toward my own application, am I right? Can I request the steps for doing so?

I also saw that the API of the Grid-EYE® Infrared Array Sensor Evaluation Kit has Level 2 and Level 3, and I would like to know more about that.

  1. What is the difference between the three levels?
  2. Can I use only Level 1?
  3. Can I connect it to my application? If yes, are there any special things I need to take care of?
  4. Do you have any source code for it? If yes, where can I request it?

Thanks

I wrote quite a lot in one message; I hope you are able to reply with all the information. Thanks.

Hi Chow,

You ask about whether you can use the Grid-Eye in your application, but you have not described what your application is. The Grid-Eye sensor simply measures temperature on each of its 64 pixels and outputs those values on an I2C serial bus when commanded to do so by anything that can communicate on an I2C serial bus. Most microcontrollers and embedded processor boards, such as the Raspberry Pi or Arduino-based boards, can communicate with an I2C serial bus.

If your physical application can communicate via I2C, then it can talk directly to the Grid-Eye. Otherwise, it will require an intermediate device to communicate with the Grid-Eye which can then pass on the data in some form of communication that your “application” can understand.
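To make the I2C exchange above concrete, here is a minimal, untested sketch of reading one frame of raw pixel data from a Grid-Eye on a Raspberry Pi. The address (0x69) and pixel register base (0x80) are taken from the AMG88xx datasheet; the `bus` argument is assumed to be any object with an SMBus-style `read_i2c_block_data()` method, such as `smbus2.SMBus`:

```python
GRID_EYE_ADDR = 0x69  # default AMG88xx I2C address (0x68 if AD_SELECT is tied low)
PIXEL_BASE = 0x80     # first pixel data register (T01L) per the datasheet

def read_frame(bus):
    """Read all 64 pixels (128 bytes) as (low byte, high byte) pairs."""
    raw = []
    for offset in range(0, 128, 32):  # SMBus block reads are capped at 32 bytes
        raw += bus.read_i2c_block_data(GRID_EYE_ADDR, PIXEL_BASE + offset, 32)
    return [(raw[i], raw[i + 1]) for i in range(0, 128, 2)]

if __name__ == "__main__":
    from smbus2 import SMBus  # pip install smbus2
    with SMBus(1) as bus:     # I2C bus 1 on a Raspberry Pi
        print(read_frame(bus))
```

This is just to illustrate the shape of the transaction; Adafruit's CircuitPython library wraps the same reads behind a `.pixels` property if you would rather not touch registers directly.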

Regarding your API questions, there are several examples available which have been written to communicate with the Grid-Eye using various languages. I believe you may have been referring to the Panasonic-written API when you mention Level 2 and Level 3. That API is written for the Atmel ATSAMD21 microcontroller, which is mounted on the Panasonic Grid-Eye evaluation board. As I understand it:

Level 1 APIs simply control communication with the Grid-Eye, receive the data from it, calculate temperatures from the data, and format that data on the ATSAMD21.

Level 2 APIs implement filtering of the data and provide functions for image processing, object detection, and human body recognition, all on the ATSAMD21.

Level 3 APIs implement functions for object detection and object tracking on the ATSAMD21.

You can use whichever level suits your needs. It depends on whether you want the eval board to do more of the processing, or whatever device your “application” uses for processing. If you use Level 1, your application will have to do pretty much all of the processing; in that case, it simply receives temperature values from the board. Using Level 2 or 3 will give more “intelligent” information from the board, as described above.

Other examples come from Adafruit, which provides code for their Arduino-based Metro boards and Python code for the Raspberry Pi.

I am planning to do something like crowd monitoring. The installation area will be an open area, but not exposed to direct sunlight. I do not have a clear idea of what type of output I will get from the sensor. I am thinking it may be the temperature of the surroundings, or perhaps an infrared image.

For example, if there are people on the left side of the area, only the left side will send high-temperature output data. Or the thermal image of the area will show red on the left side while the other parts remain blue due to the lower temperature.

I am still not sure which type of sensor is more suitable for me, or whether the output of the sensor is temperature data or an image.

Second, about the API: how do I get the different levels of the API? Is it determined by the connection, or do I actually receive all of the levels and use conditional code to select the level I need?

Thanks

Another question I would like to ask: if I use the Adafruit Grid Eye with a Raspberry Pi and want to carry out some machine learning, is it better to collect temperature data or to convert it to image data?

If I need to collect image data, must I use Pygame? May I know why? I am new to the Raspberry Pi, so I hope you can help me. If possible, I would like to display the image on my desktop rather than on a Raspberry Pi screen such as the PiTFT.
If I don't use Pygame, is there any other way for me to collect image data?

I wrote quite a lot in one message again; I hope you are able to reply with all the information again. Thanks.

Hi Chow,

The sensor will give temperature data for each pixel of the 8 x 8 array in a 2-byte format. The pixels are infrared sensors which measure an averaged temperature value that each pixel sees within its respective field of view. The full field of view of the entire sensor array is about 60 degrees both vertically and horizontally, so each pixel’s field of view is roughly 7.5 degrees horizontally and vertically. If a person were to stand in the field of view of one or more pixels, then those pixels would measure a higher temperature than the surrounding pixels, assuming the background temperature was cooler than the person. A host device, such as an Arduino or Raspberry Pi, sends the appropriate command to the Grid-Eye (the I2C slave), and the Grid-Eye sends the pixel data for all 64 pixels back to the host. It takes a total of 135 bytes to receive a full frame of data.
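The 2-byte pixel format described above can be decoded as follows. This is a small sketch based on the AMG88xx datasheet, which specifies each pixel as a 12-bit two's-complement value with a resolution of 0.25 °C per LSB, transmitted low byte first:

```python
def pixel_to_celsius(lo, hi):
    """Convert a Grid-Eye pixel's two data bytes (low byte first) to °C.

    Each pixel is a 12-bit two's-complement value, 0.25 °C per LSB,
    per the AMG88xx datasheet.
    """
    raw = ((hi & 0x0F) << 8) | lo  # assemble the 12-bit value
    if raw & 0x800:                # sign bit set -> negative temperature
        raw -= 0x1000
    return raw * 0.25
```

For example, bytes `0x64, 0x00` decode to 25.0 °C. Adafruit's library does this conversion for you; the sketch is just to show what the raw output actually is.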


Since you provide the code for the host, you can do whatever you want with that data, including assign a color to each temperature value (typically red is assigned to higher temperatures, but that’s up to you). You could choose to create an 8 x 8 image using those color assignments and display that on whatever display is compatible with the host device, or transmit that to a PC if your host is capable of doing that.
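As one example of the color assignment mentioned above, here is a sketch that maps a temperature linearly onto a blue-to-red gradient. The 20 °C and 32 °C limits are arbitrary display choices for a room-temperature scene, not sensor constants:

```python
def temp_to_rgb(t, t_min=20.0, t_max=32.0):
    """Map a temperature to a simple blue (cold) to red (hot) color.

    t_min/t_max are display limits chosen for illustration; values
    outside the range are clamped.
    """
    x = max(0.0, min(1.0, (t - t_min) / (t_max - t_min)))
    return (int(255 * x), 0, int(255 * (1 - x)))  # (R, G, B)
```

Applying this to all 64 temperature values gives you the 8 x 8 false-color image; any colormap (or a library one, such as matplotlib's) works the same way.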

If you get the AMG8834EVAL kit and use the software they provide, the output of the kit’s host (an Atmel/Microchip ATSAMD21) can be sent to a PC and displayed within a GUI. As for your question about the three Levels, according to what I read, all three levels are included in the microcontroller firmware, and you just select which you want to use. See the video below for a demo of the Panasonic eval board:

Regarding your questions about the Adafruit Grid-Eye with Raspberry Pi, you will have to decide whether you convert the temperature data to video or not, depending on what you are trying to do. If you intend to display the field of view, you will obviously have to create an image at some point, and I would assume most image processing algorithms require an array of data to work with.
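Since the sensor's output is already a plain 8 x 8 array of numbers, it can be fed straight into image-processing routines. A common first step is interpolating it up to a larger grid for display; here is a pure-Python bilinear-interpolation sketch just to illustrate the idea (in practice a library routine such as `scipy.ndimage.zoom` would be the usual choice):

```python
def upscale(frame, factor=4):
    """Bilinearly interpolate a square frame to factor-times the size.

    A pure-Python illustration; corner values of the input grid are
    preserved at the corners of the output grid.
    """
    n = len(frame)
    m = n * factor
    out = []
    for r in range(m):
        y = r * (n - 1) / (m - 1)          # map output row into input grid
        y0 = int(y)
        y1 = min(y0 + 1, n - 1)
        fy = y - y0
        row = []
        for c in range(m):
            x = c * (n - 1) / (m - 1)      # map output column into input grid
            x0 = int(x)
            x1 = min(x0 + 1, n - 1)
            fx = x - x0
            top = frame[y0][x0] * (1 - fx) + frame[y0][x1] * fx
            bot = frame[y1][x0] * (1 - fx) + frame[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

An upscaled frame like this is what produces the smooth-looking thermal images you see in the demo videos, rather than 64 blocky squares.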

Using Pygame is not a requirement with the Raspberry Pi, but it might be a useful way to generate an image. I am not familiar with that software. I did find a link to a project here where someone used the Grid-Eye and a Pi and fed the output to a PC. I have not studied it in detail, but he gives the code, so it may well help you along in that regard.
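As one Pygame-free alternative, a frame of temperatures can be written to an ordinary image file that any desktop viewer can open. This sketch uses only the Python standard library to emit a plain-text PGM grayscale image; the 20–32 °C limits are display assumptions, and `frame` is assumed to be an 8 x 8 list of temperatures (e.g. from the Adafruit library's `.pixels`):

```python
def frame_to_pgm(frame, path, t_min=20.0, t_max=32.0):
    """Write an 8x8 temperature frame as a plain-text PGM grayscale image.

    Uses only the standard library -- no Pygame required. t_min/t_max
    are display limits (assumptions; adjust for your scene), and
    temperatures are scaled to 0-255 gray levels with clamping.
    """
    rows = []
    for row in frame:
        vals = [max(0, min(255, int(255 * (t - t_min) / (t_max - t_min))))
                for t in row]
        rows.append(" ".join(str(v) for v in vals))
    with open(path, "w") as f:
        f.write("P2\n8 8\n255\n" + "\n".join(rows) + "\n")
```

Saving a file per frame also doubles as a simple way to collect image data for later machine-learning experiments, without any display attached to the Pi at all.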

Here’s one additional example, using the Sparkfun SEN-14607, which is another board we sell with a Grid-Eye mounted on it. They give a number of basic examples, and then give an example using “Processing” which is a programming language often used to create graphics. They show an image generated with this which is quite nice.
