written by Josh Vazquez

Much of what Team GUARDIAN focuses on for competitions relies on the onboard vision system. The system consists of a Canon Rebel SL1 digital SLR camera, a Raspberry Pi single-board computer, and image analysis software on the ground; together, the aircraft and ground station produce an accurate composite image of the ground. This composite is fully georeferenced, meaning that each point in the image is associated with a GPS coordinate. Using the image, we can perform tasks like quantifying the physical area occupied by a given polygonal region.

The first step in creating the composite is to capture imagery. The aircraft photographs the ground at regular intervals, using a gimbal to keep the camera pointed as close to straight down as possible. The closer to perpendicular the camera is, the less post-processing is needed and the less distortion there will be in the final result. Photos are sent to the ground station as they are taken, tagged with attitude (pitch, roll, and yaw), altitude, and GPS information.

The next step is to flatten each image. Consider a projector turned slightly to the side or tilted up: without correction, the projection will be trapezoidal instead of rectangular, and if the projector is turned both to the side and up, the projection will be kite-shaped. The same thing happens when photographing the ground with a camera that is not facing straight down; the rectangular image actually captures a non-rectangular area of the Earth. Based on the attitude of the aircraft, we can determine the extent of this distortion and calculate the GPS coordinates of the four corners of the image. Using these coordinates, the rectangular image is warped onto the trapezoid- or kite-shaped area of the Earth it actually represents. The image is then ready to be composited with the others.
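The corner calculation above can be sketched with a little trigonometry. This is a simplified model, not the team's actual code: it treats pitch and roll as independent angular offsets to the camera's half-field-of-view and intersects each corner ray with flat ground, ignoring yaw (which is just a 2-D rotation of the result). The field-of-view defaults are rough guesses for an SL1 kit lens, not measured values.

```python
import math

def ground_corners(altitude_m, pitch_deg, roll_deg, hfov_deg=57.0, vfov_deg=40.0):
    """Project the four image corners onto flat ground below the aircraft.

    Returns four (east, north) offsets in metres from the point directly
    beneath the camera, one per image corner. Assumes small attitude
    angles and level terrain; hfov/vfov are assumed lens values.
    """
    half_h = math.radians(hfov_deg) / 2.0
    half_v = math.radians(vfov_deg) / 2.0
    pitch = math.radians(pitch_deg)
    roll = math.radians(roll_deg)
    corners = []
    for sx, sy in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
        # Each corner ray leaves the camera at the half-FOV angles,
        # tilted by roll (side-to-side) and pitch (fore-aft).
        ang_east = sx * half_h + roll
        ang_north = sy * half_v + pitch
        # Intersect the tilted ray with the ground plane.
        corners.append((altitude_m * math.tan(ang_east),
                        altitude_m * math.tan(ang_north)))
    return corners
```

With the aircraft level, the four offsets form a symmetric rectangle; a nonzero roll or pitch skews them into the trapezoid or kite shape described above. Converting the metre offsets to GPS coordinates and warping the image onto that quadrilateral (for example with a perspective transform) completes the flattening step.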
Using geographic information system (GIS) software, processed images are placed on a grid in the correct locations based on the attached GPS information. This is done in real time as images arrive, so the composite improves over time. Where images overlap, the ideal behavior is for lower-altitude images to cover higher-altitude ones, since photos taken closer to the ground offer greater detail over a smaller area. Blending and intelligent feature analysis are future goals that would improve accuracy in overlapping areas and make the composite look as if it were a single piece.
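The overlap rule above amounts to a painter's algorithm: draw high-altitude tiles first so that lower-altitude, more detailed tiles land on top. A minimal sketch, with illustrative names rather than the team's actual data structures:

```python
from dataclasses import dataclass

@dataclass
class Tile:
    """One georeferenced image tile (fields are hypothetical)."""
    name: str
    altitude_m: float

def draw_order(tiles):
    """Painter's algorithm for the composite: sort tiles from highest
    altitude to lowest, so later (lower, more detailed) tiles are
    painted over earlier ones wherever they overlap."""
    return sorted(tiles, key=lambda t: t.altitude_m, reverse=True)
```

A renderer would then iterate the returned list and blit each tile at its GPS-derived grid position; because the list is re-sorted as new photos arrive, a fresh low pass over an area automatically replaces older high-altitude coverage there.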
The composite can now be analyzed to determine target locations or the physical area of a region. Development of these systems is ongoing, and we continue to improve them to ensure high-quality composites. The image to the right is an early look at our implementation, using sample imagery provided by DroneMapper.
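Because every point in the composite maps to a GPS coordinate, measuring the area of a polygonal region reduces to the shoelace formula applied to locally projected points. A sketch under stated assumptions (small, survey-sized polygons, so a flat tangent-plane projection is adequate; the function name is ours, not from the team's software):

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, metres

def polygon_area_m2(latlon):
    """Approximate area in square metres of a small polygon given as
    (lat, lon) vertices in degrees, in order around the boundary.

    Projects vertices onto a local tangent plane, then applies the
    shoelace formula. Good for regions a few kilometres across.
    """
    lat0 = math.radians(sum(p[0] for p in latlon) / len(latlon))
    pts = [(math.radians(lon) * EARTH_RADius_M * math.cos(lat0)
            if False else
            math.radians(lon) * EARTH_RADIUS_M * math.cos(lat0),
            math.radians(lat) * EARTH_RADIUS_M)
           for lat, lon in latlon]
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1  # shoelace cross-products
    return abs(area) / 2.0
```

For example, a square roughly 0.001 degrees on a side near the equator (about 111 m per side) comes out near 12,400 square metres, matching the expected side-length product.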