Unmanned aerial vehicles (UAVs), often referred to as autonomous drones, are becoming increasingly prevalent in daily life. Drones are typically equipped with conventional frame-based cameras and perform tasks such as object detection. However, the high energy consumption of frame-based cameras limits drone endurance, and their hardware constraints cause blurring and deformation when capturing high-speed moving objects. In contrast, event cameras, a recent class of neuromorphic sensors, offer low power consumption, low latency, and sensitivity to high-speed motion, making them well suited for integration into drone platforms. In this thesis, we introduce a method for high-speed moving object recognition based on event cameras. The approach augments objects with externally attached ``common features'': fiducial markers, which are detected by a custom-developed deep learning neural network. These markers can encode messages, such as information about the associated object; detecting a marker is therefore treated as detecting the object itself. We also develop a fiducial marker decoding method based on region segmentation to recover the message content, thereby enabling interaction with the detected object. Evaluation results show that the proposed method achieves a low computing time of 26.9 ms, low resource usage of 670 MB of CPU memory and 750 MB of GPU video memory, and a high accuracy of 77.9%, making it suitable for high-speed object recognition with event cameras.