Exploiting Radar Data Domains for Classification with Spatially Distributed Nodes
Abstract
Recognition of continuous human activities with unconstrained movement directions is investigated using multiple spatially distributed radar nodes, since with a single node activities can occur at unfavourable aspect angles or from occluded perspectives. Such networks are advantageous not only for this reason, but also for larger monitored surveillance areas that may require more than one sensor. In particular, a distributed network can exhibit significant differences in signature between the nodes when targets are located at long distances and at different aspect angles. Radar data can be represented in various domains; a widely used domain for Human Activity Recognition (HAR) is the micro-Doppler spectrogram. However, other domains, such as the Range-Time or Range-Doppler domain, may yield better classification performance or be better suited to low-cost hardware with limited computational resources. An open question is how to take advantage of the diversity of information extractable from these data domains, as well as from the different distributed radar nodes that simultaneously observe a surveillance area. To this end, data fusion techniques can be applied both at the level of the data representations within each radar node and across the different nodes in the network. The proposed methods of decision fusion, where typically one classifier operates on each node, and feature fusion, where the data are concatenated before being fed to a single classifier, are investigated for continuous sequence classification, a more naturalistic and realistic way of classifying human movements, while also accounting for inherent imbalances in the dataset.
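The two fusion strategies named above can be sketched in a minimal, self-contained example. This is an illustrative assumption of how such a pipeline might look, not the paper's actual implementation: the simulated node data, the toy nearest-centroid classifier, and all array shapes are hypothetical stand-ins for real per-node radar features (e.g. flattened micro-Doppler or Range-Doppler descriptors).

```python
import numpy as np

# Hypothetical setup: N_NODES radar nodes each produce one feature vector per
# observation window; all nodes share the same activity labels.
rng = np.random.default_rng(0)
N_NODES, N_SAMPLES, N_FEATURES, N_CLASSES = 3, 60, 8, 2

labels = rng.integers(0, N_CLASSES, N_SAMPLES)
# Class-dependent mean shift gives the toy data a learnable structure.
node_data = [rng.normal(labels[:, None].astype(float), 1.0,
                        (N_SAMPLES, N_FEATURES))
             for _ in range(N_NODES)]

def fit_centroids(X, y):
    """Minimal nearest-centroid classifier: one mean vector per class."""
    return np.stack([X[y == c].mean(axis=0) for c in range(N_CLASSES)])

def predict(centroids, X):
    """Assign each sample to the class with the closest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Decision fusion: one classifier per node, then a majority vote across
# the per-node label decisions.
votes = np.stack([predict(fit_centroids(X, labels), X) for X in node_data])
decision_fused = np.array([np.bincount(v, minlength=N_CLASSES).argmax()
                           for v in votes.T])

# Feature fusion: concatenate the node features first, then train one
# single classifier on the joint representation.
X_cat = np.concatenate(node_data, axis=1)
feature_fused = predict(fit_centroids(X_cat, labels), X_cat)

print("decision-fusion accuracy:", (decision_fused == labels).mean())
print("feature-fusion accuracy:", (feature_fused == labels).mean())
```

The design difference is where the information is merged: decision fusion keeps the nodes independent until the final vote, which suits low-bandwidth links between nodes, while feature fusion lets one classifier exploit cross-node correlations at the cost of transmitting raw features to a central point.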