In numerous distributed-sensing scenarios, such as system monitoring, surveillance, and the Internet of Things (IoT), a vast number of sensors are densely scattered throughout the designated area. This chapter addresses the challenges of target tracking in a sensor network with a large number of sensors. In such networks, where resources like energy and bandwidth are constrained, continuously using all sensors is inefficient, because uninformative sensors consume resources without contributing significantly to the task at hand. Hence, sensor selection, which aims to identify the best subset of sensors in the region during each observation period so that tracking performance remains effective under the given resource limitations, has been investigated extensively in the literature [1–8]. These works select sensor subsets to achieve particular objectives, such as maximizing information gain or minimizing the estimation error in the state estimation of the target. For the problems considered in [2–4, 9], the sensor selection metrics are based on mutual information (MI) and entropy. Conversely, in [5, 6], sensor selection focuses on identifying the sensors that achieve the minimum posterior Cramér–Rao lower bound (PCRLB), where the PCRLB is inversely related to the Fisher information (FI). For quantized sensor measurements, the work in [10] compares these two selection criteria, i.e., MI and PCRLB, noting that the PCRLB-driven approach yields mean square error (MSE) outcomes comparable to MI with substantially …
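For reference, the inverse relationship between the PCRLB and FI mentioned above can be written in its standard form; the notation here ($\mathbf{x}_k$ for the target state at time $k$, $\hat{\mathbf{x}}_k$ for its estimate, and $\mathbf{J}_k$ for the Fisher information matrix) is introduced only for illustration and need not match the symbols used later in the chapter:
\[
\mathbb{E}\!\left[(\hat{\mathbf{x}}_k - \mathbf{x}_k)(\hat{\mathbf{x}}_k - \mathbf{x}_k)^{\top}\right] \;\succeq\; \mathbf{J}_k^{-1},
\]
where $\mathbf{J}_k^{-1}$ is the PCRLB on the estimation error covariance. Consequently, sensor subsets that increase the Fisher information (in the positive semidefinite sense) tighten this lower bound, which is why PCRLB-based selection can equivalently be viewed as FI-maximizing selection.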