Automatic Camera Trap Classification Using Wildlife-Specific Deep Learning in Nilgai Management
Camera traps provide a low-cost approach to collecting data and monitoring wildlife across large scales, but hand-labeling images at a rate that keeps pace with their accumulation is difficult. Deep learning, a subdiscipline of machine learning and computer science, has been shown to classify camera trap images automatically with a high degree of accuracy. This technique, however, may be less accessible to ecologists and to small-scale conservation projects, and it has serious limitations. In this study, a simple deep learning model was trained on a dataset of 120,000 images to identify the presence of nilgai Boselaphus tragocamelus, a regionally specific non-native game animal, in camera trap images with an overall accuracy of 97%. A second model was trained to identify 20 groups of animals and 1 group of images without any animals present, labeled as “none”, with an accuracy of 89%. Lastly, the multigroup model was tested on images of similar species collected in the southwestern United States, which resulted in significantly lower precision and recall for each group. This study highlights the potential of deep learning for automating camera trap image-processing workflows, provides a brief overview of image-based deep learning, and discusses the often-understated limitations and methodological considerations in the context of wildlife conservation and species monitoring.
Matthew Kutugata, Jeremy Baumgardt, John A. Goolsby, Alexis E. Racelis; Automatic Camera-Trap Classification Using Wildlife-Specific Deep Learning in Nilgai Management. Journal of Fish and Wildlife Management 1 December 2021; 12 (2): 412–421. doi: https://doi.org/10.3996/JFWM-20-076