Document Type
Article
Publication Date
12-2021
Abstract
Camera traps provide a low-cost approach to collecting data and monitoring wildlife across large scales, but hand-labeling images at a rate that outpaces their accumulation is difficult. Deep learning, a subdiscipline of machine learning and computer science, has been shown to classify camera trap images automatically with a high degree of accuracy. This technique, however, may be less accessible to ecologists and to small-scale conservation projects, and it has serious limitations. In this study, a simple deep learning model was trained on a dataset of 120,000 images to identify the presence of nilgai Boselaphus tragocamelus, a regionally specific non-native game animal, in camera trap images with an overall accuracy of 97%. A second model was trained to identify 20 groups of animals and one group of images without any animals present, labeled as “none”, with an accuracy of 89%. Lastly, the multigroup model was tested on images of similar species collected in the southwestern United States and yielded significantly lower precision and recall for each group. This study highlights the potential of deep learning for automating camera trap image processing workflows, provides a brief overview of image-based deep learning, and discusses the often-understated limitations and methodological considerations in the context of wildlife conservation and species monitoring.
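The record does not include code, but as a rough illustration of the kind of workflow the abstract describes (training an image classifier to detect nilgai presence), the sketch below fine-tunes a pretrained convolutional network on a two-class (nilgai vs. none) image folder. The folder layout, backbone choice, and hyperparameters are hypothetical assumptions for illustration, not the authors' implementation.

    # Minimal transfer-learning sketch for a two-class camera-trap classifier
    # (nilgai vs. none). Illustrative only; paths and hyperparameters are
    # hypothetical and do not reproduce the study's model.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Standard ImageNet-style preprocessing for camera-trap frames.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Assumes a folder layout like camera_trap/train/{nilgai,none}/*.jpg.
    train_set = datasets.ImageFolder("camera_trap/train", transform=preprocess)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

    # Start from a pretrained backbone and replace the final layer with a
    # two-way classification head.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model = model.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Short fine-tuning loop; a real project would add a validation split,
    # early stopping, and per-class precision/recall reporting.
    model.train()
    for epoch in range(5):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch + 1}: loss {loss.item():.4f}")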
Recommended Citation
Kutugata, Matthew, Jeremy Baumgardt, John A. Goolsby, and Alexis E. Racelis. "Automatic camera-trap classification using wildlife-specific deep learning in Nilgai management." Journal of Fish and Wildlife Management 12, no. 2 (2021): 412-421. https://doi.org/10.3996/JFWM-20-076
Publication Title
Journal of Fish and Wildlife Management
DOI
10.3996/JFWM-20-076
Comments
Public Domain