Theses and Dissertations
Date of Award
8-2023
Document Type
Thesis
Degree Name
Master of Science (MS)
Department
Computer Science
First Advisor
Emmett Tomai
Second Advisor
Dong-Chul Kim
Third Advisor
Yifeng Gao
Abstract
The use of deep learning (DL) models for solving classification and recognition problems is expanding rapidly. However, these models are computationally expensive in both time and resources, which imposes an entry barrier for smaller businesses and scientific research projects with limited resources. Many organizations therefore rely on fully outsourced trained models, cloud computing services, pre-trained models available for download, and transfer learning. This ubiquitous adoption of DL has unlocked numerous opportunities but has also introduced potential threats. Among the security threats, backdoor attacks and adversarial attacks have emerged as significant concerns and have attracted considerable research attention in recent years, since they pose a serious threat to the integrity and confidentiality of DL systems and highlight the need for robust security mechanisms to safeguard them. The methodology proposed in this research comprises two primary components: a backdoor attack and an adversarial attack. For the backdoor attack, a Least Significant Bit (LSB) perturbation technique subtly alters image pixels by flipping their least significant bits; extensive experimentation determined that flipping 3 bits strikes an optimal balance between accuracy and covertness. For the adversarial attack, a Pixel Perturbation approach directly manipulates pixel values to maximize misclassifications, with the optimal number of changed pixels found to be 4-5. Experimental evaluations on the MNIST, Fashion-MNIST, and CIFAR-10 datasets showed high attack success rates while maintaining a relatively covert profile. Comparative analyses revealed that the proposed techniques are less perceptible than prior works such as BadNets and the One-Pixel attack.
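To make the two techniques concrete, below are minimal Python sketches of each. They are illustrative reconstructions based only on the abstract, not the author's implementation: the function names, the assumption of 8-bit images stored as NumPy arrays, and the choice of which pixels to perturb are all assumptions. The first sketch shows an LSB trigger: XOR-ing each pixel with a 3-bit mask inverts only the low-order bits, so no pixel changes by more than 7 intensity levels.

```python
import numpy as np

def lsb_trigger(image: np.ndarray, n_bits: int = 3) -> np.ndarray:
    """Flip the n_bits least significant bits of every pixel.

    Assumes an 8-bit image (values 0-255). XOR with a mask of
    n_bits ones inverts exactly those low-order bits, so with
    n_bits=3 each pixel shifts by at most 7 intensity levels:
    visually negligible, yet consistent enough to serve as a
    backdoor trigger when stamped on poisoned training images.
    """
    mask = (1 << n_bits) - 1  # n_bits=3 -> 0b111
    return image.astype(np.uint8) ^ mask
```

The abstract does not describe the search procedure used to choose the 4-5 perturbed pixels, so the second sketch substitutes a naive random search over candidate pixel sets (the One-Pixel attack the thesis compares against uses differential evolution instead); `predict` is an assumed model interface mapping a batch of images to class probabilities.

```python
def pixel_perturbation_attack(predict, image, true_label,
                              n_pixels=5, n_trials=500, seed=0):
    """Search for a few pixel changes that flip the model's prediction.

    Each trial overwrites n_pixels randomly chosen pixel locations
    with random 8-bit values and returns the first candidate the
    model misclassifies, i.e. a small, localized adversarial change.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    for _ in range(n_trials):
        candidate = image.copy()
        ys = rng.integers(0, h, size=n_pixels)
        xs = rng.integers(0, w, size=n_pixels)
        candidate[ys, xs] = rng.integers(0, 256,
                                         size=candidate[ys, xs].shape)
        if predict(candidate[None])[0].argmax() != true_label:
            return candidate  # adversarial example found
    return None  # no misclassification within the trial budget
```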
Recommended Citation
Tauhid, Ashraful, "Invading The Integrity of Deep Learning (DL) Models Using LSB Perturbation & Pixel Manipulation" (2023). Theses and Dissertations. 1303.
https://scholarworks.utrgv.edu/etd/1303
Comments
Copyright 2023 Ashraful Tauhid. All Rights Reserved.
https://go.openathens.net/redirector/utrgv.edu?url=https://www.proquest.com/dissertations-theses/invading-integrity-deep-learning-dl-models-using/docview/2861753584/se-2?accountid=7119