Theses and Dissertations
Date of Award
5-2025
Document Type
Thesis
Degree Name
Master of Science (MS)
Department
Electrical Engineering
First Advisor
Ping Xu
Second Advisor
Weidong Kuang
Third Advisor
Haoteng Tang
Abstract
Federated Learning (FL) has emerged as a privacy-preserving paradigm that allows multiple clients to collaboratively train a machine learning model without sharing raw data. However, traditional FL relies on a central server for model aggregation, which introduces a single point of failure and leaves the system vulnerable to server-side attacks or outages. To address these limitations, Decentralized Federated Learning (DFL) has been proposed, eliminating the need for a central server and enhancing system resilience. Despite these advantages, DFL faces critical challenges related to fairness and robustness, especially under non-i.i.d. data distributions and adversarial conditions. In this thesis, we propose a unified DReweighting Aggregation Framework that addresses a variety of objectives in DFL through a general reweighting mechanism. Within this framework, we develop two core algorithms. The first, DFedReweighting, promotes both client-level and group-level fairness by dynamically adjusting each client's aggregation weights based on the received models' performance on local sample data, thereby mitigating performance disparities among clients. The second, Local Performance Evaluation with Temperature-scaled Softmax Reweighting (LPE-TSR), enhances Byzantine robustness by assigning lower weights to malicious or inconsistent updates, effectively neutralizing their impact on the global model.
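As a rough illustration of the reweighting mechanism described above, the Python sketch below shows how a client might score each received model on its local sample data and convert those scores into aggregation weights via a temperature-scaled softmax, in the spirit of LPE-TSR. All function names here are hypothetical, not the thesis's actual implementation.

import numpy as np

def temperature_scaled_softmax(losses, temperature=0.5):
    # Lower loss -> higher weight; the temperature controls how sharply
    # the weights concentrate on the best-performing received models.
    scores = -np.asarray(losses) / temperature
    scores -= scores.max()                 # numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

def reweighted_aggregate(neighbor_models, losses, temperature=0.5):
    # neighbor_models: list of flat parameter vectors received from neighbors
    # losses: each model's loss on this client's local sample data
    weights = temperature_scaled_softmax(losses, temperature)
    stacked = np.stack(neighbor_models)    # shape (n_neighbors, n_params)
    return weights @ stacked               # weighted average of the models

A small temperature sharply suppresses poorly performing (possibly malicious) updates, matching the robustness objective, while a larger temperature yields more uniform weights, closer to the fairness-oriented reweighting in DFedReweighting.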
To further improve resilience, we introduce DB-Robust DGSG, a novel algorithm that simultaneously addresses both distributional shifts and Byzantine attacks. DB-Robust DGSG integrates a plug-in Byzantine-robust aggregation module and employs distributed Wasserstein distributionally robust optimization to adapt to changing data distributions while maintaining model integrity. We conduct extensive simulation experiments across a wide range of settings, including multiple datasets (MNIST, Fashion-MNIST), varied attack scenarios, and different levels of data heterogeneity. The results consistently demonstrate that the proposed framework and algorithms significantly enhance fairness and robustness in DFL. This work contributes to the practical deployment of DFL in real-world, adversarial, and heterogeneous environments by offering scalable and adaptive solutions to persistent challenges in decentralized learning.
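The abstract does not specify which aggregation rule the plug-in Byzantine-robust module uses; a coordinate-wise trimmed mean is one standard choice, sketched below under that assumption (names are hypothetical).

import numpy as np

def trimmed_mean_aggregate(neighbor_models, trim_ratio=0.2):
    # Coordinate-wise trimmed mean: for every parameter, drop the k largest
    # and k smallest values across neighbors before averaging, which bounds
    # the influence any single Byzantine update can have on the result.
    stacked = np.stack(neighbor_models)    # shape (n_neighbors, n_params)
    n = stacked.shape[0]
    k = int(n * trim_ratio)
    assert 2 * k < n, "trim_ratio too large for the number of neighbors"
    sorted_params = np.sort(stacked, axis=0)   # sort each coordinate
    kept = sorted_params[k:n - k]              # discard k extremes per side
    return kept.mean(axis=0)

Because the module is a plug-in, any such rule slots into the aggregation step independently of the distributionally robust optimization of the local objective, which is what lets DB-Robust DGSG handle Byzantine updates and distributional shifts at the same time.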
Recommended Citation
Zhang, K. (2025). Fairness and Robustness in Decentralized Federated Learning [Master's thesis, The University of Texas Rio Grande Valley]. ScholarWorks @ UTRGV. https://scholarworks.utrgv.edu/etd/1691

Comments
Copyright 2025 Kaichuang Zhang. https://proquest.com/docview/3240627115