
Victor Obarafor

Researcher in Federated Learning and Trustworthy AI (PhD)
https://victorobarafor.com/


Research Focus

  • Federated learning under non-IID data
  • Training instability and failure modes
  • Robust aggregation and optimization
  • Personalization under heterogeneity
  • Geometry of distributed updates

Research Direction

Federated learning systems are typically studied under idealized conditions.

In practice, heterogeneity, drift, and conflicting client updates introduce instability that standard methods do not address.

My work focuses on a central question:

When and why does federated learning fail under real-world conditions?

The goal is to characterize these failure modes and design methods that remain stable in realistic distributed environments.
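In experiments, this kind of real-world heterogeneity is commonly simulated with a Dirichlet label split, where a concentration parameter controls how non-IID the client shards are. A minimal sketch (illustrative only; function and parameter names are my own, not taken from any repository here):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with Dirichlet(alpha) label skew.

    Smaller alpha -> more skewed (more non-IID) client shards.
    Illustrative sketch, not code from any specific repo.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    clients = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Draw per-client proportions for this class and split accordingly.
        props = rng.dirichlet([alpha] * num_clients)
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, shard in zip(clients, np.split(idx, cuts)):
            client.extend(shard.tolist())
    return clients

# Example: 1000 samples, 10 classes, 5 clients, strong skew (alpha = 0.1).
labels = np.repeat(np.arange(10), 100)
shards = dirichlet_partition(labels, num_clients=5, alpha=0.1)
```

With small `alpha`, most clients end up dominated by a few classes, which is exactly the regime where standard averaging starts to misbehave.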


Research Program

This work is organized around three directions:

Robust Federated Learning (Non-IID)
Aggregation under distribution shift and client heterogeneity.

Personalization under Heterogeneity
Client-specific adaptation and its effect on global performance.

Geometry and Instability in Federated LoRA
Training dynamics through update alignment, conflict, and early predictors of failure.
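Update alignment can be made concrete with a simple diagnostic: the mean pairwise cosine similarity of flattened client updates. Values near 1 indicate agreement; values near 0 or below indicate conflicting update directions. This is a generic sketch of the idea, not the specific metric used in any project above:

```python
import numpy as np

def pairwise_alignment(updates):
    """Mean pairwise cosine similarity of flattened client updates.

    Illustrative diagnostic for update conflict; assumes all updates
    have the same shape and nonzero norm.
    """
    U = np.stack([np.asarray(u).ravel() for u in updates])
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    sims = U @ U.T
    n = len(updates)
    # Average only the off-diagonal entries (self-similarity is always 1).
    return sims[~np.eye(n, dtype=bool)].mean()

aligned = pairwise_alignment([np.ones(4), 2.0 * np.ones(4)])   # same direction
conflicting = pairwise_alignment([np.ones(4), -np.ones(4)])    # opposite direction
```

Tracking a statistic like this across rounds is one way to look for early predictors of failure: a steady drop in alignment often precedes visible divergence.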


Selected Work


Perspective

Many approaches optimize for average-case performance.

In realistic settings, systems fail due to:

  • conflicting updates
  • distributional imbalance
  • unstable optimization dynamics

Understanding these behaviors requires focusing on failure modes, not just performance.


Ongoing Work

  • Stability-aware aggregation
  • Early indicators of training collapse
  • Scaling under increasing heterogeneity
  • Geometry-informed optimization
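To illustrate why aggregation choice matters for stability, compare plain averaging with a standard robust baseline, the coordinate-wise median. This is not the Drift-Aware Adaptive Aggregation method from the repository below, just a textbook contrast showing how one drifting client can dominate the mean but not the median:

```python
import numpy as np

def fedavg(updates, weights=None):
    """Weighted mean of client updates (standard FedAvg-style aggregation)."""
    U = np.stack(updates)
    if weights is None:
        weights = np.full(len(updates), 1.0 / len(updates))
    return np.tensordot(weights, U, axes=1)

def coordinate_median(updates):
    """Coordinate-wise median: a common robust baseline that bounds the
    influence of outlying or drifting clients."""
    return np.median(np.stack(updates), axis=0)

# Two well-behaved clients and one drifting client.
updates = [np.array([1.0, 1.0]),
           np.array([1.0, 1.0]),
           np.array([100.0, -100.0])]

mean_agg = fedavg(updates)              # pulled far off by the outlier
median_agg = coordinate_median(updates) # stays at the majority value
```

Stability-aware methods aim to get this kind of robustness adaptively, without discarding the information in legitimate but heterogeneous updates.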

Links

Pinned Repositories

  1. federated-personalization-depth (Python)

     Client-specific personalization depth in federated learning: how much each client should adapt a shared model.

  2. federated-lora-geometry (Python)

     Geometry dynamics and instability in federated LoRA under heterogeneous data distributions (FedGeoX).

  3. robust-federated-learning-noniid (Python)

     Drift-Aware Adaptive Aggregation (DAA) for federated learning on CIFAR-10 under heterogeneous client partitions.