Bio

I’m a PhD student in Computer Science at the University of Kansas working at the intersection of privacy-preserving machine learning and computational (lensless) imaging. My recent work explores how to train and deploy robust classifiers and image-to-image models (CNNs, GANs, Transformers) while minimizing exposure of sensitive visual data, for example steganographic encodings and defenses that keep signals useful for learning but opaque to humans. I care about methods that are measurable, reproducible, and practical to ship.

Before graduate school I spent about five years in industry as a software engineer and solution architect, building distributed systems with Java/Spring, Spark/Hadoop, Elasticsearch, Docker, and CI/CD on cloud platforms. At KU I collaborate with physicists on real-time, data-intensive workloads, and I have taught and assisted courses in Python, OOP, and Data Structures. I also serve in Nepali student and literature organizations. I’m open to collaborations on trustworthy AI, efficient training, and privacy-constrained imaging or healthcare ML—feel free to reach out.

Research

Privacy & Security in Imaging and Surveillance
Lensless/low-light imaging · Steganography · Robust classification
  • Design privacy-preserving pipelines for lensless cameras that keep human-readable content obfuscated while retaining features needed for ML.
  • Develop image-to-image steganographic encodings (e.g., GAN-based) and defenses that protect sensitive visual information in transit and at rest.
  • Adversarially robust classification for degraded/encoded imagery; evaluation under common corruptions and threat models.
  • End-to-end reproducible benchmarks for comparing privacy vs. utility trade-offs.
Lensless Imaging · GANs · Steganography · Adversarial Robustness · PyTorch
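The steganographic encodings in this line of work are learned (e.g., GAN-based), but the underlying hide-a-signal-in-an-image idea can be illustrated with a classic least-significant-bit toy sketch. This is purely illustrative, with hypothetical helper names, and is not the method used in the research above:

```python
import numpy as np

def lsb_embed(cover, secret_bits):
    """Hide a bit array in the least-significant bits of a cover image."""
    flat = cover.flatten().copy()
    # Clear each pixel's lowest bit, then write one secret bit into it.
    flat[: len(secret_bits)] = (flat[: len(secret_bits)] & 0xFE) | secret_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read the hidden bits back out of the lowest bit plane."""
    return stego.flatten()[:n_bits] & 1

# Toy example: embed 16 random bits into an 8x8 grayscale "image".
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
bits = rng.integers(0, 2, size=16, dtype=np.uint8)
stego = lsb_embed(cover, bits)
recovered = lsb_extract(stego, 16)
```

Each pixel changes by at most one intensity level, so the stego image is visually indistinguishable from the cover; learned encodings pursue the same utility-versus-detectability trade-off with far higher capacity and robustness.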
High-Energy Physics: Real-Time Data Processing
Streaming ML · Trigger systems · GPU/FPGA acceleration
  • Low-latency inference for streaming detector data with trigger-like constraints.
  • Online feature extraction and anomaly detection for rare-event discovery.
  • Scalable data-cleaning and dimensionality-reduction pipelines (Spark/Hadoop/Elasticsearch).
SVD · FPGA · Dimensionality Reduction
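As a minimal sketch of the SVD-based dimensionality reduction used in such pipelines (illustrative NumPy only; production streaming workloads would use randomized or incremental variants, and the function name is hypothetical):

```python
import numpy as np

def svd_reduce(X, k):
    """Project rows of X onto the top-k right singular vectors."""
    # Center the features so the projection captures variance, not the mean.
    Xc = X - X.mean(axis=0)
    # Thin SVD: Xc = U @ diag(S) @ Vt; rows of Vt are principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # shape: (n_samples, k)

# Example: reduce 100 noisy 16-dimensional detector events to 3 components.
rng = np.random.default_rng(0)
events = rng.normal(size=(100, 16))
reduced = svd_reduce(events, k=3)
```

The same projection matrix, computed offline, can be applied as a single matrix multiply at inference time, which is what makes it attractive for low-latency trigger-like settings.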
Large Language Models: Attacks & Defenses
Prompt injection · Jailbreaks · Data leakage · Model extraction
  • Characterize attack surfaces and design guardrails for agentic LLMs.
  • Evaluate membership inference, model inversion, and data leakage risks.
  • Red-teaming workflows and poisoning-resilient evaluation pipelines.
Prompt Injection · Jailbreaks · Privacy Audits

Education

  • PhD in Computer Science, University of Kansas (2020–Present)
  • BE in Electronics & Communication Engineering, Tribhuvan University (2011–2014)

Experience

See the Experience page for full details.

Recent News

  • 2025 — Scheduled talk at AI4EIC
  • 2025 — Paper accepted to ICMLA
  • 2024 — Paper accepted to IEEE MASS (lensless classification)