Eugene is an Assistant Professor at UMass Amherst CICS. His work focuses on security and privacy in emerging AI-based systems and agentic use cases under real-life conditions and attacks.
He completed his PhD at Cornell Tech, advised by Vitaly Shmatikov and Deborah Estrin. Eugene's research has been recognized by the Apple Scholars in AI/ML and Digital Life Initiative fellowships and a USENIX Security Distinguished Paper Award. He received an engineering degree from Baumanka and worked at Cisco as a software engineer. Eugene has extensive industry experience (Cisco, Amazon, Apple) and spends part of his time as a Research Scientist at Google.
Eugene grew up in Tashkent and plays water polo.
Announcement 1: I am looking for PhD students (apply) and postdocs to work on attacks on LLM agents and generative models. Please reach out by email!
Announcement 2: We are running a seminar, CS 692PA, on Privacy and Security for GenAI models. Please sign up if you are interested.
Security: He worked on backdoor attacks in federated learning and proposed the frameworks Backdoors101 and Mithridates, as well as a new attack on generative language models covered by VentureBeat and The Economist. Recent work includes studies of vulnerabilities in multi-modal systems: instruction injections, adversarial illusions, and adding biases to text-to-image models.
Privacy: Eugene worked on the Air Gap privacy protection for LLM agents and on operationalizing Contextual Integrity. He has also worked on aspects of differential privacy, including fairness trade-offs, applications to location heatmaps, and tokenization methods for private federated learning. Additionally, he helped build the Ancile system, which enforces use-based privacy of user data.
Courses
CS 360: Intro to Security, SP'25. Syllabus
CS 692PA: Seminar on Privacy and Security for GenAI models, FA'24, SP'25
Academic Service
Conference Program Committees:
- ACM CCS'24, '23
- IEEE S&P (Oakland) '25