Cara August, Trinity Communications
Pardis Emami-Naeini is concerned about AI users’ privacy — and with good reason.
The assistant professor of Computer Science, Electrical and Computer Engineering and Public Policy focuses much of her research on human-computer interaction and the social aspects of computing. As director of the Duke InSPIre Lab, one of her aims is to understand how people use AI technology and how they perceive the risks of interacting with these tools.
AI systems are constantly learning, absorbing and adapting. Their ability to do so relies on access to large datasets, which are often collected without explicit consent, and that collection and storage of data isn't immune to misuse. With so much of our personal information now readily available online, or even willingly provided directly to AI chatbots, AI can pose significant privacy risks.
Motivated by questions about security and privacy, particularly among vulnerable or underserved communities, Emami-Naeini and her team of researchers — Ph.D. candidates Jessie Cao, Jabari Kwesi and Hiba Laabadli — are working to design protective technologies for AI tools and to provide data that helps inform policy decisions.
We talked with Emami-Naeini to learn more about the scope of privacy concerns related to AI usage, and the work her team is doing to reduce risk.
This article has been edited for length and clarity.
Your research highlights how AI is reshaping human interaction. What is something that you believe people misunderstand about AI?
Emami-Naeini: Many people assume that AI tools are safer than they are, often due to misplaced trust in regulations or technology providers. Another issue is that people are so eager to use new AI-driven technologies that they overlook risks. For example, social robots that interact with humans are becoming popular, yet users rarely question their data collection practices. Privacy concerns that once alarmed users — like Alexa’s always-on listening — are now being accepted in new contexts, simply because the functionality is appealing.
That seems to be a deep societal shift. Do you think AI’s emotional and social roles are growing faster than our ability to regulate them?
Emami-Naeini: Absolutely. The risks evolve as user behavior changes, making it difficult to predict future challenges. While we can’t eliminate all risks, we can improve user education, so people can reason through potential dangers themselves.
What are the main privacy implications for using AI tools like ChatGPT?
Emami-Naeini: For ChatGPT, a lot of users have an account that is linked to their email. If not, someone can use the tool as a guest, but even then they're sharing different types of information that can easily be linked to them. Only a few data points are necessary to create a picture of someone — like who they are as a person, their preferences and interests. A ton can be inferred from that information, including their name, where they live and the resources that are available to them. For example, if someone living in an area with few mental health resources is using ChatGPT for mental health support, they aren't hard to pinpoint. If you're using a linked account, of course, it's much easier.
And what are the dangers on the backend?
Emami-Naeini: So that's a very interesting question. I think you're getting at a concept in security called the threat model, meaning: who is the adversary? What are they after? What are their resources, and how much time do they have to, basically, attack you? There could be different types of adversaries. For example, if your information is leaked, a potential employer could have access to information about your level of anxiety, the things that you're stressed about, maybe depression, things that can carry huge stigma, especially if you belong to specific social demographics. Depending on the stigma tied to the information being leaked, the harm could be very different.

But you can also think about adversaries who are really just after information. For example, a few participants in our recent study were concerned about their abusive partner having access to their ChatGPT information. If they had a shared account, the partner could ask ChatGPT what it had learned about the abuse survivor. There is a multitude of ways for the information to be misused and for different types of harm to occur, depending on who is using this information.
You recently presented a study on the use of chatbots for mental health support at the 2025 USENIX Security Symposium. What did the study’s findings indicate?
Emami-Naeini: First, using a general-purpose chatbot like ChatGPT for mental health support, essentially repurposing a general AI tool for emotional support, differs from using a dedicated mental health chatbot designed for that role. In our study, we interviewed 21 people who were using general-purpose chatbots for this purpose to understand their privacy concerns and risk awareness.
One major misconception was that many participants believed their interactions were protected under HIPAA, like conversations with a therapist would be. That’s not the case.
Another unexpected finding was that some participants with experiences of domestic violence purposefully chose general-purpose chatbots, instead of mental health-specific ones, to avoid raising red flags with their abusers.
Many users also expressed concerns about oversharing and wondered whether it's possible to include embedded tools to help them manage privacy — like a plug-in that flags sensitive information before submission, or a "private mode" that activates automatically. My lab is collaborating with faculty who have expertise in social science and natural language processing (NLP) to explore these solutions.
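To make the idea of a pre-submission flag concrete, here is a minimal sketch in Python of how such a plug-in might scan a draft message for obvious identifiers before it is sent to a chatbot. The patterns, example text and redaction behavior are hypothetical illustrations, not the lab's actual tool, and a real system would need far more sophisticated detection, such as NLP-based recognition of sensitive disclosures rather than simple regular expressions.

```python
# A minimal, hypothetical sketch (not the lab's prototype) of a pre-submission
# privacy check: scan a draft prompt for common identifiers and show the user
# what would be shared before the text goes to a chatbot.
# The categories and regex patterns below are illustrative placeholders.
import re

PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "street address": re.compile(r"\b\d{1,5}\s+\w+(?:\s\w+)*\s(?:St|Ave|Rd|Blvd|Lane|Dr)\b", re.IGNORECASE),
}


def flag_sensitive(prompt):
    """Return a list of (category, matched_text) pairs found in the prompt."""
    findings = []
    for category, pattern in PATTERNS.items():
        for match in pattern.finditer(prompt):
            findings.append((category, match.group()))
    return findings


def redact(prompt, findings):
    """Replace each flagged snippet with a placeholder before submission."""
    for _, text in findings:
        prompt = prompt.replace(text, "[REDACTED]")
    return prompt


if __name__ == "__main__":
    # Illustrative draft message a user might be about to send to a chatbot.
    draft = "I've been struggling lately. Reach me at jane.doe@example.com or 919-555-0100."
    findings = flag_sensitive(draft)
    for category, text in findings:
        print(f"Flagged {category}: {text}")
    print("Redacted draft:", redact(draft, findings))
```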
Only a few mental health chatbots are regulated or audited by the FDA. How does that affect HIPAA protections for users?
Emami-Naeini: Unless it's a diagnostic tool, for example something that is more medical, AI tools don't have to go through FDA approval. Frankly, companies don't want to say that their tool is being used to diagnose people, even for something like anxiety, because there is a lot of liability.
Are there interdisciplinary collaborations involved in this work?
Emami-Naeini: Yes! We are working closely with the Duke School of Medicine on a project focused on patient-facing transparency in healthcare AI. We’re investigating what information patients need to trust AI-driven medical tools. Many patients, particularly from marginalized communities, are wary of these systems but don’t always know what questions to ask to determine if a system can be trusted or not. We aim to design transparency interfaces that empower patients to ask the right questions, and help doctors better explain AI-driven decisions.
Is the desired impact of your research for people to better understand that the information that ‘goes out into the world’ when they use AI is not necessarily anonymous and can be misused, with possibly threatening consequences?
Emami-Naeini: Yes, definitely. But that's just part of it. We’re really not trying to put the blame on the user because all of our study participants also really benefit from the interaction. For example, we had participants who shared that they were struggling and in a really bad situation, and ChatGPT helped them and provided comfort and affirmation.
AI and generative chatbots have a lot of benefits, so we cannot say, “don't use it.” It is useful and the alternative might be much more harmful. But users should be able to think about potential harms to privacy.
Another contribution of this work, which my Ph.D. student Jabari Kwesi is continuing, is to design security and privacy features that can protect the user. Our envisioned tool would give users "informed agency" by disclosing the privacy information they need to make protective decisions about what they're okay sharing with AI tools, and how, before they engage with the technology in a high-stress situation.
When someone is in crisis — that’s probably not the best time to consider privacy and security concerns.
Emami-Naeini: Exactly, and it might be very harmful. But what if you ask them when they're a bit more relaxed, when they're using ChatGPT for the first time, or at a moment when they have time to think about privacy and more fully consider what they are looking for? We are trying to understand what types of privacy controls people expect to be provided, and at which stage of the interaction it makes the most sense to evaluate those needs.
What are some of the challenges you’re facing with developing user protection prototypes?
Emami-Naeini: One challenge with these types of tools is making sure the quality of the output is something users actually want. For example, let's say people are concerned about oversharing information in stressful situations. If the protection removes a lot of information from the interaction, the concern is that the outcome may not be as helpful and effective.

If the output is not actually providing the help a user is after, the tool is not performing, so we need to take that into consideration.
An expert in security and privacy, usability and human-computer interaction, Emami-Naeini received a 2024 Google AI Research Scholar Program Award in the Privacy category, a program aimed at supporting early-career academics. The award provides $60,000 in funding for her project "Designing A Usable Security and Privacy Label 'Dictionary.'" To learn more about the project and award, visit: Pardis Emami-Naeini Receives Google AI Research Award.