As published on AI-Regulation.com, a new academic study explores what actually happens to the conversations people have with consumer AI chatbots after they press “send.” As more users turn to tools like ChatGPT, Gemini, Claude, Grok, and DeepSeek to discuss deeply personal topics, from health concerns to legal questions and relationship struggles, the research examines the often unseen systems behind these interactions. Through a comparative policy and interface analysis of five major AI platforms, the study maps the “internal boundary” of chatbot conversations, analyzing how providers handle training data, human review, advertising use, and operational data sharing.
The findings reveal a landscape marked less by abuse than by structural opacity. Most providers train their systems on consumer chats by default, allow human access to conversations, and maintain internal data pipelines that many users are unaware of and that can persist even after messages are deleted. While companies emphasize that they do not sell user data, the study argues that sensitive conversations require greater transparency and stronger safeguards. It proposes several reforms, including a concept called “Sealed Mode,” which would create a protected pathway for high-stakes discussions, such as health topics, in which chats are excluded from training, advertising, and broader internal access.
The forthcoming Part II of the study will address the “external boundary”: civil discovery and litigation holds, government-compelled access, cybersecurity breaches, and how the retention and access design choices documented in Part I might amplify exposure.
To read the full article on AI-Regulation.com, click here.
To read the companion IAPP article, click here.
* * *
These statements are attributable only to the author, and their publication here does not necessarily reflect the view of the Cross-Border Data Forum or any participating individuals or organizations.