Hi all,
I’ve noticed more people (myself included) using tools like ChatGPT to help draft complaints, understand transactions, summarise statements, or sense-check financial emails.
One thing I’ve been thinking about is data redaction before sending anything to public LLMs.
For example:
- Account numbers
- Transaction references
- Full names & addresses
- Screenshots of balances
- PDFs with embedded metadata
Even when platforms say they don’t train on your data, there’s still human error risk — copy/paste mistakes, screenshots shared accidentally, browser extensions logging content, etc.
I started looking into this more seriously after realising how easy it is to paste identifiable financial info without thinking.
Out of curiosity (and partly because I’m building something in this space — Questa-AI, focused on automatic redaction before LLM processing), I’ve been experimenting with ways to reduce that risk.
But I’m more interested in how others approach it:
- Do you manually redact?
- Use regex scripts?
- Just rely on trust in the platform?
- Avoid AI for financial content altogether?
Feels like this might become more relevant as AI usage becomes normalised in financial workflows.
Would be great to hear how others think about this.