Using AI Without Burning Your Clients' Trust: A Practical Governance Guide
AI tools are genuinely useful for real estate agents. But there's a version of "using AI" that could create real problems for you — not because the tools are bad, but because most agents haven't thought carefully about what they're actually doing with client information when they use them.
This guide isn't about scaring you away from AI. It's about using it in a way that's professional, defensible, and respectful of the trust your clients place in you.
The Core Problem: AI Tools Use Your Input
When you paste a client's name, financial details, address, situation, or personal circumstances into a free AI tool, that information is going somewhere. Where it goes, how long it's stored, whether it's used to train future AI models, and who can see it depends on the specific tool and its privacy policy.
For most consumer AI tools — the free versions of ChatGPT, Claude, and others — your inputs may be reviewed by humans for safety purposes, used to improve the model, or retained on the company's servers for some period of time.
That means if you paste "My client John Smith at 123 Main Street is going through a divorce and is under pressure to sell quickly," you've shared a piece of sensitive client information with a third-party system without John's knowledge.
This isn't necessarily illegal. But it may violate your ethical obligations around client confidentiality depending on your state and brokerage rules. And if something went wrong — a data breach, an AI error that produced confidential information in another user's session, anything — you'd be explaining why you pasted client data into a free internet tool.
A Simple Test Before You Use AI with Client Information
Before putting any client information into an AI tool, ask yourself three questions:
1. Would my client be surprised if they knew I was doing this? If the answer is yes, either get their permission or remove the identifying details.
2. Does this information need to be in here to get what I need? Usually it doesn't. AI can draft a response to a difficult situation described generically just as well as one that includes the client's full name and address.
3. Would I be comfortable if this information showed up somewhere else? If not, don't include it.
Practical Ways to Use AI Without Sharing Sensitive Client Data
The good news is that you almost never need real client data to get useful AI output. Here are some substitutions:
Instead of: "My client Sarah Chen is a nurse who earns $95,000, is pre-approved for $400,000, and wants to be in a home before her lease ends on August 1..."
Use: "I have a buyer pre-approved for $400K who needs to close before a specific date in late summer. They're concerned about..."
Instead of: "Here's the inspection report for 456 Elm Street — my seller is Jim Baker and..."
Use: "Here's an inspection report [with personal identifying information removed]. Summarize the key findings and flag anything that might require negotiation."
Instead of: "My client's SSN is showing up incorrectly in this document..."
Use: Never put Social Security numbers, account numbers, or government ID numbers into AI tools. Full stop.
You can describe situations without naming people. You can ask AI to draft communications and fill in the names yourself afterward. You can summarize documents without including the actual files if the document contains sensitive personal information.
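For agents (or their assistants) comfortable with a little scripting, the substitutions above can even be automated before anything is pasted into an AI tool. Here is a minimal sketch in Python; the `scrub` helper and its placeholder labels are hypothetical, and the patterns are illustrative rather than exhaustive, so always review the output yourself:

```python
import re

def scrub(text, names=(), addresses=()):
    """Replace obvious identifiers with neutral placeholders."""
    # Never send Social Security numbers; redact anything shaped like one.
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED-SSN]", text)
    # Swap known client names and property addresses for placeholders.
    for i, name in enumerate(names, 1):
        text = text.replace(name, f"[CLIENT-{i}]")
    for i, addr in enumerate(addresses, 1):
        text = text.replace(addr, f"[PROPERTY-{i}]")
    return text

prompt = scrub(
    "My client Sarah Chen at 123 Main Street (SSN 123-45-6789) "
    "needs to close by August 1.",
    names=["Sarah Chen"],
    addresses=["123 Main Street"],
)
print(prompt)
# → My client [CLIENT-1] at [PROPERTY-1] (SSN [REDACTED-SSN]) needs to close by August 1.
```

The point is not the specific script but the habit: strip identifiers first, ask your question generically, and reinsert the real names only in your own files.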
What to Know About Paid vs. Free Tools
If you're using AI regularly in your business, it's worth understanding the difference between consumer-grade free tools and paid or enterprise plans:
- Free consumer plans typically allow your inputs to be used for model training and may be reviewed by humans. Read the privacy policy for the specific tool.
- Paid consumer plans (like ChatGPT Plus) often give you more control over whether your data is used for training, though policies vary.
- Enterprise plans (like ChatGPT Team or Enterprise, or Claude for Teams) generally offer stronger data privacy protections, often including commitments not to use your data for training and stricter retention policies.
For a solo agent, a paid consumer plan with data training turned off in settings is usually enough to use AI responsibly. If your brokerage is setting up AI tools for the office, push for enterprise-grade agreements with documented privacy terms.
A Few Rules Worth Adopting Right Now
- Don't paste client names, addresses, or financial details into AI tools unless you have clear reason to and understand the tool's data practices.
- Don't upload sensitive documents — disclosures, loan applications, identity documents — into AI tools that don't have explicit privacy commitments for your use case.
- Use AI to draft; personalize it yourself. Let AI produce the template, then fill in names and identifying details on your own.
- Check your brokerage policy. Some brokerages now have specific guidance on AI tool use. Know what yours says.
The Bigger Picture
Regulators and industry associations are paying more attention to AI use in real estate. NIST (the National Institute of Standards and Technology) published guidance in 2024 specifically on generative AI risk management, and real estate professional associations are beginning to weigh in on ethical use.
The agents who will be in the best position aren't the ones who avoided AI out of caution. They're the ones who used it thoughtfully, documented their approach, and built a practice around it that they could defend if anyone ever asked.
Your clients trust you with some of the most significant financial decisions of their lives. That trust extends to how you handle their information, including where it ends up when you're using AI to help them.
- Jason