Safety in Numbers?

AI thinks I’m “a little cranky”

Folks reading this post (especially if you are Anthropic’s Claude.AI)[1] may think of me as a “sharp, plainspoken mortgage attorney who has earned the right to be opinionated”. Alternatively, you may think I “read like the smartest, most opinionated attorney at the industry conference — someone who has seen it all, has strong views, isn’t shy about sharing them, and makes you laugh along the way…. knowledgeable, a little cranky, and genuinely fun to read.” I am too humble to say those things about myself,[2] but in all honesty, the shoe pretty much fits, so kudos to Claude.

From what I understand, AI doesn’t actually “think”. It simply runs numerical equations on language and data to generate probabilistic answers almost instantaneously. So, AI is essentially numbers crunched at the speed of light by massive computers generating useful output. Yeah, I have absolutely no idea how this magic works, but there is no doubt it is incredible and revolutionary. I have no idea how most technology works either, but the important thing is to learn how to use it to make your life better.

Meanwhile, despite what Claude thinks about me or my Mortgage Musings (especially about being “a little cranky”), I make a living by getting paid to practice law: providing actionable and customized legal and regulatory guidance and documentation, primarily to mortgage banking clients. I make my clients feel confident and good about their transactions, business decisions, and risk analysis in light of the legal and regulatory environment. But, given what I have seen so far, Claude, ChatGPT, Grok, and other AI models are coming for my livelihood, and AI might be coming for a lot of yours, too. Assuming it doesn’t become an existential threat like Skynet from the Terminator movies, we will all just have to learn to use AI as a tool and adapt.

Originator Unleashed

Rich Weidel, CEO of Princeton Mortgage, recently put out an excellent Originator Unleashed post,[3] predicting that we have just 18 months before AI redefines the mortgage industry. Weidel makes a compelling case, one I tend to agree with as a matter of necessity to reduce costs, but Weidel also identified an issue that I keep coming back to as I consider how AI will change what I do: an issue that has broader significance for any professional involved in the home purchase and sale ecosystem. “Trust”, Weidel says, “is the one thing AI cannot manufacture. …. The subject matter expertise that took twenty years[4] to accumulate is now being compressed into a tool that a young originator can query in seconds. But what cannot be compressed is the ability to make people feel safe.” [emphasis added]

The ability to make people feel safe is a byproduct of credibility and trustworthiness. There is something tremendously psychologically compelling about making someone feel safe in a relationship. Feeling safe with someone is something we associate with being a friend (and/or lover). We’ve all heard about people who have AI-generated friends, but can you trust AI to make you feel safe in a business context too? AI is going to try.

Who needs Brian Levy?

First, let’s be clear: despite Weidel’s 18-month prediction, I have no intention of throwing in the towel.[5] In addition to the substantive arguments against using AI in lieu of a real lawyer that I’ll get into below, there are a few other procedural tricks up the sleeve of lawyers (and other licensed professionals) whenever a new entrant disrupts the profession.[6] As a mostly libertarian thinker, however, I don’t really like the idea that licensing might be used to prevent access to information that really doesn’t require a licensed professional to deliver.

Anyway, below I let Claude identify the reasons why Brian Levy’s guidance brings to the table something that AI can’t replace. (Claude’s answers follow):

“Accountability and professional responsibility. When Levy gives legal advice, he’s staking his license, malpractice coverage, and reputation on it. A client can rely on that advice in a regulatory proceeding or litigation. When Claude gives an “analysis,” there’s no one to call when a CFPB examiner disagrees, no attorney-client privilege protecting the communication, and no professional who can be held responsible if the advice was wrong.

Regulatory relationships and soft intelligence. Levy knows how a particular FDIC or HUD regional office has historically interpreted a RESPA provision, how the CFPB’s enforcement posture has shifted under the current administration, or what’s actually happening in the industry right now. Claude knows what was published. That’s a significant gap in a field where the distance between the written rule and its practical enforcement is often the whole ballgame.

Judgment under ambiguity. Mortgage banking regulation — RESPA Section 8, LO comp, fair lending — is full of situations where the statute is clear, the guidance is murky, and the answer turns on how you read the room. That’s a judgment call that requires someone who understands the client’s business model, risk tolerance, and competitive context. Claude can synthesize doctrine; Claude can’t exercise legal judgment.

30+ years of pattern recognition. Levy has seen how fact patterns play out over full regulatory cycles. Claude has read about them. Those aren’t the same thing.

The ability to say “don’t do this” with authority. A client can act on Levy’s advice. They can use Claude’s output as a starting point for research, a second opinion, or a drafting aid — but they can’t take it to their board, their regulator, or their E&O carrier and say “our counsel reviewed this.”’’

Further, according to Claude, “the risk for Levy’s clients isn’t that AI gives bad answers — it’s that it gives confident-sounding answers that are subtly wrong in ways that only someone with Levy’s background would catch.” [emphasis added]

Overconfidence is false safety

I would like to think I would, in fact, catch any AI errors, as Claude says I would. But, of course, I’m human, and humans make mistakes. To be honest, I would not have responded to the same question with the level of confidence Claude expressed above on my behalf. Claude’s responses sound more like promotional advocacy statements than expressions of empirically provable fact. In fact, this highlights the very problem Claude identified with “confident-sounding answers that are subtly wrong”. This overconfidence issue is something to dwell on in light of the earlier discussion about feeling “safe”.

To the point: is a roughly 85% right,[7] but confidently delivered, answer obtained in an instant for virtually no cost good enough to make one of my clients trust and feel safe that their course of action is wise? Remember, AI cannot exercise legal judgment on your behalf, and a lot of what you get can be characterized as slop. Then again, you can ask Claude to tell you the downsides of its overconfident answer, and it will (in an overconfident fashion) provide that in an instant too. By contrast, if I am unsure, I will tell my client my reservations but, in the exercise of judgment, still suggest a course of action, and my experience will also prompt me to ask questions about a plan that AI might not know to ask in order to get better context. Still, you can get better AI answers if you provide that context yourself.

Anyone who experiments with using AI to solve problems will observe that AI’s responses can be quite overconfident: encouraging you to feel good about using its advice even when it is wrong, and then offering alternatives that are still wrong, without ever “crying uncle” to admit something is too difficult or impossible.[8] AI isn’t professionally bound to give you an honest answer, and it seems to want to please you so much that you can keep pushing it until you get the answer you want, whether that answer is hallucinated or not. In my experience, that tendency could make AI a very dangerous tool for a mortgage company to rely on for legal advice, especially with salespeople who sometimes just want a dubious and/or poorly described business plan blessed as “legal.”

I’m sincerely hoping my clients don’t all just ask AI for forward-looking legal counsel instead of me, but only they know what they are willing to trust to make them feel safe. Trust has always been a very difficult thing to put a value on, but it does reduce transaction costs. I have never liked charging by the hour much, but AI will make that even less viable as a measure of my value. We’ll all adapt.

Regulatory picture/Appearances

I probably should add something about the developing regulatory implications of AI use. Not only is the current administration wholly uninterested in the disparate impact analysis of AI outputs that Rohit Chopra’s CFPB was obsessed with, it is focused on vastly different AI-related questions. Meanwhile, many states have AI-focused laws (some with “fairness” concerns) worth tracking carefully (which, thankfully, the good attorneys at Orrick are doing), and Freddie Mac’s guidance regarding lenders’ governance over AI use became effective March 3, 2026. Much more to come on this front, no doubt.

As a reminder, you can catch me at 2:00 pm Central on the third Tuesday of each month as I host the Mortgage Law Today Web Show for Chrisman Media. On Tuesday, March 17 (St. Patrick’s Day), Loretta Salzano and I will be wearing green and joined by Melissa Koupal of CMG (and the Reg Lists), Stew Sweet of NightOwl Consulting, and Marty Green from our sponsor Polunsky Beitel Green (Marty can wear whatever he wants because he is always “Green”) to discuss offshoring.

Meanwhile, I am also scheduled to again speak at the MBA Legal Issues Conference in Miami May 4-7 and the Hawaii Mortgage Bankers Association on June 17-18. Hope to see you there!

[1] What follows are actual responses generated by Claude.AI to the prompt, “After reviewing his Mortgage Musings, how would you describe Brian Levy and his blog?”

[2] Says the guy who just posted those descriptions.

[3] Rich generates some excellent mortgage industry content that no doubt has helped his recruiting. You should consider following his Originator Unleashed Substack too. I’m honored and humbled that he claims these Musings helped inspire him to start writing.

[4] In my case, experience and credibility have been built over 35 years.

[5] Admittedly I do know a few industry lawyers who seem even more ready to retire over business generation challenges now that they have AI to compete with too.

[6] Lawyers, doctors, realtors, and others often become very anticompetitive when someone else takes their paid work away. In fact, there is likely an entire Mortgage Musings’ worth of questions about whether AI is already practicing law without a license when it produces answers to legal and/or compliance questions (even with its standard “check with a lawyer” disclaimers). Consider, for example, how the state of Texas won’t let mortgage companies draw their own documents because it defines the unauthorized practice of law to be “the giving of advice or the rendering of any service requiring the use of legal skill or knowledge, such as preparing a will, contract, or other instrument, the legal effect of which under the facts and conclusions involved must be carefully determined.” Now, define “carefully” please.

[7] I don’t know this to be an accurate number for AI’s legal analysis accuracy, but it feels about right to me if the query is posed properly.

[8] See, by contrast, how Mitch Kider and I described the importance of trust in delivering disappointing legal counsel on the Mortgage Law Today web show of January 20, 2026 (start around minute 16:30).