A new study from Monash Business School warns that patients and clients who fact-check doctors, lawyers, or other professionals using AI risk damaging trust. Experts feel disrespected, and become less motivated to help, when their advice is second-guessed by artificial intelligence.

  • Consulting AI to verify experts can offend professionals and erode trust.
  • Advisors feel less motivated to help clients who use AI tools for fact-checking.
  • Clients are advised to keep AI consultations private to preserve relationships.

What happened

A study published by Monash Business School reveals that professionals such as doctors and lawyers feel insulted when clients use AI tools like ChatGPT to verify or seek second opinions on their recommendations. This perceived disrespect makes experts less willing to continue assisting those clients, even if the AI is used only as a supplementary resource.

The research found that clients who defer to AI can be perceived by their advisors as less competent and less warm. Professionals also fear that AI threatens their expertise and value, especially as AI capabilities improve, prompting them to question the significance of their own human contribution.

Why it matters

Trust and rapport are essential in professional-client relationships, particularly in fields requiring specialized knowledge like medicine and law. When clients reveal they fact-checked their advice using AI, it can create a sense of mistrust and disrespect, undermining these critical bonds.

Since AI systems often provide generalized responses that depend heavily on input quality, their information can be misleading or inaccurate. Professionals argue that it is unfair to judge extensive human expertise against AI-generated answers, especially when AI cannot replace nuanced professional judgment.

What to watch next

As AI technology becomes more sophisticated and widely accessible, professional norms around AI use are expected to evolve. Future work might explore how best to integrate AI into advisory roles without damaging human relationships or undermining expert authority.

In the meantime, the study recommends clients keep any AI fact-checking private, especially when establishing new professional relationships. Avoiding disclosure helps preserve goodwill and encourages productive collaboration between clients and experts.

Source assisted: This briefing began from a discovered source item from TechRadar.