How to use AI without spreading misinformation

Elton Boocock

By Elton Boocock, founder of Thinkivity – providing AI consultancy and training exclusively for the UK glazing industry.

Have you ever asked AI a question, received an incredibly confident answer, and later realised it was completely wrong?

You’re not alone.

As more people in our sector start using tools like ChatGPT to help write emails, content, or customer replies, many are noticing that the answers sometimes sound convincing but aren’t accurate. It’s what’s known as the truth gap, and it’s one of the biggest challenges in the next phase of AI adoption.

But the solution isn’t to stop using AI. It’s to use it better.

AI isn’t trying to trick you; it just doesn’t know what’s true. Tools like ChatGPT are trained to predict the next most likely word, not to verify facts. If most of the data it has seen says something incorrect, it will confidently repeat that pattern.

Imagine if the internet had 1,000 articles claiming a panda was a Fiat car and only one saying it was an animal. AI would side with the car. It doesn’t ‘know’ the truth; it follows patterns in language, not logic or experience.

That’s why you can’t rely on open, generic tools for sector-specific advice. ChatGPT doesn’t know the difference between Part L and Part F of Building Regulations unless you tell it. It doesn’t automatically understand what U-values mean in practice. Without context, it can easily mix up glazing facts or even invent details that sound professional.

The best way to close the truth gap is to remember that AI is your assistant, not your replacement.

Start by giving it context. Tell it who you are, what sector you’re in, and what you want the answer to achieve. Don’t ask it to “write about energy-efficient glazing.” Ask it to “draft a short social post for a UK window fabricator explaining the benefits of A-rated energy-efficient glazing, suitable for LinkedIn.”

That one adjustment dramatically improves accuracy because you’ve set the parameters.

And always keep yourself in the loop. AI should draft, but you should decide.

Here are three small steps to stay accurate:

  1. Use AI to explain, not to guess. Instead of asking it to “write about composite doors,” paste in your own brochure, spec sheet, or product description and ask it to summarise that information for a homeowner. You know the source is correct; AI just helps translate it into clear, simple language.
  2. Build your own ‘truth file’. Create a short document that includes the key facts, terminology, and product details relevant to your business. Keep it handy and paste it into prompts when accuracy matters. It’s your own data set, and you can keep adding to it.
  3. Test your AI before you trust it. If you’re not sure how accurate your prompts are, try a comparison. Ask AI to explain the difference between PVC-U and aluminium windows based on UK industry standards, then see what it gets right and what it misses. You’ll quickly learn how to guide it more effectively next time.
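For readers comfortable with a little scripting, the ‘truth file’ step can be sketched as a small helper that prepends your verified facts to every prompt. This is a minimal illustration, not a finished tool: the function name, the instruction wording, and the example facts are all hypothetical, and the assembled prompt would simply be pasted into (or sent to) whichever AI tool you use.

```python
def build_prompt(truth_file: str, task: str) -> str:
    """Combine your verified 'truth file' with the task, so the AI
    works from facts you supplied rather than guessing its own."""
    return (
        "You are assisting a UK glazing company. "
        "Use ONLY the verified facts below; if something is not covered, say so.\n\n"
        f"Verified facts:\n{truth_file}\n\n"
        f"Task: {task}"
    )

# Hypothetical example facts a fabricator might keep in a truth file
truth_file = (
    "- Our A-rated windows achieve a U-value of 1.2 W/m²K.\n"
    "- We fabricate PVC-U and aluminium frames only."
)

prompt = build_prompt(
    truth_file,
    "Draft a short LinkedIn post explaining the benefits of A-rated glazing.",
)
print(prompt)
```

The point of the design is simple: the facts travel with every request, so the model never has to invent them, and anything outside your truth file is flagged rather than guessed.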

The role of trusted tools

Generic tools like ChatGPT are powerful, but they’re built for everything and everyone. That’s why we’re now seeing more sector-specific solutions emerging, like GlazingBot and even CustomGPTs: AI tools trained on reliable data sources.

The more targeted the system, the less likely it is to mislead.

In glazing, accuracy matters. Whether it’s quoting performance values, describing installation processes, or producing marketing content, every word represents your reputation. So, use AI to help you save time, but not to replace your judgement.

AI doesn’t have common sense, but you do. When you combine your experience with its speed, the results are powerful. The key is to stay the driver, not the passenger.

If you want to explore how to make AI more accurate and useful in your business, our GlazePro AI community shares proven prompts, reliable tools, and real examples from glazing companies across the UK.

Visit thinkivity.co.uk/glazepro-ai to learn more.