Thousands of people will take to the waters around the UK over the festive season at organised swims that have been held for decades.
But those going it alone may want to heed Maritime and Coastguard Agency warnings not to rely on AI tools when planning outdoor activities, as they can make mistakes.
The advice comes after two people became stranded on Sully Island, near Barry, after ChatGPT gave them the wrong tide times and they had to be rescued by the coastguard.
The warning echoes advice from a top tech industry chief not to “blindly trust” everything AI tells people.
Sorry, but that’s bunk:
LLMs are THE ultimate brainstorming tools.
Want to know what you’re not thinking of when writing a document, story, scenario, or whatever?
They’re one hell of a lot better than normal people are for that.
The problem is that people are using them to do their work for them, instead of using them for what they’re brilliant at, due to narcissism + laziness.
Which is normal…
Using an LLM as a critical reference is like using a blurry lens for photography: yeah, you can get “results”, but it isn’t what you could get with the appropriate tool.
_ /\ _
You want to know something? Since writing that comment, I accidentally installed malware on my computer and got rid of it with help from ChatGPT.
So yeah, it’s very, very good at telling you useful information if that information has been written about extensively.
Luckily, the malware I got has been written about extensively, as have best practices for tracing such attacks.