New research from the Oxford Internet Institute at the University of Oxford and the University of Kentucky finds that ChatGPT systematically favours wealthier, Western regions in response to questions ranging from 'Where are people more beautiful?' to 'Which country is safer?', mirroring long-standing biases in the data these models ingest.
It’s not necessarily garbage, but it sure isn’t curated either. Throwing everything into the blender and hoping the machine will usually spit out good info is less science than a spin of the roulette wheel. Sometimes the odds are pretty good. Sometimes they’re horrible, and you should know better than to expect anything but.
But AI has become the shiniest hammer, and every damn thing is a nail now.