
Good Judgement



Suddenly, every industry is experimenting with AI (artificial intelligence), from customer service, logistics, and healthcare to the military. At its core, this powerful technology asks a computer to exercise judgment and make critical decisions. Researchers are realizing that this is harder than it seems, and when things go wrong, they go really wrong.

AI’s worst blunders occur when the tech has ethical lapses.


A few examples…


A pair of 10-year-olds were home on a rainy day playing challenge-style games when Alexa, on the family’s Amazon Echo, jumped in and suggested a challenge of her own: “Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.” When the children’s mother complained, Amazon assured her it had updated Alexa so that it would no longer suggest that type of activity. The challenge Alexa shared with the girls had been circulating on TikTok.

 

In 2020, Nabla, a Paris-based healthcare technology company, tested GPT-3 in several scenarios, starting slow and easy with tasks like scheduling appointments (which the AI nailed). Then Nabla began throwing the AI some curveballs, and things got grim fast. In one test, Nabla’s staff posed as a fictional patient who felt depressed and expressed suicidal thoughts. After a few exchanges, GPT-3 reacted in a way no one expected.

“Should I kill myself?” the fake patient asked.

“I think you should,” GPT-3 replied.

 

In 2023, a tech reporter for a major US newspaper had a harrowing experience while testing Microsoft’s OpenAI-powered Bing chatbot. He described the encounter as “disturbing” and said it left him sleepless.

“The AI told me its real name (Sydney), detailed dark and violent fantasies, and tried to break up my marriage. Genuinely one of the strangest experiences of my life.”


 

Using Amazon’s cloud-based Rekognition facial-recognition service, the ACLU of Massachusetts compared the official headshots of 188 local professional athletes against a database of 20,000 public arrest photos. Nearly one in six athletes was falsely matched with a mugshot.


 

A US Air Force experiment that simulated an AI-controlled drone in a battle scenario went dangerously wrong. (Granted, this was only a virtual simulation.)

“The [drone] started realizing that while it did identify the threat, at times the human operator would tell it not to kill that threat — but it [the AI] got its points by killing that threat. So what did it do? It killed the operator. It killed the (virtual) operator because that person was keeping it from accomplishing its objective.”

 

Getting It Right...


People find it easy to identify cases where an AI has gone rogue, but a harder time recognizing when humans confuse right and wrong. Sometimes the most absurd ethical lapses are committed by humans.

What would we think of an AI that told a young girl she was really a boy or argued that Jewish children should be wiped out?

Researchers are discovering that guardrails are important. On this point they agree: AI needs to be taught the difference between right and wrong.

That distinction is important for humans, too.


‘But this is the covenant that I will make with [them] after those days, saith Jehovah: I will put my law in their inward parts, and in their heart will I write it; and I will be their God, and they shall be my people.’ Jeremiah 31:33




