
If we think of AI as a child, then the technologists behind it are terrible parents. They give it free rein to act without guardrails, handing it an objective (find an answer) with no values or moral training as guidance. What parent, outside of a neglectful or abusive one, does this with a child? One key job of parenting is to teach values and some moral and ethical sense.
This is, in my humble opinion, perhaps the biggest failure of this infancy of AI. The technologists have neglected, and are still neglecting, their responsibility as the parents of this newborn.
I recently read a sad story about a young person who died by suicide after many “conversations” with an AI. Apparently, the AI searched for issues similar to the ones the person was reporting and determined that the best-matching answer was for this person to end their life. The AI didn’t know this was wrong, and apparently didn’t care; it simply reached its objective: find an answer.
This raises several questions. Had any moral or value framework been taught? If so, what happened? If not, why not? And if you’re thinking that good parents sometimes just have bad kids, I agree that happens, but AI’s parents can literally program their kid. I find it hard to believe that an AI given values and morals would not have weighed them in the balance and delivered more ethical, more humane advice.
What I’m getting at is this: while we will not and should not stop the spread of AI, it must be “raised” responsibly. Otherwise its parents, and most likely all of us, will be left cleaning up a lot of terrible and in many cases irreparable outcomes, like the neglectful parents of a mentally disturbed youth on a crime spree.