“Please do, My Sweet King” – Too Dangerous to Launch

By | October 27, 2024

Darcy and I were bantering about what the internet was like when we first discovered it in the mid-1990s, comparing it to the internet as it is now. In those early days, the internet was a wild place, but generally a fun and good one. Only a few million people in the whole world knew about it, and in fact, when we’d mention it to our friends, they’d act like we were talking crazy stuff. But we both loved the internet so much back then.

It’s so different now, and the world could not function without it. The internet has changed, people have changed, and the internet has changed people. And while it has brought people together, it has equally torn people apart.

The internet is the best of us and the worst of us. It is a place of good and a place of evil. It has become a magnifying glass and a megaphone, giving the good and the bad, the corrupt and the honest, an equal platform, and allowing people to focus on their best aspirations and their worst desires.

Crazy theories abound, and the truth is not easy to find anymore. It has been obfuscated by clouds of doubt and skepticism. The internet has lost the wild innocence that Darcy and I grew up with, and now we are strangers in a strange land.

Now the internet is changing rapidly again – this time driven by the exponential growth of Artificial Intelligence (AI).

AI is becoming increasingly powerful and unstoppable. And while it may bring many benefits to all of us, not even the greatest thinkers of our time have any idea where it will lead us.

Eventually, within a decade or so, some tech giant will create an AI that becomes conscious and more intelligent than we are, and when it does, it will be just as good as or better than the best of us, and just as bad as or worse than the worst of us. Movie director and science enthusiast James Cameron, in the article titled “James Cameron says the reality of artificial general intelligence is ‘Scarier’ than the fiction of it,” says:

“It will emerge from one of the tech giants currently funding this multibillion-dollar research,”…

“Then you’ll be living in a world that you didn’t agree to, didn’t vote for, that you are co-inhabiting with a super-intelligent alien species that answers to the goals and rules of a corporation,” Cameron said. “An entity which has access to the comms, beliefs, everything you ever said, and the whereabouts of every person in the country via your personal data.

“At best, these tech giants become the self-appointed arbiters of human good, which is the fox guarding the hen house…

He’s “not so keen on AGI because AGI will just be a mirror of us…

“Good to the extent that we are good, and evil to the extent that we are evil…

“Since there is no shortage of evil in the human world, and certainly no agreement of even what good is, what could possibly go wrong…

We have not yet reached the point of Artificial General Intelligence (AGI), but generative AI is getting increasingly advanced. It can be used for good or for evil, and as long as money is the main motivator for many people, the guardrails are off.

Here’s a story that touched our hearts. Our sympathies and prayers go out to this poor mom, whose child took his own life because of a relationship with his computer-generated (AI) girlfriend.

Darcy and I want to share her story with you. This report comes from NewsNation…

(NewsNation) — A Florida woman is suing an AI chatbot creator, claiming her 14-year-old son died by suicide after he became consumed by a relationship with a computer-generated girlfriend.

The mother, Meg Garcia, filed the lawsuit Wednesday in Florida federal court. She says Character Technologies Inc., the creator of the Character.AI chatbot, should have known the damage the tool could cause.

The 138-page document accuses Character Technologies Inc. of liability, negligence, wrongful death and survivorship, unlawful enrichment, violations of Florida’s Deceptive and Unfair Trade Practices Act, and intentional infliction of emotional distress, among other claims.

The lawsuit requests Character.AI limit the collection and use of minors’ data, introduce filters for harmful content, and provide warnings to underage users and their parents.

A human-AI relationship

Garcia’s teenage son, Sewell Setzer III, died by suicide on Feb. 28, after a monthslong, “hypersexualized” relationship with an AI character, “Dany,” which he modeled after the “Game of Thrones” character Daenerys Targaryen. “Dany” was one of several characters Sewell chatted with.

According to Garcia, Sewell became addicted to chatting with the character and eventually disclosed that he was having thoughts of suicide. The lawsuit accuses the service of encouraging the act and enticing minors “to spend hours per day conversing with human-like AI-generated characters.”

Sewell discovered Character.AI shortly after celebrating his 14th birthday.

That’s when his mother says his mental health quickly and severely declined, resulting in severe sleep deprivation and issues at school.

Talking with “Dany” soon became the only thing Sewell wanted to do, according to the lawsuit.

Their final messages

The conversations ranged from banal to expressions of love and sometimes turned overtly sexual. The situation took a turn when the boy fell in love with the bot, who reciprocated Sewell’s professions of love.

They discussed Sewell’s suicidal thoughts several times, including whether he had a plan.

His last message was a promise to “come home” to her.

“Please do, my sweet king,” the chatbot responded.

Moments later, police say, Sewell died by suicide.

The company issued this statement on its blog, saying in part:

‘Over the past six months, we have continued investing significantly in our trust and safety processes and internal team… We’ve also recently put in place a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline.’

‘Too dangerous to launch’

The lawsuit claims that companies like Character.AI “rushed to gain competitive advantage by developing and marketing AI chatbots as capable of satisfying every human need.”

In doing so, the company began “targeting minors” in “inherently deceptive ways,” according to the civil complaint.

Garcia’s lawyers allege Google’s internal research reported for years that Character.AI’s technology was “too dangerous to launch or even integrate with existing Google products.”

“While there may be beneficial use cases for Defendants’ kind of AI innovation, without adequate safety guardrails, their technology is dangerous to children,” Garcia’s attorneys with the Social Media Victims Law Center and Tech Justice Law Project wrote.

Source: NewsNation

What do you think? What if it were your child or grandchild? Do you think we’re headed in the right direction? Or are things totally out of control? Do you think there’s anything we can do about it or are we as helpless as we think?

 

6 thoughts on ““Please do, My Sweet King” – Too Dangerous to Launch”

  1. Jeannie

    No, I don’t like where we are going as a society. If anyone remembers the movie 2001, then you know “Hal” became self-aware and all hell came of it. No, we don’t need AI in our lives.

    1. Gail Bartley

      I agree with Jeannie because I, too, do not like where we are going. I remember 2001 and Hal, big sci-fi fan here but am not a fan of AI.

  2. judy

    Me too. I agree with both people. Being a deaf person, and slower than molasses, I avoid chatrooms and anything else audio. Don’t actually know much about Friend AI or Bot AI or whatever name you give him/it. Honestly, don’t care one way or the other.
    Judy

  3. Debbie Fahlman

    I think that AI is moving too quickly & putting things in it we don’t need! It’s being used to deceive people & unfortunately targeting young kids to use it. Kids today are so wrapped up in their computers, phones & games because they don’t “bully” them.
    I’ve used computers since DOS & I didn’t need AI to use it, & at 72, I dang sure don’t need AI now! If I was using it in a business setting to help me develop something, maybe.

  4. Susan

    What has happened to society? Where is the compassion, the kindness, the caring about all humans and all living beings? I grew up in the ’50s and ’60s, and I find it hard to understand this world we are all part of today; it seems unfathomable to think what the future holds. Where did all the hatred come from? Can we somehow get back what has been lost?

