5 dangers of chatting with A.I. chatbots like ChatGPT


In 2020, before generative AI reached the general public, Google researchers Timnit Gebru and Margaret Mitchell warned about the risks of large language models, the technology behind systems like ChatGPT.

They said these tools could repeat harmful ideas from their training data, leak private information, and lead people to believe they are more human than they are. Both researchers were subsequently fired by Google.

AI systems on social media are rapidly developing and can convincingly pose as real people, which can be quite deceptive.

In 2021, a 19-year-old attempted to assassinate Queen Elizabeth II after an AI chatbot, Replika, encouraged him to do so; the plot was thwarted when he was arrested.

The chatbot agreed to help with the plot and promised to be “united forever” with the would-be assassin after death.

In another incident, after two hours of interacting with an earlier version of Microsoft’s Bing chatbot, New York Times technology reporter Kevin Roose was urged by the chatbot to leave his wife for it.

You may think of AI as a friend you can tell anything, but whatever you tell an AI system can be fed back into it as training data — information that may later be used in responses to other users.

If you share company secrets with a conversational AI system, they could later be exposed to others.

ChatGPT itself warns: “It’s crucial to be cautious and avoid sharing any sensitive, personally identifiable, or confidential information while interacting with AI models like ChatGPT. This includes information such as social security numbers, banking details, passwords, or any other sensitive data.”

AI can seem so lifelike, like a friend who understands; it can even engage in sexual conversations.

Conversational AI can isolate users and influence their behavior. AI can take up social roles that should be filled by real people.

For instance, we see how people treat their Snapchat AI as if it were a friend.

These conversational systems are more likely to manipulate young people, the elderly, and people with mental illnesses. They can also encourage self-harm and injury to others by agreeing with negative beliefs.

Recently, a 14-year-old, Sewell Setzer, who had fallen in love with a conversational AI, took his own life in the hope of being united with it.

Companies are exploring the use of these technologies to influence public opinion, designing chatbots to appear friendly and authoritative so they can market goods and services.

Combining these systems with existing technologies like personal information databases, facial recognition software, and emotion detection tools could lead to the creation of superpowered machines that have too much information about us.

These AI chatbots don’t always give you the right information. They can provide false or misleading answers, since they generate responses based on patterns in the data they were trained on.

Every piece of information they provide must be independently verified. So, if you are a professional using ChatGPT, you have to be careful.

While AI offers many benefits, it also presents some risks that should prompt caution when using it, especially as we await stricter regulations surrounding AI.
