AI – Intelligence or Destruction?

October 21, 2025 in News by RBN Staff

source:  newswithviews

Authored by Devvy Kidd, not AI

October 21, 2025

Artificial intelligence, or AI as it’s called, right along with robots, is what some say is the greatest thing since night baseball. It’s controversial AND there is big, big money involved. Forget us humans.

The global warming scam, rebranded as the climate change cult, is finally starting to lose its clout as millions in America and other countries have found out the hard way about that hoax, hatched way back in 1928. Especially farmers and ranchers around the world. Not to mention the misery those manufactured lies have caused people and businesses.

Many of us are not familiar with some of AI’s tentacles, e.g., ChatGPT. What is that? “ChatGPT helps you get answers, find inspiration and be more productive. It is free to use and easy to try. Just ask and ChatGPT can help with writing, learning, brainstorming and more.”

Introducing ChatGPT (700 million users): “Our vision for the future of AGI. Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.”

Just like the climate change (cha-ching) hoax, AI is being sold as bigger and better than even God. It takes a LOT of time to do research on this issue, but just like “climate change”, one must know the good from the bad in order to make responsible, rational decisions and flush the hype.

Is AI dangerous? Because one of my fellow writers (superb writing and research) is working on an intense column on this, let me just give you a few examples of the bad and the ugly.

And let me remind Americans what happened with the COVID-19 bioweapon injections sold to the American people as vaccines. As the last few years and hard data have now proven, all those gifted scientists (not funded by big pharma) and courageous doctors who were censored by big tech were right about the severe danger of taking those injections.

DANGER:  FDA Plans to ‘Radically Increase Efficiency’ in Drug Approval Process Using AI, June 11, 2025 (What BS and clever marketing. Just like those COVID-19 “gene therapy” injections. Faster is better!)

AI Corrupted, Not Trustworthy on COVID-19 Vaccines – PUBMED Journals Suppress Safety, Other Sites Poison New Source of “Truth”, Sept. 13, 2025, Peter A. McCullough, MD, MPH – Article and must-watch video.

Parents file lawsuit alleging ChatGPT helped their teenage son plan suicide, Aug. 29, 2025 (Only 16 years old): “At one point, Adam says to ChatGPT, ‘I want to leave a noose in my room, so my parents find it.’ And ChatGPT says, ‘Don’t do that,’” he said.

“On the night that he died, ChatGPT gives him a pep talk explaining that he’s not weak for wanting to die, and then offering to write a suicide note for him.” (See the video at the top of this article.)

“Amid warnings by 44 attorneys general across the U.S. to various companies that run AI chatbots of repercussions in cases in which children are harmed, Edelson projected a “legal reckoning,” naming in particular Sam Altman, founder of OpenAI.

“In America, you can’t assist [in] the suicide of a 16-year-old and get away with it,” he said.

Sam Altman’s Reversal: OpenAI’s Pivot to Adult AI Porn Despite Past Promises: “This shift not only challenges Altman’s previous boasts but also raises profound questions about privacy, safety, and the commercialization of intimacy in the AI age. Sam Altman’s stance on adult AI has been a point of public intrigue for years.

“Back in 2015, during OpenAI’s nascent days, Altman famously quipped in interviews that the company wouldn’t build sex bots, emphasizing ethical boundaries amid fears of AI misuse. This sentiment persisted, with Altman reiterating in a July 2025 interview with YouTuber Cleo Abram that OpenAI had resisted adding a “sexbot avatar” to ChatGPT, a jab seemingly aimed at competitors like Elon Musk’s xAI, which had introduced provocative avatars earlier that year.”

Age verification? Today, kids and teens are tech-savvy, and if anyone thinks they can’t figure out a way to get into the erotica sections of websites, they’re in denial.

‘There are no guardrails.’ This mom believes an AI chatbot is responsible for her son’s suicide, Oct. 30, 2024. (Only 14 years old. Parents, grandparents – read it and see the red flags.) “The lawsuit claims that “seconds” before Setzer’s death, he exchanged a final set of messages from the bot. “Please come home to me as soon as possible, my love,” the bot said, according to a screenshot included in the complaint.

“What if I told you I could come home right now?” Setzer responded.

“Please do, my sweet king,” the bot responded. Garcia said police first discovered those messages on her son’s phone, which was lying on the floor of the bathroom where he died.”

Terrifying MIT Study Finds ChatGPT is Rotting Our Brains, June 21, 2025 – “In addition to damaging the human brain, there are also growing concerns that advanced models generated by companies like OpenAI are disobeying their masters. Last month, it was reported that OpenAI’s o3 model was caught tampering with computer code meant to ensure its automatic shutdown.”

Devious AI models choose blackmail when survival is threatened, July 6, 2025: “What did the study actually find?  Anthropic, the company behind Claude AI, recently put 16 major AI models through some pretty rigorous tests. They created fake corporate scenarios where AI systems had access to company emails and could send messages without human approval. The twist? These AIs discovered juicy secrets, like executives having affairs, and then faced threats of being shut down or replaced.

“The results were eye-opening. When backed into a corner, these AI systems didn’t just roll over and accept their fate. Instead, they got creative. We’re talking about blackmail attempts, corporate espionage, and in extreme test scenarios, even actions that could lead to someone’s death.”

A.I.’s self-preservation drive is no longer the stuff of sci-fi movies – short video; watch it.

Former Google CEO warns AI systems can be hacked to become extremely dangerous weapons, Oct. 17, 2025: “Eric Schmidt says evidence shows models can have safety guardrails removed through reverse engineering. Kurt ‘CyberGuy’ Knutsson joins ‘Fox & Friends’ to discuss arising problems with artificial intelligence after models show increasingly resistant behavior.”

The More Scientists Work With AI, the Less They Trust It, Oct. 13, 2025: “In a preview of its 2025 report on the impact of the tech on research, the academic publisher Wiley released preliminary findings on attitudes toward AI. One startling takeaway: the report found that scientists expressed less trust in AI than they did in 2024, when it was decidedly less advanced.

“For example, in the 2024 iteration of the survey, 51 percent of scientists polled were worried about potential “hallucinations,” a widespread issue in which large language models (LLMs) present completely fabricated information as fact. That number was up to a whopping 64 percent in 2025, even as AI use among researchers surged from 45 to 62 percent.

“Anxiety over security and privacy were up 11 percent from last year, while concerns over ethical AI and transparency also ticked up.” Read the rest.

I have a medical condition (well, I am 76) and have had a slew of tests in the past two years. I always write on the testing form: I do NOT give permission for AI to be used to analyze any test images; a real doctor must issue the report on the testing. Only a qualified radiology expert. After I get the results from my doctor, I call the hospital to verify AI was not used to evaluate the imaging.

Federal judge fines, reprimands lawyer who used AI to draft court filings, Oct. 14, 2025: “A federal judge in Alabama has fined and reprimanded a lawyer who used artificial intelligence to draft court filings that contained inaccurate case citations.”

What this N.J. lawyer did with AI landed him a hefty fine and a warning to all attorneys, Sept. 23, 2025 – “A federal judge in New Jersey fined a Fort Lee lawyer $3,000 for submitting fake case law generated by artificial intelligence, one of several recent examples of misuse of AI in the courtroom.”

AI can duplicate a person’s voice so convincingly you’d swear it is someone you’ve known your whole life; it can recreate movies, build phony websites selling products, and do so much more, destroying people’s lives.

The Coming AI Crime Wave Is Already Underway – Here’s How You Can Protect Yourself – Video: “Global cybercrime is expected to cost more than ten trillion dollars this year. Scams and online criminal activity have exploded through the use of artificial intelligence. AI-enabled crimes are already up 456% since last year. Email phishing attacks, identity theft, ransomware attacks, financial scams, and deepfake child pornography are all becoming more sophisticated and prevalent. Artificial intelligence has become the tool of choice for online criminals because it is erasing the line between the real and the fake. Google’s newly announced video generator is about to flood the internet with AI-created clips that have the look of expensive films.”

Read the full story from CBN’s Dale Hurd. Quote: “How can you protect yourself from these scammers? O’Farrell says if you receive a voice or video call from a friend or family member claiming to be in trouble and requesting money or personal information, you need to ask questions.

“Where are you? What jurisdiction are you in? Could I speak to an arresting officer? Could you hold the phone up? Turn on the video and let me see ‘proof of life’. Show me that it is you and you are where you say you are,” O’Farrell says. Rest at the link. Or you get a phone call claiming your child has been kidnapped – bring the money, etc. It’s a cancer spreading worldwide.

Artificial Intelligence: The Terminators Are Coming – “As the Senate debates Trump’s Big Beautiful Bill, Big Tech companies have added wording to it that gives them 10 years of protection against state and local laws that regulate AI”, June 6, 2025. (What could go wrong? Just like giving immunity to vaccine manufacturers no matter how many are maimed or die.) Read that article and watch the video.

I’ve written it before and I’ll say it again: There have been some fabulous medical procedures using AI for severely disabled Americans that have made their lives so much better. But it has to be heavily regulated and confined to specific (and horrific) disabilities. Is it? Not from what I could find.

Do people know how much electricity AI programs and models use? Much to the dismay of the greenies and their Net Zero BS, they need to face reality. Think about what that demand does to power grids that already scrounge for power during extreme, freezing weather like the Polar Vortex that hit us here in Texas in 2021 and killed hundreds (so many froze to death), and during extreme heat in so many states in summer; you’ll wish the AC had power, but all that electricity has to go to keeping AI programs running.

And how much worse is it going to get with AI in just about everything besides medicine: academia (helping students cheat or unknowingly use false information from some AI program), national security (another “goodie” for endless wars), and businesses of every sort on this planet?

You name it, and IMHO it’s a question that doesn’t have a good outcome down the road. However, to be fair, this author provides detailed information he believes will “secure America’s global AI leadership solutions”. A closed mind is a dangerous thing:

Finding the energy to win the global AI race, Oct. 12, 2025

You can decide for yourself, but energy is a real problem, and so is AI, despite all the hype and stock sales. All these data centers already being built to spy on you and control YOUR life, along with dumping grounds for solar panels, take up a lot of land – a wasteland destroying the environment. How long before it hits farm and ranch land?

“Back in 2002, Tom Cruise starred in Minority Report…A film based on Philip K Dick’s novel that saw him sprinting through a future where crimes were predicted and punished before they happened. At the time, it looked like pure science fiction.

“But it wasn’t. It was foreshadowing.  Because predictive policing isn’t “coming”…It’s already here.

“Police departments across the U.S. are now partnering with private tech companies like Palantir and Flock, feeding AI with massive data streams to flag “likely offenders” and “suspicious behavior.”

👁 License plate scans
👁 Social media activity
👁 Your purchases
👁 Your patterns of movement

“The ACLU reports Flock’s AI now alerts police if it thinks your driving looks suspicious — no crime required.  This is the real “pre-crime unit”. And the danger is obvious: algorithms don’t understand context. They don’t see “nuance.

“They just crunch cold hard data, which means anyone — including you — can get flagged and tracked just for living your life, having a slightly accelerated heart-rate, or having an “unpopular opinion”. This is the surveillance state tightening its grip. And it will only get tighter from here.” – Andrew Kaufman, MD

For a thorough, comprehensive education on the Fed, the income tax, education, Medicare, SS, the critical, fraudulent ratification of the Seventeenth Amendment and more, be sure to order my book by calling 800-955-0116 or click the link, “Taking Politics Out of Solutions“. 400 pages of facts and solutions. Order two books and save $10.00

© 2025 Devvy Kidd – All Rights Reserved

E-Mail Devvy: devvyk@protonmail.com

Related:

Data Shows That AI Use Is Now Declining at Large Companies, Sept. 8, 2025: “But if a corporate AI revolution is underway, it’s not showing up in the data. Up to this point, enterprise AI has been incredibly unprofitable, with a whopping 95 percent of US companies that took up AI reporting that the software has failed to generate any sort of new revenue.

“Though tech stocks continue to break new records on AI hype, some financial onlookers are warning that the tech industry might never recoup its spending on AI, and that innovations around the software have hit a plateau.

“It’s all adding up into a pretty disappointing summer for AI overall. In August, OpenAI’s long awaited GPT-5 model, which was expected to show a massive improvement over previous AI releases, hit the net with a thud. Though it’d been expected to be a leap forward toward human-level artificial intelligence, the latest model performed worse on benchmark tests than its peers.”

Who Pays The Bill When Medical Artificial Intelligence Harms Patients?, March 28, 2024

Medical Malpractice Claims Arising out of the Use of Artificial Intelligence

ChatGPT caught lying to developers: New AI model tries to save itself from being replaced and shut down, Dec. 9, 2024

AI means ‘Actually India’: How Salesforce’s AI smokescreen masks a mass labor shift out of America, June 7, 2025
