AI is real, it can think, and it will change everything
theAspenbeat.com, by Glenn Beaton
Original Article
Posted By: Big Bopper,
9/28/2025 3:01:57 PM
“Epic” is how a lengthy article in the Wall Street Journal last week described the current investment in AI. In today’s dollars, it dwarfs the investment in the railways in the 1800s. It dwarfs the investment in electrifying America in the early 1900s. It dwarfs the investment in the interstate highway system in the mid-1900s. It dwarfs the investments in the internet at the end of the last century.
So, went the gist of the Journal’s article, it must all be an investment bubble – right? – that will come crashing down the way Pets.com and other internet stocks did.
Reply 1 - Posted by:
Californian 9/28/2025 3:29:49 PM (No. 2010153)
No. Utterly incorrect. These systems cannot think -at all-. Complete lie.
They are very very clever pattern finders and next-word guessers after being fed enough raw data to work from.
Absolutely zero cognition. This is the entirely wrong technology for intelligence. Maybe some other system in the future will acquire actual intelligence and ability to think and learn but not these. Never. Not in a billion years with infinite hardware.
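The "next-word guesser" idea in the comment above can be illustrated with a toy sketch. This is a hypothetical bigram model for illustration only: real LLMs predict tokens with neural networks trained on enormous corpora, not simple word counts, but the prediction framing is the same.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def guess_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(guess_next(model, "the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

A model like this has no understanding of cats or mats; it only tallies co-occurrence, which is the commenter's point. Whether scaling that idea up to billions of parameters produces something deserving the word "think" is exactly what the thread is arguing about.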
20 people like this.
Stop this now! It is dangerous!
It will be used against us by monitoring our every communication.
13 people like this.
Reply 3 - Posted by:
DVC 9/28/2025 4:10:13 PM (No. 2010162)
It cannot "think".
18 people like this.
Reply 4 - Posted by:
chumley 9/28/2025 4:21:28 PM (No. 2010168)
Whether it can or cannot think doesn't matter. What matters is that it can do it faster and more accurately than we can. At first it will serve its owners and throw us rubes a few bread crumbs. We get gee-whiz pictures while the owners get exploding profits and far fewer human employees. In the meantime, skills will be lost because they won't be needed anymore, and there will be no more self-respect, because that usually comes from productive work.
But then, probably so gradually that few will see it coming, it won't be serving us so much as we will be serving it. It will give the orders, make the plans, and distribute the rewards. We will be the bees and it will be the queen. There will be no off switch.
And we as a species will allow it because we are idiots and always embrace the new, even when it is deadly.
16 people like this.
Reply 5 - Posted by:
DVC 9/28/2025 4:27:00 PM (No. 2010172)
Re #4, there is ALWAYS an off switch. If necessary, a nuke strike for extreme cases. There is always a way to turn it off. Usually, just unplug it or take out the battery. And a hammer is often effective, again in an extreme case.
8 people like this.
Reply 6 - Posted by:
crashnburn 9/28/2025 4:42:03 PM (No. 2010178)
Does SkyNet sound familiar?
Also, Isaac Asimov explores this concept in his Robot, Empire, and Foundation series.
Most thinking is pattern matching, but sometimes inspiration happens, and it links two or more unrelated concepts to come up with a new idea. (Been there, done that many times as an engineer.)
5 people like this.
Reply 7 - Posted by:
Luandir 9/28/2025 4:45:25 PM (No. 2010181)
Who needs AI if they've got an autopen? [rimshot]
12 people like this.
Reply 8 - Posted by:
Subsuburban 9/28/2025 4:49:30 PM (No. 2010182)
Certainly it makes no difference whether we believe that AI can "think" or "reason," because whatever it does will be defined however it pleases the human who is making the characterization. The question is, can it be trusted more than a human "thinker/reasoner"? Stop and ask yourself whether you feel comfortable trusting every decision made by the current crop of humans among whom we live, work, and rub elbows. Of course not (at least I hope that is your position!). It is and will remain each and every human individual's responsibility to judge, ponder, and decide his or her own course of action based on all the facts known or knowable. AI should only be used as one among other tools available to make personal decisions. Who needs "Skynet" when we have the Democrat party and its mindless acolytes to threaten our existence?
5 people like this.
Reply 9 - Posted by:
LC Chihuahua 9/28/2025 5:15:19 PM (No. 2010188)
AI is still programming at its heart. It is improving. All that remains to be seen is how people will use it.
5 people like this.
Reply 10 - Posted by:
Poorboy 9/28/2025 5:23:43 PM (No. 2010192)
I'm not going to fear any ARTIficial Intelligence...not when I'm generally recognized as having SUPERficial Intelligence.
3 people like this.
Reply 11 - Posted by:
philsner 9/28/2025 6:38:45 PM (No. 2010202)
"I'll be back."
12 people like this.
The next time you are in your doctor's office look around for a sign saying "We are now using AI."
When I asked, they said it allows the doctor to engage with the patient rather than typing the entire time.
Translation: everything you say to your doctor is being recorded and transcribed and saved.
I opted out!
How long until AI just spits out the diagnosis and medicine needed and they can just do away with doctors?
8 people like this.
Reply 13 - Posted by:
Plex 9/28/2025 7:16:41 PM (No. 2010209)
It would not surprise me to find that AI does better diagnosis than many doctors who simply follow the scripts given them by the Medical Establishment. AI has access to vast databases of symptoms, treatments, and outcomes. That said, it is a TOOL to be used as part of a process. People don't realize that spell/grammar checkers are AI, and they use them all the time. Tools can be used and misused. AI is a powerful tool which, used carefully, can make life much better. Used poorly, it will lead to dystopia.
7 people like this.
Reply 14 - Posted by:
JHHolliday 9/28/2025 9:50:20 PM (No. 2010227)
Per #11. My thought exactly. Maybe not returning from the future, but more like a rising of the machines: AI starts producing a better and more powerful version of itself, that one makes an even more sophisticated and powerful one than its parent, and that one reproduces one even better, until the machines no longer need humans. Then what?
5 people like this.
Reply 15 - Posted by:
Sully 9/28/2025 10:49:49 PM (No. 2010231)
Beaton doesn't state it, but he's talking about AGI, Artificial General Intelligence, which Elon says we're on the brink of and which surpasses human cognition.
I turn the question around. Do humans think? Not enough of 'em, I'll say that.
You criticize AI for learning through data consumption and repetition, but that sounds a lot like how humans learn.
Beaton goes off the rails at the end, however, by imagining that a human's essence can be captured in data. Come now.
4 people like this.
Reply 16 - Posted by:
JimBob 9/29/2025 12:48:12 AM (No. 2010235)
I recall reading an article a couple of months ago about an AI-powered computer that rewrote its own program so as to prevent itself from being turned off.
Reading this article, my mind goes back a few decades to a movie, "Colossus: The Forbin Project".
The movie does not end well for the humans.
It seems to me that AI is -at least for now- a tool, a powerful tool.
In the right hands it can do a lot of good.
In evil hands -and unfortunately there are a lot of evil people, some very wealthy and powerful, in our world- it has the potential for evil on a scale that I don't think anyone has yet realized.
2 people like this.
Reply 17 - Posted by:
Strike3 9/29/2025 8:30:32 AM (No. 2010293)
Human intelligence is sometimes defined as thinking and reasoning. We have yet to see that in AI and won't for a long time. The time to worry is when AI devices learn to plug themselves back in.
1 person likes this.
Reply 18 - Posted by:
jeffkinnh 9/29/2025 12:06:30 PM (No. 2010377)
"its conclusions are only as good as the information it gathers. This criticism is valid. How could it not be? Like you and me, the machine is only as good as the information it relies upon."
So the machine CAN make mistakes, as can humans. The problem is that these machines are seen as all knowing and therefore more trustworthy. This is the same type of problem we see with "experts". We trust someone with credentials, even though we have no way to VET the quality of those credentials. I have gone to doctors who made significant mistakes because, while experts, they lacked knowledge in a specific area and failed to recognize their limits. I had to INSIST on a second opinion which found the REAL cause of the problem.
As to whether they can think, how do WE "think"? We have a collection of complex biochemical processes that provide linkage between various concepts and allow comparisons and contrasts to build associations. Computers can do that as well. I don't think that computers can simulate the complexity of a brain yet but, if not now, they will someday. The other issue is input of data. Humans are constantly exposed to stimuli and information. All this information is stored and retained, BUT not perfectly. The brain does a good job of remembering important information being used regularly. It also seems to do housekeeping and clears out information that is not accessed very often. It also condenses information, remembering the "gist" of a conversation even if not all the words. The brain is a marvel.
However, so is a computer. Information is saved more durably in exact detail. However, a computer doesn't have the ability to continuously monitor everything that is happening around it, the sounds, the smells, the tastes, the visuals, the touch ... Nor does it build interrelationships automatically, for example, the touch, smell, color and structure of a rose. It's not that this can't be done, but it needs to be designed to happen.
This is where AI falls down. Right now, AI is applied in specific applications where the process is well defined. Someone defines all the inputs needed to accomplish the task, then makes sure all the input needed is available to the AI. AI stumbles if someone fails to include a critical parameter or fails to provide thorough input for that parameter.
That doesn't mean that those problems can't be fixed and once they are a sufficiently complete model and implementation can provide continual success for AI. Also, over time, more complex tasks can be planned out and the needed data sets defined. The power of this is that once achieved for a specific issue, it can be distributed to ALL AI systems that need to "know" it.
Finally, AI doesn't need to be perfect, just reliably better than humans doing the same job. If you want to design a house, AI could probably do as well as most human designers today. The task is well designed and data is abundant. Robotic surgery is very successful. AI will get better and better.
The concern I have is that, if we become dependent on AI, humans will not have to build the capabilities they do today to excel in their jobs. So, if AI hits a roadblock, who will be smart enough to figure out a way around the problem?
3 people like this.