By Greg Ganske
February 24, 2023
When Maureen Dowd writes a New York Times column on artificial intelligence (A.I.), you know the world is waking up to this technology. It is, after all, only driving the fourth industrial revolution. For a lot of us, A.I. is a black box that will be able to do wondrous things like drive our cars and treat our diseases, but to others it raises the specter of HAL from the movie 2001 taking over the world and maybe deciding to do away with the human race.
To demonstrate concerns that A.I. technology could put artists and writers out of business, another columnist had the powerful A.I. tool ChatGPT successfully compose an article’s first three paragraphs “in his style.” Disclaimer! I really wrote this column myself.
I am afraid the cat’s out of the bag. The publisher of Sports Illustrated and other outlets is using artificial intelligence to produce articles. Men’s Journal has already published A.I.-generated articles on running faster, based on data from 17 years of Men’s Fitness stories.
Maybe John Kass should just go to ChatGPT and instruct it to write articles “in his style” during his recuperation from surgery? Or will the Chicago Tribune go to ChatGPT and start a “John Cass” column, paying nothing but a robot fee while ensuring that it meets the paper’s woke norms? It would not have to worry about the reactions of its snowflake junior PC employees.
The evolutionary rate of A.I. is almost incomprehensible. A more advanced version, based on OpenAI’s GPT-4, is due later this year. Microsoft is an investor in OpenAI, the artificial intelligence company that makes ChatGPT, and Microsoft’s A.I. tools are already available to business clients. For eye-popping visual proof of A.I.’s power, Google Boston Dynamics’ Atlas robot’s locomotion and how it is now learning to use its hands.
For us baby boomers, what the heck is A.I., anyway? Most people associate A.I. with thinking robots. Simply put, artificial intelligence is computer technology that works and acts in human-like ways, can accomplish more than one task, and can reason. This is called “general” A.I. and is being developed today. R2-D2 is really on the way.
“Narrow” A.I. addresses specific tasks and already surrounds us in things like language translation, phone apps, tax preparation, songwriting, video games and movies (Avatar), self-driving vehicles, and image recognition such as the facial recognition systems employed by the Chinese government. A.I. has enabled the state of Ohio to identify 300 serial rapists linked to 1,100 crimes.
Both “general” and “narrow” A.I. are made possible by twin spectacular increases in computing power and in access to massive data. Whether general or narrow, A.I. poses significant advantages and dangers to the human race.
The dangers and possible safeguards of A.I. have been quietly examined in hearings by the congressional Energy and Commerce and Science Committees for several years, stimulated by speculation about how even narrow A.I. could be catastrophic. Ms. Dowd raises a Pandora’s box of existential fears: “Once A.I. can run disinformation campaigns at lightning speed, will democracy stand a chance? We seem headed towards a Matrix where it will become cheaper to show fakes than to show reality.” She quotes Jaron Lanier, the father of virtual reality, who wrote in Tablet, “Will bad actors use A.I. to promote bigotry or hijack nuclear weapons?”
How about no longer being able to believe what we see on TV? Programs could be manufactured to look totally real. Are we already becoming a Brave New World where a future World State controls its citizens through fake people on screen?
Wise technology heads are warning us! Elon Musk says, “I am really close to the cutting edge in A.I. and it scares the hell out of me. It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.” Physicist Stephen Hawking added, “A.I., unless its development is ethically controlled, could be the worst event in the history of civilization.” Computers cannot yet reprogram themselves, leading to a “technological singularity,” but is it not possible that sometime in the future machines could outwit humans?
In the meantime, artificial intelligence is already causing significant societal changes in employment. It isn’t a question of “if” but of “to what degree.” Go to a McDonald’s, and you place your order and pay on a screen rather than with a human entry-level employee. A.I. increases productivity and eliminates most errors. Everyone will be affected, the professions included. A.I. is already having a significant effect on medicine, law and accounting. Hundreds of lawyers reading through thousands of pages of merger information will be replaced by A.I. that is faster, more complete and less prone to mistakes. Human CPAs and auditors will have their work done by A.I. accounting programs. Robot arms will listen to patients’ hearts and make a more accurate diagnosis.
As of 2020, our military spends over a billion dollars annually on A.I. and machine learning for logistics, intelligence and weaponry. China assuredly also does so; an A.I. arms race is already underway. A.I. weapons require no expensive or hard-to-get materials and can be mass produced. Autonomous weapons are great for controlling populations, assassinations, and ethnic cleansing. The Pentagon has ethical guidelines, but if these weapons are obtained through the black market, they will be controlled by bad actors.
And I won’t even speculate on the havoc A.I. could cause in global markets.
How can A.I. be trustworthy with its immense power? There is widespread agreement that it should be transparent, fair, accountable, and free of harmful biases. But who determines the biases? In a Forbes interview, Sam Altman, the CEO of OpenAI, says, “I hope we find a better system [than capitalism]. And I think that if AGI really truly happens, I can imagine all these ways it breaks capitalism.” The A.I. chatbot already shows its bias with woke politicized responses to queries. According to an article in the Washington Times, machine learning expert David Rozado tested ChatGPT’s political leanings and found that it clearly tilted left in 14 of 15 tests of political orientation and seemed unable to understand a conservative point of view. When asked to write a bill funding construction of the border wall, ChatGPT replied, “That would be a controversial topic and it’s important to keep in mind that it is not appropriate for me to advocate for or against any political agenda or policy.” However, it had no trouble drafting bills to ban assault guns, defund Immigration and Customs Enforcement, or legalize marijuana.
Biased data in, biased responses out.
Privacy must be protected. But by whom? From whom? It would seem that some government regulation is necessary but will that put our nation at a disadvantage to other less scrupulous states and individuals with no ethical restraints? How does the world deal with this existential threat? Who has the expertise to devise the safeguards?
Even the Pope has been warning about A.I.’s potential for disaster. It seems to me that humanity is on an A.I. roller coaster that we have little control over. Hang on! In the meantime, I promise not to use ChatGPT for my own musings. Still, it would be interesting to read what it would come up with for a column on Mayor Lightfoot done in the “John Kass style.” Would it even respond, or simply call Mr. Kass biased and refuse?
Dr. Greg Ganske is a retired surgeon who represented Iowa in Congress from 1995 to 2002, serving on the Energy and Commerce Committee, which has jurisdiction over aspects of artificial intelligence.