
What if AI outsmarts humans? Scientists worry

Vernor Vinge, a computer scientist and science-fiction novelist, predicted in 1993 that humanity would be able to create a superhuman intelligence within 30 years. "Shortly after, the human era will be ended," Vinge wrote.

As it happens, 30 years later, the idea of an artificial being that can surpass, or at least match, human abilities is no longer the stuff of speculation or fiction. AI researchers and tech investors are chasing artificial general intelligence (AGI): a system able to perform any intellectual task at a human level. Some experts fear that "the end of the human era" could become a reality if humans actually build AGI.

Vinge popularized the idea of "the Singularity" among futurists. He believed technology would someday create a superintelligence, and that its arrival would affect the planet in ways "comparable to the rise of human life on Earth."

Vinge saw the Singularity as more than just a formidable AI. Biotechnology or electronic implants might one day make the human brain faster and smarter, pairing human intuition and creativity with a computer's processing power and access to information to carry out superhuman tasks. For a more mundane example, imagine how the average smartphone user would amaze a time traveler from 1993.

“Once machines take over science and engineering, the progress is so quick, you can’t keep up,” says University of Louisville computer scientist Roman Yampolskiy.

Yampolskiy sees a microcosm of that future in his own profession, where AI researchers publish new work at a relentless pace. Experts, he says, no longer know the state of the art. "It's evolving too fast."

Superhuman intelligence?

Vinge didn't specify how the Singularity might arrive, but some scientists believe AGI is the most likely route through computer science. Others dismiss the term as useless jargon. It generally denotes a system that equals human performance on any intellectual task.

AGI might in turn lead to superintelligence: an intelligence applied to research could rapidly produce new discoveries and technologies. Imagine an AI system that outperforms any human computer scientist. Now imagine that system redesigning AI systems, each version better at improving the next. Some researchers think this feedback loop could make AI capabilities grow exponentially.
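To make that reasoning concrete, here is a toy model, a minimal sketch only, in Python. It assumes, purely for illustration, that each "generation" of a hypothetical system improves its own capability by a fixed fraction; the function name and the numbers are invented for this sketch and describe no real system.

# Toy model of recursive self-improvement (purely illustrative).
# Assumption: each generation improves its own capability by a fixed
# fraction r, so capability compounds: c_n = c_0 * (1 + r) ** n.
def capability_over_generations(c0: float, r: float, generations: int) -> list[float]:
    """Return the capability level after each self-redesign step."""
    capabilities = [c0]
    for _ in range(generations):
        # The system redesigns itself, gaining a fraction r of its
        # current capability; more capable systems make bigger gains.
        capabilities.append(capabilities[-1] * (1 + r))
    return capabilities

if __name__ == "__main__":
    # Starting at human level (1.0) with a 10 percent gain per step,
    # capability passes 100x the baseline before generation 50.
    for step, c in enumerate(capability_over_generations(1.0, 0.10, 50)):
        if step % 10 == 0:
            print(f"generation {step:2d}: {c:8.1f}x baseline")

Under those assumptions the growth is exponential, which is the shape of the "intelligence explosion" argument; whether real systems would follow anything like this curve is exactly what researchers dispute.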

That could be a problem, because we may never understand why many AI systems behave the way they do. Yampolskiy's research suggests that we could never fully foresee an AGI's capabilities, and without that ability, he believes, we could not control it. The result, he argues, could be disastrous.

Predicting the future is difficult, and AI researchers around the world disagree. In mid-2022, the research group AI Impacts polled 738 researchers on the possibility of a Singularity-like event: 33 percent said such an outcome was "likely" or "quite likely," while 47 percent called it "unlikely" or "quite unlikely."

A distraction from the real issues

AGI and the Singularity are hard to study experimentally because neither has a standard definition, according to University of California, Irvine computer scientist Sameer Singh. "Those are interesting academic things to think about," he says. "From an impact perspective, I think there is a lot more that could happen in society that's not just based on this threshold-crossing."

Singh fears that focusing on speculative futures obscures the real harms of AI's current failures. "When I hear of resources going to AGI and these long-term effects, I feel like it's taking away from the real problems," he adds. Today's models produce racist, sexist, and inaccurate output. AI-generated material often violates copyright and data-privacy rules. And analysts already blame AI for job losses and layoffs.

Declaring "we've reached this science-fiction goal," Singh suggests, is simply more exhilarating than discussing those realities. "That's kind of where I am, and I feel like a lot of the community I work with is."

Do we need AGI?

Reactions to an AI-powered future point to one of several divisions in the community that creates, fine-tunes, scales, and monitors these models. Computer science pioneers Geoffrey Hinton and Yoshua Bengio have voiced regret and a loss of direction over a field they see racing out of control. Some researchers have called for a six-month moratorium on building AI systems more powerful than GPT-4.

Yampolskiy supports a pause, but he doesn't think half a year, or even one or two years, would be enough. "The only way to win is not to do it," he says.
