DeepMind CEO Hassabis: AI may beat more Nobel Prize-level problems

On May 11, 1997, Garry Kasparov fidgeted in a plush leather chair at the Equitable Center in Manhattan, anxiously running his hands through his hair.

It was the final game of his match against IBM's Deep Blue supercomputer, the deciding tiebreaker in the contest between human and silicon. And things were not going well: a serious error early in the game had left Kasparov cornered.

A high-level chess game usually lasts at least four hours, but within an hour Kasparov realized he was doomed. He resigned, leaning over the board to stiffly shake hands with Joseph Hoane, an IBM engineer who had helped develop Deep Blue and who moved the computer's pieces on the board.

Kasparov then staggered toward the audience and shrugged helplessly. At its best, the machine “played like a god,” he later said.

Anyone with an interest in artificial intelligence has probably heard of the grandmaster's defeat. Newsweek called the match "The Brain's Last Stand," and another headline dubbed Kasparov the "guardian of humanity."

If AI could beat the sharpest chess mind in the world, it seemed, computers would soon be able to beat humans at everything, with IBM leading the way.

Today, 25 years later, Deep Blue's victory looks less like a triumph for artificial intelligence than a death knell. The laborious hand-crafting of endless rules, a hallmark of old-fashioned machine intelligence, was about to be overtaken by a rival form of AI, the neural network, and in particular by the technique known as "deep learning."

For all its bulk, Deep Blue was like a "lumbering dinosaur about to be killed by an asteroid," while neural networks were the small mammals that would survive "and change the planet." Yet even today, in a world saturated with everyday AI, computer scientists still debate whether machines can actually think.

In 1989, when IBM began building Deep Blue, the field of artificial intelligence was in a slump. The field had already ridden a roller coaster of dizzying hype and humiliating debacles more than once.

Pioneers in the 1950s, for example, claimed that great advances were imminent; the mathematician Claude Shannon predicted that "within 10 to 15 years, something will emerge from the laboratory which is not too far from the robots of science fiction."

None of that happened. When inventors failed to deliver on the vision, disillusioned investors stopped funding new projects.

The result was the "AI winters" of the 1970s and 1980s.

We now understand why those efforts failed. The creators of early AI tried to handle the messiness of daily life with pure logic, patiently writing a rule for every decision the system would need to make. But the real world is far too vague and nuanced to be managed so rigidly.

Engineers crafted their clockwork masterpieces, or "expert systems" as they called them, and these would work reasonably well until reality threw them a curveball.

For example, a credit card company might set up a system to automatically approve applications, only to discover it had issued a card to a dog or a 13-year-old. The programmers never imagined that pets or minors would apply, so they never wrote rules for those edge cases, and the system could not learn new rules on its own.

AI built from hand-crafted rules is "brittle": when it encounters an odd situation, it breaks. By the early 1990s, the troubles with expert systems had brought on another AI winter.

"A lot of the conversation around AI was, 'Come on, it's just hype,'" said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle, who at the time was a young computer science professor just starting his career in AI.

In that cynical climate, Deep Blue's arrival felt like a strangely ambitious moonshot. The project grew out of Deep Thought, a chess computer built at Carnegie Mellon University by Murray Campbell, Feng-hsiung Hsu, and others.

The name Deep Thought reportedly comes from the farcical artificial intelligence in The Hitchhiker's Guide to the Galaxy which, asked for the meaning of life, produces the answer "42."

Deep Thought performed extremely well: in 1988, it became the first chess computer to beat a grandmaster, Bent Larsen. The Carnegie Mellon team had devised better algorithms for evaluating chess moves and had built custom hardware to run them quickly.

IBM got wind of Deep Thought and decided to mount a "grand challenge": building a computer good enough to beat any human. In 1989 it hired Feng-hsiung Hsu and Campbell and asked them to take on the world's top grandmaster.

In artificial intelligence circles, chess had long held symbolic power: two opponents pitted against each other in the realm of pure thought. Beating Kasparov would be guaranteed headlines.

To build Deep Blue, Campbell and his team kept designing new chips to calculate chess positions faster, and hired grandmasters to help improve the algorithms for evaluating the next move.

Efficiency mattered: there are more possible chess games than atoms in the universe, and not even a supercomputer can think through all of them in a reasonable amount of time.

To play a move, Deep Blue would look ahead at the possible continuations, "prune" the branches that didn't look promising, search deeper down the promising paths, and repeat the process several times over.
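To make that search-and-prune idea concrete, here is a minimal sketch of alpha-beta game-tree search in Python. It is illustrative only: `evaluate`, `legal_moves`, and the `position` methods are hypothetical stand-ins for a real engine's move generation and scoring, which in Deep Blue's case ran on custom hardware.

```python
# Minimal alpha-beta search: look ahead, prune unpromising branches,
# go deeper on promising ones. The position interface below
# (legal_moves, apply, is_game_over) and the static evaluate()
# function are hypothetical stand-ins, not Deep Blue's actual code.

def alphabeta(position, depth, alpha, beta, maximizing):
    """Return the best score reachable within `depth` plies."""
    if depth == 0 or position.is_game_over():
        return evaluate(position)  # static score of this position

    if maximizing:
        best = float("-inf")
        for move in position.legal_moves():
            score = alphabeta(position.apply(move), depth - 1,
                              alpha, beta, False)
            best = max(best, score)
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line
        return best
    else:
        best = float("inf")
        for move in position.legal_moves():
            score = alphabeta(position.apply(move), depth - 1,
                              alpha, beta, True)
            best = min(best, score)
            beta = min(beta, best)
            if beta <= alpha:
                break  # prune from the minimizing side
        return best
```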

"We thought it would take five years; it actually took a little more than six," Campbell said. By 1996, IBM decided it was finally ready to face Kasparov, and the match was set for February of that year. Campbell and his team were still frantically rushing to finish Deep Blue. "The system had only been working for a few weeks before we actually played," he said.

Deep Blue debuted as promised, but the match did not go IBM's way: Deep Blue won one game, while Kasparov won three and took the match. IBM asked for a rematch, and Campbell's team spent the following year building faster hardware.

By the time they were done, Deep Blue consisted of 30 PowerPC processors and 480 custom chess chips; they had also hired more grandmasters, four or five at any given point, to help craft better algorithms for parsing chess positions.

When Kasparov and Deep Blue met again in May 1997, the computer was twice as fast, evaluating 200 million chess positions per second.

Still, IBM was not confident of a victory, Campbell recalled: “We expected it to be a tie.”


The reality, however, was far more dramatic. Kasparov took the lead by winning the first game. Then, on move 36 of the second game, Deep Blue did something he didn't expect.

Deep Blue played in the traditional computer-chess style, one derived from machines' sheer brute force. Computers outclass humans at short-term tactics; Deep Blue could easily deduce the best option a few moves ahead.

What computers had traditionally been bad at was strategy: the ability to think about the shape of a game many moves into the future. That was where humans still held the advantage.

Or so Kasparov thought, until Deep Blue's move in game 2 shocked him. It looked so sophisticated that he began to worry: maybe the machine was far better than he had imagined. Convinced he had no way to win, he resigned the second game.

But he shouldn't have. Deep Blue, it turned out, wasn't actually that good; Kasparov had simply missed a move that would have let him force a draw.

He had psyched himself out. Worrying that the machine was far more powerful than it really was, he had begun to see human-like reasoning where none existed.

His play only got worse from there; he rattled himself again and again. In the sixth, winner-takes-all game, he made a move so bad that chess watchers cried out in shock. "I wasn't in the mood to play chess at all," he later told a news conference.

Amid the media frenzy that followed Deep Blue's success, IBM's market cap rose $11.4 billion in a single week. More significant, though, was that the victory felt like a thaw in AI's long winter. If chess could be conquered, what was next? The public's attention and curiosity were piqued.

“That’s what gets people’s attention,” Campbell told me.

The truth is, it wasn't surprising that a computer beat Kasparov. Most people who had been following AI and chess expected it to happen eventually.

Chess may seem like the pinnacle of human thought, but it isn't. In fact, it is a mental task quite amenable to brute-force computation: the rules are clear, there is no hidden information, and the computer doesn't even need to keep track of earlier moves. It just evaluates the current position of the pieces.

Everyone knew that once computers got fast enough, they would overwhelm humans; it was only a matter of time. By the mid-1990s, "the writing was already on the wall, in a sense," said Demis Hassabis, head of DeepMind, an artificial intelligence company owned by Alphabet.

Deep Blue's victory showed just how limited hand-coded systems could be. IBM had spent years and millions of dollars developing a computer that could play chess, and it couldn't do anything else.

"Deep Blue's victory didn't make AI have a huge impact on the world," Campbell said, because the team hadn't really discovered any principles of intelligence, and the real world isn't like chess.

He added: “There are very few problems like chess where you have all the information you might need to make the right decision. Most of the time, there are unknowns and randomness.”

Even as Deep Blue was sparring with Kasparov, a handful of upstarts working far from the limelight were pursuing a more promising form of artificial intelligence: the neural network.

With neural networks, the idea is not to patiently write rules for every decision the AI makes, as with an expert system. Instead, training and reinforcement strengthen internal connections, roughly mimicking (so the theory goes) how the human brain learns.
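As a toy illustration of "strengthening internal connections" rather than writing rules, here is a minimal two-layer network in Python/NumPy that learns XOR from examples by gradient descent. It is a sketch of the general technique, not any production system; the network size and learning rate are arbitrary choices.

```python
import numpy as np

# Training data: XOR, a mapping no single hand-written linear rule captures.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden connections
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error w.r.t. each connection.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Nudge every connection slightly; this is the "learning".
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```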

The idea has been around since the 1950s. But training a useful large neural network requires lightning-fast computers, lots of memory, and lots of data. None of these were readily available at the time. Even in the 1990s, neural networks were considered a waste of time.

"At that time, most people in the AI field thought neural networks were just crap," said Geoff Hinton, a professor emeritus of computer science at the University of Toronto and a pioneer of the field. "I was called a 'true believer,'" he added. It wasn't a compliment.

But by the 2000s, the computer industry was evolving in ways that made neural networks viable. Video gamers' appetite for ever-better graphics had created a huge industry in ultra-fast graphics processing units, which turned out to be perfectly suited to neural network math.

At the same time, the internet was exploding, producing a flood of images and text that could be used to train these systems.

By the early 2010s, these technological leaps let Hinton and his fellow "true believers" take neural networks to new heights. They could now build networks with many layers of neurons, which is what the "deep" in "deep learning" means.

In 2012, his team resoundingly won the annual ImageNet competition, in which AI systems compete to identify the contents of images. It shocked the world of computer science: self-learning machines were finally feasible.

Ten years into the deep learning revolution, neural networks and their pattern-recognition abilities have colonized every corner of daily life. They help Gmail autocomplete your sentences, help banks detect fraud, let photo apps automatically recognize faces, and, in the case of OpenAI's GPT-3 and DeepMind's Gopher, write long, human-sounding essays and summarize texts.

They are even changing how science is done. In 2020, DeepMind unveiled AlphaFold 2, an AI that predicts how proteins fold, a superhuman skill that could help guide researchers toward new drugs and treatments.

Meanwhile, Deep Blue vanished, leaving no useful inventions in its wake. Chess-playing, it turns out, wasn't a computer skill anyone needed in everyday life. "What Deep Blue ultimately showed was the shortcomings of trying to handcraft everything," said DeepMind founder Demis Hassabis.

IBM tried to remedy this with another specialized system, Watson, designed to tackle a more practical problem: getting a machine to answer questions. It used statistical analysis of enormous amounts of text to achieve language understanding that was cutting-edge for its time, going beyond purely hand-coded rules.

But Watson's timing was unlucky: only a few years later it was eclipsed by the deep learning revolution, which brought in a generation of language models far more nuanced than Watson's statistical techniques.

Deep learning has outpaced old-fashioned AI precisely because "pattern recognition is incredibly powerful," said Daphne Koller, a former Stanford professor who founded and runs Insitro, a company that uses neural networks and other forms of machine learning to investigate new drug treatments.

The flexibility of neural networks, the sheer variety of ways pattern recognition can be applied, is why another AI winter has yet to arrive. "Machine learning does bring value," she said, something the previous booms in artificial intelligence never did.

The contrasting fates of Deep Blue and neural networks show how poor we have long been at judging what is hard, and what is valuable, in artificial intelligence.

For decades, mastering chess was assumed to matter because it is so hard for humans to play at the highest level. But chess turned out to be fairly easy for computers to master, because it is so logical.

What is far harder for a computer to learn is the casual, unconscious mental work humans do, like carrying on a lively conversation, piloting a car through traffic, or reading a friend's emotional state.

We do these things so effortlessly that we rarely realize how tricky they are, and how much fuzzy, gray-area judgment they require. Deep learning's great utility comes from capturing small slices of this subtle, unspoken human intelligence.

Still, the field of artificial intelligence has won no final victory. Deep learning may be riding high today, but it is also amassing sharp criticism.

"For a very long time there was this techno-chauvinist enthusiasm: AI will solve every problem!" said Meredith Broussard, a journalism professor at New York University and author of Artificial Unintelligence.

But as she and other critics point out, deep learning systems are often trained on biased data and absorb those biases.

The computer scientists Joy Buolamwini and Timnit Gebru found that three commercially available visual AI systems were terrible at analyzing the faces of darker-skinned women. Amazon trained an AI to vet resumes, only to find that it ranked women lower.

While computer scientists, and many AI engineers, are now aware of these bias issues, they are not always sure how to deal with them.

On top of that, neural networks are also “giant black boxes,” says Daniela Rus, an artificial intelligence expert who currently runs MIT’s Computer Science and Artificial Intelligence Laboratory.

Once a neural network is trained, its mechanics are not easy to understand. It is unclear how it reaches its conclusions, or how it will fail.

Rus believes that relying on a black box may be fine for tasks that are not "safety critical." But what about higher-stakes work, like autonomous driving? "It's really amazing that we can put so much trust and confidence in them," she said.

This is where Deep Blue had the advantage. The old-school style of hand-crafted rules may have been brittle, but it was comprehensible. The machine was complex, but it was no mystery. Ironically, that old style of programming may be due for a comeback as engineers and computer scientists grapple with the limits of pattern matching.

Language generators such as OpenAI's GPT-3 or DeepMind's Gopher can take a few sentences you have written and continue them, producing page after page of plausible-sounding prose.

But despite some impressive imitations, Gopher “still couldn’t really understand what it was talking about,” Hassabis said, “not in a real sense.”

Similarly, visual AI can make terrible mistakes when it meets an edge case. Self-driving cars have slammed into fire trucks parked on highways because, in the millions of hours of video they were trained on, that situation had never come up. Neural networks have, in their own way, a "brittleness" problem.

What AI really needs in order to move forward, many computer scientists now suspect, is the ability to know facts about the world and to reason about them. A self-driving car cannot rely on pattern matching alone; it also needs common sense, knowing what a fire truck is and why one parked on the highway spells danger.

The problem is that no one yet knows how to build neural networks that can reason or use common sense. Gary Marcus, a cognitive scientist and co-author of Rebooting AI, suspects the future of AI will require a "hybrid" approach: neural networks to learn patterns, guided by some old-fashioned, hand-coded logic. This would, in a sense, marry the strengths of Deep Blue to the strengths of deep learning.
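A minimal sketch of what such a hybrid might look like in Python, under obvious assumptions: `detect_objects` is a hypothetical placeholder for a trained neural network's perception output, while the rule layered on top is hand-coded, inspectable logic in the Deep Blue tradition.

```python
# Hybrid "neural + symbolic" sketch. The learned component supplies
# perceptions; the hand-coded component supplies the common-sense rule.

def detect_objects(camera_frame):
    """Hypothetical stand-in for a trained neural network: returns
    (label, confidence) pairs for objects found in the frame."""
    return [("fire_truck", 0.93), ("lane_marking", 0.88)]

# Hand-coded, human-readable common sense: these things block the road.
HAZARDS = {"fire_truck", "ambulance", "stopped_vehicle"}

def should_brake(camera_frame, threshold=0.8):
    """Brake if any confidently detected object is a known hazard,
    even if this exact scene never appeared in the training data."""
    return any(label in HAZARDS and confidence >= threshold
               for label, confidence in detect_objects(camera_frame))

print(should_brake(camera_frame=None))  # True: a fire truck was "seen"
```

The appeal of the symbolic half is exactly what the article notes about Deep Blue: the rule is brittle, but anyone can read it and see why the system braked.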

Die-hard deep learning enthusiasts disagree. Hinton believes that in the long run neural networks should be perfectly capable of reasoning. After all, humans reason, "and the brain is a neural network." Hand-coded logic drives him crazy; it runs into the flaw of every expert system, which is that you can never anticipate all the common sense you would want to give a machine.

The way forward, Hinton said, is to keep innovating on neural networks, exploring new architectures and new learning algorithms that more accurately mimic how the brain itself works. Computer scientists are dabbling in a variety of approaches.

At IBM, Deep Blue developer Campbell is working on "neuro-symbolic" AI, which works a bit the way Marcus proposes. Etzioni's lab is trying to build common-sense modules for AI that include both trained neural networks and traditional computer logic, though so far it is early days.

The future may look less like an outright victory for either Deep Blue or neural networks, and more like a Frankensteinian fusion of the two.

Given that artificial intelligence is probably here to stay, how will we humans live with it? Will we end up like Kasparov, defeated because AI is so good at "thinking work" that we cannot compete?

Kasparov himself doesn't think so. Soon after losing to Deep Blue, he decided that fighting an AI made no sense. The machine "thinks" in a fundamentally inhuman way, using brute-force math, and it will always have the better tactics and the better short-term firepower.

So why compete? Instead, why not cooperate?

After the Deep Blue match, Kasparov invented "advanced chess," in which human and silicon work together. Two players face each other as usual, but each also wields a laptop running chess software to help explore possible moves.

When Kasparov began hosting advanced chess matches in 1998, he quickly noticed something fascinating: amateurs could punch above their weight. In one human-plus-machine event in 2005, a pair of amateur players beat several grandmasters to take the top prize.

How did they beat the best chess minds? The hobbyists understood better than anyone how to collaborate with the machine: how to explore ideas quickly, when to take the computer's advice, and when to ignore it. Some leagues still host advanced chess tournaments today.

“The future lies in finding ways to combine human and machine intelligence to reach new heights and do things that neither can do alone,” Kasparov told me in an email.

Of course, neural networks behave differently from chess engines. But many luminaries emphatically share Kasparov's vision of humans and AI working together. DeepMind's Hassabis sees AI as a way forward for science, one that will guide humans to new breakthroughs.

“I think we’re going to see a huge boom,” he said, “and we’re going to start seeing Nobel Prize-level scientific challenges being knocked down one by one.”

Koller's company, Insitro, likewise uses AI as a collaborative tool for researchers. "We're playing a human-machine hybrid game," she said.

Will there come a day when AI's reasoning is so humanlike that humans truly have nothing left to offer, and AI takes over all the thinking? Possibly. But even these cutting-edge scientists cannot predict when that will happen, if it ever does.

So, 25 years after that famous match, this may be Deep Blue's last gift. In defeat, Kasparov glimpsed the true endgame for AI and humanity. "We're going to be more and more stewards of algorithms," he told me, "and use them to improve our creative output, our risk-taking souls."

Support: Da Yi, Lu Yuqing

Original: https://www.technologyreview.com/2022/02/18/1044709/ibm-deep-blue-ai-history/
