TECH
DeepMind and Google: the battle to control artificial intelligence
Demis Hassabis founded a company to build the world’s most powerful AI. Then Google bought him out. Hal Hodson asks who is in charge



Mar 1st 2019
BY HAL HODSON


One afternoon in August 2010, in a conference hall perched on the edge of San Francisco Bay, a 34-year-old Londoner called Demis Hassabis took to the stage. Walking to the podium with the deliberate gait of a man trying to control his nerves, he pursed his lips into a brief smile and began to speak: “So today I’m going to be talking about different approaches to building…” He stalled, as though just realising that he was stating his momentous ambition out loud. And then he said it: “agi”.

agi stands for artificial general intelligence, a hypothetical computer program that can perform intellectual tasks as well as, or better than, a human. agi will be able to complete discrete tasks, such as recognising photos or translating languages, which are the single-minded focus of the multitude of artificial intelligences (ais) that inhabit our phones and computers. But it will also add, subtract, play chess and speak French. It will understand physics papers, compose novels, devise investment strategies and make delightful conversation with strangers. It will monitor nuclear reactions, manage electricity grids and traffic flow, and effortlessly succeed at everything else. agi will make today’s most advanced ais look like pocket calculators.


The only intelligence that can currently attempt all these tasks is the kind that humans are endowed with. But human intelligence is limited by the size of the skull that houses the brain. Its power is restricted by the puny amount of energy that the body is able to provide. Because agi will run on computers, it will suffer none of these constraints. Its intelligence will be limited only by the number of processors available. agi may start by monitoring nuclear reactions. But soon enough it will discover new sources of energy by digesting more physics papers in a second than a human could in a thousand lifetimes. Human-level intelligence, coupled with the speed and scalability of computers, will make problems that currently appear insoluble disappear. Hassabis told the Observer, a British newspaper, that he expected agi to master, among other disciplines, “cancer, climate change, energy, genomics, macro-economics [and] financial systems”.

The conference at which Hassabis spoke was called the Singularity Summit. “The Singularity” refers to the most likely consequence of the advent of agi, according to futurists. Because agi will process information at high speed, it will become very smart very quickly. Rapid cycles of self-improvement will lead to an explosion of machine intelligence, leaving humans choking on silicon dust. Since this future is constructed entirely on a scaffolding of untested presumptions, it is a matter of almost religious belief whether one considers the Singularity to be Utopia or hell.

Judging by the titles of talks, the attendees at the conference tended towards the messianic: “The Mind and How to Build One”; “ai against Aging”; “Replacing Our Bodies”; “Modifying the Boundary between Life and Death”. Hassabis’s speech, by contrast, appeared underwhelming: “A Systems Neuroscience Approach to Building agi”.


Hassabis paced between the podium and a screen, speaking at a rapid clip. He wore a maroon jumper and a white button-down shirt like a schoolboy. His slight stature seemed only to magnify his intellect. Up until now, Hassabis explained, scientists had approached agi from two directions. On one track, known as symbolic ai, human researchers tried to describe and program all the rules needed for a system that could think like a human. This approach was popular in the 1980s and 1990s, but hadn’t produced the desired results. Hassabis believed that the brain’s mental architecture was too subtle to be described in this way.

The other track comprised researchers trying to replicate the brain’s physical networks in digital form. This made a certain kind of sense. After all, the brain is the seat of human intelligence. But those researchers were also misguided, said Hassabis. Their task was on the same scale as mapping every star in the universe. More fundamentally, it focused on the wrong level of brain function. It was like trying to understand how Microsoft Excel works by tearing open a computer and examining the interactions of the transistors.

Instead, Hassabis proposed a middle ground: agi should take inspiration from the broad methods by which the brain processes information – not the physical systems or the particular rules it applies in specific situations. In other words it should focus on understanding the brain’s software, not its hardware. New techniques like functional magnetic resonance imaging (fmri), which made it possible to peer inside the brain while it engaged in activities, had started to make this kind of understanding feasible. The latest studies, he told the audience, showed that the brain learns by replaying experiences during sleep, in order to derive general principles. ai researchers should emulate this kind of system.

A logo appeared in the lower-right corner of his opening slide, a circular swirl of blue. Two words, closed up, were printed underneath it: DeepMind. This was the first time the company had been referred to in public. Hassabis had spent a year trying to get an invitation to the Singularity Summit. The lecture was an alibi. What he really needed was one minute with Peter Thiel, the Silicon Valley billionaire who funded the conference. Hassabis wanted Thiel’s investment.

Hassabis has never spoken about why he wanted Thiel’s backing in particular. (Hassabis refused multiple interview requests for this article through a spokesperson. 1843 spoke to 25 sources, including current and former employees and investors. Most of them spoke anonymously, as they were not authorised to talk about the company.) But Thiel believes in agi with even greater fervour than Hassabis. In a talk at the Singularity Summit in 2009, Thiel had said that his biggest fear for the future was not a robot uprising (though with an apocalypse-proof bolthole in the New Zealand outback, he’s better prepared than most people). Rather, he worried that the Singularity would take too long coming. The world needed new technology to ward off economic decline.

DeepMind ended up raising £2m; Thiel contributed £1.4m. When Google bought the company in January 2014 for $600m, Thiel and other early investors earned a 5,000% return on their investment.


For many founders, this would be a happy ending. They could slow down, take a step back and spend more time with their money. For Hassabis, the acquisition by Google was just another step in his pursuit of agi. He had spent much of 2013 negotiating the terms of the deal. DeepMind would operate as a separate entity from its new parent. It would gain the benefits of being owned by Google, such as access to cash flow and computing power, without losing control.

Hassabis thought DeepMind would be a hybrid: it would have the drive of a startup, the brains of the greatest universities, and the deep pockets of one of the world’s most valuable companies. Every element was in place to hasten the arrival of agi and solve the causes of human misery.

Demis Hassabis was born in north London in 1976 to a Greek-Cypriot father and a Chinese-Singaporean mother. He was the eldest of three siblings. His mother worked at John Lewis, a British department store, and his father ran a toy shop. He took up chess at the age of four, after watching his father and uncle play. Within weeks he was beating the grown-ups. By 13 he was the second-best chess player in the world for his age. At eight, he taught himself to code on a basic computer.

Hassabis completed his a-levels in 1992, two years ahead of schedule. He got a job programming videogames with Bullfrog Productions. Hassabis wrote Theme Park, in which players designed and ran a virtual amusement park. It was a huge success, selling 15m copies and forming part of a new genre of simulation games in which the goal is not to defeat an opponent but to optimise the functioning of a complex system like a business or a city.

As well as making games, he was brilliant at playing them. As a teen, he’d run between floors at board-game competitions to compete in simultaneous bouts of chess, scrabble, poker and backgammon. In 1995, while studying computer science at Cambridge University, Hassabis wandered into a student Go tournament. Go is an ancient board game of strategy that is considerably more complex than chess. Mastery is supposed to require intuition acquired by long experience. No one knew if Hassabis had even played before.

First, Hassabis won the beginners’ tournament. Then he beat the winner of the experienced players, albeit with a handicap. Charles Matthews, the Cambridge Go master who ran the tournament, remembers the expert player’s shock at being thrashed by a 19-year-old novice. Matthews took Hassabis under his wing.

Hassabis’s intellect and ambition have always expressed themselves through games. Games, in turn, sparked his fascination with intelligence. As he observed his own development at chess, he wondered whether computers might be programmed to learn as he had, through accumulated experience. Games offered a learning environment that the real world couldn’t match. They were neat and contained. Because games are hived off from the real world, they can be practised without interference and mastered efficiently. Games speed up time: players build a crime syndicate in a couple of days and fight the battle of the Somme in minutes.


In the summer of 1997, Hassabis travelled to Japan. That May, ibm’s Deep Blue computer had beaten Garry Kasparov, the world chess champion. This was the first time a computer had defeated a grandmaster at chess. The match captured the world’s attention and raised concerns over the growing power and potential menace of computers. When Hassabis met Masahiko Fujuwarea, a Japanese board-game master, he spoke of a plan that would combine his interests in strategy games and artificial intelligence: one day he would build a computer program to beat the greatest human Go player.

Hassabis approached his career methodically. “At the age of 20, Hassabis was taking a view that certain things have to be in place before he could go into artificial intelligence at the level he wanted to,” says Matthews. “He had a plan.”

In 1998 he started a game studio of his own called Elixir. Hassabis focused on one hugely ambitious game, Republic: The Revolution, an intricate political simulation. Years earlier, when still in school, Hassabis had told his friend Mustafa Suleyman that the world needed grand simulations in order to model its complex dynamics and solve the toughest social problems. Now, he tried to do so in a game.

His aspirations proved harder than expected to wrangle into code. Elixir eventually released a pared-down version of the game, to lukewarm reviews. Other games flopped (one was a Bond-villain simulator called Evil Genius). In April 2005 Hassabis shut down Elixir. Matthews believes that Hassabis founded the company simply to gain managerial experience. Now, Hassabis lacked just one crucial area of knowledge before embarking on his quest for agi. He needed to understand the human brain.

In 2005, Hassabis started a phd in neuroscience at University College London (ucl). He published influential research on memory and imagination. One paper, which has since been cited over 1,000 times, showed that people with amnesia also had difficulty imagining new experiences, suggesting that there is a connection between remembering and creating mental images. Hassabis was building up an understanding of the brain required to tackle agi. Most of his work came back to one question: how does the human brain obtain and retain concepts and knowledge?


Hassabis officially founded DeepMind on November 15th 2010. The company’s mission statement was the same then as it is now: to “solve intelligence”, and then use it to solve everything else. As Hassabis told the Singularity Summit attendees, this means translating our understanding of how the brain accomplishes tasks into software that uses the same methods to teach itself.

Hassabis does not pretend that science has fully comprehended the human mind. The blueprint for agi could not simply be drawn from hundreds of neuroscience studies. But he clearly believes enough is known to begin work on agi in the manner he would like. Yet it is possible that his confidence outruns reality. We still know very little for certain about how the brain actually functions. In 2018 the findings of Hassabis’s own phd were called into question by a team of Australian researchers. The statistics are devilish and this is just a single paper, but it shows that the science that underwrites DeepMind’s work is far from settled.

Suleyman and Shane Legg, an agi-obsessed New Zealander whom Hassabis also met at ucl, joined as co-founders. The firm’s reputation grew rapidly. Hassabis reeled in talent. “He’s a bit of a magnet,” says Ben Faulkner, DeepMind’s former operations manager. Many new recruits came from Europe, beyond the terrible gaze of Silicon Valley giants like Google and Facebook. Perhaps DeepMind’s greatest accomplishment was moving early to hire and retain the brightest and best. The company set up shop in the attic of a terraced house on Russell Square in Bloomsbury, across the road from ucl.

One machine-learning technique that the company focused on grew out of Hassabis’s twin fascination with games and neuroscience: reinforcement learning. Such a program is built to gather information about its environment, then learn from it by repeatedly replaying its experiences, much like the description that Hassabis gave of human-brain activity during sleep in his Singularity Summit lecture.

Reinforcement learning starts with a computational blank slate. The program is shown a virtual environment about which it knows nothing but the rules, such as a simulation of a game of chess or a video game. The program contains at least one component known as a neural network. This is made up of layers of computational structures that sift through information in order to identify particular features or strategies. Each layer examines the environment at a different level of abstraction. At first these networks have minimal success but, importantly, their failures are encoded within them. They become increasingly sophisticated as they experiment with different strategies and are rewarded when they are successful. If the program moves a chess piece and loses the game as a result, it won’t make that mistake again. Much of the magic of artificial intelligence lies in the speed at which it repeats its tasks.
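The loop described above can be sketched in a few dozen lines. What follows is not DeepMind's code: it is a minimal, tabular Q-learning sketch (DeepMind's systems used deep neural networks rather than a lookup table), with a replay buffer standing in for the "replayed experiences" the article describes. The five-cell corridor environment and all parameter values are invented for illustration.

```python
import random

class Corridor:
    """Hypothetical toy environment: agent starts in cell 0 of a
    five-cell corridor; reaching cell 4 ends the episode with reward 1."""
    def __init__(self):
        self.pos = 0
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):            # action: 0 = left, 1 = right
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4
        return self.pos, (1.0 if done else 0.0), done

def train(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    # A "blank slate": every state-action value starts at zero.
    q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
    replay = []                        # stored transitions to learn from later
    env = Corridor()
    for _ in range(episodes):
        s, done = env.reset(), False
        for _ in range(50):            # cap episode length
            if random.random() < epsilon:
                a = random.choice((0, 1))          # occasional experimentation
            else:                                  # otherwise act greedily,
                best = max(q[(s, 0)], q[(s, 1)])   # breaking ties at random
                a = random.choice([x for x in (0, 1) if q[(s, x)] == best])
            s2, r, done = env.step(a)
            replay.append((s, a, r, s2, done))
            s = s2
            if done:
                break
        # Learn by replaying a sample of past experience: failures and
        # successes alike are encoded back into the value table.
        for s0, a0, r0, s1, d in random.sample(replay, min(32, len(replay))):
            target = r0 if d else r0 + gamma * max(q[(s1, 0)], q[(s1, 1)])
            q[(s0, a0)] += alpha * (target - q[(s0, a0)])
    return q
```

After a few hundred episodes the table comes to prefer moving right in every cell, purely because rewarded transitions were replayed more profitably than unrewarded ones; nothing about the corridor was ever programmed in.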

DeepMind’s work culminated in 2016 when a team built an ai program that used reinforcement learning alongside other techniques to play Go. The program, called AlphaGo, caused astonishment when it beat the world champion in a five-game match in Seoul. The machine’s victory, watched by 280m people, came a decade earlier than experts had predicted. The following year an improved version of AlphaGo thrashed the Chinese Go champion.


Like Deep Blue in 1997, AlphaGo changed perceptions of human accomplishment. The human champions, some of the most brilliant minds on the planet, no longer stood at the pinnacle of intelligence. Nearly 20 years after he had confided his ambition to Fujuwarea, Hassabis fulfilled it. Hassabis has said that the match brought him close to tears. Traditionally, a student of Go pays back their teacher by beating them in a single contest. Hassabis thanked Matthews by beating the entire game.

Deep Blue won through the brute strength and speed of computation, but AlphaGo’s style appeared artistic, almost human. Its grace and sophistication, the transcendence of its computational muscle, seemed to show that DeepMind was further ahead than its competitors on the quest for a program that could treat disease and manage cities.

Hassabis has always said that DeepMind would change the world for the better. But there are no certainties about agi. If it ever comes into being, we don’t know whether it will be altruistic or vicious, or if it will submit to human control. Even if it does, who should take the reins?

From the start, Hassabis has tried to protect DeepMind’s independence. He has always insisted that DeepMind remain in London. When Google bought the company in 2014, the question of control became more pressing. Hassabis didn’t need to sell DeepMind to Google. There was plenty of cash on hand and he had sketched out a business model in which the company would design games to fund research. Google’s financial heft was attractive, yet, like many founders, Hassabis was reluctant to hand over the company he had nurtured. As part of the deal, DeepMind created an arrangement that would prevent Google from unilaterally taking control of the company’s intellectual property. In the year leading up to the acquisition, according to a person familiar with the transaction, both parties signed a contract called the Ethics and Safety Review Agreement. The agreement, previously unreported, was drawn up by senior barristers in London.

The Review Agreement puts control of DeepMind’s core agi technology, whenever it may be created, in the hands of a governing panel known as the Ethics Board. Far from being a cosmetic concession from Google, the Ethics Board gives DeepMind solid legal backing to keep control of its most valuable and potentially most dangerous technology, according to the same source. The names of the panel members haven’t been made public, but another source close to both DeepMind and Google says that all three of DeepMind’s founders sit on the board. (DeepMind refused to answer a detailed set of questions about the Review Agreement but said that “ethics oversight and governance has been a priority for us from the earliest days.”)

Hassabis can determine DeepMind’s destiny by other means too. One is loyalty. Employees past and present say that Hassabis’s research agenda is one of DeepMind’s greatest strengths. His programme, which offers fascinating and important work free from the pressures of academia, has attracted hundreds of the world’s most talented experts. DeepMind has subsidiary offices in Paris and Alberta. Many employees feel more affinity with Hassabis and his mission than with its revenue-hungry corporate parent. As long as he retains their personal loyalty, Hassabis holds considerable power over his sole shareholder. Better for Google to have DeepMind’s ai talent working for it by proxy than for those people to end up at Facebook or Apple.

DeepMind has another source of leverage, though it requires constant replenishment: favourable publicity. The company excels at this. AlphaGo was a pr coup. Since the Google acquisition, the firm has repeatedly produced marvels that have garnered global attention. One piece of software can spot patterns in an eye scan that are indicators of macular degeneration. Another program learned to play chess from scratch using similar architecture to AlphaGo, becoming the greatest chess player of all time after just nine hours playing against itself. In December 2018 a program called AlphaFold proved more accurate than competitors at predicting the three-dimensional structure of proteins from a list of their constituent parts, potentially paving the way to treat diseases such as Parkinson’s and Alzheimer’s.

DeepMind is particularly proud of the algorithms it developed that calculate the most efficient means to cool Google’s data centres, which contain an estimated 2.5m computer servers. DeepMind said in 2016 that it had reduced the energy Google used for cooling by 40%. But some insiders say such boasts are overblown. Google had been using algorithms to optimise its data centres long before DeepMind existed. “They just want to have some pr so they can claim some value added within Alphabet,” says one Google employee. Google’s parent Alphabet pays DeepMind handsomely for services like these. DeepMind billed £54m to Alphabet companies in 2017. That figure pales in comparison to DeepMind’s overheads. It spent £200m on staff alone that year. Overall, DeepMind lost £282m in 2017.


This is a pittance for the cash-rich giant. But other Alphabet subsidiaries in the red have attracted the attention of Ruth Porat, Alphabet’s parsimonious chief financial officer. Google Fiber, an effort to build an internet-service provider, was put on hiatus after it became clear that it would take decades to make a return on investment. ai researchers wonder privately whether DeepMind will be “Porated”.

DeepMind’s careful unveiling of ai advances forms part of its strategy of managing up, signalling its reputational worth to the powers that be. That’s especially valuable at a time when Google stands accused of invading users’ privacy and spreading fake news. DeepMind is also lucky to have a sympathiser at the highest level: Larry Page, one of Google’s two founders, now chief executive of Alphabet. Page is the closest thing that Hassabis has to a boss. Page’s father, Carl, studied neural networks in the 1960s. Early in his career, Page said that he built Google solely to found an ai company.

The tight control that DeepMind’s press management requires doesn’t gel with the academic spirit that pervades the company. Some researchers complain that it can be difficult to publish their work: they have to battle through layers of internal approval before they can even submit work to conferences and journals. DeepMind believes that it needs to proceed carefully to avoid scaring the public with the prospect of agi. But clamming up too tightly could start to sour the academic atmosphere and weaken the loyalty of employees.

Five years after the acquisition by Google, the question of who controls DeepMind is coming to a crunch point. The firm’s founders and early employees are approaching earn-out, when they can leave with the financial compensation that they received from the acquisition (Hassabis’s stock was probably worth around £100m). But a source close to the company suggests that Alphabet has pushed back the founders’ earn-outs by two years. Given his relentless focus, Hassabis is unlikely to jump ship. He is interested in money only in so far as it helps him achieve his life’s work. But some colleagues have already left. Three ai engineers have departed since the start of 2019. And Ben Laurie, one of the world’s most prominent security engineers, has now returned to Google, his previous employer. This number is small, but DeepMind offers such an exhilarating mission and handsome pay that it is rare for anyone to leave.

So far, Google has not interfered much with DeepMind. But one recent event has raised concerns over how long the company can sustain its independence.


DeepMind had always planned to use ai to improve health care. In February 2016, it set up a new division, DeepMind Health, led by Mustafa Suleyman, one of the company’s co-founders. Suleyman, whose mother was an nhs nurse, hoped to create a program called Streams that would warn doctors when a patient’s health deteriorated. DeepMind would earn a performance-based fee. Because this work required access to sensitive information about patients, Suleyman established an Independent Review Panel (irp) populated by the great and good of British health care and technology. DeepMind was wise to proceed with care. The British information commissioner subsequently found that one of the partner hospitals broke the law in handling patient data. Nonetheless by the end of 2017, Suleyman had signed agreements with four large nhs hospitals.

On November 8th 2018, Google reported the creation of its own health-care division, Google Health. Five days later, it was announced that DeepMind Health was to be rolled into its parent company’s efforts. DeepMind appeared to have had little warning. According to information gained from Freedom of Information requests, it gave its partner hospitals only three days’ notice of the change. DeepMind refused to say when discussions about the merger began but said that the short gap between the notification and a public announcement was in the interests of transparency. Suleyman had written in 2016 that “at no stage will patient data ever be linked or associated with Google accounts, products or services.” His promise seemed to have been broken. (In response to 1843’s questions, DeepMind said that “at this stage, none of our contracts have moved across to Google, and they only will with our partners’ consent. Streams becoming a Google service does not mean the patient data...can be used to provide other Google products or services.”)

Google’s annexation has angered employees at DeepMind Health. According to people close to the health team, more employees plan to leave the company once the absorption is complete. One member of the irp, Mike Bracken, has already walked out on Suleyman. According to multiple people familiar with the event, Bracken quit in December 2017 over concerns that the panel was more about window-dressing than genuine oversight. When Bracken asked Suleyman if he would give panel members the accountability and governance powers of non-executive directors, Suleyman scoffed. (A spokesperson for DeepMind said they had “no recollection” of the incident.) Julian Huppert, the head of the irp, argues that the panel delivered “more radical governance” than Bracken expected because members were able to speak openly and not bound by a duty of confidentiality.

This episode shows that peripheral parts of DeepMind’s operation are vulnerable to Google. DeepMind said in a statement that “we all agreed that it makes sense to bring these efforts together in one collaborative effort, with increased resources.” This raises the question of whether Google will apply the same logic to DeepMind’s work on agi.

From a distance, DeepMind looks to have taken great strides. It has already built software that can learn to perform tasks at superhuman levels. Hassabis often cites Breakout, a videogame for the Atari console. A Breakout player controls a bat that she can move horizontally across the bottom of the screen, using it to bounce a ball against blocks that hover above it, destroying them on impact. The player wins when all blocks are obliterated. She loses if she misses the ball with the bat. Without human instruction, DeepMind’s program not only learned to play the game but also worked out how to cannon the ball into the space behind the blocks, taking advantage of rebounds to break more blocks. This, Hassabis says, demonstrates the power of reinforcement learning and the preternatural ability of DeepMind’s computer programs.

It’s an impressive demo. But Hassabis leaves a few things out. If the virtual bat were moved even fractionally higher, the program would fail. The skill learned by DeepMind’s program is so restricted that it cannot react even to tiny changes to the environment that a person would take in their stride – at least not without thousands more rounds of reinforcement learning. But the world has jitter like this built into it. For diagnostic intelligence, no two bodily organs are ever the same. For mechanical intelligence, no two engines can be tuned in the same way. So releasing programs perfected in virtual space into the wild is fraught with difficulty.

The second caveat, which DeepMind rarely talks about, is that success within virtual environments depends on the existence of a reward function: a signal that allows software to measure its progress. The program learns that ricocheting off the back wall makes its score go up. Much of DeepMind’s work with AlphaGo lay in constructing a reward function compatible with such a complex game. Unfortunately, the real world doesn’t offer simple rewards. Progress is rarely measured by single scores. Where such measures exist, political challenges complicate the problem. Reconciling the reward signal for climate health (the concentration of CO₂ in the atmosphere) with the reward signal for oil companies (share price) requires satisfying many human beings with conflicting motivations. Reward signals tend to be very weak. It is rare for human brains to receive explicit feedback about the success of a task while in the midst of it.
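The contrast can be made concrete with a purely illustrative sketch. In a game, the reward is one crisp scalar; in the real world, any scalar you construct already encodes a value judgment about whose interests count for how much. Every function, weight and number below is invented for illustration, not drawn from DeepMind's systems.

```python
def breakout_reward(blocks_before: int, blocks_after: int,
                    lost_ball: bool) -> float:
    """A game-style reward: unambiguous points for blocks destroyed,
    a fixed penalty for losing the ball. Nothing here is contested."""
    return float(blocks_before - blocks_after) - (10.0 if lost_ball else 0.0)

def climate_reward(co2_ppm: float, share_price: float,
                   w_climate: float = 0.5, w_market: float = 0.5) -> float:
    """A 'real-world' reward: there is no agreed scalar, so the weights
    w_climate and w_market are a political choice, not a measurement.
    Baselines are arbitrary reference points for illustration."""
    baseline_ppm, baseline_price = 280.0, 100.0
    return (w_climate * (baseline_ppm - co2_ppm)        # lower CO2 is better
            + w_market * (share_price - baseline_price))  # higher price is better
```

For the same world state, `climate_reward` can come out positive or negative depending entirely on the chosen weights, which is exactly the reconciliation problem the paragraph above describes: the hard part is not computing the signal but agreeing on it.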

DeepMind has found a way around this by employing vast amounts of computer power. AlphaGo takes thousands of years of human game-playing time to learn anything. Many ai thinkers suspect this solution is unsustainable for tasks that offer weaker rewards. DeepMind acknowledges the existence of such ambiguities. It has recently focused on StarCraft 2, a strategy computer game. Decisions taken early in the game have ramifications later on, which is closer to the sort of convoluted and delayed feedback that characterises many real-world tasks. In January, DeepMind software beat some of the world’s top human players in a demo that, while heavily constrained, was still impressive. Its programs have also begun to learn reward functions by following the feedback of human taskmasters. But putting human instruction in the loop risks losing the effects of scale and speed that unadulterated computer-processing offered.

Current and former researchers at DeepMind and Google, who requested anonymity due to stringent non-disclosure agreements, have also expressed scepticism that DeepMind can reach agi through such methods. To these individuals, the focus on achieving high performance within simulated environments makes the reward-signal problem hard to tackle. Yet this approach is at the heart of DeepMind. It has an internal leaderboard, in which programs from competing teams of coders vie for mastery over virtual domains.

Hassabis has always seen life as a game. A large part of his career has been devoted to making them; a large part of his leisure time has been spent playing them. At DeepMind, they are his chosen vehicle for developing agi. Just like his software, Hassabis can learn only from his experiences. The pursuit of agi may eventually lose its way, having invented some useful medical technologies and out-classed the world’s greatest board-game players – significant achievements, but not the one he craves. But he could yet usher agi into being, right under Google’s nose but beyond its control. If he does this, Demis Hassabis will have beaten the toughest game of all.■

Photographs: David Ellis



