2010 Leslie Valiant

Leslie G Valiant
BIRTH:
28 March 1949, Budapest, Hungary

EDUCATION:
Latymer Upper School, London, England; King's College, Cambridge, England (BA, Mathematics, 1970); Imperial College, London, England (DIC in Computing Science); University of Warwick, England (PhD, Computer Science, 1974)

EXPERIENCE:
Carnegie Mellon University (Visiting Assistant Professor, 1973-1974); Leeds University, England (Lecturer, 1974-1976); University of Edinburgh, Scotland (Lecturer, later Reader, 1977-1982); Harvard University (Gordon McKay Professor of Computer Science and Applied Mathematics, 1982-2001; T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics, 2001 forward).

HONORS & AWARDS:
International Mathematical Union Nevanlinna Prize (1986); Fellow of the Royal Society (1991); Knuth Prize of the ACM Special Interest Group on Algorithms and Computation Theory and the IEEE Technical Committee on the Mathematical Foundations of Computing (1997); member of the US National Academy of Sciences (2001); European Association for Theoretical Computer Science Distinguished Achievement Award (2008); Turing Award (2010); Fellow of the American Association for Artificial Intelligence.

LESLIE GABRIEL VALIANT
United States – 2010
CITATION
For transformative contributions to the theory of computation, including the theory of probably approximately correct (PAC) learning, the complexity of enumeration and of algebraic computation, and the theory of parallel and distributed computing.

Les Valiant has had an extraordinarily productive career in theoretical computer science, producing results of great beauty and originality. His research has opened new frontiers and has transformed many areas. His work includes the study of both natural and artificial phenomena: the natural studies encompass the algorithms used by computing objects such as the human brain, while the artificial include computers and their capabilities. In the case of computers, the limitations of these devices are only beginning to be understood, while for natural objects such as the human brain, the question of how they operate remains to be answered.

[Video: Valiant discusses his realization that machine learning is the central concept in artificial intelligence.]
In 2003, in the 50th-anniversary volume of the Journal of the ACM, he published a paper [1] advocating this view that computer science describes both natural and artificial phenomena, framed as three general problem areas for further investigation:

Characterizing the power of computation, i.e., to fully characterize what can be computed in practice in the physical world.
Characterizing a semantics for cognitive computation, i.e., to seek out a semantics or description of knowledge that can computationally support the basic phenomena of intelligent behavior.
Characterizing cortical computation, i.e., to describe how knowledge is represented in the brain and to determine the algorithms that are used for computing the most basic behavioral tasks.
In 1983 he published an important paper [4] in the area of semantics for cognitive computation. In it he devised a model of learning that offers a quantitative criterion for when a computing device can be considered able to learn. This probably approximately correct (PAC) model has given rise to a fruitful research area now known as computational learning theory. The PAC model considers a learning algorithm that takes experience from the past to create a hypothesis that can be used to make decisions in the future with controlled error. The model has been intensively studied by other researchers and extended into an important tool for practical applications.
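To make the criterion concrete, here is a minimal illustrative sketch in Python (our example, not taken from Valiant's paper): a learner sees labeled random examples of an unknown threshold concept on [0,1], draws the number of examples suggested by the standard PAC sample bound for a finite hypothesis class, and with probability at least 1 - delta outputs a hypothesis whose error is at most eps.

import math
import random

def pac_learn_threshold(eps=0.05, delta=0.05, grid=1000):
    true_t = random.random()                     # unknown target concept
    H = [i / grid for i in range(grid + 1)]      # finite hypothesis class
    # Standard PAC bound for a finite class: m >= (ln|H| + ln(1/delta)) / eps
    m = math.ceil((math.log(len(H)) + math.log(1 / delta)) / eps)
    sample = [(x, x >= true_t) for x in (random.random() for _ in range(m))]
    # Choose the hypothesis with the fewest disagreements on the sample.
    h = min(H, key=lambda t: sum((x >= t) != y for x, y in sample))
    # Estimate the true error of h on fresh random examples.
    trials = 100000
    err = sum((x >= h) != (x >= true_t)
              for x in (random.random() for _ in range(trials))) / trials
    return h, err

h, err = pac_learn_threshold()
print(f"learned threshold {h:.3f}, estimated error {err:.4f}")

With these defaults the learner draws about 200 examples and, in a typical run, reports an error well under the 0.05 target.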

[Video: Valiant discusses his 1983 paper "A Theory of the Learnable," which introduced the PAC (Probably Approximately Correct) model of learning.]
In 1994 he broadened the PAC concept in his book Circuits of the Mind [5] to investigate how the brain accesses and computes on the large amount of information it needs when reasoning. This investigation uncovered some important quantitative constraints on neural computation, such as the strength of interconnections, and offered a range of questions for experimentalists. The book also provides a model, in a computational language and framework, that can be used in future investigations into memory, learning, and reasoning.

The power of computation is the subject of computational complexity theory. In the early 1970s, computational complexity generally dealt with the difficulty of decision problems, such as whether a graph has a perfect matching or whether a traveling salesman can find a route of at most a certain length. This difficulty was characterized by complexity classes, such as P (tractable problems) and NP (problems for which a solution can easily be checked once it has been produced, but perhaps not easily found in the first place). Of course, many important practical problems are not simply decision problems: instead of asking whether there is a route of at most 1000 miles, one could ask for the length of the shortest possible route. However, these more general problems can often be reduced to a sequence of decision problems, for example by using binary search to home in on the length of the shortest route.
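As a sketch of that reduction, the following Python fragment recovers the optimal tour length from a hypothetical yes/no oracle has_tour_at_most (any exact decision procedure would serve); only the binary-search logic is shown.

def shortest_tour_length(has_tour_at_most, lo, hi):
    """Least L in [lo, hi] with has_tour_at_most(L) true, assuming integer
    lengths and that has_tour_at_most(hi) is true."""
    while lo < hi:
        mid = (lo + hi) // 2
        if has_tour_at_most(mid):
            hi = mid        # a tour of length <= mid exists; search lower
        else:
            lo = mid + 1    # no such tour; the optimum must be larger
    return lo

# Demonstration with a stub oracle standing in for a real decision procedure:
print(shortest_tour_length(lambda L: L >= 937, 0, 1000))  # prints 937

With bounds 0 to 1000, the optimum is found after about ten oracle queries, so an efficient decision procedure yields an efficient optimization procedure.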

One of Valiant's most noteworthy discoveries is that counting problems are much more subtle than previous experience suggested. A counting problem asks for the number of some combinatorial objects: for example, how many perfect matchings are there in a graph? Now we are not just asking the decision problem of whether that number is positive, but how large it is. If the decision problem is difficult, then the counting problem must be as well, but Valiant's surprising realization was that the converse fails. In his paper "The complexity of computing the permanent" [2], he showed that although there is an efficient algorithm to tell whether a graph has a perfect matching, counting perfect matchings is as hard as any counting problem, including the counting versions of NP-complete problems. This came as a shock to the computational complexity community, which had grown accustomed to the idea that decision problems would easily capture the key features of a problem. Instead, Valiant extended the theory of complexity classes to include the new counting class #P.
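The connection can be seen in a few lines. In the brute-force sketch below (illustrative only), the permanent of a 0/1 biadjacency matrix A, perm(A) = sum over permutations sigma of the product of A[i][sigma(i)], is computed by enumerating all n! permutations; for a bipartite graph this value is exactly the number of perfect matchings, and Valiant's theorem says no efficient exact algorithm for it is expected.

from itertools import permutations

def permanent(A):
    """Permanent of a square 0/1 matrix by direct enumeration (n! time)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i, j in enumerate(sigma):
            prod *= A[i][j]
        total += prod
    return total

# Rows are left vertices, columns right vertices; A[i][j] = 1 iff edge (i, j).
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(permanent(A))  # 2: this graph has exactly two perfect matchings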

[Video: Valiant discusses the origins of #P-completeness in his analysis of the complexity of counting problems.]
If counting problems were limited to esoteric mathematical problems, Valiant's theory would still have been conceptually fascinating, but it would have had limited impact. However, it turns out that these problems pervade much of computer science. For example, estimating any probability amounts to an approximate counting problem, and random sampling is closely related. Approximate counting and sampling are both important goals in their own right (for example, in statistics) and valuable tools for solving other problems; entire subfields of computer science, such as Markov chain Monte Carlo algorithms, are by now devoted to these topics. The ideas Valiant introduced have grown into a thriving field, which has united computer science, probability and statistics, combinatorics, and even aspects of statistical physics.
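As a toy illustration of the counting-sampling connection (our example, not Valiant's), the sketch below estimates the number of satisfying assignments of a Boolean predicate on n variables by drawing uniform random assignments and scaling the observed success fraction by 2^n; naive sampling like this breaks down when solutions are rare, which is where the Markov chain methods mentioned above come in.

import random

def estimate_count(pred, n, samples=200000):
    """Approximate #{assignments a in {0,1}^n : pred(a)} by random sampling."""
    hits = sum(pred([random.random() < 0.5 for _ in range(n)])
               for _ in range(samples))
    return (hits / samples) * 2 ** n

# Example: assignments of 10 variables with at least five True values.
print(round(estimate_count(lambda xs: sum(xs) >= 5, 10)))  # near the exact 638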

In the area of artificial computation, his work has analyzed how to control very large parallel computing systems. These range from the multi-core processors commonly found in personal computers to large, widely distributed clusters of computing machines. He centered his work on the role played by communication between parallel processors. In 1990 he published a paper [3] describing his bulk synchronous parallel (BSP) model. This model has had wide influence on the way parallel computation is conceptualized and carried out.
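A BSP computation proceeds in supersteps: each processor computes on local data, sends messages, and then all processors synchronize at a barrier, with messages delivered at the start of the next superstep. The sequential simulation below is a schematic rendering of that structure, not Valiant's formal model (which additionally charges cost parameters for communication bandwidth and synchronization).

def run_bsp(num_procs, num_supersteps, local_step, states):
    """Sequentially simulate a BSP machine: compute, exchange, barrier."""
    inboxes = [[] for _ in range(num_procs)]
    for step in range(num_supersteps):
        outboxes = [[] for _ in range(num_procs)]
        for pid in range(num_procs):            # conceptually in parallel
            for dest, msg in local_step(pid, step, states[pid], inboxes[pid]):
                outboxes[dest].append(msg)
        inboxes = outboxes                       # barrier: all messages delivered
    return states

# Example superstep: add incoming values, then pass the running total right.
def step(pid, step_no, state, inbox):
    state["total"] = state.get("total", 0) + sum(inbox) + 1
    return [((pid + 1) % 4, state["total"])]

print(run_bsp(4, 3, step, [{} for _ in range(4)]))  # each total becomes 7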

[Video: Valiant describes the origins of the bulk synchronous model for the organization of communication within large parallel computing systems.]
In describing Valiant’s contributions, the ACM Turing Award Committee pointed out:  

Rarely does one see such a striking combination of depth and breadth as in Valiant's work. He is truly a heroic figure in theoretical computer science and a role model for his courage and creativity in addressing some of the deepest unsolved problems in science.



