1995 Manuel Blum
Manuel Blum

BIRTH:
Caracas, Venezuela, April 26, 1938

EDUCATION:
B.S., Electrical Engineering, MIT (1959); M.S., Electrical Engineering, MIT (1961); Ph.D., Mathematics, MIT (1964).

EXPERIENCE:
Research Assistant and Research Associate for Dr. Warren S. McCulloch, Research Laboratory of Electronics, MIT (1960-1965); Assistant Professor, Mathematics, MIT (1966-1968); Visiting Assistant Professor, Associate Professor, Professor, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley (1968-2001); Associate Chair for Computer Science, U.C. Berkeley (1977-1980); Arthur J. Chick Professor of Computer Science, U.C. Berkeley (1995-2001); Visiting Professor of Computer Science, City University of Hong Kong (1997-1999); Bruce Nelson Professor of Computer Science, Carnegie Mellon University (2001-present).

HONORS AND AWARDS:
ACM Turing Award (1995); Fellow of the Institute of Electrical and Electronics Engineers (1982), the American Association for the Advancement of Science (1983), and the American Academy of Arts and Sciences (1995); Member, National Academy of Engineering; Member, United States National Academy of Sciences (2002).



United States – 1995
CITATION
In recognition of his contributions to the foundations of computational complexity theory and its application to cryptography and program checking.

Manuel Blum was born in Caracas, Venezuela in 1938. He remembers that as a child he wanted to know how brains work. The reason: he wanted to be smarter. ("I was the opposite of a prodigy," he claims.) At MIT he studied electrical engineering because he thought electric circuits might hold the answer. As a junior he studied brains in Warren McCulloch's lab. Eventually he became convinced that the limitations of our brain are related to computational complexity, and that is how he embarked on a life-long journey that transformed that subject as well as much of computer science.

By the early 1960s computers were already solving all kinds of problems in business and science. An algorithm for solving a problem expends resources: the time it takes to run, the amount of memory it uses, the energy it consumes, etc. The irresistible question is: Can I do better? Can I find an algorithm that solves the problem faster, using fewer resources? You can sort n numbers in a computer's memory with n log n comparisons using Mergesort, Quicksort, or a similar sorting algorithm. But can it be done better? You can find the factors of an n-bit integer in time proportional to 2^(n/2) by checking all candidate divisors up to the square root of the number. But this is exponential in n. Is there a better, truly fast algorithm? You can search a maze with n corridors using only (log n)² memory. Is there a solution using only log n memory? This is the stuff of complexity.
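To make the factoring bound concrete, here is a minimal sketch of the trial-division approach the text describes (illustrative only, not a serious factoring routine):

```python
def trial_division(m):
    """Find the smallest nontrivial factor of m by checking candidate
    divisors up to sqrt(m).  For an n-bit m this takes on the order of
    2**(n/2) steps: polynomial in m's value, exponential in its length n."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # no divisor found below sqrt(m), so m is prime
```

Whether anything fundamentally faster than exponential-in-n exists for factoring is still open; the best known classical algorithms are sub-exponential but not polynomial.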

Of the three questions above, after half a century of contemplation we know the answer to only the first one, about sorting: An elementary argument discovered in the 1960s establishes that you cannot sort with fewer than n log n comparisons.  (This is proved by considering the decision tree of a sorting algorithm and then taking the logarithm of its size). So sorting a list of n numbers requires n log n time.
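The decision-tree argument in symbols: a comparison sort must distinguish all n! orderings of its input, so its decision tree needs at least n! leaves, and a binary tree of depth d has at most 2^d leaves:

```latex
2^{d} \ge n! \quad\Longrightarrow\quad d \ge \log_2 (n!) = \Theta(n \log n).
```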

Computing the average of a list of numbers is, of course, easier than sorting, since it can be done in just n steps. But the median is usually more informative than the average–in connection with incomes, for example. Unfortunately, you need to first sort the numbers in order to find the median. Or do you? In the late 1960s, Blum was convinced that computing the median does indeed require n log n steps, just like sorting. He tried very hard to prove that it does, and in the end his labors were rewarded with a most pleasant surprise: in 1971 he came up with an algorithm that finds the median in linear time! (A few other great researchers had in the meantime joined him in this quest: Robert Tarjan, Ronald Rivest, Bob Floyd, and Vaughan Pratt.) [2]
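A compressed sketch of the linear-time selection idea, using the median-of-medians pivot rule from [2] (simplified for readability; the published algorithm is more careful about constants):

```python
def select(xs, k):
    """Return the k-th smallest element of xs (k = 0 for the minimum)
    in worst-case linear time, via the median-of-medians pivot rule."""
    if len(xs) <= 5:
        return sorted(xs)[k]
    # Take the median of each group of five, then recursively find
    # the median of those medians; it is a provably good pivot.
    medians = [sorted(xs[i:i + 5])[len(xs[i:i + 5]) // 2]
               for i in range(0, len(xs), 5)]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in xs if x < pivot]
    hi = [x for x in xs if x > pivot]
    if k < len(lo):
        return select(lo, k)
    if k < len(xs) - len(hi):
        return pivot          # k falls among the copies of the pivot
    return select(hi, k - (len(xs) - len(hi)))

def median(xs):
    return select(xs, (len(xs) - 1) // 2)
```

The pivot is guaranteed to discard a constant fraction of the list on every round, which is what makes the total work linear rather than n log n.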

[Video: Blum discusses the algorithm he created to find the median of a series of numbers in linear time.]
But we are getting ahead of our story. In the early 1960s Blum wanted to understand complexity. To do so one must first pick a specific machine model on which algorithms run, and then a resource to account for in each computation. But Blum wanted to discover the essence of complexity–its fundamental nature that lies beyond such mundane considerations. In his doctoral thesis at M.I.T. under Marvin Minsky, he developed a machine-independent theory of complexity that underlies all possible studies of complexity. He postulated that a resource is any function that has two basic properties (since then called the Blum axioms), essentially stating that the amount of the resource expended by a program running on a particular input can be computed—unless the computation fails to halt, in which case it is undefined.
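Stated in modern notation (with φ_i an enumeration of the partial computable functions), the two Blum axioms for a complexity measure Φ are:

```latex
\begin{aligned}
&\text{(1)}\quad \Phi_i(x)\ \text{is defined} \iff \varphi_i(x)\ \text{is defined};\\
&\text{(2)}\quad \text{the predicate}\ \Phi_i(x) = y\ \text{is decidable.}
\end{aligned}
```

Running time and memory consumption both satisfy these axioms, but so do many stranger "resources," which is what gives the theory its generality.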

Surprisingly, these simple and uncontroversial axioms spawn a rich theory full of unexpected results. One such result is Blum's speedup theorem, stating that there is a computable function such that any algorithm for this function can be sped up exponentially for almost all inputs [1]. That is, any given algorithm for that function can be modified to run exponentially faster, but then the modified algorithm can be modified again, and then again: an infinite sequence of exponential improvements! Blum's theory served as a cautionary tale for complexity theorists: unless resource bounds are restricted to the familiar, well-behaved ones (such as n log n, n², polynomial, exponential, etc.), very strange things can happen in complexity.

[Video: Blum explains his counter-intuitive thesis result known as the "speedup theorem."]
By the early 1970s Manuel Blum was a professor at UC Berkeley, where he would teach for over thirty-five years. He had married the notable mathematician Lenore Blum, about whom Manuel wrote a haiku: “Honor her requests as if/Your life depends on it/It does." They had a son Avrim, now a professor of Computer Science at Carnegie Mellon, where Lenore and Manuel are also now employed.

By the early 1970s the study of complexity had become much more concrete, partly because of the scary phenomena exposed by Blum's model.  It had already succeeded in articulating the fundamental P vs NP question, which remains famously unresolved to this day: are there problems which can only be solved by rote enumeration of all possible solutions? At that point, when most students of complexity embarked on a long-term research effort aiming at either taming the beast, or establishing that all such efforts are futile, Blum's thinking about the subject took a surprising turn that would constitute the overarching theme of his work for the coming decades: he decided to make friends with complexity, to turn the tables on it, and use it to accomplish useful things.

One particular such "application" of complexity was emerging just around that time: public key cryptography, which allows secure communication through the use of one-way functions that are easy to compute but hard to invert. It so happened that in the mid-1970s Blum decided to spend a sabbatical studying number theory—the beautiful branch of mathematics theretofore famously proud of its complete lack of application. Nothing could be more fortuitous. The RSA approach to public key cryptography, which was being invented around that time, exploits important one-way functions inspired by number theory. RSA's inventors Ron Rivest, Adi Shamir, and Len Adleman (the latter a doctoral student of Blum's) won the Turing Award in 2002 for this idea.

Blum wondered what else complexity can help us with beyond cryptography. One strange thought came to him: can two people, call them Alice and Bob, settle an argument over the telephone about, say, who is buying drinks next time, by flipping a coin? This seems impossible, because who is going to force the one who flips, for example, Bob, to admit that he lost the toss? Surprisingly, Blum showed [3] that it can be done: Alice “calls” the flip by announcing to Bob not her choice, but some data which commits her to her choice. Then Bob announces the outcome of the flip, and they both check whether the committing data reveals a choice that wins or loses the flip. The notion of commitment (Alice creating a piece of data from a secret bit that can later be used, with Alice's cooperation, to reveal the bit in a way that can neither be manipulated by Alice nor questioned by Bob) has since become an important tool in cryptography.
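A modern hash-based rendition of the protocol; Blum's original construction used number-theoretic commitments, so the SHA-256 digest and random nonce here are illustrative stand-ins:

```python
import hashlib
import secrets

def commit(bit):
    """Alice 'calls' the flip: she publishes H(nonce || bit) and keeps
    the opening (nonce, bit) secret.  The digest binds her to her bit
    without revealing it."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{nonce}{bit}".encode()).hexdigest()
    return digest, (nonce, bit)

def reveal(digest, opening):
    """Bob checks that the opening matches the earlier commitment."""
    nonce, bit = opening
    return hashlib.sha256(f"{nonce}{bit}".encode()).hexdigest() == digest

# Protocol: Alice commits, Bob announces his flip, Alice opens.
digest, opening = commit(1)        # Alice calls "1" but sends only the digest
bob_flip = secrets.randbelow(2)    # Bob announces the outcome of his flip
assert reveal(digest, opening)     # the commitment checks out
alice_wins = (opening[1] == bob_flip)
```

The hiding property keeps Bob from learning Alice's call before he flips; the binding property keeps Alice from changing her call after she sees the outcome.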

Perhaps the most consequential application of complexity initiated by Manuel Blum relates to randomness. We often use pseudo-random number generators to instill randomness into our computations. But to what extent are the numbers generated truly random, and what does this mean, exactly? Blum's idea was that the answer lies in complexity. Distinguishing the output of a good generator from a truly random sequence seemed to be an intractable problem, and many random number generators that were popular at the time failed this test. In 1984, Blum and his student Silvio Micali came up with a good generator [4] based on another notoriously difficult problem in number theory, the discrete logarithm problem. With Lenore Blum and Michael Shub they found another generator [5] using repeated squaring modulo the product of two large primes. To discover the seed, the adversary must factor the product, and we know that this is a hard problem.
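A toy sketch of the Blum-Blum-Shub construction from [5] (the primes here are far too small to be secure; a real deployment would use large primes, each congruent to 3 mod 4, and a seed coprime to their product):

```python
def bbs_bits(seed, p, q, k):
    """Generate k pseudorandom bits by repeated squaring modulo n = p*q,
    emitting the least-significant bit of each successive square."""
    assert p % 4 == 3 and q % 4 == 3   # required for the BBS construction
    n = p * q
    x = seed * seed % n                # square once so x is a quadratic residue
    bits = []
    for _ in range(k):
        x = x * x % n                  # repeated squaring modulo n
        bits.append(x & 1)             # output the least-significant bit
    return bits
```

Predicting the next bit, or recovering the seed from the output, is as hard as factoring n, which is exactly the sense in which the generator is cryptographically strong.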

[Video: Blum describes work with his students Micali and Goldwasser on zero knowledge proofs.]
On top of all this, Blum demonstrated that cryptographically strong pseudorandom generators can pay back their intellectual debt to cryptography. He and his student Shafi Goldwasser came up in 1986 with a public key encryption scheme [6] based on the Blum-Blum-Shub generator [5] which, unlike RSA, has been mathematically proven to be as hard to break as factoring.

In the mid-1980s, Blum became interested in the problem of checking the correctness of programs. Given a program that purports to solve a particular problem, can you write a program checker for it: an algorithm that interacts with the program and its input, and eventually decrees (in reasonable time and with great certainty) whether or not the program works correctly? Checkability turns out to be a windfall of complexity, and Manuel and his students Sampath Kannan and Ronitt Rubinfeld were able to come up with some interesting checkers [7]. They used the tricks of their trade: randomness, reductions, and algebraic maneuvers reminiscent of coding theory.
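A classic example in the spirit of these checkers (though it predates them and is not drawn from [7]) is Freivalds' randomized check of matrix multiplication: instead of re-multiplying in cubic time, probe the claimed product with random 0/1 vectors at quadratic cost per trial:

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Randomized check that C equals the matrix product A*B.
    Each trial costs O(n^2); a wrong C survives a trial with
    probability at most 1/2, so 'trials' rounds give confidence
    at least 1 - 2**(-trials)."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # caught the multiplication program in an error
    return True
```

The checker never re-derives the answer; it only interrogates the claimed output, which is precisely the stance a program checker takes toward the program it audits.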

[Video: Blum describes his work with Rubinfeld on automated program checkers.]
For all that, this work was not very successful in helping software engineers make a dent in the practical problem of software testing. Instead, it serendipitously inspired other theoreticians to come up with more clever tricks of this sort, and eventually to use them—closing a full circle—in the study of complexity. The concept of interactive proof was proposed, whereby an all-powerful prover can convince a distrustful but limited checker that a theorem is true (just as a program interacts with its checker to convince it of its correctness), and such proof systems were demonstrated for truths at higher and higher spheres of complexity. And, finally, in 1991, this long chain of ideas and results—this intellectual Rube Goldberg structure initiated by Blum's ideas about program checking—culminated in the famed PCP (probabilistically checkable proof) theorem, stating that any proof can be rewritten in such a way that its correctness can be tested with high confidence by quickly examining just a few bits of the proof for consistency! Importantly, the PCP theorem implies a result long sought by complexity theorists: assuming that P is not NP, not only can we not solve the notorious NP-complete optimization problems exactly in polynomial time, we cannot even solve them approximately.

In the mid-1990s, Manuel and Lenore Blum went to Hong Kong for a long sabbatical, and soon after that Manuel decided to retire from UC Berkeley and move to CMU. At CMU, Blum envisioned a new use for complexity, motivated by the vagaries of our networked environment: What if one takes advantage of the complexity of a problem—and the difficulty involved in solving it by a computer program—to verify that the solver is not a computer program? This kind of twisted Turing test would be useful, for example, in making sure that the new users signing up for a web service are not mischievous bots with base intentions. In 2000 Blum and his student Von Ahn, with the collaboration of others, came up with the idea of a visual challenge (“what is written on this wrinkled piece of paper?”) as an appropriate test. They called it “Completely Automated Public Turing test to tell Computers and Humans Apart,” or CAPTCHA for short [8]. And, as we all know all too well from our encounters with this CAPTCHA device on web pages, the idea spread. Blum is now working on harnessing people's creativity through games, and on a computational understanding of consciousness.

[Video: Blum recalls his invention of the CAPTCHA system to distinguish humans from bots.]
The research career of Manuel Blum is extraordinary. It is not just the sheer number and impact of his contributions; it is the unmistakable style that runs through them. Manuel believes that research results should be surprising, paradoxical, nearly contradictory. "When you can prove that a proposition is true, and also that the same proposition is false, then you know you are on to something," he says. A wonderful advisor with an amazing array of former students, he is always delighted to impart wisdom. His "advice to a new graduate student" essay is a must-read. Quoting Anatole France, he implores students to "know something about everything and everything about something." He urges them to read books as random access devices, not necessarily from beginning to end. And to write: if we read without writing, he observes, we are reduced to finite state machines; it is writing that makes us all-powerful Turing machines. He advocates the advice of John Shaw Billings: "First, have something to say. Second, say it. Third, stop when you have said it."

Author: Christos H. Papadimitriou


