Cambridge IELTS 18 Test 2 Passage 2 Reading Passage: Living with Artificial Intelligence


The second passage of the second test in Cambridge IELTS 18 takes artificial intelligence as its topic. Its content is somewhat abstract, but broadly it explores three questions: first, today's narrow AI may eventually develop into general intelligence and surpass humans; second, for humanity's safety we will need to give such machines moral goals, yet we humans have no universal moral standard of our own; third, even if we found such a standard, having AI enforce it would affect our autonomy. The passage follows, paragraph by paragraph.


Cambridge IELTS 18 Test 2 Passage 2: The Passage

Introduction

Powerful artificial intelligence (AI) needs to be reliably aligned with human values, but does this mean AI will eventually have to police those values?

Paragraph 1

This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That’s yesterday’s news; what’s next? True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that can achieve human-level performance on the full range of tasks that we ourselves can tackle.

Paragraph 2

If so, there’s little reason to think it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and their size is restricted by the dimensions of the human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the incredibly powerful Webb Space Telescope.

Paragraph 3

Once machines are better than us at designing even smarter machines, progress toward these limits could accelerate. What would this mean for us? Could we ensure safe and worthwhile coexistence with such machines? On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, might have wished that everything he touched turned to gold, but didn’t really intend this to apply to his breakfast.

Paragraph 4

So we need to create powerful AI machines that are ‘human-friendly’ – that have goals reliably aligned with our own values. One thing that makes this task difficult is that we are far from reliably human-friendly ourselves. We do many terrible things to each other and to many other creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.

Paragraph 5

For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll be smart enough for the job. If there are routes to the moral high ground, they’ll be better than us at finding them, and steering us in the right direction.

Paragraph 6

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity that we can be confident they will find it – whatever ‘it’ actually turns out to be. This won’t be easy, given that we are tribal creatures and conflicted about the ideals ourselves. We often ignore the suffering of strangers, and even contribute to it, at least indirectly. How, then, do we point machines in the direction of something better?

Paragraph 7

As for the ‘destination’ problem, we might, by putting ourselves in the hands of these moral guides and gatekeepers, be sacrificing our own autonomy – an important part of what makes us human. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own communities, for example.

Paragraph 8

Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to put children to work in factories, or to smoke in restaurants, are signs of progress. But are we ready for ethical silicon police limiting our options? They might be so good at doing it that we won’t notice them; but few of us are likely to welcome such a future.

Paragraph 9

These issues might seem far-fetched, but they are to some extent already here. AI already has some input into how resources are used in our National Health Service (NHS) here in the UK, for example. If it was given a greater role, it might do so much more efficiently than humans can manage, and act in the interests of taxpayers and those who use the health system. However, we’d be depriving some humans (e.g. senior doctors) of the control they presently enjoy. Since we’d want to ensure that people are treated equally and that policies are fair, the goals of AI would need to be specified correctly.

Paragraph 10

We have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if it is, it will require a cooperative spirit, and a willingness to set aside self-interest.

Paragraph 11

Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. And where exactly would that leave human beings? All the more reason to think about the destination now, and to be careful about what we wish for.
