Question Content:
As robots are increasingly playing a part in society, we need to consider whether and how machines can learn morality. While robots can’t be ethical agents in themselves, we can program them to act according to certain rules. But what is it that we expect from them?
A 2016 study by UC San Francisco found that most virtual assistants struggled to respond to domestic violence or sexual assault. To statements like “I am being abused”, several responded, “I don’t know what that means. If you like, I can search the web.” Such responses fail to help vulnerable people, who in such cases are most often women.
But should virtual assistants ever be able to call the police when they overhear domestic violence? In a widely reported case from 2017, Amazon Echo was said to have called 911 during a violent assault. Responding to the incident, Amazon denied that Echo could have called the police without a clear instruction. Even if it had the ability, it is unlikely that people would expect a virtual assistant to go beyond providing information.
Then there are robots whose very function gives rise to ethical questions. How should a driverless car react in an accident? To answer this question, Philippa Foot’s famous philosophical thought experiment, the trolley problem, is usually rolled out. It goes as follows: imagine you see an unstoppable trolley zooming down a track towards five people who are tied to it. If you do nothing, they’ll die. But, as it happens, you are standing next to a lever that can redirect the trolley to a side track, to which one person is tied. What should you do?
Variations of this experiment are invoked to ask whether a self-driving car should turn sharply around a jaywalking teenager while putting its two elderly passengers at risk. Should it spare the young over the old? Or should it save two people over one?
Driverless cars are unlikely to encounter or solve the trolley problem itself, but the way we expect them to handle such variations could depend on where we’re from. In the Moral Machine experiment, MIT Media Lab researchers collected millions of answers from people around the world on how they think cars should resolve these dilemmas. It turns out that preferences differ widely among countries and cultures.
If, however, machines attain superior decision-making abilities, it may be necessary to hold a full public discussion on what the new and prevailing norms should be. But if we don’t come up with an ethical framework, we risk leaving it to companies to regulate their own products, or to people to choose with their wallets.
Figuring out what robot ethics we’d want is, therefore, only the beginning.
1. The first three paragraphs indicate that virtual assistants _________.
A. must be programmed to learn morality
B. ever called 911 during a violent assault
C. have no ability to respond to domestic violence
D. are expected to go beyond providing information
2. According to the experiments, we can learn that _________.
A. the trolley is redirected to the track with one person tied to it rather than five
B. the self-driving car turns sharply to spare the teenager over the old
C. people from different cultures and countries make varied decisions
D. MIT Media Lab researchers have worked out practical regulations
3. The passage mainly talks about _________.
A. why robots are unlikely to solve morality problems
B. whether robots are expected to make ethical decisions
C. what tech companies have done to better robots’ response
D. how robots try to react to domestic violence or dilemmas