Robots or humans: whom do we trust more? (Part 2)

“That’s the crux of why we think this happens,” she says. “People who talk to a virtual agent know their data is anonymous and safe and that no one is going to judge them.”

Millennials driving the market

At this point in the robo-advisor cycle the appeal isn’t the anonymity, said Kendra Thompson, a Toronto, Canada-based managing director at Accenture Wealth & Capital Markets. Companies don’t yet offer sophisticated advice through these sites. Convenience and cost are the attraction now: some charge as little as 0.15% annually on assets invested, while advisor fees range between 1% and 2% of assets.

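The fee gap quoted above compounds over time. As a rough illustration (the $10,000 starting balance, 6% gross annual return, and 20-year horizon are assumptions, not figures from the article), a sketch of the fee drag:

```python
# Illustrative only: compare the fee drag of a 0.15% robo-advisor fee
# vs. a 1.5% human-advisor fee (the fee levels quoted in the article).
# Starting balance, gross return, and horizon are assumed for the example.

def final_balance(principal: float, gross_return: float,
                  annual_fee: float, years: int) -> float:
    """Grow principal for `years`, deducting a percentage fee each year."""
    net_growth = 1 + gross_return - annual_fee
    return principal * net_growth ** years

robo = final_balance(10_000, 0.06, 0.0015, 20)   # 0.15% annual fee
human = final_balance(10_000, 0.06, 0.015, 20)   # 1.5% annual fee

print(f"robo-advisor:  ${robo:,.0f}")
print(f"human advisor: ${human:,.0f}")
print(f"difference:    ${robo - human:,.0f}")
```

Under these assumptions the lower fee leaves a noticeably larger balance after two decades, which is the cost argument the article attributes to robo-advisors.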
However, that is likely to change, she said. In Asia, the demand for digital investment tools is growing exponentially. Elsewhere, the demand for more unbiased automated long-term advice is expanding, but it’s mostly coming from younger savers.

A 2014 survey from Fidelity Investments found that one in four people born between 1980 and 1989 trust “no one” for money-related information, while a Bank of America report said that affluent millennials are more likely to place a “great deal” of faith in technology compared to other generations, “and this is no different in financial advisory services”.

People who have a good relationship with an advisor will open up, Thompson said, but it’s still hard for people not to feel judged.

“There are people who might say ‘I don’t get where the recommendations are coming from’ or ‘I don’t know why the advisor is asking me these questions’,” she said. “That’s the powerful thing about these tools – you can play around with them without feeling like you’re exposing yourself.”

A robot is still a robot

While automated devices may seem more trustworthy than humans, it’s important to keep in mind that robots are still machines and they can be manipulated by the end user.

Alan Wagner, a social robots researcher at Georgia Tech Research Institute in Atlanta, Georgia, ran a study where he simulated a fire in a building and asked people to follow a robot to safety. The robot, though, took them into wrong rooms, to a back door instead of the correct door, and (by design) it broke down in the middle of the emergency exit.

Yet, through all of that, people still followed the robot around the building hoping it would lead them outside. This study proved to Wagner that people have an “automation bias”, or a tendency to believe an automated system even when they shouldn’t.

“People think the system knows better than they do,” Wagner said. Why? Because robots have been presented as all-knowing. Previous interactions with automated systems have also worked properly, so we assume that every system will do the right thing.

As well, since robots don’t react to or judge what someone says, our own biases get projected onto these automated beings and we assume they’re rooting for us no matter what, he said.

However, Wagner says it’s important to remember that someone – a mutual fund company, an advisor – is controlling the bot in the background and they want to achieve certain outcomes. That doesn’t mean people shouldn’t be truthful with a robot, but these systems are fallible.

“You have to be able to say that right now I shouldn’t trust you, but that’s extremely difficult,” Wagner said.
