ESG Academy
2019-07-24

Facing Facts: An Inside Look at Facebook's Fight Against Misinformation

Facebook and other social media sites are being criticized for not doing enough to stop bogus stories that seemed to dominate the election cycle.

I mean, the big thing that happened was, in the wake of the U.S. presidential election in 2016, we were under a massive amount of scrutiny.

That's a result of us making mistakes along the way, both in what we built and in how we explained what we did.

Or maybe not explaining enough.

Facebook is now unveiling a new tool that will allow users to see if they had interacted with a troll farm with ties to the Russian government.

That's a really difficult and painful thing, but I think the scrutiny was fundamentally a healthy thing.

You've created these platforms, and now they are being misused, and you have to be the ones to do something about it, or we will.

Fundamentally, Facebook is about trying to bring people together.

We're trying to figure out what stories people are gonna find interesting.

If you look at: do you like a story? Do you comment on a story? Do you share a story?


Nearly everything you see in your newsfeed, you're seeing because somebody who you're connected with, or a page that you've decided to follow, decided to share it.

For a time we felt our responsibility was mostly around just trying to help organize the information that you, in some sense, had asked to see.

There was some reluctance to try to get in between you and those people.

One of the challenges in misinformation is that there is no one consensus or source for truth.

If you think about all of the news that you read in a day, how much of it is objectively false, and how much of it is objectively true?

The truth has this unfortunate aspect to it: sometimes it is not aligned with your desires, with what you have invested in, with what you would like.

And you can see that reflected inside of the content.

There's a lot of content in the gray area; most of it probably exists in some space where people are presenting the facts as they see them.

People consider misinformation to involve a lot of different things.

We've heard that hate speech is misinformation, that false news is misinformation, that speech about the government is misinformation.

So one of the things we're doing internally is defining what we're really looking at, what we can measure reliably, and then figuring out how we communicate that in a way that puts it in the right context.

We in journalism have a myth of objectivity; it doesn't exist.

There's also an expectation of a myth of objectivity or neutrality in the platforms; it doesn't exist.

Because Facebook is being manipulated, Facebook has an obligation to recognize that and compensate for it.

I think one extreme would be bad: a group of Facebook employees reviewing everything that people tried to post, determining whether the content of each post was true or false, and based on that determination deciding whether or not it could be on the platform.

What I think would also be bad is if we took absolutely no responsibility whatsoever, and allowed hate speech and violence to be broadly distributed.

That wouldn't be taking nearly enough responsibility.

The right answer is definitely somewhere in the middle, but that's a big middle.

We want to make sure that we don't inadvertently introduce bias.

It's extra important in all of our work to know your own biases, but also to take a step back and make sure you're listening to the other side.

This is a chart that I often draw for people about the world of false news.


Imagine, on the x-axis, that you have the amount of truth in a piece of content.

Now, on the y-axis, you have the intent to mislead.

You can take this chart and split it into four quadrants, right?

In the bottom left, you have the set of things that are low truth, but nobody was intending to mislead anyone.

That's just called being wrong on the internet, and it happens.

And in the bottom right, you know, it's the set of things that have high truth, and again, nobody was trying to mislead anyone.

That's just called being right on the internet, and I'm sure it'll happen someday.

But then we move up to the places where there's intent to mislead, right?

And these two quadrants are the interesting ones.

The top right, this is things that are high truth, high intent to mislead; so this is stuff like propaganda.

This is stuff like cherry-picking of statistics.

Now, mind you, we have to be really careful here, right?

Because of our commitment to free speech, everything we do here has to be incredibly, incredibly careful.

But then we move to this quadrant, and this is the really dangerous quadrant, right?

Low amount of truth, high intent to mislead.

These are things that were explicitly designed and architected to be viral.

These are the hoaxes of the world.

These are things like 'Pizzagate'; this is just false news.
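The two-axis chart above can be sketched as a tiny function. The 0.5 cutoffs and the quadrant labels are illustrative assumptions for this sketch, not anything Facebook has published:

```python
def quadrant(truth: float, intent_to_mislead: float) -> str:
    """Map a piece of content onto the four quadrants of the chart.

    Both scores are hypothetical values in [0, 1]; the 0.5 cutoffs are
    arbitrary choices for this illustration.
    """
    if intent_to_mislead < 0.5:
        # Bottom half: honest mistakes and honest reporting.
        return "wrong on the internet" if truth < 0.5 else "right on the internet"
    # Top half: deliberate attempts to mislead.
    return "propaganda / cherry-picking" if truth >= 0.5 else "hoax / false news"

# The dangerous quadrant: low truth, high intent to mislead.
print(quadrant(truth=0.1, intent_to_mislead=0.9))  # hoax / false news
```

The top-right quadrant (propaganda, cherry-picked statistics) is the tricky one to act on, which is why the thresholds in any real system would matter far more than they do in this toy.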

We have to get this right if we're going to regain people's trust.

Once we're able to define the tactic or the problem area, we're able to make more progress on it.

And there's a lot of different types of misinformation.

There's bad actors, there's bad behavior, and there's bad content.

Bad actors are things like fake accounts or foreign agents.

Bad behavior is using tactics like spamming to try to spread a message.

And bad content includes things like false news, hate speech, or clickbait.

We don't see just one of these things in isolation.

They tend to come in different combinations, but each of them requires a different strategy.

And each of them also has different teams.

And so one of the things we have to figure out is how we work across this complex space, to ensure that the teams who are fighting fake accounts, the teams who are fighting spam, and the teams who are fighting misinformation are coordinating, and understand how best to leverage each other's technology and each other's understanding.

We have a series of steps that we can take, which we call 'remove', 'reduce', and 'inform'.

So the worst violations, the things that violate our community standards, those are simply removed.

We also remove millions of fake accounts and bots that spread bad content.

The next category is this gray area: things that don't violate a community standard but may be something that people don't want to see, like clickbait or spam.

In that case, we greatly reduce its distribution.

A few people will still see it. If this is something that, you know, your best friend shared on a topic that is greatly interesting to you, it might still be in your feed, because we think you might still want to see it.

And then the last step is 'inform'.

An example of this is that, in some cases, we'll show you a related article that maybe gives you a little more context, or we'll give you more information about the source of the story, and that's just to help you make your own decisions.


Is this a reputable source? Is this really giving you all sides of the story?

The community is the best defense against misinformation in the long run, and so by informing the community, we can make that defense a little stronger.
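The remove / reduce / inform steps described above can be sketched as a simple dispatcher. Every field name on `post` is an assumption invented for this illustration, not a real Facebook API:

```python
def triage(post: dict) -> dict:
    """Apply the 'remove', 'reduce', 'inform' policy described above.

    `post` is a hypothetical dict of signals; the field names are
    assumptions made up for this sketch.
    """
    # Worst violations and fake accounts are simply removed.
    if post.get("violates_community_standards") or post.get("from_fake_account"):
        return {"action": "remove"}
    # Gray-area content (clickbait, spam) stays up but is demoted in ranking.
    if post.get("clickbait") or post.get("spam"):
        return {"action": "reduce", "ranking_multiplier": 0.1}
    # Fact-checked stories get extra context attached for the reader.
    if post.get("related_articles"):
        return {"action": "inform", "context": post["related_articles"]}
    return {"action": "none"}

print(triage({"clickbait": True}))  # {'action': 'reduce', 'ranking_multiplier': 0.1}
```

Note how 'reduce' demotes rather than deletes: demoted content can still surface for a few people, which matches the "your best friend shared it" caveat above.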

One of the cases of ideologically motivated misinformation was a story of an undocumented immigrant who was living by a creek, sleeping under a bridge, and keeping a cooking fire to stay warm.

But then it was picked up by one site that said he was the cause of the Napa Valley wine country fires.

So the misinformation took this story and twisted it.

In these situations, we work very closely with third-party fact-checking partners and rely on them both to surface content to us that might be misinformation, and to help us verify things that are going viral on the platform.

If a certain source has multiple examples of false information, then that's a pretty good signal that things they publish in the future have at least a higher likelihood of being false, and so maybe they should be reduced in ranking.
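The source-reputation signal just described can be sketched as a counter over fact-check verdicts. The threshold of three 'false' ratings is an arbitrary assumption for this sketch:

```python
from collections import Counter

def sources_to_demote(fact_check_ratings, threshold=3):
    """Given (source, rating) pairs from fact-checkers, return sources whose
    repeated 'false' ratings suggest future posts should be ranked lower.

    The threshold is an arbitrary assumption for this illustration.
    """
    false_counts = Counter(
        source for source, rating in fact_check_ratings if rating == "false"
    )
    return {source for source, n in false_counts.items() if n >= threshold}

ratings = [("site-a", "false"), ("site-a", "false"), ("site-a", "false"),
           ("site-b", "false"), ("site-b", "true")]
print(sources_to_demote(ratings))  # {'site-a'}
```

A single false rating is not enough here by design: the transcript's point is about sources with *multiple* examples of false information, not one-off mistakes.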

Misinformation travels in a cluster, with people spreading polarization using exaggeration and sensationalism, low-quality content, and ad farms.

So we combat a whole spectrum of problems, and one of the things that helps me in fighting misinformation is that we can come at the problem from so many angles.

When someone posts a misleading photo or video, it can be a lot more challenging.

Because those are more visual, they're more visceral.

It's harder to see it and then not believe that it's true.

One famous example of this is the photo of the Seattle Seahawks supposedly burning an American flag in a locker room.

They were celebrating after a victory; someone photoshopped in a burning flag, and then used that to make claims about these players disrespecting their country.

We work with third-party experts who are trained in visual verification, and they're able to use a variety of tools, such as reverse image search tools and tools that let them scrape the metadata from a public photo, and then use the information in that metadata, like where the image was taken, and cross-reference it against the context in which the image is being used.
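One building block behind reverse image search is a perceptual hash: an original photo and a doctored copy produce hashes that differ in only a few bits, so the edited version can be traced back to its source. A toy average-hash over grayscale pixel grids, nothing like the production systems such tools actually use:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel is brighter
    than the image mean. Real systems use far more robust hashes; this only
    illustrates the idea of matching near-duplicate images."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(p > mean for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 250]]
doctored = [[10, 200], [30, 240]]   # a small local edit, like a pasted-in flag
unrelated = [[200, 10], [250, 30]]

print(hamming_distance(average_hash(original), average_hash(doctored)))   # 0
print(hamming_distance(average_hash(original), average_hash(unrelated)))  # 4
```

The doctored image hashes identically to the original here, which is exactly why a reverse image search can surface the unedited Seahawks photo next to the manipulated one.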

And it's not just happening here; it's happening in hundreds of countries, for billions of people around the world.

Our teams are global teams.

We're trying to hire people thoughtfully, people who bring different perspectives and different life experiences, and who do want to be accountable to the world.

In the early days of the net, there really was nothing that stood between us and each other.

Now, of course, we hit abundance and overload, and thus you need algorithms and ranking to make sense out of it.

We have two-plus billion users here at Facebook.

There are billions of pieces of content, and we cannot individually categorize this is this or that is that.

And so we have to do everything in the form of machine learning.

Machine learning algorithms are pieces of software that can essentially identify patterns, so you train the software by showing it examples of false content, and it can derive patterns from those examples that it can use to flag potentially incorrect content in the future.

Machine learning enables us to make predictions about a lot of data without having humans review each individual piece.

So a machine learning model can predict how likely something is to be clickbait, or how likely something is to be sensational.
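The train-on-examples loop described above can be illustrated with a tiny word-count model. This is a toy naive-Bayes-style scorer over a handful of made-up headlines, nothing like the models Facebook actually deploys:

```python
import math
from collections import Counter

def train(examples):
    """examples: (text, label) pairs; returns per-label word counts."""
    counts = {"clickbait": Counter(), "normal": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Score each label with add-one smoothing and return the likelier one."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values()) + len(c)  # smoothing denominator
        scores[label] = sum(
            math.log((c[word] + 1) / total) for word in text.lower().split()
        )
    return max(scores, key=scores.get)

model = train([
    ("you won't believe what happened next", "clickbait"),
    ("this one trick doctors hate", "clickbait"),
    ("city council approves new budget", "normal"),
    ("local team wins championship game", "normal"),
])
print(predict(model, "you won't believe this one trick"))  # clickbait
```

The point of the sketch is the shape of the workflow, not the model: label examples, learn patterns from them, then flag new content whose patterns look like the labeled examples.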

Because the problem is so complicated, we're deploying fundamentally every resource.

We're leveraging machine learning everywhere we can; we're creating datasets that allow us to build algorithms that detect even the most nuanced versions of misinformation.

And we're figuring out how to leverage our network and our graph to understand how false information propagates, so we can get ahead of it before it gets ahead of us.

Responsibilities.

With connecting people, particularly at our scale, comes an immense amount of responsibility.

Every week we talk to all of you, all of the new hires, about all the work we're doing to try and improve the integrity of the information that flows through newsfeed.

Because it's important that you have that context.

And we're gonna have to work together if we're going to be able to effectively address the issues that we face.

We definitely think a lot about our responsibility.

At the end of the day, we depend on our community of users.

So, ideally, what's good for them is also good for us, and there's sort of a natural alignment of interests.

We try to make a more interesting newsfeed because we think that's good for people, we think it's good for communities, and that will also be good for us and our business in the long run.

Misinformation is gonna remain a topic, but it's gonna be an arms race; it's gonna move from one frontier of the battle to a different frontier.

But in the time that I've been here, we've doubled the size of my team, and we're doubling again.

So there's a great commitment to improving the integrity of our systems.

I think that we're making progress now, and that progress is going to accrue, and it's going to get better and better and better.

We're gonna get interest on that progress, but we're taking great steps every single day towards solving this incredibly complex problem.

We have to get this right, not just for our platform, but for the community of people that we serve around the world.

 

Copyright Announcement: The above video is from the organization's official YouTube channel (Facing Facts: An Inside Look at Facebook's Fight Against Misinformation, https://www.youtube.com/user/theofficialfacebook). The videos we selected are all publicly available on the official channel, and we do not own the copyright to these videos. Our Chinese translation of this film is free for the 1.2 billion Mandarin Chinese speakers around the world to watch and learn about SDGs and SROI advertising campaigns.
