{"id":56515,"date":"2018-04-05T09:26:49","date_gmt":"2018-04-05T16:26:49","guid":{"rendered":"http:\/\/69.46.6.243\/?p=56515"},"modified":"2018-04-05T09:26:49","modified_gmt":"2018-04-05T16:26:49","slug":"mark-zuckerberg-on-protecting-peoples-information-up-to-87-million-records-exposed","status":"publish","type":"post","link":"https:\/\/new.thepinetree.net\/?p=56515","title":{"rendered":"Mark Zuckerberg on Protecting People\u2019s Information &#038; Up To 87 Million Records Exposed"},"content":{"rendered":"<p>Palo Alto, CA&#8230;Mark Zuckerberg spoke with members of the press about Facebook\u2019s efforts to better protect people\u2019s information. The following is a transcript of his remarks and the Q&#038;A that followed.<\/p>\n<p>Opening Remarks<br \/>\nHey everyone. Thanks for joining today. Before we get started today, I just want to take a moment to talk about what happened at YouTube yesterday. Silicon Valley is a tight-knit community, and we all have a lot of friends over there at Google and YouTube.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/new.thepinetree.net\/wp-content\/uploads\/2017\/09\/facebook-logo.jpg\" alt=\"\" width=\"430\" height=\"200\" class=\"alignnone size-full wp-image-44391\" srcset=\"https:\/\/new.thepinetree.net\/wp-content\/uploads\/2017\/09\/facebook-logo.jpg 430w, https:\/\/new.thepinetree.net\/wp-content\/uploads\/2017\/09\/facebook-logo-300x140.jpg 300w, https:\/\/new.thepinetree.net\/wp-content\/uploads\/2017\/09\/facebook-logo-150x70.jpg 150w\" sizes=\"auto, (max-width: 430px) 100vw, 430px\" \/><\/p>\n<p>We\u2019re thinking of everyone there and everyone who was affected by the shooting.<\/p>\n<p>Now I know we face a lot of important questions. 
So I just want to take a few minutes to talk about that upfront, and then we\u2019ll take about 45 minutes of your questions.<\/p>\n<p>Two of the most basic questions that I think people are asking about Facebook are: first, can we get our systems under control and can we keep people safe, and second, can we make sure that our systems aren\u2019t used to undermine democracy?<\/p>\n<p>And I\u2019ll talk about both of those for a moment and the actions that we\u2019re taking to make sure the answers are yes. But I want to back up for a moment first.<\/p>\n<p>We\u2019re an idealistic and optimistic company. For the first decade, we really focused on all the good that connecting people brings. And as we rolled Facebook out across the world, people everywhere got a powerful new tool for staying connected, for sharing their opinions, for building businesses. Families have been reconnected, people have gotten married because of these tools. Social movements and marches have been organized, including just in the last couple of weeks. And tens of millions of small businesses now have better tools to grow that previously only big companies would have had access to.<\/p>\n<p>But it\u2019s clear now that we didn\u2019t do enough. We didn\u2019t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well. That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy. We didn\u2019t take a broad enough view of what our responsibility is, and that was a huge mistake. It was my mistake.<\/p>\n<p>So now we have to go through every part of our relationship with people and make sure that we\u2019re taking a broad enough view of our responsibility. It\u2019s not enough to just connect people, we have to make sure that those connections are positive and that they\u2019re bringing people closer together. 
It\u2019s not enough to just give people a voice, we have to make sure that people are not using that voice to hurt people or spread disinformation. And it\u2019s not enough to give people tools to sign into apps, we have to ensure that all of those developers protect people\u2019s information too. It\u2019s not enough to have rules requiring they protect information, it\u2019s not enough to believe them when they tell us they\u2019re protecting information \u2014 we actually have to ensure that everyone in our ecosystem protects people\u2019s information.<\/p>\n<p>So across every part of our relationship with people, we\u2019re broadening our view of our responsibility, from just giving people tools to recognizing that it\u2019s on us to make sure those tools are used well.<\/p>\n<p>Now let me get into more specifics for a moment.<\/p>\n<p>With respect to getting our systems under control, a couple of weeks ago I announced that we were going to do a full investigation of every app that had a large amount of people\u2019s data before we locked down the platform, and that we\u2019d make further changes to restrict the data access that developers could get.<\/p>\n<p>[VP, Product Partnerships] Ime Archibong and [Chief Technology Officer] Mike Schroepfer followed up with a number of changes we\u2019re making, including requiring apps you haven\u2019t used in a while to get your authorization again before querying for more of your data. And today we\u2019re following up further and restricting more APIs like Groups and Events. The basic idea here is that you should be able to sign into apps and share your public information easily, but anything that might also share other people\u2019s information \u2014 like other posts in groups you\u2019re in or other people going to events that you\u2019re going to \u2014 those should be more restricted. 
I\u2019m going to be happy to take questions about everything we\u2019re doing there in a minute.<\/p>\n<p>I also want to take a moment to talk about elections specifically.<\/p>\n<p>Yesterday we took a big action by taking down Russian IRA pages targeting their home country.<\/p>\n<p>Since we became aware of this activity after the 2016 US elections, we\u2019ve been working to root out the IRA and protect the integrity of elections around the world. And since then there have been a number of important elections that we\u2019ve focused on. A few months after the 2016 elections there was the French presidential election, and leading up to that we deployed some new AI tools that took down more than 30,000 fake accounts. After that there was the German election, where we developed a new playbook for working with the local election commission to share information on the threats we were each seeing. And in the US Senate Alabama special election last year, we successfully deployed some new AI tools that removed Macedonian trolls who were trying to spread misinformation during the election.<\/p>\n<p>So all in, we now have about 15,000 people working on security and content review, and we\u2019ll have more than 20,000 by the end of this year.<\/p>\n<p>This is going to be a big year of elections ahead, with the US midterms and presidential elections in India, Brazil, Mexico, Pakistan, Hungary and others \u2014 so this is going to be a major focus for us.<\/p>\n<p>But while we\u2019ve been doing this, we\u2019ve also been tracing back and identifying this network of fake accounts the IRA has been using so we can work to remove them from Facebook entirely. This was the first action we\u2019ve taken against the IRA in Russia itself, and it included identifying and taking down Russian news organizations that we determined were controlled and operated by the IRA. 
So we have more work to do here, and we\u2019re going to continue working very hard to defend against them.<\/p>\n<p>All right. So that\u2019s my update for now. We expect to make more changes over the coming months, and we\u2019ll keep you updated, and now let\u2019s take some questions.<\/p>\n<p>Q&#038;A<br \/>\nDavid McCabe, Axios: Given that Colin testified just last year, and more has come out since then, and given that the numbers around the time of the IRA operation changed so drastically, why should lawmakers\u2014why should users and Congress trust that you are giving them a full and accurate picture now?<\/p>\n<p>Mark: Of the IRA \u2014 I think there is going to be more content that we are going to find over time. As long as there are people employed in Russia who have the job of trying to find ways to exploit these systems, this is going to be a never-ending battle. You never fully solve security \u2014 it\u2019s an arms race. In retrospect we were behind, and we didn\u2019t invest enough in it up front. We had thousands of people working on security, but nowhere near the 20,000 that we\u2019re going to have by the end of this year. So I am confident we are making progress against these adversaries. But they were very sophisticated, and it would be a mistake to assume that you can ever fully solve a problem like this, or think that they are going to give up and stop doing what they are doing.<\/p>\n<p>Rory Cellan Jones, BBC: You, back in November 2016 when you could say this crisis began, dismissed as crazy the idea that fake news could influence the election, and more recently here in the UK you\u2019ve turned down an invitation to speak to our Parliamentarians in the House of Commons, just as we learn tonight that 1 million UK users were affected by the Cambridge Analytica data leak. Are you taking this seriously enough, and can you convince British users that you care enough about the situation?<\/p>\n<p>Mark: Yes. 
So we announced today that I\u2019m going to be testifying in front of Congress. I imagine that is going to cover a lot of ground. I am going to be sending one of our top folks. I believe it\u2019s going to be [Mike Schroepfer], the CTO, or Chris Cox, the product officer. These are the top folks who I run the company with\u2014to answer additional questions from countries and other places.<\/p>\n<p>Oh sorry, I should also probably address \u2014 you asked about my comments after the 2016 election. I\u2019ve said this already \u2014but I think at this point that I clearly made a mistake by just dismissing fake news as \u201ccrazy\u201d\u2014 as having an impact. People will analyze the actual impact of this for a long time to come, but what I think was clear at this point is that it was too flippant. I should have never referred to it as crazy. This is clearly a problem that requires careful work, and since then we\u2019ve done a lot to fight the spread of disinformation on Facebook from working with fact checkers to making it so that we\u2019re trying to promote and work with broadly trusted news sources. But this is an important area of work for us.<\/p>\n<p>Ian Sherr, CNET: So you just announced 87 million people affected by the Cambridge Analytica stuff today. How long did you know this number was affected? Because the 50 million number was out there for quite a while. I know you guys weren\u2019t specifically saying that, but it feels like the data keeps changing on us. And we\u2019re not getting a full forthright view of what\u2019s going on here.<\/p>\n<p>Mark: We only just finalized our understanding of the situation in the last I think couple of days on this. And as you said, we didn\u2019t put out the 50 million number. That came from other parties. We wanted to wait until we had the full understanding. 
Just to give you the complete picture on this: we don\u2019t have logs going back from when exactly [Aleksandr] Kogan\u2019s app queried for everyone\u2019s friends. What we did was basically construct the maximum possible number of friends lists that everyone could have had over that time, and assumed that Kogan queried each person at the time when they had the maximum number of connections that would\u2019ve been available to them. That\u2019s where we came up with this 87 million number. We wanted to take a broad view that is a conservative estimate. I am quite confident, given our analysis, that it is not more than 87 million. It very well could be less, but we wanted to put out the maximum we felt that it could be as that analysis says.<\/p>\n<p>David Ingram, Reuters: Hi Mark. I\u2019m wondering if you can address the audits that you\u2019re doing for third-party app developers. Specifically, I hear what you\u2019re saying about taking a broader view now about the company\u2019s responsibility, but why weren\u2019t there audits of the use of the social graph API done years ago back in the 2010-2015 period?<\/p>\n<p>Mark: Well, in retrospect, I think we clearly should have been doing more all along. But just to speak to how we were thinking about it at the time, as just a matter of explanation, I\u2019m not trying to defend this now: I think our view in a number of aspects of our relationship with people is that our job is to give them tools, and that it was largely people\u2019s responsibility how they chose to use them \u2014 whether that\u2019s tools on how to share your voice, tools on how to log in to apps and bring your information to them. I think it was wrong in retrospect to have that limited of a view, but the reason why we acted the way that we did was because we viewed that when someone chose to share data with the platform it acted the way it was designed. 
With this personality quiz app, our view is that yes, Kogan broke the policies and that he broke people\u2019s expectations. But also that people chose to share that data with him. I think today, given what we know, not just about developers, but across all of our tools, and across what our place in society is, it\u2019s such a big service that\u2019s so central in people\u2019s lives. I think we need to take a broader view of our responsibility. We\u2019re not just building tools, but we need to take full responsibility for the outcome and how people use those tools as well. That\u2019s at least why we didn\u2019t do it at the time, but knowing what I know today, clearly we should have done more. And we will going forward.<\/p>\n<p>Cecilia Kang, New York Times: Hi. Thanks for taking my question. Mark, you have indicated that you could be comfortable with some sort of regulation, and I think you alluded to potentially political ads. I\u2019d like to ask you about privacy regulations that are about to take form, or take effect in Europe\u2013GDPR. Would you be comfortable with those types of data protection regulations in the United States and deeper for global users?<\/p>\n<p>Mark: Overall, I think regulations like the GDPR are very positive. I was somewhat surprised by yesterday\u2019s Reuters story that ran on this because the reporter asked if we are planning on running the controls for GDPR across the world and my answer was yes. We intend to make all the same controls and settings available everywhere, not just in Europe. Is it going to be exactly the same format? Probably not. We need to figure out what makes sense in different markets with the different laws and different places. But\u2014let me repeat this\u2014we\u2019ll make all controls and settings the same everywhere, not just in Europe.<\/p>\n<p>Tony Romm, Washington Post: In a blog post, you acknowledged that profile information had been scraped by malicious actors. Who are these actors? 
Are they political organizations like Cambridge or others? And given that, do you believe this was all in violation of your 2011 settlement with the FTC? Thanks.<\/p>\n<p>Mark: To take a step back on this, all of the changes we announced today were about tools that we built that were useful to a lot of people for sharing information or connecting with people, but where we basically felt that potential bad actors\u2014or specific folks who we\u2019ve observed\u2014could get too much information and potentially misuse it. Whether that\u2019s the changes in groups or events, it\u2019s not unreasonable to have an API where someone can bring the activity in a group to an app and be able to interact with that in an external app. We still wanted to shut that down because we felt like there were too many apps and too many folks who would have had access to people\u2019s content, and that would have been problematic. It\u2019s a similar situation with search. What we found here is we built this feature, and it\u2019s very useful. There were a lot of people who were using it until we shut it down today to look up the people who they want to add as friends but they don\u2019t have as friends yet. Especially in places where there are languages that make it easier to type in a phone number than someone\u2019s name, or where a lot of people have the same name, it\u2019s helpful to have a unique identifier to disambiguate. But I think what was also clear is that the methods of rate limiting this weren\u2019t able to prevent malicious actors who cycled through hundreds of thousands of different IP addresses and did a relatively small number of queries for each one. Given that and what we know today, it just makes sense to shut that down.<\/p>\n<p>You asked about the FTC consent order. We\u2019ve worked hard to make sure that we comply with it. 
I think the reality here is that we need to take a broader view of our responsibility, rather than just the legal responsibility. We\u2019re focused on doing the right thing and making sure people\u2019s information is protected, and we\u2019re doing investigations. We\u2019re locking down the platform, et cetera. I think that our responsibilities to the people that use Facebook are greater than just what\u2019s written in that order, and that\u2019s the standard I want to hold us to.<\/p>\n<p>Hannah Kuchler, Financial Times: Hi Mark. Thanks for taking my question. Investors have raised a lot of concerns about whether this is the result of corporate governance issues at Facebook. Has the board discussed whether you should step down as chairman?<\/p>\n<p>Mark: Not that I\u2019m aware of.<\/p>\n<p>Alexis Madrigal, The Atlantic: Every company, big and small, balances the service they provide with the needs of the business. In light of [Andrew Bosworth] Boz\u2019s post and your rethinking of Facebook\u2019s responsibility, have you ever made a decision that benefited Facebook\u2019s business but hurt the community?<\/p>\n<p>Mark: I\u2019ll answer your question, but first, because you brought up Boz\u2019s post, let me take a moment to make sure that everyone understands that I disagreed with that at the time and I disagree with that now. I don\u2019t think that it stands for what most people inside the company believe. If you looked at the comments on that thread, when he initially wrote it, it was massively negative. So, I feel like that\u2019s an important point to set aside.<\/p>\n<p>In terms of the questions you asked, balancing stakeholders, the thing that I think makes our product challenging to manage and operate is not the trade-offs between the people and the business \u2014 I actually think that those are quite easy because over the long term the business will be better if you serve people. 
I just think that it would be near-sighted to focus on short-term revenue over what value to people is, and I don\u2019t think we are that short-sighted. All of the hard decisions that we have to make are actually trade-offs between people. One of the big differences between the type of product that we are building is \u2014 which is why I refer to it as a community and what I think some of the specific governance challenges we have are \u2014 the different people that use Facebook have different interests. Some people want to share political speech that they think is valid, and other people feel like it\u2019s hate speech. And then, people ask us, \u201cAre you just leaving that up because you want people to be able to share more?\u201d These are real values and questions and trade-offs. Free expression on the one hand, making sure it\u2019s a safe community on the other hand. We have to make sure we get to the right place, and we\u2019re doing that in an environment that\u2019s not static. The social norms are changing continually, and they\u2019re different in every country around the world. Getting those trade-offs right is hard, and we certainly don\u2019t always get them right. To me, that\u2019s the hard part about running the company\u2014not the trade-off between the people and the business.<\/p>\n<p>Alyssa Newcomb, NBC News: Hi Mark, you said you\u2019ve clearly made some mistakes in past, and I\u2019m wondering do you still feel like you\u2019re the best person to run Facebook moving forward?<\/p>\n<p>Mark: Yes. I think life is about learning from the mistakes and figuring out what you need to do to move forward. A lot of times people ask, \u201cWhat are the mistakes you made early on, starting the company, or what would you try to do differently?\u201d The reality of a lot of this is that when you are building something like Facebook that is unprecedented in the world, there are going to be things that you mess up. 
And if we had gotten this right, we would have messed something else up. I don\u2019t think anyone is going to be perfect, but I think what people should hold us accountable for is learning from the mistakes and continually doing better and continuing to evolve what our view of our responsibility is \u2014 and, at the end of the day, whether we\u2019re building things that people like and that make their lives better. I think it\u2019s important to not lose sight of that through all of this. I\u2019m the first to admit that we didn\u2019t take a broad enough view of what our responsibilities were. But, I also think it\u2019s important to keep in mind that there are billions of people who love the services that we\u2019re building because they\u2019re getting real value and being able to connect and build connections and relationships on a day-to-day basis. And that\u2019s something I\u2019m really proud of our company for doing, and I know that we will keep on doing that.<\/p>\n<p>Josh Constine, TechCrunch: Thank you. During today\u2019s disclosure and announcement, Facebook explained that the account recovery and search tools using email and phone number could have been used to scrape information about all of Facebook\u2019s users. When did Facebook find out about this scraping operation, and, if that was before a month ago, why didn\u2019t Facebook inform the public about it immediately?<\/p>\n<p>Mark: We looked into this and understood it more over the last few days as part of the audit of our overall system. Everyone has a setting on Facebook that controls \u2014 it\u2019s right in your privacy settings \u2014 whether people can look you up by your contact information. Most people have that turned on, and that\u2019s the default, but a lot of people have also turned it off. 
So it\u2019s not quite everyone, but certainly the potential here would be that over the period of time that this feature has been around, people have been able to scrape public information. That is, if you have someone\u2019s phone number, you can put that in and get a link to their profile, which pulls their public information. So, I certainly think that it is reasonable to expect that if you had that setting turned on, that at some point during the last several years, someone has probably accessed your public information in this way.<\/p>\n<p>Will Oremus, Slate: Thanks very much for doing this. You run a company that relies on people being willing to share data, that is then used to target them with ads. We also now know that it can be used in more manipulative ways or ways they don\u2019t expect. We also know you\u2019re protective of your own privacy. You acknowledged that you put tape over your webcam at one point, and I think you bought one of the lots surrounding your home just to get more privacy. I\u2019m curious \u2014 what other steps do you take personally to protect your privacy online? Do you use an ad blocker? As a Facebook user, would you sign up for an app like the personality quiz that folks signed up for? Thanks very much for having us.<\/p>\n<p>Mark: I certainly use a lot of apps. I don\u2019t know if I use that one specifically, but I am a power user of the internet here. In order to protect privacy, I would just advise that people follow best practices around security: turn on two-factor authentication, change passwords regularly, don\u2019t have your password recovery responses be information that you made publicly available somewhere. All the basic practices, and then just look out and understand that most attacks are going to be social engineering, and not necessarily people trying to break into security systems. 
For Facebook specifically, one of the things we need to do and that I hope that more people look at are just the privacy controls that you have. I think, especially leading up to the GDPR event, a lot of people are asking us, \u201cOkay, are you going to implement all those things?\u201d And my answer is that we\u2019ve had almost all of what\u2019s in there implemented for years, around the world, not just in Europe. So, to me, the fact that a lot of people might not be aware of that is an issue, and I think we could do a better job of putting these tools in front of people and not just offering them, and I would encourage people to use them and make sure that they\u2019re comfortable with how their information is used on our services and others.<\/p>\n<p>Sarah Frier, Bloomberg: Hi Mark. There\u2019s broad concern that these audits for developers won\u2019t actually work, that the data that users gave to third-parties years ago could be anywhere by now. What results do you hope to achieve from the audit and what won\u2019t you be able to find?<\/p>\n<p>Mark: It\u2019s a good question. No measure that you take on security is going to be perfect, but a lot of the strategy has to involve changing the economics of potential bad actors to make it not worth doing what they might do otherwise. So I think you\u2019re right that we\u2019re not going to be able to go out and necessarily find every single bad use of data. What we can do is make it a lot harder for folks to do that going forward: change the calculus on anyone who is considering doing something sketchy going forward. And I actually do think that we\u2019ll be able to uncover a large amount of bad activity, of what exists, and we will be able to go in and do audits and ensure people go get rid of bad data.<\/p>\n<p>Steve Kovach, Business Insider: Hi. 
Has anyone been fired related to the Cambridge Analytica issue or any other data privacy issue?<\/p>\n<p>Mark: I have not\u2026 due to the Cambridge Analytica situation. We are still working through this. At the end of the day, this is my responsibility. So there have been a bunch of questions about that. I started this place. I run it. And I am responsible for what happens here. To the question before, I still think that I\u2019m going to do the best job to help run it going forward. I\u2019m not looking to throw anyone else under the bus for mistakes that we\u2019ve made here.<\/p>\n<p>Nancy Cortez, CBS News: Hi there. Thank you so much for taking the question. Your critics say, look, Facebook\u2019s model, Facebook\u2019s business model, depends on harvesting personal data. How can you ever personally reassure users that their data won\u2019t be used in ways they don\u2019t expect?<\/p>\n<p>Mark: I think we can certainly do a better job of explaining what we actually do. There are many misconceptions around what we do that I think we haven\u2019t succeeded in clearing up for years. So, first, the vast majority of data that Facebook knows about you is because you chose to share it. Right? It\u2019s not tracking. There are other internet companies or data brokers or folks that might try to track and sell data, but we don\u2019t buy and sell. In terms of the ad activity, I mean that\u2019s a relatively smaller part of what we\u2019re doing. The majority of the activity is people actually sharing information on Facebook, which is why people understand how much content is there, because people put all the photos and information there themselves. The second point, which I touched on briefly there: for some reason we haven\u2019t been able to kick this notion for years that people think we will sell data to advertisers. We don\u2019t. That\u2019s not been a thing that we do. Actually it just goes counter to our own incentives. 
Even if we wanted to do that, it just wouldn\u2019t make sense to do that. So, I think we can certainly do a better job of explaining this and making it understandable, but the reality is the way we run the service is: people share information, we use that to help people connect and to make the services better, and we run ads to make it a free service that everyone in the world can afford.<\/p>\n<p>Mathew Braga, CBC News: Hey Mark, I just want to go back to something that was brought up earlier around the scraping of profile information. I know Mike Schroepfer in his post said something about the scale and sophistication of the activity. And I\u2019m just wondering can you put a little more context on that? Like what sort of scale are we talking about? Do you have exact numbers? Can you give us any harder sense than, sort of, what\u2019s in that post?<\/p>\n<p>Mark: In terms of sophistication, this is stuff that I\u2019ve already said on some of the other answers, so I\u2019ll try to keep this short. We had basic rate-limiting protections in place, making sure that accounts couldn\u2019t do a whole lot of searches. But we did see a number of folks who cycled through many thousands of IPs, hundreds of thousands of IP addresses, to evade the rate-limiting system, and that wasn\u2019t a problem we really had a solution to. So now, that\u2019s partially why the answer we came to is to shut this down even though a lot of people are getting a lot of use out of it. That\u2019s not something we necessarily want to have going on. In terms of the scale, given this is a feature that\u2019s been available for a while, and a lot of people use it in the right way, but we\u2019ve also seen some scraping, I would assume that if you had that setting turned on, someone at some point has accessed your public information in this way.<\/p>\n<p>Rebecca Jarvis, ABC News: Hi Mark. Thanks for doing this. 
Cambridge Analytica has tweeted now since this conversation began, \u201cWhen Facebook contacted us to let us know the data had been improperly obtained, we immediately deleted the raw data from our file server, and began the process of searching for and removing any of its derivatives in our system.\u201d And I want to understand from you, now that you have this finalized understanding, do you agree with Cambridge\u2019s interpretation and the tweet they just shared? And will you be pursuing legal action against Cambridge Analytica?<\/p>\n<p>Mark: I don\u2019t know that what we announced today really is connected to what they just said at all. What we announced with the 87 million is the maximum number of people we could calculate could have been accessed. We don\u2019t actually know how many people\u2019s information Kogan actually got. We don\u2019t know what he sold to Cambridge Analytica, and we don\u2019t know today what they have in their system. What we have said and what they\u2019ve agreed to do, is a full forensic audit of their system, so we can get those answers. But, at the same time the UK government, and the ICO, are doing a government investigation and that takes precedence. So, we\u2019ve stood down temporarily, to let the ICO do their investigation and their audit, and once that\u2019s done, we\u2019ll resume ours, so we can get answers to the questions that you\u2019re asking and ultimately to make sure that none of the data persists or is being used improperly. And at that point if it makes sense, we will take legal action if we need to do that to protect people\u2019s information.<\/p>\n<p>Alex Kantrowitz, BuzzFeed: Hey Mark, thanks so much for doing this. We should do this every month, this is great. So, my question is that Facebook is really good at making money. But I wonder if your problems could be somewhat mitigated if the company didn\u2019t try to make so much. 
So, you can still run Facebook as a free service and collect significantly less data and offer significantly less ad targeting criteria. So, I wonder if you think that would put you and society at less risk and if you think it\u2019s something you\u2019d consider doing?<\/p>\n<p>Mark: People tell us that if they\u2019re going to see ads, they want the ads to be good. And the way to make the ads good is by making it so that when someone tells us they have an interest, they like technology or they like skiing or whatever it is they like, that the ads are actually tailored to what they care about. So, like most of the hard decisions that we make, this is one where there is a trade-off between values that people really care about. On the one hand people want relevant experiences, and on the other hand I do think that there is some discomfort with how data is used in systems like ads. But I think the feedback is overwhelmingly on the side of wanting a better experience. You know, maybe it\u2019s 95-5 or something like that in terms of the preferences that people state to us and in their use of the product. So, that informs the decisions that we make here to offer the best service to people, but these are hard values trade-offs and I think we are doing the right thing to serve people better.<\/p>\n<p>Nancy Scola, POLITICO: When you became aware in 2015 that Cambridge Analytica inappropriately accessed this Facebook data, did you know that firm\u2019s role in American politics and in Republican politics in particular?<\/p>\n<p>Mark: I certainly didn\u2019t. One of the things, in retrospect looking back at it, people ask is: why didn\u2019t you ban them back then? We banned Kogan\u2019s app from our platform, but we didn\u2019t ban Cambridge Analytica in 2015. Why didn\u2019t we do that? It actually turns out, in our understanding of the situation, they weren\u2019t using any of Facebook\u2019s services back then. 
They weren\u2019t an advertiser, although they went on to become one in the 2016 elections. And I don\u2019t think they were administering tools and they didn\u2019t build an app directly. So, they were not really a player that we had been paying attention to. So, that\u2019s the history there.<\/p>\n<p>Carlos Hernandez, Expansion: Hi Mark. You mentioned one of the most important things about Facebook is people\u2026 and users\u2019 understanding of the platform. Do you have any plans to let users know how their data is being used? Not just on Facebook but also on Instagram and other platforms that you are responsible for?<\/p>\n<p>Mark: I think we need to do a better job of explaining the principles that the service operates under, but the main principles are: you have control over everything you put on the service, and most of the content Facebook knows about you is because you chose to share that content with your friends and put it on your profile. And we\u2019re going to use data to make those services better, whether that\u2019s ranking News Feed, or ads, or search, or helping you connect with people through People You May Know, but we\u2019re never going to sell your information. And I think if we can get to a place where we can communicate that in a way that people can understand it, then I think we have a shot at distilling this down to a simpler thing, but that\u2019s certainly not something we have succeeded at doing historically.<\/p>\n<p>Kurt Wagner, Recode: Hey Mark. There\u2019s been the whole #deletefacebook thing that went around a few weeks ago, and there have been advertisers that have said they are either going to pull advertising money or pull their pages down altogether. 
I\u2019m wondering, on the back end, have you seen any actual change in usage from users or change in ad buys from advertisers over the past couple of weeks as a result of all this?<\/p>\n<p>Mark: I don\u2019t think there has been any meaningful impact we\u2019ve observed. But, look, it\u2019s not good. I don\u2019t want anyone to be unhappy with our services or what we do as a company. So, even if we can\u2019t really measure a change in the usage of the product, or the business, or anything like that, it still speaks to people feeling like this is a massive breach of trust and that we have a lot of work to do to repair that.<\/p>\n<p>Fernando Santillanes, Grupo Milenio: Hi Mark. Thank you very much for doing this. There\u2019s a lot of concern in Mexico about fake news. People say that partnering with media to [downrank] these fake articles is not enough. We are in an election year here in Mexico. People are worried that there are a lot of apps, a lot of means, that a candidate could use to manipulate information. What do you say to Mexicans this election year, where almost all internet users have a Facebook account and want to see a more active Facebook effort to detect and [downrank] fake news?<\/p>\n<p>Mark: This is important. Let me say two things. The first is that 2018 is going to be an important year for protecting election integrity around the world. There\u2019s the Mexican presidential election, there are big presidential elections in India and Brazil, as well as Pakistan and Hungary and a number of other countries, and the US midterms, of course, too. Second, let me talk about how we\u2019re fighting fake news across the board. Right, because there are really three different types of activity that require different strategies for fighting them, so you can understand all of what we\u2019re doing here. 
The three basic categories are: the first is economic actors \u2014 basically spammers; the second is governments trying to interfere in elections \u2014 that\u2019s a security issue; the third is just polarization and some lack of truthfulness in what you\u2019ve described as the media, in terms of people who are legitimately trying to get the opinion they believe out there. So let\u2019s look at each of these briefly.<\/p>\n<p>So for economic actors, these are folks like the Macedonian trolls who we identified with AI tools leading up to the Alabama special election. What these folks are doing is just an economic game, it\u2019s not ideological at all. They come up with the most sensational thing they can, try to push it out to social media and the internet to try to get you to click on it so that they can make money on ads. So, we make it so that the economics stop working for them, and they\u2019ll move on to something else. I mean, these are the same type of people who were sending you Viagra emails in the 90s. Right, we can attack it from both sides: on the revenue side, we can make it so that they can\u2019t run on the Facebook ad network, and that\u2019s important because now they don\u2019t make as much money, since the ad network works well for folks. On the distribution side, we make it so that as we detect this stuff, it gets less distribution in News Feed. So now we just make it so that it\u2019s less worth it for them, so that they go and do other stuff, and we\u2019re seeing that that\u2019s working.<\/p>\n<p>The next category is these national security type issues. So that\u2019s the Russian election interference, and instead of treating them like spammers, you treat it as a security issue. In order to solve that, what we need to do is identify these bad actors. 
It\u2019s actually less about content, because some of the stuff would\u2019ve been legitimate speech had someone who is not a bad actor been doing it, but people are setting up these large networks of fake accounts, like the IRA had done, and what we need to do is track that really carefully in order to be able to remove it from Facebook entirely. What we\u2019re seeing is that the IRA and organizations like it are morphing into things like media organizations or sanctioned news organizations in Russia; when we investigate closely over time and are able to prove they are completely owned, controlled and operated by the IRA, we take that down and treat it as a security issue.<\/p>\n<p>The third category is about legitimate media. And there, I think there are a few different strategies. The first is doing more fact-checking. To your question, we recently launched our fact-checking initiative in Mexico specifically, leading up to the election; that\u2019s an important thing to do. We find that even though the fact-checkers aren\u2019t checking millions of things a day, we can show them the highest-volume things, and that can both be used as a useful signal in the product and help inform ranking, to flag to people if something is a hoax. But then even beyond that, for stuff that\u2019s not just broad hoaxes, there\u2019s still a big polarization issue, which is that often, even if what someone says isn\u2019t false, they\u2019re kind of cherry-picking facts to tell one side of the story, and the aggregate picture ends up not being true even if the specific facts within it might be. And there, the work that you need to do is about promoting broadly-trusted journalism: the folks who people across society trust to take in the full picture and do a fair and thorough job. 
That\u2019s the News Feed change we made there, and I think we\u2019ve gotten relatively good feedback from people using Facebook on that change and the quality of what they\u2019re seeing.<\/p>\n<p>So, those three streams: I think that if we can do a good job on each of those, we\u2019ll make a big dent in the problem. Not only for the Mexican election this year, but across the world, and that\u2019s basically the road-map that we\u2019re executing.<\/p>\n<p>Casey Newton, The Verge: With respect to some of the measures you\u2019re putting into place to protect election integrity and reduce fake news that you just talked about, how are you evaluating the effectiveness of the changes you\u2019re making, and how will you communicate regarding any wins and losses in the run-up to and the aftermath of the next election?<\/p>\n<p>Mark: One of the big things that we\u2019re working on now is a major transparency effort to be able to share the prevalence of different types of bad content. Right now, one of the big issues that we see \u2014 you know, a lot of the debate around things like fake news or hate speech happens through anecdotes. People see something that is bad, that shouldn\u2019t be allowed on the service, and they call us out on it. And frankly, they are right, it shouldn\u2019t be there, and we should do a better job of taking that down. What I think is missing from the debate today is the prevalence of the different categories of bad content, whether it\u2019s fake news and all the different kinds therein, hate speech, bullying, or terror content: all the things that I think we would all agree are bad and that we want to drive down. The most important thing there, though, is to make sure that the numbers we put out are accurate. We wouldn\u2019t be doing anyone a favor by putting out numbers, then coming back a quarter later and saying, hey, we messed this up. Part of the point of transparency is both to inform the public debate and to build trust. 
And if we have to go back and restate those because we got it wrong, then I think the calculation internally is that it\u2019s much better to take a little longer and make sure we\u2019re accurate than to put something out that might be wrong. I [believe] that\u2019s going to end up being the way we should be held accountable and measured by the public. I think it will help create more informed debates. And my hope over time is that the playbook and scorecard that we put out will also be followed by other internet platforms, so that there can be a standard measure across the industry on how to measure these important issues.<\/p>\n<p>Barbara Ortutay, AP: Hi. Thank you. So one of the things that you\u2019ve addressed recently is some of the ways malicious actors are misusing Facebook. So I\u2019m wondering what are you doing differently now to prevent things from happening, and not just respond after the fact? You know, will this be built into your new product launches, so that you have to think about possible [misuse] right away once the product is out?<\/p>\n<p>Mark: Yeah. I think going forward, a lot of the new product development has already internalized this perspective of the broader responsibility that we need to take to make sure our tools are used well. I can give you a few examples across different work that we\u2019re doing. But right now, if you take the election integrity work for example, in 2016 we were behind where we wanted to be. We had a more traditional view of the security threats. We expected Russia and other countries to try to do phishing and traditional kinds of security exploits, but not necessarily the kind of misinformation campaign that they ran. We were behind. That was a really big miss. So now we want to make sure that we\u2019re not behind again. 
As I mentioned in my opening remarks earlier \u2014 since then there was the French election, the German election, and, you know, last fall there was the Alabama special election, and we\u2019ve been proactively developing AI tools to detect trolls who are spreading fake news or foreign interference. In the French election and the Alabama election, we were able to take down thousands of fake accounts. So that\u2019s an example of proactive work we\u2019re doing to get ahead, which gives me confidence that on that kind of specific issue, around election integrity, we\u2019re making progress. It\u2019s not that there\u2019s no bad content out there. I don\u2019t want to ever promise that we\u2019re going to find everything or that we\u2019ve beaten the enemies into submission. They are still employed, they still have their jobs. We need to strengthen our systems. But across the different products and things we\u2019re building, I do think that we\u2019re starting to internalize a lot more that we have this broader responsibility.<\/p>\n<p>Last thing I\u2019ll say on this: I wish that I could snap my fingers and in three or six months have solved all of these issues. But I just think the reality is, given how complex Facebook is and how many systems there are, we need to rethink our relationship with people and our responsibility across every single part of what we do. I do think this is a multi-year effort. It doesn\u2019t mean it\u2019s not going to get better every month; I think it will continue to get better. I think part of the good news is that we really started ramping up on this a year ago or more. So we\u2019re not getting a cold start; we\u2019re probably a year into a massive three-year push. My hope is that by the end of this year, we\u2019ll have turned the corner on a lot of these issues and people will see that things are getting a lot better. 
But these are just big issues, and this is a big shift for us to take a lot more responsibility for how each of the tools is used, not just the developer platform, not just fake news, not just elections, but everything. And it\u2019s going to take some time. And we\u2019re committed to getting that right, and we\u2019re going to invest and keep on working until we do.<\/p>\n<p>Thank you all for joining today. What we announced today were some of the changes that we need to make. We\u2019re going to keep on looking for things, we\u2019re going to keep on finding more, and we\u2019ll update you as we do. Thanks for joining and talking to us about this. We look forward to keeping you updated on our progress.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Palo Alto, CA&#8230;Mark Zuckerberg spoke with members of the press about Facebook\u2019s efforts to better protect people\u2019s information. The following is a transcript of his remarks and the Q&#038;A that followed. Opening Remarks Hey everyone. Thanks for joining today. 
Before we get started today, I just want to take a moment to talk about what [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":44391,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_cbd_carousel_blocks":"[]","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[3,20,1],"tags":[],"class_list":["post-56515","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-business","category-featured","category-news","last_archivepost"],"jetpack_featured_media_url":"https:\/\/new.thepinetree.net\/wp-content\/uploads\/2017\/09\/facebook-logo.jpg","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/new.thepinetree.net\/index.php?rest_route=\/wp\/v2\/posts\/56515","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/new.thepinetree.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/new.thepinetree.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/new.thepinetree.net\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/new.thepinetree.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=56515"}],"version-history":[{"count":0,"href":"https:\/\/new.thepinetree.net\/index.php?rest_route=\/wp\/v2\/posts\/56515\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/new.thepinetree.net\/index.php?rest_route=\/wp\/v2\/media\/44391"}],"wp:attachment":[{"href":"https:\/\/new.thepinetree.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=56515"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/new.thepinetree.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&p
ost=56515"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/new.thepinetree.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=56515"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}