Yesterday Facebook CEO Mark Zuckerberg announced that his much-battered social media platform will institute new verification requirements for popular pages and for anyone wishing to run political advertisements.
This comes on the heels of public outrage over Facebook's failure to shut down Russian bot accounts during the 2016 election, and public ire following the revelation that user data was obtained and used, in some cases illegally, by the political consulting firm Cambridge Analytica.
Facebook is not culpable in the Cambridge Analytica incident, at least not to the degree most people believe. While the prospect of the network failing to guard user data from theft is troubling, and responsibility for that does fall squarely on the shoulders of Facebook's security team, the sale of data that users uploaded of their own volition to third-party advertisers is another story. Data mining is a long-running practice that is ethically questionable but entirely legal. Facebook should not be held to account for individual users' failure to exercise discretion over their own online activities.
The network’s culpability in the spread of misinformation during the 2016 election is also dubious. If Facebook is simply a platform, a kind of digital public square where users exercise freedom of speech and association just as they might in a physical forum for public debate, then it shouldn’t be its responsibility to monitor and remove anything it considers fishy. Of course, the question of what exactly Facebook is, whether it’s simply a neutral platform or a more proactive actor with a duty to enforce certain social responsibilities, remains unsettled. Certainly, the government, at least in the wake of the 2016 election, has strong opinions that favor the latter view. A cynical mind might question exactly what jurisdiction Congress has over questions surrounding the moral, not legal, responsibility of social media networks.
Yes, social media has become one of the primary means of disseminating information, but services like Facebook remain private businesses, and what goes on within those platforms is outside the purview of Congressional power. The government cannot regulate Facebook without seriously limiting the First Amendment. Besides, protectionism is insulting to the intelligence of the average voter. The desire to ban those who spread misinformation presupposes that individuals are swayed by the rhetoric of media provocateurs, that voters lack either the ability to recognize that some information is dubious or the will to check sourcing for themselves. Then there is the more philosophical question of how one can ascertain what is right and wrong without access to all available information. The American polity finds itself at a crossroads. It must ask itself whether it’s comfortable with anyone, government agency or private company alike, having a monopoly on power when it comes to determining what are “true facts” suitable for public consumption and what is dangerously seductive propaganda.
The question of self-censorship, represented by actions such as Facebook’s new verification requirements, is not necessarily a positive either. Yes, companies changing their policies in response to user outrage is generally a good thing. It’s the invisible hand of the free market at work. But the current environment is one where there’s an implicit threat from government. As Congress drags social media executives before investigative committees and demands an accounting of their default on a responsibility whose very existence is questionable (and which, even supposing it did exist, would be outside the bounds of government to enforce, falling as it does within the realm of private transactions), it is exerting its power. There’s an implicit threat inherent in the very ability of Congress to compel individuals to appear and testify.
This sends a message: Change your ways or we’ll change them for you. Any changes to policy made by Facebook, then, cannot be considered truly volitional.
But this is not to paint Facebook as a helpless victim of government power lust. The policies it is pursuing do not rebel against centralized control justified by protectionist instincts; they cement it. And this is no more moral when done by a private company than by a government agency. Certainly, a private company has a right to set its own policies. But when those policies run afoul of its fundamental purpose, they ought to be decried, not championed.
Cracking down on illicit accounts is not simply a matter of verifying that people are who they say they are; there’s an overtone of bias control.
The Hill quotes Zuckerberg explaining the rationale behind Facebook’s new verification policy:
“This will make it much harder for people to run pages using fake accounts, or to grow virally and spread misinformation or divisive content that way,” Zuckerberg explained.
Misinformation is an interesting term, because there’s an element of context to it. Misinformation is not an out-and-out falsehood but something that, in the wrong context, might lead to a wrong conclusion. Divisive content is equally troubling, if not more so, because divisiveness is, again, a product of context. What one person considers a divisive opinion might be another’s fundamental belief. In neither case are these terms connected absolutely to outcomes where the outcome is objectively harmful to all: they are context dependent. Yet, the rule itself will be absolutely applied. And it will be applied by Facebook’s administrators, meaning their own opinions about what is considered “misinformation” or “divisive” will prevail. Whether users will have the ability to appeal, and on what grounds, is unclear.
Now, of course, as a private corporation, Facebook has every right to pursue whatever policies it deems fit. Even if it desires to actively discriminate against one viewpoint or another on the grounds that it is somehow inflammatory or damaging, it has a right to do so. But Facebook needs to be honest about the effects of this and not hide behind the supposedly laudable guise of promoting objective, fact-driven discourse. These policies are rooted in viewpoint discrimination and will inevitably promote one set of values over another.
Zuckerberg is not alone in his desire to weed out divisiveness; this is a favorite platitude of politicians, particularly those who make political hay by appearing on self-serious nightly news shows and painting themselves as the sole, sober voice of rationality in Washington. Of course, their disdain for divisiveness rarely prevents them from lobbing accusations against their political opponents.
Divisiveness, though, is not a negative. Humanity is inherently divided by the nature of its being. The individualistic lens of life is inescapable. People are unique in their backgrounds, in their beliefs, in their temperaments and in their desires. Friction is inevitable wherever diversity is present. It is a mistake to cast this as a negative. Diversity of vision and opinion has built civilization: some of history’s greatest advances emerged from its most bitter feuds. Think Galileo. The Founders.
America’s political foundation has divisiveness as a cornerstone. The revolutionary idea that man, by nature of his individual being, possesses inherent rights that cannot be taken away even by an act of law that carries with it the endorsement of an overwhelming majority, champions the sovereignty of the individual. It recognizes innate divisions within man and places fundamental respect for difference at the core of the nation’s official institutions and weaves this same sentiment into culture.
If Facebook, as a private corporation, desires to promote specific values, that is, of course, its right. But it would be gratifying to see Mark Zuckerberg pay some deference to the foundational ideas of the nation that made his achievement possible. It is because America is a nation that respects difference, that breeds tolerance not necessarily for the content of disparate opinions but for each individual’s right to think and act on those disparate opinions, that Zuckerberg had the freedom to build Facebook. If he is going to market his creation as a platform for expression, it ought to preserve the ideas that made its inception possible and extend others the opportunity to do the same.
If Facebook wants to become more than a social media platform, to become an active agent in molding a society that upholds the values of its executors, this too is within its prerogative. But, if this is the case, Facebook becomes a vastly different animal. It is no longer a neutral tool for communication, but a lobbying platform.
The social media site may institute whatever policies it sees fit in order to promote a discourse it deems in the best interest of the users it serves, but it should not expect to do so in the face of a quiescent public: those whose viewpoints are being discriminated against have every right to fight back against the categorization of their thought as inherently biased. They must do so not out of a desire for equal access to a private platform (though, when that platform gains a near monopoly on information, they certainly have an interest in lobbying for equal access) but because the issue of free speech transcends any one means of communication. If certain perspectives are painted in broad strokes as undesirable by the protectionist policies of Facebook, there is a danger that the stigma attached to them there will carry throughout society. It is this that Facebook, which suddenly seems to have developed a deep-seated conscientiousness about its social responsibility, must be wary of in any policy that looks to regulate the content of speech.