But the thing Facebook, sorry, Meta, says it is trying to build—a place to “get together with friends and family, work, learn, play, shop, create”—sounds a whole lot like a city. And like any city, the metaverse will need rules, boundaries, and infrastructure if it’s to have as many people as possible thrive and enjoy it.
It doesn’t take a Hobbesian scholar to know that when you’ve got millions of people gathered together in one place, even virtually, someone is going to need to be in charge.
Right now the leading candidate is Mark Zuckerberg. How would the Facebook—er, Meta—CEO govern? Here are some considerations for a Zuckerberg-run metaverse:
The ability for users to stay safe online and in person will be paramount—and based on Facebook’s track record when it comes to enabling or preventing violence and harm on its platforms, there’s reason for concern. The company’s platforms have contributed to everything from genocide in Myanmar to hate speech in Europe and an all-out insurrection in Washington.
The company has made it clear that it will draw the line at illegal activity, but it’s been willing to put profit over safety before. Will it take the same approach to policing the metaverse?
Any virtual space where people spend the majority of their time will need to offer the assurance of both physical and mental safety, and be able to respond to threats quickly, as police are supposed to do in the physical world. Meta will likely need an enforcement mechanism stronger and more nimble than a “report user” button to prevent individual denizens of the metaverse from doing harm.
During a launch event on Oct. 28, Nick Clegg, the company’s head of global affairs and communications, spoke briefly about safety and privacy. He emphasized the importance of transparency in data collection and offering “easy-to-use” safety and parental controls. Zuckerberg also addressed the issue, saying Meta is getting outside consulting on the products it is building, and plans to design “for safety and privacy and inclusion before the products even exist.”
Enforcement requires the establishment of rules, but those rules require adjudication. The old Facebook got a healthy taste of this when it had to form an oversight board (colloquially known as the “Facebook Supreme Court”) to make content and user-moderation decisions.
The 20-member board includes people from a range of national and professional backgrounds, all bringing a different perspective to questions of free speech and fairness. It was a move meant to make Facebook’s decision-making fairer and less despotic, as the board has the power to override even Zuckerberg. Since late 2020, the board has handed down a few meaty decisions, most notably upholding the suspension of then-US president Donald Trump’s account after the January 2021 attack on the US Capitol.
But in an expanded online world, many more thorny issues will arise—probably more than a 20-person board can handle. For example, can you get robbed (of cryptocurrency) in the metaverse? Can you get digitally murdered in the metaverse?
Surely cases will arise that call for a metaverse court system, but who provides oversight when the courts are created by a private company?
When so much of the world lives online, having an internet connection is a necessity. Facebook already is the internet for millions of people around the world, and Facebook’s messaging service WhatsApp is a vital line of communication for millions more, a fact that was put into stark relief during a site outage on Oct. 4 that affected more than 2 billion people.
If Meta and its products become even more interwoven with every aspect of life, it’s worth asking if the company has some responsibility to provide the physical infrastructure for those things as a basic utility, not a consumer commodity.
There is a precedent for this. In 2015, Facebook launched its “Free Basics” program, working with telecommunications companies across the developing world to provide internet connections exclusively through Facebook products, at little or no cost to users. Net-neutrality activists, however, have criticized the program as a form of “digital colonialism.”
In a digital-first metaverse, even having an internet connection may not be enough. Perhaps the tools one would need to navigate the metaverse—a smartphone, a tablet, a virtual-reality headset—should be treated like essential infrastructure, just like roads and bridges, created and managed by a higher authority, and paid for by people through a common pool.
No matter what direction Meta takes its business, it’s unlikely to control the entire metaverse, just like Facebook today doesn’t control the entire internet. But as one of the biggest and earliest players, the company is likely to set the terms of engagement.
We’ve already seen the negative consequences that can arise when a tech company (specifically, this tech company) amasses power but refuses to take on the responsibilities of governance. For his part, Zuckerberg seems to be taking things a bit more seriously this time. In his “founder’s letter” announcing the new company, he says that safety and security “must be built into the metaverse from day one.” He also mentions the need for “new forms of governance,” without offering any details.
But it is the details that become constitutions, city codes, bylaws. Deciding, in detail, how things should be run is the everyday boring work of governing. It remains to be seen whether the presumed mayor of the metaverse is up to the task.