The Reckoning Is Coming: Regulating Big Tech

(Image: Shutterstock.com)

Cambridge Analytica.

Russian hackers and election meddling.

The Equifax data breach.

Fake news. Twitter and Instagram harassment.

Facebook mining our personal data and—best-case scenario—unabashedly using it to sell us stuff.

What’s a society to do? Ours has begun clamoring for boycotts and regulation, even for breaking up the biggest tech giants.

For a decade (or two), the tech industry, led by the largest, most successful companies, has painted attempts to regulate it as stifling innovation: an impediment to the new, utopian “tech will solve everything” system these benevolent founders seek to build.

Maybe that’s true, but considering the aforementioned abuses, the “Don’t be evil” edict seems to hold less water, and #deletefacebook might finally be having its moment.

Presidential candidates have made trust-busting a part of their platforms.

Europe and California have instituted legislation designed to allow citizens greater control over their personal data and how it’s used.

Other states are following suit, buoyed by bipartisan support.

It feels like major tech regulation is coming, but whether it’s a culmination of decades of regulatory decisions or just a step on the path is unclear. 

'Free' Isn't Free

You probably know some of the basics of how internet advertising targets its viewers.

Sometimes, ads might seem a little too relevant, leading you to wonder whether your phone is listening to your conversations.

You feel uneasy about it, even as you admit that you’d rather see ads for stuff you like than for something completely uninteresting to you.

From the advertisers’ perspective, it’s much more efficient to target just a few people and make sure those people see their ads rather than waste time and money putting ads in front of people who don’t need or care about what they’re selling.

The companies that do this can even track whether a user who has seen a particular ad then visits the store in question. 
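The two steps described above can be sketched in a few lines of Python. This is a toy illustration only, not any real ad platform's code; the user records, keyword matching, and visitor list are all invented for the example.

```python
# Toy sketch of ad targeting and attribution (all data hypothetical):
# 1) show the ad only to users whose interests match it,
# 2) check which of those users later visited the store.

def target_audience(users, ad_keywords):
    """Return only the users whose interests overlap the ad's keywords."""
    return [u for u in users if set(u["interests"]) & set(ad_keywords)]

def attribute_visits(targeted, store_visitors):
    """Of the users who were shown the ad, which later visited the store?"""
    shown = {u["id"] for u in targeted}
    return sorted(shown & set(store_visitors))

users = [
    {"id": 1, "interests": ["running", "hiking"]},
    {"id": 2, "interests": ["cooking"]},
    {"id": 3, "interests": ["running", "cycling"]},
]

targeted = target_audience(users, ["running"])   # users 1 and 3 see the ad
converted = attribute_visits(targeted, [3, 2])   # only user 3 saw it AND visited
```

Real systems layer machine-learned relevance scores and auction pricing on top, but the targeting-then-attribution loop is the core of the business.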

We’ve settled into a “freemium” model: In exchange for our data, we get to use free services, including email and social media.

This is how companies such as Facebook make money and still provide us with the services we enjoy (although research has shown that spending more time on Facebook makes you less happy, rather than more). 

(Image: Ink Drop/Shutterstock.com)

But there’s more than one reason to be concerned about letting our personal data be sucked up by tech companies.

There are many ways the wholesale gathering of data is being abused or could be abused, from blackmail to targeted harassment to political lies and election meddling.

It reinforces monopolies and has led to discrimination and exclusion, according to a 2020 report from the Norwegian Consumer Council.

At its worst, it disrupts the integrity of the democratic process (more on this later). 

Increasingly, private data collection is described in terms of human rights—your thoughts and opinions and ideas are your own, and so is any data that describes them.

Therefore, collection of it without your consent is theft.

There’s also the security of all this data and the risk to consumers (and the general public) when a company slips up and some entity—hackers, Russia, China—gets access to it.

“You’ve certainly had a lot of political chaos in the US and elsewhere, coinciding with the tech industry finally falling back to Earth and no longer getting a pass from our general skepticism of big companies,” says Mitch Stoltz, a senior staff attorney at the Electronic Frontier Foundation.

“If so many people weren’t getting the majority of their information about the world from Facebook, then Facebook’s policies about political advertising (or most anything else) wouldn’t feel like life and death.”

Policy suggestions include the Honest Ads Act, first introduced in 2017 by Senators Mark Warner and Amy Klobuchar, which would require online political ads to carry information about who paid for them and whom they targeted, similar to how political advertising works on TV and radio.

This was in part a response to the Facebook-Cambridge Analytica scandal of 2016.

Cambridge Analytica Blows Up 

It’s easy to beat up on Facebook.

It’s not the only social network with questionable data-collection policies, but it is the biggest.

Facebook lets you build a personal profile, connect that profile to others, and communicate via messages, posts, and responses to others’ posts, photos, and videos.

It’s free to use, and the company makes its money by selling ads, which you see as you browse your pages.

What could go wrong? 

In 2013, a researcher named Aleksandr Kogan developed an app version of a personality quiz called “thisisyourdigitallife” and started sharing it on Facebook.

He’d pay users to take the test, ostensibly for the purposes of psychological research.

This was acceptable under Facebook policy at the time.

What wasn’t acceptable (according to Facebook, although it may have given its tacit approval, according to whistleblowers in the documentary The Great Hack) was that the quiz didn’t just record your answers—it also scraped all your data, including your likes, posts, and even private messages.

Worse, it collected data from all your Facebook friends, whether or not they took the quiz.

At best guess, the profiles of 87 million people were harvested. 

Zuckerberg on Capitol Hill, April 2018 (Photo by Yasin Ozturk/Anadolu Agency/Getty Images)

Kogan was a researcher at Cambridge University, as well as St. Petersburg State University, but he shared that data with Cambridge Analytica.

The company used the data to create robust psychological profiles of people and target some of them with political ads that were most likely to influence them.

Steve Bannon, who was Cambridge Analytica’s vice president, brought this technique and data to the Trump 2016 campaign, which leveraged it to sway swing voters, often on the back of dubious or inflammatory information.

A similar tactic was employed by the company in the 2016 “Brexit” referendum. 

In 2017, data consultant and Cambridge Analytica employee Christopher Wylie blew the whistle on the company.

This set off a chain of events that would land Facebook in the hot seat and Mark Zuckerberg in front of the Senate Commerce and Judiciary Committees.

Giving this the best possible spin, it’s a newer, better version of what President Obama’s campaign did, leveraging clever social-media techniques and new technology to build a smoother, more effective, occasionally underhanded but not outright illegal or immoral political-advertising industry, which everyone would be using soon. 

A darker interpretation: It’s “weaponized data,” as the whistleblowers have called it; psyops that use information-warfare techniques borrowed from institutions like the Department of Defense to leverage our information against us, corrupting our democratic process to the point that we can’t even tell if we’re voting for (or against) something because we believe it or because a data-fueled AI knew just what psychological lever to push.

Even applied to advertisements, this is scary.

Did I buy a particular product because its manufacturer knew just how and when to make me want it? Which decisions that we make are our own?

“You might say ‘Well, what happened before the last election—that was pretty darn malicious,’” says Vasant Dhar, a professor of data science at the NYU Stern School of Business.

“Some people might say, ‘I don’t know—that wasn’t that malicious, there’s nothing wrong with using social media for influence; and besides, there’s no smoking gun, there’s no proof that it actually did anything.’ And that’s a reasonable position too.”

The irony is that Facebook was sold to its early users as a privacy-forward service.

You might remember how MySpace faded into oblivion after Facebook arrived.

That wasn’t an accident; Facebook intentionally painted itself as an alternative to the wide-open world of MySpace. 

Zuckerberg and co-founder Chris Hughes in 2004.

(Photo by Rick Friedman/Corbis via Getty Images)

At this time, “privacy was … a crucial form of competition,” researcher Dina Srinivasan, a Fellow at the Thurman Arnold Project at Yale University, wrote in her Berkeley Business Law Journal paper, "The Antitrust Case Against Facebook." Since social media was free, and no company had a stranglehold on the market, the promise of privacy was an important differentiation.

You needed a .edu email address to sign up for Facebook, and only your friends could see what you were saying.

Facebook made this promise initially: “We do not and will not use cookies to collect private information from any user.” In contrast, MySpace had a policy in which anyone could see anyone else’s profile.

Users, deciding they favored privacy, decamped en masse.

How Things Went Wonky

(Image: Daniel Chetroni/Shutterstock.com)

Later, as Facebook gathered market share—outlasting, outcompeting, or just buying other services—it tried to roll back some of those privacy promises.

In 2007, the company released Beacon, which tracked Facebook users while they visited other sites.

And in 2010, it introduced the “Like” button, which enabled the company to track users (whether or not they clicked on the button) on pages where it was installed. 
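The mechanics behind that kind of tracking are simple: any page embedding the button loads a resource from the platform's servers, and that request carries both the page being read (via the Referer header) and the user's identifying cookie. The hypothetical sketch below shows what the receiving server can log; the header names are real HTTP conventions, but the function and data are illustrative only.

```python
# Hypothetical sketch of third-party "button" tracking: every page embedding
# the widget triggers a request to the tracker's server, which can tie the
# visited page (Referer header) to a persistent user cookie.

browsing_log = []

def serve_like_button(request_headers):
    """Log which user (cookie) viewed which page (referrer), then serve the widget."""
    browsing_log.append({
        "user": request_headers.get("Cookie", "anonymous"),
        "page": request_headers.get("Referer", "unknown"),
    })
    # The user never has to click: the log entry exists as soon as the page loads.
    return "<button>Like</button>"

# Two ordinary page views by the same user build a cross-site browsing history:
serve_like_button({"Cookie": "uid=42", "Referer": "https://news.example/story1"})
serve_like_button({"Cookie": "uid=42", "Referer": "https://shop.example/shoes"})
```

Aggregated across millions of embedding sites, those log entries become a detailed record of where each user goes on the web.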

By 2014, after buying Instagram and with a record-setting IPO under its belt, Facebook announced publicly that it would be using code on third-party websites to track and surveil people—thus reneging on the promise it had used to establish market dominance in the first place.

In 2017, Facebook paid a $122 million fine in Europe for violating a promise it made not to share WhatsApp data with the rest of the company, which it then did. 

In 2019, the FTC announced a $5 billion settlement with Facebook for a variety of privacy violations, including Cambridge Analytica and lying about its facial-recognition software.

And in January of this year, Facebook said it would not limit political ads, even false ones.

And it won’t fact-check ads or prevent them from targeting particular groups, which is precisely what happened with Cambridge Analytica.

Currently, the company is facing intense criticism over its proposed cryptocurrency, Libra.

(Image: vchal/Shutterstock.com)

To scholars like Srinivasan, this is a classic example of a monopoly leveraging its power to make more money at the expense of consumers—not a fiscal expense, since the service is free, but by delivering a worse product; in this case, a product offering less privacy.

Market share in social media doesn’t work quite like it does in other industries: The network effect creates a positive feedback loop where, as a site gathers users, it becomes more attractive because of those users, making it particularly hard for a competitor to gain traction.
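That feedback loop is easy to see in a toy simulation. Assume, Metcalfe-style, that a network's appeal grows with the square of its user count; the numbers below are invented, not a real market model, but they show how an early lead compounds.

```python
# Toy simulation of the network effect (invented parameters): each platform's
# appeal grows superlinearly with its user count, so the bigger platform
# captures a growing share of every new cohort of users.

def step(a, b, new_users=100.0):
    """Split a cohort of new users by each platform's Metcalfe-style value."""
    value_a, value_b = a * a, b * b          # value ~ (number of users)^2
    total = value_a + value_b
    return (a + new_users * value_a / total,
            b + new_users * value_b / total)

a, b = 60.0, 40.0                            # platform A starts with a 60% share
for _ in range(10):
    a, b = step(a, b)
# After ten cohorts, A's share is well above its starting 60%: the lead compounds.
```

Under these assumptions the smaller platform never catches up, which is why a challenger with a better product can still fail to gain traction.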

While a company’s size isn’t an indication that it has abused its power, we put up with privacy invasions from Facebook because we don’t have alternatives.

“I want to be a subscriber to a social network, like Facebook, which has more people,” says Nicholas Economides, a professor of economics at the NYU Stern School of Business.

“Big size is rewarded.

If some company manages to really [gain] big, big market share, like Facebook, or Google in its own area, then it gets big benefits.

Consumers really like to be with them.

That means they have abilities to control the market.”

At this point, Facebook had so much of the market that third parties such as news sites couldn’t very well uninstall their Like buttons—they needed them to drive traffic. 

Big Tech’s Version of Monopolies

Bill Gates and Steve Ballmer in 2000 (DAN LEVINE/AFP via Getty Images)

Now that we’re talking about monopolies, it’s time to bring in Microsoft.

In 1995, sensing that controlling how people moved across the internet might be even more valuable than the operating systems it already installed on everybody’s computers, Microsoft bundled the Internet Explorer browser into its Windows OS, thus making sure that every computer came with a ready-to-go default browser—Microsoft’s own. 

The Department of Justice sued Microsoft, and after a long trial and lots of testimony, a judge ruled that Microsoft be broken up into one part that runs the Windows operating system and another part that does everything else.

An appeals court later reduced the penalty, but weakening Microsoft paved the way for a period of technological innovation that gave us Google, Facebook, Amazon, and a renewed Apple.

Many economists say that this was the last major antitrust action. 

In the 1980s or so, an economic theory known as the Chicago School began to gain favor among lawmakers and judges.

It takes a laissez-faire approach to antitrust law, limiting the definition of harm to consumers to price increases and claiming the market will sort everything else out.

When the price of your social media network, email system, or video hosting is free, it’s near impossible to bring an antitrust suit under this theory.

But we need to stop thinking about the users as the customers, according to NYU’s Dhar.

“Customers are the people paying them, and users aren’t paying them,” he says.

“The users are just supplying them the data that they’re using for the advertising.”

“The tech industry confounds a lot of the antitrust orthodoxy that is applied in the courts and the government enforcement agencies … because competition works differently,” says the EFF’s Stoltz.

“Instead of having multiple similar products competing, you have different products, but they compete with one another for access to data, for customer loyalty, and for venture capital.”

In spite of this, states are beginning to take action.

A coalition of 50 attorneys general, led by Ken Paxton of Texas, has announced an investigation into Google over its dominance in advertising and how it uses data to maintain that dominance, and others have begun pursuing Facebook over allegations of anti-competitive advertising rates and product quality.

The House Judiciary Committee and Antitrust Subcommittee have been hearing arguments about the role of Amazon, Google, Facebook, and Apple to decide whether the companies have abused their market power.

And politicians at the national level, particularly during candidacy, have threatened specific actions, including splitting Instagram from Facebook. 

To some degree, this is self-interest, says NYU’s Economides.

Facebook’s News Feed and Google News reach a large enough portion of Americans that those platforms can have a big impact on what we see, intentionally or not.

Most people probably won’t scroll past their first page of results after a search, so what bubbles to the top (and what doesn’t) is hugely important....

