States Gear Up to Limit Use of Biometrics and Biological Data

This may be the year when the limitation of biometric capture goes national. Right now, companies using biometrics are driven by one state law, but other states could soon join.

As limits on biometrics cascade forth from Illinois in private cases based on the state’s Biometric Information Privacy Act (BIPA), other state legislatures have decided to place limits on the capture and use of biometric information. The private right of action and statutory damages offered by BIPA have made Illinois the experimental lab where U.S. companies learn what counts as a biometric program and what the limits on that program may be. Illinois may soon have company.

New York’s legislature is considering restrictions on consumer biometrics this term, and the proposed act looks like Illinois’ BIPA, requiring written notice before a biometric identifier is captured, notice of how the identifier will be used and disposed of, and written permission from the subject. It also contains a broadly worded “thou shalt not profit from anyone’s biometric identifier” prohibition that could eviscerate the entire biometric technology industry if interpreted expansively. The disclosure prohibitions are also surprisingly broad and could create liability for simply using a biometric tech processor. The legislation also contains a private right of action and statutory damages that seem lifted straight from BIPA.

New York has also proposed a less comprehensive bill that would restrict companies from using biometric information in marketing. The driver for this particular bill is unclear: is the legislature more concerned about a company marketing only to people grouped by biometric data – selling to people with brown eyes or with single whorls in their thumbprints – or about the manipulation of serving ads that use your own voice or earlobes to sell material? Either way, someone was concerned enough to draft an act for consideration.

Legislation proposed in Maryland regulates biometric identifiers and requires companies capturing such information to publish a written retention policy that establishes “guidelines for permanently destroying biometric identifiers and biometric information on the earlier of” three years after collection or satisfaction of the initial purpose for obtaining the identifiers. The Maryland act, like the New York and Illinois acts, includes the same private right of action and statutory damages clauses.

Virginia has proposed a bill directed primarily at employers who choose to use biometric tools with their employees. The bill requires written informed consent from an employee before the capture and storage of biometric data. It would also restrict employers from profiting from the biometric data of their workers.

South Carolina’s entry into this race is a consumer protection act with very broad definitions of personal information and biometric information. The bill is almost a “CCPA for biometrics,” addressing consumer rights to prevent the sale of biometric data, protections for children, and a prohibition on discrimination against consumers who protect their biometric data. The act seems to anticipate a future world where companies use biometric data in more expansive ways than most of what I have seen, which is primarily biometric use for identification, authentication, or other security purposes. There is some voice stress analysis in use, but the target of this bill seems anticipatory rather than reactionary.

California’s legislature passed one of the most thoughtful and constructive biometric laws last year, but Governor Gavin Newsom vetoed it. A similar law has been introduced in this year’s legislative session. The vetoed Genetic Information Privacy Act (GIPA) would have placed limits on what companies could do with DNA information gathered from California residents, addressing a major privacy loophole in the DNA entertainment industry.

In the U.S., HIPAA protects biological information that a person gives to a doctor, hospital, or pharmacist to assist in medical treatment, so DNA provided for this purpose is covered by federal privacy protections. However, millions of people have decided to swab themselves and hand this DNA data – the core information describing a person’s physical being – to unregulated private companies that reserve the right to use it for all kinds of purposes. Some of these recreational DNA mills provide data to law enforcement and some to the pharma industry, and at least one was recently bought by big private equity firms looking to expand the range of what can be done with volunteered DNA. This is a significant privacy problem, in part because most people who swab themselves for the benefit of these private companies are unaware of the risks and likely exposure of their biological information.

The newly introduced California law, like GIPA, would require direct-to-consumer genetic testing companies to honor a consumer’s revocation of consent to use a DNA sample and to destroy the biological sample within 30 days of revocation. It would also provide consumers access to their genetic data. The law would not provide a private right of action but could be enforced by state or local officials. It may be written to overcome Gov. Newsom’s objections, which he said related to restricting COVID-fighting efforts.

These legislative proposals may or may not be passed into law. In any case, it is clear that business use of biometrics for consumer, marketing, and employment purposes has sparked the imagination of state legislatures, and we are likely to see more action on biometrics for years to come.

Should CDA Section 230 Be Changed?

In the current environment of reckoning for the societal power of Big Tech, one threat seems ever-present on the tongues of those who would cut these companies down to size. Enacting this threat is likely to have the opposite of the effect many people intend, but it is still worth considering as an answer to problems in the way the digital world affects society.

Lawmakers and regulators have threatened to transform the way social media operates on the internet by revoking a law that protects internet hosting companies from liability for third-party content posted on those companies’ sites. The law, known as Section 230 of the Communications Decency Act (CDA), has shaped the way the internet treats people’s posts for nearly a quarter of a century. Changing it would likely bring unintended consequences.

Both US political parties, when evincing concern about the size and power of digital social media companies, claim that the protection from lawsuits afforded by the CDA should be abolished. Both President Biden and former President Trump have advocated for its revocation. Some see this as a simple way to punish Facebook, Google, and Twitter for specific disfavored behavior. But doing so would be substantially more far-reaching than anticipated, with effects on every company allowing third-party comments or content on its website. That may not be a bad thing.

In its well-crafted guide to CDA Section 230, The Verge explains that the 1996 provision says an interactive computer service cannot be treated as the publisher of third-party content, thus protecting websites from many types of lawsuits, including defamation. “Sen. Ron Wyden (D-OR) and Rep. Chris Cox (R-CA) crafted Section 230 so website owners could moderate sites without worrying about legal liability. The law is particularly vital for social media networks, but it covers many sites and services, including news outlets with comment sections . . .” The Electronic Frontier Foundation calls it “the most important law protecting internet speech.”

Do you think Facebook is more like a telephone company or a newspaper? It has aspects of each. The U.S. has a long history of tightly regulating, for the good of the general public, industries it considers to be utilities. Internet companies have tended to fight the characterization of the internet, or of services provided there, as a utility. The LA Times and academic sources have recently argued that the internet should be considered a utility, and the pandemic experience of working and learning at home has made the argument stronger. Revoking Section 230 would be a quick method of forcing more public responsiveness from internet companies.

The arguments in favor of eliminating Section 230 protections for social media contain substantial hyperbole and obvious false equivalencies. Some people, shrieking about tyranny and Constitutional issues, say this needs to happen because social media companies exercised too much power by blocking Trump’s accounts.

Let’s not pretend this is a First Amendment question or that our country “has become China” when the exact opposite is what brought this crisis to a head. The First Amendment protects speech against state action. The state is not acting here, so the First Amendment is not at issue. The state – and the President in this instance – can say anything publicly, and those thoughts will be noted, published, and disseminated over dozens of channels and outlets. As the former president and still the presumptive head of his party, Trump will continue to draw the press’s attention. Private companies can decide which content to allow on the platforms they pay for. Television news is not required to publish all of Joe Biden’s musings, and Twitter shouldn’t be required to publish all of Trump’s musings. It has been interesting to watch a group of Americans who insisted that U.S. businesses be allowed to operate with minimal regulation simultaneously demand deeper regulation of American companies in this particular circumstance.

It is also absurd to claim that private companies censoring the harmful lies and deadly provocations of a U.S. President is somehow authoritarian. In authoritarian regimes like Russia, North Korea, or China, the word of the Dear Leader is not censored; dissenters are censored. In the present social media controversy, the most powerful person in the world is being de-platformed by private companies for leading a terrorist movement that kills people while attempting to destroy proven democratic outcomes. This is the opposite of totalitarian state action – private actions taken in defiance of an attempted totalitarian takeover.[1] One of the world’s most powerful business executives, Jack Ma, may have just emerged from self-imposed exile or government censure related to the digital platforms he controls. A government and its leaders hold a monopoly on physical and financial force. China and other totalitarian states threaten businesses that question them; businesses don’t censor the government.

Also keep in mind that any company hosting comments online, from MSNBC to Fox News, from TMZ to the New York Times, and everyone in between, would be affected if Section 230 were removed as a defense to lawsuits. WordPress and other companies hosting bloggers would be affected. Even companies like AT&T and Verizon, which may provide technical hosting to the entities offering online opinion spaces, could be affected and may need to change business models. This is not simply a direct attack on Facebook and Twitter with no collateral damage.

Now that we know how deeply some people can be manipulated and twisted by online content, maybe we have reached a point where we should require the hosts of that content to be more responsive to how they are affecting our society and more accountable for cleaning up the garbage. We worship free speech in this country, but the time may have arrived for us to be more aggressive about removing harmful lies and hate speech from our digital multilogue. We already moderate content, so this would be a change in degree, not a fundamental change in kind.

One way to encourage this is to remove Section 230 protections from internet service providers and social media companies. The people calling loudest for Section 230’s revocation may be the ones who scream loudest at the clear effects of that revocation – an internet where lies and irresponsible provocations invite lawsuits and are therefore policed much more severely than they have been. We saw, for example, how quickly online sex offers dried up and disappeared from places like Craigslist when the possibility of host liability for the ads became an issue.

Section 230 of the CDA has performed its desired function – it showed us what an internet free market of ideas could be. But now that we know the downside of being swamped in toxic commercial and political manipulations, maybe we should open this market to the American legal system, encouraging it to be more carefully managed for the protection of the most vulnerable. Section 230 may have lived its useful life and be ready for retirement.

[1] This is also a textbook strategy for addressing the leaders of terrorist organizations like Al Qaeda. De-platforming the leadership can start to defuse the effect of inciting hate and lies.

The New Age of Content Moderation(?)

The huge search and social media platforms of the internet are reaching an inflection point. For decades they have been able to deflect attention from their role as content providers. The issue is now front and center in our national debate and the long-reigning status quo is likely to settle into a new mode of operation, if not consensus.

Current political polarization and the murderous result of an ocean of bald-faced, unsupported, easily refuted lies have turned an otherwise dry topic – where do big companies draw the line when deciding which third-party information to host on their systems – into a crisis for our democracy. We have finally reached the juncture where the promulgation and repetition of lies created such an obvious and attributable result that we can’t ignore the causes. As in the production of sauerkraut, failure to scrape all the scum off the top can render the entire concoction sickeningly poisonous.

Make no mistake, the large search and social media companies have always moderated content. The easiest place to see this is their censoring of heavily sexualized content. Google and Bing’s search algorithms and the Facebook/Instagram rules restrict pornography and other content they believe many people would find objectionable or inappropriate for children. If they didn’t, their systems would be swamped by sex advertisements and solicitations for the deeper debasements of the human id. How do I know this? For one, Google and Facebook both tell us so. For another, I was on the content restriction team at CompuServe – a digital media company that contracted third parties for content and provided digital spaces for people to congregate. I saw firsthand that if the sex isn’t moderated or cut completely, demand for it will overwhelm the rest of the content. People’s desires may be uncomfortable to discuss, but they are predictable.

In fact, the kinds of statements inciting violence that were recently banned have violated Twitter rules for ages, and Twitter has previously dropped accounts that advocate hatred and violence. However, the stated community standards have not been followed consistently, and social media sites have financial incentives to prioritize controversial and incendiary content on their services – experience has demonstrated that people will spend more time and energy on the sites when those people are angry or upset.

Manipulating people’s emotions and polarizing populations has been great business for Facebook and Twitter, and it was only in the aftermath of the obvious Russian manipulations leading up to the 2016 elections that a significant percentage of the U.S. general public considered calling social media companies to account for the results of their policies of incitement. If you remember Mark Zuckerberg’s reaction at the time, he seemed genuinely surprised that his networking company, started so college students could share thoughts with each other, could sway elections and be manipulated at serious social cost, though he later acknowledged the naivety.

So the coming changes in content moderation are a matter of prioritizing social responsibility over the platforms’ economic interest in polarization and emotional manipulation. There have always been rules here, and the social media companies have always given lip service to enforcing community standards, but now the community may coerce the companies into taking those standards – and their place in our society – seriously. Facebook has acknowledged that its platform has been used to incite and encourage violence in some instances.

Europe, with different laws and social priorities toward freedom of speech, started this discussion in earnest with the big tech companies.  When Google executives were criminally charged in Italy for not removing illegal content and a CompuServe executive was arrested in Germany for allowing illegal goods to be sold online, U.S. companies learned that they needed to consider the differing community standards of the countries where their customers resided. Instagram and Twitter operate with greater content limitations in majority Islamic countries and other more restrictive societies. They made these adjustments overseas, so why can’t they make appropriate adjustments to their content moderation in the U.S. and Canada? Google has made content accommodations for the “Right to be Forgotten” in the European Union, so we know that such moves are possible to meet the standards of important communities.

We now know that encouraging conflict and distress creates not only micro-scale problems for individuals – bullied teens, victimized women, sufferers of depression and anxiety – but also macro-scale problems for our society as a whole. So we need a responsible discussion of how digital content management can be adjusted for the benefit of our communities, even if the adjustments harm the profits of big tech.

It is time for a reckoning with the power and incentives for digital content control, but this should not be driven by the grievance of one political party or the other.  It should be driven by a desire to promote the best in our society while reducing manipulation, division, and hate.

Why Big Tech Wants Your Body

Your body may be a wonderland or a wasteland, but it is a goldmine of data. Collectors of information have noticed.

In our midwinter exploration of the economic and legal foundations of data regulation, we next turn to a natural tool for personal identification, for ongoing transactions (like breathing, walking, and heartbeats), and for categorization – your body. Big tech wants your body, and apparently, we are willing to offer our bodily data upon the altar of big tech.

Regulators know this and are beginning to address biometrics – the measurements taken from our physical presence – in law. States like Illinois, Texas, and Washington have laws requiring the data subject’s permission for the capture and use of certain biometric indicators. The European Union classifies biometric information as sensitive data, and companies and governments can be fined for capturing this data outside the strict rules.

We have all read about privacy concerns with fitness technology: locations of secret military bases revealed by the public display of soldiers’ running and exercise routes on fitness tracking apps; divorce lawyers and suspicious lovers using fitness tracking data to find cheating spouses. (NFL Network correspondent Jane Slater discovered that her ex-boyfriend’s fitness monitor recorded too much sweating and heavy breathing away from home in the wee hours of the morning – Slater wrote, “His physical activity levels were spiking. Spoiler alert: He was not enrolled in an Orangetheory class at 4 a.m.”) I’m certain law enforcement officials this week are using fitness trackers and smartphone geolocation to confirm the locations of the U.S. Capitol rioters caught on camera (preserved on Bellingcat) and identified with facial recognition software. Criminal charges will follow.

But many people don’t know how a collection of physical information about your body can be used by large tech companies. Facial recognition software has become controversial, but there is no reason the government couldn’t create a database with body measurements as a foundation. Some security programs use gait recognition and matching: the way you move through space is as unique as a fingerprint, and software has been developed to compare the walk of a masked bank robber with the walks of police suspects. The Chinese government has developed gait recognition software as part of its population control measures.
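To make the matching idea concrete, here is a minimal, purely illustrative sketch – not any vendor’s or government’s actual algorithm – of scoring the similarity of two gait feature vectors. The feature names and values are invented for demonstration:

```python
import math

# Purely illustrative: a real gait-recognition system extracts many features
# from video (joint angles over time, stride dynamics). The features below
# are invented for demonstration.

def cosine_similarity(a, b):
    """Score how closely two gait feature vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical features: [stride length (m), cadence (steps/min), arm swing (deg)]
robber_gait = [0.75, 110.0, 30.0]
suspect_gait = [0.74, 108.0, 31.0]

score = cosine_similarity(robber_gait, suspect_gait)
print(f"gait similarity: {score:.4f}")  # values near 1.0 suggest a possible match
```

Real systems compare far richer movement signatures, but the principle is the same: your walk becomes a vector of numbers, and vectors can be matched.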

Virtual tailoring can be important in a pandemic, when you may not want to let someone get close enough to measure your neck and inseam. The MTailor app has used smartphone cameras to size custom clothes for the past six years.

Amazon has been especially keen to take your body measurements. From the fashion selfie app that Amazon Shopping Service closed down last year, to its Halo fitness app, which the Washington Post cited as the most intrusive tech it had ever tested, to the new clothes-sizing app that claims to customize shirts just for you, Amazon has been producing “consumer benefits” that require us to submit measurements to the company. The newest tech “uses your height, weight, and two photos to create a precise fit” for clothes that you would order from Amazon. According to the Washington Post, the Halo fitness tracker “tells you everything that’s wrong with you. You haven’t exercised or slept enough, reports Amazon’s $65 Halo Band. Your body has too much fat, the Halo’s app shows in a 3-D rendering of your near-naked body. And even: Your tone of voice is ‘overbearing’ or ‘irritated,’ the Halo determines, after listening through its tiny microphone on your wrist.” Too much truth may be frustrating for all but the most masochistic of consumers. But Amazon benefits from all of this data.

For the clothing app, Amazon deletes the pictures you send to assist in sizing, but it keeps the data, including a virtual model of your body. This information is clearly helpful for the described function, but it can serve other purposes as well. If it falls into your data aggregation file, held by and on behalf of information management companies everywhere, it can be used both to identify you by body type and specific body information and to keep tabs on the changes in your body shape over time. Those changes could trigger sales contact based on assumptions of pregnancy, illness, other health emergencies, or even age. How your body shape changes over time can lead to assumptions about fitness routines, consumption, and lifestyle. Companies like Amazon will make sales suggestions – books, vitamins, workout equipment, diapers – based on these assumptions. Target operated a 20,000-person 3-D body scan survey in Australia and is likely to use that information for analytics of all kinds.
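As a deliberately simple sketch of that inference chain – the measurements, thresholds, and categories below are invented for illustration and not drawn from any company’s actual system – retained body models can be turned into marketing assumptions like so:

```python
from datetime import date

# Hypothetical virtual-model snapshots retained after the photos are deleted.
snapshots = [
    (date(2020, 1, 15), {"waist_cm": 81, "weight_kg": 68}),
    (date(2020, 7, 15), {"waist_cm": 88, "weight_kg": 74}),
]

def marketing_assumptions(history):
    """Map body-shape changes to the kinds of sales categories described above."""
    (_, first), (_, last) = history[0], history[-1]
    waist_gain = last["waist_cm"] - first["waist_cm"]
    if waist_gain >= 5:
        # A crude inference engine might flag pregnancy, illness, or lifestyle change.
        return ["maternity wear?", "fitness equipment", "diet books"]
    if waist_gain <= -5:
        return ["new wardrobe sizes", "athletic gear"]
    return ["standard recommendations"]

print(marketing_assumptions(snapshots))
```

The point is not the crude arithmetic but the asymmetry: you see a shirt recommendation, while the company keeps a longitudinal record of your body.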

Some companies will identify you by your body movement. Some will use body information to sell you things or combine it with other data that places you in helpful sales categories – for example, a fit, active 20-year-old will receive different sales pitches than a heavy, sedentary 50-year-old. And we will only ever see the tip of the iceberg. For example, Microsoft has applied for a patent that would use body movement and facial expressions to evaluate the success of business meetings. The possibilities are endless. And, as discussed in my blog posts from the past two weeks, body data provided to Amazon, Target, or other companies can be used by those companies for almost any reason.

Your body may belong to you, but the story it tells belongs to big tech.

One Transaction Generates Data to Feed Multitudes

Last week I jumped from the starting point of the newest U.S. anti-trust action against Google into a discussion about the legal and economic status of data. I would like to carry the discussion of data further.

To briefly recap: data is history (it describes someone at a given time, or it describes something that happened at a given time); history is not subject to ownership by anyone; and while we have some laws restricting use or financial exploitation of the information, generally anyone who can figure out how to use data legally will be allowed to use it productively. Data is considered a commodity by many, including the state AGs who just sued Google. Is this more like harnessing the wind, drilling for oil, issuing securities, or cultivating crops? We are still figuring out the appropriate analogies, and the correct analogy may depend on the type of data collected and how it is used.

What do I mean when I say anyone can use data productively? In the U.S., except for limited restrictions, many parties can claim control and use of the descriptions of a transaction. For example, if Anita purchased a pair of slippers from a local store to be shipped to her house, and she paid with the store’s online application accessed from her smartphone, a dozen parties have claims on some or all of that information. Under current law, all of these parties could use the information to effectuate the transaction, and likely for internal purposes, and some could transfer that data to others in most circumstances.

Anita thinks this data is hers because it describes her transaction, and California and the EU give her some rights to limit how the information is shared. But there are many more “first parties” who feel they were part of the transaction and will keep/use records of it. Who would collect data from this transaction?

The store she purchased the slippers from, of course, maintains this data as its own sales record. But so do that store’s merchant bank, its payment processing company, and probably the shipping company used to deliver the goods. By the way, the shipping company could actually be an entire series of primary shippers, fulfillment coordinators, warehouse operators, and trucking or delivery contractors, all of whom now have Anita’s name and address and probably know what was shipped to her house. They may work for one company, or they may represent several separate entities. The store may have a special purchase-points discount program, with an outside marketing firm managing the program and keeping Anita’s information in its databases.

The store’s online presence is likely monitored by Google Analytics or a similar data company. If Anita came to the store through an online advertisement, then the site hosting the ad, the company managing the ad buy, and the ad placement network would likely have detailed information on Anita’s purchase and may receive funds from the clickthrough and/or the purchase. There are several variations of this setup that could include other parties receiving Anita’s information.

But Anita’s side of the transaction also creates interested parties. Since she found and purchased the slippers over her phone, the company that operates the application she used will capture all of the transaction data. So may the company that provides the core software for the phone and allowed the app to be downloaded – likely Apple, Google, or Samsung. The phone company that connected the transaction – Verizon, T-Mobile, or AT&T – may collect information about the transaction too. All of these companies can include location data showing when and where the transaction was completed, and they may charge to pass this data to the companies mentioned on the retailer’s side of the transaction, to third parties interested in the transaction, or to data aggregators.

And, of course, Anita needs to pay for the slippers, so her bank will keep the data, and so will the company sponsoring the payment application she used – Venmo, PayPal, MasterCard, Visa, AmEx. All of these financial companies think of themselves as data companies now and make significant money packaging up the data about all of our transactions, analyzing them with machine learning programs, and selling the information – aggregated or otherwise – to anyone who might be interested. Some of these companies will package up the names of everyone who bought slippers or footwear over the past month and sell contact with these people to other retailers who want to find live slipper-buyers.  Maybe a retailer’s analytics show that people who just purchased slippers will soon purchase sweatpants or a robe, or even cocoa mix and marshmallows, so they want to send out a coupon when they know a buyer is ready. And, as with the shipping companies, there are lots of business structures to serve these markets, with financial processors and marketing consultants and data analytic specialists, so the number of companies in the chain is likely higher than you might think.
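One way to visualize the fan-out is to model which parties end up holding which slices of Anita’s record. The party names and field groupings below are a simplified, illustrative map, not a complete or authoritative one:

```python
# Simplified sketch of the data fan-out from one purchase. Party names and
# field groupings are illustrative; real chains are longer and vary by deal.

transaction = {
    "buyer": "Anita", "item": "slippers", "price": 24.99,
    "address": "123 Main St", "payment": "card ending 4242",
    "location": "home, 8:14 pm", "device": "smartphone",
}

parties = {
    "retailer":           ["buyer", "item", "price", "address", "payment"],
    "merchant bank":      ["buyer", "price", "payment"],
    "payment processor":  ["buyer", "price", "payment"],
    "shipping chain":     ["buyer", "address", "item"],
    "loyalty program":    ["buyer", "item", "price"],
    "analytics provider": ["item", "price", "device"],
    "ad network":         ["buyer", "item", "device"],
    "app/OS provider":    ["buyer", "item", "price", "location", "device"],
    "mobile carrier":     ["location", "device"],
    "bank/card network":  ["buyer", "price", "payment"],
}

for party, fields in parties.items():
    held = {field: transaction[field] for field in fields}
    print(f"{party:>18}: {held}")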

Nearly all of the companies I just mentioned have a first-degree relationship to the transaction – the company performed a service that most people would recognize as part of that transaction. As you move further out along the chain to second and third-degree relationships, or companies that were not involved at all in making the introduction, the sale, the payment, or the delivery, you still find people making their living off of the data generated by Anita’s purchase of slippers.

I describe this transaction and the data it generates to help explain why the attention economy is complex and why it is difficult for Anita to say “the information about my purchase belongs to me, and not to anyone else.” Not only is history not something that one person can own, but a dozen parties have a legitimate claim to that same sliver of history, and dozens more are likely making use of it in an intricate data-focused economy.

Technologies Lost in 2020

In 2020, a more tragic year than most, we lost giants of the technology world. We lost Larry Tesler, PARC’s magician, later at Apple, who helped develop the computer commands that run our lives, like ‘search and replace’ and ‘copy and paste,’ and other concepts crucial for user-friendly software. We lost Russell Kirsch, creator of the pixel and the first digital photo. We lost Gideon Gartner, who pioneered rigorous research for companies buying computing technologies and left his name on his legacy consultancy. But we also lost important technologies.

Technologies, like animals and plants, have life cycles. A technology is born; if it can sustain investment and interest, it lives; eventually it either disappears or gives way to a better method of solving the problem. I keep a very old cash register, telephone, typewriter, and adding machine in my office to remind me of the fleeting life of tech.

Due to the pandemic, 2020 saw technologies that might otherwise have foundered find new life, accelerating their development and acceptance. Videoconferencing finally hit its stride, not just for meetings but for webinars, conferences, parties, and weddings. Grocery delivery, which failed so spectacularly in the 1990s and had struggled to catch on in many markets since, has exploded and gone public. Online wellness apps have surged.

But many technologies and tech services died in 2020, and we stand here to remember them.

Adobe ends support for its Flash Player today and will block content from running in Flash Player in two weeks. If you wanted to do anything fun on the internet in the 1990s and early 2000s, you needed the Adobe Flash Player. It gave us a glimpse of what the medium could be. But Flash had technical problems and security issues and has been replaced by better, more efficient alternatives. According to PC World, “Flash actually held on far longer than anyone expected, considering Apple co-founder and CEO Steve Jobs fired the first shot at Flash way back in 2010 with his famous open letter. Its decline started officially in 2017 when Adobe said it would kill support for Flash by the end of 2020. Browser makers also started to restrict Flash, and eventually blocked it entirely.” But Flash nostalgia fans can take heart: the Wayback Machine still emulates Flash animations in its software collection.

“What’s a Quibi?” is now a historical question and not a hot new trend. How quickly things change – it was a hot new trend during last year’s Super Bowl. According to Finance & Commerce, “Quibi, short for ‘quick bites,’ raised $1.75 billion from investors including major Hollywood players Disney, NBCUniversal and Viacom. But the service struggled to reach viewers, as short videos abound on the internet and the coronavirus pandemic kept many people at home. It announced it was shutting down in October, just months after its April launch.” As companies and governments fight over TikTok, even the well-funded, well-researched big players could not compete in the youth-driven short video market. Its owners expected Quibi to dominate the commute to work – which millions of us stopped doing just before its product release date. The timing could not have been worse.

If, as a tech company, you sell billions of dollars’ worth of clothes to consumers, then it might make sense to charge those consumers $200 for a selfie camera that gives fashion advice and proposes what they should buy to complete any outfit. However, the cost is relatively high, and apparently receiving personalized fashion advice from your clothing store both feels manipulative and gets old fast. For these reasons, among others, the Amazon Echo Look service ceased to work on July 24 of this year. However, the Amazon Shopping app still spins out fashion advice and can be accessed by calling for Alexa on other devices. By the way, Amazon also abandoned its Dash Wand product, a hand-held device with a built-in scanner to read barcodes for groceries you want to reorder.

Believe it or not, AT&T was still selling new DSL connections until October of this year. AT&T is not cutting off current subscribers but won’t take on new ones, which could mean a total lack of wire-connected internet access in some rural areas. Google Fiber is still running strong, but its Google Fiber TV service was dropped in February, except for existing customers – with no word on how long those customers can keep the service. Google claims that its customers don’t need traditional television.

Other notable services leaving us this year include Windows 7, for which Microsoft has stopped sending security updates; Google’s Daydream virtual reality platform for mobile phones, which is no longer supported; and Slingbox, which discontinued all products and services this year – its products will gradually lose functionality as apps are phased out. Technology marches on, always crowning winners and casting losers into the ditch, leaving most to hover in between hoping to turn a profit next year. Thinking about the tech that disappeared this year gives us context for the lives of our current favorite products and services. Nothing lives forever.

 

What Law, Economics, and the Newest Anti-Trust Lawsuit Ask About Data

Two weeks ago I collected the major recent anti-trust/competition lawsuits filed against U.S. big technology companies by regulators and competitors alike. My point was that, after a long fallow period in which these giants received the benefit of the doubt for their successful competitive practices, the public mood seems to have turned, supporting lawsuits on a wide variety of theories.

Although I wrote to mark the beginning of a trend likely to continue for decades, my article was premature: a day after publication, Google was sued in an anti-trust action by 38 states. This lawsuit is the first action in which I have seen the term “attention economy” stated, defined, and used as the basis for claims. The states use the metaphor of data as a resource, like oil, that can be captured and refined into something worth selling.

The states claim that Google “uses its gargantuan collection of data to strengthen barriers of expansion and entry, which blunts and burdens firms that threaten its search-related monopolies (including general search services, general search text advertising, and general search advertising).” Setting aside the fact that Google has a significant direct competitor in Microsoft – a company powerful enough to be the subject of its own set of anti-trust suits by regulators and competitors in the past couple of decades – the claims are similar at their core to the anti-trust case made against AT&T starting in 1974. Google has built an enormous resource so valuable that everyone uses it – like the telephone network fifty years ago – and it is leveraging this resource to 1) enter other fields as a leader, and 2) keep competitors out of its own revenue streams.

There is much to unpack in this complaint and I intend to do so in a later post. Here, as we career toward the blessed end of our annus horribilis (and we hope, not into another), I want to revisit the metaphorical concepts underlying many of these lawsuits. What are data, really, as a legal concept?

First, we need to parse the term. What we call data is history – a description of what happened and who it happened to – and nobody owns history. Of course, only limited aspects of history are recorded for posterity, but the information captured in the modern world is growing exponentially with cameras and IoT devices at every bank and intersection. Fading memories can reduce the impact of history, but computers can keep their historic information for as long as their owners like.

The classification of information at the base of this and many other lawsuits includes two types of data: transactional data and descriptive data. The combination of the two is especially valuable. It helps to know that 100 people bought left-handed baseball gloves, but it can be much more valuable to know that Tommy bought a left-handed baseball glove.

I am using transactional data in its broadest interpretation right now, captured information about every move made in our world. I’m talking about any activity that can be noted and recorded. This includes online searches, browsing to particular websites, remaining at an internet page for ten minutes – or leaving within seconds, watching videos, requesting videos and not watching them, browsing books or cooking utensils, translating phrases. It includes attending church services, riding the bus, walking in the park, visiting friends, and learning to juggle. And of course, it includes financial transactions, both online and off, where you purchase diapers or stay in a hotel room.

Descriptive data is simply information that can help identify you, which can be as simple as a name, address, or email. But for sophisticated analysts like Google, two or three items of information like your birth date, your gender, or even particular search terms may be enough, in conjunction, to identify you. This is why legislators have such a difficult time defining “identifiable” information.  Lists of name, address, and social security number work well for laws concerned about restricting identity theft because this limited data is what the thieves need.  However, for laws restricting business use of personal data like the GDPR or CCPA, broad – in those two cases impossibly broad – definitions of personally identifiable information recognize that companies can identify a person from aggregations of data that legislators can’t predict ahead of time.
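A toy demonstration of why two or three innocuous items can identify someone: count how many records in a dataset share a given combination of quasi-identifiers. Whenever a combination maps to exactly one record, those fields have become identifying. (The records below are invented.)

```python
from collections import Counter

# Invented records: (birth date, gender, ZIP code). None of these fields
# looks identifying on its own.
people = [
    ("1985-03-12", "F", "60601"),
    ("1985-03-12", "M", "60601"),
    ("1990-07-04", "F", "60601"),
    ("1985-03-12", "F", "60614"),
    ("1990-07-04", "F", "60614"),
    ("1990-07-04", "F", "60614"),  # two people share this combination
]

for combo, count in Counter(people).items():
    if count == 1:
        print(f"{combo}: unique -- these three fields identify one person")
    else:
        print(f"{combo}: shared by {count} people -- still anonymous in this set")
```

Scale the dataset up to a company holding billions of attributes per person and the legislators’ definitional problem becomes obvious: almost any combination of fields can become “identifiable” in the right hands.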

The concepts are not mutually exclusive, as transactional data can be descriptive – regular purchases of feline treats, food, and litter can describe a person as a cat owner – and descriptive data can have clear transactional implications – if we know where you live and work we are likely to know where you order coffee or buy groceries. But it helps to understand the differences between the two types of data if you are considering the legal implications of data ownership and use.

As a general rule, U.S. law does not recognize ownership of data. Neither transactional information nor descriptive information is copyrightable subject matter. There is a line of cases that protects the economic value of certain “hot news” transactional information like the play-by-play call of baseball games for the people who invested in creating those games in the first place, but only for a very limited time, maybe as short as a few minutes, and then the data is available to everyone.

So, no matter what you would like to believe, you don’t own data that describes you or data created by your own actions. It is not possible to own this information. If this thing (information) that is no one’s property has value, who gets to exploit that value? As stated immediately above, not the person described or the person whose actions created the data. While the EU protects such information from certain kinds of exploitation and holds that people have a human right to keep certain parts of this information private, no one has seriously offered a regime where you could make money by selling your own data.

Why not? In part because no one has recognized that you might have an economic interest in data about you or your life, and in part because recognizing and accounting to you for the use of the data would be difficult and would involve policy decisions we haven’t seriously debated yet. Individuals would need to push Google and others to provide credits for using our data, and the information giants have no incentive to do so. It has been suggested that data subjects should form bargaining collectives to fight for the value of their data, but I haven’t seen data unionization gain traction, and the government would need to step in to make the idea serious. The market is unlikely to provide us economic management of our own descriptive or transactional data.

Google doesn’t own it either. But Google holds lots of it and can provide transactional data in a timely fashion. (That’s another issue with transactional data: it loses economic value quickly. If I know someone wants to buy a book now, I can sell it now; if I know someone wanted to buy the book last year, that information has different, and likely lesser, value to me.) The new lawsuit compares this data to oil. I don’t agree. I would argue that, if Google’s data is an economically viable resource, the kind of data used by Google is more like a crop that is harvested and milled into something valuable. Google doesn’t pick its data out of the ground or the air; instead, it creates and cultivates a place – its search engine – for transactions (searches) to be initiated by people, collecting the descriptive results of the transactions Google facilitated. Placing a camera at an intersection and collecting information about passing pedestrians is more like drilling for oil – you take whatever you find. Google has cultivated an entire ecosystem where people express their needs and desires, and it harvests the information expressed there.

So does the fact that Google has created a place and method for people to voluntarily express their information mean that Google has more of a right to that data than anyone else? Economically and legally, both oil and wheat are commodities that can be sold by whoever holds them, and sold first by whoever can collect them. The court will need to decide. The anti-trust laws can punish Google for the way it wields its market power, depending on how that power is defined. But the legal and economic thinking about how data functions in our society can change the way we live our lives, and who gets a financial benefit from the things we do.

Five Things to Do in Response to SolarWinds Compromise

The recent hack against FireEye and the U.S. Treasury and Commerce Departments affected SolarWinds software used by more than 18,000 customers, mostly private companies in addition to the famously affected government entities. SolarWinds has confirmed that a cyberattack on its systems inserted a vulnerability within the SolarWinds® Orion® Platform software builds for versions 2019.4 HF 5, 2020.2 with no hotfix, and 2020.2 HF 1 (see the SolarWinds Advisory if unsure which version you use). If your organization uses these products, prompt action may be needed to identify and mitigate potential security implications. The malware allows the (likely Russian) hackers to set up a back door into companies using the Orion Platform. Some targets have been attacked and mined for data right away; for others, the vulnerability remains as yet unexploited.

Thousands of SolarWinds customers have already received notice directly from SolarWinds that their products were not affected by the incident and no action is required. Otherwise, the following mitigation steps are recommended (a rough version-check sketch follows the list):

  1. Disconnect from the internet all Orion products for versions 2019.4 HF 5 and 2020.2 with no hotfix or 2020.2 HF 1 and update your versions as noted in the SolarWinds security advisory
  2. Identify and block all traffic to and from external sources where Orion software is installed
  3. Remove exemptions for Orion software file directories in your organization’s antivirus software and scan your systems
  4. Identify threat-actor controlled accounts and remove those accounts
  5. Continue monitoring systems for other suspicious activity and read updated advisories as more information about the attacks is discovered and released
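As a sketch of step 1, the check reduces to comparing an installed Orion build string against the builds named in the advisory. Treating the version strings below as exact-match keys is an assumption for illustration; rely on the SolarWinds Advisory for the authoritative determination:

```python
# Affected builds are the ones quoted in the advisory above. Treating
# "2020.2" with no hotfix suffix as its own string is an assumption
# made for this illustration.
AFFECTED_BUILDS = {
    "2019.4 HF 5",
    "2020.2",       # 2020.2 with no hotfix
    "2020.2 HF 1",
}

def is_affected(installed_version: str) -> bool:
    """Return True if an installed Orion build matches an affected build."""
    return installed_version.strip() in AFFECTED_BUILDS

if __name__ == "__main__":
    for version in ["2019.4 HF 5", "2020.2 HF 2", "2020.2"]:
        status = "AFFECTED - follow the advisory" if is_affected(version) else "not on the affected list"
        print(f"Orion {version}: {status}")
```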

SolarWinds and FireEye have also provided advisories that can help your organization determine what damage or data exposure, if any, was inflicted by the hackers and what else to do to protect your systems and data.

New HHS Rules Would Simplify Emergency Data Sharing Procedures

Officials at the Department of Health and Human Services (HHS) are proposing modifications to the HIPAA Privacy Rule to ease information sharing in emergencies. Some of the modifications under discussion would provide patients affirmative legal rights, and others would loosen restrictions on medical personnel. The timing of the discussions clouds the likelihood of implementation: changes to the rules cannot be finalized before January 20, 2021, when President-elect Joe Biden takes office. Public comments are due 60 days after the proposal’s publication in the Federal Register. HHS is seeking input from HIPAA-covered entities, other healthcare and technology stakeholders, consumers, activists, and patients. Trump Administration officials have touted the proposed changes as displaying a commitment to providing individuals with greater access to their health information and as a way to deregulate the health care industry.

The proposed rules give individuals greater rights. For example, individuals would be permitted to use personal resources to view and capture images of their protected health information (PHI) when exercising their right to inspect their PHI. The proposed rules would also require covered entities to inform individuals of their right to obtain copies of PHI when a summary of PHI is offered instead.

A change that would require significant guidance is the proposal to reduce the identity verification burden on individuals exercising their access rights, given that HIPAA requires a covered entity to take reasonable steps to verify the identity of an individual requesting access and leaves the method of verification to the covered entity’s judgment. The proposed rules would require covered healthcare providers and health plans to respond to certain records requests received from other covered healthcare providers and health plans when directed by individuals pursuant to the right of access. Providers would also have less time to respond to individual access requests, as the deadline would shrink from thirty days to fifteen.

The proposed rules would also allow covered entities greater agency to disclose PHI to “social services agencies, community-based organizations, home and community-based service providers, and other similar third parties that provide health-related services.” Currently, covered entities are permitted discretionary disclosures of PHI based on their professional judgment. The proposed rules would make the measure more subjective, with a standard based on the entity’s “good faith belief that the use or disclosure is in the best interests of the individual.”

HHS officials believe that the proposed rules would help “reduce the burden on providers and support new ways for them to innovate and coordinate care on behalf of patients” while ensuring HIPAA’s promise of privacy and security.

It’s An Ill Wind All Right; Will It Blow Anybody Any Good?

Announced within days of one another, two developments, one bureaucratic, one nefarious, showcased the growing chasm between the dream and the reality of our increasingly interconnected world.  On December 4, 2020, President Trump signed into law the “Internet of Things Cybersecurity Improvement Act of 2020,” which establishes security standards for Internet of Things (IoT) devices owned or controlled by the Federal government. And this week, with everyone focused on the Electoral College and the Pfizer vaccine, we learned again just how vulnerable the systems we rely upon for, well, just about everything, really are.

As reported in Krebs on Security, Russian hackers (probably) hacked SolarWinds’ Orion platform software that, among other things, helps the federal government and a range of Fortune 500 companies monitor the health of their IT networks.  If you have never heard of SolarWinds or its software, the scope of the problem might be lost on you. Make no mistake, it’s kind of a big deal.  SolarWinds’ customers include:

  • More than 425 of the US Fortune 500
  • All ten of the top ten US telecommunications companies
  • All five branches of the US Military
  • The US Pentagon, State Department, NASA, NSA, Postal Service, NOAA, Department of Justice, and the Office of the President of the United States
  • All five of the top five US accounting firms
  • Hundreds of universities and colleges worldwide

Here is the eye-opener, as reported by David Sanger and his team at the New York Times: “The National Security Agency — the premier U.S. intelligence organization that both hacks into foreign networks and defends national security agencies from attacks — apparently did not know of the breach in the network-monitoring software made by SolarWinds until it was notified last week by FireEye.” That’s right: the same seemingly all-powerful NSA – in the news just a couple of months ago, when the U.S. Court of Appeals for the Ninth Circuit ruled that the agency’s warrantless telephone dragnet, which secretly collected millions of Americans’ telephone records, may well have been unconstitutional – did not know it had been hacked until FireEye, a private cybersecurity consulting company, told it so. And FireEye itself would not have known but for its investigation of its own hack.

This brings me to my point:  it’s almost 2021 and the US has just now signed into law a bill requiring, among other things, the OMB to “develop and oversee the implementation of policies, principles, standards, or guidelines as necessary to address security vulnerabilities of information systems.” However salutary it is to require more care in how the US government buys connected devices, it sure seems like a belated drop in a very large bucket. In the meantime, everyone from Homeland Security on down is trying to figure out how something as innocuous-looking as a software upgrade could wreak such havoc.  It would take someone a lot less jaded than I not to think about horses and barn doors or days late and dollars short.

If you had “Entire Nation Hacked” on your 2020 bingo card, you may collect your winnings on the way out.  There is much to be done and in many ways, we are just getting started.  To paraphrase the Mishna: The day is short, the work is great, the workers are lazy, the reward is great, and the Russians are pressing.