No data mining in Colorado minds as state passes U.S.’s first brainwave privacy law


Your brainwaves are safe in Colorado, relatively speaking.

That’s because the U.S. state yesterday became the first to classify neural data as sensitive personal data, giving it protections under the Colorado Privacy Act, which came into force last year.

That means companies developing brain-reading technology would, under certain circumstances, have to get people’s consent before using the information they glean from brainwaves. That’s the same protection people in the state already had for other biological information, like fingerprints and facial images. And as with those other biometric data, the protections apply only to brain data that’s collected for identification purposes, not to uses like inferring emotions or seeing how the brain reacts to various stimuli.

The new law, signed yesterday by Gov. Jared Polis, focuses on consumer-level neurotech devices, as the processing of such information was already tightly controlled in the medical domain. So it isn’t really applicable to, say, Neuralink’s implant-based brain-computer interface (BCI), which is still in development, but rather to non-invasive BCIs like the hardware sold by Emotiv or NeuroSky.

These aren’t exactly mass-market devices just yet, though companies like Meta, Apple and Snap are all working on their own entries in the field. So Colorado’s new law is pretty forward-looking, which is particularly notable in a country that’s only now getting around to maybe passing comprehensive federal privacy legislation.

Similar laws will probably be hot on its heels. Just this week, California’s Senate Judiciary Committee approved that state’s Neurorights Act, and lawmakers in Minnesota are working on their own version. Chile has already given brain data constitutional protection, which its Supreme Court has used to order Emotiv to delete a citizen’s brain data.

The biggest driving force behind this push is the NeuroRights Foundation, a nonprofit that’s trying to preempt the unethical use of brain-reading technology by ensuring that heavy restrictions apply. The stakes are certainly high: The foundation’s mission talks about protecting personal identity and free will, along with privacy and fair treatment by algorithms using brain data as inputs.

“Everything that we are is within our mind,” NeuroRights Foundation cofounder Jared Genser told the New York Times yesterday. “What we think and feel, and the ability to decode that from the human brain, couldn’t be any more intrusive or personal to us.” Even though the new law doesn’t actually protect data gathered for those purposes—which is largely thanks to Big Tech lobbying—Genser said it was “a major step forward.”

I’ll be honest: I find it hard to envision a world in which consumer BCIs are commonplace, making this all feel like a very theoretical exercise. But given that neurotech is already expected to be a $15 billion market this year, with significant growth forecast for the coming years—and given the fundamental risks that are involved when machines read our thoughts—I’m glad to see legislators are getting active now.

More news below.

David Meyer


This story was originally featured on Fortune.com
