Paul Edwards, CEDR CEO & Founder, here. For anyone who was paying attention from the late 1990s into the early 2000s, a company called Google positioned itself, largely unregulated, to collect and sell the data of nearly every human being on earth.
Honestly, few, if any, of us understood how invasive data collection would be or the value of that data.
More companies jumped on board and started collecting data. The United States has still done little to nothing to limit how our tech companies abuse their power, how often they lose control of our data, or how that data is used.
When Google rose to power, most of us failed to recognize or acknowledge the potential pitfalls of their model until it was seemingly too late. These included the erosion of our privacy and, perhaps more significantly, the relinquishment of our control over our own data.
Now, a much more powerful piece of technology is being deployed. Instead of the Federal Government leading the charge, several states have taken it upon themselves to create regulations.
As Artificial Intelligence progresses and reaches a new milestone, with more than one million characters now able to be entered and processed by AI in a single query, its power and potential for making a positive difference are becoming increasingly evident. Recently, I talked about it on my podcast, which you can find here.
This is a technology that has the potential to revolutionize our lives for the better, but it needs to be regulated to ensure it's used responsibly. As the title implies, a few states have decided to address AI head-on, and I like what Colorado has done with their regulation.
Colorado has taken a thoughtful and well-informed approach to protecting consumers and employees. It is incumbent on every manager and HR professional to learn how AI can be helpful and to ensure that it is not misused.
Colorado’s balanced approach is a promising step toward harnessing AI's potential while addressing its risks. The Colorado Artificial Intelligence Act (CAIA) mandates that developers and deployers of AI tools exercise reasonable care to prevent discrimination when using "high-risk" AI systems.
These systems significantly influence consequential decisions, such as employment opportunities. The Act applies to Colorado employers, with specific exemptions for those with fewer than 50 employees, and aims to ensure that AI algorithms do not produce discriminatory outcomes based on protected characteristics such as age, color, disability, or ethnicity.
The law has been signed and takes effect in 2026. (CEDR members, don’t worry: we will take care of any necessary updates to your handbook as they come down the pike.)
I highly recommend reading Jessica Mason's detailed analysis at the National Law Review for a comprehensive understanding of this issue and to see what thoughtful regulation can look like.