We’ve been saying it for a while: in the absence of aggressive federal enforcement, the states are stepping up. And now it’s happening — loudly and clearly. The Massachusetts Attorney General just announced a $2.5 million settlement with Earnest, a student and personal loan lender, for violations related to fair lending, AI-driven underwriting, and consumer disclosure requirements.
This case is part of a growing trend: states are filling the regulatory void with their own enforcement, creating a patchwork of requirements that can be confusing and inconsistent. And Massachusetts isn’t alone. State-level scrutiny is expanding, with other states also ramping up their fair lending oversight. For financial institutions, including lenders relying on AI models, this is a moment to pay close attention.
Earnest, a lender known for its use of machine learning and non-traditional underwriting, allegedly failed to implement appropriate fair lending controls between 2014 and 2020. Among the violations:

- Relying on AI-driven underwriting models without adequate fair lending testing or monitoring
- Permitting human overrides of model decisions without written procedures or oversight
- Falling short of consumer disclosure requirements
The result: Massachusetts is holding the lender accountable for both algorithmic and human-led decisions that may have disproportionately harmed borrowers of color — particularly Black and Hispanic applicants.
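To make "fair lending testing" concrete: one common first-pass control is the adverse impact ratio, which compares each group's approval rate to a reference group's and flags ratios below the familiar four-fifths benchmark. The sketch below is illustrative only; the column names, sample data, and 0.8 threshold are assumptions for this example, not details from the Earnest case, and a low ratio is a signal for deeper review, not a legal conclusion.

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str,
                         approved_col: str, reference_group: str) -> pd.Series:
    """Approval rate of each group divided by the reference group's rate.

    A ratio below 0.8 (the "four-fifths rule") is a common first-pass
    flag for potential disparate impact -- a prompt for review, not proof.
    """
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates[reference_group]

# Hypothetical decision data -- column names and values are illustrative.
decisions = pd.DataFrame({
    "race":     ["White", "White", "Black", "Black", "Hispanic", "Hispanic"],
    "approved": [1,        1,       1,       0,        0,          1],
})

ratios = adverse_impact_ratio(decisions, "race", "approved",
                              reference_group="White")
flagged = ratios[ratios < 0.8]  # groups warranting deeper analysis
print(flagged)
```

Running a check like this on every model version, and on decisions after human overrides, is the kind of routine monitoring regulators increasingly expect to see documented.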
This is more than just a regulatory slap on the wrist. Earnest is now required to overhaul its AI model governance, implement written procedures for overrides, and submit periodic compliance reports to the AG’s office. And although the conduct in question stopped years ago, the state’s willingness to dig into historical practices underscores a key takeaway: your past practices are still very much on the table.
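What might "written procedures for overrides" look like in practice? One approach is to make the override record itself enforce the procedure: nothing gets logged without an approved reason code and a written justification. This is a minimal sketch under assumed requirements; the schema and reason codes are hypothetical, not drawn from the settlement.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical approved-reason list -- a real program would maintain this
# in policy documentation, with each code mapped to required evidence.
ALLOWED_REASON_CODES = {"VERIFIED_INCOME_UPDATE", "DOCUMENTED_ERROR",
                        "POLICY_EXCEPTION"}

@dataclass(frozen=True)
class UnderwritingOverride:
    application_id: str
    model_decision: str          # e.g. "deny"
    final_decision: str          # e.g. "approve"
    reason_code: str             # must come from the approved list
    justification: str           # written rationale for the audit file
    reviewer_id: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        # Refuse to record an override without a documented, approved reason.
        if self.reason_code not in ALLOWED_REASON_CODES:
            raise ValueError(f"Unapproved override reason: {self.reason_code}")
        if not self.justification.strip():
            raise ValueError("An override requires a written justification.")
```

Persisting records like these also makes the periodic compliance reports the settlement requires far easier to assemble, since every deviation from the model is already time-stamped, attributed, and explained.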
And Massachusetts is hardly the only state on the move. In recent months, other attorneys general and state regulators have stepped up their own fair lending oversight and enforcement.
What ties all this together is a growing concern that algorithmic decision-making — while often pitched as objective and efficient — can quietly replicate or even amplify existing biases, especially when human overrides are involved without oversight.
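Oversight of that amplification risk can be mechanical: compare the model's approval rates with post-override approval rates, group by group. If overrides consistently move outcomes in different directions for different groups, they are introducing disparity rather than correcting errors. A minimal sketch, assuming hypothetical column names:

```python
import pandas as pd

def override_impact_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare model vs. post-override approval rates per group.

    Expects boolean columns "model_approved", "final_approved", and
    "overridden" (assumed names). A shift that differs sharply across
    groups suggests overrides are adding, not removing, disparity.
    """
    summary = df.groupby(group_col).agg(
        model_rate=("model_approved", "mean"),
        final_rate=("final_approved", "mean"),
        override_rate=("overridden", "mean"),
    )
    summary["shift"] = summary["final_rate"] - summary["model_rate"]
    return summary

# Hypothetical records: the model's call, then the decision after any override.
apps = pd.DataFrame({
    "race":           ["White", "White", "Black", "Black"],
    "model_approved": [False,   True,    True,    True],
    "final_approved": [True,    True,    False,   True],
    "overridden":     [True,    False,   True,    False],
})

print(override_impact_by_group(apps, "race"))
```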
State-level enforcement creates a challenging environment for compliance. Unlike federal agencies, which often provide detailed guidance and standardized expectations, state actions vary significantly. One state may prioritize AI governance, while another zeroes in on data disclosures or pricing exceptions.
For risk, compliance, and legal teams, this means there is no single playbook: expectations have to be tracked, interpreted, and met state by state.
And if you’re operating nationally? Multiply that complexity by fifty.
Federal fair lending enforcement hasn’t gone away — but it’s not as aggressive as it has been in the past. That’s where the states are stepping in. Massachusetts’ settlement with Earnest is just the latest example of state attorneys general acting like mini-CFPBs, aggressively investigating everything from underwriting and pricing to disclosures and model governance.
For financial institutions, it’s a warning shot. The compliance burden isn’t just about what regulators are doing today. It’s about what they’re willing to look back on — and whether your practices can withstand that scrutiny.
Now’s the time to shore up your fair lending program, especially if you’re leveraging AI or granting discretion in your approval process. Because while the rules may be uneven, the enforcement risk is not.
Preventing and detecting discrimination starts with a strong compliance management system. Learn how to uncover and analyze fair lending risk with the right tools in our free whitepaper.