
State Fair Lending Enforcement Is Heating Up: Massachusetts Hits Lender with $2.5 Million Settlement

3 min read
Jul 14, 2025

We’ve been saying it for a while: in the absence of aggressive federal enforcement, the states are stepping up. And now it’s happening — loudly and clearly. The Massachusetts Attorney General just announced a $2.5 million settlement with Earnest, a student and personal loan lender, for violations related to fair lending, AI-driven underwriting, and consumer disclosure requirements. 

This case is part of a growing trend: states are filling the regulatory void with their own enforcement, creating a patchwork of requirements that can be confusing and inconsistent. And Massachusetts isn’t alone. State-level scrutiny is expanding, with other states also ramping up their fair lending oversight. For financial institutions, including lenders relying on AI models, this is a moment to pay close attention. 

The Massachusetts Case: A Cautionary Tale 

Earnest, a lender known for its use of machine learning and non-traditional underwriting, allegedly failed to implement appropriate fair lending controls between 2014 and 2020. Among the alleged violations: 

  • AI models that produced disparate impacts without adequate testing or controls to identify potential discrimination. 
  • Human discretion to override the model for approvals, denials, and pricing — without a written policy governing how those exceptions were to be made. 
  • Use of "knockout" rules, including one targeting immigration status, to automatically deny applications — again, without clear governance. 
  • Inaccurate adverse action notices that failed to properly disclose the reasons for denial, leaving borrowers confused and potentially unable to assert their rights. 

The result: Massachusetts is holding the lender accountable for both algorithmic and human-led decisions that may have disproportionately harmed borrowers of color — particularly Black and Hispanic applicants. 

This is more than just a regulatory slap on the wrist. Earnest is now required to overhaul its AI model governance, implement written procedures for overrides, and submit periodic compliance reports to the AG’s office. And although the conduct in question stopped years ago, the state’s willingness to dig into historical practices underscores a key takeaway: your past practices are still very much on the table. 

A Broader Pattern of State Enforcement 

Massachusetts isn’t alone. In recent months: 

  • New Jersey pursued a fair lending action against a cash advance company whose executives were caught on tape explicitly instructing employees not to work with certain racial or ethnic groups. The AG’s office is also investigating redlining by traditional lenders, using demographic lending data to root out disparities. 
  • New York has launched investigations into small business and consumer lenders over biased underwriting, misleading disclosures, and predatory practices, resulting in large settlements and public consent orders. The state’s FAIR Business Practices Act places new consumer protection requirements on lenders. 
  • Other states like California, Oregon, and Texas are signaling increased scrutiny of AI-driven underwriting models under their own consumer protection and anti-discrimination laws.

What ties all this together is a growing concern that algorithmic decision-making — while often pitched as objective and efficient — can quietly replicate or even amplify existing biases, especially when human overrides are involved without oversight. 

What This Means for Financial Institutions 

State-level enforcement creates a challenging environment for compliance. Unlike federal agencies, which often publish detailed guidance and standardized expectations, states vary significantly in their approaches. One state may prioritize AI governance, while another zeroes in on data disclosures or pricing exceptions. 

For risk, compliance, and legal teams, this means: 

  • AI models must be explainable and testable. If you can’t identify how a model might result in disparate impact, you’re exposed (see the testing sketch below for one simple first-pass check). 
  • Discretion needs guardrails. Human overrides should be documented, justified, and consistent with written policies. 
  • Adverse action notices must be accurate. These are regulatory documents, not just customer communications. 
  • Historical practices are not safe. Enforcement is retrospective — what you did five years ago can still come back to haunt you.

And if you’re operating nationally? Multiply that complexity by fifty. 
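To make the first bullet concrete, here is a minimal sketch of one common screening heuristic: comparing approval rates across groups using the "four-fifths rule" adverse impact ratio. This is an illustration only, not Earnest's methodology or a complete fair lending analysis; the group labels, sample data, and 0.80 threshold are hypothetical examples of how such a screen is often framed.

```python
# Simplified adverse impact ratio screen (illustration only, not a compliance tool).
# A full fair lending review layers regression analysis, matched-pair testing, and
# legal interpretation on top of simple rate comparisons like this.
from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group):
    """decisions: iterable of (group, approved_bool) pairs.
    Returns each group's approval rate divided by the reference group's rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total applications]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1

    rates = {g: approvals / total for g, (approvals, total) in counts.items()}
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()}

# Hypothetical data: ratios below roughly 0.80 are typically flagged for closer review.
sample = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
       + [("group_b", True)] * 55 + [("group_b", False)] * 45
print(adverse_impact_ratios(sample, reference_group="group_a"))
# {'group_a': 1.0, 'group_b': 0.6875}  -> group_b falls below the 0.80 threshold
```

In practice, teams run screens like this on both model-driven and overridden decisions, document the results, and follow up on flagged disparities with more rigorous statistical and legal analysis.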

The Big Picture 

Federal fair lending enforcement hasn’t gone away — but it’s not as aggressive as it has been in the past. That’s where the states are stepping in. Massachusetts’ settlement with Earnest is just the latest example of state attorneys general acting like mini-CFPBs, aggressively investigating everything from underwriting and pricing to disclosures and model governance. 

For financial institutions, it’s a warning shot. The compliance burden isn’t just about what regulators are doing today. It’s about what they’re willing to look back on — and whether your practices can withstand that scrutiny. 

Now’s the time to shore up your fair lending program, especially if you’re leveraging AI or granting discretion in your approval process. Because while the rules may be uneven, the enforcement risk is not. 

Preventing and detecting discrimination starts with a strong compliance management system. Learn how to uncover and analyze fair lending risk with the right tools in our free whitepaper. 

Download the Whitepaper

 

