How Michigan’s New AI Laws Could Shape Your Business in 2026
- Oxbridge Legal Services

- Nov 24
- 3 min read

Artificial intelligence (AI) is dramatically changing how businesses operate in Michigan. While AI promises efficiency and cost savings, it also creates substantial legal and compliance challenges as legislators move to regulate its use. Businesses unprepared for this shift may find themselves at heightened risk.
Michigan's Emerging AI Regulatory Landscape
In 2025, Michigan introduced two potentially landmark bills targeting AI:
House Bill 4667 criminalizes the intentional misuse of AI for illegal activities, including fraud and discrimination. Violators face felony charges and significant penalties. This bill defines AI broadly, covering machine-learning systems, chatbots, generative models, and automated decision tools.
House Bill 4668 requires businesses deploying AI for vital decisions—such as hiring, promotions, and credit approval—to perform ongoing safety and risk assessments, commission third-party audits, and disclose their AI safety protocols. These rules apply to organizations of all sizes in sectors from manufacturing to services.
These proposals mirror national trends emphasizing transparency, fairness, and accountability in AI regulation. Whether these bills pass this session or in a later one, regulation is coming, and businesses should prepare now.
What Does This Mean for Michigan Employers?
The broad scope of proposed House Bill 4668 means that even small and mid-sized businesses using AI in employment decisions may need to:
- Notify applicants and employees whenever AI influences hiring, promotions, or discipline.
- Maintain detailed records of all AI systems used, including training data sources and decision criteria.
- Obtain independent bias audits, especially for automated resume screening and performance evaluations.
- Implement human review processes so individuals can challenge AI-driven decisions.
- Provide reasonable accommodations for individuals with disabilities by offering accessible alternatives and human intervention as needed.
Potential Impact on Current Actions
This may open the door to a review of actions taken before the law was in place, effectively holding them to a standard adopted after the fact. Keep in mind that Michigan’s Elliott-Larsen Civil Rights Act imposes a three-year statute of limitations on discrimination claims. A law requiring disclosure of AI use could therefore compel disclosure about employment decisions made years earlier. Preparing now can protect your business against liability.
Practical Steps to Reduce Risk
- Conduct regular AI impact assessments and require bias audits from all vendors.
- Maintain an AI use register, documenting every AI system, its purpose, and any decisions linked to AI outputs.
- Update contracts and vendor agreements to specify compliance obligations and audit cooperation.
- Enhance transparency by clearly disclosing AI’s role in employment decisions and maintaining detailed records.
- Maintain human oversight for all key decisions and develop internal practices to track and document AI use.
- Update employee handbooks with AI policies and provide regular anti-discrimination training.
- Ensure full accessibility for disabled applicants and employees, including offering alternatives and clear notification of rights.
- Review insurance coverage: insurers are beginning to add AI-related exclusions, so examine your commercial general liability (CGL) policies, watch for new exclusions, and consider specialized AI, cyber, or errors-and-omissions (E&O) policies to close coverage gaps.
Real-World Example: Harper v. Sirius XM
A recent federal lawsuit in Michigan illustrates what's at stake. In Harper v. Sirius XM Radio, LLC, the plaintiff alleges systemic racial bias after AI-powered hiring tools rejected him for more than 150 positions, and he has moved for class certification. This blog takes no position on the merits of the claims, but the case shows how automated decision systems that lack transparency can expose businesses to class-action litigation under both federal and state civil rights laws.
Why Act Now?
If your company receives 100 employment applications and uses AI to process them, each application is a potential legal claim if the AI system produces biased or noncompliant decisions. This is already a risk under current law. With new regulations possibly taking effect in 2026, Michigan employers have a limited window to align policies and technology with evolving legal standards. Early compliance builds trust, reduces risk, and demonstrates good faith to employees and regulators.
If you are worried about your AI systems or need guidance on Michigan’s evolving laws, Oxbridge Legal Services can help you navigate this changing landscape.
Sources for this article:
H.B. 4667, 103d Leg., Reg. Sess. (Mich. 2025), https://legiscan.com/MI/bill/HB4667/2025.
H.B. 4668, 103d Leg., Reg. Sess. (Mich. 2025), https://www.michiganvotes.org/legislation/2025/house/bill-4668.
Harper v. Sirius XM Radio, LLC, No. 2:25-cv-12403 (E.D. Mich. 2025).
Michigan Civil Rights Commission, Resolution to Establish Guiding Principles for the Use of Artificial Intelligence in Michigan (Oct. 21, 2024), https://www.michigan.gov/mdcr/news/releases/2024/10/21/mcrc-passes-resolution-to-establish-guiding-principles-for-use-of-ai-in-m.
Future of Privacy Forum, The State of State AI: Legislative Approaches to AI in 2025 (Oct. 2025), https://fpf.org/blog/the-state-of-state-ai-legislative-approaches-to-ai-in-2025/, PDF: https://fpf.org/wp-content/uploads/2025/10/The-State-of-State-AI-2025.pdf.

