Overreach by Analogy: My Take on the NAIC Model Bulletin on Artificial Intelligence in Insurance

In December 2023 the National Association of Insurance Commissioners adopted its Model Bulletin, ‘Use of Artificial Intelligence Systems by Insurers’, a document that has since been adopted in full or substantially similar form by approximately twenty-four states and the District of Columbia (Holland and Knight 2025; Quarles 2024). On its face, the Bulletin reaffirms an unobjectionable principle: insurers must comply with existing law regardless of the computational tools they employ. In practice, however, the Bulletin layers an expansive new compliance architecture, the so-called AIS Program, atop a body of statutory law that already prohibits the very harms the Bulletin claims to address. This essay argues that the Bulletin, while well-intentioned, suffers from three structural defects: a definition of “AI System” so capacious that it captures ordinary actuarial practice, a process-oriented compliance regime that imposes asymmetric costs on technologically sophisticated tools without a parallel regime for traditional methods, and an evidentiary posture that effectively reverses the burden of proof in market conduct examinations. The cumulative effect is to disadvantage precisely the analytical techniques most likely to improve underwriting accuracy and consumer welfare.

When Everything Is AI, the Word Loses Meaning

The Bulletin defines an “AI System” as “a machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, content (such as text, images, videos, or sounds), or other output influencing decisions made in real or virtual environments” (NAIC 2023, sec. 2). It separately defines a “Predictive Model” as “the mining of historic data using algorithms and/or machine learning to identify patterns and predict outcomes” (NAIC 2023, sec. 2). These definitions are not coextensive with what scholars and engineers ordinarily mean by “artificial intelligence.” A generalized linear model fitted in 1995 to predict auto loss frequency, a longstanding industry workhorse, satisfies both definitions. So does virtually every actuarial pricing exercise conducted since the widespread adoption of generalized linear models in the late twentieth century (Huang and Xin 2023).
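To see how little it takes to trip these definitions, consider a minimal sketch, in Python with statsmodels, of the kind of Poisson frequency GLM actuaries have fitted for decades. The data and variable names are invented for illustration; the point is only that the fitted model generates “predictions . . . influencing decisions,” which is all the definition requires.

```python
# A minimal sketch: an ordinary Poisson GLM for auto claim frequency.
# All data and variable names are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical rating variables: driver age and annual mileage (in thousands).
driver_age = rng.uniform(18, 80, n)
annual_mileage = rng.uniform(2, 30, n)

# Simulated claim counts drawn from a known frequency process.
true_rate = np.exp(-2.0 - 0.01 * driver_age + 0.03 * annual_mileage)
claims = rng.poisson(true_rate)

# The late-twentieth-century workhorse: a Poisson GLM with a log link.
X = sm.add_constant(np.column_stack([driver_age, annual_mileage]))
glm = sm.GLM(claims, X, family=sm.families.Poisson()).fit()

# "Outputs such as predictions ... influencing decisions": per-policy
# predicted claim frequencies that feed directly into rating.
predicted_frequency = glm.predict(X)
print(glm.params)
print(predicted_frequency[:5])
```

Under the Bulletin’s text, this model is simultaneously an “AI System” and a “Predictive Model,” though it contains nothing an actuary of 1995 would not recognize.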

Nor is the example contrived. The actuarial literature has noted for decades that statistical pricing models are, by their nature, predictive models that mine historic data using algorithms (Frees 2009). To classify such longstanding methods as “AI Systems” subject to a parallel governance regime is to impose new burdens on what regulators already supervise through rate filings, actuarial opinion requirements, and the Property and Casualty Model Rating Law (NAIC Model #1780). The American Academy of Actuaries has similarly cautioned that algorithmic methods exist on a continuum with traditional actuarial techniques rather than as a categorically distinct class (American Academy of Actuaries 2021). When a regulatory definition sweeps so broadly that it encompasses the field’s foundational tools, it ceases to function as a meaningful classification and instead operates as a roving compliance trigger.

The Bulletin’s drafters appear to have recognized this difficulty, writing that “controls and processes that an Insurer adopts and implements as part of its AIS Program should be reflective of, and commensurate with, the Insurer’s own assessment of the degree and nature of risk” (NAIC 2023, sec. 3). The proportionality principle is welcome in theory, but it is unenforceable in practice. An insurer that deems its generalized linear model low risk, and therefore declines to subject it to a full AIS Program, has no safe harbor against a market conduct examiner who assesses the risk differently. The discretion lies entirely with the regulator (Buchanan Ingersoll and Rooney 2025).

Penalizing Sophistication

Another serious defect is that the Bulletin imposes process burdens on AI-driven decisions that have no counterpart for human or traditional decisions. An underwriter who declines an applicant on the basis of an unstructured judgment about credit history is subject to the Unfair Trade Practices Act (NAIC Model #880) and to whatever rate filing requirements the state imposes. An underwriter who reaches the same decision through a machine learning model is subject to all of the foregoing and, in addition, to a documented governance framework, validation testing, model drift monitoring, third-party diligence, internal audit functions, and an evidentiary record sufficient to satisfy the documentation expectations of Section 4 of the Bulletin (NAIC 2023, sec. 4).

The empirical evidence suggests that well-designed algorithmic methods are more consistent and less subject to idiosyncratic bias than human decision-making (Kleinberg et al. 2018). Cognitive psychology has documented for half a century that human judgment in applied settings is noisy, anchored on irrelevant features, and prone to systematic error (Kahneman, Sibony, and Sunstein 2021). To impose heavier procedural burdens on the more consistent decision-making technology is to invert the relationship between risk and regulation. The principle of technology-neutral regulation, long advocated in administrative law scholarship, holds that regulation should target outcomes rather than the technical means by which outcomes are produced (Reed 2007). The Bulletin’s structure violates this principle in spirit if not in letter, since it explicitly conditions extensive process obligations on the use of a particular class of computational technique.

These costs are not theoretical. Compliance with the AIS Program requires carriers to maintain model inventories, conduct bias analyses, document data lineage, validate generalization performance, monitor model drift, audit third-party vendors, and produce on demand the entire developmental history of any predictive model. Smaller insurers and new entrants, who lack the compliance infrastructure of national carriers, face proportionally larger cost increases (Geneva Association 2023). The predictable consequence is reduced competition, slower innovation, and a market in which the regulatory cost of deploying improved analytics may exceed the consumer welfare gains from doing so.
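To give a sense of what the inventory obligation alone entails, the sketch below shows one plausible shape for a single model-inventory record. The fields are my own extrapolation from the Section 4 documentation list, not a schema the Bulletin prescribes, and every identifier in it is hypothetical.

```python
# Illustrative only: one plausible shape for a model-inventory record,
# extrapolated from the Bulletin's Section 4 documentation expectations.
# This is not a prescribed or official schema.
from dataclasses import dataclass, field

@dataclass
class ModelInventoryRecord:
    model_id: str                   # internal identifier
    business_use: str               # e.g., "PPA pricing", "claims triage"
    owner: str                      # accountable business unit
    data_lineage: list[str]         # sources feeding the training data
    validation_reports: list[str]   # generalization / out-of-sample testing
    bias_analyses: list[str]        # unfair-discrimination testing artifacts
    drift_monitoring: str           # how and how often drift is checked
    third_party_vendor: str | None  # vendor diligence, if externally built
    audit_history: list[str] = field(default_factory=list)

# One record per model -- and the Bulletin's definitions make nearly every
# pricing model in the shop a "model" for this purpose.
record = ModelInventoryRecord(
    model_id="freq-glm-001",
    business_use="private passenger auto frequency",
    owner="personal lines actuarial",
    data_lineage=["policy admin extract", "claims warehouse"],
    validation_reports=["holdout-2024.pdf"],
    bias_analyses=["proxy-disparity-2024.pdf"],
    drift_monitoring="quarterly population-stability review",
    third_party_vendor=None,
)
print(record.model_id, record.business_use)
```

Multiply one such record by every pricing, underwriting, and claims model a carrier runs, keep each field current and examination-ready, and the fixed cost that falls hardest on smaller insurers comes into focus.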

Borrowing a Doctrine That Was Never Adopted

The Bulletin repeatedly invokes the language of “unfair discrimination” and directs insurers to test for “bias” and “unfair discrimination in the insurance practices resulting from the use of the Predictive Model” (NAIC 2023, sec. 2.4). The text never defines what unfair discrimination means in the AI context, and this omission is consequential. The Unfair Trade Practices Act (NAIC Model #880), the foundational source of authority cited by the Bulletin, prohibits discrimination between individuals of the same class and equal expectation of life or risk. It is, in its traditional reading, a disparate treatment statute. It does not, by its terms, prohibit facially neutral practices that produce demographic disparities, which is the disparate impact framework imported from federal employment and housing law (American Bar Association 2024).

Several states, most prominently Colorado under SB 21-169 and its implementing regulations, have moved aggressively to require quantitative disparate-impact testing for life insurers and, beginning in October 2025, for private passenger automobile and health insurance as well (Buchanan Ingersoll and Rooney 2025; Grant Thornton 2023). New York’s Department of Financial Services Circular Letter 2024-7 takes a similar approach (New York Department of Financial Services 2024). The NAIC Bulletin uses the rhetoric of disparate impact while declining to specify which standard applies, leaving insurers to guess at their compliance obligations and inviting precisely the regulatory patchwork that uniform model guidance is supposed to prevent (Fenwick 2026).

There is a deeper conceptual problem: risk classification itself. The actuarial concept of fairness, traceable to neoclassical economists including Kenneth Arrow, holds that policyholders bearing the same expected loss should pay the same premium (Heras, Pradier, and Teira 2020; Meyers and Van Hoyweghen 2018). To require that models produce demographically equal outcomes regardless of risk is to require either that risk classification be abandoned or that the costs of higher-risk groups be subsidized by lower-risk groups, a result that is economically equivalent to a tax (Huang and Xin 2023). Reasonable people may favor such redistribution as a matter of social policy. What is objectionable is to impose it through opaque examiner discretion under a Model Bulletin rather than through legislation that openly acknowledges the trade-off.
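The arithmetic of that subsidy is worth making explicit. The following stylized two-group computation is my own illustration, not drawn from any cited source:

```latex
% Stylized two-group illustration (illustrative numbers only).
% Actuarial fairness: each policyholder pays his or her expected loss,
% \pi_i = \mathbb{E}[L_i]. Suppose equal-sized groups with expected
% losses \mathbb{E}[L_A] = 100 and \mathbb{E}[L_B] = 150. An
% equal-outcomes constraint forces a common premium:
\begin{align*}
  \bar{\pi} &= \tfrac{1}{2}\bigl(\mathbb{E}[L_A] + \mathbb{E}[L_B]\bigr)
             = \tfrac{1}{2}(100 + 150) = 125,\\
  t_A &= \bar{\pi} - \mathbb{E}[L_A] = 25, \qquad
  t_B = \bar{\pi} - \mathbb{E}[L_B] = -25.
\end{align*}
% Group A pays 25 above its expected loss; group B pays 25 below its own.
% The constraint leaves risk untouched and layers a per-policy transfer
% on the actuarially fair premium: a tax on A funding a subsidy to B.
```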

Compounding the difficulty, recent technical research has shown that proxy-based race inference, the very mechanism regulators ask insurers to deploy in the absence of self-reported demographics, can systematically distort fairness audits, sometimes overestimating and sometimes underestimating actual disparities (Chen et al. 2026). Insurers asked to test for disparate impact may therefore be required to use methods that are themselves measurement instruments of contested validity.
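A small simulation makes the measurement problem concrete. The sketch below is my own toy illustration, not the methodology of Chen et al. (2026): it compares a true approval-rate disparity with the disparity an auditor would estimate from a noisy proxy for group membership.

```python
# Toy illustration (not the Chen et al. methodology): how misclassified
# proxy labels distort an estimated approval-rate disparity.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# True (unobserved) group membership: 30% group A, 70% group B.
group_a = rng.random(n) < 0.30

# True approval rates differ by group: 60% for A, 75% for B.
approved = rng.random(n) < np.where(group_a, 0.60, 0.75)

# A proxy (e.g., name- or geography-based inference) that misclassifies
# 20% of each group; the error rate is an assumption for illustration.
proxy_a = group_a ^ (rng.random(n) < 0.20)

def disparity_ratio(labels_a: np.ndarray) -> float:
    """Approval-rate ratio: labeled group A versus everyone else."""
    return approved[labels_a].mean() / approved[~labels_a].mean()

print(f"true disparity ratio:  {disparity_ratio(group_a):.3f}")   # about 0.80
print(f"proxy disparity ratio: {disparity_ratio(proxy_a):.3f}")   # about 0.89
```

In this configuration the proxy attenuates the measured disparity toward parity (roughly 0.89 against a true 0.80); under other error structures the bias runs the other way. An audit built on such an instrument can understate or overstate the very disparity it is supposed to detect.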

Section 4 and the Reversal of Burden

Section 4 of the Bulletin contains an extensive list of materials that an insurer “can expect” to produce in any market conduct action concerning AI: the written AIS Program itself, governance documentation, training materials, model inventories, data lineage records, third-party contracts, audit reports, and validation studies (NAIC 2023, sec. 4). The Bulletin asserts that this list is non-prescriptive and that insurers may demonstrate compliance through alternative means. As a matter of regulatory practice, however, an insurer that cannot produce these materials on demand is in a substantially weaker examination posture than one that can, regardless of whether its decisions actually violated the underlying statutes.

This is a serious reversal of the customary burden of proof. The Unfair Trade Practices Act requires the regulator to establish a violation. The Bulletin’s documentation expectations effectively require the insurer to maintain, in advance, the evidentiary apparatus needed to disprove a violation that has not yet been alleged. Administrative law scholars have long noted that documentation regimes of this kind function as a form of de facto licensing, raising the cost of conduct that remains nominally lawful (Sunstein 2013). The Bulletin imposes such a regime on the use of advanced analytics without a corresponding statutory mandate.

A Defensible Framework

None of the foregoing is to deny that AI systems can produce inaccurate, opaque, or discriminatory outcomes, or that some regulatory response is warranted. My argument is that the NAIC’s chosen approach is poorly calibrated to the underlying risks. A more defensible framework would define AI Systems narrowly enough to exclude conventional actuarial methods already supervised under existing rate filing regimes; condition heightened process obligations on the demonstrated risk of a particular use case rather than on the technological category to which it belongs; distinguish clearly between disparate treatment, which existing unfair trade practice law already prohibits, and disparate impact, which is a policy choice that should be made by legislatures rather than imported through bulletin guidance; and replace open-ended documentation expectations with specific, enumerated requirements that give insurers an ascertainable safe harbor.

The NAIC’s Model Bulletin reflects a regulatory impulse familiar from every period of rapid technological change: the conviction that novel tools require novel oversight. The impulse is understandable, and in some respects it may be correct, but the Bulletin imposes asymmetric costs on the analytical methods most likely to improve accuracy and reduce idiosyncratic bias, leaves core terms undefined in ways that invite inconsistent enforcement, and shifts evidentiary burdens onto insurers without statutory warrant. State insurance departments adopting the Bulletin should narrow the definitional scope, clarify the meaning of unfair discrimination, and provide enumerated safe harbors. Failing that, insurers will face precisely the regulatory friction that consumers, who ultimately pay through reduced competition and higher premiums, can least afford. MGAs, producers, underwriting intermediaries, carriers . . . all of us should expect better.

C. Constantin Poindexter, MA, JD, CPCU, AFSB, ASLI, ARe, AINS, AIS, CPLP

Bibliography

  • American Academy of Actuaries. 2021. Big Data and Algorithms in Actuarial Modeling and Consumer Impacts. Washington, DC: American Academy of Actuaries.
  • American Bar Association. 2024. “Regulation by the EEOC and the States of Algorithmic Bias in High Risk Use Cases.” The Business Lawyer 80 (Winter 2024–2025).
  • Buchanan Ingersoll and Rooney PC. 2025. “When Algorithms Underwrite: Insurance Regulators Demanding Explainable AI Systems.” Client Alert, October 9, 2025.
  • Chen, Jiahao, et al. 2026. “How Proxy Race Distorts Regression Based Fairness Audits.” Working paper, arXiv preprint 2603.17106.
  • Fenwick. 2026. “Tracking the Evolution of AI Insurance Regulation.” Fenwick and West LLP Insights, February 4, 2026.
  • Frees, Edward W. 2009. Regression Modeling with Actuarial and Financial Applications. Cambridge: Cambridge University Press.
  • Geneva Association. 2023. Promoting Responsible Artificial Intelligence in Insurance. Zurich: The Geneva Association.
  • Grant Thornton. 2023. “Model Bias Rules Target Insurance Practices.” Grant Thornton Insights, August 3, 2023.
  • Heras, Antonio J., Pierre Charles Pradier, and David Teira. 2020. “What Was Fair in Actuarial Fairness?” History of the Human Sciences 33 (2): 91–114.
  • Holland and Knight. 2025. “The Implications and Scope of the NAIC Model Bulletin on the Use of AI by Insurers.” Holland and Knight Insights, May 20, 2025.
  • Huang, Fei, and Xi Xin. 2023. “Antidiscrimination Insurance Pricing: Regulations, Fairness Criteria, and Models.” North American Actuarial Journal 28 (2): 285–319.
  • Kahneman, Daniel, Olivier Sibony, and Cass R. Sunstein. 2021. Noise: A Flaw in Human Judgment. New York: Little, Brown.
  • Kleinberg, Jon, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. 2018. “Human Decisions and Machine Predictions.” Quarterly Journal of Economics 133 (1): 237–293.
  • Meyers, Gert, and Ine Van Hoyweghen. 2018. “Enacting Actuarial Fairness in Insurance: From Fair Discrimination to Behaviour Based Fairness.” Science as Culture 27 (4): 413–438.
  • National Association of Insurance Commissioners (NAIC). 2023. Model Bulletin: Use of Artificial Intelligence Systems by Insurers. Adopted December 4, 2023. Kansas City, MO: NAIC.
  • New York Department of Financial Services. 2024. Insurance Circular Letter No. 7 (2024): Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing. Albany, NY: NYDFS.
  • Quarles. 2024. “Nearly Half of States Have Now Adopted NAIC Model Bulletin on Insurers’ Use of AI.” Quarles and Brady LLP, June 6, 2024.
  • Reed, Chris. 2007. “Taking Sides on Technology Neutrality.” SCRIPTed 4 (3): 263–284.
  • Sunstein, Cass R. 2013. Simpler: The Future of Government. New York: Simon and Schuster.