From Level 3 to Investable: Actuarial Science and Statistical Modeling in Life Settlements, According to Yaniv Bertele

Life settlements sit in a category that makes institutional investors simultaneously curious and cautious. The asset class can offer actuarially driven cash flows with limited linkage to traditional market cycles, yet it also carries the characteristics that investment committees treat as non‑negotiable diligence triggers: model reliance, long duration, and valuation sensitivity to unobservable inputs.

That is the practical reality of Level 3 classification under IFRS 13 and ASC 820. Unlike Level 1 assets, where prices are discoverable in active markets, or Level 2 assets, where observable inputs anchor valuation, life settlements demand an evidentiary framework capable of surviving the two questions that matter most to institutional allocators: How do you value it? and How do you prove it?

For Yaniv Bertele, founder of EverOak Innovations, that is precisely where actuarial science and statistical modeling move from “nice to have” to core infrastructure. “In life settlements, valuation isn’t something you reference; it’s something you build, defend, and continuously validate,” he says. “If you can’t explain the drivers and quantify the uncertainty, you don’t have a valuation method; you have a belief.”

Why the valuation problem is the investment problem

The modern life‑settlement portfolio is not a single bet on a single insured. It is a set of long‑dated, policy‑level cash flow streams where premium obligations must be managed, maturities unfold probabilistically, and returns depend materially on the quality of life expectancy (LE) estimation and servicing discipline.

Historically, the industry leaned heavily on specialist judgment: physicians and underwriting professionals reviewing attending physician statements, medical histories, and treatments to produce an LE. That expertise remains important, but it has also produced a long‑standing friction point for institutions: two qualified reviewers can look at the same file and reach different conclusions, and those differences can translate into meaningfully different projected IRRs.

“Dispersion is the enemy of scale,” Bertele says. “When the same case produces materially different life expectancies across providers, investors aren’t just seeing variability; they’re seeing a methodology that is difficult to reproduce and hard to audit.”

That is the opening for a more disciplined quantitative approach: not to eliminate judgment, but to standardize, document, and validate the full decision process.

Actuarial standards, but with statistical rigor

Actuarial practice in life settlements sits within established norms, including ASOP‑48, which frames how life expectancy estimates should be developed and supported. Yet ASOP‑48 allows legitimate variation in methodology and assumptions. In Bertele’s view, that flexibility makes governance and documentation even more important, because it increases the range of “defensible” approaches and, by extension, the dispersion institutions must underwrite.

A quantitative model, when designed responsibly, does two things that institutions value: it scales consistency and it forces explicit assumptions. Instead of relying primarily on qualitative weighting across dozens of health factors, statistical models can learn patterns across large case histories and quantify how specific features influence outcomes. In practice, that may include multivariate statistical methods that incorporate demographics, diagnostic patterns, medication regimens, lab values, treatment histories, and comorbidity interactions.

“The promise of modeling isn’t that you eliminate uncertainty,” Bertele says. “It’s that you measure it, put confidence intervals around it, and understand what moves the estimate, so an investment committee can underwrite the process rather than the personality behind it.”
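The idea of putting a confidence interval around an estimate rather than presenting a bare point value can be illustrated with a bootstrap sketch. All the numbers below are fabricated for illustration; the sample of realized maturities, in months, stands in for whatever calibration data a real platform would hold.

```python
import random
import statistics

random.seed(3)

# Fabricated sample of realized maturities, in months, used only to
# illustrate bootstrapping a confidence interval around a mean estimate.
observed = [48, 55, 61, 66, 72, 80, 84, 91, 97, 110, 118, 131]

def bootstrap_ci(data, n_boot=5_000, alpha=0.05):
    """Resample with replacement, recompute the mean each time, and
    take percentile bounds of the resampled means."""
    means = sorted(
        statistics.mean(random.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2))]
    return lo, hi

lo, hi = bootstrap_ci(observed)
print(f"mean LE: {statistics.mean(observed):.1f} months")
print(f"95% CI:  [{lo:.1f}, {hi:.1f}] months")
```

The width of the interval, not just the mean, is what a committee can underwrite: a tight band and a wide band imply very different position sizing for the same point estimate.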

He is careful, however, about where quantitative ambition can become counterproductive. Highly complex machine learning can improve predictive accuracy but fail the test that matters most in institutional settings: interpretability. Black‑box predictions may be impressive in a lab and unusable in a committee room.

“Institutional adoption requires explainability,” he says. “If the model can’t tell you why it reached a conclusion, you’re not building trust; you’re building dependence.”

The hard part: data, privacy, and reproducibility

Quantitative mortality assessment is only as strong as the data pipeline beneath it. Medical records are unstructured, inconsistent across providers, and often voluminous. Building a model therefore requires not only analytics but operational discipline: digitization workflows, normalization of medical information, consistent coding practices, and defensible handling of missingness.

It also requires stringent privacy controls. Life settlements involve sensitive medical information, and institutions expect asset managers to treat record handling as a first‑class operational risk. That means secure transmission, access controls, auditable logs, and vendor oversight that prevents privacy failures from becoming reputational crises.

“In longevity finance, the model is only as credible as the operational system behind it,” Bertele says. “Governance isn’t separate from analytics; it’s what makes analytics allocatable.”

Comorbidity and the limits of intuition

One reason Bertele emphasizes statistical frameworks is the non‑linear nature of health deterioration. Real‑world mortality risk is rarely additive. Individuals with multiple chronic conditions often experience interactions where combined conditions produce risk that is greater than the sum of individual impairments. Capturing that dynamic consistently is difficult for manual underwriting, especially at scale, because it demands systematic treatment of interactions and non‑linear relationships.
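A toy calculation makes the gap concrete. The mortality multipliers, baseline rate, and interaction loading below are invented assumptions, not underwriting figures; the point is only that a multiplicative-with-interaction view can diverge sharply from a naive additive one.

```python
# Hypothetical annual mortality multipliers for single impairments,
# relative to a standard-mortality baseline. Illustrative values only.
BASELINE_QX = 0.02  # assumed baseline annual mortality rate
MULTIPLIERS = {"diabetes": 1.6, "chf": 2.1, "ckd": 1.8}

def additive_qx(conditions):
    """Naive additive view: sum the excess risk of each impairment."""
    excess = sum(MULTIPLIERS[c] - 1.0 for c in conditions)
    return BASELINE_QX * (1.0 + excess)

def interacted_qx(conditions, interaction=1.25):
    """Multiplicative hazards plus an explicit extra loading when
    impairments co-occur (the interaction factor is an assumption)."""
    qx = BASELINE_QX
    for c in conditions:
        qx *= MULTIPLIERS[c]
    if len(conditions) > 1:
        qx *= interaction
    return qx

combo = ["diabetes", "chf", "ckd"]
print(f"additive:   {additive_qx(combo):.4f}")   # 0.0700
print(f"interacted: {interacted_qx(combo):.4f}")  # 0.1512
```

With these made-up inputs the interacted rate is more than double the additive one, which is exactly the kind of spread that is hard to produce consistently by qualitative weighting alone.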

“The human mind is good at pattern recognition, but it’s not good at weighting hundreds of interacting variables consistently across thousands of cases,” he notes. “That’s exactly where rigorous modeling adds value, when it’s explainable and continuously calibrated.”

Moving from single‑policy valuation to portfolio risk

Even the best policy‑level underwriting is only the beginning. Institutions ultimately allocate to portfolios, not files, and life settlement portfolios behave differently than traditional fixed income. Cash flows are irregular. Maturity timing is uncertain. Premium schedules must be funded. And portfolio outcomes depend on the distribution of maturities, not a single expected value.

That is why Bertele emphasizes simulation. Monte Carlo techniques, common in scientific modeling and increasingly standard in sophisticated financial risk management, allow managers to generate thousands of possible future paths across longevity outcomes and premium dynamics. Each path produces a full cash‑flow profile, enabling calculation of expected returns, variability, and tail outcomes.

“A deterministic model gives you one story; a simulation gives you a range of realities,” Bertele says. “It’s the difference between presenting a point estimate and presenting a risk distribution. Institutional investors live in distributions.”
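A minimal Monte Carlo sketch of that idea looks as follows. Every input here — face amounts, premiums, flat annual mortality rates, the discount rate, the 30-year horizon cap — is an invented assumption, and a real engine would use policy-level mortality curves rather than flat rates.

```python
import random
import statistics

random.seed(7)

PORTFOLIO = [
    # (face amount, annual premium, assumed flat annual mortality rate)
    (1_000_000, 40_000, 0.10),
    (  500_000, 25_000, 0.14),
    (2_000_000, 90_000, 0.08),
]
DISCOUNT = 0.06
HORIZON = 30  # paths surviving the horizon mature at year 30 (a simplification)

def simulate_path():
    """One possible future: draw a maturity year per policy, then
    discount the death benefit in and the premiums out."""
    npv = 0.0
    for face, premium, qx in PORTFOLIO:
        year = next((t for t in range(1, HORIZON + 1)
                     if random.random() < qx), HORIZON)
        npv += face / (1 + DISCOUNT) ** year
        npv -= sum(premium / (1 + DISCOUNT) ** t for t in range(1, year + 1))
    return npv

paths = sorted(simulate_path() for _ in range(10_000))
print(f"expected NPV: {statistics.mean(paths):,.0f}")
print(f"5th pct:      {paths[500]:,.0f}")
print(f"95th pct:     {paths[9500]:,.0f}")
```

The output is a distribution, not a number: an expected value plus the tails around it, which is the form in which committees actually evaluate risk.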

Correlation modeling becomes relevant here in ways many investors initially underestimate. A life settlement portfolio can be diversified across policy sizes, insured demographics, carrier mix, and health profiles, but certain exposures may still cluster. Concentrations in age bands, condition categories, or geographic regions can create correlated outcomes under specific stressors. Bertele points to the importance of modeling cohort‑level behavior rather than assuming independence by default, especially when rare events disproportionately affect vulnerable populations.

“Diversification in life settlements is real, but it has to be measured,” he says. “You’re not diversifying equities; you’re diversifying human longevity. That requires a different set of correlation questions.”

Liquidity planning is another portfolio‑level discipline. Unlike securities with standardized trading venues, individual policies do not offer frictionless secondary liquidity. Managers must forecast maturity distributions to plan premium funding, distributions, and capital deployment. Probabilistic cash‑flow projections built from policy‑level health profiles and historical maturity patterns create a more defensible liquidity framework than intuition or static schedules.

Why IRR is so sensitive, and why that changes governance

Life settlements are particularly sensitive to life expectancy assumptions. Small changes in LE can materially change projected IRR because premium obligations extend and the timing of the terminal cash flow shifts. In a long‑duration asset, compounding amplifies these differences.
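The mechanics can be made concrete with a single-policy sketch. Purchase price, premium, and face amount below are invented, and the IRR solver is a plain bisection; only the direction and rough magnitude of the effect matter here.

```python
# Illustrative single-policy inputs (assumptions, not market data).
PRICE = 250_000     # purchase price at t = 0
PREMIUM = 40_000    # annual premium until maturity
FACE = 1_000_000    # death benefit

def irr(cash_flows, lo=-0.5, hi=2.0, tol=1e-7):
    """Bisection solve for the rate that zeroes the NPV
    (valid here because the cash flows change sign exactly once)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def policy_irr(le_years):
    """Pay the price at t=0, premiums each year until maturity,
    receive the face amount in the assumed LE year."""
    flows = [-PRICE] + [-PREMIUM] * (le_years - 1) + [FACE - PREMIUM]
    return irr(flows)

for le in (6, 8, 10):
    print(f"LE {le} yrs -> IRR {policy_irr(le):.1%}")
```

With these made-up inputs, stretching the LE assumption from six to ten years cuts the projected IRR by roughly two thirds: the terminal cash flow arrives later while the premium drag extends, and compounding amplifies both effects.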

For Bertele, that sensitivity is exactly why valuation methodology is not a back‑office topic-it is the investment thesis. The tighter and more defensible the confidence band around LE assumptions, the more credible the portfolio’s expected performance range becomes.

“If your core input is unstable, your IRR is unstable,” he says. “So the question becomes: are we managing the asset, or are we managing a spreadsheet?”

Explainable AI as the new institutional baseline

As allocators become more comfortable with quantitative underwriting, their expectations evolve. They no longer ask only, “Do you have a model?” They ask, “Can we understand it, monitor it, and challenge it?”

Bertele describes the institutional baseline as a combination of interpretability and governance: feature importance, sensitivity analysis, documented validation protocols, and ongoing performance monitoring against realized maturities. The method must be readable, not merely predictive.
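The simplest form of the sensitivity analysis mentioned above is one-at-a-time perturbation: bump each input and report how much the output moves. The toy linear LE model and its coefficients below are invented for illustration; a real model would be richer, but the readout — which features move the estimate, and by how much — is the same.

```python
# Toy linear LE model: months of life expectancy as a base value plus
# per-feature adjustments. All coefficients are illustrative assumptions.
BASE_LE_MONTHS = 120.0
COEFFS = {
    "age_over_70": -6.0,          # months per year of age above 70
    "smoker": -18.0,              # months if the insured smokes
    "num_comorbidities": -9.0,    # months per chronic condition
}

def estimate_le(features):
    le = BASE_LE_MONTHS
    for name, value in features.items():
        le += COEFFS[name] * value
    return max(le, 0.0)

case = {"age_over_70": 5, "smoker": 1, "num_comorbidities": 2}

base = estimate_le(case)
print(f"estimated LE: {base:.0f} months")
for name in case:
    bumped = dict(case)
    bumped[name] += 1
    delta = estimate_le(bumped) - base
    print(f"  +1 {name}: {delta:+.0f} months")
```

A committee reading that output can see directly why the model reached its conclusion and which inputs deserve challenge, which is the "readable, not merely predictive" standard in miniature.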

“A model that performs well but can’t be explained is not institutional,” he says. “Institutions don’t outsource fiduciary responsibility to a black box. They require documentation, controls, and the ability to audit how decisions are made.”

He also views regulatory direction as reinforcing this norm. Even without naming specific statutes, the momentum across major jurisdictions is toward greater model accountability, disclosure expectations, and governance discipline in financial decisioning systems. Managers who build explainability and validation into their process early will find themselves aligned with where due diligence is already heading.

Academic rigor, third‑party validation, and credibility at scale

One of the long‑term challenges in life settlements has been a credibility gap: investors know the asset exists, but many have lacked confidence in how returns are underwritten and how risks are controlled. Bertele believes the industry’s next phase depends on a higher level of external validation: peer engagement, actuarial review, benchmarking, and transparent reporting that allows allocators to compare managers on process, not marketing.

Independent validation from actuarial consulting firms can play a role by testing data quality, methodology, calibration practices, and documentation. Industry benchmarking, when structured responsibly, can help managers demonstrate that assumptions are within rational ranges and that deviations are explainable rather than accidental.

“Institutions don’t need certainty; they need defensible process,” Bertele says. “When you combine statistical rigor, explainable frameworks, and third‑party verification, you turn a Level 3 asset from ‘interesting’ into allocatable.”

The broader implication for allocators

The life settlements market rewards a blend of skills: actuarial knowledge, statistical competence, operational controls, and the ability to communicate risk in the language of capital markets. Bertele’s view is that team composition should therefore be evaluated alongside track record. Traditional insurance veterans may bring sourcing and domain familiarity; cross‑disciplinary teams may bring stronger modeling and governance infrastructure. The strongest platforms integrate both, ensuring that underwriting sophistication is matched by institutional controls.

In Bertele’s framing, the evolution of life settlements is less about turning a niche into a trend and more about making the asset class professionally legible. “The future of this market is transparency you can audit,” he says. “When valuation is reproducible and risk is modeled like a portfolio, not a pile of files, institutions can participate with confidence.”


Zeeshan
