1. Where do the thresholds come from?

Every threshold in CIMCalc has a documented provenance — the specific source that justifies why a particular value triggers a KILL, FLAG, CONCERN, or CLEAR verdict. We categorize thresholds into three tiers based on their source:

| Category | Source | Examples |
| --- | --- | --- |
| A — Academic | Published, peer-reviewed research with original coefficients | Beneish M-Score, Altman Z-Score, Dechow F-Score, Piotroski F-Score, Sloan Accrual Ratio |
| B — Industry | Damodaran (NYU Stern), ReadyRatios, CFA Institute curriculum | Liquidity ratios, debt service coverage, revenue retention metrics |
| C — CIMCalc | Calibrated from pipeline analysis against benchmark companies | Operational efficiency, margin analysis, earnings quality, growth metrics |

Threshold changes are version-controlled. Regression tests verify that threshold changes do not silently alter benchmark verdicts — if a change flips a verdict on a known company, the test fails and requires explicit review.

No hardcoded values. All thresholds live in versioned YAML configuration files, never in application code. This is an architectural invariant enforced by automated tests.
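As a minimal sketch of what "thresholds in config, not code" means in practice — the metric name, band values, and config shape below are illustrative assumptions (the real files are YAML), but the verdict bands match the product's vocabulary:

```python
# Illustrative only: in production this dict would be loaded from a
# versioned YAML file; the metric and numbers here are assumptions.
THRESHOLDS = {
    "debt_to_ebitda": {"kill": 6.0, "flag": 4.5, "concern": 3.5},  # higher is worse
}

def verdict(metric: str, value: float) -> str:
    """Map a metric value to a verdict band using config data, not code."""
    bands = THRESHOLDS[metric]
    if value >= bands["kill"]:
        return "KILL"
    if value >= bands["flag"]:
        return "FLAG"
    if value >= bands["concern"]:
        return "CONCERN"
    return "CLEAR"
```

Because the evaluation logic only reads the config, changing a band value never requires touching application code — which is what makes the invariant testable.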

2. Which academic models do you use?

Five calculators use coefficients and cutoffs directly from peer-reviewed research. We do not modify these coefficients — the published values are used as-is.

Beneish M-Score

Detects earnings manipulation using 8 financial ratios (DSRI, GMI, AQI, SGI, DEPI, SGAI, TATA, LVGI). The composite score indicates the probability that a company is manipulating its reported earnings.

Beneish, M.D. (1999). "The Detection of Earnings Manipulation." Financial Analysts Journal, 55(5), 24–36.
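The eight-variable model can be sketched directly from the paper's published coefficients; the −1.78 cutoff is one commonly cited decision threshold (−2.22 also appears in the literature):

```python
# Beneish (1999) eight-variable M-Score with the published coefficients.
def m_score(dsri, gmi, aqi, sgi, depi, sgai, tata, lvgi):
    return (-4.84 + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi
            + 0.892 * sgi + 0.115 * depi - 0.172 * sgai
            + 4.679 * tata - 0.327 * lvgi)

# Commonly cited cutoff: scores above -1.78 suggest likely manipulation.
def likely_manipulator(score, cutoff=-1.78):
    return score > cutoff
```

A "neutral" company (all index ratios at 1.0, zero total accruals) scores about −2.48, comfortably below the cutoff.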

Altman Z″-Score (Private Company Variant)

Predicts bankruptcy probability using the Z″ (double-prime) variant, which does not require market capitalization — making it applicable to private companies and CIM targets.

Altman, E.I. (1968). "Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy." The Journal of Finance, 23(4), 589–609. Revised for private companies (1983, 2002).
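The Z″ variant drops the market-capitalization term, so it needs only balance-sheet and income-statement inputs. A sketch, using the published coefficients and the commonly cited zone boundaries:

```python
def z_double_prime(working_capital, retained_earnings, ebit,
                   book_equity, total_assets, total_liabilities):
    """Altman Z''-score (private-firm variant, no market cap required)."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = book_equity / total_liabilities   # book value replaces market value
    return 6.56 * x1 + 3.26 * x2 + 6.72 * x3 + 1.05 * x4

# Commonly cited zones: > 2.6 "safe", 1.1-2.6 "grey", < 1.1 "distress".
```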

Dechow F-Score

Logistic regression model estimating the probability of financial misstatement, based on characteristics common to SEC enforcement actions.

Dechow, P.M., Ge, W., Larson, C.R., & Sloan, R.G. (2011). "Predicting Material Accounting Misstatements." Contemporary Accounting Research, 28(1), 17–82.
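The F-Score mechanics can be sketched as follows. The model's linear predictor is taken as an input here (the full coefficient set is in the paper); the key step is scaling the logistic probability by the unconditional misstatement rate in the estimation sample (roughly 0.0037), so F = 1 means average risk and F > 1 means elevated risk:

```python
import math

# Sketch of the F-score scaling from Dechow et al. (2011): a logistic
# model yields a misstatement probability, which is divided by the
# sample's base rate of misstatement (~0.0037 in the paper).
def f_score(linear_predictor, base_rate=0.0037):
    prob = math.exp(linear_predictor) / (1.0 + math.exp(linear_predictor))
    return prob / base_rate
```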

Piotroski F-Score

A 9-point composite score measuring whether a company's financial position is strengthening or deteriorating. Tests across three categories: profitability, leverage, and efficiency.

Piotroski, J.D. (2000). "Value Investing: The Use of Historical Financial Statement Information to Separate Winners from Losers." Journal of Accounting Research, 38, 1–41.
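The nine binary signals can be sketched as one point each across the paper's three categories; the dict keys below are illustrative field names, not CIMCalc's internal schema:

```python
# Piotroski (2000) F-Score: nine binary signals, one point each.
def piotroski_f(cur, prev):
    """cur/prev: dicts of financials for the current and prior year."""
    pts = 0
    # Profitability (4 points)
    pts += cur["roa"] > 0
    pts += cur["cfo"] > 0
    pts += cur["roa"] > prev["roa"]
    pts += cur["cfo"] > cur["net_income"]          # accruals signal
    # Leverage, liquidity, source of funds (3 points)
    pts += cur["leverage"] < prev["leverage"]      # long-term debt / assets fell
    pts += cur["current_ratio"] > prev["current_ratio"]
    pts += cur["shares_out"] <= prev["shares_out"] # no new equity issued
    # Operating efficiency (2 points)
    pts += cur["gross_margin"] > prev["gross_margin"]
    pts += cur["asset_turnover"] > prev["asset_turnover"]
    return int(pts)   # 0 (deteriorating) to 9 (strengthening)
```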

Sloan Accrual Ratio

Measures the proportion of earnings attributable to accruals versus cash. High-accrual earnings are less persistent and more likely to reverse.

Sloan, R.G. (1996). "Do Stock Prices Fully Reflect Information in Accruals and Cash Flows?" The Accounting Review, 71(3), 289–315.
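A minimal sketch of the ratio, using the cash-flow form (net income minus operating cash flow, scaled by average assets) — a common simplification of the paper's original balance-sheet definition of accruals:

```python
# Sloan (1996) accrual ratio, cash-flow-statement form: accruals are the
# gap between reported earnings and operating cash flow, scaled by assets.
def sloan_accrual_ratio(net_income, cfo, avg_total_assets):
    return (net_income - cfo) / avg_total_assets
```

A high positive ratio means earnings are mostly accruals rather than cash, which the paper shows are less persistent.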

These are not our models. We implement the published formulas with their original coefficients and cutoffs. The academic community has validated these models across thousands of companies and decades of data. Our contribution is applying them systematically in the context of a CIM screen.

3. Where do industry benchmarks come from?

Five metrics use thresholds sourced from established industry reference data rather than peer-reviewed models:

| Metric Area | Source | Why This Source |
| --- | --- | --- |
| Liquidity ratios | Damodaran (NYU Stern) industry averages; CFA Institute curriculum | The most widely referenced free dataset of industry-level financial ratios, updated annually across US equities |
| Debt service metrics | Lending industry standard; CFA fixed income curriculum | Universal lender requirements — standardized thresholds across commercial lending |
| Revenue retention metrics | SaaS industry benchmarks (public company disclosures); ReadyRatios | Widely reported in S-1 filings, the standard metrics for recurring revenue businesses |

About Damodaran's datasets

Professor Aswath Damodaran at NYU Stern maintains the most comprehensive publicly available dataset of financial metrics by industry, updated annually from US equity filings. His data covers current ratios, margins, leverage, returns on capital, and dozens of other ratios across 90+ industry groups. CIMCalc uses these industry averages as reference points for calibrating thresholds on financial health metrics.

Damodaran, A. "Data: Current." NYU Stern School of Business. pages.stern.nyu.edu/~adamodar

4. What about CIMCalc-calibrated thresholds?

The majority of CIMCalc's thresholds are calibrated through our own pipeline analysis. These cover metrics where no single authoritative academic cutoff exists — margins, cash flow conversion, operating efficiency, customer dynamics, and dozens more.

The calibration process works like this:

Step 1: Start with professional consensus. For each metric, we review the ranges that PE professionals, lenders, and financial analysts use in practice.

Step 2: Run against benchmark companies. We process the threshold through our benchmark suite — real companies across CIM documents and SEC 10-K filings — and verify that the threshold produces verdicts consistent with what a competent analyst would conclude.

Step 3: Adjust per intent. Default thresholds are calibrated for a conservative acquisition buyer. Each acquisition intent then overrides specific thresholds where the risk tolerance differs. For example, a distressed turnaround intent tolerates much higher leverage than a yield acquisition intent because high leverage is expected, not disqualifying, in a turnaround thesis.

Step 4: Regression-test. Every threshold change is tested against our regression suite. If a change flips a verdict on a known benchmark, the test fails and requires explicit review before merge.
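Step 4 can be sketched as a simple invariant check — the benchmark ids and pipeline callable below are illustrative assumptions, but the behavior (a flipped verdict fails the suite) is exactly what the text describes:

```python
# Hypothetical regression check: a threshold change must not silently
# flip a frozen benchmark verdict. Ids and values are illustrative.
FROZEN_VERDICTS = {"benchmark_co_a": "CLEAR", "benchmark_co_b": "KILL"}

def check_regression(run_pipeline):
    """run_pipeline maps a benchmark id to the verdict the pipeline emits."""
    flips = {bid: (expected, run_pipeline(bid))
             for bid, expected in FROZEN_VERDICTS.items()
             if run_pipeline(bid) != expected}
    if flips:
        # A flipped verdict fails the suite and forces explicit review.
        raise AssertionError(f"verdict flips need review: {flips}")
```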

Transparency note: These thresholds are the most judgment-dependent part of the system and represent our core intellectual property. We document every override and its rationale internally. As more real-world CIMs flow through the system, we calibrate these thresholds empirically — the goal is to converge toward thresholds that match what experienced PE professionals would flag independently.

5. Why do different buyers get different verdicts?

This is the core of CIMCalc. The same financial data means different things depending on what you're trying to buy.

Example: A CIM target shows high debt-to-EBITDA and negative operating cash flow. For a yield buyer, these are immediate kills — the cash flow you're buying is consumed by debt service. For a distressed turnaround buyer, these numbers are the thesis — they're why the asset is cheap. Same data, different verdict.

CIMCalc supports multiple canonical acquisition intents, each with its own threshold overrides:

| Intent | What It Looks For | How Thresholds Differ |
| --- | --- | --- |
| Yield / Cash Cow | Stable cash flows, low leverage, predictable revenue | Tightest leverage and cash flow thresholds; strictest on customer concentration |
| Growth Expansion | Revenue growth, market opportunity, scalable model | Relaxed on current profitability; stricter on growth deceleration and unit economics |
| Distressed Turnaround | Discount to asset value, restructuring opportunity | Significantly higher leverage tolerance; accepts negative cash flow; focuses on asset coverage |
| Strategic IP / Technology | IP portfolio, patents, technology assets | De-emphasizes operational metrics; elevates technology risk domain |
| Platform / Roll-up | Fragmented market, add-on potential, infrastructure | Evaluates operational scalability and integration complexity |
| Founder Transition | Stable business with owner-dependency risk | Elevates key-person and customer-relationship risks |
| Operational Improvement | Margin expansion opportunity, cost optimization | Tolerates lower current margins; stricter on revenue stability |

How intents are assigned

You describe your acquisition thesis in plain English. The system decomposes your thesis into a weighted blend of these 7 intents. For example, "I want a stable business I can hold for 10 years with reliable distributions" might decompose to 75% Yield / Cash Cow, 15% Founder Transition, and 10% Operational Improvement.

Each intent activates its own threshold overrides. The weighted blend determines which risk domains are prioritized and what "bad" means for your specific thesis. Only values that differ from the defaults are overridden — everything else inherits from the base threshold set.
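The inheritance rule can be sketched as a dictionary merge — the base values and per-intent overrides below are illustrative assumptions, but the mechanism (each intent carries only the thresholds it changes, everything else falls through to the base set) is as described above:

```python
# Illustrative numbers only: base thresholds and per-intent overrides
# are assumptions; the merge rule is the point.
BASE = {"debt_to_ebitda_kill": 4.5, "operating_cf_margin_kill": 0.0}
INTENT_OVERRIDES = {
    "distressed_turnaround": {"debt_to_ebitda_kill": 8.0,
                              "operating_cf_margin_kill": -0.15},
    "yield_cash_cow": {"debt_to_ebitda_kill": 3.0},
}

def thresholds_for(intent: str) -> dict:
    merged = dict(BASE)                              # start from the base set
    merged.update(INTENT_OVERRIDES.get(intent, {}))  # apply only the diffs
    return merged
```

A yield buyer tightens the leverage kill to 3.0x while inheriting the default cash-flow threshold untouched; a turnaround buyer relaxes both.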

You can try this yourself in the interactive demo — enter a thesis and see how the system decomposes it into intents and adjusts the verdict accordingly.

6. How do you validate the system?

Validation happens at three levels: unit testing, benchmark verification, and adversarial review.

Automated test suite

Every calculator, threshold evaluation, verdict gate, and pipeline stage has automated tests. These are synthetic tests — known inputs, expected outputs — that verify correctness at the component level.

Benchmark companies

We process real financial documents and verify that the system produces verdicts a competent analyst would agree with. The benchmark suite includes CIM documents (PDF) across multiple currencies and SEC 10-K filings parsed from EDGAR in iXBRL format.

The benchmark set is deliberately diverse — it includes profitable blue chips, high-growth SaaS companies, distressed companies, and asset-heavy industrials. This diversity ensures our thresholds work across company profiles, not just one type of business.

Why EDGAR?

SEC 10-K filings are the gold standard for validation because the financial data is audited, standardized (US GAAP), and machine-readable (iXBRL format). When CIMCalc processes a 10-K and produces a verdict, we can independently verify every extracted number against the original filing. This makes EDGAR the ground truth for extraction accuracy and threshold calibration.

Adversarial review

Beyond automated tests, we run adversarial review sessions where each report is evaluated against a systematic framework: Are the extracted numbers correct? Do the verdicts make sense given the thesis? Are there material omissions? Does the narrative contradict the data?

Regression snapshots

Every benchmark company has a frozen snapshot of its expected verdicts and key signal values. Any code change that alters these snapshots fails the regression suite and requires explicit review. This prevents threshold drift — you can't accidentally change what "bad" means for a known company.

7. What are the quality controls?

CIMCalc uses multiple layers of automated quality checking to catch errors before they reach your report.

Sanity bounds

Physics checks on calculator outputs. When a calculated value falls outside a plausible range for that metric, the value is flagged and excluded from the analysis rather than producing misleading verdicts. These are not risk thresholds — they're checks that catch calculator errors and data extraction problems.
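The mechanism can be sketched as a bounds lookup — the specific bounds below are illustrative assumptions, but the behavior (flag-and-exclude rather than feed into a verdict) is as described:

```python
# Hypothetical sanity bounds: values outside a plausible physical range
# are excluded as extraction/calculation errors, not scored.
SANITY_BOUNDS = {"gross_margin": (-1.0, 1.0), "current_ratio": (0.0, 50.0)}

def passes_sanity(metric: str, value: float) -> bool:
    lo, hi = SANITY_BOUNDS[metric]
    return lo <= value <= hi
```

A 350% gross margin, for example, is not a risk signal — it is almost certainly a parsing error, and it gets excluded before it can distort the verdict.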

Cross-signal consistency checks

When two independent calculators produce signals that should agree but don't, at least one is wrong. Consistency checks detect these contradictions automatically. High-confidence conflicts suppress the involved signals before they reach the verdict layer.
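A minimal sketch of the idea — the signal names and tolerance are assumptions; the suppress-on-conflict behavior is from the text:

```python
# Hypothetical consistency check over signals that should agree.
def conflicting_signals(signals: dict, tolerance: float = 0.05) -> set:
    """signals: name -> value for measures derived independently.
    Returns the names to suppress when any pair diverges beyond tolerance."""
    names = list(signals)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(signals[a] - signals[b]) > tolerance:
                return {a, b}   # high-confidence conflict: suppress both
    return set()
```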

Grounding validation

Every claim in the report is checked against the source document to ensure factual accuracy, narrative consistency, and completeness. The system verifies that claims trace back to specific document locations, that the analytical narrative is consistent with extracted data, and that material risks present in the data are not omitted from the report.

Quality grading

Every report receives a quality grade based on extraction coverage, signal density, and grounding validation results:

| Grade | What You Get | Auto-Refund |
| --- | --- | --- |
| A | Comprehensive report. The document provided enough data to analyze most risk domains with high confidence. Verdict is well-supported. | None — full value delivered |
| B | Solid report with gaps. Some risk domains could not be fully analyzed due to missing data in the source document. Verdict is directionally reliable but flagged areas may need manual follow-up. | 15% automatic refund |
| C | Limited report. The source document lacked sufficient financial data for a thorough analysis. Use as a starting point for investigation, not a standalone screen. | 30% automatic refund |
| F | No report delivered. The document did not contain enough structured financial data to produce a meaningful verdict. | 100% refund |

Refunds are automatic. No support tickets, no negotiation. We grade every report ourselves, and if the quality doesn't meet our standards, you get your money back before you have to ask. We'd rather refund a report than deliver one we're not confident in.

Why grades exist: Not all documents are created equal. A 60-page CIM with three years of audited financials will produce a Grade A report. A 10-page teaser with a single year of summary data might produce a Grade C. The grade tells you how much confidence to place in the output — and the auto-refund ensures you only pay for value received.
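The grade-to-refund mapping can be sketched as a simple cutoff function — the coverage cutoffs below are illustrative assumptions, while the refund percentages match the table above:

```python
# Coverage cutoffs are illustrative, not production values; the refund
# percentages correspond to the published grade table.
def grade_report(coverage: float) -> tuple:
    """coverage: fraction of risk domains with sufficient extracted data."""
    if coverage >= 0.85:
        return ("A", 0.00)   # full value delivered
    if coverage >= 0.60:
        return ("B", 0.15)   # 15% automatic refund
    if coverage >= 0.30:
        return ("C", 0.30)   # 30% automatic refund
    return ("F", 1.00)       # no report; 100% refund
```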

8. Is this investment advice?

No. CIMCalc is a screening calculator. It measures financial signals against your specified thresholds and reports what it finds. It does not recommend whether to buy, hold, or pass on any investment.

A CIMCalc report is the equivalent of a thermometer reading, not a doctor's diagnosis. It tells you the temperature — you decide what to do about it.

CIMCalc does not replace due diligence, legal review, Quality of Earnings analysis, or professional investment judgment. It replaces the first 48 hours you spend deciding whether a deal is worth investigating in the first place.

Computation, not advice. Every report states this explicitly. CIMCalc is not a registered investment advisor, broker-dealer, or financial planner. The verdicts (KILL, FLAG, CONCERN, CLEAR) are computational outputs based on the thresholds you configured, not buy/sell recommendations.

Want to understand each signal in detail?

Explore the Signal Library →

33 calculators, 252+ signals, explained in plain language.