Tepper School researchers introduce Accounting Classification Entropy, a novel measure derived from information theory that quantifies the structural information in corporate financial reports and is shown to be significantly associated with stock returns, trading volumes, and financial analysts' resource allocation.
Are traditional measures of financial reporting information failing to capture a fundamental element of a firm’s profile? What if the structural classification of a company’s balance sheet, rather than just the magnitude of its numbers, holds the key to predicting market reactions and analyst behavior?
In a new paper, “Accounting Classification Entropy,” a team of researchers from the Tepper School of Business at Carnegie Mellon University (CMU) and the University of Minnesota has developed a new tool to quantify the hidden information embedded in the structural arrangement of corporate financial reports. The new measures, called Accounting Classification Entropy (ACE) and Accounting Classification Entropy-Relative (ACER), apply Claude Shannon’s entropy from information theory and Solomon Kullback and Richard Leibler’s relative entropy to accounting data. This entropy-based framework moves beyond traditional, single-dimensional metrics like earnings per share to objectively measure the information conveyed by the entire structure of classified financial statement numbers.
The study proposes ACER as a theory-based yet practical tool for capturing subtle shifts in a firm’s internal resource allocation over time. “Our core idea is to apply entropy from information theory to quantify innovation in the common-sized financial statements between two different periods,” said Gaoqing Zhang, an author on the paper.
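The idea can be sketched with the standard Shannon entropy and Kullback-Leibler divergence formulas. The account categories and weights below are hypothetical, and this is only an illustration of the underlying information-theoretic quantities, not the paper's exact construction:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    return sum(x * math.log2(x / y) for x, y in zip(p, q) if x > 0)

# Hypothetical common-sized balance sheets: each account's share of total assets
prior   = [0.40, 0.30, 0.20, 0.10]  # e.g., cash, receivables, PP&E, intangibles
current = [0.25, 0.25, 0.20, 0.30]  # the firm has shifted weight toward intangibles

ace  = entropy(current)                  # structural information in the current statement
acer = relative_entropy(current, prior)  # change in structure relative to the prior period
print(f"ACE  = {ace:.3f} bits")
print(f"ACER = {acer:.3f} bits")
```

An ACER of zero would mean the common-sized structure is unchanged between the two periods; larger values indicate a bigger shift in internal resource allocation.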
A key theoretical breakthrough is the proof that the entropy measures (ACE and ACER) are the only measures that satisfy a “grouping property.” This property is essential for summarizing classified datasets like financial statements, as it requires that any information gained from a finer accounting classification (e.g., breaking a large account into sub-accounts) must be proportional to the new information generated, scaled by the size of the account being disaggregated. This formal grounding ensures the measure’s consistency and rigor across various hierarchical classification structures.
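The grouping property described above is the classic decomposition axiom of Shannon entropy: total entropy at a fine classification equals the entropy of the coarse group totals plus the within-group entropies, each weighted by its group's size. A minimal numeric check, using made-up account weights:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical common-sized statement: four accounts nested in two groups,
# current assets = [cash 0.30, receivables 0.20], long-term = [PP&E 0.35, intangibles 0.15]
groups = [[0.30, 0.20], [0.35, 0.15]]
fine = [x for g in groups for x in g]

coarse = [sum(g) for g in groups]  # entropy measured at the coarse (group) level
within = sum(sum(g) * entropy([x / sum(g) for x in g]) for g in groups)

# Grouping property: disaggregating an account adds information in proportion
# to the new within-group entropy, scaled by the size of the account split up.
lhs = entropy(fine)
rhs = entropy(coarse) + within
print(f"fine = {lhs:.6f}, coarse + weighted within = {rhs:.6f}")
```

The two sides agree exactly, which is the consistency across hierarchical classification levels that the paper's uniqueness result formalizes.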
“The ACER can meaningfully quantify information in financial statements,” said Jane Pyo. “The information captured by the ACER is not fully explained by existing firm characteristics or fixed effects commonly used in accounting research, emphasizing the measure’s novel contribution.” The measure proved to be significantly associated with absolute stock returns and trading volumes following a financial statement release, confirming its immediate relevance to capital market participants.
The measure’s inherent clarity and objectivity are key advantages over other methods, which often rely on outcomes affected by external subjective factors. “The measure is transparent, easy to implement, and adaptable across various classification-based settings,” added Pierre Liang.
The measure is constructed directly from the classified accounting numbers rather than relying on external decision outcomes or market reactions. “We refer to this property as ‘internal’ to emphasize that the measure is derived from within the accounting system itself, rather than from outcomes of external decisions,” the authors write. This self-contained framework represents a departure from the dominant “decision-making” approach in modern accounting research, instead drawing inspiration from the “classical measurement approach.” This independence from market effects makes ACER especially powerful for research on information processing cost and capacity, as it quantifies the information available for processing, rather than the data actually processed.
The team further showcased ACER’s empirical power by applying it to a central question in finance: how financial analysts allocate their limited processing capacity. They found a positive relationship between a firm’s ACER and the resources analysts dedicate to it. When a firm’s ACER is higher, analysts respond by extending their forecasting horizons to cover more periods and issuing more frequent revisions.
Finally, the researchers extended the framework using the concept of mutual information to analyze the fundamental relationship between a company’s assets and its funding sources (liabilities and equity). This analysis revealed that the informative power of conventional accounting structure varies systematically across firms.
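The mutual-information idea can be illustrated with the standard definition: treat the asset-side and funding-side classifications as two variables and measure how much knowing one constrains the other. The joint weights below are hypothetical, and this is only a sketch of the quantity involved, not the paper's empirical design:

```python
import math

# Hypothetical joint weights: rows = asset classes, columns = funding sources,
# each cell is the share of total book value jointly attributed to that pairing.
joint = [[0.30, 0.10],   # tangible assets funded by (debt, equity)
         [0.10, 0.50]]   # intangible assets funded by (debt, equity)

px = [sum(row) for row in joint]        # asset-side marginal distribution
py = [sum(col) for col in zip(*joint)]  # funding-side marginal distribution

# Mutual information I(X;Y) in bits: how much the asset structure
# tells us about the funding structure, and vice versa
mi = sum(p * math.log2(p / (px[i] * py[j]))
         for i, row in enumerate(joint)
         for j, p in enumerate(row) if p > 0)
print(f"I(assets; funding) = {mi:.3f} bits")
```

A mutual information of zero would mean the two sides of the balance sheet are informationally independent; the paper's finding is that this linkage is systematically stronger for established firms than for younger or new-economy firms.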
“The results indicate that conventional balance sheet structures convey greater informational content for established firms than for younger or new-economy firms,” concluded Gaoqing Zhang. This suggests that for companies with simpler or less conventional models, such as those relying heavily on intangible assets, the traditional classification structure may not be as effective at constraining uncertainty for users. The researchers suggest that a potential reason for this weaker relationship in newer firms is the use of more flexible or unstructured financing arrangements.
Overall, the accounting classification entropy framework provides a mathematically rigorous and practical foundation for quantifying the often-overlooked structural dimension of financial reporting, offering a new perspective for how financial statements represent and convey information about a firm.
The study, titled “Accounting Classification Entropy,” was authored by Nan Li of the University of Minnesota, and Pierre Jinghong Liang, Jane Jae Yeon Pyo, and Gaoqing Zhang of Carnegie Mellon University. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5588192
Visit the Accounting AI Research Lab website to learn more.