
Principle 8: Review and Learning
The Common & Emerging Practices, a new series of resources from the Impact Principles, aims to capture key insights and notable trends in how our Signatories commonly implement the Impact Principles, and to highlight promising emerging practices and key gaps. By sharing these common and emerging best practices in impact management, we seek to elevate impact practice in the market and ensure that capital is mobilized at scale with integrity to drive meaningful impact outcomes.
The Common & Emerging Practices resources will be released in phases: initial drafts for each of the nine principles will be published on the website in sequence, followed by draft and final consolidated reports informed by stakeholder engagement.
Overview
—

Principle 8 underscores the importance for impact investors to systematically review, document and improve their impact strategies, decisions and processes based on actual results and lessons learned. An intentional and structured approach to learning helps investors and the broader impact investing field optimize capital allocation to scale high-impact solutions and mitigate unintended negative impacts.
Without continuous learning and improvement based on what drives impact — and what does not — investors risk perpetuating ineffective models and over-emphasizing isolated successes that lack the potential to drive outcomes at scale. By valuing and promoting shared learning to fill knowledge gaps, the impact investing field as a whole can better address systemic challenges.
CHALLENGES IN THE IMPLEMENTATION OF PRINCIPLE 8
Leading impact investors systematically integrate lessons learned into decision-making as a core business practice that is essential to driving impact performance. However, several challenges remain in the implementation of Principle 8 at both the organizational and field levels.
Lack of formal feedback loops: Many investors still lack formal structures to ensure the consistent integration of lessons learned from impact assessments into future decision-making and improved processes. For others, conducting formal impact assessments may be a work-in-progress to begin with, limiting the foundation upon which learning and improvement can be built. Effective feedback loops also require disciplined documentation of expected results as well as assumptions and gaps in evidence related to the theory of change, which can inform an intentional learning agenda post-investment and during reviews.
Challenges with data quality and standardization of metrics: There remains significant variability in the definition, quality, collection methodologies and disaggregation of data, making it difficult to compare or aggregate impact findings in a way that meaningfully informs learning and improvement. While progress has been made in standardizing indicators through initiatives such as IRIS+, HIPSO and the Joint Impact Indicators, further harmonization and the broad adoption of standard metrics are critical to accelerating and sustaining this progress.
Difficulty in comparing performance: Unlike financial benchmarks, impact benchmarks are still evolving. A lack of data consistency and reporting standards, as well as the limited availability of industry-wide benchmarks across sectors and impact themes, makes comparing impact performance a persistent challenge for the field. In addition, there is a risk of over-simplifying or overlooking the nuances of different impact and investment strategies, contexts and target beneficiaries, especially when addressing complex and deeply rooted social challenges. Without access to comparable data, investors may struggle to assess relative effectiveness, set realistic targets or identify leading practices and strategies that could be scaled or adapted.
Measuring outcomes versus outputs: Many investors find it challenging to measure the long-term outcomes of their interventions beyond tracking short-term outputs. Measuring outcomes often requires deeper engagement with investees and beneficiaries, longer timeframes of review and more complex methodologies, which may not be embedded in current business processes. As a result, investors may miss critical insights into whether their interventions are truly effective in driving real-world results in people’s lives and for the planet. Similarly, identifying and measuring indirect and systemic impact can be difficult, making comprehensive outcome measurement a more aspirational goal for investors.
Understanding and learning from investor contribution: Impact investments operate in complex, real-world contexts where multiple factors and stakeholders influence results. It is difficult to isolate what changed due to investors’ influence compared to other external influences or investees’ own initiatives independent of the investments. Moreover, terms like “attribution” and “counterfactuals” can be conceptually confusing and inconsistently applied. Taking a learning-oriented approach to assessing what happened as a result of an investor's actions and, where feasible and appropriate, considering what might not have happened without their investments, creates space for pragmatic, transparent reflection on contribution while supporting continuous improvement in impact strategy and execution.
Sharing negative impacts and lessons learned: Fear of reputational risk, a lack of incentives and the absence of established industry norms discourage investors from publicly sharing instances of impact underperformance or unintended negative consequences. This limits collective learning and prevents the broader field from understanding what doesn’t work, slowing progress toward more effective impact solutions. Establishing a culture of humility and norms that value transparency and shared learning is essential to advancing the field and enhancing impact.
KEY OBSERVATIONS IN THE IMPLEMENTATION OF PRINCIPLE 8
Principle 8 serves as a critical driver of continuous improvement in impact management as investors seek to enhance the effectiveness of their strategies and optimize the delivery of impact. Effective implementation of Principle 8 is not just about reviewing performance against expectations, but also about creating a culture of learning and developing governance structures, processes and tools that ensure that regular review and deliberate insights lead to better decisions and outcomes. Importantly, how organizations learn and share lessons also shapes the broader market, helping to close knowledge gaps, scale strategies that deliver high impact and avoid repeating mistakes in a field that strives to address complex and systemic challenges.
Notable observations include:
Culture of learning, supported by leadership, capacity, and incentives: Organizations that demonstrate strong impact transparency and learning cultures are driven by the commitment of leadership, which in turn supports resourcing and capacity to enable robust impact management. This support may take various forms, including impact champions, cross-functional impact working groups or dedicated impact or sustainability teams, any of which may be tasked with monitoring and evaluating impact performance across portfolios, generating insights, facilitating shared learning and continuous improvement, and providing training. Along with dedicated capacity, integrating impact and learning throughout business processes and incentive structures beyond the impact team ensures that impact is prioritized across the organization and not siloed in specialized roles.
Formalized governance and process for learning: Some organizations establish formal governance bodies, such as impact steering committees or advisory committees, to review results and lessons learned and provide guidance and accountability for continuous improvement. These bodies typically meet quarterly or annually to review impact performance and lessons, complemented by ongoing informal mechanisms such as portfolio review discussions, learning sessions, check-ins and site visits that maintain a continuous learning loop.
Types of review and documentation: Investors adopt multiple methods to review and document impact performance and lessons learned, each serving different purposes for understanding and communicating impact. Key tools and approaches used by organizations to assess impact performance include impact reports, surveys or interviews, case studies, in-depth evaluations or impact studies and benchmarking. [See Exhibit 8a]
—
EXHIBIT 8a. Types of Impact Performance Review and Documentation
Audience and channels for reports and learning: Reports and other documentation of impact achievements and lessons learned may be shared with internal or external audiences, such as boards and committees, or investors, investees and the public. Disseminating results, findings and lessons learned to the broader ecosystem can contribute to the advancement of best practices in the field, and it is a strategy often adopted by investors whose theories of change pursue systems change impact objectives. Studies, findings and lessons learned may be shared in impact reports, standalone publications, dedicated external databases or websites, or industry forums.
Points of reference for performance comparison: Impact performance is compared against various points of reference including baselines, targets or expectations, thresholds, previous years or trends over lifecycle, and peer or industry benchmarks.
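As an illustrative sketch only (the metric name, values and reference points below are hypothetical and not drawn from any Signatory's data), a review of one actual result against several of these points of reference might be structured as follows:

```python
# Illustrative sketch: positioning one impact metric against several points of
# reference. All names and figures are hypothetical examples.
from dataclasses import dataclass


@dataclass
class MetricReview:
    name: str          # e.g. "Smallholder farmers reached"
    actual: float      # result observed in the review period
    baseline: float    # value at the time of investment
    target: float      # expected result set at underwriting
    prior_year: float  # previous review period's result
    benchmark: float   # peer or industry point of reference

    def summarize(self) -> dict:
        """Return the metric's position relative to each reference point."""
        return {
            "vs_baseline": self.actual - self.baseline,
            "pct_of_target": 100 * self.actual / self.target if self.target else None,
            "vs_prior_year": self.actual - self.prior_year,
            "vs_benchmark": self.actual - self.benchmark,
        }


# Hypothetical usage
review = MetricReview(
    name="Smallholder farmers reached",
    actual=12_400, baseline=3_000, target=15_000,
    prior_year=9_800, benchmark=11_000,
)
print(review.name, review.summarize())
```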
Aligning with industry standards, frameworks, and norms: Investors are seeking alignment with industry standards, frameworks and norms to improve the credibility of their impact measurement, management and reporting and ground their practices in widely accepted and evolving best practices. For example, the Impact Performance Reporting Norms, an initiative led by Impact Frontiers, establish shared expectations for the content and structure of impact reporting. They provide a consensus-based framework for reporting positive and negative impacts as well as quantitative and qualitative results, thereby promoting greater transparency, rigor and consistency of impact performance reporting.
Evaluation for in-depth insights: Ongoing impact measurement and monitoring processes and findings can be complemented by evaluations or impact studies that provide deeper insights and address particular knowledge gaps within organizational strategies or at the sector level. Evaluations can be conducted internally or by third parties, at varying levels of scope and at different times across investment or fund lifecycles. [See Exhibit 8b]
—
EXHIBIT 8b. Key Considerations in Impact Evaluations
Enhancing learning through collaboration: Learning can be enhanced through collaboration both within the organization as well as externally — for example, with investees, co-investors, ecosystem partners, or the broader sector or field. [See Exhibit 8c]
—
EXHIBIT 8c. Collaborative Learning Approaches in Impact Management
Data and tech-enabled learning: The activities of reviewing, learning and improving can be made more efficient and effective with systematic data collection and technology-based data management platforms that enable centralized management, monitoring and reporting of impact. These technology platforms may streamline data collection processes with investees and also produce individual investment and aggregated portfolio-level virtual scorecards, trends and dashboards to enhance analysis, insights and decision-making. There is also emerging interest in the potential for leveraging artificial intelligence in impact measurement and management.
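As a minimal sketch of the kind of roll-up such platforms automate (the investee names, fields and figures below are hypothetical and do not describe any particular product), investee-level records might be aggregated into a portfolio-level scorecard as follows:

```python
# Minimal sketch of portfolio-level aggregation that an impact data management
# platform might automate. Field names and figures are hypothetical.
from collections import defaultdict

# Investee-level impact records, e.g. collected through annual reporting
investee_records = [
    {"investee": "AgriCo", "theme": "Food security", "jobs_supported": 120, "on_track": True},
    {"investee": "SolarCo", "theme": "Clean energy", "jobs_supported": 85, "on_track": False},
    {"investee": "GridCo", "theme": "Clean energy", "jobs_supported": 200, "on_track": True},
]


def portfolio_scorecard(records):
    """Aggregate investee-level records into a theme-level scorecard."""
    scorecard = defaultdict(lambda: {"investees": 0, "jobs_supported": 0, "on_track": 0})
    for r in records:
        row = scorecard[r["theme"]]
        row["investees"] += 1
        row["jobs_supported"] += r["jobs_supported"]
        row["on_track"] += int(r["on_track"])
    return dict(scorecard)


for theme, row in portfolio_scorecard(investee_records).items():
    print(f"{theme}: {row['investees']} investees, "
          f"{row['jobs_supported']} jobs supported, {row['on_track']} on track")
```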
Verification as a tool for learning and improvement: Independent verifications, as required by Principle 9 of the Impact Principles, can provide useful assessments of the strength of impact management systems and processes, as well as recommendations for improvement. Similarly, re-verification or periodic verifications help track improvements in systems and processes over time.
Common, Emerging & Nascent Practices in the Implementation of Principle 8
—
Note: The findings and observations are primarily based on an analysis of the most recently published 166 Signatory Disclosure Statements at the time of the review in early to mid-2024.
An analysis of Signatory disclosures reveals that while Signatories have operationalized Principle 8, the depth and sophistication of practices vary. Most Signatories disclose conducting regular reviews and using impact review findings to inform and improve investment strategies and management processes. A majority of Signatories are also producing impact reports that are shared with their investors or publicly. However, less than half disclose comparing actual to expected impact results. Other more structured and advanced practices are still nascent — including reviewing unintended impacts, conducting independent evaluations, developing stakeholder feedback loops and case studies, measuring outcomes and benchmarking. These emerging and nascent practices signal opportunities for the field to advance more transparent, rigorous and inclusive learning mechanisms to drive long-term outcomes at scale.
Incorporating findings into investment processes and frameworks. 93% disclosed using findings and lessons learned from their periodic review processes to refine their impact strategies and theses, make strategic decisions for portfolio or individual investments and improve impact management frameworks, policies and processes.
Regular review and documentation of impact performance. 89% disclosed reviewing impact performance of investments on at least an annual basis, with 32% disclosing quarterly reviews. Many conducted portfolio or fund-level reviews in addition to reviews of individual investments.
Developing impact reports. 60% disclosed producing impact reports, which are usually either shared only with investors or limited partners or published publicly on their websites. 17% provided links to their public impact reports in their OPIM disclosure statements.
Comparing expected versus actual impact. 41% disclosed comparing expected versus actual impacts, with a focus on current portfolio companies or funds or exited investments. Comparisons were typically based on annual performance or over the lifecycle of the investment.
Reviewing industry standards. 17% disclosed reviewing and adjusting impact practices based on new and evolving industry standards and frameworks, best practices and learning.
Reviewing unintended impacts. 14% disclosed monitoring and reviewing unintended consequences, which may be positive or negative, or evidence of impact drift in their underlying investments.
Independent evaluations. 15% disclosed having a third party or internal independent department conduct an evaluation or study on the impact of their investments and investment processes, distinct from independent verification.
Customer and stakeholder surveys. 11% disclosed collecting feedback from customers, beneficiaries or other external and internal stakeholders, such as investees, ecosystem partners and employees, as part of the review process.
Case studies. 11% disclosed developing and publishing case studies on investments and findings from the review process.
Review of exited investments. 8% disclosed conducting post-exit reviews of investments to assess their impact and how impact is sustained.
Other nascent practices, disclosed by a limited number of Signatories (<5%) but representing a potential next frontier for the field and focus areas of recent initiatives, include:
- Review of outcome-level data. Some Signatories explicitly note going beyond output-level data to review outcomes for beneficiaries or at the systems level.
- Integrated performance. A few Signatories disclosed reviewing or reporting on integrated financial and impact performance.
- Benchmarking performance. A few Signatories disclosed comparing their impact performance against industry or peer groups by participating in benchmarking surveys or studies, or conducting portfolio-level benchmarking of investees against their peers.
Note that, given the variance in the comprehensiveness of Signatories' disclosure statements, many of the nascent practices in particular may be under-reported compared to actual practice.
Principle 8 Signatory Practice Spotlights
—