The same standard applies to everyone. If a research organization surveyed 30,000 people with trained interviewers in 17 languages, that rigor shows up in the score. If another organization circulated an online survey through its own social media followers and called the results representative, that shows up too. We don't grade on who you are or what you found. We grade on whether your methods can actually back up what you're claiming.
What This Is
Reports about India and Indian Americans get cited in Congress, quoted in the New York Times, and used to shape legislation. Most people who cite them never look under the hood. They don't ask how the data was collected, whether the sample was representative, or whether anyone outside the organization can verify the numbers.
The Citation Integrity Dashboard does that work. We score reports on the strength of their research methods — not on whether their conclusions are right or wrong. A report can reach the right answer through bad methods. A report can use solid methods and reach conclusions you disagree with. We care about the methods.
Not Partisan
Every report is scored against the same eight-dimension rubric, published before any scoring begins. The weights don't change between organizations. A report from an advocacy coalition is held to the same standard as a report from Pew Research Center. You can read the rubric, check the math, and see exactly how every score was calculated.
The current corpus covers reports on India, Indian Americans, and Hindu communities. For years, organizations have published reports in this space that shaped media coverage and policy decisions. Some of that research is excellent. Some of it would not survive a first-year methods seminar. Until now, nobody had scored them side by side using the same yardstick.
- Reports scored: 27
- Organizations: 12
- Years covered: 1999–2026
- Dimensions scored: 8
How Scoring Works
Each report is scored on eight dimensions:

- Are terms clearly defined?
- Is the classification process reliable?
- Does the sample support the claims?
- Is coverage one-sided?
- Are sources independent?
- Can the data be verified?
- Is the organization transparent about funding and governance?
- Does it engage with criticism?
Each dimension is scored 0 to 10. The rubric has guardrails — a report that fails on sampling or verification cannot score well overall, no matter how polished the rest looks. Every score comes with the specific evidence behind it, pulled directly from the report. You don't have to take our word for anything.
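To make the guardrail concrete, here is a minimal sketch in Python of how a capped average works. Everything specific in it is assumed for illustration: the equal weighting, the failure threshold, the cap value, and the dimension keys are ours, not the published rubric's.

```python
# Illustrative sketch of guardrailed scoring. The dimension keys mirror the
# eight questions above; the equal weights, floor, and cap are assumptions
# for demonstration, not the dashboard's published numbers.

GUARDRAIL_DIMENSIONS = {"sampling", "verification"}  # failing these caps the total
GUARDRAIL_FLOOR = 4    # assumed: below this, a guardrail dimension "fails"
GUARDRAIL_CAP = 4.0    # assumed: the ceiling a capped report can reach


def overall_score(scores: dict[str, int]) -> float:
    """Average the eight 0-10 dimension scores, then apply guardrails."""
    for name, value in scores.items():
        if not 0 <= value <= 10:
            raise ValueError(f"{name} must be 0-10, got {value}")
    average = sum(scores.values()) / len(scores)
    # A report that fails sampling or verification cannot score well
    # overall, no matter how polished the rest looks.
    if any(scores[d] < GUARDRAIL_FLOOR for d in GUARDRAIL_DIMENSIONS):
        return min(average, GUARDRAIL_CAP)
    return average


report = {
    "definitions": 8, "classification": 7, "sampling": 2, "balance": 9,
    "independence": 8, "verification": 6, "transparency": 9, "criticism": 7,
}
print(overall_score(report))  # 4.0, despite a 7.0 raw average
```

In this sketch, a report averaging 7.0 across the eight dimensions still lands at 4.0 because its sampling score falls below the guardrail floor. That is the intended shape of the rubric: strong prose cannot buy back a sample that cannot support the claims.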
Corrections & Responses
Any organization we evaluate can submit a response. We publish it in full, unedited, on the report's page. If it identifies a factual error in our scoring, we correct the score and publish the correction with a date.
Disagreements about our rubric or our weights get published as submitted. They don't change scores. Only factual errors do.
To submit a correction or response: [email protected]