Moving Beyond Principles to Practice
Global discourse on AI ethics often revolves around high-level principles like fairness, accountability, and transparency. The Institute's Center for Techno-Social Futures argues that without concrete, procedural mechanisms tied to specific communities, these principles remain abstract. In response, they have developed the Community-Centric AI Framework (CCAF), a living document and set of practices co-created with residents of several West Virginia towns. The CCAF is not a checklist for developers; it is a process model for ongoing negotiation and partnership between technologists and the people whose lives will be shaped by the technology.
Pillars of the Framework
The CCAF rests on four actionable pillars, each with defined processes:
- Community Sovereignty in Data & Design: Before any project begins, a Community Review Board (CRB) is established. This board, composed of local stakeholders (not just leadership), has the authority to approve, modify, or veto data collection plans and primary system objectives. All training data gathered locally remains under a data trust controlled by the CRB, which licenses it for specific, approved uses.
- Explainability as a Cultural Practice: The framework mandates that system outputs be explainable in terms meaningful to the community. This mandate has produced novel interface designs that use local metaphors (e.g., representing confidence intervals as 'certain as a stone' vs. 'slippery as a wet root') and obliges developers to participate in public forums where they must explain their work without jargon.
- Benefit-Sharing Covenants: Any commercial product or service derived from research conducted under the CCAF must include a legally binding covenant. This covenant outlines direct financial returns to the community trust fund, preferential hiring/training programs, and guarantees of continued local access to the technology at below-market rates.
- Sunset and Adaptation Protocols: Every deployed system has a formal review schedule and a predefined 'sunset' process. The CRB and developers jointly assess unintended consequences, shifting needs, and technological obsolescence. The system must be adaptable by local technicians, and if it is to be decommissioned, there is a plan for knowledge transfer and replacement.
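The explainability pillar's metaphor mapping could be prototyped as a thin translation layer between raw model confidence and community-approved language. The sketch below is illustrative only: the threshold value is an assumption, and in a real CCAF deployment the phrases and cutoffs would be chosen and approved by the Community Review Board, not by developers.

```python
def to_local_metaphor(confidence: float, threshold: float = 0.8) -> str:
    """Translate a model confidence score (0.0-1.0) into a
    community-vetted metaphor.

    The two phrases come from the framework's example; the
    0.8 threshold is a hypothetical placeholder a CRB would set.
    """
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0 and 1")
    if confidence >= threshold:
        return "certain as a stone"
    return "slippery as a wet root"
```

A design note: keeping the mapping in one small, auditable function makes it easy for the CRB to review and revise the vocabulary without touching the underlying model.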
Case Study and Wider Implications
A pilot project implementing the CCAF involved an AI system for optimizing shared, autonomous vehicle routes in a small town. The CRB, which included seniors, shift workers, and small business owners, vetoed the initial algorithm that minimized total fleet mileage because it disadvantaged residents on less dense routes. They worked with engineers to create a multi-objective algorithm that balanced efficiency with equitable access. The resulting covenant ensures a percentage of ride revenue funds local road maintenance.

This model challenges the 'deploy and disengage' approach common in tech. It argues that ethical AI isn't a feature you build in once, but a continuous relationship you maintain. The framework is garnering attention from municipal governments worldwide and from Indigenous communities seeking to assert control over technological change. By grounding ethics in the lived experience and sovereignty of a place, the Institute is providing a tangible, if challenging, path toward technology that truly serves, rather than subsumes, the community.
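The shift the CRB demanded, from a single mileage-minimizing objective to one that also protects residents on sparse routes, can be sketched as a weighted score over candidate route plans. Everything here is an assumption for illustration: the `RoutePlan` shape, the normalization reference values, and the equity weight are hypothetical stand-ins for whatever the engineers and the CRB actually negotiated.

```python
from dataclasses import dataclass

@dataclass
class RoutePlan:
    total_fleet_miles: float                # efficiency: lower is better
    wait_minutes_by_zone: dict[str, float]  # equity: average wait per zone

def plan_score(plan: RoutePlan,
               equity_weight: float = 0.5,
               miles_ref: float = 100.0,
               wait_ref: float = 30.0) -> float:
    """Lower is better. Blends normalized fleet mileage with the
    worst-case zone wait, so low-density routes are not sacrificed
    for overall efficiency. Reference values and the weight are
    illustrative assumptions, not CCAF-specified parameters."""
    efficiency = plan.total_fleet_miles / miles_ref
    worst_wait = max(plan.wait_minutes_by_zone.values()) / wait_ref
    return (1 - equity_weight) * efficiency + equity_weight * worst_wait

def pick_plan(plans: list[RoutePlan], equity_weight: float = 0.5) -> RoutePlan:
    """Select the best plan under the blended objective."""
    return min(plans, key=lambda p: plan_score(p, equity_weight))
```

With `equity_weight = 0`, this reduces to the vetoed mileage-only objective; raising the weight lets a plan with slightly more fleet miles win if it sharply cuts the worst zone's wait, which is the trade-off the CRB asked for.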