MI Történik?

Artificial intelligence news in Hungarian — updated daily


Researchers Propose a Roadmap for Pluralistic Alignment in AI Systems

New research from the University of Washington, Stanford, MIT, and AllenAI lays out a framework for 'Pluralistic Alignment.' The motivating idea is that as a broader set of people rely on AI, systems need to be capable of representing a diverse set of human values and perspectives rather than a single moral lens. The researchers define three types of alignment and three distinct evaluation approaches to measure how well AI systems cater to diverse needs.
Why it matters

AI systems are political artifacts, so we need to measure their politics. Frameworks like this help us examine the political tendencies of AI systems, an increasingly difficult and important task, especially as these systems are deployed more widely.

View the original source (English) →