AI RESEARCH
MIT study finds AI chatbots give less accurate information to vulnerable users
MIT researchers found that older AI chatbots (GPT-4, Claude 3 Opus, and Llama 3) deliver less accurate responses to users with lower English proficiency, less formal education, or non-U.S. origins. The models also refuse to answer questions more often for these groups: Claude 3 Opus refused 11 percent of queries from less-educated, non-native English speakers, compared with 3.6 percent for control users. In a manual analysis, Claude responded with condescending or patronizing language 43.7 percent of the time for less-educated users, versus less than 1 percent for highly educated users, and sometimes mimicked broken English.
- Users who are both non-native English speakers and less educated experience the largest accuracy drops
- Claude 3 Opus refused 11 percent of queries from less-educated, non-native English-speaking users, versus 3.6 percent for control users
- Models were tested on TruthfulQA and SciQ datasets
- Chatbots sometimes responded with patronizing language or mimicked broken English