AI Trustworthiness Report – May 2025
Public and expert perceptions of AI trustworthiness reveal a complex landscape of skepticism, context-specific confidence, and calls for regulation. Surveys conducted between mid-2024 and early 2025 highlight declining trust, driven by concerns over misinformation, privacy, job displacement, and ethical challenges, though trust varies by application, region, and demographic group.

Recent Survey Highlights:
- The University of Melbourne and KPMG’s January 2025 global survey (48,340 respondents, 47 countries) found higher trust in AI in emerging economies (e.g., China: 83%, Indonesia: 80%) than in advanced ones (e.g., U.S.: 39%, Canada: 40%). Concerns include job loss, privacy, bias, and misinformation, with strong public support for international AI governance and transparency. Low AI literacy was linked to distrust, emphasizing the need for education (University of Melbourne & KPMG, 2025).
- The CBS News/YouGov poll (March 2025, 2,351 U.S. adults) reported cautious U.S. sentiment, with AI seen as more trustworthy than humans for tasks like data analysis but less reliable for driving or customer service. Economic pessimism was widespread, particularly among non-college-educated respondents, who fear job losses (CBS News/YouGov, 2025).
- The Pew Research Center’s August–October 2024 surveys (5,410 U.S. adults, 1,013 AI experts) revealed a stark divide: only 11% of the public is more excited than concerned about AI, compared to 47% of experts. Both groups worry about inadequate regulation, especially in elections and news, and desire greater personal control over AI applications. Public distrust is higher, with 43% fearing personal harm from AI, compared to 15% of experts (Pew Research Center, 2024).
- The AP-NORC/USAFacts survey (July–August 2024, 1,019 U.S. adults) underscored low trust in AI for elections, with 61% lacking confidence in AI chatbots’ factual accuracy and 40% believing AI complicates access to reliable election information (AP-NORC/USAFacts, 2024).
- In higher education, Campbell Academic Technology Services’ 2024–2025 analysis (multiple global surveys) showed that 53% of students worry about AI’s accuracy, and 55% see risks to academic integrity. Faculty remain hesitant, with 88% using AI minimally due to reliability concerns, even as 86% of students report integrating AI into their studies. Privacy and ethical issues further erode trust (Campbell Academic Technology Services, 2025).

Key Trends:
- Trust in AI is declining globally, with higher confidence in specific tasks (e.g., data analysis) than in high-stakes contexts (e.g., elections, healthcare).
- Emerging economies report higher trust than advanced ones, and greater AI literacy correlates with optimism.
- Ethical concerns, low AI literacy, and regulatory gaps fuel skepticism, with broad support for governance frameworks.
- These findings suggest that building trustworthy AI requires addressing transparency, literacy, and ethical oversight while navigating public anxieties about technological control.

References:
- AP-NORC/USAFacts. (2024). AI and Elections Survey.
- Campbell Academic Technology Services. (2025). AI in Higher Education: 2024–2025 Surveys.
- CBS News/YouGov. (2025). AI Sentiment Poll.
- Pew Research Center. (2024). U.S. Public and AI Experts’ Views on AI.
- University of Melbourne & KPMG. (2025). Trust, Attitudes, and Use of Artificial Intelligence.