1. Despite substantial investment in generative AI and the steady narrowing of the gap between machine and human capabilities, analyses point to the emergence of a new gap: trust in AI (Harvard). This trust gap can be understood as the sum of the real and perceived risks associated with AI, including disinformation, safety and security, the black box problem, ethical concerns, instability, hallucinations in large language models (LLMs), where models generate inaccurate information, job losses and social inequalities, and environmental impacts. Bridging this trust gap is essential to encouraging widespread adoption of AI. Meanwhile, a recent McKinsey study found that consumers generally believe that the companies they do business with provide adequate data protection and cybersecurity, and that AI-based products are no less trustworthy than human-based ones. Central to these concerns are the technology companies that control the development of AI (Brookings.edu) (Media Sources on the IPSAPortal).
2. The future of higher education is increasingly a political issue. A Gallup poll conducted this year with the Lumina Foundation shows that Americans' confidence in higher education has declined over time. While in 2015 more than half of the population (57%) reported a high level of confidence in the higher education system, by 2024 that share had shrunk to 36%. Across the political spectrum, Republicans are driving this trend, with only a fifth of respondents expressing confidence today. The main reasons cited are concerns about the politicization of higher education (41%), a mismatch with the skills needed for jobs (37%), and high costs (28%). At the same time, one-third of the population still considers higher education a fundamental value for society, though they doubt the system's ability to adapt to the population's emerging needs (Research Institutes on the IPSAPortal).