
Public Trust in Artificial Intelligence: Challenges and Opportunities

While politicians discuss the promise of artificial intelligence in terms of growth and efficiency, a new report reveals a gap in public trust in the technology. Growing skepticism among the public poses a significant challenge to government plans for adopting AI.

The Report on Public Sentiment Towards AI

The Tony Blair Institute for Global Change, in collaboration with Ipsos, conducted an in-depth study to quantify this anxiety. The study found that a lack of trust is the main reason people hesitate to use generative AI. This is not a fleeting concern but a real obstacle to the revolution that politicians are promoting.

The report revealed a striking division in how people perceive AI. More than half of respondents have used generative AI tools in the past year, remarkably rapid adoption for a technology that was not widely known just a few years ago, yet distrust remains widespread.

Trust Increases with Usage

Interestingly, the report indicates that public trust in AI grows with usage. Among those who have never used AI, 56% see it as a threat to society; among weekly users, this figure drops to 26%. Familiarity appears to breed comfort: positive experiences with AI help alleviate fears.

The report also highlights that trust in AI is shaped by demographic factors. Young people are generally more optimistic, while older generations are more cautious. Technology professionals feel prepared for what lies ahead, whereas workers in sectors such as healthcare and education remain wary.

Different Applications and Their Impact on Trust

One of the most revealing points in the report is that our feelings towards AI change depending on its purpose. We welcome the use of AI to ease traffic flow or speed up cancer detection because we can see the direct benefit in our lives.

However, when it comes to monitoring job performance or targeting us with political ads, acceptance drops sharply. This suggests that our concerns are not about AI itself but about the purposes for which it is used.

Building Trust in AI to Support Growth

The report sets out a clear path to building what it calls “justified trust.” First, the government must change how it talks about AI. Instead of abstract promises of GDP growth, the focus should be on what AI means for people’s daily lives.

Second, it is essential to demonstrate AI’s effectiveness in improving public services. Success criteria should be based on people’s actual experiences, not just technical indicators.

Finally, appropriate rules and training are needed. Regulatory bodies must have the power and expertise to oversee AI, and everyone needs access to training to use these new tools safely and effectively.

Conclusion

Building public trust in AI to support its growth requires building trust in the people and institutions responsible for its development. If the government can demonstrate its commitment to making AI work for everyone, it may succeed in gaining public trust and promoting the adoption of this advanced technology.