The rich are more likely to use AI, exposing a new digital divide

Hidden AI in Daily Apps Widens Divide Between Wealthy and Less Affluent Users



Millions of Americans engage with artificial intelligence every day without realizing it, from personalized recommendations on streaming services to curated social media feeds. A recent study shows how this subtle integration is creating a new form of digital inequality, as those with higher incomes and education levels prove far more aware of these tools and far more active in leveraging them.[1][2]

Communication researcher Professor Sai Wang and her team at Hong Kong Baptist University examined survey responses from more than 10,000 U.S. adults. Their findings, published in the journal Information, Communication & Society, underscore a clear pattern: socioeconomic status strongly influences AI engagement.[1]

The Core of the AI Awareness Gap

Professor Wang’s team drew from the Pew Research Center’s American Trends Panel, a nationally representative survey of 10,087 U.S. adults. Respondents shared their understanding of AI and attitudes toward it, allowing researchers to measure awareness, familiarity, and actual usage.

Higher education and household income emerged as key predictors. Individuals with advanced degrees or greater financial resources reported stronger recognition of AI in various contexts and greater confidence in their knowledge of the technology. Education, in particular, correlated most closely with hands-on use.[1]

This gap extends beyond flashy chatbots or image generators. The study focused on “hidden” AI embedded in routine digital interactions, where many users remain unaware of its presence.

Spotting AI Where It Hides in Plain Sight

AI powers countless everyday applications without fanfare. Recommendation algorithms on platforms like Netflix and Spotify tailor content to user preferences, yet most people attribute suggestions to chance or neutral curation rather than intelligent systems.[1]

Social media feeds operate similarly, prioritizing posts through opaque AI decisions. Professor Wang noted that such seamless integration marks a departure from traditional digital divides, which centered on access or basic skills. Here, people interact with AI unknowingly, diminishing the role of deliberate engagement.[1]
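As an illustration only, not the study's subject systems, this kind of "hidden" AI can be sketched as a scoring function that silently reorders a feed: the user sees the result but never the model. All field names and weights below are hypothetical.

```python
# Toy sketch of an invisible feed-ranking step. Real platforms use learned
# models; the fixed weights here are purely illustrative.

def rank_feed(posts, user_interests):
    """Reorder posts by a predicted-engagement score the user never sees."""
    def score(post):
        # Blend topical match with recency; both signals are hidden from the user.
        topic_match = 1.0 if post["topic"] in user_interests else 0.0
        recency = 1.0 / (1 + post["hours_old"])
        return 0.7 * topic_match + 0.3 * recency
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "sports", "hours_old": 2},
    {"id": 2, "topic": "cooking", "hours_old": 10},
    {"id": 3, "topic": "cooking", "hours_old": 1},
]
ranked = rank_feed(posts, user_interests={"cooking"})
print([p["id"] for p in ranked])  # cooking posts surface first: [3, 2, 1]
```

From the reader's side, the feed simply "looks like" chronology or chance, which is exactly the opacity the researchers describe.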

The researchers defined awareness as spotting AI’s role in these contexts, while familiarity captured self-perceived expertise. Surprisingly, perceived familiarity outperformed actual usage as a predictor of awareness, suggesting that feeling informed matters more than direct experience in recognizing hidden AI.

Socioeconomic Factors Fuel the Disparity

Wealthier and more educated individuals benefit from prior digital savvy and encouragement to experiment with new tools. Previous research supports this: those with stronger skills gain confidence faster, propelling a cycle of greater adoption.

Groups with lower socioeconomic status lag on all three measures: awareness, familiarity, and use. This uneven landscape allows advantaged users to optimize AI for personal gain, such as tailoring job applications for AI-driven screening processes.

Professor Wang warned, “Closing the AI awareness gap is essential, because if only people with higher income or education are aware of AI and its uses, this may reinforce social inequalities.”[1]

Greater knowledge also equips users to navigate risks such as deepfakes, while the unaware face higher odds of being deceived. Because the study focused on the U.S., it leaves open questions about global patterns, though other nations, including South Korea and China, show higher average AI awareness.[1]

Strategies to Narrow the Emerging Chasm

Simply expanding access to AI devices falls short against hidden integrations. The researchers advocate targeted education to build familiarity among underserved groups.

Potential steps include community workshops with plain-language explanations and real-world examples. Integrating basic AI literacy into school curricula could demystify concepts like recommendation engines.

  • Outreach campaigns highlighting AI in daily apps
  • Guidance on ethical AI use and risk identification
  • Programs teaching how to spot “hidden” AI functions

Professor Wang emphasized practical relevance: “This could involve outreach campaigns or community workshops that use clear language and practical examples to make AI more understandable and relevant for low-SES communities.”[1]

Toward an Inclusive AI Era

As AI permeates more aspects of life, unchecked divides risk deepening societal rifts. The Hong Kong Baptist University team’s work signals urgency for proactive measures.

By prioritizing awareness and familiarity, policymakers and educators can help ensure technology uplifts all, rather than entrenching advantages. The researchers concluded, “It is imperative to work toward a more inclusive digital future in which technology empowers everyone and does not further marginalize any group.”[1]

Future studies may test these interventions and extend insights beyond U.S. borders, addressing how global AI adoption shapes equity.

About the author
Lucas Hayes
