Every week I review the latest public opinion research on artificial intelligence. Some weeks the data is abstract — vibes, attitudes, early signals. This is not one of those weeks.
This week the polling is concrete and urgent. Workers watching compensation get redirected into AI budgets. Parents discovering their kids' schools have no AI policy. Game developers building a transparency framework from the ground up. The common thread isn't that AI is failing — it's that the systems around AI are failing to keep pace. The technology is moving. The institutions need to catch up.
Let's dig in.
The AI Tax on Your Paycheck
ResumeBuilder.com surveyed 866 U.S. business leaders and found that companies are cutting worker compensation to fund AI investment. Not future tense. Now.
The cuts are broad-based: 61% of affected companies slashed bonuses, 60% cut equity or stock awards, 59% froze raises, 53% cut benefits, and 43% went after base salaries. And 26% have conducted or are planning layoffs specifically to fund AI.
The quiet part out loud? Among companies already making cuts, 92% say AI investment is a higher priority than employee satisfaction, and 88% say the weak job market makes it easier to cut compensation without losing talent.
Now, context matters. An EY CEO survey found two-thirds of CEOs expect to maintain or grow headcount — so we're not looking at mass layoffs. What we're looking at is a transition where companies are betting big on AI and asking workers to share the cost of that bet. The question isn't whether the investment will pay off (I think it will), but whether companies are being transparent enough about the trade-off.
Workers are feeling it. AudienceNet found a clean reversal from 2024: 44% of workers now say AI does more harm than good for finding jobs, building wealth, and quality of life — versus 38% who say it's a net positive. Last year, more workers said AI did more good than harm. Optimism dropped roughly 10 percentage points in a single year.
MetLife (n=7,500) adds texture: 59% of employees fear AI will make their jobs obsolete, and 24% feel they must actively compete with AI at work. MyPerfectResume calls it "career fog" — 70% of workers say they've questioned their career path in the past year, and 66% say their careers feel stalled or on autopilot. I think "career fog" is the right way to describe the uncertainty facing knowledge workers right now. You're not necessarily losing your job today. You just can't see where your career goes from here.
Blue Rose Research (n=2,716) captures the political dimension: 79% of Americans are concerned the government has no plan to protect workers from AI-driven job losses — the fastest-surging anxiety issue they've tracked over the past year. 57% say AI is advancing "too fast."
These numbers aren't a rejection of AI. They're a demand for better leadership through the transition. Workers aren't saying "stop building AI" — they're saying "tell me where I fit in what comes next."
That's a solvable problem, and it's one that companies and policymakers need to take seriously before the anxiety hardens into opposition.
The Paradox in Finance
Here's a fascinating dichotomy buried in the Randstad data: accounting and financial services workers are both the most worried about AI replacing their jobs AND the most confident in their ability to use it productively.
Their worry tops manufacturing (42%), transport & logistics (42%), and engineering (27%). And 57% are so afraid of job insecurity that they avoid raising concerns with their managers.
But here's the kicker: 71% say AI makes them more productive — far above engineering (58%), manufacturing (53%), and transport (52%). They're the power users: they understand the technology well enough to see exactly how it could eventually replace them.
Think about that. The better AI makes you at your job, the more clearly you can see how it could eventually do your job without you. That's not irrational anxiety — it's pattern recognition. But there's a flip side: these are also the workers best positioned to evolve with the technology. The 80% of financial services workers who say they feel prepared to use AI aren't wrong. The challenge is making sure their employers invest in that evolution rather than just pocketing the productivity gains.
Pajama Time: The Follow-Up
A quick follow-up on last week's physician AI story. Doximity's 2026 report (n=3,151) digs into a concept I love: "pajama time" — the after-hours documentation work that keeps doctors tethered to their laptops late into the evening.
Only 23% of physicians say AI has actually reduced pajama time so far. But 66% believe it will — a classic case of optimism outpacing real-world impact. We've seen this pattern over and over: people expect AI to transform their work in the future while reporting modest effects in the present.
But I'd bet on the optimists here. Among physicians already using AI, 75% report reduced administrative burden and 73% report improved work-life balance. The early adopters are seeing real results — it just needs time to scale. Healthcare might be where AI delivers on its quality-of-life promises first.
NOT GOOD: Teens, AI, and Nude Images
I need to flag something deeply concerning. A George Mason University study published in PLOS One surveyed 557 U.S. teens aged 13-17 and found:
- 55% said they had created at least one AI-generated "nudified" image of someone
- 54% reported having received an AI-generated sexualized image
- 36% said someone had created a sexualized AI image of them without their consent
- 33% said such an image had been distributed without their consent
Read those numbers again. More than a third of teens report being victimized by non-consensual AI-generated sexual imagery. This is not a fringe behavior. It is mainstream among American teenagers.
Most of the public discourse around kids and AI has centered on chatbots — are they safe, are kids getting attached, what guardrails do we need. But I think this concern is going to rise exponentially as image generation tools get better and easier to access. The chatbot conversation is important. This one is urgent.
It also helps explain why "regulate AI" numbers are so high. Rasmussen recently found 61% of voters back government regulation of AI. A lot of the strength behind that number isn't about workplace automation or existential risk — it's parents who are terrified about what's happening to their kids. And honestly? This is an area where regulation is exactly the right response. The major AI companies should be racing to solve this before Congress does it for them.
Parents Are Ready. Schools Aren't.
Speaking of parents: Echelon Insights surveyed 1,511 K-12 parents for the National Parents Union and the numbers are striking, though not surprising if you've been paying attention.
The headline guardrails stats are high but predictable — 86% want AI chatbots to show pop-up warnings before harmful content, 85% want parental alerts, 79% want parental permission requirements. Cross-partisan consensus, as expected.
But the most interesting finding, especially for me as a parent of four kids and a graduate school professor: 47% of parents say their child's school has NOT provided information about its AI policy. Only 37% have received any communication. And 57% say they haven't been asked for input or feedback on AI use in schools.
Parents aren't anti-AI. They just want a seat at the table. Schools that figure out how to bring parents in will be way ahead of those that keep sending home silence.
This is going to become a major issue. Trust me. Teachers are on the bleeding edge of AI integration and they're navigating it in real time. So are kids. The good news: parents are engaged, informed, and ready to be part of the conversation — 52% see equal benefits and downsides, which is ambivalence, not hostility.
Game Devs: "Tell Us When You Use It"
Shifting to creative industries — a GamesIndustry.biz survey of 826 game developers reveals a profession that wants transparency above all else.
88% say Valve should require developers to declare any generative AI usage on Steam. 77% would voluntarily self-declare AI usage even for concept work or efficiency tools — going beyond what Valve currently mandates. That's not anti-AI sentiment. That's pro-disclosure sentiment. A meaningful distinction.
The adoption numbers tell the rest of the story: 66% say their studio uses no generative AI at all, and around 85% say AI should never be used for voice generation, text generation, or music/audio in final products. Yet these are among the least-used applications anyway (voice: 2%, text: 2%, music/audio: 1%). The one exception: 83% say AI-generated placeholder audio is acceptable if replaced by real actors later.
This is the creative industry's version of the labeling debate. Think about last year's controversy over AI use in The Brutalist, and the more complex conversation around the AI-generated Val Kilmer appearing in a new role (approved by his family). The game industry's position is clear: use it if you must, but tell people about it.
Data Centers: A Communication Problem
Two polls this week reinforce what I've been saying — AI data centers have a growing public perception problem, and more awareness is making it worse, not better. This is a challenge the industry can fix, but only if it starts engaging the public directly.
Pew Research finds the numbers are staggeringly lopsided: only 4% of Americans say data centers are "mostly good" for the environment. 39% say mostly bad. On energy bills: 6% good, 38% bad. On neighbors' quality of life: 6% good, 30% bad.
And here's the trend I keep flagging: familiarity makes it worse. Roughly two-thirds of Americans who've heard "a lot" about data centers say they're mostly bad for energy prices, versus 42% among those who know only "a little." Younger adults are the most negative — 54% of adults under 30 say data centers are bad for the environment, vs. 26% of those 65+.
Data For Progress (n=1,149) adds a nuance: consumers are still more likely to blame utilities for high prices than data centers — 63% blame high utility profits, while 60% blame data center energy demand. So data centers aren't the top villain yet. But the gap is closing, and the industry needs to get ahead of this before it does. I'd love to see a poll that distinguishes between data center use for streaming/web versus AI or crypto — I suspect the public would react very differently. There's a case to be made that data centers power the services Americans love. Nobody's making it yet.
This Week's Shorter Stories
Anthropic vs. the Pentagon: Change Research (n=1,541) gets more granular than last week's YouGov number. A plurality (42%) say AI companies should have the right to decide how their tech is used regardless of what the military wants. Only 17% say the military should be able to commandeer any important U.S.-made technology. In the specific Anthropic-Hegseth dispute: 47% side with Anthropic, 31% with Hegseth, 21% unsure. Unsurprisingly, it's a partisan split — 62% of Republicans back Hegseth, 78% of Democrats back Anthropic. But even among military families, Anthropic holds a plurality (44%-35%).
AI and Housing: Redfin/Ipsos (n=4,000) finds AI pessimism extending to secondary impacts: 59% of Americans believe AI will eliminate jobs and make it harder to afford homes. Only 30% think AI will boost the economy enough to help homeownership. Of course, immediate economic concerns like tariffs still dominate — 65% say tariffs will keep interest rates high and strain housing. AI job losses are the slow-burn fear. Tariffs are the kitchen fire.
AI Slop Resumes: Robert Half (n=1,500 Canadian hiring managers) finds 89% report heavier workloads from AI-tailored job applications, with 61% saying it's actually slowed down hiring. 64% say AI resumes make it harder to verify real skills. It's a Canadian survey, but there's no reason to think this isn't happening in the U.S. too. AI is making it easier to apply and harder to hire — simultaneously.
Next Week
- School AI policies — the Echelon data suggests a parental awareness gap that's about to close. The districts that move first on transparent AI policies will set the standard.
- Compensation transparency — if the ResumeBuilder numbers are directionally right, Q1 earnings calls should start reflecting AI-driven comp restructuring. Companies that frame this as shared investment rather than extraction will fare better.
- Data center messaging — the Pew and DFP polls show awareness climbing and favorability sinking. The industry needs a proactive narrative. Who moves first?
- Teen AI safety — the George Mason study is the kind of data that moves Congress. This is a tractable problem — watch for platform-level solutions alongside legislation.
What Else We Tracked This Week
Not everything made the cut above, but Poll Vault discovered all of these in the past 7 days:
- Nearly 4 in 5 Americans Fear Government Has No Plan to Protect Workers from AI — Blue Rose Research
- Nearly Half of BYU-Idaho Students Fear AI Will Hurt Their Job Prospects — BYU-Idaho
- Trump Approval Underwater at 41% as Democrats Lead 2026 Ballot by 5 Points — Echelon Insights
This week Poll Vault tracked 17 AI polls — see them all →
Due Data is powered by Poll Vault. Get the full data behind every poll mentioned above.
Have a question or a poll I missed? Reply directly or find me on Bluesky.