Companies are already using AI to do the work that used to go to new graduates. Most students have no idea.

Here’s a conversation happening in boardrooms, marketing departments, and HR teams right now that isn’t happening in most university classrooms:

“We can use AI to handle the research, the first drafts, the data summaries, the competitive analysis. All the things we used to hire entry-level people for. So what do we actually need entry-level people to do?”

That’s not a hypothetical. That’s a real shift happening across business, marketing, and communications right now. And the students graduating into it are largely unprepared for what they’re walking into.

Not because they’re not talented. Because nobody told them the rules changed.

The gap is bigger than anyone wants to admit

EDUCAUSE’s 2024 AI Landscape Study, which surveyed more than 900 higher education technology professionals, found that fewer than one in three believe their institution is preparing students to use AI effectively in their careers. That’s not a rounding error. That’s a structural failure.

At the same time, McKinsey’s research on AI in the workplace shows that companies are already deploying AI for the task work that used to build foundational professional skills: market research, content creation, data analysis, summarization, and reporting. The work that used to be a new graduate’s entire first year.

The World Economic Forum’s Future of Jobs Report 2025 found that organizations could automate 30% of entry-level work hours, and that 41% of employers already plan to reduce their workforce in roles exposed to AI-induced skills obsolescence. Meanwhile, the 2024 Microsoft and LinkedIn Work Trend Index, based on a survey of 31,000 professionals across 31 countries, found that 66% of leaders say they wouldn’t hire someone without AI skills.

So here’s where we are: employers need AI-competent people. Universities aren’t producing them at scale. And students are caught in the middle, expected to arrive AI-ready for roles they’ve never seen, using skills nobody systematically taught them.

It’s not about knowing the tools

Here’s the part that gets missed in almost every AI literacy conversation: the gap isn’t tool familiarity. Most students have used ChatGPT. Many use it regularly. That’s not the problem.

The problem is judgment.

Researchers at Harvard Business School, in a landmark study conducted with Boston Consulting Group, introduced a concept called the “jagged technological frontier.” The idea is this: AI performs surprisingly well on some tasks and surprisingly poorly on others, and the line between them isn’t intuitive. Confident AI output doesn’t signal accurate AI output. Polished writing doesn’t mean verified facts. A well-structured analysis can be built entirely on invented data.

Knowing how to use AI isn’t the same as knowing when to trust it, how to verify it, what data you should and shouldn’t put into it, and how to take professional responsibility for what it helps you produce.

Those are the skills employers are screening for. Those are the skills most students don’t have. And those are the skills that almost no university curriculum is systematically building.

Why universities are behind, and why that’s not an excuse

Universities move slowly. That’s not a criticism; it’s how institutions that maintain academic rigor and curriculum integrity are supposed to work. You can’t redesign a four-year business program in a semester because a new technology showed up.

But AI didn’t show up last semester. It’s been reshaping knowledge work for years, and the pace of change has accelerated dramatically. Most universities are now scrambling to integrate AI into existing courses, which is meaningful progress, but it’s patchy, inconsistent, and mostly focused on using AI as a tool rather than developing the professional judgment to use it responsibly.

As of 2024, only one U.S. university, Purdue, requires an AI literacy course for graduation. The rest are providing opportunities, not requirements. That means whether a student leaves with meaningful AI competency depends almost entirely on which professors they happened to take, which electives they happened to choose, and whether anyone in their program thought to address it at all.

That’s a lottery. And the students losing it are the ones who didn’t know they were playing.

The double bind

What makes this particularly difficult for students is that they’re facing the problem from two directions at once.

On one side: entry-level opportunities are shrinking. According to an analysis of 126 million job postings worldwide, entry-level postings have fallen 29% since January 2024. The task work that used to be the on-ramp to a career, the research, the drafting, the data pulls, the summaries, is increasingly being handled by AI. There are fewer seats at the table for people who can only do the work AI can now do.

On the other side: the seats that remain require more than the entry-level work used to require. They require judgment. Verification. Professional accountability. The ability to direct AI, evaluate its outputs critically, and take responsibility for the final product in front of a client, a manager, or a stakeholder.

Students are expected to have developed those skills without the professional experience that used to be how you developed them. That’s the bind. The on-ramp is narrower, and the destination is higher.

What this actually means for you

If you’re a business, marketing, or communications student reading this, here’s the honest version:

Using AI is not a differentiator anymore. It’s table stakes. Every candidate you’re competing against has used it. The question employers are starting to ask isn’t whether you use AI. It’s whether you use it well. Whether you know when to trust it and when to check it. Whether you understand what data you shouldn’t feed it. Whether you can produce work your employer can actually use, not just AI output that needs to be completely redone.

The students who are positioning themselves well right now aren’t the ones who can name the most AI tools. They’re the ones developing a practice: prompt with precision, verify before you submit, protect sensitive data, stay current as the technology evolves, and take professional ownership of everything AI helps you produce.

That’s a learnable skill set. It’s not rocket science. But it requires someone to lay it out clearly: what it is, why it matters, and how to develop it, before you walk into your first interview or internship.

Most students aren’t getting that from their programs. The ones who seek it out on their own are going to have a meaningful advantage over the ones who don’t.

That’s what this site is for.


Fred Faulkner is the founder of Get AI Literate, an AI literacy platform built for college students in business, marketing, and communications. The 7-Pillar AI Literacy Framework and free AI Literacy Assessment are available at GetAILiterate.ai.

Find out where you actually stand

The free AI Literacy Assessment evaluates your skills across all 7 pillars and gives you a specific breakdown of where your gaps are. Takes 10 minutes. No email required.