AI in general and generative AI (genAI) in particular are already transforming the way we work and learn. Universities must adapt their curricula and teaching methods to prepare students for this new reality.
Impact on programmers and data scientists
If you think "I do not need AI to program (or build my data-science pipeline etc.)" or "I tried AI and it was kind of okay", then you have not really tried genAI. If you know what you want to do, AI makes implementing your ideas a lot faster!
Who benefits?
People with problem-solving skills who can communicate their solution ideas and break them down into right-sized pieces. You explain what you want to do, and the AI fills in all the annoying syntax needed to implement your ideas.
Whose jobs are in danger?
Anyone whose main expertise is "syntax". Consider predicting bird-migration patterns. You want to be the person who knows how to leverage data about bird sightings, vegetation, and weather to increase prediction accuracy. Not the one whose main expertise is knowing what each Python library does or how to string a bunch of standard pre-processing and prediction tools together into a data-science pipeline.
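To make the contrast concrete, here is a minimal sketch of the kind of "commodity" pipeline code genAI can generate from a one-sentence request. Everything specific in it is a hypothetical placeholder: the file bird_observations.csv, the column names, and the choice of a scikit-learn random forest are illustrative assumptions, not a recommended modeling approach.

```python
# A minimal sketch of a generic prediction pipeline; file and column names
# are hypothetical placeholders for the bird-migration example.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical observation data: one row per (site, week).
df = pd.read_csv("bird_observations.csv")
X = df[["temperature", "rainfall", "vegetation_index", "habitat_type"]]
y = df["species_present"]  # binary target: was the species sighted?

# Standard preprocessing: scale numeric features, one-hot encode categories.
preprocess = ColumnTransformer([
    ("numeric", StandardScaler(),
     ["temperature", "rainfall", "vegetation_index"]),
    ("categorical", OneHotEncoder(handle_unknown="ignore"),
     ["habitat_type"]),
])

# Chain preprocessing and a stock classifier into one pipeline.
model = Pipeline([
    ("preprocess", preprocess),
    ("classify", RandomForestClassifier(n_estimators=200, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

None of this code requires domain insight. Deciding which sighting, vegetation, and weather features actually matter for migration, and why, is where the human still adds value.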
What is the future of (higher) education?
Most of the material taught in K-12 schools and undergrad curricula is "commodity knowledge" that exists in textbooks, tutorials, and videos. AI can suck in all this knowledge, enabling it not only to solve standard textbook and homework problems (even if the educator creates new versions of those "problem templates"), but also to serve as an interactive tutor for learners of all ages. Hence educators must rethink everything: what skills we teach, how we teach them, and how we evaluate students. For the latter, it would be naive to rely on honor codes alone: if studying and solving assignments the "right" way takes significantly more time and yields lower scores than using AI as a shortcut, then students have a strong incentive to take that shortcut. Instead of complaining about cheating, we must find ways to make it less appealing.
What should educators do?
While many questions remain open, a few principles have started to emerge:
(1) Teach problem-solving skills, not syntax skills. For instance, instead of arguing whether Python or Java is better for your object-oriented design course, focus on the principles that allow the learner to solve a problem in an object-oriented way. If you know what objects you need, AI can generate them (almost perfectly) in your favorite language; see the sketch after this list.
(2) Create advanced project-oriented courses where students solve non-trivial problems that go beyond standard textbook exercises and must make creative decisions that have more than one "right answer".
(3) Base the course grade on skills that can be evaluated in a controlled environment. For example, it makes little sense to rely on scores from homework assignments and take-home exams when AI can answer the corresponding questions with little to no human guidance. Instead, consider oral-exam-style interactions where students demonstrate their problem-solving skills directly to the instructor or a teaching assistant. And for many exams, the classic paper-and-pencil approach in a controlled environment may still work best.
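To illustrate principle (1), here is a minimal sketch of the class skeleton genAI can produce once the designer has decided which objects the domain needs. The split into Site, Sighting, and MigrationModel is a hypothetical design for the bird-migration example above, not the "right answer".

```python
# A minimal sketch of AI-generated classes for a hypothetical design;
# the human decision was *which* objects to model, not the syntax.
from dataclasses import dataclass
from datetime import date


@dataclass
class Site:
    """A fixed observation location."""
    name: str
    latitude: float
    longitude: float


@dataclass
class Sighting:
    """One recorded observation of a species at a site."""
    species: str
    site: Site
    observed_on: date
    count: int = 1


class MigrationModel:
    """Aggregates sightings and answers simple trend queries."""

    def __init__(self) -> None:
        self._sightings: list[Sighting] = []

    def record(self, sighting: Sighting) -> None:
        self._sightings.append(sighting)

    def total_count(self, species: str) -> int:
        return sum(s.count for s in self._sightings if s.species == species)


# Example usage:
model = MigrationModel()
model.record(Sighting("barn swallow", Site("Lakeview", 52.1, 11.6),
                      date(2024, 4, 12), count=3))
print(model.total_count("barn swallow"))  # -> 3
```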
What should companies looking to hire recent graduates do?
If you evaluate an applicant based on questions that are better answered by AI, then you should either hire the AI or come up with better interview questions. AI will ace the typical small-problem pre-interview coding test. Can you come up with tests that measure big-picture problem solving and communication skills? Since academia faces similar challenges, it makes sense for companies to collaborate with educators in designing new evaluation methods.