Here I share my thoughts on various topics related to technology and education. Please note the following, especially if you disagree:
- The opinions expressed here are my own and do not represent those of my employer or any other organization I am affiliated with.
- Given the venue and space constraints, these posts are brief and may not fully capture the complexity of the topics discussed.
- These posts are meant to stimulate thought and discussion; they may not reflect my most current views as I continue to learn.
If you would like to share the post below, please use this permanent link.
Starting in late January 2026, I am sharing insights from a unique experience: working with PredictAP, an innovative startup that leverages AI to solve hard problems in the real-estate management domain. This collaboration began in the summer of 2021 with AI Jumpstart, a program initiated by visionaries in the Massachusetts government and at Northeastern University to bring together small, AI-focused Massachusetts businesses and faculty experts and foster innovation. Since then, I have been deeply involved in this project, including an entire year spent as a member of the engineering team during my sabbatical.
From Theory to Real-World Impact: How an Innovative Startup Made Me a Better Professor (January 24, 2026)
TL;DR
After many years in academia, which included collaborations with domain scientists on real-world problems, an industry collaboration still taught me valuable new skills that made me a better professor. These range from modern software engineering practice and team dynamics to first-hand experience with industrial-scale AI systems. Several factors were essential to this outcome:
- I worked with a small startup, which allowed holistic engagement with end-to-end challenges rather than being siloed into a narrow role.
- Despite its size, the team included top experts across all relevant technical areas.
- The company already had committed customers, providing real-world problems, real data, and constructive feedback to drive meaningful research and development (R&D).
- The problems were (and still are) complex and required deep, sustained engagement.
- Everyone in the organization, from founders to engineers, was fully committed to innovation and the thoughtful use of AI technologies.
Full Post
In my academic work, I study algorithms and systems that scale with data size, complexity, and velocity. Over the years, I have collaborated with companies and with domain scientists across fields such as ornithology, neuroscience, physics, and even rocket science (specifically combustion). These projects led to the outcomes one hopes for in academia: peer-reviewed publications, prototype systems, grant funding, and successful PhD completions. As a result, when I began consulting with PredictAP in late 2021, I did not expect to learn anything fundamentally new.
That assumption quickly proved wrong. As the initial 100 hours of AI Jumpstart–sponsored consulting came to a close, the work became increasingly compelling. What initially appeared to be a "boring accounting problem centered on extracting text from invoice images (OCR)" turned out to be a much deeper challenge: building systems that can holistically understand invoice content and automatically code invoices according to each customer's often incomplete and evolving policies. It became clear that achieving real impact would require much deeper involvement.
Fortunately, I had the opportunity to take a sabbatical during the 2022-2023 academic year. Contrary to popular belief, a sabbatical is not a vacation, but a chance to immerse oneself in new ideas, acquire new skills, and explore directions that are difficult to pursue during a regular academic year. Although I had initially planned to spend that year at a well-known research lab, the opportunity to work with PredictAP was simply too compelling. I decided to join the company full-time for the year.
This placed me inside a small, fast-moving, and highly focused team of talented engineers. The learning curve was steep: I had to understand the data, the product, the customers, and the company's software development practices. Yet it was precisely these challenges, combined with the generosity and support of everyone from fellow engineers to the founders, that made the experience so valuable.
So what did I learn, and how has it made me a better professor? A few highlights:
Modern, professional software engineering practices
My students and I had previously built substantial research software, but working within PredictAP's engineering culture, guided by seasoned professionals with experience at top technology companies, showed me how much more effective research software can be when built using contemporary best practices. In addition, PredictAP's ML engineers demonstrated how to structure machine learning code for maintainability, robustness, and extensibility—qualities that are often neglected in academic settings.
First-hand experience with large language models (LLMs) and generative AI at scale
During my sabbatical, LLMs and generative AI began their rapid ascent. In academia, learning and experimenting with such technologies would have had to compete with teaching, service, and existing research commitments. At PredictAP, these tools were central to the company's mission, given their relevance to language and document understanding. This allowed me to gain deep, practical experience applying these technologies to real customer data, at scale, in cloud environments. That kind of experience is extremely difficult to replicate in academia, where one often must first secure funding through lengthy proposal processes before meaningful experimentation can begin.
AI's implications for teaching
Few doubt that AI will profoundly reshape education. (See also my earlier post.) Beyond the challenges around academic integrity, I believe the deeper issue is deciding what skills we should teach (and how to do so), at a time when AI systems can perform many tasks once considered uniquely human. If professors use AI to create assignments and students use AI to solve them, what value remains in that educational loop? Employers are already finding that traditional coding interview problems are becoming obsolete, since AI systems can solve them more effectively than most candidates. This suggests that education must shift toward higher-order thinking, problem framing, and complex problem-solving skills that cannot easily be outsourced to AI. One of the best ways to cultivate such skills is through sustained work on complex, real-world problems—exactly like automated invoice coding. While real customer data is confidential, I plan to design a semester-long project course around synthetic datasets that capture the same challenges, with assignments inspired by our work at PredictAP.
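To make the idea of such a project course more concrete, here is a minimal sketch of what a starting point for a synthetic invoice-coding assignment might look like. Everything in it (the field names, the toy chart of accounts, and the keyword rules) is hypothetical and invented for illustration; it does not reflect PredictAP's data model, methods, or any customer data.

```python
# A minimal, hypothetical sketch of a synthetic invoice-coding exercise.
# All fields, codes, and rules are invented for illustration only.
import random
from dataclasses import dataclass

@dataclass
class InvoiceLine:
    vendor: str
    description: str
    amount: float
    gl_code: str | None = None  # general-ledger code to be predicted

# Toy "chart of accounts": GL code -> human-readable category.
CHART_OF_ACCOUNTS = {
    "6100": "Repairs & Maintenance",
    "6200": "Utilities",
    "6300": "Landscaping",
    "6900": "Miscellaneous",
}

# Hypothetical keyword rules a student might start from before moving to
# learned models; real coding policies are far messier and evolve over time.
KEYWORD_RULES = {
    "hvac": "6100",
    "plumbing": "6100",
    "electric": "6200",
    "water": "6200",
    "lawn": "6300",
    "snow": "6300",
}

def generate_invoices(n: int, seed: int = 0) -> list[InvoiceLine]:
    """Generate synthetic invoice lines with loosely realistic descriptions."""
    rng = random.Random(seed)
    templates = [
        ("Acme HVAC", "Quarterly HVAC filter replacement"),
        ("City Power", "Monthly electric service, building A"),
        ("GreenScape", "Lawn care and snow removal retainer"),
        ("Pipeworks", "Emergency plumbing repair, unit 4B"),
        ("Misc Vendor", "Consulting services"),
    ]
    lines = []
    for _ in range(n):
        vendor, desc = rng.choice(templates)
        lines.append(InvoiceLine(vendor, desc, round(rng.uniform(50, 5000), 2)))
    return lines

def code_invoice(line: InvoiceLine) -> str:
    """Baseline rule-based coder: match keywords, else fall back to Misc."""
    text = line.description.lower()
    for keyword, gl_code in KEYWORD_RULES.items():
        if keyword in text:
            return gl_code
    return "6900"

if __name__ == "__main__":
    for line in generate_invoices(5):
        line.gl_code = code_invoice(line)
        print(f"{line.vendor:12s} {line.amount:8.2f} -> "
              f"{line.gl_code} ({CHART_OF_ACCOUNTS[line.gl_code]})")
```

A semester-long assignment could then ask students to replace the keyword rules with a learned classifier, cope with coding policies that are incomplete or change over time, and measure how gracefully the system degrades, which is where the real difficulty of the problem begins.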
A deeper appreciation for the complexity of "invoice coding"
Finally, I came to appreciate that invoice coding is far from a trivial OCR problem. It is a rich, multi-faceted challenge that shares important characteristics with domains like autonomous driving and healthcare AI, rather than with stereotypes of "old-school accounting." But that is a topic for another article.
In summary, if you work in academia and have the opportunity to collaborate deeply with a small, innovative company tackling real-world problems, give it serious consideration. Worry less about the time it takes away from your research, and think about how it may enrich your perspective, your skills, and ultimately your effectiveness as a professor. But maybe wait until after tenure, just in case.