The Disconnect Between Deep Learning Research and Practical AI Applications: Jeremy Howard's Insights

Introduction

In a thought-provoking episode of the Artificial Intelligence podcast hosted by Lex Fridman, Jeremy Howard delivers a candid assessment of the current state of deep learning research. Howard, the founder of fast.ai—a research institute dedicated to making deep learning more accessible—brings a distinctive perspective to the conversation as a Distinguished Research Scientist at the University of San Francisco, former president of Kaggle and a top-ranked competitor on the platform, successful entrepreneur, and educator.

The discussion explores a critical disconnect between academic research and practical applications in deep learning. Howard challenges the status quo of AI research culture, arguing that the incentive structures in academia often lead to incremental improvements on well-trodden paths rather than innovative solutions to real-world problems. For practitioners, researchers, and anyone interested in the future of AI, this conversation sheds light on how we might redirect research efforts toward more impactful outcomes.

The Fundamental Disconnect Between Academic Research and Practical Impact

Howard doesn't mince words when assessing the current state of deep learning research: "Most of the research in the deep learning world is a total waste of time." This bold statement stems from his observation of problematic incentive structures in academic research.

According to Howard, researchers face significant pressure to publish, which means working on topics their peers can easily recognize and evaluate. He explains:

"Scientists need to be published which means they need to work on things that their peers are extremely familiar with and can recognize in advance in that area. So that means that they all need to work on the same thing."

The result is a cycle where researchers focus on making minor advances in heavily studied areas, often with limited practical utility. The system lacks incentives for academics to pursue work with significant real-world applications, creating a disconnect between theoretical research and practical problem-solving.

Neglected Areas with Transformative Potential

Howard identifies several research areas that, despite their immense practical value, receive insufficient attention from the academic community. Two particular areas stand out:

Transfer Learning: The Underappreciated Game-Changer

Transfer learning—using knowledge gained from solving one problem to help solve a different but related problem—represents a significant opportunity for democratizing AI. Howard emphasizes its importance:

"If we can do better at transfer learning, then it's this like world-changing thing where suddenly lots more people can do world-class work with less resources and less data. But almost nobody works on that."

Howard's personal experience with transfer learning in Natural Language Processing (NLP) demonstrates its untapped potential. Despite having limited knowledge in NLP, he developed ULMFiT (Universal Language Model Fine-tuning), a transfer learning method that achieved state-of-the-art results on major benchmarks with just a few days of work.

"I wanted to teach people NLP, and I thought I only want to teach people practical stuff. I think the only practical stuff is transfer learning, and I couldn't find any examples of transfer learning in NLP, so I just did it."

This work—which Howard admits he didn't even write up himself, crediting his colleague Sebastian Ruder for the actual paper—was published at ACL (Association for Computational Linguistics), one of the top computational linguistics conferences. This success story illustrates how practical approaches can make significant impacts, even when they come from outside traditional academic channels.
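The core idea of transfer learning can be illustrated with a minimal sketch. This is a hypothetical toy example in NumPy, not Howard's ULMFiT: a fixed random projection stands in for a feature extractor "pretrained" on a large source task, and only a small logistic-regression head is trained on the new task's limited labeled data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" feature extractor: a fixed projection standing in
# for layers learned on a large source task. Scaled to avoid tanh saturation.
W_pretrained = rng.normal(size=(10, 4)) * 0.3

def extract_features(x):
    # Frozen: these weights are never updated on the target task.
    return np.tanh(x @ W_pretrained)

# Small labeled target dataset (synthetic toy data).
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only a logistic-regression "head" on top of the frozen features.
feats = extract_features(X)
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    w -= 0.5 * (feats.T @ (p - y) / len(y))
    b -= 0.5 * (p - y).mean()

acc = (((feats @ w + b) > 0) == y.astype(bool)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the small head is trained, far less labeled data and compute are needed than training the whole model from scratch, which is the resource saving Howard highlights.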

Active Learning: Maximizing Human Input

Active learning—strategically selecting which data a human should label to maximize model improvement—represents another neglected area with enormous practical value. Howard notes:

"Active learning is great, but almost nobody [is] working on it because it's just not a trendy thing right now."

Interestingly, practitioners consistently reinvent active learning principles when faced with real-world constraints. Howard explains:

"Everybody kind of reinvents active learning when they actually have to work in practice because they start labeling things and they think, 'Gosh, this is taking a long time and it's very expensive.' And then they start thinking, 'Well, why am I labeling everything? The machine is only making mistakes on those two classes—they're the hard ones. Maybe I ought to start labeling those two classes.'"

This natural progression leads practitioners to wonder, "Why can't I just get the system to tell me which things are going to be hardest?" This intuitive approach to optimizing human effort in machine learning workflows emerges organically in practical settings but remains understudied in academic research.
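The "tell me which things are going to be hardest" intuition corresponds to uncertainty sampling, one of the simplest active-learning strategies. A minimal sketch, using made-up class probabilities in place of a real model's predictions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical class probabilities over an unlabeled pool of 1000 items.
# In practice these would come from the current model (e.g. predict_proba).
pool_probs = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=1000)

def uncertainty_sample(probs, k):
    """Pick the k pool items the model is least sure about, i.e. the
    lowest top-class probability -- the 'hard ones' Howard describes."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

to_label = uncertainty_sample(pool_probs, k=10)
print("indices to send to a human labeler:", to_label)
```

Routing only these low-confidence items to human annotators is exactly the labeling-cost optimization that practitioners keep reinventing.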

The Academic Publication Paradox

Howard's limited engagement with academic publishing highlights another aspect of the research culture problem. Despite his significant contributions to the field, he admits:

"I've only really ever written one paper. I hate writing papers, and I didn't even write it—it was my colleague Sebastian Ruder who actually wrote it. I just did the research for it."

For Howard, the incentive to publish doesn't align with his goals of creating practical impact. He candidly states, "I don't care whether I get citations or papers... there's nothing in my life that makes that important, which is why I've never actually bothered to write a paper myself."

This freedom from publication pressure allows Howard to focus on practical advancements rather than incremental improvements designed to satisfy academic reviewers. However, he acknowledges that junior researchers don't have this luxury:

"For people who do [need publications], I guess they have to pick the kind of safe option, which is like, yeah, make a slight improvement on something that everybody is already working on."

This creates a systemic issue where potentially transformative research directions remain unexplored because they don't fit neatly into established research paradigms or publication expectations.

Bridging Theory and Practice

The conversation between Fridman and Howard illustrates a critical tension in the AI research ecosystem. While academic researchers often pursue theoretical advancements that can be easily published, practitioners solving real-world problems develop practical innovations that may never enter the academic literature.

Howard's work at fast.ai represents an attempt to bridge this gap by teaching pragmatic, results-oriented approaches to deep learning. His emphasis on transfer learning and active learning—techniques that maximize practical impact with limited resources—demonstrates a path forward for more useful research.

The success of his ULMFiT method suggests that significant breakthroughs can come from approaching problems with a practical mindset rather than strictly following academic trends. This raises important questions about how the AI research community might better align incentives with real-world impact.

Conclusion: Reimagining AI Research Priorities

Jeremy Howard's perspective challenges us to reconsider how we evaluate and prioritize AI research. While incremental improvements on established benchmarks dominate academic publishing, transformative techniques like transfer learning and active learning—which could democratize AI and make it more resource-efficient—receive disproportionately little attention.

The conversation points to a need for structural changes in how research is evaluated and rewarded. Rather than focusing exclusively on novelty within established paradigms, the field might benefit from greater emphasis on practical impact, resource efficiency, and accessibility.

For practitioners and industry professionals, Howard's message offers validation for pragmatic approaches that prioritize solving real problems over publishing papers. For academic researchers, it presents a challenge to consider how their work might bridge the gap between theory and practice more effectively.

As AI continues to transform industries and societies, aligning research priorities with practical impact becomes increasingly important. Howard's candid assessment reminds us that the most valuable contributions might not always be the most publishable ones.

Key Points

  1. Most academic deep learning research focuses on incremental improvements to established methods rather than solving practical problems.
  2. Transfer learning has transformative potential by enabling high-quality results with fewer resources and less data, yet remains relatively understudied in academia.
  3. Active learning—optimizing which data humans should label—is frequently reinvented by practitioners but receives limited academic attention.
  4. Academic publication incentives often discourage researchers from pursuing novel, practically useful directions in favor of "safe" incremental improvements.
  5. Howard's ULMFiT work demonstrates how practical approaches can achieve state-of-the-art results even when developed outside traditional academic frameworks.
  6. The disconnect between academic research and practical applications creates inefficiencies in advancing AI for real-world use cases.
  7. Bridging theory and practice requires reimagining how research is evaluated and incentivized within the AI community.

For the full conversation, watch the video here.
