As excitement around the utility of large language models (LLMs) intensifies, many companies are debating what role AI should play in their strategies and what skills will drive sustainable success.
At Columbia Business School’s Digital Innovation Conference, Mike Schuster, Head of our AI Core Team, shared his outlook. He championed the importance of programming skills, noting that humans will need to remain in the loop for the foreseeable future, and argued that it’s important to ground our enthusiasm around LLMs’ capabilities in reality.
Explore Mike’s perspective in this overview of a conversation with Ciamac Moallemi.
Analyze real-world use cases
With 25 years of experience in machine learning research and application, including contributions to the development of Google Translate before joining Two Sigma in 2018, Mike shared his optimism about the potential of advanced AI. He envisions a future that includes highly efficient LLMs. However, he also stressed the need to measure the advantage LLMs offer against current, real-world use cases rather than hypotheticals.
“I’m trying to fight a little bit against the hype that we hear often… We need to keep in mind that the reality [of LLMs] is much more mundane,” Mike said. “We can do some things a lot faster. We can process more data quicker. We can train models faster and cheaper. We can run more experiments.”
Mike pointed out that, while the mainstream buzz around LLMs has boomed in the last year or so, these technologies have actually been in use for much longer. At Two Sigma, for example, our scientists have utilized generative AI for well over five years and natural language processing (NLP) for over a decade. Here’s what we’ve observed:
When it comes to the deployment of LLMs for very complex financial problems, these models serve a variety of roles, from enhancing productivity to facilitating feature extraction and more. Mike illustrated this with an example: “Let’s say you have earnings calls or Fed speeches, and you have all the transcripts from all the Fed speeches, you would ask the model, ‘Okay, get me all the Fed speeches from the last 20 years. Can I map this on the interest rate change that will follow?’” By trial and error with prompt engineering, he continued, researchers learn how to extract features from the data.
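To make that workflow a bit more concrete, here is a minimal sketch of prompt-based feature extraction, assuming the openai Python client; the model name, the prompt wording, the “hawkishness” feature, and the helper function are illustrative assumptions, not Two Sigma’s actual tooling.

```python
# Minimal, illustrative sketch of prompt-based feature extraction: score each
# Fed speech transcript with an LLM, so the scores can later be compared
# against the interest-rate change that followed. NOT Two Sigma's actual
# pipeline; model name, prompt, and feature definition are assumptions.
from openai import OpenAI  # assumes the openai>=1.0 Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are analyzing a Federal Reserve speech transcript.
Rate how hawkish the speech is on a scale from -1.0 (very dovish)
to 1.0 (very hawkish). Reply with a single number only.

Transcript:
{transcript}
"""

def hawkishness_score(transcript: str, model: str = "gpt-4o-mini") -> float:
    """Ask the model for one numeric feature extracted from the raw text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(transcript=transcript)}],
        temperature=0,  # keep the extracted feature as repeatable as possible
    )
    return float(response.choices[0].message.content.strip())

# Hypothetical inputs: speech date -> transcript text.
speeches = {
    "2004-01-13": "Remarks on the economic outlook and monetary policy ...",
    "2004-06-30": "Testimony on inflation pressures and the path of rates ...",
}

# One feature per speech; aligning these scores with subsequent rate changes,
# and rewording the prompt when the scores look wrong, is the trial-and-error
# loop described above.
scores = {date: hawkishness_score(text) for date, text in speeches.items()}
```

In a sketch like this, the LLM serves only as a feature extractor; any downstream modeling of rate changes would rely on conventional statistical tooling.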
Noting the inherent limitations of accumulating financial data (there are only so many trading days in a year, after all), Mike pointed to the need for researchers to be patient and scientific. At the same time, the intensely competitive environment in finance compels continuous innovation to inch toward ever-higher precision.
The highly competitive nature of the financial sector makes it essential to pursue cutting-edge AI applications, Mike noted, even when there is a risk of tangible failure. Ultimately, as a research-driven organization, Two Sigma understands that failure is a key part of the scientific process; without the information it provides, there can be no innovation.
Lean into human expertise
Mike added that creativity is a crucial component of successful enterprise AI adoption, emphasizing that to effectively harness the power of LLMs and other AI advancements, business leaders should think about how to balance cutting-edge technology with strategic human insight.
He sees LLMs as a tool for augmenting human productivity and broadening opportunities for even more inventive solutions over time, noting again that such models existed for decades before their recent, dramatic performance gains. Mike foresees a future where AI continues to become steadily more efficient, advancing alongside human expertise.
Furthermore, he described the interdependence of collaboration and innovation: “You need people who understand all of these [technological challenges]. Because it is so complex… one person by him or herself cannot do these things anymore. Right? It needs to be a group of people.” Creative planning and fine-tuned reasoning skills, honed by lived human experience and acquired domain expertise, remain essential for sustainable innovation, and unique perspectives raise the likelihood of successful solutions.
Master core skills
As the session drew to a close, Ciamac invited Mike to respond to a prior guest’s prediction that programming is destined for obsolescence. Mike offered a more nuanced perspective, underscoring the enduring value of foundational skills and asserting that learning to program is akin to mastering an instrument.
Telling people not to learn to code anymore is “basically the same thing as saying you shouldn’t learn how to play piano because you have the radio,” he asserted. Coding, like music, teaches you to think creatively within a system, to break down problems clearly, and to stick with a complex skill until you’ve mastered it. The importance of learning to program, Mike says, is about “the creativity that you need, and the clarity of thinking… getting used to a longer process of learning something.”
These habits of mind are vital not only in engineering but also in finance, where we deal with millions of parameters, build prediction models that go far beyond text, and need “scientific common sense” to understand the market behaviors we’re simulating.
Watch the full conversation for more of Mike’s insights on AI in finance.