
Roundtable Recap: Senior Engineers Discuss LLMs and the Future of AI

The rapid advance of AI tech has become a central focus for Mission thought leaders, in lockstep with the software engineering community as a whole, and arguably every other industry. While it’s been a topic that’s organically come up in pretty much every episode of our podcast, The Better Build, we’ve also made it a point to host more focused discussions on the subject. A recent such event was held on June 14th, for which we gathered some of our top Leads to meet and discuss, in open-ended fashion, their thoughts, opinions, and gut reactions to this extraordinary and ever-evolving technology.

Mission tech lead Francis Roch generated the topics for the discussion and moderated it as well. Participants included Kayvan A. Sylvan, Fadi Asfour, David Raistrick, Siamak Sartipi, Tao-Nhan Nguyen, Mihai Ciobanu, Anas Siddiqui, Nicholas Brochu, and Mission Co-founder and CTO Fred Brunel, each sharing their own perspective and unique experience with AI and raising salient points from all angles.

And just before we get to what those views were, a quick reminder that events like these are a regular occurrence when you’re part of Mission’s network, whether as a member or as a company we work with. So if this is leaving you with FOMO, join us today at mission.dev.

Anticipation vs. Anxiety

Francis Roch began the discussion by noting how AI is already affecting people’s lives. Roch then shared his experience with OpenAI APIs, specifically since the release of GPT-4. He discussed how large language models can be used to extract valuable business insights from vast data sets, emphasizing the significance of data organization and querying in natural language. However, along with the excitement, there’s been plenty of anxiety too, some justified, some not. As Roch put it, “It’s been a challenge for me to sort of calm everybody down and say, Hey, it’s just, you know, an algorithm putting words together to kind of make sense.”

It’s worth noting that the conversation did not linger on the usual Terminator-style anxieties and exaggerated visions of an AI-saturated society that accompany most casual conversations on the topic, nor on the “AI will take our jobs” fear, which the speakers touched on only briefly. The primary concern was security and privacy around data.

Privacy Concerns Addressed

Top of mind for many companies is concern over the data they share with externally hosted models like ChatGPT. As Kayvan Sylvan put it, “I’m not sure how many people feel comfortable about talking directly to OpenAI’s GPT-4 about their proprietary documents…” Nicholas Brochu added, “The first thing people are going to want to do is not give all their data to the AI… I’m actually even surprised that people just do it freely like that. They just send their data, right?”

Francis Roch addressed this and what companies can do about it: “It’s always boiled down to two things: how to organize your data correctly. That’s half the battle. Some people do it right. Some people do it less right. But you want your data organized, have a really good sort of fortress with all the data in it, and then it’s how to ask questions of it.”
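To make that idea concrete, here’s a minimal, illustrative sketch of the “fortress of data plus natural-language questions” pattern. It assumes OpenAI’s Python SDK and a hypothetical retrieve_relevant_chunks() helper standing in for however you search your own organized document store; it’s not the setup any speaker described, just one common way to wire the pieces together.

```python
# Illustrative sketch only: query your own organized data in natural language.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def retrieve_relevant_chunks(question: str) -> list[str]:
    # Hypothetical helper: search your internal document store (database,
    # vector index, etc.) and return the passages most relevant to the question.
    raise NotImplementedError("Plug in your own search over your organized data")

def ask_our_data(question: str) -> str:
    context = "\n\n".join(retrieve_relevant_chunks(question))
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided company context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# Example: ask_our_data("Which product line grew fastest last quarter?")
```

The point of the sketch is the split Roch describes: the hard half is the retrieval step over well-organized data, and the model only sees the slice of that fortress relevant to the question.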

Democratization of AI

One of the central topics of the discussion fell squarely on the side of what’s to be excited about: how open-source models put this technology in the hands of the public in a major way. Kayvan Sylvan mentioned the accessibility of new and powerful models, saying, “The models have leaked, the weights have leaked, and people are able to take them and do their own thing with them.” This led to the concept of the “democratization” of AI versus private monopoly and control of this paradigm-shifting technology.

One of the primary sources for tools that let users build, train, and deploy ML models based on open-source code and technologies is Hugging Face, whose CEO, Clement Delangue, recently assured the US House that open-source AI is “extremely aligned” with American interests, by which he seemed to mean the democratic quality of the platform. Another tool, Gradio, purports to be “the fastest way to demo your machine learning model with a friendly web interface so that anyone can use it, anywhere!”
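As a taste of how low that barrier is, a demo along the lines Gradio advertises can be a handful of lines of Python. This is a generic sketch, not something shown at the roundtable; my_model is a placeholder for whatever model function you actually want to expose.

```python
# Minimal Gradio sketch: wrap a Python function in a shareable web UI.
# Assumes `pip install gradio`; `my_model` is a placeholder, not a real model.
import gradio as gr

def my_model(prompt: str) -> str:
    # Placeholder: call your actual ML model here.
    return f"(model output for: {prompt})"

demo = gr.Interface(fn=my_model, inputs="text", outputs="text")
demo.launch()  # serves a local web interface in the browser
```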

This wide accessibility is a singular and important aspect of the technology. As David Raistrick put it, “Many of us have worked with AI and machine learning models of various sorts and various products over the years, but none of them put it in the people’s hands in the way that this did.”

AI as Enabler

The speakers addressed how advancements in AI, specifically large language models like GPT-4, have become a powerful tool for extracting meaningful insights from massive data sets. These models enable businesses to make more informed decisions by letting them query data in natural language and surface more significant insights. The conversation also touched on how the rise of open-source models is democratizing access to complex computation, empowering more people to leverage the power of AI without extensive technical expertise.

“Not only is performance increasing with the quantization stuff, you can run some of these models on consumer hardware. You can run it on a MacBook, and that is the big, big deal,” said Nicholas Brochu.
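For a sense of what “run it on a MacBook” looks like in practice, here is a hedged sketch using llama-cpp-python, one popular way to run quantized open models locally. It assumes you have installed the package and downloaded a quantized GGUF weights file yourself; the model path below is a placeholder, not a real file, and this is not a tool any speaker specifically endorsed.

```python
# Illustrative sketch: run a quantized open model locally on consumer hardware.
# Assumes `pip install llama-cpp-python` and a quantized GGUF weights file you
# have downloaded yourself; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/open-model-q4.gguf")  # 4-bit quantized weights

output = llm(
    "Summarize why quantization lets large models run on a laptop:",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```

Quantization shrinks the model’s weights to lower precision, which is what brings memory requirements down into laptop territory, the trade-off Brochu is pointing at.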

Trying to interfere with that widespread accessibility may backfire. As Mihai Ciobanu pointed out, if users find a tool that helpful and you suddenly switch to a pay model, you may limit its use, as WhatsApp learned. Edmon Marine Clota offered a counter view, believing paths to monetize AI models are available, such as the models uploaded to the site Replicate, some of which users are already enthusiastically using. This would appeal to those who want to experiment, test, and go to market as quickly as possible.

AI may also have an impact on previous applications that were once enablers in their own time, possibly phasing them out. As Mihai Ciobanu put it, “If something can auto correct your code and it’s built on enough data, Stack Overflow also becomes irrelevant.”

Conclusion

This is just an overview of some of the key topics of the conversation, the video of which can be viewed in full by our members. If you’ve made it to the end and aren’t part of our network yet, then it’s a perfect time to head over to our website and kick off the process that will grant you access to the resources our senior engineers are using daily to keep levelling up. And be sure to follow us on LinkedIn for more content updates.
