Battery Ventures Executive Innovation Retreat. Dharmesh Thakker (BV), myself and Jacob Sisk (Meta)

Earlier this year, I had the unique opportunity to listen, learn and share with technology and data/analytics leaders across the financial services, healthcare, technology and entertainment industries on a range of topics related to AI/ML and innovation. It was just a couple of months after ChatGPT, DALL-E and Stable Diffusion dropped and the rumors around Microsoft's interest in OpenAI began swirling; as you can imagine, the discussions wove in and out of the topic of Generative AI. That event seems like ancient history given the pace of change we are experiencing with AI!

The hype cycle of Gen AI had officially begun! In its wake, many founders launched companies eager to take off (though perhaps not land) as "Gen AI startups". Executives at mid-to-large firms began discussing what it all means for their business and debated whether their strategy should be the tried-and-true "wait and see" approach or the riskier "get in the game" approach. Phrases like "iPhone moment" became common in boardrooms, but the path forward wasn't clear.

The pace of innovation was (and still is) moving at a breakneck speed that most organizations weren't equipped to keep up with. In fact, it’s moving so fast that we’ve already been through many inflection points in months that would typically take years. Below are some insights based on my observations and journey in Generative AI.

Inflection Point Zero: Innovate or Die


Fast forward from November 2022 (launch of ChatGPT) and we have now reached our first inflection point, where it is clear that waiting cannot be the strategy.

As the field of generative AI continues to rapidly evolve, businesses are facing multiple critical inflection points. They must decide whether to lean in and keep up with innovation, or risk falling behind their competition. However, it's important to recognize that even those who have taken a hands-on approach to innovation and embraced large language models (LLMs) understand that it's not as easy as simply typing a prompt.

The hard lesson: expecting a general-purpose large language model to meet a targeted business use case with precision, stay within expected boundaries, respond only with facts on the first attempt, and scale to many users and use cases with the right UX is clearly harder than asking ChatGPT to "Plan my vacation to Spain for me" and marveling at the results.
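In practice, keeping a general-purpose model "within expected boundaries" usually means pairing a narrowly scoped system prompt with validation of whatever comes back. The sketch below is illustrative only (the travel-assistant scenario, prompt, and schema are my own hypothetical examples, not from any particular product): the model is asked to answer in a fixed JSON shape, and anything that drifts outside that shape is rejected before it reaches the user.

```python
# Illustrative sketch: constraining a general-purpose LLM to a targeted
# use case by combining a scoped system prompt with output validation.
# The scenario and schema are hypothetical examples.

import json

SYSTEM_PROMPT = (
    "You are a travel-planning assistant. Answer ONLY with a JSON object "
    'of the form {"destination": str, "days": int}. If the request is '
    'outside travel planning, answer with {"error": "out_of_scope"}.'
)

ALLOWED_KEYS = {"destination", "days"}

def validate_reply(raw_reply: str) -> dict:
    """Reject model replies that drift outside the expected boundaries."""
    try:
        reply = json.loads(raw_reply)
    except json.JSONDecodeError:
        # Free-form prose instead of the requested JSON shape.
        return {"error": "not_json"}
    if "error" in reply:
        return reply
    if set(reply) != ALLOWED_KEYS or not isinstance(reply.get("days"), int):
        return {"error": "schema_violation"}
    return reply

# A well-behaved reply passes; a chatty free-form reply is caught.
print(validate_reply('{"destination": "Spain", "days": 7}'))
print(validate_reply("Sure! I'd love to plan your trip..."))
```

None of this is exotic, but it is exactly the unglamorous work that separates "typing a prompt" from shipping a use case.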

The key takeaway: All large businesses are required to participate.

Inflection Point 1: No One Will Be Fired For Choosing OpenAI


When it comes to proprietary LLMs, OpenAI stands out as a clear leader. They set many standards, including what a Gen AI startup and business model should look like, how high-performing LLMs should behave, what a killer app (ChatGPT) for LLMs should feel like, how an active participant in the conversation on AI's future (and our society's future) should lead, and how much investment is required to stay at the forefront.

Many hobbyist developers and startups are building with or on OpenAI's platform and collection of foundation models. OpenAI has embraced its community by expanding into plugins, business services and more. That's absolutely a strong choice at this point!

Inflection Point 2: For Everyone Else, There’s Open Source!

Perhaps the greatest surprise is how quickly Open Source models came on the scene and how remarkable their rise has been. Credit goes to Meta for triggering this shift with the release of Llama, whose leaked weights spawned a wave of derivative models. Emergent leaders such as Databricks, HuggingFace and others have embraced the Open Source community by contributing models and creating platforms and exchanges.

Open Source was always going to be an alternative - just as it has been for almost every other technology! Tracking and evaluating the models on the HuggingFace LLM Leaderboard is now a full-time job, given that new models show up every couple of days and cover a significant range of domains and needs.
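Evaluating that stream of new models quickly becomes mechanical: a fixed question set, a loop over candidates, a score per model. The sketch below shows the shape of such a harness in plain Python; the "models" are stand-in callables I invented for illustration, where a real harness would wrap each leaderboard model's API or inference pipeline.

```python
# Illustrative sketch: a tiny evaluation harness for comparing candidate
# LLMs on a fixed question set. The two "models" below are hypothetical
# stand-in callables; real ones would wrap actual model endpoints.

eval_set = [
    {"prompt": "2 + 2 = ?", "expected": "4"},
    {"prompt": "Capital of France?", "expected": "Paris"},
]

def good_model(prompt: str) -> str:
    # Hypothetical model that happens to answer both questions correctly.
    return {"2 + 2 = ?": "4", "Capital of France?": "Paris"}[prompt]

def broken_model(prompt: str) -> str:
    # Hypothetical model that never answers usefully.
    return "I don't know"

def score(model, dataset) -> float:
    """Fraction of prompts whose reply contains the expected answer."""
    hits = sum(1 for ex in dataset if ex["expected"] in model(ex["prompt"]))
    return hits / len(dataset)

leaderboard = {name: score(fn, eval_set)
               for name, fn in [("good", good_model), ("broken", broken_model)]}
print(leaderboard)
```

Substring matching is a deliberately naive scoring rule; real leaderboards use benchmark suites with far more careful metrics, but the treadmill of re-running the loop for every new release is the same.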

<aside> 💡 The Falcon LLM from the Technology Innovation Institute (TII) of the United Arab Emirates (UAE) shot to the top of the charts from seemingly nowhere as a high-performing model. Immediate adoption by the Open Source community pushed TII to change its licensing from a restrictive license to the truly open Apache License Version 2.0.

</aside>

Along with the Open Source foundation models came an entire ecosystem designed to enable development of Gen AI use cases by filling the gap between the model, the developer and the end-user experience.

<aside> 💡 LangChain and LlamaIndex are two very popular frameworks for building LLM applications.

</aside>

Inflection Point 3: Chasing is Exhausting


Generative AI providers and Cloud Service Providers come in pairs, and they're hyper-competitive, each with a mix of proprietary, partner and open-source offerings. Like choosing a Cloud Service Provider, choosing a Generative AI provider (which is also a Cloud Service Provider) can easily become paralyzing.

Determining which one to choose is a matter of preference and really a choice between two options: