What’s all the Chat(GPT)ter About?

February 20, 2023 | Richard So



Introduction to generative AI

To start the year, the hottest topic for technology investors has been the rise of an artificial intelligence service called ChatGPT. Below we review what all the ‘buzz’ is about and the investment implications that are surfacing.

What is it?

ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI and built on a “large language model” (LLM). A large language model gains knowledge from immense datasets and employs a deep learning algorithm to recognize, predict, summarize, and generate text and other content. At the time of this writing, the current model is based on “GPT-3.5”, meaning ChatGPT was trained on a blend of text and code drawn from sources such as Wikipedia and Google from before Q4 2021. After training, ChatGPT generates conversational text by calculating the statistical probability of which words should come next. Thus, to the user, the chatbot appears capable of interacting with humans in a conversational manner.
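The idea of predicting the statistically most probable next word can be illustrated with a toy sketch. The example below uses a simple bigram frequency count over a hypothetical ten-word corpus; real LLMs instead use deep neural networks with billions of parameters trained on terabytes of data, so this is only a conceptual analogy.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus (real models train on vastly larger datasets).
corpus = "the model predicts the next word the model generates text".split()

# Count which word follows each word: a simple bigram frequency model.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# "model" follows "the" twice, "next" only once, so "model" wins.
print(most_likely_next("the"))  # → model
```

Chaining such predictions word after word is, in greatly simplified form, how the chatbot produces text that reads like conversation.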

ChatGPT is one form of “Generative AI,” which refers to AI algorithms that can generate data resembling human-made content, including text, pictures, audio and video. It can be useful to segment the term ‘generative AI’ into a ‘model layer’ and an ‘application layer’. The model layer comprises the large language models themselves, including Google LaMDA, Meta Sphere and OpenAI GPT; each LLM specializes in performing varied tasks, including translation, response formation, classification and text generation. The application layer comprises the interfaces and workflow tools that make the AI models accessible to businesses and consumers to solve problems and, possibly, to entertain.

According to some AI experts, the underlying infrastructure for generative AI, including computing hardware and model creation, will become commoditized as the technology goes mainstream. Hence, the applications (coding, video, text, images, etc.) and industry domain specialization (e.g., healthcare, telecom) will become the key differentiating factors among generative AI offerings. These are thought to be the opportunity sets that will give rise to many new AI start-ups. That said, training with domain knowledge is expected to be more challenging than training the current LLMs, which learn only from social media, Wikipedia, and public databases. Acquiring domain knowledge will likely take a long time, as the data can be more fragmented and difficult to access.

Investment Implications

AI computing workloads have doubled every 3-4 months since 2012, driven by increasingly complex AI models. For example, GPT-2.0 contained 1.5 billion parameters, GPT-3.0 contained 175 billion, and GPT-3.5 contained 555 billion. ‘Parameters’ refers to the parts of the model that are learned from historical training data and that, in effect, define the skill of the model. The amount of training data has also grown exponentially: GPT-3.0 was trained on 45 terabytes of data versus only 40 gigabytes for GPT-2.0. Overall, some experts believe that adding more parameters to AI models may no longer improve performance significantly; however, the size of the training data should continue to grow to improve accuracy. Hence, growing computing workloads will require stronger computing power, which will be positive for chipset vendors. This should also spur demand for memory vendors, as more advanced memory and storage will be required.
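To put the scale of that growth in perspective, the figures cited above imply very different growth factors for model size and training data between the GPT-2.0 and GPT-3.0 generations. The short calculation below simply restates the numbers from this section; it introduces no new data.

```python
# Figures as cited in the text above.
params = {"GPT-2.0": 1.5e9, "GPT-3.0": 175e9, "GPT-3.5": 555e9}  # parameters
data_bytes = {"GPT-2.0": 40e9, "GPT-3.0": 45e12}  # 40 GB vs 45 TB of training data

# Growth from GPT-2.0 to GPT-3.0.
param_growth = params["GPT-3.0"] / params["GPT-2.0"]      # ~117x more parameters
data_growth = data_bytes["GPT-3.0"] / data_bytes["GPT-2.0"]  # 1125x more training data

print(f"{param_growth:.0f}x parameters, {data_growth:.0f}x training data")
# → 117x parameters, 1125x training data
```

Training data grew roughly an order of magnitude faster than the parameter count between those two generations, which is consistent with the expert view that data volume, rather than parameter count alone, is becoming the main driver of computing demand.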

Overall, investors should expect an increase in AI computing hardware demand as the adoption of generative AI grows. There are four building blocks supporting AI computing hardware: a) the chipset, b) the software/development tool kit, c) the ecosystem/libraries for faster and easier machine learning, and d) data center integration. Some AI start-ups may have or use chipsets with stronger computing power; however, a lack of software and ecosystem support will prevent broader adoption among AI developers. Currently, the market leader in AI computing hardware remains NVIDIA (NVDA); however, challengers are arising with open-source ecosystems that can shorten development cycles and enable migration to different AI chipsets.

2023 appears to be a tipping point for generative AI to take off. Large tech companies have the resources to invest in AI and are motivated to incorporate generative AI into their core services: Microsoft and Google seek to incorporate it into their respective search engines, while Meta seeks to incorporate it into Facebook and its Metaverse platform. Smaller AI start-ups are likely to focus on specific industry domains in which the large-cap tech companies seem less interested at the moment. Therefore, investors and consumers should expect more commercialization and monetization opportunities for generative AI in the next few years.

The new and exciting use cases that generative AI can provide should be on the radar of all growth investors. That being said, this nascent technology is still very much in development, and we believe it should be viewed only as upside optionality for existing businesses that already have a core, profitable offering. Over time, there will be many new entrants and potential disruptors in this space; therefore, we would refrain from investing in companies whose existence relies primarily on generative AI alone. Speak to your advisor to review the investment case and the suitability of adding more speculative investments to your portfolio.