Introduction
What is the LLM Playbook?
Given the explosive growth and diverse range of methodologies in the field of large language models (LLMs), there's an inherent need for structured and clear communication. That's where this playbook comes in. Unlike more technical blogs or exhaustive resources that delve deep into mathematical rigor, this playbook serves a unique purpose. It is primarily a platform where I can systematize and articulate my understanding of the rapid advancements in LLM training, optimization, and deployment. In collating my observations and insights, I aim to bring clarity to an area that is complex and ever-changing.
While this playbook is invaluable for structuring my own thinking, it is also intended to be a resource for others. Whether you are a newcomer looking for a guided introduction or a seasoned practitioner seeking up-to-date insights, this document aims to provide a curated view of the key developments shaping the future of large language models. Given my background as a medical doctor, you won't find an abundance of math-heavy equations or theoretical proofs here. Instead, the approach is designed to be intuitive, aiming to make the subject matter accessible to a broader audience. That said, I do assume that you have a basic understanding of Python and deep learning, as (relatively unoptimized) code snippets and examples will frequently be used to illustrate points.
Thank you for joining this educational journey, and I hope you find the playbook as enlightening as I find the process of maintaining it.
About Me
My name is Cyril Zakka, and I'm a medical doctor and postdoctoral fellow in the Hiesinger Lab in the Department of Cardiothoracic Surgery at Stanford University. My research interests primarily involve building and deploying large multimodal networks for medical imaging and autonomous robotic surgery.
If you have any feedback, comments, or questions, please don't hesitate to reach out: