Fabian Kostadinov

Macbook Pro for LLMs - Buyer's Guide in January 2024

Given that my old Macbook Pro with an Intel i7 is ageing (and quite gracefully, I might say), I was contemplating buying a new Macbook Pro with an M2 or M3 processor. Generally speaking, my needs are:

  1. software development (e.g. Docker),
  2. video recording (e.g. OBS Studio) and light video editing,
  3. playing around with LLMs running locally, mainly to learn and have fun; for more serious work I would switch to e.g. Google Colab or Huggingface and rent dedicated GPUs.

I am not particularly worried about #1 and #2, but running LLMs? I tend to spend a lot of money on a Macbook Pro, but then keep it for quite a few years before making the next purchase. So, which model should I focus on?

Read More 

Generative AI-Augmented Program Execution

The use of Generative AI enables a completely new programming paradigm. I call this paradigm Generative AI-Augmented Program Execution (GAPE). In brief, GAPE is a programming approach that mixes regular software code with output from GenAI models. The idea is best illustrated with an example. Imagine you have a list of animals: [“horse”, “cat”, “whale”, “goldfish”, “bacteria”, “dove”]. You have no information beyond those animal names. Now you are asked to sort this list according to the animals’ expected or average weight in descending order. Applying common sense reasoning, a human returns this ordered list: [“whale”, “horse”, “cat”, “dove”, “goldfish”, “bacteria”]. Assume we implement a (e.g. Python) function sort(list_to_sort: List[str], sort_criterion: str) -> List[str] such that the sorting happens entirely automatically behind the scenes by calling a Large Language Model. The input is a list of strings and a sort criterion described as a string parameter. Both are sent as a prompt to the LLM, which computes the result. The output is again the same list, just ordered according to said criterion. Wild, isn’t it? Welcome to Generative AI-Augmented Program Execution!
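
The function described above can be sketched in a few lines. This is a minimal illustration of the idea, not the article's actual implementation: the `ask_llm` callable is a hypothetical stand-in for whatever LLM client you use, and the stub below simply hard-codes a plausible model answer to show the plumbing.

```python
from typing import Callable, List

def llm_sort(list_to_sort: List[str], sort_criterion: str,
             ask_llm: Callable[[str], str]) -> List[str]:
    """Sort a list according to a natural-language criterion by asking an LLM.

    `ask_llm` is any function that sends a prompt string to a language model
    and returns its text completion (hypothetical; plug in your provider's
    client here).
    """
    prompt = (
        f"Sort the following items by this criterion: {sort_criterion}.\n"
        f"Items: {list_to_sort}\n"
        "Return only the sorted items, one per line."
    )
    answer = ask_llm(prompt)
    result = [line.strip() for line in answer.splitlines() if line.strip()]
    # Guard against the model inventing, dropping, or duplicating items.
    if sorted(result) != sorted(list_to_sort):
        raise ValueError("LLM response does not match the input items")
    return result

# Stand-in "model" that already knows the answer, just to show the flow;
# a real call would hit an LLM API over the network.
def fake_llm(prompt: str) -> str:
    return "whale\nhorse\ncat\ndove\ngoldfish\nbacteria"

print(llm_sort(["horse", "cat", "whale", "goldfish", "bacteria", "dove"],
               "expected average weight, descending", fake_llm))
# → ['whale', 'horse', 'cat', 'dove', 'goldfish', 'bacteria']
```

The validation step matters: LLM output is not deterministic, so a GAPE-style function should always check that the model returned a permutation of the input before trusting it.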

Read More 

Advanced Prompt Architectures - Querying and processing denormalized structured data

In the past I focused a lot on how large language models (LLMs) enable new ways to interact with text data. Recently, while talking to a colleague, it suddenly dawned on me that large language models also enable new ways to “query” or “process” structured tabular data. In this article I will compare four different approaches:

  1. Querying structured data via APIs.
  2. Querying structured data with text-to-sql or similar.
  3. Processing structured data that was denormalized (“exploded”) before feeding it into an LLM.
  4. Querying the vector embeddings of structured data that was denormalized.
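
To make approach #3 concrete, here is a small sketch of what denormalizing (“exploding”) structured data can look like before it is fed into an LLM. The tables and field names below are invented toy data, not from the article: two normalized tables are joined and flattened into self-contained sentences, so each line carries its full context into the prompt.

```python
# Hypothetical toy data: two normalized tables linked by customer_id.
customers = {
    1: {"name": "Alice", "city": "Zurich"},
    2: {"name": "Bob", "city": "Geneva"},
}
orders = [
    {"order_id": 101, "customer_id": 1, "item": "laptop",   "amount": 1200},
    {"order_id": 102, "customer_id": 2, "item": "monitor",  "amount": 300},
    {"order_id": 103, "customer_id": 1, "item": "keyboard", "amount": 80},
]

def denormalize(orders, customers):
    """Explode normalized rows into self-contained sentences an LLM can read
    without needing to resolve foreign keys itself."""
    lines = []
    for o in orders:
        c = customers[o["customer_id"]]
        lines.append(
            f"Order {o['order_id']}: {c['name']} from {c['city']} "
            f"bought a {o['item']} for {o['amount']} CHF."
        )
    return lines

context = "\n".join(denormalize(orders, customers))
# `context` would then be embedded in a prompt such as:
# "Given these orders:\n<context>\nWhat did Alice spend in total?"
print(context)
```

The trade-off is token cost: denormalization duplicates the customer data into every order line, but in exchange the LLM never has to perform a join.
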

Read More 

How I created a semantic Q&A bot for >13k recorded dreams using OpenAI, Langchain and Elasticsearch

I have been fascinated with dream interpretation for a long time. So, over the weekend I decided to create a database of >13k recorded dreams I scraped from the web. The database allows me to perform semantic searches rather than just plain-text searches, and beyond that it even allows GPT-enabled question answering. For example, I can ask the Q&A bot questions like: “Please search all dreams containing animals, and give me a list of their activities.” Not only does the bot have an understanding of what constitutes an animal (deer, elephant, snake, spider…), but it also provides meaningful, ChatGPT-style answers. Here’s how I did it.
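
The core of semantic search is ranking documents by the similarity of their embedding vectors to the query's embedding. The sketch below illustrates just that ranking step with made-up 3-dimensional vectors; in the actual setup described here, the embeddings would come from an embedding model (e.g. via OpenAI) and the ranking would be done by Elasticsearch's vector search rather than in plain Python.

```python
import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec: List[float],
                    doc_vecs: Dict[str, List[float]],
                    k: int = 2) -> List[str]:
    """Return the ids of the k documents most similar to the query."""
    ranked = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 3-d "embeddings" standing in for real model output.
dreams = {
    "dream_1": [0.9, 0.1, 0.0],   # a dream about a snake
    "dream_2": [0.1, 0.9, 0.0],   # a dream about flying
    "dream_3": [0.8, 0.2, 0.1],   # a dream about a spider
}
query = [1.0, 0.0, 0.0]           # "dreams containing animals"
print(semantic_search(query, dreams))
# → ['dream_1', 'dream_3']
```

This is why the bot “understands” what an animal is without any keyword matching: snake and spider dreams land near the animal query in embedding space, while the flying dream does not.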

Read More 

GPT Models as an Operating System / Communication Protocol for Autonomous Agents

While GPT models have been around for a few years, it was only with the arrival of OpenAI’s ChatGPT that they caught the wider attention of the public. Frantically, managers and IT specialists alike are searching for ways to make the best use of this new technology. As usual, the first ideas will not necessarily turn out to be the most mature, and it will take some time until the full potential – and the restrictions – of these GPT models are generally understood.

Read More