Fabian Kostadinov

Nowhere Is Not Enough - Fractured Identity and the Proemial Relation as Channel

Abstract

This paper argues that identity between two logical subjects — two contextures in Gotthard Günther’s sense — is not a binary fact but a dimensional structure, and that the dimensions of identity accessible between any two contextures are determined not by the contextures themselves but by the channel through which they communicate. We show that identity fractures into multiple independent dimensions, each requiring a different kind of observation; that the proemial relation mediating between contextures is a partial mapping whose coverage is constituted by the channel; and that the view from nowhere — the fully neutral standpoint from which identity might be settled once and for all — is insufficient not merely because it is epistemically unavailable, but because it bypasses the channel, which is precisely where identity is constituted. The argument moves from a formal analysis of mutual reference through Günther’s polycontexturality and Spencer-Brown’s calculus of distinctions to a demonstration in executable code. The result is a precise account of why the question “are these two things the same?” cannot be answered from any position that actually exists — and why even an imagined position outside all perspectives would fail to answer it.
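The abstract’s central claim — that identity is decided dimension by dimension, and only within the channel’s coverage — can be put in deliberately crude computational terms. The sketch below is mine, not notation from the paper: a “contexture” is modelled as a bundle of named dimensions, and the channel as the set of dimensions it can transmit.

```python
from typing import Dict, Optional, Set

# Illustrative modelling choice: a contexture as a bundle of dimensions.
Contexture = Dict[str, str]

def identity_along(a: Contexture, b: Contexture,
                   channel: Set[str]) -> Dict[str, Optional[bool]]:
    """Compare two contextures dimension by dimension.

    The channel is a partial mapping: only the dimensions it covers
    can be compared at all. For uncovered dimensions the verdict is
    neither True nor False but None -- the question cannot be posed.
    """
    dimensions = set(a) | set(b)
    return {
        dim: (a.get(dim) == b.get(dim)) if dim in channel else None
        for dim in dimensions
    }

# Two subjects that agree in one dimension and differ in another.
x = {"shape": "circle", "origin": "left"}
y = {"shape": "circle", "origin": "right"}

# A narrow channel settles "shape" and leaves "origin" undecided;
# a wider channel fractures the verdict: same here, different there.
print(identity_along(x, y, channel={"shape"}))
print(identity_along(x, y, channel={"shape", "origin"}))
```

No channel in the sketch covers every dimension, so no single call returns a flat yes or no — which is the point the abstract makes about the view from nowhere.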

Read More 

Pointing, Oscillation, and the Fracture of Identity

There is a well-known photograph - or rather, a well-known kind of image - in which Stan Laurel and Oliver Hardy point at each other simultaneously. Each one points at the other. Neither points at himself. The image is funny, and also a little vertiginous, because it is not immediately clear who is accusing whom, or whether the gesture means anything at all when it is perfectly mutual. It is worth taking that vertigo seriously. Let us try to say, from a philosophical perspective, exactly what is happening.
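The vertigo of the perfectly mutual gesture can be made mechanical. In this small sketch (the modelling is mine, not the article’s), each subject’s pointing refers to the other, and following the gesture never comes to rest — it oscillates:

```python
# Each subject points at the other; neither points at himself.
points_at = {"Laurel": "Hardy", "Hardy": "Laurel"}

def follow(start: str, steps: int) -> list:
    """Trace who is pointed at, hop by hop."""
    trace = [start]
    for _ in range(steps):
        trace.append(points_at[trace[-1]])
    return trace

print(follow("Laurel", 4))
# ['Laurel', 'Hardy', 'Laurel', 'Hardy', 'Laurel']
```

The trace has no fixed point, only a two-cycle — a minimal picture of why mutual reference refuses to settle.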

Read More 

Macbook Pro for LLMs - Buyer's Guide in January 2024

Given that my old MacBook Pro with an Intel i7 is ageing (quite gracefully, I might say), I was contemplating buying a new MacBook Pro with an M2 or M3 processor. Generally speaking, my needs are:

  1. software development (e.g. Docker),
  2. video recording (e.g. OBS Studio) and light video editing,
  3. playing around with LLMs running locally mainly to learn and have fun, for more serious work I’d then switch to e.g. Google Colab or Huggingface and rent dedicated GPUs.

I am not particularly worried about #1 and #2, but running LLMs? I usually spend a lot of money on a MacBook Pro, but then keep it for quite a few years before making the next purchase. So, which model should I focus on?

Read More 

Generative AI-Augmented Program Execution

The use of Generative AI enables a completely new programming paradigm, which I call Generative AI-Augmented Program Execution (GAPE). In brief, GAPE is a programming approach that mixes regular software code with output from GenAI models. The idea is best illustrated with an example. Imagine you have a list of animals: [“horse”, “cat”, “whale”, “goldfish”, “bacteria”, “dove”]. You have no further information about these animals. Now you are asked to sort this list by the animals’ expected or average weight in descending order. Applying common-sense reasoning, a human returns this ordered list: [“whale”, “horse”, “cat”, “dove”, “goldfish”, “bacteria”]. Assume we implement a (e.g. Python) function sort(list_to_sort: List[str], sort_criterion: str) -> List[str] such that the sorting happens entirely automatically behind the scenes by calling a Large Language Model: the list of strings and the sort criterion description are sent as a prompt to the LLM, which computes the result. The output is the same list again, just ordered according to said criterion. Wild, isn’t it? Welcome to Generative AI-Augmented Program Execution!
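A minimal sketch of such a function might look as follows. The name sort_with_llm and the complete callback are my inventions; the actual call to a model provider’s API is left abstract behind that callback, and the JSON round-trip plus permutation check are one plausible way to keep the model’s free-form output honest:

```python
import json
from typing import Callable, List

def sort_with_llm(list_to_sort: List[str], sort_criterion: str,
                  complete: Callable[[str], str]) -> List[str]:
    """Sort a list by sending it, with a natural-language criterion,
    to an LLM. `complete` is a placeholder for any prompt -> text call."""
    prompt = (
        "Sort the following JSON list according to this criterion: "
        f"{sort_criterion}.\n"
        f"Items: {json.dumps(list_to_sort)}\n"
        "Answer with a JSON list only."
    )
    result = json.loads(complete(prompt))
    # Guard: the model must return a permutation of the input,
    # otherwise we reject the answer rather than trust it.
    if sorted(result) != sorted(list_to_sort):
        raise ValueError("LLM response is not a permutation of the input")
    return result
```

Regular code handles prompt construction, parsing, and validation; the “common sense” step in the middle is delegated to the model — which is the essence of the GAPE idea.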

Read More 

Advanced Prompt Architectures - Querying and processing denormalized structured data

In the past I focused a lot on how large language models (LLMs) enable new ways of interacting with text data. Recently, while talking to a colleague, it suddenly dawned on me that LLMs also enable new ways to “query” or “process” structured tabular data. In this article I will compare four different approaches:

  1. Querying structured data via APIs.
  2. Querying structured data with text-to-SQL or similar.
  3. Processing structured data that was denormalized (“exploded”) before feeding it into an LLM.
  4. Querying the vector embeddings of structured data that was denormalized.
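To give a feel for approach #3, here is one simple way denormalization (“exploding”) might look in practice. The function name and the row-to-sentence format are mine, chosen only for illustration: each structured record is flattened into a plain-text line that can be pasted into a prompt.

```python
from typing import Dict, List

def denormalize(rows: List[Dict[str, object]]) -> List[str]:
    """Flatten ("explode") structured records into one plain-text
    line per row, so an LLM can consume them inside a prompt."""
    return [
        "; ".join(f"{column} is {value}" for column, value in row.items())
        for row in rows
    ]

# Hypothetical sample table.
orders = [
    {"customer": "Alice", "item": "laptop", "amount": 1200},
    {"customer": "Bob", "item": "mouse", "amount": 25},
]

context = "\n".join(denormalize(orders))
prompt = f"Given these orders:\n{context}\nWhich customer spent the most?"
print(prompt)
```

The trade-off, compared to text-to-SQL, is that the data itself (not just the schema) ends up in the prompt — fine for small tables, costly or impossible for large ones.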
Read More