Doing the Lord’s work in the Devil’s basement

  • 0 Posts
  • 28 Comments
Joined 10 months ago
Cake day: May 8th, 2024

  • They have no ability to actually reason

    I’m curious about this kind of statement. “Reasoning” is not a clearly defined scientific term; it has myriad different meanings depending on context.

    For example, there is research showing that LLMs cannot use “formal reasoning”, in the sense of the branch of mathematics dedicated to proving theorems. However, the majority of humans can’t use formal reasoning either. By that standard, most humans would be “unable to actually reason” and therefore not Generally Intelligent.

    At the other end of the spectrum, if you take a more casual definition of reasoning, for example Aristotle’s discursive reasoning, then that’s an ability LLMs definitely have. They can produce sequential movements of thought, where one proposition leads logically to another, such as answering the classic: “If humans are mortal, and Socrates is a human, is Socrates mortal?”. They demonstrate this ability beyond their training data, meaning their weights encode a “world model” which they use to solve new problems they have never seen before.

    Whether or not this is categorically the same as human reasoning is immaterial to this discussion. The distinct quality of human thought is a metaphysical concept which cannot be proved or disproved using the scientific method.






  • Honestly, the use case I’m working on is pretty mind-blowing. The user records an unstructured voice note like “i am out of item 12, also prices of items 13 & 15 is down to 4 dollars 99, also shipping for all items above 1kg is now 3 dollars 99”, and the LLM searches the database for items >1kg (using tool calling), then generates a JSON document representing the changes to be made. We use that JSON to build a simple UI where the user can review the changes, then voilà, it’s sent to the backend, which persists the changes in the database. In the ideal case the user never even pulls up the virtual keyboard on their phone; it’s just “talk, check, click, done”.
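
    To make that concrete, here is a minimal sketch of what a schema for that change payload could look like. Every name here (ItemChange, ChangeSet, the field names) is hypothetical, made up for illustration rather than taken from the actual project:

    ```python
    from typing import Literal, Optional
    from pydantic import BaseModel

    # One atomic change extracted from the voice note.
    class ItemChange(BaseModel):
        item_id: int
        kind: Literal["out_of_stock", "price", "shipping"]
        new_price: Optional[float] = None      # e.g. 4.99 for "4 dollars 99"
        new_shipping: Optional[float] = None   # e.g. 3.99

    # The full set of changes the user reviews before it is persisted.
    class ChangeSet(BaseModel):
        changes: list[ItemChange]
    ```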



  • I’m currently working on something like this! It’s even simpler, as you can get structured output from the ChatGPT API: you give it a JSON schema and it’s guaranteed to respond with JSON that validates against that schema. I’ve spent a couple of weeks hacking at it and I’m positively impressed; I have had clean JSON 100% of the time, and the data extraction is pretty reliable too. A minimal sketch of what the call looks like is below.

    The tooling is reaching a sweet spot right now where it makes sense to integrate LLMs into production code (if the use case makes sense and you haven’t just shoehorned it in for the hype).
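
    Here’s roughly what that structured-output call looks like with the OpenAI Python SDK. This is a minimal sketch: the schema is a toy example, and the model name is only one of the models that support structured outputs.

    ```python
    from openai import OpenAI  # pip install openai
    from pydantic import BaseModel

    # Toy schema: the SDK converts this Pydantic model to a JSON schema,
    # and the API constrains the output to validate against it.
    class Item(BaseModel):
        name: str
        price: float

    class Inventory(BaseModel):
        items: list[Item]

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    completion = client.beta.chat.completions.parse(
        model="gpt-4o-2024-08-06",  # a model that supports structured outputs
        messages=[
            {"role": "system", "content": "Extract every item and its price."},
            {"role": "user", "content": "two widgets at 4.99 and a gizmo at 12.50"},
        ],
        response_format=Inventory,
    )

    inventory = completion.choices[0].message.parsed  # a validated Inventory instance
    print(inventory.items)
    ```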



  • Yeah, I did some looking up in the meantime, and indeed you’re going to have a context-size issue. That’s why it’s only summarizing the last few thousand characters of the text: that’s all that fits in the model’s context window.

    There are some models fine-tuned for an 8K-token context window, and some even for 16K, like this Mistral brew. If you have a GPU with 8 GB of VRAM you should be able to run it using one of the quantized versions (Q4 or Q5 should be fine), and summarization should still be reasonably good. A sketch of what running it locally could look like is below.
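
    If you go that route, here is a minimal sketch of running a quantized GGUF model with llama-cpp-python. The model filename is a placeholder for whichever quantized build you download, and "article.txt" stands in for your document:

    ```python
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(
        model_path="./mistral-7b-16k.Q4_K_M.gguf",  # placeholder: your quantized model file
        n_ctx=16384,      # the 16K context window the model was tuned for
        n_gpu_layers=-1,  # offload all layers to the GPU
    )

    long_text = open("article.txt").read()  # placeholder input document

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize the following text:\n" + long_text}],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])
    ```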

    If 16K isn’t enough for you, then that’s probably not something you can do locally. However, you can still run a larger model privately in the cloud. Hugging Face, for example, lets you rent GPUs by the minute and run inference on them, and it should only cost you a few dollars. As far as I know this approach is still compatible with Open WebUI. A sketch of what the client side could look like is below.
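
    For the cloud route, the client side could look something like this with huggingface_hub. A minimal sketch, assuming you point it at a hosted model or an endpoint you rent; the model name and token are placeholders:

    ```python
    from huggingface_hub import InferenceClient  # pip install huggingface_hub

    # Works against Hugging Face's hosted inference or a dedicated endpoint;
    # the model name and token below are placeholders.
    client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.3", token="hf_...")

    long_text = open("article.txt").read()  # placeholder input document

    resp = client.chat_completion(
        messages=[{"role": "user", "content": "Summarize the following text:\n" + long_text}],
        max_tokens=512,
    )
    print(resp.choices[0].message.content)
    ```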