Community Open Thread 34 - November 3, 2023

Happy Friday, I just moved my desk a few feet to the left so my landlord has space to remove the air conditioning unit from my home office window - it’s finally getting cold outside! And just under an hour ago, I wrapped up our first Glitch Jams Live livestream. Shouts out to everyone who joined, thanks to you I have to do it again next Friday. :smiling_imp:

Link time:

(Shouts out to anyone who saw this post when I accidentally published before it was done, you’re true fans!)

Now it’s your turn to share what you’re working on, what apps, blogs or podcasts you’re enjoying, what you dressed up for on Halloween, etc. Maybe I’ll share a photo of my costume - either way, see you on .


I watched the stream later - thanks for the shoutout! And thank you for respecting my pronouns! (I don’t usually write them in my profiles because I don’t want to impose them on others, so I was pleasantly surprised.)

Speaking of p5.js, I met Raphaël, the Processing Community Lead, today, and I thought more crossovers with the p5.js community could be fun :wink:


I’m making a serious effort to learn enough Gentoo to try running it in a Glitch container :skull:

:penguin: :penguin: :robot:


Working on some fun local AI stuff. Previously I wrote a Node.js Project Dysnomia bot for interacting with the system directly, to check out what the specific LLM I had loaded is good at and not good at.

I’m now running a Python bot on the same token, since most of the cool LLM stuff is in Python (and slash commands are pretty easy to get started with). The Python script is mainly for testing some cool RAG (Retrieval Augmented Generation) libraries for a domain-specific project.

RAG is really cool because the basic idea is simple: when a “general” LLM is asked a question, a system first pulls up relevant texts on the subject and includes them in the prompt to hint the language model toward a good response. This lets you get accurate answers without expensive fine-tuning of the model itself.
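The flow described above can be sketched in a few lines of Python. This is a toy illustration, not any particular RAG library: a naive word-overlap score stands in for the embedding-based similarity search a real setup would use, the `DOCS` snippets are made up, and `build_prompt` just shows where the retrieved context gets spliced in before the model call.

```python
# Toy RAG sketch: retrieve relevant text, then prepend it to the prompt.
# Real systems use vector embeddings for retrieval; word overlap is a
# stand-in here so the example stays self-contained.

DOCS = [
    "Glitch projects run in containers with a Node.js runtime preinstalled.",
    "Retrieval Augmented Generation grounds model answers in retrieved text.",
    "Slash commands are registered through the chat platform's HTTP API.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by naive word overlap with the question, return top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Hint the model with retrieved context before the actual question."""
    context = "\n".join(retrieve(question, docs))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What is Retrieval Augmented Generation good for?", DOCS))
```

The prompt that comes out the other end is what actually gets sent to the LLM; swapping the retriever for a proper vector store is the main upgrade path.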

The neat thing about the “open source” LLMs is that the field is pretty competitive - if you check out r/localllama, which was recently shouted out by a team from NVIDIA, you’ll see that people keep finding or making better and better models at smaller sizes.