Tweetalyzer: Get your Twitter profile analyzed & roasted by a super-smart AI

Thought this was a cool and fun project. Tweetalyzer analyzes and roasts your Twitter profile using Gemini 1.5 (Gemini 2.0 as soon as the rate limits are dropped).

It’s also very easy to make your own - just add your own Gemini API key and a scraper API key (I’m using Kaito since they’re fast and have nice support; you can get a free key by sending them a DM - not sponsored, btw).
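For anyone wiring up their own remix, the Gemini call is roughly a single REST request. A minimal sketch, assuming the public generativelanguage.googleapis.com endpoint from Google's REST docs; the `buildGeminiRequest` helper and the prompt are illustrative, not taken from Tweetalyzer itself:

```javascript
// Sketch of a Gemini generateContent request. Helper name and prompt
// are made up for illustration; only the endpoint/body shape follows
// Google's public REST API.
function buildGeminiRequest(prompt, apiKey, model = "gemini-1.5-flash") {
  return {
    url: `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  };
}

// Usage (API key read from the environment, e.g. Glitch's .env):
// const { url, options } = buildGeminiRequest("Roast this profile: ...", process.env.GEMINI_API_KEY);
// const res = await fetch(url, options); // then read res.json()
```

Keeping the key in `process.env` rather than the client is the whole point of proxying the request through your own server.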

[image: example card]

3 Likes

for whatever reason this got 60 000 views and it broke, WHAT :sob::sob:

i don’t think these are bots either, since splitbee runs client-side. ignore the 3 events, it’s because splitbee only supports sending 2,500 events

explore the data for yourself: https://app.splitbee.io/projects/trcom-ptrck.glitch.me

1 Like

@glitch_support please help the site is lagging af can you please give me more ram? I’ve already boosted the project but it didn’t help :sob:

:sob:

Maybe try Vercel instead of Glitch and use the Glitch URL as a redirect. I don’t know if Vercel would be faster, though.

i fixed it by moving to bun, went from 100% to 30% CPU usage. thanks @wh0 :slight_smile:

1 Like

Cool. Don’t understand any of that but okkaaay.

1 Like

Bun is a really fast JavaScript runtime built with Zig.

that’s only using 34 MB of RAM? I haven’t looked into the source, but don’t modern language AIs use like tens of gigabytes?

obvious edit: oh right it’s using Gemini remotely. but then what actually uses the local app CPU?

maybe it’s this user counter part?

const fs = require("fs");

let count = 0;
setInterval(function () {
  // re-reads and dedupes the entire log file every two seconds
  fs.readFile(".data/log.txt", "utf-8", (_, f) => {
    count = new Set(f.split("\n").filter(Boolean)).size;
  });
}, 2000);

if it’s going to be reading and deduping this every two seconds, I feel like you might as well do this once on startup and keep a running Set in memory

// on startup: load existing users from the log once
const fs = require("fs");
const uniqueUsers = new Set(
  fs.readFileSync(".data/log.txt", "utf-8").split("\n").filter(Boolean)
);

// on rating
writeToLog(user);
uniqueUsers.add(user);

// get count
const count = uniqueUsers.size;
2 Likes

no, that was added after the problem and after I moved to bun. now everything is fine.
I think it was mainly due to the high number of users

ah cool. good on bun being more efficient :sweat_smile:

1 Like

You know, you should get your own domain for this project. If it has gotten this popular, it might be worth looking into ¯\_(ツ)_/¯

1 Like

Just proxying the Gemini/OpenAI requests (I have a system where each request has a 50/50 chance of using either one) and the scraper requests, counting and logging requests, and serving everything.
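The 50/50 split is presumably just a per-request coin flip. A sketch of one way to do it (the `pickProvider` name and injectable `rand` parameter are mine, not from the project):

```javascript
// Pick a provider with a 50/50 coin flip per incoming request.
// Sketch only: the thread doesn't show the project's actual routing code.
function pickProvider(rand = Math.random) {
  return rand() < 0.5 ? "gemini" : "openai";
}

// Passing in `rand` keeps the flip deterministic for testing:
pickProvider(() => 0.2); // "gemini"
pickProvider(() => 0.9); // "openai"
```

In production you'd call it with no argument and use the default `Math.random`.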

1 Like

update: switched to the latest version of bun using I've done it glitch - #11 by wh0. if you want to switch your project to bun too, I recommend remixing ~bun-starter since it already comes with most of the stuff (a functional server, a package.json for dependencies, etc.) pre-installed

1 Like