So I’ve run into a curious issue with my project, where I’m getting the dreaded EADDRINUSE error. The last thing I did before noticing it was adding a file to my assets CDN directory.
But the server is still unable to start, even after cleaning up the storage and running the recommended commands. I remixed the project, and the remix starts with no problems.
@Tim @Gareth This is definitely a bug with Glitch; I’m seeing it every now and then too. It tends to occur when I’m near the server’s memory limits. It’s as if the old server process doesn’t get properly stopped before Glitch tries to start it again. I’m using puppeteer, and I hit this about once in every few dozen restarts. I was previously remixing the project to get past it because I didn’t know we had access to kill and similar commands. Thanks @christian-svr!
Edit: It may have to do with the forced exit not going well:
Your app is taking a while to stop...
Your app did not stop in 5 seconds, forcing exit.
Thanks for your report! It would be great if you could share your project name with us (in a PM, if you prefer to keep it private) so that we can take a look and hopefully fix the issue for everyone.
I created a minimal test that replicates it, but unfortunately it takes a long time of restarting the app (about 10 minutes for me) to get the error to occur:
Try copying and pasting the (async function() { while(1) {...} })() block so there are two of them, then keep pressing space to edit the file so the app continuously restarts halfway through starting up, or while puppeteer is running. Then remove the second block so it’s back to just one, and keep doing the same thing. Again, it took me about 10 minutes of randomly pressing space every few seconds to get the error to occur.
I think it tends to occur when the server is near its memory limits, but I really have no idea what’s specifically causing it.
Once the error has occurred, the server doesn’t start, even after pressing space to trigger a restart many times. But if you visit the server’s public URL, it is actually still working. The reason, of course, is that the old server process is still running.
EDIT: I just set up an UptimeRobot monitor to ping the project so it hopefully stays awake and you can still see the error when you read this message, just in case it’s really hard to reproduce.
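If anyone else just needs to unstick a container in the meantime, killing whatever still holds port 3000 from the console should work. This assumes lsof is available in the container, which I haven’t verified:

```shell
# Find whatever is still listening on port 3000 and ask it to stop
# (SIGTERM); -r makes xargs a no-op when nothing is found.
lsof -ti :3000 | xargs -r kill
```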
Thank you for your report and your demo! We were indeed able to figure out how to improve our process manager a bit, and the specific issue you’re seeing should be gone now (at least the “listen on port 3000” part, and the number of processes left in the container).
Unfortunately, we can’t handle every edge case, because in general a user app can do basically whatever it wants; if it doesn’t want to shut down nicely, it won’t.
But in this case we were able to handle at least part of it, thanks to you too!