Is there a way to increase the allowed size of POST requests?

Hi, in my project I’m making a POST request where one of the parameters could be up to (256^2)*4 characters long (roughly 256 KB). Unsurprisingly, this is too large for the server to handle, so I get a 413 Request Entity Too Large error. I’ve looked around, and there do seem to be ways to increase the maximum request size, but none that would work with Glitch.
Is there any way to do this with Glitch? If there isn’t, I could probably decrease the size by about 75%, but I really don’t want to.
Any help would be much appreciated.

If you want the full error, here it is:

PayloadTooLargeError: request entity too large
    at readStream (/rbd/pnpm-volume/30a81e72-677a-4450-8dde-4baf535d90b5/node_modules/
    at getRawBody (/rbd/pnpm-volume/30a81e72-677a-4450-8dde-4baf535d90b5/node_modules/
    at read (/rbd/pnpm-volume/30a81e72-677a-4450-8dde-4baf535d90b5/node_modules/
    at urlencodedParser (/rbd/pnpm-volume/30a81e72-677a-4450-8dde-4baf535d90b5/node_modules/
    at Layer.handle [as handle_request] (/rbd/pnpm-volume/30a81e72-677a-4450-8dde-4baf535d90b5/node_modules/
    at trim_prefix (/rbd/pnpm-volume/30a81e72-677a-4450-8dde-4baf535d90b5/node_modules/
    at /rbd/pnpm-volume/30a81e72-677a-4450-8dde-4baf535d90b5/node_modules/
    at Function.process_params (/rbd/pnpm-volume/30a81e72-677a-4450-8dde-4baf535d90b5/node_modules/
    at next (/rbd/pnpm-volume/30a81e72-677a-4450-8dde-4baf535d90b5/node_modules/
    at jsonParser (/rbd/pnpm-volume/30a81e72-677a-4450-8dde-4baf535d90b5/node_modules/

Thanks in advance for your help.

Can you try using binary instead of sending text? Can you compress the data you’re sending?


The error above is being thrown in the body-parser library, so if you can find another lib for reading the request body, you might have a better shot. It’s not a Glitch limitation.

Secondly (ihack beat me to the punch here…), you’re transferring the image data as ASCII digit characters, and there’s definitely a more compact way of doing it: you’re essentially using a byte per digit when you could be using a byte per number. You could look into that!


Instead of body parser, you could simply use this Express middleware:


Ah, right, I didn’t know it was the lib rather than Glitch — thanks for that, @SteGriff! So I took a look at body-parser-specific stuff and found that there’s a parameter you can pass to set the limit. However, it would be nice to compress the data to make it more efficient; how do you suggest doing that? Base64, maybe?


No problem! The trick is to look at the bottom line of the stack trace and see what file it occurred in.

Base64 will be OK as long as you run it on the bytes you want to store (numbers 0 to 255) and not on strings of them!

I have not tried this before, but Uint8Array should help you here:

If you have numbers above 255, use Uint16Array instead.

Create an array with your numeric values in a Uint8Array then run btoa() on it, like in this example:

Hope it helps!


To add on to this: see the limit option. The default limit is 100kb, which is indeed less than the estimate the asker posted.
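For reference, here’s a minimal sketch of raising that limit. The '1mb' value is an arbitrary choice for illustration; pick whatever fits your payload:

```javascript
const express = require('express');
const bodyParser = require('body-parser');

const app = express();

// body-parser's default limit is 100kb; raise it for both parsers.
app.use(bodyParser.json({ limit: '1mb' }));
app.use(bodyParser.urlencoded({ extended: true, limit: '1mb' }));
```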


Hmm, this doesn’t really seem to make anything better; if anything it makes it longer :frowning_face: I’m not really sure what the Uint8Array thing is doing here, but including it seems to basically undo any encoding :man_shrugging: Anyway, I’ve also tried plain btoa without the Uint8Array, but that also seems to make it longer… I’ve got no idea what’s going on, but I think I’ll keep it as it was originally for now, and I’ll post here if I find a better way of doing things. Thanks for your help anyway!


Hey again.

This has been really bothering me (in a good way). Thank you for posting this problem, it’s a good intersection of JS and Computer Science.

I did some experiments. Essentially we are bumping into fundamental limits of what you’re allowed to transmit in JSON, which boils down to the primitives of string, number, and boolean.

What’s been bothering me is that you have data like [43,16,163], which are all small integers that can be stored in a single 8-bit byte each. In binary, they look like this:

00101011	00010000	10100011

(Each 0/1 is a bit, hence 8 bits)

But to store them in JSON, they have to go as text one way or another, because even if we use the number type, each character takes up one 8-bit byte, plus commas. So instead of storing three bytes as above, we transmit, at minimum:

00110100	00110011	00101100	
00110001	00110110	00101100	
00110001	00110110	00110011

…which is 4 3 comma 1 6 comma 1 6 3.

It SUCKS that you have to use 3 bytes to store the number 163 in JSON. It turns out this is why BSON (Binary JSON) exists.

About base64

Base64 doesn’t really make things smaller. It reduces your character set from any possible ASCII or Unicode character down to 64 “safe” characters, emitting 4 output characters for every 3 input bytes. So in the case of your numbers, going from the string array to base64 will naturally make the payload bigger, because we’re using a smaller character set to represent the same data.
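A quick sanity check of that claim in Node (using Buffer, since btoa is browser-side; the numbers are just the first three from the thread):

```javascript
const bytes = [43, 16, 163];

// The "string array" form: ASCII digits and commas, one byte per character.
const asString = bytes.join(','); // "43,16,163" — 9 characters

// Base64-encoding that string makes it longer, not shorter:
// base64 emits 4 output characters for every 3 input bytes.
const b64 = Buffer.from(asString, 'utf8').toString('base64');

console.log(asString.length); // 9
console.log(b64.length);      // 12
```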

What can we do

The smallest representation of your data would be to say, well, what ASCII character is the 43rd, the 16th, the 163rd? But you quickly get problems because the first 32 characters are “control characters” i.e. weird garbage. And JSON doesn’t allow you to transmit those.

(Find an ASCII table at:


Let’s look at some disgusting magic:

This guy is looking for a way to do what you’re doing: pack byte values 0–255 into a Unicode string.

The accepted answer has final code like this:

function pack(bytes) {
    var chars = [];
    for (var i = 0, n = bytes.length; i < n;) {
        // Squeeze two 8-bit bytes into one 16-bit character code.
        chars.push(((bytes[i++] & 0xff) << 8) | (bytes[i++] & 0xff));
    }
    return String.fromCharCode.apply(null, chars);
}

function unpack(str) {
    var bytes = [];
    for (var i = 0, n = str.length; i < n; i++) {
        // Split each 16-bit character code back into its two bytes.
        var char = str.charCodeAt(i);
        bytes.push(char >>> 8, char & 0xff);
    }
    return bytes;
}

Here’s an output from a program I made to test various packing methods on your data:

$ node ui8a.js
intArrayString: 43,16,163,14,248,184,59,227,243,146,7,218,100,90,204,129,118,86,
intArrayString - Length:  115

intArrayBase64 - Length:  156

packIntArray: ⬐ꌎ㯣ߚ摚첁癖ᳫ荲圼엟꠽푥
packIntArray - Length:  16

unpackIntArrayString: 43,16,163,14,248,184,59,227,243,146,7,218,100,90,204,129,1
unpackIntArrayString - Length:  115

Remember I said “disgusting magic” so this isn’t necessarily the way to go.

HOWEVER, what we’re seeing here is that we can pack 32 ints into just 16 characters using JavaScript’s 2-byte string characters, and it comes out as a dreadful mix of Asian scripts and missing Unicode glyphs: ⬐ꌎ㯣ߚ摚첁癖ᳫ荲圼엟꠽푥

Unpacking the data, we get back exactly what you started with, i.e. it “round-trips” correctly: 43,16,163,14,248,184,59,227,243,146,7,218,100,90,204,129,118,86,28,235,131,114,87,60,197,223,168,61,212,101,233,203

I think this is promising and if your JSON libraries allow this string through, this lets you truly encode your binary data as binary instead of as a massive waste of space!

I’ve dropped my test program source code in a gist:

You need to run npm install abab for it to work.

Let me know what you think :slight_smile:


Best and longest answer on the forum so far!


Right… I’ve read through it once and I’ll have to read through it another few times to begin to comprehend it… but it sounds like it’s going to work really well! Thanks!

Edit: ok, I’ve given up understanding how this works, all I know is that it works really well… I’m just going to try it out with everything else…


There are some issues with this experiment. Let me first post this one:

and the corresponding code:

const intArrayBase64 = btoa(intArray);

That’s probably not what you were going for. If we decoded that, here’s what we get:


That is to say, it’s the above intArrayString result, as UTF-8 bytes (which coincide with ASCII here), base64 encoded. I think you meant to take those numerical values as bytes and base64-encode those. That results in this:


44 bytes

The other issue is that this

under-reports what would be compared against the body length limit. It’s not the string length, it’s the number of bytes once that string is put into a JSON string literal and then encoded into bytes. The other samples were all within the ASCII range, so we would have spent one byte per character anyway. Let’s ignore the escaping to go from a string value to a JSON literal. It’s quite likely that this would be encoded with UTF-8. I think that comes out to 47 bytes, but someone should double check that.
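To make that comparison concrete, here’s a small helper (my own sketch — `jsonWireBytes` is a made-up name) that measures what actually counts against the limit: the UTF-8 byte length of the value once serialized into a JSON literal:

```javascript
// UTF-8 byte count of a value after JSON serialization —
// roughly what body-parser compares against its `limit` option.
function jsonWireBytes(value) {
  return Buffer.byteLength(JSON.stringify(value), 'utf8');
}

console.log(jsonWireBytes('abc'));         // 5: "abc" including its two quotes
console.log(jsonWireBytes([43, 16, 163])); // 11: [43,16,163]
```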


Hey! Yes, you got what I was trying to get. I was honestly surprised at the result I got and chalked it up to a “JavaScript sucks” thing. Can you post a code sample of how you encoded it as a true int array?

I get what you mean about the length count being different from the byte count, but as it’s always transported as JSON (UTF-8 strings), I feel like it kind of doesn’t matter in the context of OP’s original problem. IMHO.

Thank you for the corrections on this, hope you can post some code for me :grin:


I didn’t actually code it up, but the input to btoa is this weird format where each JavaScript character is the fromCharCode of the byte value. So you’d have to write a loop to go through and construct that special input string, or use some function other than btoa.
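That loop could be sketched like this (assuming a browser-style btoa; in Node, `Buffer.from(bytes).toString('base64')` does the same in one call):

```javascript
// btoa expects a "binary string": each character's code is one byte value (0-255).
// Build that string from the byte array, then base64-encode it.
function bytesToBase64(bytes) {
  let binary = '';
  for (const b of bytes) {
    binary += String.fromCharCode(b); // one character per byte
  }
  return btoa(binary);
}
```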

edit: oh here’s a nice Q&A on the subject

For the experiment, I was using CyberChef with a From_Decimal(‘Comma’) operation followed by To_Base64(‘A-Za-z0-9+/=’).

And just to expand on my different opinion, what was found to be the source of the PayloadTooLargeError is documented as judging by the byte length:

[the limit option] Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; …


@SteGriff Somehow this ends up with nearly double the content-length:
Using the pack method, the content length was 619128 bytes.
Without compression, it was 364742 bytes. :sob:
I have no idea why, but it seems @wh0 was correct.
It is a mystery.
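My guess at the mystery (an assumption on my part, not something measured in the thread): most of the packed characters land above U+0800, and UTF-8 spends three bytes on each of those, so every two bytes of original data become three (or more, once JSON escaping joins in) on the wire. A quick Node check using the pack function from the earlier answer:

```javascript
// pack() from the earlier answer: two bytes per 16-bit character code.
function pack(bytes) {
  const chars = [];
  for (let i = 0; i < bytes.length; ) {
    chars.push(((bytes[i++] & 0xff) << 8) | (bytes[i++] & 0xff));
  }
  return String.fromCharCode(...chars);
}

// Deterministic pseudo-random bytes standing in for the grayscale data.
const bytes = Array.from({ length: 1000 }, (_, i) => (i * 131 + 43) % 256);
const packed = pack(bytes);

console.log(packed.length);                     // 500 code units — looks like a win…
console.log(Buffer.byteLength(packed, 'utf8')); // …but roughly 1.5 KB once UTF-8 encoded
```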

I’m gonna have a look at how I can do something about this, I’ll get back to you within 1-2 hours!


Ooh, sounds exciting, thanks!

Can the generated image only use black and white, or can it use any scale of black/white?

It uses any scale of grey. The way I’m doing it is generating a random number between 0 and 255 inclusive and setting the r, g and b to that number. I’m sending an array (or a compressed array as a string) of the 256^2 random values.

Edit: but that’s a good idea… I could represent it as a ratio of black to white… interesting idea… :bulb:

Successfully managed to create it.

The encoded image uses 102,087 bytes, or roughly 100 KB; if you run it over with gzip or something, I guess it could be a lot smaller.

Here’s the source code:

I’m sorry but I don’t have the time to explain it atm, am in the middle of moving :wink:

EDIT: Feel free to use the source code but remember to credit me by linking to the repo.

EDIT 2: If you intend to send it over HTTP, you don’t need to use the stringify; you can simply use encode(image[1]) as the data. That way the length will be 65,536 bytes, which is 64 KB.