Crawler doesn't work

My crawler (built with requests and bs4) finds all the links on a site and, with some extra code, crawls it. But for some reason, when I run it against a Glitch site, it fails with urllib.error.HTTPError: HTTP Error 403: Forbidden. Why?
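For reference, here is a minimal sketch of the kind of crawler described — assuming urllib for fetching (which is where the `urllib.error.HTTPError` in the traceback comes from) and bs4 for link extraction; the function names are illustrative, not from the original code:

```python
import urllib.request
from urllib.error import HTTPError  # HTTP Error 403 is raised as this type
from bs4 import BeautifulSoup

def extract_links(html):
    """Return every href found in an HTML document."""
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)]

def crawl_page(url):
    """Fetch one page and return its links.

    A 403 response surfaces here as urllib.error.HTTPError
    before any parsing happens.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return extract_links(resp.read())
```

With this structure the 403 is raised inside `crawl_page`, so the parsing code never runs — the server is refusing the request itself.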

Glitch blocks requests coming from its own IP ranges to prevent pinging, so a crawler hosted on Glitch gets a 403 when it requests another Glitch site.