Crawler gets HTTP 403 Forbidden on glitch.com

My crawler (built with requests and bs4) finds every link on a page and, with some extra logic, follows them to crawl a whole site. But when I run it against glitch.com, it fails with urllib.error.HTTPError: HTTP Error 403: Forbidden. Why?
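For reference, a minimal sketch of the kind of crawler described, assuming a breadth-first walk restricted to the seed's host; the seed URL and page limit are placeholders. (One aside: requests raises requests.exceptions.HTTPError, so a urllib.error.HTTPError traceback suggests urllib is doing the fetching somewhere in your code.)

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl(seed, max_pages=50):
    """Breadth-first crawl of every same-host link reachable from seed."""
    host = urlparse(seed).netloc
    seen, queue = {seed}, deque([seed])
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()  # a 403 surfaces here as requests.exceptions.HTTPError
        visited.append(url)
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            # stay on the same host and never queue a page twice
            if urlparse(link).netloc == host and link not in seen:
                seen.add(link)
                queue.append(link)
    return visited


if __name__ == "__main__":
    for page in crawl("https://example.com"):  # placeholder seed
        print(page)
```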

Glitch blocks requests coming from its own IP addresses, to stop projects from pinging themselves to stay awake. If your crawler runs on Glitch, glitch.com will refuse it with a 403.
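If you want the crawl to survive that refusal instead of crashing, something along these lines would work; the skip-and-log fallback here is just one possible choice:

```python
import requests


def fetch(url):
    """Fetch a page; return None if the server refuses us."""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
    except requests.exceptions.HTTPError as err:
        # glitch.com answers 403 to requests from Glitch's own IPs,
        # so skip the page instead of aborting the whole crawl
        print(f"skipping {url}: {err}")
        return None
    return resp.text
```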
