RLesur/crrri

websites that don't like to be scraped

Closed this issue · 1 comment

Any tips on how to deal with websites that don't like to be scraped and have a service in front of them keeping an eye out for automated agents? What distinguishes a headless browser from a normal browser from the perspective of the server?

Changing the user agent seemed to do the trick :)