So, yet another tag standard with no enforcement whatsoever?
Scrapers already ignore robots.txt and impersonate regular users to bypass restrictions. Why would anyone bother with this at all?
The only part with any teeth is the encrypted content that lets scrapers through paywalls, but that seems to rely on each scraper being manually registered with the content owner. And if you (a content owner) are already establishing contracts with individual scrapers, it's not clear what the point of all this extra complexity is: just do IP-based auth, or an API token, or something similar, and bill them monthly for queries.
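For comparison, here's roughly what that "simpler" alternative looks like, a minimal sketch assuming Flask, with hypothetical token values and an in-memory usage counter; a real setup would keep tokens and metering in a database and hand the counts to an invoicing system:

```python
# Sketch of "issue API tokens to contracted scrapers and bill per query".
# Token values and the usage dict are made up for illustration.
from flask import Flask, request, abort, jsonify

app = Flask(__name__)

# Tokens issued to each scraper under contract (hypothetical).
VALID_TOKENS = {
    "tok_scraper_a": "Scraper A, Inc.",
    "tok_scraper_b": "Scraper B GmbH",
}

# Per-token query counts, invoiced monthly out of band.
usage = {token: 0 for token in VALID_TOKENS}

@app.before_request
def check_token():
    # Expect "Authorization: Bearer <token>" on every request.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    if token not in VALID_TOKENS:
        abort(401)
    usage[token] += 1  # meter the query for the monthly bill

@app.route("/content/<path:slug>")
def content(slug):
    # Authenticated, metered scrapers get the full (paywalled) article.
    return jsonify({"slug": slug, "body": "full article text here"})

if __name__ == "__main__":
    app.run()
```

That's basically it: a token check and a counter. Hard to see what the registration-plus-encryption dance buys on top of that.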