at://did:plc:44ybard66vv44zksje25o7dz/app.bsky.feed.post/3lweyleh7w22t
Record JSON
{
  "$type": "app.bsky.feed.post",
  "createdAt": "2025-08-14T18:42:28.912Z",
  "langs": [
    "en"
  ],
  "reply": {
    "parent": {
      "cid": "bafyreicb42pmlqnorv5rf2ffix3lcm27ajf77y2yb52jnyslmhp2kzt7sa",
      "uri": "at://did:plc:44ybard66vv44zksje25o7dz/app.bsky.feed.post/3lweyieya5k2u"
    },
    "root": {
      "cid": "bafyreih6cdvyrgr6b5unfhnwkhjk6bloc23id3srf3xjd2asde7bbmgsua",
      "uri": "at://did:plc:44ybard66vv44zksje25o7dz/app.bsky.feed.post/3lwesgi5h7s2x"
    }
  },
  "text": "this isn't exactly a clear-cut case: the robots.txt for that domain is very permissive, so the crawlers aren't violating intent/expectation\n\nit is just super wasteful. how do they have budget to endlessly re-crawl static content? google+bing+archive.org don't"
}
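On the robots.txt point in the post text: a fully permissive file (or an absent one) authorizes any crawler for any path, which is what makes the case murky. A minimal sketch using Python's stdlib parser; the domain is a hypothetical stand-in, since the post does not name the site in question:

# Check what a permissive robots.txt actually allows, using the
# stdlib parser. "example.com" and "SomeCrawlerBot" are stand-ins;
# the post does not name the domain or the crawlers.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# With a fully permissive file ("User-agent: *" plus "Allow: /"),
# or no file at all, can_fetch() returns True for any bot and any
# path: the crawlers are technically within stated policy.
print(rp.can_fetch("SomeCrawlerBot", "https://example.com/static/page.html"))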
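For reference, a record like this can be fetched over XRPC with com.atproto.repo.getRecord, using the repo DID, collection, and record key parsed from the at:// URI at the top of this page. A minimal Python sketch; the host bsky.social is an assumption here, since strictly the PDS endpoint should be resolved from the DID document:

# Fetch this record over XRPC via com.atproto.repo.getRecord.
import json
import urllib.parse
import urllib.request

# Components taken from the at:// URI at the top of this page.
repo = "did:plc:44ybard66vv44zksje25o7dz"
collection = "app.bsky.feed.post"
rkey = "3lweyleh7w22t"

# Assumption: the record is reachable via the bsky.social entryway;
# the authoritative PDS host comes from resolving the DID document.
query = urllib.parse.urlencode({"repo": repo, "collection": collection, "rkey": rkey})
url = f"https://bsky.social/xrpc/com.atproto.repo.getRecord?{query}"

with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

# record["value"] is the JSON object shown above;
# record["cid"] is the record's content hash.
print(json.dumps(record["value"], indent=2))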