Interesting topic, and one that's about to be ruled on.
Is it really that hard for tech companies to flag false, misleading, or dangerous information as such and exclude it from their recommendations? Heck, maybe even recommend therapy for someone who's incessantly viewing objectionable content. Isn't it their ethical responsibility to monitor the content they recommend? I think it is.

False and misleading information will always exist, but that shouldn't stop it from being labeled as such once it's identified. I'm not saying burn all the books. I'm saying make sure the librarian recommends content that doesn't harm others.