Other samples:

Android: https://github.com/nipunru/nsfw-detector-android

Flutter (BSD-3): https://github.com/ahsanalidev/flutter_nsfw

Keras (MIT): https://github.com/bhky/opennsfw2

I feel it’s a good idea for those building native clients for Lemmy to integrate projects like these and run offline inference on feed content for the time being, to cover content that isn’t marked NSFW but should be.
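As a rough illustration of what that client-side check could look like: the models above output an NSFW probability for an image, and the client can blur or hide anything the server didn’t flag but the local model scores above a threshold. This is just a minimal sketch; the `nsfw_score` field, the `Post` shape, and the 0.8 threshold are all illustrative assumptions, not any client’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Post:
    url: str
    marked_nsfw: bool   # flag as reported by the instance
    nsfw_score: float   # hypothetical output of a local model, in [0, 1]

NSFW_THRESHOLD = 0.8    # illustrative; would need tuning per model

def should_blur(post: Post) -> bool:
    """Blur posts the server flagged, plus ones the local model catches."""
    if post.marked_nsfw:
        return True  # respect the server-side flag as-is
    return post.nsfw_score >= NSFW_THRESHOLD

posts = [
    Post("a.jpg", marked_nsfw=False, nsfw_score=0.05),
    Post("b.jpg", marked_nsfw=False, nsfw_score=0.93),
    Post("c.jpg", marked_nsfw=True,  nsfw_score=0.10),
]
flagged = [p.url for p in posts if should_blur(p)]
```

The point of keeping the server flag as an unconditional `True` is that the local model only ever adds filtering on top of moderation, never removes it.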

What does everyone think about enforcing further censorship on the client side, especially in open-source clients, as long as it pertains to this type of content?

Edit:

There’s also this, which takes a bit more effort to implement properly but provides a hash that can be used for reporting needs: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX

Python package (MIT): https://pypi.org/project/opennsfw-standalone/
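On the reporting angle: perceptual hashes like the one AppleNeuralHash2ONNX extracts are typically compared by Hamming distance rather than exact equality, so a client could check a candidate hash against a set of previously reported ones. A minimal sketch, assuming hex-encoded hashes of equal length and an illustrative distance threshold (the real threshold would depend on the hash scheme):

```python
def hamming_distance(h1: str, h2: str) -> int:
    """Bit-level Hamming distance between two equal-length hex hashes."""
    return bin(int(h1, 16) ^ int(h2, 16)).count("1")

def matches_reported(candidate: str, reported: set[str], max_distance: int = 4) -> bool:
    """True if the candidate hash is within max_distance bits of any reported hash."""
    return any(hamming_distance(candidate, h) <= max_distance for h in reported)
```

Near-match tolerance is what makes these hashes useful for reporting: minor re-encodes of the same image still land within a few bits of each other.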

  • Scrubbles@poptalk.scrubbles.tech · 1 year ago

    Seeing how, as an instance owner, I’ve had to become very knowledgeable in the last few hours because of Lemmy and bad actors, this is absolutely not true.

    • toothbrush@lemmy.blahaj.zone · 1 year ago

      I’m curious: what are the legal duties of a fediverse host regarding illegal content currently? Do you really have to remove illegal content proactively? Because as far as I know, that’s only the case in the EU, and only if you are one of the major digital services (which fediverse server hosts aren’t).