• CheeseNoodle@lemmy.world · 27 points · 3 months ago

IIRC it recently turned out that the whole black-box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

    • Johanno@feddit.org · 7 points · 3 months ago

      Well, in theory you can explain how the model comes to its conclusion. However, I’d guess that only 0.1% of “AI engineers” are actually capable of that. And those probably cost 100k per month.

      • CheeseNoodle@lemmy.world · 14 points · 3 months ago

        This one’s from 2019: Link
        I was a bit off the mark: it’s not that the models they use aren’t black boxes, it’s just that they could have made them interpretable from the beginning and chose not to, likely due to liability.

    • Tryptaminev@lemm.ee · 5 points · 3 months ago

      It depends on the algorithms used. The lazy approach nowadays is to just throw neural networks at everything and waste immense computational resources; of course you then get results that are difficult to interpret. There are much more efficient algorithms that solve many problems well and give you interpretable decisions.
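
The interpretable alternatives the last comment alludes to can be sketched with a minimal rule-based classifier (a hypothetical illustration, not anything from the linked article): every decision comes with an explicit, auditable reason, unlike a neural network's opaque weights.

```python
# Hypothetical sketch: an interpretable rule-based classifier.
# Each outcome carries the exact human-readable rule that produced it,
# so the decision can be audited and challenged.

def classify(income: float, debt_ratio: float) -> tuple[str, str]:
    """Return (decision, reason); thresholds here are made up for illustration."""
    if income < 20_000:
        return "deny", "income below 20,000 threshold"
    if debt_ratio > 0.5:
        return "deny", "debt ratio above 0.5 threshold"
    return "approve", "income and debt ratio within limits"

decision, reason = classify(income=35_000, debt_ratio=0.3)
print(f"{decision}: {reason}")
```

The point is not that rules beat neural networks on accuracy, but that for many decisions (credit, hiring, triage) a model like this is both cheap to run and fully explainable by construction.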