• MystikIncarnate@lemmy.ca · 3 months ago

    IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic,” because they know it would take far too long to explain all the underlying concepts before they could even begin to explain how the model works.

    I have a very crude understanding of the technology. I’m not a developer; I work in IT support. I have several friends I’ve spoken to about it, some of whom have built fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… and it still goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.

    I won’t try to explain it myself. I couldn’t recall enough of what’s been explained to me to get anything right at this point.

    • homura1650@lemm.ee · 3 months ago

      The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.

      For instance, the cutting edge in protein folding (at least as of a few years ago) is Google’s AlphaFold. I’m sure the AI researchers behind AlphaFold understand AI and how it works, and I’m sure they have an above-average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is: “The answer is somewhere in this dataset. All we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions.” Working out how to productively throw that much compute at a problem is not easy either, and that is what ML researchers understand and are experts in.

      In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.

      An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.
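      To make that gap concrete, here is a toy sketch (using scikit-learn’s bundled breast-cancer dataset, not any model from the study): a linear model’s weights map one-to-one onto named input features, so a human can read them, while even a tiny neural network already has thousands of weights that individually correspond to nothing.

```python
# Toy contrast (scikit-learn's bundled breast-cancer dataset, NOT the
# study's data): readable weights vs. an opaque pile of parameters.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)  # 30 named features
y = data.target

# A linear model has one signed weight per named feature, so a human
# can read off which inputs push toward which class.
linear = LogisticRegression(max_iter=1000).fit(X, y)
top3 = np.argsort(np.abs(linear.coef_[0]))[::-1][:3]
for i in top3:
    print(data.feature_names[i], round(float(linear.coef_[0][i]), 2))

# Even a tiny two-hidden-layer net has thousands of weights, and no
# individual weight corresponds to any single input feature.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X, y)
n = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print("MLP parameters:", n)  # 6209 even at this toy scale
```

      Scale that second number up to billions and it’s clear why extracting human-usable insight from the weights is still an open problem.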

      • Tryptaminev@lemm.ee · 3 months ago

        Thank you for giving some insight into ML, which is now often just branded “AI”. One note, though: there are many ML algorithms that do not employ neural networks and do not have billions of parameters. Especially in binary image classification (looks like cancer or not), methods like support vector machines achieve great results, and they have very few parameters.
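        A minimal sketch of that point, assuming scikit-learn and its bundled breast-cancer dataset (tabular features, a stand-in for the imaging data discussed above):

```python
# Minimal sketch: a linear SVM on scikit-learn's bundled breast-cancer
# dataset (a stand-in, not mammography data).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)  # 30 features per sample
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
clf.fit(X_train, y_train)

# The entire decision rule is one weight per feature plus a bias.
n_params = clf[-1].coef_.size + clf[-1].intercept_.size
print("parameters:", n_params)  # 31, vs. billions in a large neural net
print("test accuracy:", round(clf.score(X_test, y_test), 3))
```

        With 31 parameters the whole model can be written down and inspected by hand, which is exactly what the billion-parameter case makes impossible.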

        • 0ops@lemm.ee · 3 months ago

          Machine learning is a subset of artificial intelligence, which is a field of research as old as computer science itself.

          The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence—the ability to complete any task performable by a human on an at least equal level—is among the field’s long-term goals.[16]

          https://en.m.wikipedia.org/wiki/Artificial_intelligence