Google says it's working to make its AI and ML models more transparent, but can this be a reality?

At its latest conference, Google said that it's working to make its artificial intelligence and machine learning models more transparent as a way to defend against bias, but how realistic is this? What do you think?


  • Evolution of the models will mean its effectiveness is minimal

  • My question would be - transparent to whom? Data scientists and experts who may be able to deconstruct what each model is built on and which features it uses? Or the rest of the population with an interest and keen eye towards understanding, but without the specialized training?

  • It feels to me that this may be one area where we start approaching a conceptual recursion. We've started using machines to "think" in a sense, and through innovation these advances are made more and more efficient. When that becomes too complex for us to understand, we use another machine to interpret for us. But we have to teach it how to interpret, right? Then it must learn on its own and outpace our understanding.

    Eventually, might that machine's output describing the management decisions executed over a farm of AIs require the same type of abstraction and simplification? And so on...?

    It's an interesting thought to consider, if nothing else.
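
  • To make the "transparent to data scientists" point concrete: one common way experts deconstruct a model today is permutation importance, which scores each input feature by how much the model's accuracy drops when that feature is shuffled. A minimal sketch, using a made-up dataset and an off-the-shelf scikit-learn model purely for illustration:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic data: only feature 0 drives the label; 1 and 2 are noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] > 0).astype(int)

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn and measure the accuracy drop.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature {i}: importance {score:.3f}")
    ```

    This kind of output is interpretable if you already know what the features mean and how the scoring works - which is exactly the gap between expert-facing transparency and transparency for everyone else.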
