Google says it's working to make its AI and ML models more transparent, but can this be a reality?

At its latest conference, Google said that it is working to make its artificial intelligence and machine learning models more transparent as a way to defend against bias. How realistic do we think this is? What do you think?


  • The continual evolution of these models will mean its effectiveness is minimal

  • My question would be: transparent to whom? Data scientists and experts who may be able to deconstruct what each model is built on and its features? Or the rest of the population, with an interest and a keen eye toward understanding, but without the specialized training?
