I’m going to seed my first play of The Veil with this.
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
This goes along the lines of changing the term to “artificial instinct” instead of intelligence: because it relies on observing others and experiencing rather than rote repetition, it’s more like true learning.
I think this article overstates the case. We can’t explain how people work, and they fuck up all the time. Should we not be using air traffic controller AIs that crash planes a tenth as often as their human counterparts, just because we don’t know what they’re thinking?
I suspect the deeper reason is that we’ll have nobody to blame/sue. See: litigious American society.
Yeah, the title is a little sensational, although the article unpacks it and makes some good points. We can certainly understand how deep learning works. However, once we let a large network loose on mountains of data, we can no longer trace back a causal chain. At least, not easily. In machine learning lingo this gets into the difference between “prediction” and “inference” – the latter being the trickier part, and what the article focuses on.
Probably an aside, but I remember seeing those images of cats and whatnot generated by running neural networks backwards to find the strongest stimulus for that particular recognition category. Spooky.
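For anyone curious what “running the network backwards” means here: you freeze the weights and gradient-ascend the *input* to maximize one output unit’s activation. Here’s a toy sketch of the idea using a single linear layer as a stand-in for a real deep net (the weights and step size are made up for illustration):

```python
import numpy as np

# Toy "classifier": one linear layer standing in for a deep network.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))  # 3 output units, 8-dim "images"

def score(x, unit):
    # Pre-softmax activation of one output unit.
    return W[unit] @ x

# Activation maximization: start from a blank input and ascend the
# gradient of unit 1's score with respect to the INPUT, not the weights.
x = np.zeros(8)
for _ in range(100):
    grad = W[1]               # d(score)/dx for a linear model is just the weights
    x += 0.1 * grad           # gradient ascent step on the input
    x = np.clip(x, -1.0, 1.0) # keep the "image" in a valid pixel range

# x is now a caricature of whatever pattern unit 1 responds to most strongly --
# the same trick, on a real convnet, produces those spooky cat images.
```

With a real network you’d compute `grad` by backpropagation instead of reading it off the weights, but the loop is the same.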
That’s just damn terrifying.