Machine Learning Is Not Learning
Can deterministic functions be considered to think?
Machine Learning, Artificial Intelligence, and Cognitive Computing are not learning, intelligent, or cognitive.
Whatever label we give the current hype in the tech industry, usually described as "intelligent", "cognitive", or "learning", it is all still (as it has been since the dawn of modern computing, back around the time Turing was breaking Enigma in WWII) simply deterministic functions. Put some input into the box, and get some output out the other end. Put the same input into the box again, and get the exact same output, again. It is predictable in the sense that causality is predictable. But because these new algorithms deal with datasets so huge that no human can hold them in short-term working memory, many describe the process as somehow being "intelligent". It's not. It's just using more 1s and 0s than we were using before.
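Here is a minimal sketch of that box-like determinism in Python, using only the standard library's hashlib (the "dataset" below is arbitrary filler, chosen just to be bigger than anyone's working memory would care to hold). Scale changes nothing: feed the same input in, however large, and the exact same output falls out.

```python
import hashlib

# An arbitrary "enormous dataset": about 22 MB of repeated filler bytes.
data = b"some enormous dataset " * 1_000_000

# Same input in, same output out, every single time.
print(hashlib.sha256(data).hexdigest())
print(hashlib.sha256(data).hexdigest())

assert hashlib.sha256(data).hexdigest() == hashlib.sha256(data).hexdigest()
```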
For example, consider a sorting algorithm. Given a list of randomly arranged elements, a sorting algorithm re-orders those elements in some way we desire (alphabetical, ascending numerical, etc.). When we compare the newly "ordered" list to the previous "unordered" or "chaotic" list, we readily notice that one is more "intelligible" than the other. It makes more sense to us, it's more useful, and as such it is better. But these notions of "order", "usefulness", "sense", and "intelligibility" are not inherent in the list itself. They are attributes that require the presence of a conscious human brain before they can be ascribed to the data being observed. On their own (absent a human brain), the "ordered" list is no different from the "random" list: both are just sets of 1s and 0s stored in some silicon. Only when a human is present can one of them be deemed "ordered" and have that judgment mean anything. It is our intelligence that makes these mechanical processes mean something.
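A tiny Python sketch of that idea (the list contents are arbitrary examples): as far as the machine is concerned, both lists below are equally contented sequences of bytes.

```python
chaotic = ["pear", "apple", "mango", "fig"]
ordered = sorted(chaotic)  # alphabetical, because *we* asked for it

print(chaotic)  # ['pear', 'apple', 'mango', 'fig']
print(ordered)  # ['apple', 'fig', 'mango', 'pear']

# The machine expresses no preference between the two; "ordered" is a
# judgment only the human reading these lines can make.
```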
Similarly, recent algorithms of the neural-cognitive-ml-ai-connectionist-hidden-layer variety are only intelligent insofar as a human brain deems them to be so. They aren't capable of plotting world domination on their own; they're just dumb algorithms that take input and generate output. But because the datasets are so huge, we (human brains) are impressed by their usefulness, because we can plug them into our problem domains in ways that boggle our minds.
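Here's a toy sketch of what sits behind all that vocabulary, assuming NumPy is available (the layer sizes and weights are arbitrary illustrative choices, not a trained model): a "hidden layer" is a matrix multiply, followed by a squashing function, followed by another matrix multiply.

```python
import numpy as np

# Fixed seed: even the "randomness" in these weights is deterministic.
rng = np.random.default_rng(seed=42)
W1 = rng.normal(size=(3, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 2))  # hidden -> output weights

def forward(x):
    hidden = np.tanh(x @ W1)  # multiply, add up, squash: the "hidden layer"
    return hidden @ W2        # multiply and add up again: the output

x = np.array([0.5, -1.0, 2.0])
print(forward(x))
print(forward(x))  # same input, same output; no plans for world domination
```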
It must have felt much the same when Turing cracked Enigma. When he flipped the switch and the rotors of his Bombe started clicking and whirring, crunching through combinations and matching patterns, finally settling on the wheel positions that turned scrambled ciphertext into readable German communiqués. It must have seemed magical to watch this hunk of metal and electricity turn what appeared to be a random, chaotic jumble of characters into an intelligible, sensible, useful paragraph of human writing; almost as if it were intelligent. But it's not. It's just math. The machine doesn't know anything. It mechanically takes input and generates output.
So, don't drink the Kool-Aid being passed around during this current AI summer. We're still dealing with boxes of logic that work as deterministic functions, just like the first computers did. The only difference today is that we have more memory and compute, enabling us to work with bigger datasets. Skynet is not going to take over the world, the singularity makes for great SF, and your home assistant is nothing more than a database lookup machine tapping your living room for the NSA.