Artificial vs. Human Intelligence: Beyond the Hype


With A.I. and deep learning being touted as “the cutting edge of the cutting edge,” many software engineers are contemplating a major career shift to become data scientists.

According to Crunchbase, investors poured $15.6B into A.I.-related startups over the last year, across 1,752 rounds of funding (the largest single investment was Toutiao, at $2B).

With all this hype and money flying around, it’s easy to get swept up in the clamor. But data scientists in the field urge us to stay grounded and recognize that the technology has real limits on what it can achieve.

Even Geoffrey Hinton, co-inventor of the back-propagation method that drives most A.I. technology innovations, has said: “My view is throw it all away and start again.” 

Dr. Filip Piękniewski, an established computer scientist who develops machine learning models, recently broke down the key reasons he expects that an “AI winter” is headed our way: “Much like before a stock market crash there are signs of the impending collapse, but the narrative is so strong that it is very easy to ignore them, even if they are in plain sight. In my opinion there are such signs visible already of a huge decline in deep learning (and probably in AI in general).”


What are the limitations of A.I.?

The limitations of A.I. and deep learning boil down to one primary issue: as useful as the technology is in our modern world, it cannot compete with the complexity of human intelligence. Richard Waters of the Financial Times frames the fundamental limitation of deep learning this way: “The technology cannot deal with many of the problems that humans will want computers to handle.” In other words, if we rely on deep learning for purposes beyond its capabilities, the results can be grave.

Google software engineer and machine learning researcher Francois Chollet offers a detailed explanation of why deep learning models should not be overestimated, arguing that “anything that requires reasoning — like programming, or applying the scientific method — long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them.”
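Chollet's distinction between pattern-fitting and algorithmic reasoning can be illustrated with a toy sketch. As a deliberately simplified stand-in for a learned model (plain linear regression rather than a deep network, purely for brevity), we fit a model to samples of a simple rule and compare it with code that implements the rule explicitly:

```python
import numpy as np

# The "rule" we want a machine to capture: y = x ** 2.
# An explicit algorithm encodes the rule itself, so it
# works for any input, seen or unseen.
def true_rule(x):
    return x ** 2

# Training data covers only a narrow slice of the input space.
train_x = np.arange(10, dtype=float)   # x = 0 .. 9
train_y = true_rule(train_x)

# Fit a simple pattern-matcher (a straight line) to the samples.
slope, intercept = np.polyfit(train_x, train_y, deg=1)

def fitted_model(x):
    return slope * x + intercept

# Inside the training range the fit looks respectable ...
print(fitted_model(5.0))    # close to the true value, 25

# ... but far outside it, the fitted pattern bears little
# relation to the rule, while the algorithm stays exact.
print(fitted_model(100.0))  # far from the true value, 10000
print(true_rule(100))
```

The contrast is the point: the explicit algorithm generalizes everywhere because it encodes the rule, while the fitted model only captures the pattern of the samples it saw. A deep network is vastly more flexible than a line, but as Chollet argues, it shares the same basic mode of operation.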


Stay up-to-date on the limitations of A.I. and deep learning

Two outspoken New York University professors are worth following on the subject: Gary Marcus, a professor of psychology and neural science, and Ernest Davis, a professor of computer science.

They made a persuasive case for skepticism about big data and A.I. back in 2014, and Marcus is still considered one of the most prominent voices of that skepticism. Marcus and Davis’s most recent New York Times article argues that “No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world… There is no end to the variety of life — or to the ways in which we can talk about that variety.”

It’s worth noting that the people voicing this skepticism about A.I. and deep learning are long-time data science professionals who continue to develop deep learning models themselves. Their warnings are not intended to shut down innovation, but to keep the general public conscious of how innovations in the field should fit with our expectations and with ethical practice.

Gary Marcus continues to emphasize this message to his critics, explaining that “Despite all of the problems I have sketched, I don’t think that we need to abandon deep learning. Rather, we need to reconceptualize it: not as a universal solvent, but simply as one tool among many, a power screwdriver in a world in which we also need hammers, wrenches, and pliers, not to mention chisels and drills, voltmeters, logic probes, and oscilloscopes.”


For software engineers thinking about developing their data science skills, most signs point to the continued prominence of A.I. and deep learning in technological innovation. Even so, you would be wise to stay informed and maintain a healthy dose of realism amid all the hype swirling around the field.