The Data Daily

From Infallible Computers To Infallible AI & Data As Truth

Last updated: 08-01-2020


The dawn of digital computing in the late 1940s and early 1950s ushered in the popular image of these new “thinking machines” as infallible data analysts that would herald the end of statistical errors. In place of error-prone humans mis-entering, mis-tabulating, mis-copying, mis-analyzing and mis-reporting information, machines would mechanize the entire analytic workflow, performing data-related tasks with robotic perfection. Today we are seeing the same popular image taking root with respect to AI and data, as data becomes synonymous with “infallible truth” and AI “eliminates human error” by bringing automated perfection to analysis.

The dawn of the digital era brought with it the public idea of computers as infallible data processing machines that would do for rote thinking what mechanization had done for rote physical work. While their creators understood their enormous limitations and relatively narrow application domains, the press and public saw them as something entirely different: a new era of primitive silicon lifeforms that could outsource our thinking in the same way we were increasingly outsourcing our physical labor to the automation revolution.

A typical article of the era was a Wall Street Journal feature of August 22, 1949, heralding a “new ‘electric brain’” that was “12,000 times faster than humans” and could perform years of human work in mere hours “without a single error.” Even better, it was “really two machines – each checking the other for accuracy,” eliminating error-prone humans from the world of calculation.

This ideal of mechanized perfection was so ingrained in press and public imagery of the era that even the Abbott and Costello Show lampooned the gap between ideal and reality. A skit in their February 27, 1954 episode featured an IRS official dismissing the possibility that the agency could have made a mistake with the memorable line “our bookkeeping is done by machines and they’re infallible,” before quietly acknowledging to a coworker moments later that there was indeed an error.

Today such hype has become accepted fact, with nearly all numeric calculation of any scale outsourced to these electric brains. Few Fortune 50 companies would trust their human accounting staff to manually tabulate all of their ledgers today – the likelihood of error is simply too high and the consequences too great.

However, 70 years later that infallibility is still limited to the narrow domain of tabulating numbers. Computers are purpose-built for numeric calculation, and as they have been pushed into the broader domain of knowledge management over the decades, the limitations of their classical architectures have become increasingly apparent. The impact of programming errors has also become better understood by the public. Even if the machines themselves are seen as infallible, there is growing public recognition that the instructions those machines follow are written by those same error-prone humans.

Today it is AI and data that have emerged as the infallible brains and lifeblood of the modern digital revolution. There is also a similar divide between the field’s recognition of the severe limitations of their tools and public and press understanding of those constraints.

Much like their hardware predecessors three quarters of a century ago, today’s AI software is described in terms of breathless hype and hyperbole as being capable of human-like thought and eliminating error from complex analytic pipelines.

Similarly, data has emerged as the new definition of indisputable “truth” that divines the one possible answer to every question.

While AI and data analytics leaders openly discuss the tremendous limitations of their fields, their cautionary words cannot begin to compete with the hype deluge from the marketing departments of their companies. Much as it did with computers three quarters of a century ago, news coverage today routinely touts AI and data-driven decision-making as eliminating error and replacing flawed human judgment with mathematical perfection. Almost 70 years to the day from its 1949 story, an article in today’s issue of the Journal notes how AI and data-driven analysis is helping industry “reduce human error.”

Even normally staid universities have jumped on this promotional bandwagon. I receive at least one university press release a day touting its researchers’ AI and data-related research as eliminating human error, finding truth through data and placing the AI singularity mere months away.

In short, the public image of infallible calculating machines that defined the early era of computing has become today’s image of infallible AI and data-driven truth. As AI and “big data” mature, it is likely that the shine will wear off this public ideal and they will take their place alongside their historical brethren as powerful, though far from infallible, tools.
