Is AI winter coming?

Originally written: 2020 July 25

If I asked somebody "is AI winter coming?", they'd probably think I'm crazy. After all, media and industry tout machine learning as the next big thing, and its applications surround us: chatbots, face detection, Google Translate, and more. Thousands of AI startups span the globe, many subsidized and supported by governments eager to partake in the "fourth industrial revolution".

Then why am I asking the titular question: is AI winter coming?

What even is AI winter?

AI is an intrinsically marketable field: in sci-fi films, AI is capable of doing everything. Anything people can do, AI can do too, if not better. People talk about the "singularity", the point where AI will outsmart humans. However, AI's superb marketability presents a problem: the technology is nowhere close to the marketing. AI is too marketable, and too easy to overhype, setting the stage for disappointment.

For this reason, AI has gone through several "season" cycles since its inception. During the spring and summer, governments and industry invest heavily in AI, leading to burgeoning research. During the fall, investors start to realize that AI falls short of its promised potential; by winter, deep in disappointment, they cut grants and eliminate departments.

AI has had two major winters: one during the 1970's, and the other during the late 1980's. The first came after roughly 15 years of excitement over the nascent field of AI. With the rise of computers, machine learning algorithms previously calculated by hand could be run much faster and much more accurately. Arthur Samuel developed a respectable checkers engine in 1956, and Frank Rosenblatt proposed the first neural network (then called the "perceptron") in 1957. Most notably, a large amount of hype surrounded machine translation after the US government succeeded in translating rudimentary sentences between Russian and English in 1954. Combined with famed linguist Noam Chomsky's work on syntax, machine translation seemed just around the corner.

But it was not. Apocryphal stories tell of translating "the spirit is willing, but the flesh is weak" into Russian and back, yielding "the vodka is good but the meat is rotten"; likewise, "out of sight, out of mind" returned as "blind idiot". Eventually, the 1966 ALPAC Report concluded that after ten years of funding, machine translation remained far more expensive, less accurate, and slower than human translation, and the US National Research Council ended its funding of the field entirely. In Britain, the Lighthill Report of 1973 stated that "in no part of the field have the discoveries made so far produced the major impact that was then promised," virtually ending the British government's support of AI.

Eventually, interest in AI arose again in the 1980's with so-called "expert systems": complicated systems of if-then rules designed to emulate human decision-making. Carnegie Mellon University developed one for Digital Equipment Corporation (DEC), then one of the biggest minicomputer manufacturers (producer of the famed PDP-8 and PDP-11 machines). DEC estimated that the system saved the company over 40 million dollars over the course of six years, and companies globally started implementing expert systems. In 1984, the magazine Business Week celebrated the development, declaring that "artificial intelligence has finally come of age."
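The core of an expert system, a knowledge base of hand-written if-then rules plus an inference engine that fires them until no new conclusions appear, can be sketched in a few lines. The rules below are invented for illustration; real systems like DEC's XCON held thousands of them:

```python
# Toy forward-chaining rule engine, in the spirit of 1980s expert systems.
# Each rule pairs a condition on the known facts with a conclusion to add.
# These rules are made up for illustration, not taken from any real system.
rules = [
    (lambda facts: "ram_heavy_workload" in facts, "needs_extra_memory"),
    (lambda facts: "needs_extra_memory" in facts and "budget_limited" in facts,
     "suggest_smaller_config"),
]

def infer(initial_facts):
    """Fire any rule whose condition holds until no new conclusion appears."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"ram_heavy_workload", "budget_limited"}))
```

The brittleness the text describes is visible even here: every rule is hand-crafted, so covering a new task means writing (and debugging) a whole new rule base.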

Unsurprisingly, reality did not live up to the hype. Expert systems were carefully crafted for specific purposes, and designing them for new tasks was difficult and expensive. In the 1980's, computational power and storage were still exorbitantly expensive: "Lisp machines", specialized to run the Lisp language in which expert systems were mainly written, sold for $70,000 in 1980, equivalent to about $219,100 per machine in 2020 dollars. Eventually, the high costs and the systems' inflexibility led to yet another disappointment in AI, and its second winter.

AI summer, now

The second winter eventually passed.

Thanks to an incredible rise in computational power and the development of new algorithms, AI has seen unprecedented advances in recent years. Geoffrey Hinton popularized the backpropagation algorithm, making it practical to train the neural networks first proposed in 1957. Yann LeCun refined convolutional neural networks (CNNs), finding massive success in computer vision and making them the poster child of deep learning. Sepp Hochreiter and Jürgen Schmidhuber created the long short-term memory (LSTM) network, making natural language processing (NLP) a tractable field. Andrew Ng pioneered the use of GPUs, speeding up computations by orders of magnitude.
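Backpropagation is, at heart, just the chain rule applied layer by layer: run the network forward, measure the error, then push gradients backward and nudge every weight downhill. A minimal sketch in NumPy, training a tiny two-layer network on XOR; the architecture, learning rate, and seed are arbitrary choices for illustration, not any historical implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)
    if step == 0:
        initial_loss = loss
    # Backward pass: chain rule through each layer, back to front
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)
    # Gradient descent step
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(initial_loss, loss)  # loss falls well below its starting value
```

A single-layer perceptron provably cannot learn XOR, which is exactly why training multi-layer networks via backpropagation was such a breakthrough.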

Interest in AI bloomed in 2012, when Alex Krizhevsky won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) with his AlexNet, beating the runner-up by an astounding 10.8 percentage points. Krizhevsky, advised by Hinton, ran CNNs on GPUs, heralding the new age of deep learning. His AlexNet paper has gained more than 60,000 citations in under 8 years, and the massive amount of research since has pushed AI forward. Now Google Translate often offers better translations than second-language learners, especially between similar languages or languages with abundant data. Facebook's auto-tagging can be scarily accurate. YouTube's recommendation algorithm (based on their groundbreaking paper) keeps us glued to the platform.

Artificial intelligence has seen plenty of hype before, but it has never been this ubiquitous. It's not just AlphaGo defeating the world Go champion -- it's in our cars, it's in our pockets, and you probably encountered some machine learning algorithm on your way to reading this article.

So what's the deal?

Well, AI is making inroads into our lives like never before, but it still over-inflates its promises. Recently, a German court banned Tesla from marketing its cars as having "autopilot", because, well... it wasn't autopilot: you had to turn on the blinkers (turn signals) for the system to take effect. The company had over-packaged the term "autopilot". Similarly, a recent report found that 40% of proclaimed "AI startups" in Europe show no evidence of actually using AI. They just use the buzzwords -- artificial intelligence, machine learning, big data -- because unfortunately, those buzzwords sell. It seems like just about every SaaS solution and their grandfather claims their product runs on big data and AI, even when the product doesn't need AI, or when implementing it might actively hurt the company.

However, the disappointment of unfulfilled promises hasn't hit yet -- the hype train is only accelerating. For instance, one of the biggest machine learning conferences, NeurIPS (formerly NIPS), has seen a five-fold increase in submitted papers in just five years.

The field is truly advancing at a dizzying rate. Take machine translation. For years, experts concocted careful systems of "statistical machine translation", but that changed in 2014, when Google published its "sequence-to-sequence" deep learning translation model. The model, which took only a handful of months to develop, surpassed statistical models refined over decades. 2014 didn't end there: later that year, Dzmitry Bahdanau and Kyunghyun Cho introduced two new ideas, attention and the Gated Recurrent Unit (GRU), pushing machine translation accuracy to new heights. Attention-augmented recurrent models remained the state of the art until 2017, when a new architecture built entirely on attention, the transformer, took the crown. Then in 2018, Google honed transformers into its famous BERT model, which set new state-of-the-art results on 11 different NLP benchmarks.
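The attention mechanism underlying transformers and BERT boils down to one published formula, softmax(QK^T / sqrt(d)) V: each query position takes a weighted average of the values, with weights given by how well the query matches each key. A toy single-head sketch in NumPy, with shapes chosen arbitrarily for illustration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- the core formula of the transformer."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of every query to every key
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights    # each output is a weighted mix of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 query positions, dimension 4
K = rng.normal(size=(5, 4))  # 5 key/value positions
V = rng.normal(size=(5, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Real transformers run many such heads in parallel and learn the projections producing Q, K, and V, but the mechanism that displaced decades of statistical machine translation is essentially these few lines.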

The rapid advance of research is no doubt something to celebrate, and hopefully it helps the technology catch up with the hype. But it is also drawing an incredible inflow of talent into the machine learning pool. Machine learning researchers and engineers easily earn six figures, and with such exciting developments, it's hard not to be attracted to work in AI. The flip side is that this supply may eventually surpass demand. Right now, demand for ML engineers still far outstrips supply, but that may no longer hold once the AI bubble pops. Previous AI winters destroyed the careers of those who followed the hype, and it may happen again when businesses shut down their unprofitable departments (even the data science giant Palantir has never reported a profit).
For the record, I don't want AI winter to come.

I see so many people entering the field, and truthfully, so many exciting new developments coming out. I have a lot of fun reading papers, like Google's recent summarization model PEGASUS. However, seeing inflated expectations in industry and sitting through misguided seminars has instilled fear in me. The technology, even at its current speed of advancement, doesn't seem to match what the general populace expects. The advent of ubiquitous machine learning also creates new security problems, like adversarial attacks, with potentially disastrous consequences.

I don't want AI winter to come. But I'm afraid of it.