Artificial intelligence (AI) is riding an all-time high, with industry experts working around the clock to use it to the fullest. But after every rising wave in the ocean, there comes a fall. AI has had periods of intense research enthusiasm interspersed with downturns – the AI winters. Characterized by reduced funding, diminished interest, and skepticism about AI’s potential, these winters were nonetheless overcome. To understand how experts surmounted them, we first have to understand what caused them.
The history of AI includes two AI winters. The first arrived in the 1970s, when the hardware of the day could not support either the field’s early breakthroughs or the more advanced algorithms researchers envisioned. Confidence in AI waned, and the Lighthill Report (1973) in the UK criticized the field’s practicality, prompting cuts in government funding for AI research. Interest was revived by the development of practical applications such as the medical consultation program ‘MYCIN’ and the customer order configuration system ‘XCON’.
When the global economic downturn of the late 1980s hit the pockets of corporations investing across various fields, including AI, a second winter set in, stalling research and development. Progress was further slowed because challenges in machine learning and natural language processing remained unresolved. More sophisticated algorithms and the introduction of statistical methods into machine learning in the late 1990s brought a resurgence in AI. Improvements in neural networks and the development of “deep learning” played a crucial role, and the availability of large datasets, together with the computational power to process them, further propelled deep learning. Interdisciplinary collaboration across various domains led to more advanced algorithms and models.
What the world is witnessing now is significant technological advancement and widespread adoption across various industries. Graphics Processing Units (GPUs) emerged as an indispensable tool because of their capacity for parallel processing, and their use expanded beyond rendering graphics in computers and gaming consoles. Then came Tensor Processing Units (TPUs), pioneered by Google. With the widespread use of the internet generating ever-larger volumes of data, TPUs were designed specifically to meet the growing demands of machine learning, enabling faster training and inference times.
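To make the parallel-processing point concrete, here is a minimal sketch in Python using PyTorch (an assumption; the article names no specific framework). Matrix multiplication, the workhorse of neural network training, consists of many independent multiply-adds, and the same operation can be dispatched to a GPU, where those operations run simultaneously across thousands of cores.

```python
# A minimal sketch (assuming PyTorch is installed) of why GPUs
# accelerated machine learning: the same matrix multiplication runs
# on CPU or GPU, but the GPU executes its many independent
# multiply-adds in parallel.
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

c_cpu = a @ b  # executes on the CPU's handful of cores

if torch.cuda.is_available():
    # Same operation, dispatched to the GPU's thousands of cores.
    c_gpu = a.cuda() @ b.cuda()
```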
Sundar Pichai, CEO of Alphabet, has compared AI to fire and electricity when speaking about its advantages and perils. As long as we can harness the strengths of AI to uplift the human race while aligning it with human values, AI will endure and humanity will endure with it.
References:
Crevier D. AI: The Tumultuous History of the Search for Artificial Intelligence. New York, NY: Basic Books; 1993.
Russell S, Norvig P. Artificial Intelligence: A Modern Approach. 3rd ed. Upper Saddle River, NJ: Prentice Hall; 2009.
McCorduck P. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. 2nd ed. Natick, MA: A K Peters Ltd; 2004.
Lighthill J. Artificial Intelligence: A General Survey. London: Science Research Council; 1973.
Minsky M, Papert S. Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press; 1969.
Buchanan BG. A (Very) Brief History of Artificial Intelligence. AI Mag. 2005;26(4):53-60. doi:10.1609/aimag.v26i4.1848
Marcus G, Davis E. Rebooting AI: Building Artificial Intelligence We Can Trust. New York, NY: Pantheon Books; 2019.
Hendler J, Mulvehill AM. Social Machines: The Coming Collision of Artificial Intelligence, Social Networking, and Humanity. New York, NY: Apress; 2016.
Assessed and Endorsed by the MedReport Medical Review Board