The concept of Artificial Intelligence has changed several times over the last decades. Mixing definitions coming from different eras adds to the current confusion about its meaning.
This article contains a simplified timeline of the conceptual history of artificial intelligence[1].
Year | Milestone |
---|---|
1956 | AI research was born at a workshop at Dartmouth College. |
1956-1973 | Researchers try to attain "general" artificial intelligence. The most prominent researchers are confident they can create a machine able to solve arbitrary problems within no more than 20 years. The most popular approach aims to build a symbolic system that machines could use to reason about the world. A second popular vision - connectionism - seeks to achieve intelligence through learning. Research is heavily funded by governmental agencies. |
1974 | The first "AI Winter" starts. Defense and other governmental agencies become disillusioned by the lack of practical results and cut funding. |
1980-1987 | AI attracts funding again thanks to the success of Expert Systems, the prehistoric ancestors of modern search engines. However, the commercial failure of the LISP machines rings the death knell for symbolic AI and its hopes of achieving general intelligence. |
1987 | The long "Second AI Winter" starts. Nobody wants to waste money on AI research. |
Late '90s - 2012 | Throughout the '90s, AI is treated as a dirty word. However, some of the accumulated know-how starts to be applied successfully to specific problems. In the 21st century, deep learning and other statistical AI techniques start to develop and become dominant by 2012 thanks to their outstanding practical results. |
2015 | It is considered a landmark year: Google reports more than 2,700 internal projects that use AI, against the "sporadic usage" of 2012. |
2023 | Most talk show hosts and guests firmly believe that a machine currently as intelligent as a single-celled creature could outwit them and conquer the world soon. |