AI Tutorial

Full History of AI (Timeline, Founder, Evolution, Development)


Artificial Intelligence has evolved from a term in the realm of science fiction into an everyday reality that affects nearly every facet of our lives. Researchers have worked on this technology for decades, and it has seen many milestones and breakthroughs along the way. The history and timeline of AI reflect its theoretical foundations, implementation, and advancements. 

In this blog, we will trace artificial intelligence history chronologically, highlighting all the important events, developments, and impacts.

Who Invented Artificial Intelligence (AI)?

Artificial intelligence (AI) has evolved over time through the work of many researchers and scientists. It doesn't have a single inventor, but rather, it has developed gradually through the contributions of numerous individuals and milestones. 

Here are some key figures and moments in the history of AI:

  • Alan Turing 

Decades before the field had a name, Alan Turing's concept of a theoretical computing machine, known as the Turing machine, laid the theoretical foundation for modern computers and computational processes, which are essential for AI. He also proposed the Turing Test, an early benchmark for machine intelligence.

  • John McCarthy

John McCarthy is often credited with coining the term "artificial intelligence" and organizing the Dartmouth Workshop, where AI as a field was launched in the summer of 1956. McCarthy is considered one of the founding fathers of AI.

  • Marvin Minsky

Alongside McCarthy, Marvin Minsky made significant early contributions to AI research; the two co-founded the MIT AI Laboratory, and McCarthy went on to develop LISP, the first AI programming language.

  • Arthur Samuel

Arthur Samuel is known for his work in machine learning and the development of the first self-learning program, which played checkers and improved its performance through experience.

  • Herbert A. Simon and Allen Newell

They developed the Logic Theorist, a program that could prove mathematical theorems, and the General Problem Solver (GPS), a problem-solving program. Their work contributed to the development of AI problem-solving techniques.

  • Joseph Weizenbaum

He created the ELIZA program, a natural language processing program that simulated conversation with a human. ELIZA is considered one of the early chatbots.

  • Ray Kurzweil 

While not an inventor of AI, Ray Kurzweil is a notable figure in AI and futurism. He has made significant contributions to speech recognition and is known for his predictions about the future of AI and human-machine convergence.

Early History of AI (1840s-1950s)

Starting with the early history of AI, this was the period when the theoretical foundations of artificial intelligence were laid. Philosophers attempted to explain the human mind as a symbolic system, and the modern field of AI began to take shape in the mid-20th century. 

  • 1843

In her notes on Charles Babbage's Analytical Engine, Ada Lovelace, often called the world's first computer programmer, observed that machines could manipulate symbols and not just numbers, an idea foundational to the concept of AI.

  • 1936

Alan Turing introduced the concept of a universal computing machine, later known as the Turing machine, a theoretical device capable of carrying out any computation given enough time and resources. This laid the foundation for digital computers and the principle of computability.

  • 1943

Warren McCulloch and Walter Pitts came up with the first mathematical model of a neural network, which paved the way for learning machines.

  • 1949

This was an important year in the history of artificial intelligence, as Donald Hebb introduced a learning theory known as Hebbian learning. It proposed a rule for modifying the connection strength between neurons, which became a fundamental concept in the development of artificial neural networks.
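Hebb's rule is often summarized as "neurons that fire together wire together." As a purely illustrative sketch (not historical code), the update can be written in a few lines of Python; the function name and the learning rate are assumptions chosen for the example:

```python
def hebbian_update(w, x, y, lr=0.1):
    # Hebb's rule: each connection strengthens in proportion to
    # the product of its input activity and the output activity.
    return [wi + lr * y * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]        # two input connections to one neuron
x = [1.0, 0.0]        # only the first input is active
y = 1.0               # the output neuron fires
w = hebbian_update(w, x, y)
print(w)              # only the active connection is strengthened
```

Note that only the weight whose input was active grows; the inactive connection is left unchanged, which is the core intuition behind Hebbian learning.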

Birth and Development of AI (1950s-1960s)

In the mid-20th century, AI was officially born, and this period of the AI timeline saw the introduction of the term ‘Artificial Intelligence.’ Here is how the technology evolved from the 1950s to the 1960s:

  • 1950

Alan Turing proposed the Turing Test to determine a machine's ability to exhibit intelligent behavior. In the same year, Claude Shannon published a paper on programming a computer to play chess.

  • 1955

Allen Newell and Herbert A. Simon developed the first artificial intelligence program, the Logic Theorist. It proved 38 of the first 52 theorems it was given from Principia Mathematica and even found new, more elegant proofs for some of them.

  • 1956

The Dartmouth Conference officially introduced the term ‘Artificial Intelligence,’ establishing AI as an academic field. American computer scientist John McCarthy, along with Marvin Minsky, Herbert Simon, and Allen Newell, became a leader of early AI research.

  • 1957

Frank Rosenblatt invented the Perceptron, an early artificial neural network capable of learning from data.

  • 1958 

John McCarthy developed LISP, which became a popular programming language in AI research.

  • 1959 

Continuing the development of AI, Arthur Samuel created a self-learning checkers program, demonstrating the power of machine learning.

  • 1965

Joseph Weizenbaum developed ELIZA, a natural language processing program that demonstrated AI's potential for understanding and generating human language.

During this period of the artificial intelligence timeline, high-level programming languages such as LISP, FORTRAN, and COBOL were invented, and enthusiasm for AI ran high.
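Rosenblatt's Perceptron, mentioned above under 1957, learned by nudging its weights toward each misclassified example. The following is a minimal modern sketch for intuition, not historical code; the function name, the learning rate, and the AND-gate example are illustrative assumptions:

```python
def train_perceptron(X, y, epochs=10, lr=1.0):
    # Rosenblatt's learning rule: adjust the weights toward each
    # misclassified example until the data is linearly separated.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if sum(wi * v for wi, v in zip(w, xi)) + b > 0 else 0
            error = target - pred          # 0 when correct, +/-1 when wrong
            w = [wi + lr * error * v for wi, v in zip(w, xi)]
            b += lr * error
    return w, b

# Learn the logical AND function, which is linearly separable
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
preds = [1 if sum(wi * v for wi, v in zip(w, xi)) + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop finds a separating line; famously, a single perceptron cannot learn non-separable functions such as XOR, a limitation that contributed to the first AI Winter discussed below.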

AI Winter and the Rise of Expert Systems (1970s-1980s)

Despite the initial excitement, AI failed to deliver significant progress in the following years. As a result, parts of the 1970s and 1980s are known as the AI Winter: periods when computer scientists faced a shortage of government funding for AI research and interest in the field declined.

  • 1970s

Early expert systems such as Dendral (begun in the mid-1960s) and MYCIN (developed in 1972) marked a shift toward AI programs that tackled specific, narrow problems.

  • 1982

Japan's Fifth Generation Computer Systems project set the ambitious goal of developing intelligent computers but failed to meet its targets.

  • 1986

The re-introduction of the backpropagation algorithm led to a resurgence in neural network research.

The Internet Era and Machine Learning (1990s-2010s)

Thanks to the emergence of the Internet, massive amounts of data became available, fueling the development of machine learning algorithms. Here’s a look at the evolution of artificial intelligence from the 1990s onward.

  • 1997

IBM's Deep Blue defeated world chess champion Garry Kasparov, a memorable moment in the development of AI.

  • 1999 

Sony introduced AIBO, a robotic pet that showcased the capabilities of AI.

  • Early 2000s 

Machine learning techniques, such as support vector machines, became popular.

  • 2005

Stanford's autonomous vehicle, Stanley, won the DARPA Grand Challenge by navigating a desert course without human intervention.

  • 2011

IBM's Watson won the quiz show Jeopardy!, showcasing its natural language processing capabilities.

  • 2012

Geoffrey Hinton's team achieved a deep learning breakthrough with AlexNet, leading to significant advances in image recognition.

  • 2014

Facebook AI Research (FAIR) was established, focusing on deep learning and AI research.

  • 2016

Google DeepMind's AlphaGo program beat world champion Go player Lee Sedol, a significant moment in the evolution of AI, as it showcased a machine's ability to learn and make decisions in an enormously complex game. 

  • 2018

This year marked the beginning of the journey of Generative pre-trained transformers (GPT) as OpenAI, a renowned AI company in the USA, introduced the first-ever GPT model. This was a prominent moment in the field of generative artificial intelligence. 

Also, IBM’s Project Debater debated complex topics against two champion debaters and performed impressively. 

Google showcased an AI program known as Duplex, a virtual assistant that booked a hair salon appointment over the phone; the person on the other end did not realize she was talking to a machine.

The 2020s: GPT-3 and Beyond

The evolution of artificial intelligence over the years has made the seemingly impossible possible in the 21st century. Here are the latest developments in the AI timeline:

  • GPT Characteristics

GPT is a type of large language model (LLM) that uses the transformer architecture. These models are trained on large amounts of unlabelled text data, enabling them to generate content that closely resembles human writing. As of 2023, LLMs sharing these features are broadly referred to as GPTs.

  • GPT-n Series

OpenAI has launched a series of advanced GPT models known as the GPT-n series. Each model in the series is more capable than its predecessor, owing to increased size and training. 

These models form the basis for task-specific GPT systems, including models fine-tuned to follow instructions, such as the one powering the ChatGPT chatbot. 

  • March 2023

OpenAI released GPT-4, at the time the most capable model in the series and the latest milestone in GPT development.

  • Other GPT Models

Many organizations have adopted the term GPT. For example, EleutherAI has developed a series of open GPT foundation models, and Cerebras has released a family of seven GPT models. Companies in other industries have also built GPT models tailored to their needs, such as Bloomberg's "BloombergGPT" for finance and Salesforce's "EinsteinGPT" for customer relationship management (CRM).

Artificial intelligence has evolved extensively, and the technology has witnessed major changes over the years. It is still growing, and you can expect many more AI developments in the years to come.

FAQs Related to Artificial Intelligence History and Evolution

Who coined the term "artificial intelligence"?

The term "artificial intelligence" (AI) was coined by John McCarthy, an American computer scientist, in 1956. McCarthy is considered one of the founding fathers of AI and played a pivotal role in organizing the Dartmouth Workshop in the summer of 1956, which is often regarded as the birth of AI as a field. He used the term "artificial intelligence" to describe the goal of creating machines and computer programs capable of intelligent behavior and problem-solving, a goal that has since been central to the field of AI.

Who is known as the Father of Artificial Intelligence in India?

In India, Dr. Raj Reddy is often known as the "Father of Artificial Intelligence." He is an Indian-American computer scientist and one of the pioneering figures in the field of AI. Dr. Reddy was born in India and later moved to the United States, where he made significant contributions to AI research, best known for his work in speech recognition and natural language processing. He received the Turing Award in 1994, one of the highest honors in computer science, for his contributions to AI research. His work has had a significant impact on the development of AI technology and its applications, both in India and internationally.

How long has AI existed?

Artificial intelligence (AI) has existed since the mid-20th century. It was officially born as a recognized field in 1956 when John McCarthy organized the Dartmouth Workshop, where researchers gathered to discuss the possibility of creating intelligent machines and coined the term "artificial intelligence." So, AI has been in existence for over six decades as a formal field of research and development.

What were some of the earliest AI programs?

Some of the earliest AI programs included the Logic Theorist, General Problem Solver (GPS), and ELIZA, developed in the 1950s and 1960s.

What was the "AI winter"?

The "AI winter" refers to periods of reduced funding and interest in AI research due to overly ambitious expectations and limited progress during the 1970s and 1980s.

What were expert systems?

Expert systems were AI programs designed to mimic human expertise in specific domains. They gained popularity in the 1980s and were used for tasks like medical diagnosis and decision support.

When did machine learning and deep learning rise to prominence?

Machine learning, including neural networks, saw a resurgence in the 1990s with advancements like backpropagation. Deep learning, a subset of neural networks, gained prominence in the 2010s.

What was Deep Blue's historic achievement?

IBM's Deep Blue made history in 1997 by defeating world chess champion Garry Kasparov, showcasing AI's ability to excel in complex games.

Why are ethics and regulation important in AI development?

Ethical considerations and regulation have become essential in AI development to ensure fairness, transparency, and responsible use of AI technologies, given their increasing impact on society.

What are some recent AI milestones?

Recent milestones include AlphaGo's victory in Go, advances in natural language processing (e.g., GPT-3), and AI applications in healthcare, autonomous vehicles, and finance.

How has AI evolved in the 21st century?

AI in the 21st century has seen rapid advancements in deep learning, reinforcement learning, and applications like self-driving cars and conversational AI, with a growing focus on ethical considerations and AI ethics.

What is the Turing Test, and why is it significant?

The Turing Test, proposed by Alan Turing, is a test of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. It's significant because it set an early benchmark for AI researchers to strive toward.

When did AI start impacting everyday life?

AI applications started to impact everyday life in the late 20th century with developments in areas like speech recognition, recommendation systems (e.g., Netflix, Amazon), and personal assistants (e.g., Siri, Alexa).

How is AI used in healthcare and finance?

AI has improved diagnosis and treatment in healthcare through image analysis and predictive analytics. In finance, it's used for algorithmic trading, fraud detection, and risk assessment.

What challenges does AI still face?

Challenges include achieving human-level intelligence, addressing bias in AI algorithms, ensuring ethical AI use, and developing AI systems that can learn from limited data.

What does the future of AI hold?

The future of AI may involve more advanced autonomous systems, AI-driven healthcare breakthroughs, further human-AI collaboration, and AI addressing global challenges such as climate change.