Artificial Intelligence: Definition, history and risks (2023)

Artificial intelligence is going to change the world, but it is still misunderstood by many people. In this guide, discover everything you need to know about AI: definition, how it works, history, different categories, use cases, and applications…

Artificial intelligence (AI) is such a vast and revolutionary technology that it is difficult to give a precise definition. It can be considered a branch of computer science, with the goal of creating machines capable of performing tasks traditionally requiring human intelligence.

However, AI is an interdisciplinary science with multiple approaches. Today, Machine Learning and Deep Learning are its two most widely used techniques, adopted by companies in all industries.

What is Artificial Intelligence?

In 1950, mathematician Alan Turing asked himself, "Can machines think?"

In reality, this simple question would change the world.

His paper "Computing Machinery and Intelligence" and the resulting "Turing Test" laid the foundations for artificial intelligence, its vision, and its goals.

The goal of artificial intelligence is to answer Turing’s question affirmatively.

Its aim is to replicate or simulate human intelligence in machines.

This is an ambitious goal that raises many questions and is the subject of debate. That is why there is not yet a single definition of artificial intelligence.

The description of "intelligent machines" does not explain what artificial intelligence really is or what makes a machine intelligent. To try to solve this problem, Stuart Russell and Peter Norvig published the book "Artificial Intelligence: A Modern Approach". In it, the two experts unite their work around the theme of intelligent agents in machines. According to them, "AI is the study of agents that receive perceptions of the environment and perform actions".

From their perspective, there are four distinct approaches that have historically defined the field of artificial intelligence:

  • Human thought
  • Rational thought
  • Human action
  • Rational action

The first two approaches are related to reasoning and thought processes, while the latter two have to do with behavior. In their book, P. Norvig and S. Russell focus primarily on rational agents capable of acting to achieve the best result.
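
To make the idea of an agent concrete, here is a minimal, illustrative Python sketch of something that receives percepts from its environment and chooses actions. The thermostat scenario, class names, and thresholds are assumptions chosen for illustration; they are not taken from Russell and Norvig's book.

```python
# A minimal, illustrative sketch of the "agent" idea: something that
# receives percepts from its environment and performs actions.
# The thermostat scenario and thresholds are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Percept:
    """What the agent observes about its environment."""
    temperature: float


class ThermostatAgent:
    """A simple reflex agent: it maps each percept directly to an action."""

    def __init__(self, target: float = 20.0):
        self.target = target

    def act(self, percept: Percept) -> str:
        if percept.temperature < self.target - 1:
            return "heating_on"
        if percept.temperature > self.target + 1:
            return "heating_off"
        return "do_nothing"


agent = ThermostatAgent()
for temperature in (17.5, 19.8, 22.3):
    print(temperature, "->", agent.act(Percept(temperature)))
```

A rational agent, in Russell and Norvig's sense, would go further and choose the action expected to produce the best outcome, rather than following a fixed rule as this toy example does.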

MIT Professor of Artificial Intelligence Patrick Winston defined AI as "algorithms enabled by constraints, exposed by representations that support models linking thought, perception, and action".

Another modern definition:
Machines that respond to stimuli in the way humans do, with the ability to contemplate, judge, and intend. These systems are capable of making decisions that normally require a human level of expertise. They have three qualities that constitute the essence of artificial intelligence: intentionality, intelligence, and adaptability.

These different definitions may seem abstract and complex. However, they establish Artificial Intelligence as a branch of computer science.

In 2017, during the Japan AI Experience, DataRobot CEO Jeremy Achin gave his own modern, humorous definition of AI: "Artificial intelligence is a computer system capable of performing tasks that normally require human intelligence… a lot of these AI systems rely on Machine Learning, some on Deep Learning, and some on very boring things like rules".

What are the uses of Artificial Intelligence?

Artificial intelligence has several goals, including learning, reasoning and perception. It is used in every industry, so much so that its applications are far too numerous to list exhaustively.

In healthcare, it is used to develop personalized treatments, discover new drugs, and analyze medical imaging such as X-rays and MRIs. Virtual assistants can also support patients, reminding them to take their medication or to exercise in order to stay fit.

The retail sector uses AI to deliver personalized recommendations and advertising to customers. It also makes it possible to optimize the layout of products or to better manage inventories.

In factories, artificial intelligence analyzes data from IoT equipment to predict load and demand using Deep Learning. It also makes it possible to anticipate malfunctions and intervene before they occur.

Banks, for their part, use AI for fraud prevention and detection. The technology also makes it possible to assess whether a customer will be able to repay the credit they request, and to automate data management tasks.
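
As a purely illustrative example, the sketch below shows how fraud detection can be framed as a supervised classification problem in Python with scikit-learn. The synthetic features (amount, hour, foreign flag), the toy labelling rule, and the random-forest model are assumptions made for the sake of the example, not a description of any real banking system.

```python
# A hedged, minimal sketch: fraud detection framed as supervised
# classification. Features, labelling rule and model are illustrative
# assumptions, not a description of any real banking system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic transactions: [amount, hour_of_day, is_foreign]
X = np.column_stack([
    rng.exponential(scale=80.0, size=n),  # transaction amount
    rng.integers(0, 24, size=n),          # hour of the day
    rng.integers(0, 2, size=n),           # foreign-transaction flag
])
# Toy ground truth: large foreign transactions are labelled as fraud.
y = ((X[:, 0] > 300) & (X[:, 2] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("Test accuracy:", model.score(X_test, y_test))
print("Fraud probability for a large foreign night-time payment:",
      model.predict_proba([[500.0, 3, 1]])[0, 1])
```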

These are just a few examples of industries using artificial intelligence. As you can see, this revolutionary technology will change all sectors of activity in the coming years…

The history of Artificial Intelligence

The history of artificial intelligence began in 1943 with the publication of the article "A Logical Calculus of the Ideas Immanent in Nervous Activity" by Warren McCulloch and Walter Pitts. In this paper, the scientists presented the first mathematical model for the creation of a neural network.

The first neural network computer, SNARC, was built in 1951 by Marvin Minsky and Dean Edmonds, then students at Harvard. The year before, in 1950, Alan Turing had published the Turing Test, which is still used today to evaluate AI.

In 1952, Arthur Samuel created a program capable of learning to play checkers on its own. The term artificial intelligence was used for the first time by John McCarthy in 1956, at the Dartmouth Summer Research Project on Artificial Intelligence conference.

During this event, researchers presented the goals and vision of AI. Many see this conference as the true birth of artificial intelligence as it is known today.

In 1959, Arthur Samuel coined the term Machine Learning while working at IBM, and John McCarthy and Marvin Minsky founded the MIT Artificial Intelligence Project. In 1963, John McCarthy also created the AI Lab at Stanford University.

In the years that followed, doubt cast a chill over the field of AI. In 1966, the ALPAC report highlighted the lack of progress in machine translation research, which aimed to translate Russian instantly in the context of the Cold War. Many U.S. government-funded projects were canceled.

Similarly, in 1973, the British government published the Lighthill report, highlighting the disappointments of AI research. Once again, budget cuts brought research projects to a halt. This period of doubt lasted until 1980 and is now known as the first "AI winter".

This winter ended with the creation of R1 (XCON) by Digital Equipment Corporation. This commercial expert system, designed to configure orders for new computer systems, triggered a real investment boom that would continue for more than a decade.

Unfortunately, the Lisp machine market collapsed in 1987 with the appearance of cheaper alternatives. This was the "second AI winter". Companies lost interest in expert systems, the American and Japanese governments abandoned their research projects, and billions of dollars had been spent for nothing.

Ten years later, in 1997, the history of AI was marked by a major event: IBM's Deep Blue triumphed over world chess champion Garry Kasparov. For the first time, man was defeated by the machine.

A decade later, technological advances enabled a revival of artificial intelligence. In 2008, Google made tremendous progress in speech recognition and launched the feature in its smartphone apps.

In 2012, Andrew Ng fed a neural network with 10 million images taken from YouTube videos as a training data set. Through Deep Learning, the network learned to recognize cats without ever being told what a cat is. This marked the beginning of a new era for Deep Learning.

AI scored another victory over humans in 2016, when Google DeepMind's AlphaGo defeated Go champion Lee Sedol. Artificial intelligence has since also conquered video games, with DeepMind's AlphaStar in StarCraft II and OpenAI Five in Dota 2.

Deep Learning and Machine Learning are now used by companies in all industries for a multitude of applications. AI continues to grow and surprise with its performance. The dream of general artificial intelligence is getting closer and closer to reality…

General AI vs Specialized AI

General artificial intelligence is a type of AI that is capable of performing a wide range of tasks, rather than being limited to a specific set of functions. It is also referred to as “strong AI” or “human-level AI.” In theory, a general AI system would be able to perform any intellectual task that a human being can, such as understanding natural language, learning, planning, and problem-solving.

The creation of a general AI therefore remains, for the moment, the "Holy Grail" of AI researchers. It is an ambitious quest, but one full of pitfalls: despite technical advances, it remains very difficult to design a machine with complete cognitive abilities.

Specialized artificial intelligence (AI), on the other hand, is designed to perform a specific task or a set of tasks. It is also known as "narrow AI," "weak AI," or "task-specific AI." For example, a specialized AI system could be trained to recognize objects in images, translate text from one language to another, or play chess at a high level (like IBM's Deep Blue). Specialized AI systems are currently far more common than general AI systems, as they can be trained to perform specific tasks with a high degree of accuracy.

However, even if such a machine may seem intelligent, it is far more limited than human intelligence, of which it is only an imitation.

Examples include Google’s web search engine, image recognition software, virtual assistants like Apple Siri or Amazon Alexa, autonomous vehicles, and software like IBM Watson.

Machine Learning and Deep Learning: What's the difference?

Machine learning and Deep Learning are the two main artificial intelligence techniques currently used. The distinction between Artificial Intelligence, Machine Learning, and Deep Learning can be confusing.

In reality, Artificial Intelligence can be defined as the broad set of algorithms and techniques aimed at imitating human intelligence; Machine Learning is a subset of AI, and Deep Learning is in turn a Machine Learning technique.

Machine Learning is a category of AI that involves feeding a computer with data and applying analysis techniques to that data so the system "learns" how to perform a task.

To do this, it does not need to be specifically programmed using millions of lines of code. This is why it is referred to as “automatic learning“.

Machine learning can be “supervised” or “unsupervised.” Supervised learning is based on labeled data sets, while unsupervised learning is done using unlabeled data sets.
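
A minimal sketch of this difference, using Python and scikit-learn on the classic Iris dataset (the dataset and the two models are illustrative choices, not something prescribed by the article): the supervised model is given the labels, while the unsupervised model only sees the raw measurements.

```python
# A minimal sketch contrasting supervised and unsupervised learning with
# scikit-learn. The Iris dataset and the two models are illustrative
# choices, not requirements.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Supervised learning: the model is trained on labeled examples (X, y).
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Supervised test accuracy:", classifier.score(X_test, y_test))

# Unsupervised learning: the model only sees X and looks for structure.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster assignments (first 10):", kmeans.labels_[:10])
```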

Deep Learning is a type of Machine Learning directly inspired by the architecture of the neurons in the human brain. An artificial neural network is composed of multiple layers through which the data is processed. This allows the machine to "deepen" its learning by identifying connections and transforming the ingested data at each layer to achieve the best results.
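
As an illustration, here is a minimal sketch of a multi-layer ("deep") neural network using Keras with TensorFlow. The layer sizes, the MNIST digits dataset, and the single training epoch are assumptions chosen to keep the example small, not recommendations.

```python
# A minimal sketch of a "deep" neural network: several layers of artificial
# neurons through which the data is processed. Keras/TensorFlow, the MNIST
# digits dataset and the layer sizes are illustrative assumptions.
from tensorflow import keras

# Load and flatten the MNIST images (28x28 pixels -> 784 features).
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28 * 28) / 255.0
x_test = x_test.reshape(-1, 28 * 28) / 255.0

# Stacking several Dense layers is what makes the learning "deep":
# each layer transforms the data before passing it to the next one.
model = keras.Sequential([
    keras.layers.Input(shape=(28 * 28,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)
print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```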

The dangers of Artificial Intelligence

Artificial intelligence holds many promises for humanity… but it could also pose a more dangerous threat than the nuclear bomb.

"Through its ability to learn and evolve autonomously, AI could one day surpass human intelligence. It could then decide to turn against its creators."

This grim prediction may sound like something out of a science fiction film, but it is a real possibility. Eminent experts have already sounded the alarm about artificial intelligence, including Stephen Hawking, Elon Musk, and Bill Gates.

They see AI as a major risk for the years to come. That is why they are calling on governments to regulate this area so that it develops ethically and safely. More than 100 experts have also called on the United Nations to ban "killer robots" and other autonomous military weapons.

However, other experts believe that the future of artificial intelligence depends solely on how humans choose to use it. Even seemingly harmless AI can be diverted and misused. We can already see this with the rise of "deepfakes": fake videos created through Deep Learning to depict a person in a compromising situation.

Artificial intelligence will continue to develop rapidly over the next few years. It is up to humanity to choose which direction this development will take…

You now know all about artificial intelligence. Next, discover our complete dossier on Data Science, and take a closer look at Machine Learning.

FAQs

What is artificial intelligence and its history?

Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, like the human mind. AI is accomplished by studying the patterns of the human brain and by analyzing the cognitive process. The outcome of these studies is the development of intelligent software and systems.

What is the risk from artificial intelligence?

These concerns include the risk of bias, the lack of transparency of some AI algorithms, privacy issues around the data used for AI model training, security issues, and questions of responsibility when AI is implemented in clinical settings. Clinical AI applications also face a number of ethical problems.

What is artificial intelligence, in short?

AI has become a catchall term for applications that perform complex tasks that once required human input, such as communicating with customers online or playing chess. The term is often used interchangeably with its subfields, which include machine learning (ML) and deep learning.

What is artificial intelligence defined as?

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

What is the importance of knowing the history of artificial intelligence?

Knowing the history of AI is important in understanding where AI is now and where it may go in the future. In this article, we cover all the major developments in AI, from the groundwork laid in the early 1900s, to the major strides made in recent years.

When was AI first defined?

AI was a term first coined at Dartmouth College in 1956. Cognitive scientist Marvin Minsky was optimistic about the technology's future.

What are the risks and benefits of artificial intelligence?

On the benefits side, errors are reduced and there is a greater chance of achieving accuracy with a higher degree of precision. Example: in weather forecasting, the use of AI has reduced the majority of human error.
...
  • High Costs of Creation: ...
  • Making Humans Lazy: ...
  • Unemployment: ...
  • No Emotions: ...
  • Lacking Out of Box Thinking:

What are the 4 risks and dangers of AI?

  • Autonomous weapons. ...
  • Social manipulation. ...
  • Invasion of privacy and social grading. ...
  • Misalignment between our goals and the machine's. ...
  • Discrimination.

What are 3 negative effects of artificial intelligence?

Here are some key ones:
  • AI Bias. Since AI algorithms are built by humans, they can have built-in bias by those who either intentionally or inadvertently introduce them into the algorithm. ...
  • Loss of Certain Jobs. ...
  • A shift in Human Experience. ...
  • Global Regulations. ...
  • Accelerated Hacking. ...
  • AI Terrorism.

Is artificial intelligence a threat to humans?

Is Artificial Intelligence a Threat? The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been mentioned as some of the biggest dangers posed by AI.

What is artificial intelligence? Give one example.

Facial Detection and Recognition

Using virtual filters on our faces when taking pictures and using face ID for unlocking our phones are two examples of artificial intelligence that are now part of our daily lives.

What is the broad definition of artificial intelligence?

Artificial intelligence defined

Artificial intelligence is a field of science concerned with building computers and machines that can reason, learn, and act in such a way that would normally require human intelligence or that involves data whose scale exceeds what humans can analyze.

What words best define artificial intelligence?

The Encyclopedia Britannica states, “artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” Intelligent beings are those that can adapt to changing circumstances.

Which phrase is the best definition of artificial intelligence?

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.

How is artificial intelligence useful in risky areas?

AI in Risky Situations

This is one of the main benefits of artificial intelligence. By creating AI robots that can perform perilous tasks on our behalf, we can overcome many of the dangerous limitations that humans face.

What is the main importance of artificial intelligence?

Today, the amount of data that is generated, by both humans and machines, far outpaces humans' ability to absorb, interpret, and make complex decisions based on that data. Artificial intelligence forms the basis for all computer learning and is the future of all complex decision making.

How has artificial intelligence impacted the world?

Many argue that AI improves the quality of everyday life by doing routine and even complicated tasks better than humans can, making life simpler, safer, and more efficient.

How did artificial intelligence start?

The seeds of modern AI were planted by philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning.

Who created artificial intelligence?

In December 1955, Herbert Simon and Allen Newell developed the Logic Theorist, the first artificial intelligence program, which would eventually prove 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica.

Who first defined artificial intelligence?

The term artificial intelligence was first coined by John McCarthy in 1956 when he held the first academic conference on the subject.

What is the negative impact of artificial intelligence on the environment?

Training a single AI system can emit over 250,000 pounds of carbon dioxide. In fact, the use of AI technology across all sectors produces carbon dioxide emissions at a level comparable to the aviation industry.

Where is artificial intelligence most commonly used?

AI is used in the following fields and areas:
  • Retail, Shopping and Fashion.
  • Security and Surveillance.
  • Sports Analytics and Activities.
  • Manufacturing and Production.
  • Livestock and Inventory Management.
  • Self-driving Cars or Autonomous Vehicles.
  • Healthcare and Medical Imaging Analysis.
  • Warehousing and Logistic Supply Chain.

When was AI first invented?

The earliest successful AI program was written in 1951 by Christopher Strachey, later director of the Programming Research Group at the University of Oxford.

Who invented artificial intelligence and when?

In 1955, John McCarthy coined the term Artificial Intelligence, which he proposed at the famous Dartmouth conference in 1956. This conference, attended by around ten computer scientists, saw McCarthy explore ways in which machines could learn and reason like humans.

How was AI first invented?

1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts, who proposed a model of artificial neurons. 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons; his rule is now called Hebbian learning.

Where did artificial intelligence come from?

The beginnings of modern AI can be traced to classical philosophers' attempts to describe human thinking as a symbolic system. But the field of AI wasn't formally founded until 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term "artificial intelligence" was coined.

What is the goal of artificial intelligence?

In summary, the goal of AI is to provide software that can reason on input and explain on output. AI will provide human-like interactions with software and offer decision support for specific tasks, but it's not a replacement for humans – and won't be anytime soon.

What was AI first used for?

The first working AI programs were written in 1951 to run on the Ferranti Mark 1 machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.

What are the benefits of artificial intelligence?

What are the advantages of Artificial Intelligence?
  • AI drives down the time taken to perform a task. ...
  • AI enables the execution of hitherto complex tasks without significant cost outlays.
  • AI operates 24x7 without interruption or breaks and has no downtime.
  • AI augments the capabilities of differently abled individuals.

Who was the first person to use the term artificial intelligence?

In 1955, John McCarthy, one of the pioneers of AI, was the first to define the term artificial intelligence, roughly as “The goal of AI is to develop machines that behave as though they were intelligent.”

How has AI changed the world?

In addition to applications in finance, marketing, entertainment, business, social media, advertising, agriculture, and many other industries, AI has revolutionized the world, making it easier for people to contact friends, send emails, and get around with ride-sharing apps.

Why should we be worried about AI?

Asked to explain in their own words what concerns them most about AI, some of those who are more concerned than excited cite their worries about potential loss of jobs, privacy considerations and the prospect that AI's ascent might surpass human skills – and others say it will lead to a loss of human connection, be ...

What are the 4 types of artificial intelligence?

According to the current system of classification, there are four primary AI types: reactive, limited memory, theory of mind, and self-aware.
