From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g., facial recognition, Internet searches, or driving a car). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at its one specific task, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
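
To make the narrow/general distinction concrete, here is a minimal sketch, entirely a toy of my own construction rather than anything from the essay's sources: a minimax search that plays tic-tac-toe perfectly. Within its one narrow task it cannot be beaten, yet nothing in it transfers to any other task.

```python
# Toy narrow AI: exhaustive minimax for tic-tac-toe. Perfect at this
# one task, with zero competence at anything else.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, move): +1 if X can force a win, -1 if O can, 0 if a draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                       # board full: draw
    outcomes = []
    for m in moves:
        board[m] = player                    # try the move...
        value, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None                      # ...then undo it
        outcomes.append((value, m))
    # X maximizes the game value, O minimizes it.
    return max(outcomes) if player == 'X' else min(outcomes)

value, move = minimax([None] * 9, 'X')
print("opening move:", move, "| game value:", value)   # value 0: perfect play draws
```

The point of the sketch is what it lacks: the agent "outperforms humans" only because the entire world it reasons about is nine squares. Ask it anything else and it has no answer at all, which is exactly what separates narrow AI from AGI.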


A study launched in the fall of 2014, “The One Hundred Year Study on Artificial Intelligence”, is a long-term investigation of the field of artificial intelligence (AI) and its influences on people, their communities, and society. As its main activity, its Standing Committee convenes a Study Panel every five years to assess the current state of AI. The Study Panel reviews AI’s progress in the years since the immediately prior report, envisions the potential advances that lie ahead, and describes the technical and societal challenges and opportunities these advances raise. This study not only apprises the world of the positive advances in the field of AI but also warns about how AI could become dangerous in the future.

 

In the future, there will be an increased focus on developing systems that are human-aware, meaning that they are specifically designed for, and model, the traits of the humans with whom they are meant to interact. There is a lot of interest in finding new, innovative ways to develop interactive and scalable methods for teaching robots. In the coming years, new object-recognition capabilities and human-safe robotic platforms will grow, as will data-driven products and their markets.

 

Stephen Hawking, Elon Musk, Steve
Wozniak, Bill Gates, and many other big names in science and technology have
recently expressed concern in
the media and
via open letters about the risks
posed by AI.

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, so there is no reason to expect it to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think these two scenarios most likely:

1.  The AI is programmed for destruction: Autonomous weapons are artificial intelligence systems programmed to kill. In the wrong hands, these weapons could cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war. To avoid losing to the enemy, these weapons would be designed to be extremely difficult to simply “switch off,” so humans could easily lose control of such a scenario. This risk is present even with today’s narrow AI, but it grows as levels of AI intelligence and autonomy increase.

 

2.  The AI is programmed to do something useful, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with our own. For example, if you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for (the toy sketch at the end of this section makes this concrete). Similarly, if a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.

 

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants.
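
To make the goal-misalignment example concrete, here is a minimal sketch; the route planner, routes, costs, and weights are all invented for illustration and come from none of the studies mentioned above. It shows only one thing: an optimizer faithfully minimizes whatever objective it is handed. Score routes on travel time alone and the reckless route wins; add the comfort and legality terms the passenger implicitly cared about, and the sensible route wins.

```python
# Toy illustration of goal misalignment: the planner literally
# optimizes the objective it is given. All routes, costs, and
# weights below are made up for this example.

routes = {
    # name: (minutes, discomfort, traffic_violations)
    "reckless shortcut": (12, 9.0, 7),
    "normal highway":    (25, 1.0, 0),
    "scenic backroad":   (40, 0.5, 0),
}

def plan(objective):
    """Return the route name that minimizes the given objective."""
    return min(routes, key=lambda name: objective(*routes[name]))

# What you *asked* for: "as fast as possible" -- time is all that counts.
asked_for = plan(lambda t, pain, tickets: t)

# What you *wanted*: fast, but weighed against misery and lawbreaking.
wanted = plan(lambda t, pain, tickets: t + 10 * pain + 30 * tickets)

print("objective = time only            ->", asked_for)   # reckless shortcut
print("objective = time + comfort + law ->", wanted)      # normal highway
```

Note that the failure here is not the optimizer misbehaving; both runs optimize perfectly. The risk lies entirely in the gap between the objective we wrote down and the one we meant, and that gap only becomes more dangerous as the optimizer becomes more capable.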