Weaponization of Artificial Intelligence
What is Artificial Intelligence?
Artificial intelligence, in its most basic sense, means an artificial or computerized system that acts like a human brain, at times even more accurately, and more dangerously.
The evolution of artificial intelligence can be traced through our day-to-day lives: chatbots, auto-complete, driverless cars. Life feels easy when you have an intelligent brain doing all your work, but there is a dark side to it.
How Does a Machine Learn?
A machine learns through programming. Very simple, isn't it? Actually, no. If we talk about programming, we must talk about bias. If we want the artificial brain to think like a human being, then someone needs to put that thinking into the machine. Every human being is diverse, with a different mindset, ideology, way of thinking, and perspective. So how can we be sure that a machine is not simply behaving like its programmer?
The answer is neural networks. Nowadays machines learn from experience, which means they are thinking like human beings, and which means they can also be biased and dangerous.
The Dark Side
As of 2019, many countries are funding research and development programs to understand and develop artificial intelligence, which has led to a lot of worry among citizens about its use and its future repercussions. With that in mind, let us talk about weaponization.
Countries like Russia, the US, and China are already using automated weapons in warfare to reduce human involvement and deaths. But the most important part of machine learning is the law: whom will people blame for accidents? As of now, EU countries do have legislation to fix responsibility and liability. The German Road Traffic Act imposes the responsibility for managing an automated or semi-automated vehicle on the owner and envisages partial involvement of the Federal Ministry of Transport and Digital Infrastructure. A more comprehensive and understandable approach to defining current and prospective legislation on robotics is presented in the EU resolution on robotics (European Parliament Resolution, 2017).
It defines types of AI use, covers issues of liability and ethics, and provides basic rules of conduct for developers, operators, and manufacturers in the field of robotics; the rules are based on Asimov's three laws of robotics (1942). Russia also has a draft law, the Grishin law, which amends the Civil Code of the Russian Federation and fixes responsibility on manufacturers, owners, and programmers. But when we talk about weaponization, we talk about Lethal Autonomous Weapons, which can kill people, at times even civilians.
The main problem with the weaponization of artificial intelligence is learning and bias. A robot, or any system that makes decisions based on circumstances, is bound to make mistakes and can only learn from them. That is where the problem lies: it learns. When a human being wants to learn something, they make a mistake and then learn from it. In a warfare situation, however, a decisional mistake might kill a civilian or, even worse, one's own soldiers. You cannot predict what an artificial brain might be thinking, because it is learning and then taking decisions, and that can have a devastating effect on lives.
Artificial intelligence may be far worse than nuclear attacks, because it is a super-intelligent artificial machine with the capacity to multiply itself and play a dominant role. If we talk about learning, a machine may do something like kill a civilian in order to learn from it; human beings do the same, since to learn something we must fail. But in this case the failure might be too great to come back from, and on top of that we cannot fix liability on anyone.
If we go back 100 or 200 years, when various experiments were being conducted in the fields of surgery and medicine, doctors used to develop and experiment with methods (for example, lobotomy) to cure patients, and in the process those methods would kill them. The doctors did learn from it, but thousands of lives were lost, mostly those of prisoners. My point, therefore, is that we should not create human-like creatures which are far more capable than us and may at some point overtake us. A mistake on a battlefield will be remembered by all of us, but that does not stop people (or machines) from making another mistake, because there is always something to learn from it.
Written By: Trishit Kumar Satpati