The Use of Artificial Intelligence in Weapons According to International Humanitarian Law

November 04, 2020



One of those technologies is artificial intelligence (AI), an increasingly used and developed technology whose origins date to the 1950s. "Intelligence" itself means "the ability to acquire and apply different skills and knowledge to solve a given problem" and the use of "general mental capacity to solve, reason, and learn various situations". AI, in turn, can be defined simply as "the theory and development of computer systems to carry out tasks normally requiring human intelligence". The foundation of AI's development was laid in 1950 by the mathematician Alan Turing, who asked whether, and how, machines could think like humans. Turing published the paper "Computing Machinery and Intelligence" and proposed the "Turing Test", which is designed to determine whether a computer is "intelligent" by testing whether it can respond to input given by a human and produce output that human users cannot identify as coming from a machine.

Even a small nation such as Singapore is currently working on incorporating AI into its defense structure to "maximize combat effectiveness with minimum manpower." Development is accelerating because of AI's promising future contributions to the military, which could make work safer for armed forces: the ability to spot and detect objects outside human view, support for personnel decision-making through analysis, increased targeting accuracy, and more.

Because of the potential damage and harm to human life, many experts, including Elon Musk and the three co-founders of the AI company Google DeepMind, have pledged not to develop AI weapons that could produce devastating results. They, and thousands of other AI experts, oppose the existence of automated weapons that rely on AI without sufficient human control, and they constantly warn the public of the repercussions, whether moral or pragmatic, of deploying such weapons. Still, there are no specific, internationally agreed rules or conventions on AI in weapons. Disagreement over what may be used on an AI battlefield, and over how much human intervention should be required, could have fatal consequences.

To see more articles from Panah Kirana, visit https://www.panahkirana.com/

Image source: International Committee of the Red Cross (ICRC)