THE GIST of Editorial for UPSC Exams : 28 FEBRUARY 2019 : Rules for the Machine (Indian Express)

Rules for the Machine (Indian Express)

Mains Paper 2: Science and technology
Prelims level: AI
Mains level: Awareness in the fields of IT, Space, Computers, robotics, nanotechnology, biotechnology and issues relating to intellectual property rights.

Context

  •  An algorithm is merely a set of instructions that can be used to solve a problem.
  •  The reasons for the increasing reliance on algorithms are evident.
  •  First, an algorithm can make decisions more efficiently than human beings, which is often taken as evidence of its superiority to human rationality.
  •  Second, an algorithm can provide emotional distance: it could be less “uncomfortable” to let a machine make difficult decisions for you.
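The definition above, a fixed set of instructions that turns inputs into a decision with no human judgement involved, can be illustrated with a minimal sketch. The rule and the threshold below are hypothetical, purely for illustration:

```python
def approve_loan(income: float, existing_debt: float) -> bool:
    """A toy decision algorithm: the same inputs always
    produce the same output, mechanically."""
    # Hypothetical rule: approve if debt is under 40% of income.
    return existing_debt < 0.4 * income

print(approve_loan(50000, 10000))  # True
print(approve_loan(50000, 30000))  # False
```

However simple or complex, every algorithm reduces to such a rule: this is also why its output can only be as good as the rule and the data behind it.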

Use of AI in India

  •  The use of AI in governance in India is still nascent.
  •  However, this will soon change as the use of machine learning algorithms in various spheres has either been conceptualised or has commenced already.
  •  Maharashtra and Delhi police have taken the lead in adopting predictive policing technologies. Further, the Ministry of Civil Aviation has planned to install facial recognition systems at airports to ease security checks.
  •  The primary source of algorithmic bias is its training data.
  •  An algorithm’s prediction is only as good as the data it is fed.
  •  A machine learning algorithm is designed to learn from patterns in its source data.
  •  Sometimes, such data may be polluted due to record-keeping flaws, biased community inputs and historical trends.
  •  Other sources of bias include insufficient data, correlation without causation and a lack of diversity in the database.
  •  The algorithm thus learns to replicate existing biases, and a vicious circle is created.
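The vicious circle described above can be sketched with a toy frequency-based “model”. The groups, outcomes, and records below are entirely made up; the point is only that a learner with no notion of fairness reproduces whatever pattern its training data contains:

```python
from collections import defaultdict

# Hypothetical historical records of past human decisions that
# happened to be skewed against group "B".
training_data = [
    ("A", "approve"), ("A", "approve"), ("A", "reject"),
    ("B", "reject"), ("B", "reject"), ("B", "approve"),
]

def train(records):
    """Learn, for each group, the majority outcome in the records."""
    counts = defaultdict(lambda: defaultdict(int))
    for group, outcome in records:
        counts[group][outcome] += 1
    return {g: max(c, key=c.get) for g, c in counts.items()}

model = train(training_data)
print(model)  # {'A': 'approve', 'B': 'reject'} - the historical skew is replicated
```

If the model's rejections are then recorded and fed back as fresh training data, the skew only hardens, which is the vicious circle the editorial warns about.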

The extant law in India is glaringly inadequate

  •  Our framework of constitutional and administrative law is not geared towards assessing decisions made by non-human actors.
  •  Further, India has not yet passed a data protection law.
  •  The draft Personal Data Protection Bill, 2018, proposed by the Srikrishna Committee, has provided the rights to confirmation and access, but not a right to receive explanations of algorithmic decisions.
  •  The existing SPDI rules issued under the IT Act, 2000 do not cover algorithmic bias.
  •  Possible solutions to algorithmic bias could be legal and organisational. The first step to a legal response would be passing an adequate personal data protection law.
  •  The draft law of the Srikrishna Committee provides a framework to begin the conversation on algorithmic bias.
  •  The right to the logic of automated decisions can be provided to individuals. Such a right will have to balance the need for algorithmic transparency with organisational interests.
  •  A general anti-discrimination and equality legislation can be passed, barring algorithmic discrimination on the basis of gender, caste, religion, sexual orientation, disability, etc., in both the public and private sectors.
  •  Additionally, organisational measures can be pegged to a specific legislation on algorithmic bias.
  •  In the interests of transparency, entities ought to shed light on the working of their algorithms.
  •  This will entail a move away from the current opacity and corporate secrecy.
  •  However, considering the complexity of most machine learning algorithms, seeking absolute transparency alone may not be practical.

Way forward

  •  Developers should design fair algorithms that respect data authenticity and account for representation.
  •  Further, organisations could develop internal audit mechanisms to inspect whether the algorithm meets its intended purpose, and whether it discriminates between similarly placed individuals.
  •  Organisations could also outsource the auditing to certified auditors.
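The audit idea above, checking whether an algorithm discriminates between similarly placed individuals, can be sketched as a pairwise test: change only the group attribute of an otherwise identical profile and see whether the decision changes. The model and attribute names here are hypothetical:

```python
def audit_pairwise(model, profile, groups):
    """Flag the model if two profiles identical in every respect
    except the group attribute receive different decisions."""
    decisions = {g: model({**profile, "group": g}) for g in groups}
    return len(set(decisions.values())) > 1, decisions

# A hypothetical biased model that peeks at the group attribute.
def biased_model(applicant):
    return "approve" if applicant["group"] == "A" else "reject"

flagged, outcomes = audit_pairwise(biased_model, {"income": 50000}, ["A", "B"])
print(flagged, outcomes)  # True {'A': 'approve', 'B': 'reject'}
```

An internal or certified external auditor could run such checks across many profiles without needing to open the model itself, which fits the editorial's point that absolute transparency alone may not be practical.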
  •  Entities relying on evaluative algorithms should have public-facing grievance redressal mechanisms.
  •  Here, an individual can confirm that an algorithm has been used to make a decision about them, and learn the factors that prompted it.
  •  An aggrieved individual or community should be able to challenge the decision.
  •  Finally, the use of algorithms by government agencies may require public notice to enable scrutiny.
  •  Considering their pervasiveness, algorithms cannot be allowed to operate as unaccountable black boxes.
  •  The law in India, as well as companies reaping the benefits of AI, must take note and evolve at a suitable pace.

Online Coaching for UPSC PRE Exam

General Studies Pre. Cum Mains Study Materials

Prelims Questions:

Q.1) Which of the following is true about "CAATSA (Countering America's Adversaries Through Sanctions Act)", recently in news?
A. The act was passed by the USA for countries having significant defence relations with North Korea.
B. It would be tough for India to carry on defence deals with Russia if the act is not diluted on a case-to-case basis.
C. Both A and B
D. Neither A nor B

Answer: B

Mains Questions:
Q.1) Is AI a danger to humanity? Comment.