Canada's AI policy development: Learnings for India

    By Sharmila Nair

    Highlights

    Canada’s AI policy framework builds ethics in by design for any automated decision system, and the Government is leading the way by ensuring that its own use of automated systems abides by these standards.


    AI foundational policies and ethical principles serve as the backbone of Canada’s digital growth through its various AI initiatives. At the outset, the fundamental difference between the AI strategies adopted by the Canadian Government and the United States is that Canada’s research-oriented model seeks to integrate the basic ideas of accountability and transparency into innovations from the outset.

    With a C$125 million investment, Canada began addressing the ethical, policy and legal implications of AI through the Pan-Canadian Artificial Intelligence Strategy in 2017. Coupled with the 2019 Directive on Automated Decision-Making, this significantly reduces the risk of unethical AI practices and compels compliance by requiring companies to release the source code of such systems to the Government. While this is crucial to ensuring the ‘explainability’ of automated decisions, it is also contentious to a great extent. However, implementing policies that ensure trustworthiness and accountability to the public can require stringent measures. Canada’s AI strategies could influence India’s policy approach, especially on the question of accountability from the Government. The Directive includes a crucial tool, the Algorithmic Impact Assessment (AIA), for vendors and companies building AI solutions for the Government, which identifies the risks and varying impact levels of any solution provided to the Government by a third-party entity. The higher the impact level assigned, the more stringent the accountability requirements, reflecting the greater risks posed by the automated decisions.
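    The graded logic of the AIA can be sketched in code. The real AIA derives its impact level (I to IV) from a weighted questionnaire; the numeric thresholds below are an illustrative assumption, not the official scoring rules.

```python
# Illustrative sketch (not the official AIA): map a questionnaire
# score to one of the four impact levels used by Canada's
# Algorithmic Impact Assessment. Level I is the lowest impact,
# Level IV the highest; accountability requirements scale up with
# the level. Thresholds here are hypothetical.

def aia_impact_level(score: int, max_score: int = 100) -> str:
    """Return an AIA-style impact level for a raw questionnaire score."""
    pct = score / max_score
    if pct < 0.25:
        return "Level I"    # little to no impact
    elif pct < 0.50:
        return "Level II"   # moderate impact
    elif pct < 0.75:
        return "Level III"  # high impact
    return "Level IV"       # very high impact
```

    A vendor-supplied system scoring, say, 80 out of 100 under this sketch would land at Level IV and so attract the most stringent accountability obligations.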

    The Indian AI ecosystem is similar to Canada’s in that the Government plays a significant role in the policies regulating AI. This is contrary to the United States, where internal company policies came first, driven by leading organisations such as Google and by initiatives like the Partnership on AI, and were only later followed by the Executive Order on AI in 2019. The AIA tool not only increases the trustworthiness of AI initiatives but also displays a public commitment to supporting ethical AI, devoid of bias. Applying the AIA to any vendor-to-public AI solution could also help ensure homogeneity in AI policies.

    In October 2019, the CIO Strategy Council published a new National Standard of Canada, intended for all organisations, including public and private companies, not-for-profit organisations and even the Government, to help design responsible AI systems. The Council’s members include EY, Accenture, Deloitte, KPMG, Microsoft, MindBridge and even VMware. One of the most vital requirements therein concerns the ‘human in the loop’, ‘human on the loop’ and ‘human out of the loop’ models, which vary the extent of human involvement in automated decisions across all verticals. While ‘human out of the loop’ may suffice for simple administrative tasks, a ‘human in the loop’ may be vital for medical interventions. Classifications such as this would prove vital for decisions that rank higher on the AIA (detailed above) and therefore have a greater impact on the public at large. Classifying automated systems on the basis of a closed-circuit or even a non-locked algorithm would also create sufficient checks on data quality (including a requirement for real-time data) and expected outcomes. Whatever the degree of human involvement, it would be crucial to allay fears of AI domination by ensuring that every AI vendor or company provides a redressal mechanism for the public.
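    The pairing of impact levels with oversight models described above can be sketched as a simple lookup. The specific mapping below is an assumption for illustration, not a table taken from the standard or the Directive.

```python
# Illustrative sketch: pairing an assessed AIA impact level with a
# degree of human oversight from the National Standard's vocabulary.
# This mapping is a hypothetical example, not the standard's own.

HUMAN_OVERSIGHT = {
    "Level I":   "human out of the loop",  # e.g. routine administrative tasks
    "Level II":  "human on the loop",      # a human monitors and can intervene
    "Level III": "human in the loop",      # a human approves each decision
    "Level IV":  "human in the loop",      # approval plus higher-level review
}

def required_oversight(impact_level: str) -> str:
    """Return the assumed oversight model for a given impact level."""
    return HUMAN_OVERSIGHT[impact_level]
```

    Under such a mapping, a high-impact system such as one used in medical interventions would be barred from operating ‘human out of the loop’.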

    Backed by strong research into foundational policies, and therefore greater trust in data quality, investment in Canadian AI startups has been rising, with US$660 million raised in 2018 alone. At present, AI leadership reports rank countries only on the speed of innovation, whether or not it is supported by an efficient and ethical policy framework. Canada, instead, has balanced its AI growth with detailed procedural norms of accountability and privacy, a balance that remains widely debated in several other countries ranking higher on the innovation scale.

    While the Indian Government has already put AI into application and is working through several policy proposals to ensure ‘AI for All’, it would be beneficial to incorporate the learnings of the AIA while simultaneously addressing its impact on the intellectual property rights of AI vendors. Furthermore, in addition to classifying AI solution providers on the basis of human involvement, the Government could also establish procedural norms of productivity based on data quality checks. India’s thriving ecosystem requires the necessary checks and balances for the efficient and ethical deployment of AI solutions, and Canada’s transparent guidelines may provide the necessary boost for an accountable and trustworthy policy framework.

    (Image credit: Photo by Social Soup Social Media from Pexels)

    About the author

    Sharmila Nair

    Sharmila Nair is an alumna of Gujarat National Law University (B.A. LL.B. Hons.) and the University of California, Berkeley (LL.M.). She specialises in Intellectual Property and Technology laws. Her research area includes AI-policy structural requirements for India, vis-à-vis other jurisdictions, and their application in the Indian ecosystem.
