Oxford University launches commission on AI for good governance



Oxford University has formed a commission that will work with policymakers to advise world leaders on the use of AI and machine learning in public administration and governance.

The Oxford Internet Institute, part of the Social Sciences Division of the University of Oxford, has formed a Commission on Artificial Intelligence and Good Governance (OxCAIGG) to analyse the AI implementation and procurement challenges faced by governments around the world. The commission will work with academia, technology experts and policy-makers, and will make recommendations on how AI-related tools can be adapted and adopted by policy-makers for good governance. 

In its first paper, the commission stresses four principles for government use of AI: inclusive design, informed procurement, purposeful implementation and persistent accountability, which together aim to protect democracy. In the coming months the commission will also publish reports, apart from forming best-practice guidelines for policy-makers and government officials. The paper further outlines four significant challenges relating to AI development and application that must be overcome for AI to be put to work for good governance and leveraged as a 'force for good' in government responses to the COVID-19 pandemic. 

The commission will focus initially on health care. It will also look into open cities and into military and policing applications, two areas where governments around the world are investing significant resources.

According to Professor Philip Howard, Director of the Oxford Internet Institute, Chair of OxCAIGG and the co-author of the think piece, AI will have an important role to play in building our post-coronavirus world. “The pandemic will certainly supercharge the pressure for widespread surveillance, data collection, and the use of AI to deliver more efficient public services. Innovative AI will need to be governed accordingly. Machine learning, coronavirus tracking apps, cross-platform data sets, and AI-driven public health research shouldn’t pose a risk to fundamental human rights and legal safeguards,” he said. 

Industry is expected to promote AI solutions to public agencies aggressively in the years ahead, so the commission will work to ensure that policy-makers ask the right questions and can interpret the answers. Howard notes that procurement practices are often pegged to very simple statements, such as the fair distribution of resources. Local government officials frequently lack a thorough knowledge of AI, which can make accountability tricky. 

The committee draws members from academia, government and business, including Dame Wendy Hall, chair of the Ada Lovelace Institute, and Dr Yuichiro Anzai, chair of the Council for Artificial Intelligence Strategy and adviser to the Japanese government on strategic policy.
