Robot hand working on an AI playbook
Image generated with assistance from DALL-E.

AI playbook: Unlocking its potential for society

Over the last year we’ve seen breakthroughs in AI innovation driven by developments in systems that can generate new content – including text, images, video, code, and audio – based on simple prompts or existing data. Natural language interfaces can now connect to a reasoning engine, ushering in a new category of computing. These generative AI capabilities have the potential to drive breakthroughs in areas like healthcare, scientific research, and sustainability. The growth of these conversational interfaces will help unlock the benefits of AI across society, allowing anyone to use cutting-edge AI regardless of their background or level of technical skill.

While the potential is significant, there is also concern that AI may undermine information integrity, exacerbate bias and inequality, and harm jobs, education, and the environment. That is why we must collectively commit to meeting the opportunity responsibly and work to bring the benefits of AI to all in society, including protecting and advancing fundamental rights.

For instance, we’ve seen an increase in the prevalence of misleading information online, which has raised questions about the development of technical standards to certify the source and history of media content. Innovative solutions like Truepic’s Project Providence use technology to maintain the provenance, or origin, of images from capture through storage to display. This enables users to verify images as authentic and to transparently display their time, date, location, and source to viewers. With this technology, modifications to images can be detected and the authentic source of an image can be proven. In Ukraine, Project Providence is currently being used to film and document war-related damage to cultural heritage sites. Transparent and authentic documentation of the damage will be critical for pursuing reparations and restoring these sites in the future.

As a starting point, we must think about the fundamental rights impacts of how we build and deploy AI technology. This requires conducting human rights impact assessments and advancing responsible practices in our technology supply chains – which now extend beyond raw materials and finished goods to include everything from innovation in product design through the end of the product lifecycle.

As organizations continue to focus on traditional supply chains, they also need to consider the “digital supply chain” which includes the people involved in the evaluation and training of AI models as well as the data – how it is captured, organized, and consumed. To mitigate potential harm against people, there must be accountability in tech development and deployment, so that AI can be leveraged broadly to help people in their day-to-day lives and make progress on our greatest societal challenges.

We also must be critically aware of how people access and connect to AI technology. As with the internet and smartphones, AI will reshape personal and professional life, helping people advance critical thinking, stimulate creative expression, and be more productive. But AI requires connectivity, and approximately 2.7 billion people – roughly one third of the world’s population – lack internet access. As advancements in AI continue, these communities are at risk of being left behind. Making sure all people are included in the future created by AI requires us to think of internet connectivity as a prerequisite for inclusive access to AI.

Once connected, there is an enormous opportunity for AI innovation to help close gaps and bridge divides. To deploy AI responsibly, we need to build technology that is inclusive and accessible by design, work to incorporate disability data and protect against ableism, and push for paradigm shifts that empower communities. For instance, a digital virtual assistant enabled with AI capabilities can provide a level of context and understanding comparable to that of a human volunteer. By simply submitting a question and a picture of their product, a blind or low vision customer can access effective tech support in an accessible way. This is accessibility by design, and if we continue to do this right, AI could help close the disability divide and empower people with disabilities.

As we think about communicating in this AI moment, it’s important to find the right balance between embracing nuance and keeping it simple. Embracing nuance requires being transparent about what we’re learning and sharing successes, while also innovating responsibly to avoid potential harm and being accountable when we get it wrong. Keeping it simple means being clear and concise about the capabilities of these tools and honest about their limitations.

We all have a role to play to help anticipate and guard against potential harm. We need government, academia, civil society, and industry to come together to make sure that, as AI becomes a bigger part of our lives, we put in place norms and standards to guide responsible use. By working together to address the challenges, we can embrace the opportunities and bring the benefits of AI to all in society.


Teresa Hutson is corporate vice president of Technology for Fundamental Rights at Microsoft. Her team works to support people’s fundamental rights and address the challenges created by technology by promoting responsible business practices, using data and technology to expand accessibility and meaningful connectivity, and advancing fair and inclusive societies.