How Grammarly’s Team Members Create a Responsible AI Culture

Dive into the foundational principles and practices that Grammarly’s team employs to embed responsibility in AI’s evolution.

Written by Lucas Dean
Published on Apr. 29, 2024
Image: Shutterstock

AI's introduction to the public sphere has elicited both amazement and apprehension. Its capabilities are vast, from generating photorealistic images and analyzing massive data sets to coding software solutions and crafting engaging conversational text.

However, once the initial awe surrounding this technology fades, addressing the serious ethical implications that accompany AI becomes crucial. Concerns such as data privacy, the transparency of AI decisions and ingrained biases in machine learning models reveal the depth and complexity of the issues at hand.

Despite its sophisticated abilities, generative AI is akin to a human child in its formative stage. Much like a child learns behaviors and ethics from its surroundings and parents, AI systems learn from the data and guidance provided by their developers.

These developers are guides who must instill principles of morality and responsibility in these systems — a formidable yet essential task to ensure that as AI matures, it does so with a sound ethical framework in place.

At Grammarly, Engineering Manager of Responsible AI Knar Hovakimyan ensures effective AI is balanced with ethical concerns and practices. The company’s generative AI enhances the writing process by offering contextually aware suggestions that improve clarity, correctness and tone across various stages of writing. 

Hovakimyan delved into what the role entails and how she advances responsible AI. 

 

Knar Hovakimyan
Engineering Manager, Responsible AI

Grammarly offers real-time writing assistance to more than 30 million individuals and 30,000 teams. Its subscription-based services enhance communication without selling user data, and its developer platform allows integration into other products.

 

What are the main practices you employ to create a responsible AI culture, whether through transparent policies, employee training or other means? How have these practices proven beneficial?

I lead the responsible AI team at Grammarly. We set standards for responsible AI development and work hands-on to make Grammarly's product offerings as safe and fair as possible. Of course, having one team focused on responsible AI doesn't create a responsible AI culture on its own. To use and develop AI responsibly, everyone working with AI must understand, to some extent, the technology's functionality, risks and impact. Part of the work of Grammarly's responsible AI team is to prove that responsible AI is a high-stakes affair, not an afterthought or a system that polices other work and introduces friction.

 

“Responsible AI is a high-stakes affair, not an afterthought or a system that polices other work and introduces friction.”

 

The responsible AI team at Grammarly helps build this culture by doing the heavy lifting. We conduct deep research and distill our learnings into training and knowledge sharing across the company. We build automated tools for assessing AI safety and safeguarding against known risks. One of our tools is even open source, which helps support a responsible AI culture beyond our own company. By giving AI developers clear standards for safe AI use and tools that enable them to meet those standards, we ensure buy-in across the company and ship AI products responsibly, in line with our safety criteria.
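As a purely illustrative sketch of what an automated safety check along these lines can look like (this is not Grammarly's actual tooling; the blocklist terms, PII pattern and checks below are assumptions made for the example), here is a minimal Python gate that screens a generated suggestion against simple risk checks before it ships:

```python
import re

# Illustrative safety gate: the terms and pattern below are placeholders
# for the example, not Grammarly's actual rules or tooling.
BLOCKLIST = {"example-slur", "example-threat"}      # assumed unsafe terms
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # SSN-like strings, as a stand-in for PII checks


def assess_suggestion(text: str) -> dict:
    """Run a generated suggestion through simple automated risk checks."""
    words = {w.lower().strip(".,!?") for w in text.split()}
    blocklist_hits = words & BLOCKLIST
    pii_hits = PII_PATTERN.findall(text)
    return {
        "blocklist_hits": sorted(blocklist_hits),
        "pii_hits": pii_hits,
        "safe": not blocklist_hits and not pii_hits,
    }


if __name__ == "__main__":
    print(assess_suggestion("Consider rephrasing this sentence for clarity."))
    # {'blocklist_hits': [], 'pii_hits': [], 'safe': True}
```

In practice, checks like these would typically be backed by trained classifiers, evaluation data sets and human review rather than keyword lists, and they would run as part of a pipeline that an AI feature passes before release.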

 

Why is it critical for your organization and others in your industry to establish a responsible AI culture?

AI isn’t perfect — it can behave unpredictably, hallucinate or generate unsafe content, and these issues have the potential to harm reputations or adversely affect business decisions. 

To use AI safely and effectively, it's essential to take responsibility for understanding and reducing these risks. This is particularly important at Grammarly, where we use AI in our daily work and develop AI product offerings for other organizations and individuals. Every positive interaction with an AI product compounds user trust, a little at a time. However, a single negative interaction can wipe out vast stores of goodwill and harm businesses or individuals. Trust is hard to win and easily lost, so we must get it right over and over again. That requires taking a responsible and thoughtful approach at every stage of AI development and use.

 

What is the biggest lesson you or your team have learned as a result of establishing AI governance?

Every day, we find new use cases for generative AI, and each one unearths new risks. In grappling with these risks, we have realized that as powerful as AI tools are, they cannot replace human intention. One of the most essential components of responsible AI is informed decision-making by the human beings involved in developing or using AI products. Using AI responsibly means understanding what the AI you're using can do and where it might go wrong, choosing the proper use cases and implementing safeguards against the risks.

Responses have been edited for length and clarity. Images provided by Shutterstock and listed companies.