Why you need an organizational AI ethics committee to do AI right



Artificial intelligence (AI) may still feel a bit futuristic to many, but the average consumer will be surprised at where AI can be found. It is no longer a science fiction concept limited to Hollywood and feature films or top secret technology found only in computer science labs at the Googles and Metas of the world – quite the opposite. Today, AI not only powers many of our online shopping and social media recommendations, customer service inquiries and loan approvals, but it also actively creates music, wins art competitions and beats humans in games that have been around for thousands of years.

Because of this growing awareness gap around AI’s expansive capabilities, a critical first step for any organization or business using or delivering it should be to form an AI ethics committee. This committee will be tasked with two main initiatives: engagement and education.

The ethics committee will not only prevent the mishandling and unethical application of artificial intelligence as it is used and implemented; it will also work closely with regulators to set realistic parameters and formulate rules that proactively protect individuals from potential pitfalls and biases. Furthermore, it will educate consumers, enabling them to view AI through a neutral lens supported by critical thinking. Users should understand that AI can change how we live and work, but also that it can perpetuate biases and discriminatory practices that have plagued humanity for centuries.

The case for an AI ethics committee

Leading institutions working on artificial intelligence are probably the most aware of its potential both to change the world for the better and to cause harm. Some organizations have more experience in the room than others, but internal oversight matters for organizations of all sizes and all levels of management experience. The Google engineer who became convinced that an NLP (natural language processing) model was actually sentient (it was not) is a clear example of why education and internal ethical parameters must be prioritized. Starting AI development off on the right foot is critical to its (and our) future success.


Microsoft, for example, is constantly innovating with artificial intelligence, and it puts ethical considerations first. The software giant recently announced the ability to use AI to summarize Teams meetings, which can mean fewer notes and more strategic thinking in the moment. But that win doesn’t mean every AI innovation from the company has been perfect. Over the summer, Microsoft scrapped its AI facial analysis tools due to the risk of bias.

Although development was not perfect every time, this shows the importance of having ethical guidelines in place to determine the level of risk. In the case of Microsoft’s AI facial analysis, those guidelines determined that the risks outweighed the rewards, protecting us all from potentially harmful outcomes – such as the difference between receiving an urgently needed monthly support check and being unfairly denied aid.

Choose proactive over passive AI

Internal AI ethics committees act as checks and balances on the development and advancement of new technologies. They also enable an organization to stay fully informed and to formulate consistent positions on how regulators can protect all citizens from harmful AI. While the White House’s proposal for an AI Bill of Rights shows that active regulation is just around the corner, regulators still need knowledgeable insight from industry experts on what’s best for citizens and organizations regarding safe AI.

Once an organization has committed to building an AI ethics committee, it is important to practice three proactive, as opposed to passive, approaches:

1. Build with intention

The first step is to sit down with the committee and together finalize what the end goal is. Be diligent in your research. Talk to technical managers, communicators and anyone across the organization who might have something to add about the committee’s direction – diversity of input is essential. It is easy to lose track of the scope and primary function of an AI ethics committee if goals and objectives are not established early, and the end product may drift from the original intent. Find solutions, build a timeline and stick to it.

2. Don’t boil the ocean

Like the ocean itself, AI is a complex field that stretches far and runs deep, with many unexplored trenches. When starting the committee, don’t take on too much or too broad a scope. Be focused and deliberate in your AI plans, and know what your use of this technology aims to solve or improve.

3. Be open to different perspectives

A background in deep technology is helpful, but a well-rounded committee includes diverse perspectives and stakeholders. That diversity makes it possible to surface valuable opinions about potential ethical AI threats. Include the legal, creative, media and engineering teams; this gives the company and its customers representation in all areas where ethical dilemmas may arise. Issue a company-wide call to action, or create a questionnaire to define goals – remember, the aim here is to broaden the dialogue.

Education and commitment save the day

AI ethics committees facilitate two aspects of success for an organization using AI: education and engagement. Educating everyone internally, from engineers to Todd and Mary in accounting, about the pitfalls of AI will better equip organizations to inform regulators, consumers and others in the industry and foster communities engaged in and educated on AI issues.

CF Su is VP of Machine Learning at Hyperscience.

