The future of generative AI and its ethical implications



Generative AI is revolutionizing how we experience the internet and the world around us. Global AI investment increased from $12.75 billion in 2015 to $93.5 billion in 2021, and the market is projected to reach $422.37 billion by 2028.

While this prospect may make it sound like generative AI is the “silver bullet” for moving our global society forward, it comes with an important footnote: The ethical implications are not yet well defined. This is a serious problem that can hamper continued growth and expansion.

What generative AI gets right

Most generative AI use cases provide lower-cost, higher-value solutions. For example, generative adversarial networks (GANs) are particularly well suited to advancing medical research and accelerating the discovery of new drugs.

It is also becoming clear that generative AI is the future of text, image and code generation. Tools like GPT-3 and DALL-E 2 are already seeing widespread use in AI text and image generation. They have become so good at these tasks that it is almost impossible to distinguish human-generated content from AI-generated content.


The million-dollar question: What are the ethical implications of this technology?

Generative AI technology is advancing so quickly that it is already outpacing our ability to imagine future risks. We must answer critical ethical questions on a global scale if we hope to stay ahead of the curve and see long-term, sustainable market growth.

First, it is important to briefly discuss how foundation models such as GPT-3, DALL-E 2 and related tools work. They are deep learning systems that, in effect, learn to “outperform” a competing model or objective, producing ever more realistic images, text and speech. Labs like OpenAI and Midjourney then train these models on massive datasets drawn from billions of users to create better, more sophisticated outputs.
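The “outperform a competing model” idea is clearest in the GAN setup mentioned earlier: a generator tries to fool a discriminator, while the discriminator tries to tell real samples from fakes. The sketch below, a minimal illustration using made-up toy scores rather than a real trained network, shows how each side’s loss is computed from the discriminator’s confidence scores (logits):

```python
import math

def sigmoid(x):
    """Squash a raw logit into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def discriminator_loss(real_logits, fake_logits):
    # The discriminator wants high scores on real samples
    # and low scores on generated (fake) samples.
    real_terms = [-math.log(sigmoid(r)) for r in real_logits]
    fake_terms = [-math.log(1.0 - sigmoid(f)) for f in fake_logits]
    return (sum(real_terms) + sum(fake_terms)) / len(real_logits)

def generator_loss(fake_logits):
    # The generator wants its fakes to be scored as real
    # (the common "non-saturating" formulation).
    return sum(-math.log(sigmoid(f)) for f in fake_logits) / len(fake_logits)

# Toy logits: this hypothetical discriminator is fairly confident,
# scoring real samples high and fakes low.
real_logits = [2.0, 1.5, 3.0]
fake_logits = [-2.0, -1.0, -1.5]

d_loss = discriminator_loss(real_logits, fake_logits)
g_loss = generator_loss(fake_logits)
print(d_loss, g_loss)
```

With these scores the discriminator’s loss is low and the generator’s is high; training alternates updates to each network so the generator gradually drives its fakes toward being indistinguishable from real data, which is exactly why GAN outputs can be so hard to tell apart from human-made content.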

There are many exciting, positive uses for these tools. But we would be remiss as a society not to recognize the potential for exploitation and the legal gray areas this technology reveals.

For example, two important questions are currently under debate:

Should a program be able to claim authorship of its results, even if the output is derived from many inputs?

Although there is no universal standard for this, the situation has already come up in legal spheres. The US Patent and Trademark Office and the European Patent Office have rejected patent applications filed by the “DABUS” AI developers (who are behind the Artificial Inventor Project) because the applications cited AI as the inventor. Both patent offices ruled that non-human inventors are not eligible for legal recognition. However, South Africa and Australia have ruled that AI can be recognized as an inventor on patent applications. In addition, New York-based artist Kris Kashtanova recently received the first US copyright registration for a graphic novel created with AI-generated artwork.

One side of the debate says that generative AI is essentially an instrument that can be used by a human creator (like using Photoshop to create or modify an image). The other side says that the rights should belong to the AI and possibly the developers. It is understandable that developers who create the most successful AI models would want the rights to content creation. But it is highly unlikely that this will succeed in the long term.

It is also important to note that these AI models are reactive. This means that the models can only “react” or produce output according to what they are given. Once again, this puts control in the hands of humans. Even the models that are left to refine themselves are still ultimately driven by the data that humans feed them; therefore, AI cannot really be an original creator.

How do we deal with the ethics of deepfakes, intellectual property and AI-generated works imitating specific human creators?

People can easily find themselves the target of AI-generated fake videos, explicit content and propaganda. This raises concerns about privacy and consent. There is also the looming possibility that people will be out of a job when AI can create content in their style with or without their permission.

A final problem arises from the many cases where generative AI models consistently exhibit biases based on the datasets they are trained on. This can further complicate the ethical issues, because we have to consider that the training data is the intellectual property of others, some of whom may not have consented to their data being used for that purpose.

Adequate laws have not yet been written to address these issues around AI outputs. Generally speaking, however, if it is determined that AI is just a tool, then it follows that the systems cannot be responsible for the work they create. After all, if Photoshop is used to create a fake pornographic image of someone without consent, we blame the creator and not the tool.

If we look at AI as a tool, which seems most logical, we cannot directly attribute ethics to the model. Instead, we need to look deeper into the claims made about the tool and the people who use it. This is where the true ethical debate lies.

For example, if AI can generate a credible thesis project for a student based on a few inputs, is it ethical for the student to pass it off as their own original work? If someone uses a person’s likeness in a database to create a video (malicious or benign), does the person whose likeness has been used have a say in what is done with that creation?

These questions only scratch the surface of the possible ethical implications that we as a society must work with to continue advancing and refining generative AI.

Despite the moral debates, generative AI has a bright, limitless future

Right now, IT infrastructure reuse is a growing trend driving the generative AI market. This lowers barriers to entry and encourages faster, more widespread technology adoption. Because of this trend, we can expect more indie developers to come out with exciting new programs and platforms, especially when tools like GitHub Copilot and Builder.ai are available.

The field of machine learning is no longer exclusive. That means more industries than ever can gain a competitive advantage by using AI to create better, more optimized workflows, analytics processes, and customer or employee support programs.

In addition to these advances, Gartner predicts that by 2025, generative AI models will account for at least 30% of newly discovered drugs and materials.

Finally, there is no doubt that content such as stock images, text and program code will shift to being largely AI-generated. Likewise, deceptive content will become more difficult to distinguish, so we can expect to see the development of new AI models to combat the spread of unethical or deceptive content.

Generative AI is still in its early stages. There will be growing pains as the global community decides how to deal with the ethical implications of the technology’s capabilities. But with so much positive potential, there’s no doubt it will continue to revolutionize how we use the internet.

Andrew Gershfeld is a partner at Flint Capital.

Grigory Sapunov is the CTO of Inten.to.

