A team of researchers recently developed an algorithm that generates original reviews for wines and beers.

Considering that computers can't taste booze, this makes for a curious use case for machine learning.

The AI sommelier was trained on a database containing hundreds of thousands of beer and wine reviews.

Is it ethical to use AI-generated content without crediting the machine?

In essence, it aggregates those reviews and picks out keywords.

The big question here is: who is this for?


That's all well and fine, but it's hard to imagine any of these fictional people actually exist.

Will the people who benefit from using this AI be transparent with the people consuming the content it creates?

What's in a review?

With an AI, we're merely seeing whatever its operator cherry-picks.

It's the same with any content-generation scheme.

The most famous AI for content generation is OpenAI's GPT-3.

Yet even GPT-3 requires a heavy hand when it comes to output moderation and curation.

This raises the question: how ethical is it to generate content without crediting the machine?

Is that ethical?

That's not a question we can answer without applying intellectual rigor to a specific use of the software.

Not anymore.

The answer to all three is: no.

And, arguably, this is a bigger problem than plagiarism.

At least there's a source document when humans plagiarize each other.

That doesn't make the use of AI-generated content inherently unethical.

Can an AI sommelier be a force for good?

The researchers state that the antidote to the shady use of AI is transparency.

Clearly, there are far more questions than answers when it comes to the ethical use of AI-generated content.
