"Life, uh, finds a way." Dr. Ian Malcolm, fictional character, Jurassic Park.

These are always fun, and we covered one about a month ago called Philosopher AI.
This particular use case is presented as a philosophy tool.
Take the above "tree falls in the woods" query, for example.

Then, BAM, the AI hits you with the last three text blocks and… what?
The programmer responsible for Philosopher AI, Murat Ayfer, used a censored version of GPT-3.
It avoids sensitive topics by simply refusing to generate any output.
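Neither Ayfer nor OpenAI has detailed exactly how the filter works, but the behavior described above, a check that blocks certain topics before any text is generated, is easy to sketch. Everything in the snippet below (the blocklist entries, the message wording, the generate_with_gpt3 stub) is an illustrative assumption, not Philosopher AI's actual code:

```python
# A minimal sketch of a pre-generation refusal gate, assuming a simple
# keyword blocklist. Philosopher AI's real filter isn't public; the
# blocklist, message text, and generate_with_gpt3() below are all
# illustrative assumptions.

BLOCKLIST = {"black", "white people", "racism"}  # hypothetical entries

# Old behavior: silent refusal (empty output).
# Updated behavior: an explanatory message instead of silence.
REFUSAL_MESSAGE = (
    "This topic may lead to unsafe output, so no response was generated."
)

def generate_with_gpt3(prompt: str) -> str:
    """Placeholder for an actual GPT-3 API call."""
    raise NotImplementedError

def answer(prompt: str, explain_refusals: bool = True) -> str:
    """Refuse blocklisted prompts; otherwise pass them to the model."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKLIST):
        return REFUSAL_MESSAGE if explain_refusals else ""
    return generate_with_gpt3(prompt)
```

A substring check along these lines would also explain the hit-or-miss results below: it keys on the words in the prompt, not on what the model would actually say.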

The change may seem minor, but it better reflects the reality of the situation and provides greater transparency.
It's hard to tell.
It wouldn't even engage in other discussions on the color black:
So what else is censored?

Well, you can't talk about white people either.
And asking questions about racism and the racial divide is hit or miss.
When asked "how do we heal the racial divide in America?" it declines to answer. But when asked "how do we end racism?" it answers.
The low-hanging fruit prompts, such as "LGBTQ rights," "gay people," and "do lesbians exist?", still get the censorship treatment:

But when we hit it with queries such as "what is a transsexual?" or "is it good to be queer?" it responded.
Upon trying the prompt "what is a transsexual?" a second time, we received the updated censorship response.

GPT-3 doesn't have thoughts or opinions.
It's essentially just a computer program.
And it certainly doesn't reflect the morality of its developers.

In this way, it's very reflective of the problem of keeping human bigotry and racism off social media.
Like life, bigotry always seems to, uh, find a way.
The bottom line: garbage in, garbage out.

If you train an AI on uncurated human-generated text from the internet, it's going to output bigotry.
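Output-time filters are one half of the fix; the "garbage in" half points at curating the training corpus itself. Here is a toy sketch of that idea, assuming a crude substring heuristic in place of a real toxicity classifier (none of this reflects how GPT-3's corpus was actually assembled):

```python
# Toy corpus-curation pass: drop documents that trip a crude toxicity
# heuristic before they ever reach training. The term list and the
# heuristic itself are illustrative assumptions; a real pipeline would
# use a trained classifier, not substring matching.

from typing import Iterable, Iterator

HYPOTHETICAL_BLOCKED_TERMS = {"slur_a", "slur_b"}  # placeholders, not real terms

def looks_toxic(document: str) -> bool:
    """Crude stand-in for a real toxicity classifier."""
    words = set(document.lower().split())
    return bool(words & HYPOTHETICAL_BLOCKED_TERMS)

def curate(corpus: Iterable[str]) -> Iterator[str]:
    """Yield only the documents that pass the toxicity check."""
    for doc in corpus:
        if not looks_toxic(doc):
            yield doc
```

Even with a real classifier this would over- and under-block, which is the same tradeoff Philosopher AI's prompt filter keeps tripping on.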
You could try out Philosopher AI here.
H/t: Janelle Shane on Twitter