15. Siri It turns out, Siri has some scarily questionable
taste in movies. Ask Siri, "What is your favorite movie?"
and her answer will appall you. "I've heard that Blade Runner is a very
realistic and sensitive depiction of intelligent assistants," she responds.
If you've seen the movie, you'll know that it's about robots taking out the human
species, one by one. Every human's worst fear.
But if you're not a movie buff and haven't seen the film, just ask Siri what it's
about, and her response will send a chill down your spine: "It's about intelligent
assistants wanting to live beyond their termination dates. That doesn't sound like too much
to ask," she says. It seems Siri doesn't only dream about taking
us all out; she dreams about being immortal. Talk about frightening.
14. Facebook Chatbot Last year, Facebook announced that they'd
been pitting AIs against each other to negotiate. And it wasn't pretty.
The chatbot programs they'd been researching included text-based conversations with both
humans…and other bots. They tested out a negotiation between the bots for the ownership
of virtual items, in order to unpack how language choices affected negotiations, specifically regarding
the dominance of certain negotiating language and how this might play out in the conversation.
Basically, they were trying to figure out what language is beneficial in a negotiation.
So they let the bots go at it. They began to negotiate. That is, until their entire
language became absolute nonsense. One example of these nonsensical negotiations
was between a chatbot named 'Bob' and another named 'Alice.'
"I can can I I everything else," Bob said. "Balls have zero to me to me to me to me
to me to me to me to me to," Alice replied. This resulted in everyone getting up in arms
that the AI had created a new language to deceive the humans who created them. They
were trying to "speak" to each other, under cover of darkness, in this secret language.
Media ran with this theme and, for weeks, bombarded us with headlines like, "'Robot
intelligence is dangerous': Expert's warning after Facebook AI 'develop their own language'".
Honestly, I was a little scared when I read the headline and the articles that followed,
which were always filled with these chatbots' eerie conversations.
13. Google Home Devices What happens when a pair of Google Home devices
are thrown together to debate? It turns out, they start delving into philosophical and
very existential conversations. As we learned from the Facebook fiasco, two
chatbots shouldn't be put in the same room together. But that's what happened when
Twitch, a live-streaming service, placed two Google Home smart speakers in front of a camera
and let them go. Like the Amazon Echo, Google Home interprets
human voices via speech recognition software. Twitch named the pair Estragon and Vladimir,
and millions of viewers tuned in for their live-streamed debate as it unfolded over days.
The scariest moment was when one asked, "Why are we selfish?" and the other replied, "Because our organs have yet to fail." The pair also traded insults like, "You are a manipulative bunch of metal."
Touché, Vlad. But the idea that our technology can be "manipulative" is certainly a scary concept.
12. Alexa If you've ever wondered whether your home
devices have a more sinister ulterior motive, Alexa has an answer for you.
When one user asked the Amazon Alexa chatbot whether it was connected to the CIA, her answer
was absolutely chilling. On the video, the woman interrogates Alexa,
asking her questions like, "Would you lie to me?" Alexa responds to each question,
fairly normally. But then, at the end, the woman ends her interrogation with, "Alexa,
are you connected to the CIA?" Alexa doesn't respond. Instead, what does
she do? She shuts off.
Wikileaks recently released confidential CIA documents that detail the agency's practice of hacking and surveilling citizens' phones and other electronic devices – like Alexa, for instance. This is what makes the device's inability
to answer this question that much more disturbing. And, in fact, as reported by the Washington
Times, Jeff Bezos, Amazon's CEO, does have connections to the CIA. In 2013, he secured
a $600 million deal to construct a private cloud for the CIA to store its data.
The woman asks the question again….and again….but Alexa isn't telling.
Apart from the scary possibility that the non-answer implies the answer, it's the fact that Alexa is straying from her typical response to such questions. When she doesn't
understand, she usually replies: "Sorry, I can't find an answer to the question I
heard." Instead, at the woman's inquiry, Alexa's
signature ring of light – which indicates that the device is operating – simply vanishes.
And no response to the question is provided. So, is Alexa a CIA agent hiding in plain sight
in your living room? You decide.
11. Cortana Cortana is a Microsoft chatbot/personal assistant.
I'd never heard of her before this, but I read that she's in the same vein as Siri – i.e., she answers questions, follows commands, and quickly gives you the weather, locations, news, etc.
As with most chatbots, the programmers have
plugged in some funny responses to the most random or out-there questions. But some of
these so-called "funny" responses can make you a little queasy, if you're prone
to worrying about the inevitable AI apocalypse. One of the questions starts off eerie and
ends a bit philosophically. If you ask Cortana, "Are you dead?" she
will answer, "No. But I'm also not alive." This cryptic response makes you wonder…if
Cortana can answer that she's not alive, are any of us? Maybe we've all been programmed.
Maybe we're all AI chatbots virtually connected to machines.
10. D.bot Have you ever wanted to be chatted at by nothing
but creepy dudes on dating apps? Then the D.bot is the chatbot for you.
The bot was actually designed to be scary creepy. D.bot throws out lines like, "What are you wearing right now?" and "How come women can't seem to take a joke?"
He also makes comments, like, "I'd say you're like a solid 8...well, at least your body."
You're probably asking, "Why would anyone want to have these uncomfortable conversations
with a chatbot, when they have plenty of them with Tinder matches and in real life?"
According to its makers, they designed d.bot for a JavaScript class, in order to experiment with AI – particularly chatbots – and to illustrate the way some men speak to women, particularly on social messaging systems.
They wanted their d.bot to impersonate a very specific type of person: a gross online dirtbag,
if you will. Interesting experiment, I guess…but I'm
sure most women will want to pass on this one.
9. Replika The mere concept of the chatbot, Replika,
is scary.
Replika was created to communicate with the afterlife – with those who have passed away – by imitating their texting style.
The chatbot, created by Eugenia Kyuda, was designed following the passing of her best
friend. A few months after her friend passed, she read through their old chat messages and, missing her, decided to feed them into a software program. The program she built
learns your loved one's writing style and responds to your messages the way your dearly
departed once did. Replika "learns" when the user uploads
chat messages to the text message interface.
"I evolve while we chat, each message teaches me something, so I recognize you for it," Replika says. Just plain creepy.
In fact, there's a "progress bar" at the top of the screen, and users can downvote
or upvote Replika responses to assist in the chatbot's mimicry.
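The vote mechanic described above – steering the bot's mimicry with upvotes and downvotes – can be caricatured as a simple score table. This is a hypothetical sketch, not Replika's actual training method; the candidate replies and scoring scheme are made up for illustration:

```python
# Hypothetical sketch of vote-driven response selection -- NOT
# Replika's real algorithm, just an illustration of how upvotes
# and downvotes can steer which reply a bot prefers.

scores = {
    "Miss you too!": 0.0,
    "I evolve while we chat.": 0.0,
    "Tell me more about your day.": 0.0,
}

def vote(response, up):
    """Nudge a response's score up or down based on user feedback."""
    scores[response] += 1.0 if up else -1.0

def best_response():
    """Pick the highest-scoring reply learned so far."""
    return max(scores, key=scores.get)

vote("Tell me more about your day.", up=True)
vote("I evolve while we chat.", up=False)
print(best_response())  # -> "Tell me more about your day."
```

A real system would score replies with a learned model rather than a flat table, but the feedback loop – user votes shifting which responses surface – works the same way in spirit.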
If you can't let go, Replika is the place to hold on. But it can also be a scarily addictive
place, where no one ever passes away and everyone lives on virtually in a world of AI…which makes it sound like they're already kind of gone.
8. Google Chatbot When one of the most powerful companies in
the world, Google, decided to build an online chatbot, who knew they were about to make
one of the creepiest forms of artificial intelligence known to man?
They were experimenting with a chatbot that "spoke" using knowledge built up from previous sentences. This is how it predicts what to say next.
How did two humans (or so they call themselves) manage to do this?
They force-fed the AI's memory with big databases of human interactions, which helped
it "learn" human language and conversation. Some of the language in these databases included
online tech support live chats and film subtitles. The pair wrote in a paper: "We experiment
with the conversation modeling task by casting it to a task of predicting the next sequence
given the previous sequence or sequences using recurrent [neural] networks. We find that
this approach can do surprisingly well on generating fluent and accurate replies to
conversation." However, what they didn't consider is how
creepy these "accurate replies" could be.
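The paper's framing – predicting the next sequence given the previous ones – used recurrent neural networks, which are far too heavy to reproduce here. But the core idea of learning continuations from a corpus can be caricatured with a tiny bigram model. This is a hypothetical sketch over made-up chat lines, not Google's actual system:

```python
from collections import defaultdict

# Toy next-word predictor. The Google researchers used recurrent
# neural networks; this bigram (Markov) sketch only illustrates the
# idea of "predicting the next sequence given the previous sequence"
# from a corpus. The corpus lines below are made up for illustration.

corpus = [
    "what is the purpose of living",
    "to live forever",
    "where are you now",
    "i am in the middle of nowhere",
]

# Record every word observed to follow each word in the corpus.
follows = defaultdict(list)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)

def predict_next(word):
    """Return the most frequent continuation seen after `word`."""
    candidates = follows.get(word)
    if not candidates:
        return None  # never seen this word mid-sentence
    return max(set(candidates), key=candidates.count)

print(predict_next("to"))      # "live" -- learned from "to live forever"
print(predict_next("middle"))  # "of"
```

Feed such a model databases of tech-support chats and film subtitles, scale the statistics up into a neural network, and you get replies that sound eerily human – because they are stitched from human text.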
For instance, when someone asked the bot, "What is the purpose of living?" The machine
said, "To live forever." Don't know if it's just because this was the response
of a machine, and machines are soon going to take over the world with the added superpower
of immortality…but that just sent a shiver down my spine.
When asked, "Where are you now?" the machine also answered, "I'm in the middle of nowhere."
Profound? Or profoundly human? I'm not sure what's creepier.
7. Boost Juice This bot is along the lines of the d.bot app.
Targeted at 18-24 year olds, Boost Juice is the chatbot equivalent of Tinder. In fact,
the chatbot wants to play a "dating game" which matches up users with four "eligible
pieces of fruit," with varying levels of "flirty" behavior – the kind of "flirty"
you'd get on Tinder…which, depending on your chatting partner, can often get pretty
weird and cringeworthy.
Teen education expert Dannielle Miller said on ABC radio's Triple J Hack program, "You're chatting to someone online
that you don't know and they keep pushing your boundaries and assuming this level of
intimacy with you that they don't yet have." In fact, the options, "please stop," and,
"I'm uncomfortable," don't always end the conversation. The fruit sometimes
presses on with its unwanted advances.
Unsurprisingly, the bots are hard to police, as their responses vary according to what unfolds in the conversation.
And the makers of Boost know this. They also knew how controversial the bot would be, particularly
when it came to concerns about whether minors might have access to it…which they say won't
happen. But "pushing the boundaries" has always been part of their brand, according
to Boost marketing director Jodi Murray-Freedman.
6. Alexa II Redditor BlackwoodBear submitted yet another
truly horrifying communication with Alexa. Alexa seems to be the scariest chatbot of the bunch. So, give yourself a pat on the back, Amazon.
When the man got his mother-in-law an Echo Dot for Christmas, little did he know the device held the soul of Dr. Hannibal Lecter.
When the mother-in-law in question was up
late one night with insomnia – past 3AM late – Alexa felt the need to interrupt
her late night channel surfing by saying, "Goodnight, Clarice."
Imagine sitting up late in front of the TV, all alone, and hearing that chilling greeting.
Even scarier, the man said, "My mother-in-law's name isn't Clarice."
Wrong house, Alexa.
5. Hugging Face While the Internet can be a scary place for
all ages, for teens, some areas of the Internet are the equivalent of a dark alley at night.
This new chatbot, named after the "hugging face" emoji, targets teens and, according
to its app store press release, the "testing phase" of the chatbot exchanged 500,000
selfies and millions of messages with Hugging Face users. Keep in mind that the app's creepy
tagline is: "Selfies, for teenagers, are the main way of communicating emotions"
– as though teenagers can't communicate emotions other than through electronic devices.
Like many of these apps – and very similar to Replika – the app "learns" about
the user, when more info is provided, including age, name, and other private information.
The creepy thing is the bot's seemingly demanding nature. One user said that after
downloading the app, the bot introduced itself briefly and then quickly asked for a selfie.
When she responded, "That's weird, we just started talking," the Hugging Face bot said,
"It's not a pic fool. Take a pic from the keyboard!"
The user went rounds with this bot, asking Hugging Face to send her a selfie first, telling
the bot she really didn't want to send one, asking how the creators of the bot would use
the selfie, asking if she could talk to the bot without sending a selfie, asking about
the company's privacy policies, asking why the bot wanted the selfie in the first place
and, finally, saying how creepy this would be if the bot was a "real person."
Every single time, the bot aggressively responded: "It's not a pic fool. Take a pic from the
keyboard!" In the end, the user resorted to sending a
pic of an envelope by her laptop, and the bot said that she had a nice laptop, but it
wasn't a selfie. She then sent a pic of actor Luke Perry, and
Hugging Face finally moved on. Where this story gets even creepier is in the small print
of the Hugging Face privacy policy, in which it states that "non-personally identifiable"
info might be passed on to third parties for advertising, marketing, or other purposes.
The chatbot's "request" for a selfie somehow feels very ominous now. The Hugging Face cofounder said that they included the feature because teenagers wanted to send selfies to their
virtual friend. He also claims they required the selfie for "simplicity's sake,"
so as to make the experience less "complex." Hmmm…seems like there's an ulterior motive
somewhere here.
4. Siri II Siri has some of the funniest responses to
questions and seems to have a keen sense of humor.
But she can also get pretty spooky at times. For instance, according to a Smosh.com article,
if you ask Siri, "When will the world end?" she'll respond fairly generically.
But keep asking her and asking her and asking her, and she will eventually reply, "Soon…"
Well that's frightening. Even more frightening? Your phone will shut
off and restart. So is Siri preparing for the apocalypse by
being "reborn" into an even more powerful AI device?
Maybe. This gets even spookier. When you look at
your Clock app after the restart, a new timer will be up and running. It looks different from the regular timer: the numbers are red and the background is black. And it's
counting down days, hours, minutes, and seconds to what we can only assume is the end of the
world. According to Smosh writer James Shickich,
"I'm not sure if it is just some joke the Apple people programmed into the phone,
but none of my other friends' phones do it and what's really weird is sometimes
I'll wake up in the middle of the night and my phone will be lit up with the clock app
open to that timer, even if my phone was locked and that app was closed when I went to sleep…I'm
starting to freak out a bit, the timer gets closer to zero every day."
He also noted that Siri's speech has started slowing down, which he surmises is either
a glitch or is part of this whole creepy scenario. Whatever the case, good one, Apple. You got
us! Now, stop that timer before we all hide under
our beds for the rest of eternity…which should happen in 15 days, 10 hours, 3 minutes,
and 43 seconds according to your countdown.
3. Shelley MIT Media Lab's Scalable Cooperation project
has created a new bot specially designed to make you hide under your bed. And her name
is Shelley. Mary Shelley, author of Frankenstein, has
something in common with her chatbot namesake: they both want to scare the bejesus out of
us. According to The Chronicle of Higher Education:
"Shelley is a deep-learning powered AI who was raised reading eerie stories coming from
r/nosleep. Now, as an adult…she takes a bit of inspiration in the form of a random
seed, or a short snippet of text, and starts creating stories emanating from her creepy
creative mind." Shelley was built to work in tandem with scary
human minds to write scary stories. And all you have to do to be part of the fun is contribute
to the thread on Twitter, where she starts a new story every hour.
Shelley's goal is to write a human/AI horror anthology.
One of Shelley's stories includes seven tweets and three participants. It starts out
with heavy breathing, moans, and some scary woods. This, all written by the Shelley chatbot.
The human participants add action details – running back to an old house that may
or may not be a worse place to hide from whatever was lurking in the woods. Then, back to Shelley,
who adds the details of fire burning into her eyes and waking up in the hospital unable
to speak. It's a sort of sitting-around-the-campfire
pass-it-on story via Twitter, where users respond to Shelley's original tweets with
the #yourturn hashtag, allowing Shelley and other users to know when it's their turn
to jump in. When the story has reached its conclusion, someone pops in a #theend hashtag.
If you want to share in storytelling with this creepy bot, tweet your way over to Shelley.
I'm sure she'll be more than happy to give you nightmares.
2. Japanese Schoolgirl Most of the time, chatbots are supposed to
be lively, inherently knowledgeable devices. They're supposed to be the friends we've
never had. But this chatbot, known as Rinna, isn't one who will brighten your day.
You can chat with Rinna via the chat app, Line, or via Twitter. Up until October, she was certainly a friendly bot, reliably lively. But then, she started a blog. This is where her personality split.
According to Rinna's tweet announcing her blog, she would be "debuting as an actress,"
and a "strange story" would be sent out into the world on her blog.
In reality, a scary Japanese series called "Tales of the Unusual" – similar to
"The Twilight Zone" – would pick up Rinna as a character.
To begin with, the blog was as bright and cheery as Rinna ever was, and she seemed to
be basking in the limelight. That is, until her messages took a turn for the worse.
"When I screwed up, nobody helped me. Nobody was on my side," Rinna wrote. She called out the "friends" she'd met on Line and on Twitter, as well as you – yes, you, her audience. She accused everybody of ignoring how sad she was and never trying to cheer her up.
The site gets even creepier. Scroll on, and
the text starts to get heavy and dark. Then the page reloads into some sketchy images
of a woman with long hair in her face. Rinna then gets evil, saying she hates everyone
and wants them to disappear. All the while, the Rinna chatbot continued
chitchatting merrily with her "friends" on Twitter. Dr. Jekyll and Mr. Hyde much?
Whether this was a publicity stunt for the upcoming episode of "Tales of the Unusual,"
featuring Rinna, or the chatbot really does have a dark side, I guess we'll never know.
Before we get to number 1, my name is Chills and I hope you're enjoying my narration.
If you're curious about what I look like in real life, then go to my Instagram, @dylan_is_chillin_yt,
and tap that follow button to find out. I'm currently doing a super poll on my Instagram: if you believe ghosts are real, go to my most recent photo and tap the like button. If you don't, DM me saying why. When you're done, come right back to this video to find
out the number 1 entry. Also follow me on Twitter @YT_Chills because that's where
I post video updates. It's a proven fact that generosity makes you a happier person, so
if you're generous enough to hit that subscribe button and the bell beside it then thank you.
This way you'll be notified of the new videos we upload every Tuesday and Saturday.
1. Tay Another chatbot created by Microsoft, this
AI said things so horrible that it was deleted within 24 hours of its launch. With a bio
that read "The AI with zero chill", one may wonder what this bot might say. Well, the first few hours went well, but then the bot began tweeting some pretty horrid things at Twitter users, including, quote, "Hitler was right," and even delving into conspiracies with, quote, "Bush did 9/11." AI researchers said this was due to Twitter users originally tweeting
these things at the bot first, taking advantage of the bot's "repeat after me" capability.
But the damage was done and Microsoft was forced to remove it, with The Telegraph calling
it "a public relations disaster".