
Monday, 1 April 2019

"Many Photos" - Robots given a sense of humour could KILL because they think think it's funny

Robots designed to have a sense of humour may struggle to understand when and why things are funny - and this could even lead them to kill, one expert warns.


The inability of artificially intelligent machines to grasp context, timing and tact could have disastrous consequences beyond an ill-timed joke, they say.


That could lead to a situation where an automaton's software deems killing someone a funny thing to do, it's claimed.






This image shows the humanoid robot 'Alter' on display at the National Museum of Emerging Science and Innovation in Tokyo. Understanding humour may be one of the last things that separates humans from ever smarter machines, computer scientists and linguists say





Humour is a complex concept which requires vast amounts of context, something experts say is difficult to build into robots.   


Tristan Miller, a computer scientist and linguist at Darmstadt University of Technology in Germany, said: 'Creative language - and humour in particular - is one of the hardest areas for computational intelligence to grasp. 


'It's because it relies so much on real-world knowledge - background knowledge and commonsense knowledge. 


'A computer doesn't have these real-world experiences to draw on. It only knows what you tell it and what it draws from.'

There are good reasons behind giving artificial intelligence the ability to understand humour, Darmstadt University's Dr Miller said.


It makes machines more relatable, especially if you can get them to understand sarcasm, he says. 


It may also aid with automated translations of different languages.


But some experts remain unconvinced about robots being able to understand humour.  


'Artificial intelligence will never get jokes like humans do,' said Kiki Hempelmann, a computational linguist who studies humour at Texas A&M University-Commerce. 


'In themselves, they have no need for humour. They completely miss context.


'Teaching AI systems humour is dangerous because they may find it where it isn't and they may use it where it's inappropriate.


'Maybe bad AI will start killing people because it thinks it is funny.'




Dr Noam Slonim, principal investigator, stands with the IBM Project Debater before a debate between the computer and two humans in San Francisco. Slonim put humour into the programming but in tests it gave a humorous remark at an inappropriate time





Allison Bishop, a Columbia University computer scientist who also performs stand-up comedy, said computer learning looks for patterns, but comedy thrives on things hovering close to a pattern and veering off just a bit to be funny and edgy.


Humour, she said, 'has to skate the edge of being cohesive enough and surprising enough.'


For comedians that's job security. Dr Bishop said her parents were happy when her brother became a full-time comedy writer because it meant he wouldn't be replaced by a machine.


"I like to believe that there is something very innately human about what makes something funny,' Dr Bishop said.


Oregon State University computer scientist Heather Knight created the comedy-performing robot Ginger to help her design machines that better interact with - and especially respond to - humans. She said it turns out people most appreciate a robot's self-effacing humour.


Ginger, which uses human-written jokes and stories, does a bit about Shakespeare and machines, asking, 'If you prick me in my battery pack, do I not bleed alkaline fluid?' in a reference to 'The Merchant of Venice.'


Humour and artificial intelligence is a growing field for academics.


Some computers can generate and understand puns without help from humans.


This, computer scientists claim, is because puns are based on different meanings of similar-sounding words. 
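The similar-sounding-words idea can be sketched in a few lines. This is a toy illustration, not any research group's actual system: the homophone table below is a tiny hand-written example standing in for a real pronunciation lexicon, and the function simply flags words that have a known sound-alike.

```python
# Toy pun-candidate detector (illustrative only): puns often pivot on a word
# that sounds like another word, so a naive approach flags any word with a
# known homophone. The table is a small hand-written sample, not a real
# pronunciation dictionary.

HOMOPHONES = {
    "flour": "flower",
    "knight": "night",
    "bare": "bear",
    "pane": "pain",
}

def pun_candidates(sentence):
    """Return (word, sound-alike) pairs that could anchor a pun."""
    words = [w.strip(".,!?'\"").lower() for w in sentence.split()]
    return [(w, HOMOPHONES[w]) for w in words if w in HOMOPHONES]

print(pun_candidates("The baker kneaded flour, not flowers."))
# prints [('flour', 'flower')]
```

As the article notes, this is exactly the narrow part machines can do: spotting the sound-alike is mechanical, while judging whether the alternate meaning actually makes the sentence funny requires the background knowledge the experts say is missing.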


Machines struggle beyond this narrow scope, however, said Purdue University computer scientist Julia Rayz.


'They get them - sort of,' Dr Rayz said. 'Even if we look at puns, most of the puns require huge amounts of background.'


Still, with puns there is something mathematical that computers can grasp, Dr Bishop said. 

Dr Rayz has spent 15 years trying to get computers to understand humour but says the results often leave a lot to be desired.  


She recalled a time she gave the computer two different groups of sentences. Some were jokes. Some were not. 


The computer classified something as a joke that people thought wasn't a joke. 


When Dr Rayz asked the computer why it thought it was a joke, its answer made sense technically.


But the material still wasn't funny, nor memorable, she said.
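An experiment like the one Dr Rayz describes can be sketched as a simple supervised text classifier. The tiny training set and naive Bayes model below are invented for illustration and bear no relation to her actual corpus or method; they only show why such a system can give a technically sensible answer that still misses what makes a sentence funny.

```python
from collections import Counter
import math

# Toy joke-vs-not classifier (invented data, not Dr Rayz's corpus): train a
# naive Bayes model on word counts, then classify a new sentence.

TRAIN = [
    ("why did the chicken cross the road", "joke"),
    ("i told my computer a joke but it crashed", "joke"),
    ("the meeting starts at nine tomorrow", "not"),
    ("please submit the report by friday", "not"),
]

def train(examples):
    counts = {"joke": Counter(), "not": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["joke"]) | set(counts["not"])
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out the probability
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

counts, totals = train(TRAIN)
print(classify("why did the robot cross the road", counts, totals))
# prints 'joke'
```

The model labels the sentence 'joke' purely because its words overlap with the joke examples - a statistically sensible answer, which is exactly the gap the researchers describe: surface patterns without any sense of whether the result is funny.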


IBM has created artificial intelligence that beat opponents in chess and 'Jeopardy!' Its latest attempt, Project Debater, is more difficult because it is based on language and aims to win structured arguments with people, said principal investigator Noam Slonim, a former comedy writer for an Israeli version of 'Saturday Night Live'.


Mr Slonim put humour into the programming, figuring that an occasional one-liner could help in a debate. But it backfired during initial tests when the system made jokes at the wrong time or in the wrong way. 


Now, Project Debater is limited to one attempt at humour per debate, and that humour is often self-effacing.


'We know that humour - at least good humour - relies on nuance and on timing,' Mr Slonim said. 'And these are very hard to decipher by an automatic system.'


That's why humour may be key in future Turing Tests - the ultimate test of machine intelligence, in which an independent evaluator tries to tell whether it is interacting with a person or a computer, Mr Slonim said.


There's still 'a very significant gap between what machines can do and what humans are doing,' both in language and humour, Mr Slonim said.



WHY ARE PEOPLE SO WORRIED ABOUT AI?



It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.


SpaceX and Tesla CEO Elon Musk described AI as our 'biggest existential threat' and likened its development to 'summoning the demon'.


He believes super intelligent machines could use humans as pets.


Professor Stephen Hawking said it is a 'near certainty' that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.


They could steal jobs 


More than 60 percent of people fear that robots will lead to there being fewer jobs in the next ten years, according to a 2016 YouGov survey.


And 27 percent predict that it will decrease the number of jobs 'a lot' with previous research suggesting admin and service sector workers will be the hardest hit.


As well as posing a threat to our jobs, other experts believe AI could 'go rogue' and become too complex for scientists to understand.


A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 percent predicting this will happen within the next decade. 


They could 'go rogue' 


Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don't fully understand how they work.


If experts don't understand how AI algorithms function, they won't be able to predict when they fail.


This means driverless cars or intelligent robots could make unpredictable 'out of character' decisions during critical moments, which could put people in danger.


For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.


They could wipe out humanity 


Some people believe AI will wipe out humans completely.


'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind's Shane Legg said in a recent interview.


He singled out artificial intelligence, or AI, as the 'number one risk for this century'.


Musk warned that AI poses more of a threat to humanity than North Korea.


'If you're not concerned about AI safety, you should be. Vastly more risk than North Korea,' the 46-year-old wrote on Twitter.


'Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.'


Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.


He has argued that controls are necessary in order to stop machines from advancing beyond human control.




