Showing 15 posts tagged ai

Beginning in July, Mr. Ferrara said, these articles will be written using software from a company called Automated Insights, which The Associated Press has invested in, paired with data from Zacks Investment Research. On its website, Automated Insights says it can produce copy “written with the tone, personality and variability of a human writer.”

The A.P. Plans to Automate Quarterly Earnings Articles

The algorithm is very good at using what you’ve just bought to recommend things that you’ll want to buy, he observed, but it can be hard to tell why. Perhaps you’ll be attracted to the content of the recommendation, or perhaps it’s the fact that the cover is also green, or that the print is in Helvetica font. In contrast, a skilled librarian is usually going to recommend a book solely because of its intellectual value, without any lurking, contentless variables. The librarian is therefore likelier to send a person in a direction they wouldn’t otherwise have gone in a way that will advance their thinking, education, or aesthetic taste, because they’re not just meeting needs that have already been expressed.

Would You Rather Get Tips from an Expert or an Algorithm? - Conor Friedersdorf - The Atlantic
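The purchase co-occurrence logic the excerpt contrasts with the librarian can be sketched as a toy item-based recommender. Everything below (item names, purchase data) is invented for illustration; real recommenders are far more elaborate, but they share this core: suggest whatever is most often bought alongside what you just bought, with no idea why.

```python
from collections import defaultdict

# Toy purchase history: user -> set of items bought (illustrative data).
purchases = {
    "alice": {"green_book", "helvetica_novel", "poetry"},
    "bob":   {"green_book", "helvetica_novel"},
    "carol": {"green_book", "cookbook"},
}

def co_occurrence(purchases):
    """Count how often each ordered pair of items appears in one user's basket."""
    counts = defaultdict(int)
    for items in purchases.values():
        for a in items:
            for b in items:
                if a != b:
                    counts[(a, b)] += 1
    return counts

def recommend(item, purchases, k=2):
    """Recommend the k items most often bought alongside `item`.

    This is where the 'lurking, contentless variables' hide: if green
    covers happen to co-occur, they get recommended, and the counts
    never say whether content or cover color drove the purchase.
    """
    counts = co_occurrence(purchases)
    scored = [(b, n) for (a, b), n in counts.items() if a == item]
    scored.sort(key=lambda t: (-t[1], t[0]))  # most co-purchased first
    return [b for b, _ in scored[:k]]

print(recommend("green_book", purchases))  # → ['helvetica_novel', 'cookbook']
```

The recommender only ever ratifies expressed preferences, which is exactly the limitation the quoted passage attributes to it.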

Scientists long believed humans could distinguish six basic emotions: happiness, sadness, fear, anger, surprise, and disgust. But earlier this year, researchers at Ohio State University found that humans are capable of reliably recognizing more than 20 facial expressions and corresponding emotional states—including a vast array of compound emotions like “happy surprise” or “angry fear.” Recognizing tone of voice and identifying facial expressions are tasks in the realm of perception where, traditionally, humans perform better than computers. Or, rather, this used to be the case. As facial recognition software improves, computers are getting the edge. When a facial recognition program attempted the Ohio State study’s task, it achieved 96.9 percent accuracy in identifying the six basic emotions and 76.9 percent for the compound emotions. Computers are now adept at figuring out how we feel.

Computers Are Getting Better Than Humans Are at Facial Recognition - Norberto Andrade - The Atlantic
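The study’s notion that compound emotions combine basic ones can be illustrated with a toy nearest-prototype classifier. The feature vectors below are invented stand-ins for facial “action unit” intensities, not numbers from the Ohio State data, and real expression recognition uses far richer features.

```python
import math

# Hypothetical facial feature intensities for basic emotions
# (illustrative numbers only).
BASIC = {
    "happiness": [0.9, 0.1, 0.0],
    "surprise":  [0.1, 0.9, 0.1],
    "anger":     [0.1, 0.2, 0.9],
    "fear":      [0.3, 0.6, 0.7],
}

def blend(a, b):
    """Model a compound emotion as the average of two basic prototypes."""
    return [(x + y) / 2 for x, y in zip(BASIC[a], BASIC[b])]

PROTOTYPES = dict(BASIC)
PROTOTYPES["happy surprise"] = blend("happiness", "surprise")
PROTOTYPES["angry fear"] = blend("anger", "fear")

def classify(features):
    """Return the label whose prototype is closest (Euclidean) to the input."""
    return min(PROTOTYPES, key=lambda p: math.dist(features, PROTOTYPES[p]))

print(classify([0.5, 0.5, 0.05]))  # → happy surprise
```

Because compound prototypes sit between their parents, a face blending two basic expressions lands nearer the compound label than either parent — a crude version of what lets both humans and software tell “happy surprise” apart from plain surprise.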

IBM’s Watson supercomputer has already mastered Jeopardy! and can even whip up an innovative recipe. Next step: it’ll be elected to the presidency after dominating against humans in a series of debates. The computer’s new Debater function is what it sounds like: after being given a topic, Watson will mine millions of Wikipedia articles until it determines the pros and cons of a controversial topic, and will then enumerate the merits of both sides. Argument over. Move along. Or, maybe not. … Watson searches Wikis for the pros and cons of banning the sale of violent videogames to minors. After less than a minute, the computer churns out a few points, but they’re conflicting: Watson suggests violent videogames both cause violent acts and that there is not a causal link between violent games and real violence. Which, in fact, is about right. Different studies have come to wildly different conclusions about the correlation between violence in games and violent acts. That’s why Watson doesn’t yet make value decisions about which side of a debate is “correct,” but only lists the points generally brought up by both sides. So you’ll still have to make up your own mind about what’s right. (Ugh, I know. Sorry.) But if nothing else, contrarianism just got a lot easier.

IBM’s Watson Can Now Argue For You | Popular Science (via myserendipities)
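The list-both-sides behavior the excerpt describes — surface pro and con claims without judging which is correct — can be mimicked with a toy cue-phrase sorter. The cue phrases and corpus below are invented for illustration; Watson’s actual pipeline is not public and is vastly more sophisticated.

```python
# Invented stance cues for a toy argument miner.
PRO_CUES = ("causes", "leads to", "increases")
CON_CUES = ("no causal link", "does not cause", "no evidence")

def mine_arguments(topic, sentences):
    """Bucket sentences mentioning `topic` into pro and con lists.

    Like the Watson demo described above, this only lists both sides;
    it makes no value decision about which side is correct.
    """
    result = {"pro": [], "con": []}
    for s in sentences:
        low = s.lower()
        if topic not in low:
            continue  # off-topic sentences are ignored
        if any(cue in low for cue in CON_CUES):   # check negated cues first
            result["con"].append(s)
        elif any(cue in low for cue in PRO_CUES):
            result["pro"].append(s)
    return result

corpus = [
    "Some studies report that playing violent games causes aggression.",
    "A meta-analysis found no causal link between violent games and real violence.",
    "Retail sales of games rose sharply last year.",
]
print(mine_arguments("violent games", corpus))
```

Note the ordering quirk that even this toy exposes: “does not cause” contains “causes”-adjacent wording, so negated cues must be matched before positive ones — a small taste of why stance detection is hard.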


For this essay, Mr. Perelman has entered only one keyword: “privacy.” With the click of a button, the program produced a string of bloated sentences that, though grammatically correct and structurally sound, have no coherent meaning. Not to humans, anyway. But Mr. Perelman is not trying to impress humans. He is trying to fool machines.

Writing Instructor, Skeptical of Automated Grading, Pits Machine vs. Machine - Technology - The Chronicle of Higher Education
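A toy version of such a generator — grammatically correct, structurally sound, meaningless — might look like the sketch below. The templates are invented here and this is not Mr. Perelman’s actual tool; it only illustrates how little machinery it takes to produce prose that scores well on surface features.

```python
import random

# Invented sentence templates: fluent-sounding, structurally sound,
# and devoid of coherent meaning, in the spirit of the essay above.
TEMPLATES = [
    "The quandary of {kw} has perpetually been the catalyst of discourse.",
    "Insofar as {kw} remains nebulous, its ramifications proliferate.",
    "Scholars invariably corroborate the profound salience of {kw}.",
]

def generate_essay(keyword, sentences=3, seed=0):
    """Emit bloated, fluent-sounding nonsense built around one keyword."""
    rng = random.Random(seed)  # seeded so output is reproducible
    return " ".join(rng.choice(TEMPLATES).format(kw=keyword)
                    for _ in range(sentences))

print(generate_essay("privacy"))
```

An automated grader rewarding sentence length, vocabulary, and structure has nothing in this output to penalize — which is precisely the machine-vs.-machine point of the experiment.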

Google’s “deep learning” clusters of computers churn through massive chunks of data looking for patterns—and it seems they’ve gotten good at it. So good, in fact, that Google announced at the Machine Learning Conference in San Francisco that its deep learning clusters have learned to recognize objects on their own.

How Google’s “Deep Learning” Is Outsmarting Its Human Employees ⚙ Co.Labs ⚙ code community

So you say “Why does water on the side of the glass move up?” and it can say “Well, that’s the cohesive forces of water and the glass” and it will explain Van der Waals forces or whatever it might be. But literally you’re able to have an AI answer any question you want no matter how stupid you think it might be. So you can spend your time with your fellow students and your faculty members in a way that builds empathy and builds connection and builds community, which is what you should be doing with other humans.

AI Will Deliver Education on Demand | In Their Own Words | Big Think

In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology.

Quote by Cambridge philosophy professor Huw Price. Found in the article “Cambridge to study technology’s risk to humans” (via horizonwatching)


A new computer game 'bot' acts just like a real person

A computer “bot” that hunts down and kills opponents in a video game has been judged to display behaviour indistinguishable from that of a human.

The bot, called UT^2, claimed first prize in the annual BotPrize competition.

Bots are computer programs that control video game characters and play against real people.

UT^2 fooled other players and judges into believing it was human during a game, winning it the prize.

» via BBC

Brain, Damaged: Army Says Its Software Mind Is 'Not Survivable'

It’s the backbone of the U.S. Army’s intelligence network in Afghanistan. And, according to the Army’s own internal testers, it’s a piece of junk: difficult to operate, prone to crashes, and extremely hackable.

The $2.3 billion Distributed Common Ground System-Army, or DCGS-A, is supposed to serve as the primary source for mining intelligence and surveillance data on the battlefield — everything from informants’ tips to drone camera footage to militants’ recorded phone calls. But after a limited test in May and June, the Army Test and Evaluation Command concluded that the system is “Effective with Significant Limitations, Not Suitable, and Not Survivable.”

» via Wired