Showing 10 posts tagged ai

Google’s “deep learning” clusters of computers churn through massive chunks of data looking for patterns—and it seems they’ve gotten good at it. So good, in fact, that Google announced at the Machine Learning Conference in San Francisco that its deep learning clusters have learned to recognize objects on their own.

How Google’s “Deep Learning” Is Outsmarting Its Human Employees | Co.Labs

So you say “Why does water on the side of the glass move up?” and it can say “Well, that’s the cohesive forces of water and the glass” and it will explain Van der Waals forces or whatever it might be. But literally you’re able to have an AI answer any question you want no matter how stupid you think it might be. So you can spend your time with your fellow students and your faculty members in a way that builds empathy and builds connection and builds community, which is what you should be doing with other humans.

AI Will Deliver Education on Demand | In Their Own Words | Big Think

In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology.

Quote by Cambridge philosophy professor Huw Price, found in the phys.org article “Cambridge to study technology’s risk to humans” (via horizonwatching)

A new computer game 'bot' acts just like a real person

A computer “bot” that hunts down and kills opponents in a video game has been judged to display behaviour that is indistinguishable from a human.

The bot, called UT^2, claimed first prize in the annual BotPrize competition.

Bots are computer programs that control video game characters and play against real people.

UT^2 fooled other players and judges into believing it was human during a game, winning the prize.

» via BBC

Brain, Damaged: Army Says Its Software Mind Is 'Not Survivable'

It’s the backbone of the U.S. Army’s intelligence network in Afghanistan. And, according to the Army’s own internal testers, it’s a piece of junk: difficult to operate, prone to crashes, and extremely hackable.

The $2.3 billion Distributed Common Ground System-Army, or DCGS-A, is supposed to serve as the primary source for mining intelligence and surveillance data on the battlefield — everything from informants’ tips to drone camera footage to militants’ recorded phone calls. But after a limited test in May and June, the Army Test and Evaluation Command concluded that the system is “Effective with Significant Limitations, Not Suitable, and Not Survivable.”

» via Wired

Computer Watches Humans Play Connect Four, Then Beats Them

A computer scientist has published a paper detailing how a system can successfully win at board games after watching two minute-long videos of humans playing.

Using visual recognition software while processing video clips of people playing Connect 4, Gomoku, Pawns and Breakthrough — including games ending with wins, ties or those left unfinished — the system would recognise the board, the pieces and the different moves that lead to each outcome.

A unique formula then enabled the system to examine all viable moves when playing and, using data gathered from all possible outcomes, calculate the most appropriate move.

» via Wired
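The Wired piece never specifies the “unique formula”, but the textbook way to examine all viable moves and score each by the outcomes it can lead to is minimax search. A minimal sketch, with tic-tac-toe standing in for Connect Four purely to keep the code short (this is not the paper’s actual method):

```python
# Minimax: score every legal move by recursively playing out all
# possible continuations, assuming both sides play perfectly.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 win, 0 tie, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == '.']
    if not moves:
        return 0, None  # board full: tie
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, opponent)
        score = -score  # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# X has two in a row and wins by playing cell 2:
print(minimax("XX.OO....", 'X'))  # -> (1, 2)
```

Swapping in Connect Four would only change `winner()` and the move generator (pieces drop to the lowest empty row of a column); the search itself is game-agnostic, which echoes the article’s point that one formula covers several games.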

Essay-Grading Software, as Teacher’s Aide

The essay-scoring competition that just concluded offered a mere $60,000 as a first prize, but it drew 159 teams. At the same time, the Hewlett Foundation sponsored a study of automated essay-scoring engines now offered by commercial vendors. The researchers found that these produced scores effectively identical to those of human graders.

Barbara Chow, education program director at the Hewlett Foundation, says: “We had heard the claim that the machine algorithms are as good as human graders, but we wanted to create a neutral and fair platform to assess the various claims of the vendors. It turns out the claims are not hype.”

If the thought of an algorithm replacing a human causes queasiness, consider this: In states’ standardized tests, each essay is typically scored by two human graders; machine scoring replaces only one of the two. And humans are not necessarily ideal graders: they provide an average of only three minutes of attention per essay, Ms. Chow says.

» via The New York Times (Subscription may be required for some content)
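A claim like “effectively identical to human graders” comes down to an agreement statistic. Quadratic weighted kappa is a standard choice for ordinal essay scores (1.0 is perfect agreement, 0.0 is chance-level agreement); the sketch below is a generic implementation of that metric, not the study’s code:

```python
# Quadratic weighted kappa: agreement between two graders on an ordinal
# scale, penalising big disagreements more than near-misses.

def quadratic_weighted_kappa(human, machine, min_score, max_score):
    n = max_score - min_score + 1
    # Observed confusion matrix between the two sets of scores.
    observed = [[0] * n for _ in range(n)]
    for h, m in zip(human, machine):
        observed[h - min_score][m - min_score] += 1
    total = len(human)
    hist_h = [sum(row) for row in observed]
    hist_m = [sum(observed[i][j] for i in range(n)) for j in range(n)]
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2   # quadratic disagreement weight
            num += w * observed[i][j]
            den += w * hist_h[i] * hist_m[j] / total  # expected by chance
    return 1.0 - num / den

print(quadratic_weighted_kappa([1, 1, 2, 2], [1, 1, 2, 2], 1, 2))  # -> 1.0
```

Two graders who agree exactly score 1.0; two whose scores line up no better than chance score near 0.0, which gives the “effectively identical” claim a concrete yardstick.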

This Computer Program Is Smarter Than 96 Percent of Humans

Computers will never be able to subjugate the human race with their current level of intellect. Thankfully, a team of Swedish researchers has developed an AI with an IQ of 150.

The program was developed by the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg in Göteborg, Sweden. Its intelligence score is based on results from standard non-verbal test questions, which are designed to eliminate cultural and linguistic biases by testing reasoning rather than knowledge. Most of the test questions revolve around predicting the next shape in a progressive matrix test or guessing the next number in a sequence.

Since the speed at which the program solves questions isn’t a scoring factor—and some questions are designed to be unanswerable by either biological or electronic minds—the program is designed to supplement its pattern-recognition algorithms with human psychological traits. As Claes Strannegård, a researcher at the university, explains, “We’re trying to make programs that can discover the same types of patterns that humans see.”

» via Gizmodo
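The “guess the next number in a sequence” items can be illustrated with a toy hypothesis-testing loop: try progressively richer patterns and answer with the first one that fits. The Gothenburg program is far more general than this; the function below, with its three made-up hypothesis classes, only shows the flavor of pattern induction:

```python
# Try simple hypotheses in order (constant difference, constant ratio,
# constant second difference) and predict with the first one that fits.

def predict_next(seq):
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:                      # arithmetic: 2,4,6,8 -> 10
        return seq[-1] + diffs[0]
    if 0 not in seq:
        ratios = [b / a for a, b in zip(seq, seq[1:])]
        if len(set(ratios)) == 1:                 # geometric: 3,9,27 -> 81
            return seq[-1] * ratios[0]
    second = [b - a for a, b in zip(diffs, diffs[1:])]
    if len(set(second)) == 1:                     # quadratic: 1,4,9,16 -> 25
        return seq[-1] + diffs[-1] + second[0]
    return None                                   # no simple pattern found

print(predict_next([2, 4, 6, 8]))    # -> 10
print(predict_next([1, 4, 9, 16]))   # -> 25
```

A real test solver would search a much larger hypothesis space, and, as the article notes, would also need a way to give up gracefully on items designed to be unanswerable, which is what the `None` branch gestures at.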

Viewpoint: AI will change our relationship with tech

In 1984, Canadian movie director James Cameron imagined a world in which computers achieved self-awareness and set about systematically destroying humankind.

Skynet, the Terminator series computer network, was to go live in 2011 and bring the world to an end.

Of course, we have just survived 2011 without such a cataclysmic event. And the closest we got to computers achieving self-awareness was Apple’s Siri.

Siri doesn’t promise self-awareness per se, but it does promise to listen and to learn - and hopefully not to systematically destroy us.

It seems likely that in 2012 a computer will pass the Turing Test - which might get us closer to a digital machine with true artificial intelligence (AI). The irony is that most of us will not care.

» via BBC

AI aims to solve in-game chatter

"Chatbot" technology is being used in an attempt to solve one of "the last uncracked problems" in games design.

221b, released in the run-up to the new Sherlock Holmes movie, harnesses the software to allow conversations between players and in-game characters.

Gamers, who assume the character of either Sherlock Holmes or Dr Watson, must interrogate virtual witnesses and suspects to progress in the game.

Success depends upon getting the right answers from these characters.

"It’s our role to predict what you might know at that point in the game and the questions you might ask," said Rollo Carpenter of Existor, which provided the technology.

"The ways that you might say things to them are almost unlimited."

» via BBC News
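221b’s actual scripts aren’t public here, and Existor’s engine is far more sophisticated than a lookup table, but the idea Carpenter describes can be sketched as state-gated question matching: only offer answers the player could plausibly be asking about at that point in the game. The character, keywords, and clue names below are all invented for illustration:

```python
# Toy sketch: each canned answer is gated on a clue the player must
# already hold, approximating "predict what you might know at that
# point in the game and the questions you might ask".

RESPONSES = {
    # (required_clue, keyword): answer
    (None, "name"): "I am Mrs. Hudson, the landlady.",
    ("saw_letter", "letter"): "Yes, a letter arrived for Mr. Holmes this morning.",
}

def answer(question, known_clues):
    q = question.lower()
    for (clue, keyword), reply in RESPONSES.items():
        if keyword in q and (clue is None or clue in known_clues):
            return reply
    return "I'm afraid I don't follow you."

print(answer("Tell me about the letter.", set()))            # clue not yet known
print(answer("Tell me about the letter.", {"saw_letter"}))
```

The same question gets a deflection before the player has found the letter and a real answer afterwards, which is the “predict what you might know” gating in miniature; the hard, uncracked part is covering the “almost unlimited” ways a player can phrase things, which this keyword match does not attempt.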