© Copyright, 2015 (PHNO)
 http://newsflash.org

PHNO SCIENCE & INFOTECH NEWS
(Mini Reads followed by Full Reports)

CONTINUING WEB ACCESSIBILITY AWARENESS: This page is dedicated to my accessibility student Mrs. Olympia Monfero of Thorncliffe Park, Toronto, Ontario, Canada. For the past ten years she has been my ace and steadfast Adaptive/Assistive Program student. Almost blind, she uses two special programs, ZoomText and Kurzweil 1000. Now at 93, believe me, her memory is still a lot better than mine. She is now almost totally blind, sees mostly shadows, and knows who I am through my familiar voice. Here she is, the unforgettable and my most inspirational student, 'Mommy Monfero':


TEN YEARS AGO -Mommy Monfero, 83 years old, diabetic, totally blind in one eye, with only 20% vision left in the other eye (in photo at left or top), checks a large-print hard copy of an enlarged Word document she printed. She quickly learned to use MS Word 2003 to type her 'personal journal,' mostly about her life with her late husband during the Japanese war in the Philippines. An accountant by profession in the Philippines, she learned Excel and uses it for her Club's financial tasks. She loves expressing her artistic ideas, patiently and meticulously designing her Seniors Club Christmas Party programmes in Microsoft Publisher, with my help readjusting the sizes and fonts of her own choice and design. She uses ZoomText to read her emails, and Kurzweil 1000 scans and reads her postal mail, newsletters, church bulletin, Sunday homilies and prayer books. She works entirely from keyboard shortcuts, typing them all from memory and depending completely on what she hears from the ZoomText voice technology. She now sees only shadows on the enlarged screen. Today, she still has a fantastic memory and marked powers of concentration. She enjoys opening the photo albums and videos her grandchildren upload to her PC, and listens to videos from her family, especially her grandchildren, on YouTube. Today, at 93, she has upgraded her desktop PC to Windows 7 and is attempting to navigate her timeline on Facebook, where her children, grandchildren and great-grandchildren have already joined her on social media. But she still needs me beside her when she is on her Facebook timeline. I still have to describe to her the photos of her new grandchild as she listens to comments and posts from her daughters and grandchildren via the voice. This 'artificial intelligence' app is what we have been waiting for, so she can be an independent Facebook user. Thank you, Matt King; thank you, Facebook; and thank you, Wired! Lee Quesada, PHNO webmaster. PHOTO FROM YEAR 2005, FROM 'LEEQUESADA' PERSONAL HOMEPAGE.

--------------------------------------------------------------------------------

MORE DEEP LEARNING WITH Google+

INSIDE THE ARTIFICIAL BRAIN THAT’S REMAKING THE GOOGLE EMPIRE [“Google is not really a search company. It’s a machine-learning company.” Since its birth in the company’s secretive X Labs three years ago, the Google Brain has flourished inside the company, giving its army of software engineers a way to apply cutting-edge machine-learning algorithms to a growing array of problems. And in many ways, it seems likely to give Google an edge as it expands into new territory over the next decade, much in the way that its search algorithms and data center expertise helped build its massively successful advertising business during the last ten years.]


GETTY IMAGE -IT WAS ONE of the most tedious jobs on the internet. A team of Googlers would spend day after day staring at computer screens, scrutinizing tiny snippets of street photographs, asking themselves the same question over and over again: “Am I looking at an address or not?” Click. Yes. Click. Yes. Click. No. This was a critical part of building the company’s Google Maps service. Knowing the precise address of a building is really helpful information for mapmakers. But that didn’t make life any easier for those poor Googlers who had to figure out whether a string of numbers captured by Google’s roving Street View cars was a phone number, a graffiti tag, or a legitimate address. Then, a few months ago, they were relieved of their agony, after some Google engineers trained the company’s machines to handle this thankless task. Traditionally, computers have muffed this advanced kind of image recognition, and Google finally cracked the problem with its new artificial intelligence system, known as Google Brain. With Brain, Google can now transcribe all of the addresses that Street View has captured in France in less than an hour. “Google is not really a search company. It’s a machine-learning company.” Since its birth in the company’s secretive X Labs three years ago, the Google Brain has flourished inside the company, giving its army of software engineers a way to apply cutting-edge machine-learning algorithms to a growing array of problems. And in many ways, it seems likely to give Google an edge as it expands into new territory over the next decade, much in the way that its search algorithms and data center expertise helped build its massively successful advertising business during the last ten years. “Google is not really a search company. It’s a machine-learning company,” says Matthew Zeiler, the CEO of visual search startup Clarifai, who worked on Google Brain during a pair of internships. He says that all of Google’s most-important projects—autonomous cars, advertising, Google Maps—stand to gain from this type of research. “Everything in the company is really driven by machine learning.” In addition to the Google Maps work, there’s Android’s voice recognition software and Google+’s image search. But that’s just the beginning, according to Jeff Dean, one of the primary thinkers behind the Brain project. He believes the Brain will help with the company’s search algorithms and boost Google Translate. “We now have probably 30 or 40 different teams at Google using our infrastructure,” says Dean. “Some in production ways, some are exploring it and comparing it to their existing systems, and generally getting pretty good results for a pretty broad set of problems.” The project is part of a much larger shift towards a new form of artificial intelligence called “deep learning.” Facebook is exploring similar work, and so are Microsoft, IBM, and others. But it seems that Google has pushed this technology further—at least for the moment. CONTINUE READING....

ALSO INTRODUCING 'CLARIFAI': The AI Startup Google Should Probably Snatch Up Fast


CLARIFAI -First, Google ACQUIRED A STARTUP called DNNresearch (Read next story below), snapping up some of the world’s foremost experts in a burgeoning field of artificial intelligence known as deep learning. Then it shelled out $400 million for a secretive deep learning startup called DeepMind (Read below). Much like Facebook, Microsoft, and others, Google sees deep learning as the future of AI on the web, a better way of handling everything from voice and image recognition to language translation. But there’s one notable deep learning company that Google hasn’t yet bought. It’s called Clarifai, and it may remain an independent operation. Clarifai, you see, wants to open up some of the deep learning secrets used by Google, Facebook, and other companies and share them with the rest of the world. Clarifai specializes in using deep learning algorithms for visual search. In short, it’s building software that will help you find photos—whether they’re on your mobile phone, a dating website, or on a corporate network—and it will sell this software to all sorts of other companies that want to roll it into their own online services. “We’re interested in making search through images simple,” says founder Matthew Zeiler, a 27-year-old researcher, fresh out of New York University’s computer science PhD program. Last year, along with NYU Professor Rob Fergus, Zeiler won a key image recognition test in a closely watched artificial intelligence competition called ImageNet. Fergus was soon snatched up by Facebook, and the big tech companies wanted to hire Zeiler too. But he had other plans. ‘It’s really defining your search in a visual way—not just in a text way.’ Over the past few years, there’s been a major-league talent grab going on in the world of deep learning, which relies on computer models that simulate the way information is processed by the human brain. In addition to Fergus, Facebook hired another well-known academic named Yann LeCun. Baidu picked up Stanford’s Andrew Ng, and Apple is building out a team too. The technology has already improved Android’s voice recognition and helped Microsoft create a futuristic live voice translation system called Skype Translate. But Zeiler thinks that many others could benefit from deep learning. The trouble is, unless you have the money to hire your own deep learning experts, it can be hard to get the technology just right. The really difficult part is building learning models—essentially algorithms for processing all of the visual data—that work quickly across many different types of images. “To train these models is more of an art than a science,” says Zeiler. “It takes a lot of years of experience.” That’s where Clarifai comes in. Zeiler has spent the past five years working with two of the biggest names in the field on this kind of learning model: Geoff Hinton—now at Google—and Facebook’s Yann LeCun. The idea is that you can upload an image to the Clarifai software, and it will figure out what’s in your picture and offer you more of the same. “It’s really defining your search in a visual way—not just in a text way,” says Zeiler. CONTINUE READING...

ALSO More on DeepMind: AI Startup to Work Directly With Google’s Search Team
[Google has been buying a lot of crazy stuff lately. At least eight robot companies, including humanoid robot-maker Boston Dynamics. Nest, the smart-home company that designs thermostats and smoke detectors. And now DeepMind, an artificial intelligence startup.]


Boston Dynamics’ four-legged robot named WildCat can gallop at high speeds. Credit: Boston Dynamics

Taken together, the deals might all seem to add up to Skynet. But sources said DeepMind is actually being inserted into Google’s oldest team: Search. Or, as search is known at Google today, the “Knowledge” group — so-called because it no longer just finds keywords on Web pages, but instead connects larger concepts. Knowledge is led by Google SVP Alan Eustace, but DeepMind will work closely with a team led by Jeff Dean, a near-15-year Google veteran best known for his work on distributed systems. By contrast, the pack of acquired robots will report to former Android boss Andy Rubin, and Nest will continue to be managed by Tony Fadell (who is to report directly to Google CEO Larry Page). “Manhattan Project for AI”: It may be hard to believe given the $400 million price tag (perhaps more, with earn-outs), but DeepMind is seen by Google as primarily a talent acquisition. London-based DeepMind had not yet released any products, but sources said it was working on at least three: “A game with very advanced game AI, a smarter recommendation system for online commerce and something to do with images,” is how one source described it. DeepMind employs a team of at least 50 people and has secured more than $50 million in funding, and it competed for talent with companies like Google, Facebook, Baidu, IBM, Microsoft and Qualcomm. As far as AI goes, it was perhaps the only startup name that could be included on that list. “If anyone builds something remotely resembling artificial general intelligence, this will be the team,” one early investor in DeepMind told Re/code today. “Think Manhattan Project for AI.” DeepMind’s technology does become part of Google through the acquisition. There are traces of it around the Web, including three U.S. patent applications around reverse and composite image search and a paper about how a set of algorithms can learn to play and beat expert human players of the Atari games Breakout, Enduro and Pong. Facebook had also been interested in the team at DeepMind, as The Information reported, though a source described talks late last year with Facebook CEO Mark Zuckerberg as aimed more at scooping up some of the deep learning researchers than buying the full company. Amir Efrati at The Information anticipated the Google acquisition in a paywalled article in December, which laid out the “arms race” in deep learning between tech giants that are increasingly eager to hire researchers in the small field as part of general efforts to make their products smarter. Efrati also reported that the DeepMind Atari demonstration had impressed conference attendees in December. Going Deep: Deep learning is a form of machine learning in which researchers attempt to train computer algorithms to spot meaningful patterns by showing them lots of data, rather than trying to program in every rule about the world. CONTINUE READING...

ALSO: Google Hires Brains that Helped Supercharge Machine Learning


Geoffrey Hinton (right), Alex Krizhevsky, and Ilya Sutskever (left) will do machine learning work at Google. Photo: U of T
GOOGLE HAS HIRED the man who showed how to make computers learn much like the human brain. His name is Geoffrey Hinton, and on Tuesday, Google said that it had hired him along with two of his University of Toronto graduate students — Alex Krizhevsky and Ilya Sutskever. Their job: to help Google make sense of the growing mountains of data it is indexing and to improve products that already use machine learning — products such as Android voice search. Google paid an undisclosed sum to buy Hinton’s company, DNNresearch. It’s a bit of a best-of-both-worlds deal for the researcher. He gets to stay in Toronto, splitting his time between Google and his teaching duties at the University of Toronto, while Krizhevsky and Sutskever fly south to work at Google’s Mountain View, California campus. Back in the 1980s, Hinton kicked off research into neural networks, a field of machine learning where programmers can build machine learning models that help them to sift through vast quantities of data and put together patterns, much like the human brain. Once a hot research topic, neural networks had apparently failed to live up to their initial promises until around 2006, when Hinton and his researchers — spurred on by some new kick-ass microprocessors — developed new “deep learning” techniques that fine-tuned the tricky and time consuming process of building neural network models for computer analysis. “Deep learning, pioneered by Hinton, has revolutionized language understanding and language translation,” said Ed Lazowska, a computer science professor at the University of Washington. In an email interview, he said that a pretty spectacular December 2012 live demonstration of instant English-to-Chinese voice recognition and translation by Microsoft Research chief Rick Rashid was “one of many things made possible by Hinton’s work.” “Hinton has been working on neural networks for decades, and is one of the most brilliant minds of the field,” said Andrew Ng, the Stanford University professor who set up Google’s neural network team in 2011. Ng invited Hinton to Google last summer, where the Toronto academic spent a few months as a visiting professor. “I’m thrilled that he’ll be continuing this work there, and am sure he’ll help drive forward deep learning research at Google,” Ng said via email. CONTINUE READING AND WATCH VIDEO DEMO...


READ FULL MEDIA REPORTS:

INSIDE THE ARTIFICIAL BRAIN THAT’S REMAKING THE GOOGLE EMPIRE


GETTY IMAGE

WIRED.COM, NOVEMBER 9, 2015 (WIRED MAGAZINE), BY ROBERT MCMILLAN. DATE OF PUBLICATION: 07.16.14, 6:30 AM. -IT WAS ONE of the most tedious jobs on the internet.

A team of Googlers would spend day after day staring at computer screens, scrutinizing tiny snippets of street photographs, asking themselves the same question over and over again: “Am I looking at an address or not?” Click. Yes. Click. Yes. Click. No.

This was a critical part of building the company’s Google Maps service. Knowing the precise address of a building is really helpful information for mapmakers. But that didn’t make life any easier for those poor Googlers who had to figure out whether a string of numbers captured by Google’s roving Street View cars was a phone number, a graffiti tag, or a legitimate address.

Then, a few months ago, they were relieved of their agony, after some Google engineers trained the company’s machines to handle this thankless task. Traditionally, computers have muffed this advanced kind of image recognition, and Google finally cracked the problem with its new artificial intelligence system, known as Google Brain. With Brain, Google can now transcribe all of the addresses that Street View has captured in France in less than an hour.

“Google is not really a search company. It’s a machine-learning company.”

Since its birth in the company’s secretive X Labs three years ago, the Google Brain has flourished inside the company, giving its army of software engineers a way to apply cutting-edge machine-learning algorithms to a growing array of problems. And in many ways, it seems likely to give Google an edge as it expands into new territory over the next decade, much in the way that its search algorithms and data center expertise helped build its massively successful advertising business during the last ten years.

“Google is not really a search company. It’s a machine-learning company,” says Matthew Zeiler, the CEO of visual search startup Clarifai, who worked on Google Brain during a pair of internships. He says that all of Google’s most-important projects—autonomous cars, advertising, Google Maps—stand to gain from this type of research. “Everything in the company is really driven by machine learning.”


Google's Jeff Dean. Photo: Ariel Zambelich/WIRED

In addition to the Google Maps work, there’s Android’s voice recognition software and Google+’s image search. But that’s just the beginning, according to Jeff Dean, one of the primary thinkers behind the Brain project. He believes the Brain will help with the company’s search algorithms and boost Google Translate.

“We now have probably 30 or 40 different teams at Google using our infrastructure,” says Dean. “Some in production ways, some are exploring it and comparing it to their existing systems, and generally getting pretty good results for a pretty broad set of problems.”

The project is part of a much larger shift towards a new form of artificial intelligence called “deep learning.” Facebook is exploring similar work, and so are Microsoft, IBM, and others. But it seems that Google has pushed this technology further—at least for the moment.

CONTINUE READING HERE...

AI as a Service

Google Brain—an internal codename, not anything official—started back in 2011, when Stanford’s Andrew Ng joined Google X, the company’s “moonshot” laboratory group, to experiment with deep learning. About a year later, Google had reduced Android’s voice recognition error rate by an astounding 25 percent. Soon the company began snatching up every deep learning expert it could find.

Last year, Google hired Geoff Hinton, one of the world’s foremost deep-learning experts. And then in January, the company shelled out $400 million for DeepMind, a secretive deep learning company.

With deep learning, computer scientists build software models that simulate—to a certain extent—the learning model of the human brain. These models can then be trained on a mountain of new data, tweaked and eventually applied to brand new types of jobs.

An image recognition model built for Google Image Search, for example, might also help out the Google Maps team. A text analysis model might help Google’s search engine, but it might be useful for Google+ too.
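To make the reuse idea concrete, here is a minimal sketch in Python (using only NumPy). Everything in it is invented for illustration and is not Google's actual code: a tiny network is trained on one toy task, and the hidden layer it learns is then reused, unchanged, as the starting point for a related task, where only a new output layer is trained.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task A: label 2-D points by whether x + y > 1.
X = rng.random((200, 2))
y_a = (X.sum(axis=1) > 1.0).astype(float)

W1 = rng.normal(0, 1, (2, 8))   # hidden layer, shared across tasks
w_a = rng.normal(0, 1, 8)       # output layer for task A

for _ in range(2000):
    H = sigmoid(X @ W1)                      # hidden features
    p = sigmoid(H @ w_a)
    err = p - y_a                            # cross-entropy gradient
    w_a -= 0.5 * H.T @ err / len(X)
    W1 -= 0.5 * X.T @ (np.outer(err, w_a) * H * (1 - H)) / len(X)

# Related task B: a shifted threshold, x + y > 0.8. Reuse the learned
# hidden layer as-is and train only a new output layer on top of it.
y_b = (X.sum(axis=1) > 0.8).astype(float)
w_b = rng.normal(0, 1, 8)
H = sigmoid(X @ W1)                          # features learned on task A
for _ in range(2000):
    p = sigmoid(H @ w_b)
    w_b -= 0.5 * H.T @ (p - y_b) / len(X)

print("task A accuracy:", ((sigmoid(sigmoid(X @ W1) @ w_a) > 0.5) == y_a).mean())
print("task B accuracy:", ((sigmoid(H @ w_b) > 0.5) == y_b).mean())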


Difficult but correctly transcribed examples from the internal street numbers dataset. The photo is a sample of Street View images that Google Brain can read.

Google has made a handful of its AI models available on the corporate intranet, and Dean and his team have built the back-end software that lets Google’s army of servers number-crunch the data and then present the results on a software dashboard that shows developers how well the AI code worked. “It looks like a nuclear reactor control panel,” says Dean.

With some projects—the Android voice work, for instance—Jeff Dean’s team needs to do some heavy lifting to make the learning models work properly for the job at hand. But perhaps half of the teams now using the Google Brain software are simply downloading the source code, tweaking a configuration file, and then pointing Google Brain at their own data.

“If you want to do leading edge research in this area and really advance the state-of-the-art in what kinds of models make sense for new kinds of problems, then you really do need a lot of years of training in machine learning,” says Dean. “But if you want to apply this stuff, and what you’re doing is a problem that’s somewhat similar to problems that have already been solved by a deep model, then…people have had pretty good success with that, without being deep learning experts.”
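Dean's description suggests a workflow something like the sketch below. Google Brain's internal interface has not been published, so every key, path, and helper here is hypothetical, and the configuration is shown as a Python dictionary rather than a file purely for illustration.

# Hypothetical "tweak a config, point it at your data" workflow.
config = {
    "model": "image_classifier",    # reuse a model type that already works
    "hidden_layers": [64, 64],      # architecture knobs left to tweak
    "learning_rate": 0.01,
    "epochs": 10,
    "train_data": "/data/my_team/street_numbers/",  # your own data
}

def train(cfg):
    # Stand-in for the shared infrastructure: it would load the named
    # model, fit it to the team's data, and report results to a dashboard.
    print("training", cfg["model"], "on", cfg["train_data"])

train(config)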

The New MapReduce

This form of internal code-sharing has already helped another cutting-edge Google technology called MapReduce catch fire.

A decade ago, Dean was part of the team that built MapReduce as a way to harness Google’s tens of thousands of servers and train them on a single problem—indexing the world wide web, for example. The MapReduce code was eventually published internally, and Google’s razor-sharp engineering staff figured out how to train it on a whole new class of big data computing problems.

The ideas behind MapReduce were eventually coded into an open-source project called Hadoop, which gave the rest of the world the number-crunching prowess that had once been the sole province of Google.
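The core idea fits in a few lines. Here is a minimal single-machine sketch in Python of the two phases that give MapReduce its name, counting words across documents; the real system runs these same two steps in parallel across thousands of servers.

from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Reduce: group the pairs by word and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the web is big", "indexing the web"]
print(reduce_phase(map_phase(docs)))
# {'the': 2, 'web': 2, 'is': 1, 'big': 1, 'indexing': 1}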

This may eventually happen with Google Brain too, as details of Google’s grand AI project trickle out. In January, the company published a paper on its Google Maps work, and given Google’s history of sharing its research work, more such publications are likely.

Given the breadth of the problems these deep learning algorithms solve, there’s a lot more for Google to do with Dean and his team’s code. They’ve also found that the models tend to become more accurate the more data they consume. That may be the next big goal for Google: building AI models that are based on billions of data points, not just millions. As Dean says: “We’re trying to push the next level of scalability in training really, really big models that are accurate.”


WIRED

The AI Startup Google Should Probably Snatch Up Fast


CLARIFAI

First, Google ACQUIRED A STARTUP called DNNresearch (Read next story below), snapping up some of the world’s foremost experts in a burgeoning field of artificial intelligence known as deep learning.

Then it shelled out $400 million for a secretive deep learning startup called DeepMind (Read below). Much like Facebook, Microsoft, and others, Google sees deep learning as the future of AI on the web, a better way of handling everything from voice and image recognition to language translation.

But there’s one notable deep learning company that Google hasn’t yet bought. It’s called Clarifai, and it may remain an independent operation. Clarifai, you see, wants to open up some of the deep learning secrets used by Google, Facebook, and other companies and share them with the rest of the world.

Clarifai specializes in using deep learning algorithms for visual search. In short, it’s building software that will help you find photos—whether they’re on your mobile phone, a dating website, or on a corporate network—and it will sell this software to all sorts of other companies that want to roll it into their own online services. “We’re interested in making search through images simple,” says founder Matthew Zeiler, a 27-year-old researcher, fresh out of New York University’s computer science PhD program.

Last year, along with NYU Professor Rob Fergus, Zeiler won a key image recognition test in a closely watched artificial intelligence competition called ImageNet. Fergus was soon snatched up by Facebook, and the big tech companies wanted to hire Zeiler too. But he had other plans.

‘It’s really defining your search in a visual way—not just in a text way.’

Over the past few years, there’s been a major-league talent grab going on in the world of deep learning, which relies on computer models that simulate the way information is processed by the human brain. In addition to Fergus, Facebook hired another well-known academic named Yann LeCun. Baidu picked up Stanford’s Andrew Ng, and Apple is building out a team too. The technology has already improved Android’s voice recognition and helped Microsoft create a futuristic live voice translation system called Skype Translate. But Zeiler thinks that many others could benefit from deep learning.

The trouble is, unless you have the money to hire your own deep learning experts, it can be hard to get the technology just right. The really difficult part is building learning models—essentially algorithms for processing all of the visual data—that work quickly across many different types of images. “To train these models is more of an art than a science,” says Zeiler. “It takes a lot of years of experience.” That’s where Clarifai comes in. Zeiler has spent the past five years working with two of the biggest names in the field on this kind of learning model: Geoff Hinton—now at Google—and Facebook’s Yann LeCun.

The idea is that you can upload an image to the Clarifai software, and it will figure out what’s in your picture and offer you more of the same. “It’s really defining your search in a visual way—not just in a text way,” says Zeiler.
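One common way to implement "more of the same," and plausibly what a system like this does under the hood (the details below are assumptions, not Clarifai's published method), is to turn every image into a numeric feature vector with a deep network, then return the stored images whose vectors sit closest to the query's. A small Python sketch with stand-in vectors:

import numpy as np

rng = np.random.default_rng(2)

# Stand-in feature vectors: in a real system, a deep network would map
# each indexed image to a vector like these.
index = {f"img_{i}.jpg": rng.normal(size=128) for i in range(1000)}
query = rng.normal(size=128)       # feature vector of the uploaded image

def cosine(a, b):
    # Cosine similarity: 1.0 means the two vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank indexed images by similarity to the query and keep the top 5.
ranked = sorted(index, key=lambda name: cosine(index[name], query), reverse=True)
print(ranked[:5])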

CONTINUE READING HERE...

An example of how Clarifai’s software works, using an image of one of our favorite internet cats, Lil Bub.

This could make Clarifai’s software appealing to businesses that own a large number of photographs but don’t yet have a good way to search through them. “The number of images and videos coming online is increasing,” says Max Krohn, a co-founder of the dating web site OKCupid. “So something has to make sense of all of that. And this idea that you just upload to Google and let them take care of that, it’s good for consumers, but it’s not good for enterprise or for some commerce plays.” Krohn became an angel investor in Clarifai after checking out Zeiler’s image searching demo late last year. Another investor: Google Ventures.

Clarifai is developing an application program interface, or API, that will let software developers access its image search technology over the net. The company plans on licensing its software to corporate users—stock image companies, for example—and it also wants to build a consumer-grade app that could index and search the photos on your phone, much like Google’s Photos app. Zeiler thinks it could be useful in e-commerce and targeted advertising too. “Let’s say that you’re walking down the street and you want to buy that dress that you see on some girl,” he says. “Take a shot of it and we can instantly match it on all of the online stores.”
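Since the API was still in development when this was written, there was no public interface to quote. The sketch below only illustrates the general shape such a visual-search call might take from Python; the endpoint, fields, and response format are all invented.

import requests

# Hypothetical visual-search request; the URL and fields are placeholders,
# not Clarifai's actual API.
with open("dress.jpg", "rb") as f:
    resp = requests.post(
        "https://api.example.com/v1/visual-search",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        files={"image": f},
    )

# Imagined response: visually similar products, ranked by similarity score.
for match in resp.json().get("matches", []):
    print(match["store_url"], match["score"])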

If you think that sounds like something that Google, Facebook, and even Amazon might be interested in, you’re right. But Clarifai is not yet ready to get “serious about acquisitions,” Zeiler says. For now, he’s focused on other things. “We really want to get something out there that users are going to be able to use and benefit from.”


RE/CODE

More on DeepMind: AI Startup to Work Directly With Google’s Search Team
By Liz Gannes (@LizGannes) and James Temple (@jtemple), Re/code, January 27, 2014, 6:09 PM PST



Google has been buying a lot of crazy stuff lately. At least eight robot companies, including humanoid robot-maker Boston Dynamics. Nest, the smart-home company that designs thermostats and smoke detectors. And now DeepMind, an artificial intelligence startup.


Boston Dynamics’ four-legged robot named WildCat can gallop at high speeds. Credit: Boston Dynamics

Taken together, the deals might all seem to add up to Skynet. But sources said DeepMind is actually being inserted into Google’s oldest team: Search.


Jeff Dean will be working closely with DeepMind.

Or, as search is known at Google today, the “Knowledge” group — so-called because it no longer just finds keywords on Web pages, but instead connects larger concepts. Knowledge is led by Google SVP Alan Eustace, but DeepMind will work closely with a team led by Jeff Dean, a near-15-year Google veteran best known for his work on distributed systems.

By contrast, the pack of acquired robots will report to former Android boss Andy Rubin, and Nest will continue to be managed by Tony Fadell (who is to report directly to Google CEO Larry Page).

“Manhattan Project for AI”

It may be hard to believe given the $400 million price tag (perhaps more, with earn-outs), but DeepMind is seen by Google as primarily a talent acquisition.

London-based DeepMind had not yet released any products, but sources said it was working on at least three: “A game with very advanced game AI, a smarter recommendation system for online commerce and something to do with images,” is how one source described it.

DeepMind employs a team of at least 50 people and has secured more than $50 million in funding, and it competed for talent with companies like Google, Facebook, Baidu, IBM, Microsoft and Qualcomm. As far as AI goes, it was perhaps the only startup name that could be included on that list.

“If anyone builds something remotely resembling artificial general intelligence, this will be the team,” one early investor in DeepMind told Re/code today.

“Think Manhattan Project for AI.”

DeepMind’s technology does become part of Google through the acquisition. There are traces of it around the Web, including three U.S. patent applications around reverse and composite image search and a paper about how a set of algorithms can learn to play and beat expert human players of the Atari games Breakout, Enduro and Pong.

Facebook had also been interested in the team at DeepMind, as The Information reported, though a source described talks late last year with Facebook CEO Mark Zuckerberg as aimed more at scooping up some of the deep learning researchers than buying the full company.

Amir Efrati at The Information anticipated the Google acquisition in a paywalled article in December, which laid out the “arms race” in deep learning between tech giants that are increasingly eager to hire researchers in the small field as part of general efforts to make their products smarter. Efrati also reported that the DeepMind Atari demonstration had impressed conference attendees in December.

Going Deep

Deep learning is a form of machine learning in which researchers attempt to train computer algorithms to spot meaningful patterns by showing them lots of data, rather than trying to program in every rule about the world.

CONTINUE READING HERE...

Taking inspiration from the way neurons work in the human brain, deep learning uses layers of algorithms that successively recognize increasingly complex features — going from, say, edges to circles to an eye in an image.


DeepMind co-founder Demis Hassabis at the World Series of Poker

Notably, these techniques have allowed researchers to train algorithms using unstructured data, where features haven’t been laboriously labeled by human beings ahead of time. It’s not a new concept, but recent refinements have resulted in significant advances over traditional AI approaches.
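A simple way to see both ideas at once, stacked layers and learning without labels, is an autoencoder: a network trained only to reconstruct its own input, so the data itself is the teacher. Here is a minimal NumPy sketch on toy data; all sizes and values are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Unlabeled toy data with hidden low-dimensional structure: 500 samples
# of 16 numbers that are secretly combinations of 4 underlying factors.
basis = rng.random((4, 16))
X = rng.random((500, 4)) @ basis

W_enc = rng.normal(0, 0.1, (16, 4))   # encoder: squeeze 16 inputs to 4
W_dec = rng.normal(0, 0.1, (4, 16))   # decoder: rebuild the 16 inputs

for _ in range(3000):
    H = np.tanh(X @ W_enc)            # hidden layer: learned features
    X_hat = H @ W_dec                 # reconstruction of the input
    err = X_hat - X                   # the input itself is the "teacher"
    W_dec -= 0.01 * H.T @ err / len(X)
    W_enc -= 0.01 * X.T @ ((err @ W_dec.T) * (1 - H**2)) / len(X)

print("mean reconstruction error:", float((err ** 2).mean()))

Stacking several such layers, each trained on the features produced by the one below, is one classic way deep networks build up the edges-to-circles-to-eye hierarchy described above.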

Yoshua Bengio, a computer science professor at the University of Montreal, organized a deep learning workshop at the Neural Information Processing Systems conference where DeepMind presented the Atari paper.

Bengio said DeepMind was essentially using deep learning to train software to solve problems even when feedback is indirect and delayed. For the paper, DeepMind trained software to play video games without teaching it the rules, forcing it instead to learn through its own errors and poor scores.

Bengio used an analogy to explain: It’s easier for a student to learn when a teacher corrects every answer on a test, but DeepMind is trying to get machines to learn when the only feedback is the grade.

“It’s a much harder problem,” Bengio said. “But there are lots of problems in the real world that are like this.”
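A minimal sketch of that harder problem, far simpler than (and not taken from) DeepMind's actual system: a toy corridor where the agent is graded only at the far end, and tabular Q-learning propagates that single delayed grade back to earlier moves. All details are invented for illustration.

import random

# States 0..5 along a corridor; actions 0 = left, 1 = right.
# The only reward arrives at state 5 -- feedback is delayed, like
# getting back a final grade instead of per-answer corrections.
N, GOAL = 6, 5
Q = [[0.0, 0.0] for _ in range(N)]    # value estimates per state/action
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        if random.random() < eps:      # occasionally explore
            a = random.randrange(2)
        else:                          # otherwise take the best-known move
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0  # reward only at the very end
        # Q-learning update: drag earlier moves toward the delayed reward.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Learned values rise as the goal gets closer (the goal itself needs none).
print([round(max(q), 2) for q in Q])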

Fear That Winter Is Coming

The ramping excitement — and the head-turning size of the Google-DeepMind talent acquisition deal — has made some in the artificial intelligence space concerned.

AI folks tend to be a skittish and skeptical breed. That’s because the promise of AI — that machines could be as smart as humans — is still more science fiction than reality. And hype has spiraled out of control multiple times over the past 75 years, with repeated letdowns famously leading to so-called “AI winters,” when funding and interest went cold for years at a time.

In particular, the notion that DeepMind asked that Google create an internal ethics board as a condition of the acquisition, as reported by The Information, had some AI researchers griping.

Google declined to comment on speculation about the creation of an ethics board.

It’s strange to imagine that trying to hold a giant company like Google to an ethical standard would be a cause for concern. (C’mon — any modern moviegoer is familiar with the specter of robots taking over the world. Of course there are ethical issues present.)

But some in the AI research community think that’s something that can be dealt with at a more realistic date.

“Things like the ethics board smack of the kind of self-aggrandizement that we are so worried about,” one machine learning researcher told Re/code. “We’re a hell of a long way from needing to worry about the ethics of AI.”

Of course, other people see the value of an ethics board.

Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, said an ethics board should merely raise the potential issues that software designers might not consider in designing AI tools.

“I don’t see the board’s job being to say ‘you can’t do this and can do this,’ but rather ‘let’s look at … how we can mitigate the risks and address the possible harms,’” he said.

To DeepMind’s credit, it was not exactly an epicenter of hype. Prior to being bought, the company was only really known to people who were recruited by it or had colleagues who were. In a 2011 interview that predated DeepMind, co-founder Shane Legg said he gave only a 50 percent chance that human-level machine intelligence would exist by 2028.

Search, Knowledge and Kittens

So what will Google do with DeepMind?

Artificial intelligence is core to many teams at Google, from the self-driving car to the search results page.

Jeff Dean (the Google executive running the team that DeepMind is joining) was the lead author on a paper in 2012 that boasted of training a deep network “30 times larger than previously reported in the literature” for the purposes of large visual object recognition tasks and speedy speech recognition. He also worked on a somewhat famous project where a neural network of 16,000 computers presented with stills from 10 million YouTube videos taught itself to recognize cats.


Image from a paper on the self-trained cat detector built by Google and Stanford

That project was originally part of the secretive Google X research lab, but was later incorporated into more core search work, John Markoff at the New York Times reported.

“Google uses machine learning in every nook and cranny of what they do,” said Pedro Domingos, a computer science professor at the University of Washington. “Larry Page and Sergey Brin don’t say it, but they want to solve the AI problem. They really do want AI to come true.”

Google has been buying up companies and hiring leading researchers in the artificial intelligence space for years, including Ray Kurzweil, Sebastian Thrun, Peter Norvig and Geoffrey Hinton. The acquisition of the DeepMind team adds co-founder Demis Hassabis, who worked as a neuroscientist before moving into AI, to that lineup.

Hassabis has closely studied how the brain functions — particularly the hippocampus, which is associated with memory — and worked on algorithms that closely model these natural processes.

“He is serious about combining neuroscience and machine learning, which is a very hot and very promising area,” Domingos said.


WIRED

Google Hires Brains that Helped Supercharge Machine Learning


Geoffrey Hinton (right), Alex Krizhevsky, and Ilya Sutskever (left) will do machine learning work at Google. Photo: U of T

GOOGLE HAS HIRED the man who showed how to make computers learn much like the human brain.

His name is Geoffrey Hinton, and on Tuesday, Google said that it had hired him along with two of his University of Toronto graduate students — Alex Krizhevsky and Ilya Sutskever. Their job: to help Google make sense of the growing mountains of data it is indexing and to improve products that already use machine learning — products such as Android voice search.

Google paid an undisclosed sum to buy Hinton’s company, DNNresearch. It’s a bit of a best-of-both-worlds deal for the researcher. He gets to stay in Toronto, splitting his time between Google and his teaching duties at the University of Toronto, while Krizhevsky and Sutskever fly south to work at Google’s Mountain View, California campus.

Back in the 1980s, Hinton kicked off research into neural networks, a field of machine learning where programmers can build machine learning models that help them to sift through vast quantities of data and put together patterns, much like the human brain.

Once a hot research topic, neural networks had apparently failed to live up to their initial promises until around 2006, when Hinton and his researchers — spurred on by some new kick-ass microprocessors — developed new “deep learning” techniques that fine-tuned the tricky and time consuming process of building neural network models for computer analysis.

“Deep learning, pioneered by Hinton, has revolutionized language understanding and language translation,” said Ed Lazowska, a computer science professor at the University of Washington. In an email interview, he said that a pretty spectacular December 2012 live demonstration of instant English-to-Chinese voice recognition and translation by Microsoft Research chief Rick Rashid was “one of many things made possible by Hinton’s work.”

“Hinton has been working on neural networks for decades, and is one of the most brilliant minds of the field,” said Andrew Ng, the Stanford University professor who set up Google’s neural network team in 2011. Ng invited Hinton to Google last summer, where the Toronto academic spent a few months as a visiting professor. “I’m thrilled that he’ll be continuing this work there, and am sure he’ll help drive forward deep learning research at Google,” Ng said via email.

CONTINUE READING HERE...

Google didn’t want to comment, or let Hinton talk to us about his new job, but clearly, it’s going to be important to Google’s future. Neural network techniques helped reduce the error rate with Google’s latest release of its voice recognition technology by 25 percent. And last month Google Fellow Jeff Dean told us that neural networks are becoming widely used in many areas of computer science.

“We’re not quite as far along in deploying these to other products, but there are obvious tie-ins for image search. You’d like to be able to use the pixels of the image and then identify what object that is,” he said. “There are a bunch of other more specialized domains like optical character recognition.”

“I am betting on Google’s team to be the epicenter of future breakthroughs,” Hinton wrote in a Google+ post announcing his move.

You can watch Rick Rashid’s cool demo here:

 

https://youtu.be/Nu-nlQqFCKg


Chief News Editor: Sol Jose Vanzi
© Copyright, 2015 by PHILIPPINE HEADLINE NEWS ONLINE
All rights reserved


PHILIPPINE HEADLINE NEWS ONLINE [PHNO] WEBSITE