© Copyright, 2015 (PHNO)
 http://newsflash.org

PHNO SCIENCE & INFOTECH NEWS
(Mini Reads followed by Full Reports)

PHNO ACCESSIBILITY AWARENESS: This page is dedicated to my accessibility student Mrs. Olympia Monfero of Thorncliffe Park, Toronto, Ontario, Canada. For the past ten years she has been my ace and steadfast Adaptive/Assistive Program student. Almost blind, she uses two special programs, ZoomText and Kurzweil 1000. Now at 93, believe me, her memory is still a lot better than mine. She is now almost totally blind; she sees mostly shadows and knows who I am through my familiar voice. Here she is, the unforgettable and my most inspirational student, 'Mommy Monfero':


TEN YEARS AGO - Mommy Monfero, 83 years old, diabetic, totally blind in one eye, with only 20% vision left in the other (in photo at left or top), checks a large-print hard copy of an enlarged Word document she printed. She quickly learned to use MS Word 2003 to type her 'personal journal,' mostly about her life with her late husband during the Japanese war in the Philippines. An accountant by profession in the Philippines, she learned and uses Excel for her club's financial tasks. She loves expressing her artistic ideas, patiently and meticulously designing her Seniors Club Christmas Party programmes in Microsoft Publisher, with my help readjusting the sizes and fonts of her own choice and design. She uses ZoomText to read her emails, and Kurzweil 1000 scans and reads her postal mail, newsletters, church bulletin, Sunday homilies, and prayer books. She works entirely by keyboard hotkeys, typing her shortcuts (all from memory) and depending completely on what she hears from the ZoomText voice technology. She only sees shadows now on the enlarged screen.

Today, she still has a fantastic memory and marked power of concentration. She enjoys opening the photo albums and videos her grandchildren upload to her PC, and listens to videos from her family, especially her grandchildren, on YouTube. Today, at 93, she has upgraded her desktop PC to Windows 7 and is attempting to navigate her timeline on Facebook, where she has already been joined by her children, grandchildren, and great-grandkids on social media. But she still needs me beside her when she is on her Facebook timeline: I still have to describe the photos of her new grandchild to her as she listens to comments and posts from her daughters and grandchildren via the voice. This 'artificial intelligence' app is what we have been waiting for so she can be an independent FB user. Thank you, Matt King, TY, Facebook! And TY, Wired! - lee quesada, PHNO webmaster. PHOTO FROM YEAR 2005, FROM THE 'LEEQUESADA' PERSONAL HOMEPAGE.

--------------------------------------------------------------------------------

'ARTIFICIAL INTELLIGENCE (AI)': FACEBOOK's 'AI' CAN CAPTION PHOTOS FOR THE BLIND ON ITS OWN
[Though this AI tool is merely a prototype, Facebook plans to eventually share it with the world at large. And that’s no small thing. About 50,000 people actively use the social network through Apple Voiceover, a popular text-to-speech system, and the overall population of blind Facebookers is undoubtedly much larger.]


Facebook’s AI Can Caption Photos for the Blind on Its Own. Through the power of “deep learning,” Facebook is figuring out how to make the social network accessible to nearly everyone. MATT KING IS blind, so he can’t see the photo. He is part of the FACEBOOK ACCESSIBILITY TEAM. And though it was posted to his Facebook feed with a rather lengthy caption, that’s no help. Thanks to text-to-speech software, his laptop reads the caption aloud, but it’s in German. And King doesn’t understand German. But then he runs an artificial intelligence tool under development at Facebook, and after analyzing the photo, the tool goes a long way towards describing it. The scene is outdoors, the AI says. It includes grass and trees and clouds. It’s near some water. King can’t completely imagine the photo—a shot of a friend with a bicycle during a ride through European countryside—but he has a decent idea of what it looks like. “My dream is that it would also tell me that it includes Christoph with his bike,” King says. “But from my perspective as a blind user, going from essentially zero percent satisfaction from a photo to somewhere in the neighborhood of half … is a huge jump.”
The 49-year-old King is part of the Facebook Accessibility Team. This means he works to hone the world’s most popular social network so that it can properly serve people with disabilities, including people who are deaf, people without full use of their hands, and, yes, people who are blind, like King himself. Though that AI tool is merely a prototype, Facebook plans to eventually share it with the world at large. And that’s no small thing. About 50,000 people actively use the social network through Apple Voiceover, a popular text-to-speech system, and the overall population of blind Facebookers is undoubtedly much larger. READ MORE...

ALSO: Meet the Team That Makes It Possible for the Blind to Use Facebook; and meet JESSIE LORENZ...


Facebook's Ramya Sethuraman and Jeff Wieland. Josh Valcarcel/WIRED
JESSIE LORENZ CAN’T see Facebook. But it gives her a better way to see the world—and it gives the world a better way to see her.
Lorenz has been blind since birth, and in some ways, this limits how she interacts with the people around her. “A lot of people are afraid of the blind,” she explains. “When you meet them in person, there are barriers.” But in connecting with many of the same people on Facebook, she can push through these barriers. “Facebook lets me control the narrative and break down some of the stigma and show people who I am,” she says. “It can change hearts and minds. It can make people like me—who are scary—more real and more human.” She uses Facebook through an iPhone and a tool called Voiceover, which converts text into spoken words. It’s not a perfect arrangement—Facebook photos are typically identified only with the word “photo”—but in letting her read and write on the social network, Voiceover and other tools provide a wonderfully immediate way to interact with people both near and far. “I can ask other parents about a playdate or a repair man or a babysitter, just like anyone else would,” says Lorenz, the executive director of the Independent Living Resource Center, a non-profit that supports people with disabilities in the San Francisco Bay Area. “Blindness becomes irrelevant in situations like that.” Lorenz is one of about 50,000 people who actively use Facebook through Apple Voiceover. No doubt, many more use it through additional text-to-speech tools. And tens of thousands of others—people who are deaf, or can’t use computer keyboards or mice or touch screens—use the social network in ways that most of its 1.3 billion users do not. They use closed captioning, mouth-controlled joysticks, and other tools—some built into Facebook, some that plug into Facebook from the outside. READ MORE...

ALSO: Facebook’s ‘Deep Learning’ Guru Reveals the Future of AI (Artificial Intelligence)


Yann LeCun. Photo: WIRED/Josh Valcarcel
NEW YORK UNIVERSITY professor Yann LeCun has spent the last 30 years exploring artificial intelligence, designing “deep learning” computing systems that process information in ways not unlike the human brain. And now he’s bringing this work to Facebook.
Earlier this week, the social networking giant told the world it had hired the French-born scientist to head its new artificial intelligence lab, which will span operations in California, London, and New York. From Facebook’s new offices on Manhattan’s Astor Place, LeCun will oversee the development of deep-learning tools that can help Facebook analyze data and behavior on its massively popular social networking service — and ultimately revamp the way the thing operates. With deep learning, Facebook could automatically identify faces in the photographs you upload, automatically tag them with the right names, and instantly share them with friends and family who might enjoy them too. Using similar techniques to analyze your daily activity on the site, it could automatically show you more stuff you wanna see. In some ways, Facebook and AI is a rather creepy combination. Deep learning provides a more effective means of analyzing your most personal of habits. “What Facebook can do with deep learning is unlimited,” says Abdel-rahman Mohamed, who worked on similar AI research at the University of Toronto. “Every day, Facebook is collecting the network of relationships between people. It’s getting your activity over the course of the day. It knows how you vote — Democrat or Republican. It knows what products you buy.” But at the same time, if you assume the company can balance its AI efforts with your need for privacy, this emerging field of research promises so much for the social networking service — and so many other web giants are moving down the same road, including Google, Microsoft, and Chinese search engine Baidu. “It’s scary on one side,” says Mohamed. “But on the other side, it can make our lives even better.” READ MORE...

ALSO WATCH VIDEO: WHAT IS ARTIFICIAL INTELLIGENCE (AI)?


FROM THE 'SCHOOL OF LIFE' YOUTUBE CHANNEL
Published on Aug 17, 2015. Should we be scared of artificial intelligence and all it will bring us? Not so long as we remember to make sure to build artificial emotional intelligence into the technology. ...Watch video...


READ FULL MEDIA REPORTS:

Facebook’s AI Can Caption Photos for the Blind on Its Own


Facebook's Matt King, Jeff Wieland, and Shaomei Wu. FACEBOOK

WASHINGTON, OCTOBER 26, 2015 (WIRED MAGAZINE) MATT KING IS blind, so he can’t see the photo. And though it was posted to his Facebook feed with a rather lengthy caption, that’s no help. Thanks to text-to-speech software, his laptop reads the caption aloud, but it’s in German. And King doesn’t understand German.

But then he runs an artificial intelligence tool under development at Facebook, and after analyzing the photo, the tool goes a long way towards describing it. The scene is outdoors, the AI says. It includes grass and trees and clouds. It’s near some water. King can’t completely imagine the photo—a shot of a friend with a bicycle during a ride through European countryside—but he has a decent idea of what it looks like.

“My dream is that it would also tell me that it includes Christoph with his bike,” King says. “But from my perspective as a blind user, going from essentially zero percent satisfaction from a photo to somewhere in the neighborhood of half … is a huge jump.”

'As a blind user, going from essentially zero percent satisfaction from a photo
to somewhere in the neighborhood of half ... is a huge jump.' MATT KING, FACEBOOK

The 49-year-old King is part of the Facebook Accessibility Team. This means he works to hone the world’s most popular social network so that it can properly serve people with disabilities, including people who are deaf, people without full use of their hands, and, yes, people who are blind, like King himself.

Though that AI tool is merely a prototype, Facebook plans to eventually share it with the world at large. And that’s no small thing. About 50,000 people actively use the social network through Apple Voiceover, a popular text-to-speech system, and the overall population of blind Facebookers is undoubtedly much larger.

READ MORE...

Like other social networks, Facebook is an extremely visual medium. But with help from a tool like Apple Voiceover, someone like King—who lost the last of his sight in college—can connect with friends and colleagues over Facebook much like anyone else can. As Jessie Lorenz, the executive director of the nonprofit Independent Living Resource Center, told WIRED earlier this year: “I can ask other parents about a playdate or a repair man or a babysitter, just like anyone else would. Blindness becomes irrelevant in situations like that.”

King tunes his text-to-speech tool to read Facebook posts at a rapid-fire pace—so fast that no one else in the room can understand it. That means he can browse his News Feed as quickly as the typical Facebooker. And in some cases, even without Facebook’s experimental AI system, he can start to understand what’s in a photo. Some photos include decent captions, and others offer meta-data describing who took them and when. But the AI system, bootstrapped with help from an accessibility researcher named Shaomei Wu and various Facebook AI engineers, pushes things significantly further. It can provide context using nothing but the photo itself.

“The team started with trying to make sure that all the products that [Facebook] builds are usable by people with disabilities,” says Jeff Wieland, the founder and head of Facebook’s accessibility team. “Long-term, we really want to get to the point where we’re building innovative technologies for people with disabilities.”

‘That’s Really Where We Want to Go’

Facebook’s photo-reading system is based on what’s called deep learning, a technique the company has long used to identify faces and objects in photos posted to its social network. Using vast neural networks—interconnected machines that approximate the web of neurons in the human brain—the company can teach its services to identify photos by analyzing enormous numbers of similar images.

To identify your face, for instance, it feeds all known pictures of you into the neural network, and over time, the system develops a pretty good idea of what you look like. This is how Facebook seems to recognize you and your friends when you upload a photo and start adding tags.
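
To make that concrete, here is a rough Python sketch of the embedding idea behind such face recognition. It is purely illustrative (the article does not publish Facebook's actual code): the names, vectors, and threshold below are invented stand-ins, and a real system would produce the 128-number vectors with a deep neural network rather than at random.

```python
# Illustrative sketch of embedding-based face matching; not Facebook's
# actual pipeline. A real system would compute the 128-number vectors
# with a deep neural network; here they are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
known_faces = {                       # embeddings for already-learned faces
    "Christoph": rng.normal(size=128),
    "Jessie": rng.normal(size=128),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(face_embedding, threshold=0.5):
    """Return the best-matching known name, or None if nothing is close."""
    best_name, best_score = None, threshold
    for name, ref in known_faces.items():
        score = cosine_similarity(face_embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A new photo whose face embedding lands close to Christoph's.
query = known_faces["Christoph"] + rng.normal(scale=0.1, size=128)
print(identify(query))   # -> Christoph
```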


Google uses similar neural networks to help you locate photos inside its new Google Photos app, and the same basic technology can drive all sorts of other online tasks, from speech recognition to language translation. It’s only natural that Facebook would use this technology to describe photos for the blind—though the technology is far from perfect.

“For object recognition and face recognition, we’ve basically reached human performance,” says Yoshua Bengio, a professor at the University of Montreal and one of the founding fathers of deep learning. “But there are still problems involving complex images, lighting, understanding the whole scene, and so on.”

At the moment, Facebook’s system merely provides a basic description of each photo. It can identify certain objects. It can tell you whether the photo was taken indoors or outdoors. It can say whether the people in the photo are smiling. But as King explains, this kind of thing can be quite useful. It’s particularly useful when friends and family upload new profile pics, which typically arrive without a caption.

That said, there’s ample room to improve the system. Deep learning neural nets are also pretty good at grasping natural language—the way humans naturally speak—and companies such as Google and Microsoft have published research papers showing how these neural nets can be used to automatically generate more complete photo captions—captions that describe the scene in full. This would be the next logical step for Facebook. “We’re returning a list. We’re not returning a story,” Wieland says. “But that’s really where we want to go.”
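
As a hedged illustration of Wieland's "list versus story" point, the sketch below turns a flat list of classifier tags into a single template sentence. The tag names, confidence scores, and "Image may contain" wording are all invented for this example; the article does not specify the prototype's output format.

```python
# Invented illustration of "list versus story": compose a sentence from
# classifier tags. Tag names, scores, and wording are hypothetical.
def describe(tags, min_confidence=0.8):
    """tags: (concept, confidence) pairs; keep confident ones, make a sentence."""
    kept = [concept for concept, conf in tags if conf >= min_confidence]
    if not kept:
        return "No description is available for this photo."
    return "Image may contain: " + ", ".join(kept) + "."

tags = [("outdoor", 0.97), ("grass", 0.92), ("tree", 0.90),
        ("cloud", 0.88), ("water", 0.85), ("bicycle", 0.41)]
print(describe(tags))
# -> Image may contain: outdoor, grass, tree, cloud, water.
# Note: the bicycle King hopes to hear about is dropped at low confidence.
```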


JOSH VALCARCEL/WIRED

The Entire Internet

The work is part of a broader effort to bring Facebook to people with disabilities.

The Accessibility Team, which Wieland founded after working at the User Experience Lab that tracks how Facebook is used across the ‘net, also facilitates closed captioning for the deaf. It promotes the use of mouth-controlled joysticks and other tools for those who can’t use their hands. And it works to ensure that the social network can be used in the developing world, where Internet connections are slower and less reliable than those in the States.

At the same time, Wieland’s team is hoping to push other companies in similar directions. In recent months, it helped found the Teaching Accessibility Initiative, a consortium of tech companies—including Yahoo and Microsoft—that aims to share practices in this area. And it’s working to modify React, Facebook’s open source app development tool, for use with text-to-speech readers and other software that aids people with disabilities. Because it’s open source, anyone can use React, and according to data from GitHub, it has become an extremely popular means of building new apps. “It’s one way we can make the entire Internet accessible,” Wieland says.

The possibilities within and beyond the company are enormous. As King notes, deep learning can be applied to speech recognition as well as image recognition, to moving images as well as stills. “AI is applicable to all those situations,” he says. “And it’s applicable to everyone.”


WIRED

Meet the Team That Makes It Possible for the Blind to Use Facebook


Facebook's Ramya Sethuraman and Jeff Wieland. Josh Valcarcel/WIRED

JESSIE LORENZ CAN’T see Facebook. But it gives her a better way to see the world—and it gives the world a better way to see her.

Lorenz has been blind since birth, and in some ways, this limits how she interacts with the people around her. “A lot of people are afraid of the blind,” she explains. “When you meet them in person, there are barriers.” But in connecting with many of the same people on Facebook, she can push through these barriers.

“Facebook lets me control the narrative and break down some of the stigma and show people who I am,” she says. “It can change hearts and minds. It can make people like me—who are scary—more real and more human.”

She uses Facebook through an iPhone and a tool called Voiceover, which converts text into spoken words. It’s not a perfect arrangement—Facebook photos are typically identified only with the word “photo”—but in letting her read and write on the social network, Voiceover and other tools provide a wonderfully immediate way to interact with people both near and far.

“I can ask other parents about a playdate or a repair man or a babysitter, just like anyone else would,” says Lorenz, the executive director of the Independent Living Resource Center, a non-profit that supports people with disabilities in the San Francisco Bay Area. “Blindness becomes irrelevant in situations like that.”


Jessie Lorenz. courtesy Jessie Lorenz

Lorenz is one of about 50,000 people who actively use Facebook through Apple Voiceover. No doubt, many more use it through additional text-to-speech tools.

And tens of thousands of others—people who are deaf, or can’t use computer keyboards or mice or touch screens—use the social network in ways that most of its 1.3 billion users do not. They use closed captioning, mouth-controlled joysticks, and other tools—some built into Facebook, some that plug into Facebook from the outside.

READ MORE...

So many people are using the social network through such tools, Facebook now employs a team of thinkers dedicated to ensuring they work as well as possible. “We wanted to build empathy into our engineering,” says Jeff Wieland, who helps oversee this effort.

He calls it the Facebook Accessibility team, and it’s a vital thing.

Not all online services are well suited to people with disabilities. “Google is really lousy,” says Lorenz, explaining that she can use Gmail but not Google Docs or Google Calendar.

And as a service like Facebook evolves—with engineers changing things on an almost daily basis—it consistently runs the risk of undermining Voiceover and other alternative means of using the social network.

Tech companies have long worked to ensure their software and services can be used by people with disabilities. Ramya Sethuraman, who helps drive Facebook’s effort, worked on similar issues with old-school software at IBM. But in the modern age, where so many services change from day to day, this requires a greater diligence.

As Sethuraman points out, other companies are tackling these issues in ways similar to Facebook, including Twitter, LinkedIn, and eBay, and for Lorenz, the improvement is apparent. “The industry is becoming more conscious about these things,” she says, “and very slowly, it’s getting better.”

The task is certainly more difficult in the modern age. But at the same time, the possibilities are greater. And the stakes are higher. “There are more people with disabilities than ever before. People are living longer. People are more likely to survive accidents,” says Adriana Mallozzi, who has cerebral palsy, typically uses Facebook and other services through a joystick she can control with her mouth, and serves as a kind of tech consultant for people with disabilities in the Boston area. “Companies have to take this into consideration.”

Free Access

Jeff Wieland founded the Facebook Accessibility team in 2011.

After studying pre-med as a college undergrad, he’d come to the company as a part-timer a few years earlier, helping out with customer support to pay the bills while also working at a Stanford University infectious disease lab, and somewhere along the way, this morphed into a career serving Facebookers in other ways.

Moving away from the world of medicine, he eventually joined the company’s User Experience research lab, where he explored how the world at large used Facebook, testing the classic “big blue app” in various ways, including through focus groups. And at one point, he realized that a portion of the app’s audience was under-served. “So, I pitched the accessibility idea,” he says. “Our goal as a company is to connect the world. If you really believe that, we need to include people with disabilities.”

He soon won approval for a dedicated accessibility team, and Sethuraman was his first hire. Basically, they work with other teams across the company to fine-tune the social network so it can be readily used by the blind, the deaf (videos have sound), and those who can’t use a keyboard or a computer mouse or a touch screen. “We think about how we can set up our engineers to build services that can be used with something like a screen reader,” Wieland says, referring to tools like Voiceover. “You have to build your code so it will work well with these things.”

Most notably, Wieland and Sethuraman work closely with Facebook’s product infrastructure team, the team that builds the basic components used by services across the social network—things like buttons and menus. “If other engineers use these tools,” he says, “they get a certain amount of accessibility for free—without even having to think about it.”

But Wieland and Sethuraman also directly target these other engineers—engineers across the company. Together with the company’s central quality-assurance team, they pinpoint holes in the Facebook interface and work with the appropriate engineers to fix them.

At one point, for instance, they helped develop a way of automatically providing Voiceover users with more information about the many photos uploaded to Facebook, so the blind can better understand what’s pictured—as opposed to merely reading what people are saying about the photos. Now, together with tools like Voiceover, Facebook can tell the blind when a photo was taken (based on meta-data uploaded with the photo) or who’s in it (based on tags from users). “We will try to pull in information,” Wieland says, “that tells a story.”
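
A small sketch of the kind of metadata-driven description this paragraph describes, with hypothetical field names (the article does not document Facebook's actual schema):

```python
# Sketch of a metadata-driven photo description; the field names are
# hypothetical, not Facebook's actual schema.
from datetime import date

def photo_description(meta):
    parts = ["Photo"]
    if meta.get("taken_on"):                    # from upload metadata
        parts.append(f"taken on {meta['taken_on']:%B %d, %Y}")
    if meta.get("tagged_people"):               # from user tags
        parts.append("with " + " and ".join(meta["tagged_people"]))
    return ", ".join(parts) + "."

meta = {"taken_on": date(2015, 10, 26), "tagged_people": ["Christoph"]}
print(photo_description(meta))
# -> Photo, taken on October 26, 2015, with Christoph.
```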


Soon Blind People Will Be Able To “See” Images On Facebook • October 15, 2015.

The Empathy Lab

Recently, in a busy walkway at the heart of the company’s headquarters in Menlo Park, California, Wieland and Sethuraman installed something they call the Facebook Empathy Lab. It’s not for research and testing per se. It’s meant to give all Facebook employees an idea of what it’s like to use the social network through Voiceover or short keys or closed captioning or high-contrast interfaces.

It’s essentially a row of laptops and phones. With one device, you can only drive Facebook with your voice. With another, you can only use keyboard shortcuts—not mice. And a long row of phones shows what it’s like to use Facebook in places where screens are small and network bandwidth is smaller.

The hope is that the company’s engineers will keep both physical disabilities and technical restrictions in mind when building something new—not just when Wieland pays them a visit, not just after it’s built. “We wanted to tie this basic thing to the Facebook culture. We wanted it to be like hacking,” Sethuraman says, referring to the continuously creative mindset that pervades the company.

Enter AI

All this may seem like an inexact science. But the results of the team’s work are apparent—at least to some. Eighteen months ago, Lorenz says, using Facebook was far more difficult—she couldn’t tag people and couldn’t upload photos, for instance—and the company’s software updates would often undermine her ability to use Voiceover.

Certainly, things are far from perfect on Facebook. When Mallozzi uses the social network on her phone—tapping into it via Bluetooth wireless controls built into her wheelchair—she wishes she could more easily scroll through the service and navigate individual pages. And despite recent changes made by Wieland and team, Lorenz says Facebook still doesn’t give her much info on photos. But that may change.

The company’s new artificial intelligence lab is exploring ways of using image recognition technology to generate captions that would identify photos in more precise ways—actually describe what’s pictured. And as this is rolled out, you can bet that it will work with Voiceover too. As Wieland says: “We were just talking about this at lunch today.”

Correction: This story has been updated to properly explain Jeff Wieland’s education history.


WIRED

Facebook’s ‘Deep Learning’ Guru Reveals the Future of AI


Yann LeCun. Photo: WIRED/Josh Valcarcel

NEW YORK UNIVERSITY professor Yann LeCun has spent the last 30 years exploring artificial intelligence, designing “deep learning” computing systems that process information in ways not unlike the human brain. And now he’s bringing this work to Facebook.

Earlier this week, the social networking giant told the world it had hired the French-born scientist to head its new artificial intelligence lab, which will span operations in California, London, and New York. From Facebook’s new offices on Manhattan’s Astor Place, LeCun will oversee the development of deep-learning tools that can help Facebook analyze data and behavior on its massively popular social networking service — and ultimately revamp the way the thing operates.

With deep learning, Facebook could automatically identify faces in the photographs you upload, automatically tag them with the right names, and instantly share them with friends and family who might enjoy them too. Using similar techniques to analyze your daily activity on the site, it could automatically show you more stuff you wanna see.

In some ways, Facebook and AI is a rather creepy combination. Deep learning provides a more effective means of analyzing your most personal of habits. “What Facebook can do with deep learning is unlimited,” says Abdel-rahman Mohamed, who worked on similar AI research at the University of Toronto. “Every day, Facebook is collecting the network of relationships between people. It’s getting your activity over the course of the day. It knows how you vote — Democrat or Republican. It knows what products you buy.”

But at the same time, if you assume the company can balance its AI efforts with your need for privacy, this emerging field of research promises so much for the social networking service — and so many other web giants are moving down the same road, including Google, Microsoft, and Chinese search engine Baidu. “It’s scary on one side,” says Mohamed. “But on the other side, it can make our lives even better.”

READ MORE...

This week, LeCun is at the Neural Information Processing Systems Conference in Lake Tahoe — the annual gathering of the AI community, where Zuckerberg and company announced his hire — but he took a short break from the conference to discuss his new project with WIRED. We’ve edited the conversation for reasons of clarity and length.

[This article was posted on WIRED Magazine. By CADE METZ, WIRED Business. Date of publication: 12.12.13, 6:30 AM.]

WIRED: We know you’re starting an AI lab at Facebook. But what exactly will you and the rest of your AI cohorts be working on?

LeCun: Well, I can tell you about the purpose and the goal of the new organization: It’s to make significant progress in AI. We want to do two things. One is to really make progress from a scientific point of view, from the side of technology. This will involve participating in the research community and publishing papers. The other part will be to, essentially, turn some of these technologies into things that can be used at Facebook.

But the goal is really long-term, more long-term than the work that is currently taking place at Facebook. It’s going to be somewhat isolated from day-to-day production, if you will — so that we give people some breathing room to think ahead. When you solve big problems like this, technology that’s pretty useful always comes out of it along the way.

‘Mark Zuckerberg calls it the theory of the mind.
How do we model — in machines — what human users
are interested in and are going to do?’
— Yann LeCun

WIRED: What might that technology look like? What might it do?

LeCun: The set of technologies that we’ll be working on is essentially anything that can make machines more intelligent. More particularly, that means things that are based on machine learning. The only way to build intelligent machines these days is to have them crunch lots of data — and build models of that data.

The particular set of approaches that have emerged over the last few years is called “deep learning.” It’s been extremely successful for applications such as image recognition, speech recognition, and a little bit for natural language processing, although not to the same extent. Those things are extremely successful right now, and even if we just concentrated on this, it could have a big impact on Facebook. People upload hundreds of millions of pictures to Facebook each day — and short videos and signals from chats and messages.

But our mission goes beyond this. How do we really understand natural language, for example? How do we build models for users, so that the content that is being shown to the user includes things that they are likely to be interested in or that are likely to help them achieve their goals — whatever those goals are — or that are likely to save them time or intrigue them or whatever. That’s really the core of Facebook. It’s currently to the point where a lot of machine learning is already used on the site — where we decide what news to show people and, on the other side of things, which ads to display.

WIRED: The science at the heart of this is actually quite old, isn’t it? People like you and Geoff Hinton, who’s now at Google, first developed these deep learning methods — known as “back-propagation” algorithms — in the mid-1980s.

LeCun: That’s the root of it. But we’ve gone way beyond that. Back-propagation allows us to do what’s called “supervised learning.” So, you have a collection of images, together with labels, and you can train the system to map new images to labels. This is what Google and Baidu are currently using for tagging images in user photo collections.
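
[To make "mapping images to labels" concrete: the toy Python sketch below trains a one-layer classifier on synthetic data by gradient descent. Back-propagation is the generalization of this same gradient step through many stacked layers. Illustrative only; every number here is made up.]

```python
# Toy supervised learning by gradient descent (illustrative only).
# One layer; back-propagation pushes the same gradient computation
# through many stacked layers. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 64                       # 200 tiny "images" of 64 pixels each
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)   # the label is some pattern in the pixels

w = np.zeros(d)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probability per image
    grad = X.T @ (p - y) / n             # gradient of the cross-entropy loss
    w -= 0.5 * grad                      # gradient-descent update

accuracy = (((X @ w) > 0) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")   # approaches 1.0
```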

That we know works. But then you have things like video and natural language, for which we have very little label data. We can’t just show a video and ask a machine to tell us what’s in it. We don’t have enough label data, and it’s not clear that we could — even by spending a lot of time getting users to provide labels — achieve the same level of performance that we do for images.

So, what we do is use the structure of the video to help the system build a model — the fact that some objects are in front of each other, for example. When the camera moves, the objects that are in front move differently from those in the back. A model of the object spontaneously emerges from this. But it requires us to invent new algorithms, new “unsupervised” learning algorithms.

This has been a very active area of research within the deep learning community. None of us believe we have the magic bullet for this, but we have some things that sort of work and that, in some cases, improve the performance of purely supervised systems quite a lot.
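
[A toy version of the motion cue LeCun describes: with no labels at all, simple frame differencing exposes which pixels belong to a moving foreground object. Purely illustrative; real unsupervised video models are far more elaborate.]

```python
# Toy motion cue (illustrative only): frame differencing with no labels.
import numpy as np

frame1 = np.zeros((8, 8))
frame2 = np.zeros((8, 8))
frame1[2:4, 2:4] = 1.0   # a small bright "object"...
frame2[2:4, 3:5] = 1.0   # ...shifted one pixel right in the next frame

moving = np.abs(frame2 - frame1) > 0   # pixels that changed between frames
print(np.argwhere(moving))             # coordinates the motion cue singles out
```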

WIRED: You mentioned Google and Baidu. Other web companies, such as Microsoft and IBM, are doing deep learning work as well. From the outside, it seems like all this work has emerged from a relatively small group of deep learning academics, including you and Google’s Geoff Hinton.

LeCun: You’re absolutely right — though it is quickly growing, I have to say. You have to realize that deep learning — I hope you will forgive me for saying this — is really a conspiracy between Geoff Hinton and myself and Yoshua Bengio, from the University of Montreal. Ten years ago, we got together and thought we were really starting to address this problem of learning representations of the world, for vision and speech.

Originally, this was for things like controlling robots. But we got together and got some funding from a Canadian foundation called CIFAR, the Canadian Institute For Advanced Research. Geoff was the director, and I was the chair of the advisory committee, and we would get together twice a year to discuss progress.

It was a bit of a conspiracy in that the majority of the machine learning and computer vision communities were really not interested in this yet. So, for a number of years, it was confined to those workshops. But then we started to publish papers and we started to garner interest. Then things started to actually work well, and that’s when industry started to get really interested.

The interest was much stronger and much quicker than from the academic world. It’s very surprising.

‘You have to realize that deep learning — I hope you will forgive me for saying this —
is really a conspiracy between Geoff Hinton and myself
 and Yoshua Bengio, from the University of Montreal’
— Yann LeCun

WIRED: How do you explain the difference between deep learning and ordinary machine learning? A lot of people are familiar with the sort of machine learning that Google did over the first ten years of its life, where it would analyze large amounts of data in an effort to, say, automatically identify web spam.

LeCun: That’s relatively simple machine learning. There’s a lot of effort that goes into creating those machine learning systems, in the sense that the system is not able to really process raw data. The data has to be turned into a form that the system can digest. That’s called a feature extractor.

Take an image, for example. You can’t feed the raw pixels into a traditional system. You have to turn the data into a form that a classifier can digest. This is what a lot of the computer vision community has been trying to do for the last twenty or thirty years — trying to represent images in the proper way.

But what deep learning allows us to do is learn this representation process as well, instead of having to build the system by hand for each new problem. If we have lots of data and powerful computers, we can build a system that can learn what the appropriate data representation is.

A lot of the limitations of AI that we see today are due to the fact that we don’t have good representations for the signal — or the ones that we have take an enormous amount of effort to build. Deep learning allows us to do this more automatically.

And it works better too.
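
[A minimal example of the hand-built "feature extractor" LeCun contrasts with deep learning: here the programmer fixes the representation (an intensity histogram) in advance, whereas a deep network would learn the representation from raw pixels. The function and data are invented for illustration.]

```python
# A hand-built feature extractor (illustrative only): the programmer,
# not the learning system, decides the representation in advance.
import numpy as np

def histogram_features(image, bins=8):
    """Collapse raw pixels into a fixed, hand-chosen summary."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / image.size           # normalized bin counts for a classifier

image = np.random.default_rng(2).random((32, 32))   # stand-in for raw pixels
print(histogram_features(image))       # 8 numbers summarizing 1,024 pixels
```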



WHAT IS ARTIFICIAL INTELLIGENCE? Watch video...

 
https://youtu.be/9TRv0cXUVQw

Artificial Intelligence
The School of Life

Published on Aug 17, 2015 Should we be scared of artificial intelligence and all it will bring us? Not so long as we remember to make sure to build artificial emotional intelligence into the technology.

Please help us to make films by subscribing here: http://tinyurl.com/o28mut7  Brought to you by http://www.theschooloflife.com

 Produced in collaboration with Mad Adam http://www.madadamfilms.co.uk 



Chief News Editor: Sol Jose Vanzi
© Copyright, 2015 by PHILIPPINE HEADLINE NEWS ONLINE
All rights reserved


PHILIPPINE HEADLINE NEWS ONLINE [PHNO] WEBSITE