PHILIPPINE HEADLINE NEWS ONLINE: Since 1997 © Copyright (PHNO) http://newsflash.org



PHNO SCIENCE & INFOTECH NEWS THIS PAST WEEK
CLICK TO READ ONLINE HERE
(Mini Reads followed by Full Reports)

MASSIVE NETWORKS OF FAKE ACCOUNTS FOUND ON TWITTER; THEY DON'T ACT LIKE TYPICAL BOTS BUT ARE CLEARLY NOT RUN BY HUMANS


JANUARY 24 -Image caption -The networks of fake accounts can be used to boost followers or post junk messages. The largest network ties together more than 350,000 accounts and further work suggests others may be even bigger. UK researchers accidentally uncovered the lurking networks while probing Twitter to see how people use it. Some of the accounts have been used to fake follower numbers, send spam and boost interest in trending topics.
Hidden purpose: On Twitter, bots are accounts that are run remotely by someone who automates the messages they send and activities they carry out. Some people pay to get bots to follow their account or to dilute chatter about controversial subjects. "It is difficult to assess exactly how many Twitter users are bots," said graduate student Juan Echeverria, a computer scientist at UCL, who uncovered the massive networks. Mr Echeverria's research began by combing through a sample of 1% of Twitter users in order to get a better understanding of how people use the social network. However, analysis of the data revealed some strange results that, when probed further, seemed to reveal lots of linked accounts, suggesting one person or group is running the botnet. These accounts did not act like the bots other researchers had found but were clearly not being run by humans.
READ MORE...

ALSO: 'Wrong Ivanka' from UK hits back after Trump tweet


JANUARY 20 -Image caption -Meanwhile, Ivanka Trump seemed to be unaware of the Twitter furore, posting this photo of herself. A woman from Brighton who was mistaken for Ivanka Trump on Twitter by none other than the US President-elect himself has told the BBC it has been a surreal start to the day. Ivanka Majic, a digital consultant, said she and her husband were woken at 06:00 by calls from the media. They told her she featured in a tweet to Donald Trump's 20 million followers. Mr Trump was quoting praise for his daughter from another Twitter user, who had used the wrong username. Users were quick to jump in and point out the mistake, and Ms Majic has since replied: READ MORE...

ALSO: Google offers a glimpse into its fight against fake news
(Nearly 200 publishers were kicked off its advertising network in November and December of last year)


JANUARY 26 -Google says it took down 1.7 billion ads in violation of its policies in 2016. (Kim Jin-a/Associated Press)
Nearly 200 publishers were kicked off its advertising network in November and December of last year
-In the waning months of 2016, two of the world's biggest tech companies decided they would do their part to curb the spread of hoaxes and misinformation on their platforms — by this point, widely referred to under the umbrella of "fake news." Facebook and Google announced they would explicitly ban fake news publishers from using their advertising networks to make money, while Facebook later announced additional efforts to flag and fact-check suspicious news stories in users' feeds. How successful have these efforts been? Neither company will say much — but Google, at least, has offered a glimpse. READ MORE...

ALSO: Only Facebook knows how it spreads fake election news; Secret algorithms make it hard to judge how too-good-to-be-true stories influence voters


JANUARY 26 -Hillary Clinton campaign chairman John Podesta addresses a crowd of supporters on election night in New York. (Jim Bourg/Reuters)
If Facebook is to be believed, Hillary Clinton has deep ties to satanic rituals and the occult. The post in question has nearly 3,000 shares, and links to a story on a conspiracy-laden political site. It is most definitely fake. But like many of the stories that were posted to Facebook in this U.S. election cycle, it was written specifically for those with a right-leaning partisan bias in mind. For this particular group of voters, it just begged to be shared.
'Because the algorithms are a black box, there's no way to study them.' — Frank Pasquale, law professor at the University of Maryland
And share they did. In an election dominated by the sexist, racist, and generally outrageous invective of America's president-elect Donald Trump, Facebook proved the perfect social platform for the sharing of fake, too-good-to-be-true style news. At the end of August, The New York Times' John Herrman reported on the subtle shift in Facebook feeds across America, many of which were increasingly filled with questionable news sources and fake stories specifically designed to be shared. More recently, BuzzFeed's Craig Silverman took on the daunting task of debunking fake news stories in near-real time. READ MORE...


READ FULL MEDIA REPORTS HERE BELOW
OR CLICK HERE TO READ ONLINE

Massive networks of fake accounts found on Twitter; they do not act like typical bots but are clearly not run by humans


Image caption -The networks of fake accounts can be used to boost followers or post junk messages

MANILA,
JANUARY 30, 2017 (BBC, UK) 24 January 2017 - The largest network ties together more than 350,000 accounts and further work suggests others may be even bigger. UK researchers accidentally uncovered the lurking networks while probing Twitter to see how people use it.

Some of the accounts have been used to fake follower numbers, send spam and boost interest in trending topics.

Hidden purpose

On Twitter, bots are accounts that are run remotely by someone who automates the messages they send and activities they carry out. Some people pay to get bots to follow their account or to dilute chatter about controversial subjects.

"It is difficult to assess exactly how many Twitter users are bots," said graduate student Juan Echeverria, a computer scientist at UCL, who uncovered the massive networks.
Mr Echeverria's research began by combing through a sample of 1% of Twitter users in order to get a better understanding of how people use the social network.

However, analysis of the data revealed some strange results that, when probed further, seemed to reveal lots of linked accounts, suggesting one person or group is running the botnet. These accounts did not act like the bots other researchers had found but were clearly not being run by humans.

READ MORE...

His research suggests earlier work to find bots has missed these types of networks because they act differently to the most obvious automated accounts.

The researchers are now asking the public, via a website and a Twitter account, to report bots they spot to help get a better idea of how prevalent they are. Many bots are obvious because they have been created recently, have few followers, have strange usernames and post little content.

The network of 350,000 bots stood out because all the accounts in it shared several subtle characteristics that revealed they were linked. These included:

• tweets coming from places where nobody lives
• messages being posted only from Windows phones
• tweets consisting almost exclusively of quotes from Star Wars novels
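The shared characteristics above amount to simple heuristics, which can be sketched as a toy scoring function. This is an illustrative sketch only, not the UCL researchers' actual method; the field names, thresholds and Star Wars snippets are assumptions made up for the example.

```python
# Illustrative bot-scoring sketch based on the traits described above.
# NOT the UCL researchers' method; field names and thresholds are
# invented for demonstration only.

STAR_WARS_SNIPPETS = {"the force", "jedi", "stormtrooper", "tatooine"}

def bot_score(account):
    """Count how many bot-like traits an account exhibits (0-4)."""
    score = 0
    # Tweets geotagged to uninhabited places (seas, deserts, poles)
    if account.get("tweets_from_uninhabited_places"):
        score += 1
    # Every tweet posted from the Windows Phone client
    if account.get("sources") == {"Twitter for Windows Phone"}:
        score += 1
    # Text consists almost exclusively of Star Wars quotes
    text = " ".join(account.get("tweets", [])).lower()
    if any(s in text for s in STAR_WARS_SNIPPETS):
        score += 1
    # Classic signals: very few followers and little content
    if account.get("followers", 0) < 10 and len(account.get("tweets", [])) < 5:
        score += 1
    return score

suspect = {
    "tweets_from_uninhabited_places": True,
    "sources": {"Twitter for Windows Phone"},
    "tweets": ["Fear leads to anger... said the Jedi"],
    "followers": 2,
}
print(bot_score(suspect))  # prints 4: every heuristic fires
```

A real detector would weight such signals and look at the network structure linking accounts, but the sketch shows why these bots evaded earlier work: each trait is individually subtle, and only the combination across hundreds of thousands of accounts reveals the network.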

It was "amazing and surprising" to discover the massive networks, said Dr Shi Zhou, a senior lecturer from UCL who oversaw Mr Echeverria's research.

"Considering all the efforts already there in detecting bots, it is amazing that we can still find so many bots, much more than previous research," Dr Zhou told the BBC.

Twitter deserved praise for its work on finding and eliminating bots, he added, but it was clear that skilled hackers had found ways to avoid official scrutiny and keep the bots ticking over.

The pair's most recent work had uncovered a bigger network of bots that seemed to include more than 500,000 accounts.


Image caption -The bot accounts sent tweets using quotes from Star Wars novels

"Their potential threats are real and scary due to the sheer size of the botnet," he said.

It was hard to know who was behind the collections of fake accounts, said Dr Zhou, although there was evidence that a small percentage of the accounts had been sold or rented as they were now following Twitter users outside the main bot network.

"What is really surprising is our questioning on the whole effort of bot detection in the past years," said Dr Zhou. "Suddenly we feel vulnerable and don't know much: how many more are there? What do they want to do?"

A Twitter spokesman said the social network had a clear policy on automation that was "strictly enforced".

Users were barred from writing programs that automatically followed or unfollowed accounts or which "favourited" tweets in bulk, he said.

Automated responses "degraded" the experience for other users and were prohibited, he added.

"While we have systems and tools to detect spam on Twitter, we also rely on our users to report spamming," he said.


BBC, UK

'Wrong Ivanka' from UK hits back after Trump tweet JANUARY 17, 2017


Image caption -Meanwhile, Ivanka Trump seemed to be unaware of the Twitter furore, posting this photo of herself
Ivanka Trump tweets a picture of herself, captioned: bright lights, big city #datenight

A woman from Brighton who was mistaken for Ivanka Trump on Twitter by none other than the US President-elect himself has told the BBC it has been a surreal start to the day.

Ivanka Majic, a digital consultant, said she and her husband were woken at 06:00 by calls from the media.

They told her she featured in a tweet to Donald Trump's 20 million followers.

Mr Trump was quoting praise for his daughter from another Twitter user, who had used the wrong username.



Users were quick to jump in and point out the mistake, and Ms Majic has since replied:


Media caption -The Other Ivanka spoke to the BBC's Outside Source

READ MORE...

Ms Majic, who has worked for the Labour Party in the past, said she discovered what had happened when a news agency texted her husband.

"I came downstairs to check my phone and I had so many notifications," she told the BBC. "It's very unusual to be speaking to both ITV and the BBC 45 minutes into your day."

Ms Majic's username is @Ivanka, while Ivanka Trump's is @IvankaTrump.


Image caption -Ms Majic says she also gets confused on Twitter with Ivanka Concrete

She said she has regularly been mistaken for Ms Trump on Twitter over the past year, but never on such a scale.

"Without Donald Trump it's a steady simmer of mentions," she said. "I am kind of like that @johnlewis bloke but John Lewis is probably nicer to be associated with."

"During the election I had a Twitter bot for everyone who accidentally mentioned me encouraging them to vote for Hillary [Clinton]".

Commander in Tweet

"Ivanka is an incredibly boring and popular Slavic girls' name. The other one I get confused with is a Hungarian concrete company called Ivanka Concrete," she added.

"I'm still undecided about whether to change my username. I don't use Twitter very much partly as a result as having so many mentions. Tweets from normal people get lost in the mix."

"I'm someone who has used Twitter since 2007. A new thing comes along and you create a username never thinking that one day Ivanka Trump's dad will be President."
Patrick Evans, UGC and Social News team


CBC NEWS CANADA

Google offers a glimpse into its fight against fake news By Matthew Braga, CBC News Posted: Jan 25, 2017 9:00 AM ET Last Updated: Jan 25, 2017 1:03 PM ET


Google says it took down 1.7 billion ads in violation of its policies in 2016. (Kim Jin-a/Associated Press)

Nearly 200 publishers were kicked off its advertising network in November and December of last year


Photo of Matthew Braga
Matthew Braga
Senior Technology Reporter
Matthew Braga is the senior technology reporter for CBC News.
He was previously the Canadian editor of Motherboard,
Vice Media's science and technology website, and
a business and technology reporter for the Financial Post.
Email: matthew.braga@cbc.ca
@mattbraga
SecureDrop (to contact CBC anonymously)
PGP Key (for encrypted emails)

In the waning months of 2016, two of the world's biggest tech companies decided they would do their part to curb the spread of hoaxes and misinformation on their platforms — by this point, widely referred to under the umbrella of "fake news."

Facebook and Google announced they would explicitly ban fake news publishers from using their advertising networks to make money, while Facebook later announced additional efforts to flag and fact-check suspicious news stories in users' feeds.

Psychologists say they can inoculate people against fake news

How successful have these efforts been? Neither company will say much — but Google, at least, has offered a glimpse.

READ MORE...

In a report released today, Google says that its advertising team reviewed 550 sites it suspected of serving misleading content from November to December last year.

Of those 550 sites, Google took action against 340 of them for violating its advertising policies.

"When we say 'take action' that basically means, this is a site that historically was working with Google and our AdSense products to show ads, and now we're no longer allowing our ad systems to support that content," said Scott Spencer, Google's director of product management for sustainable ads, in an interview.

Nearly 200 publishers — that is, the site operators themselves — were also removed from Google's AdSense network permanently, the company said.

Not all of the offenders were caught violating the company's new policy specifically addressing misrepresentation; some may have run afoul of other existing policies.

In total, Google says, it took down 1.7 billion ads in violation of its policies in 2016.

Questions remain

No additional information is contained within the report — an annual review of bad advertising practices that Google dealt with last year.

In both an interview and a followup email, Google declined to name any of the publishers that had violated its policies or been permanently removed from its network. Nor could Google say how much money it had withheld from publishers of fake news, or how much money some of its highest-grossing offenders made.

Some fake news site operators have boasted of making thousands of dollars a month in revenue from advertising displayed on their sites.

The sites reviewed by Google also represent a very brief snapshot in time — the aftermath of the U.S. presidential election — and Spencer was unable to say how previous months in the year might have compared.

"There's no way to know. We take action against sites when they're identified and they violate our policies," Spencer said. "It's not like I can really extrapolate the number."

A bigger issue

Companies such as Google are only part of the picture.

"It's the advertisers' dollars. It's their responsibility to spend it wisely," said Susan Bidel, a senior analyst at Forrester Research who recently co-wrote a report on fake news for marketers and advertisers.

That, however, is easier said than done. Often, advertisers don't know all of the sites on which their ads run — making it difficult to weed out sites designed to serve misinformation. And even if they are able to maintain a partial list of offending sites, "there's no blacklist that's going to be able to keep up with fake news," Bidel said, when publishers can quickly create new sites.
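Bidel's point about blacklists failing to keep up can be shown with a toy filter; the domain names below are invented for illustration and the real ad platforms use far more than a static list.

```python
# Toy ad-network blacklist -- domains are made up for illustration.
# A static blacklist blocks known offenders, but a publisher who spins
# up a fresh domain evades it immediately, which is Bidel's point.

blacklist = {"totally-real-news.example", "daily-truth-bombs.example"}

def allowed_to_serve_ads(domain):
    """Return True unless the domain is on the static blacklist."""
    return domain not in blacklist

print(allowed_to_serve_ads("totally-real-news.example"))   # prints False
print(allowed_to_serve_ads("brand-new-hoax-site.example")) # prints True
```

Because registering a new domain costs almost nothing, the list is stale the moment it is published, which is why the burden shifts to the platforms' own detection systems.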

Until advertisers have more insight into where their ads run, Bidel said, it's left to advertising platforms such as Google and Facebook to weed out offending sites.

In an email, Facebook declined to answer specific questions on its efforts — specifically, how many fake news publishers it has suspended or taken action against, the names of publishers or the amount of revenue Facebook has withheld from publishers of fake news.

Instead, the company provided a statement, attributed to an unnamed spokesperson: "It is still early days, but we're looking forward to learning and continuing to roll this out more broadly soon" — "this" referring to its previously announced tools and efforts to address fake news.

"I always say the bad guys with algorithms are going to be one step ahead of the good guys with algorithms," Bidel said. "I don't know that you're ever going to be able to eradicate this form of fraud, or any other form of fraud. But it can be brought to some acceptable level — and that level needs to be determined by the industry."


CBC CANADA

Only Facebook knows how it spreads fake election news; Secret algorithms make it hard to judge how too-good-to-be-true stories influence voters By Matthew Braga, CBC News Posted: Nov 11, 2016 5:00 AM ET Last Updated: Nov 11, 2016 5:38 AM ET


Hillary Clinton campaign chairman John Podesta addresses a crowd of supporters on election night in New York. (Jim Bourg/Reuters)

If Facebook is to be believed, Hillary Clinton has deep ties to satanic rituals and the occult.

The post in question has nearly 3,000 shares, and links to a story on a conspiracy-laden political site. It is most definitely fake. But like many of the stories that were posted to Facebook in this U.S. election cycle, it was written specifically for those with a right-leaning partisan bias in mind. For this particular group of voters, it just begged to be shared.

'Because the algorithms are a black box, there's no way to study them.'
— Frank Pasquale, law professor at the University of Maryland

And share they did. In an election dominated by the sexist, racist, and generally outrageous invective of America's president-elect Donald Trump, Facebook proved the perfect social platform for the sharing of fake, too-good-to-be-true style news.

At the end of August, The New York Times' John Herrman reported on the subtle shift in Facebook feeds across America, many of which were increasingly filled with questionable news sources and fake stories specifically designed to be shared. More recently, BuzzFeed's Craig Silverman took on the daunting task of debunking fake news stories in near-real time.

READ MORE...

Democrats and Republicans alike clicked and shared on what they hoped to be true, whether or not there was any underlying truth.

In both the run-up to the election and its immediate aftermath, there have been arguments that Facebook helped make a Trump presidency possible — that, by design, Facebook helps breed misinformation and encourage the spread of fake news, and that it can shape voter opinion based on the stories it chooses to show.

Whether or not this is true is practically impossible to say because of how little insight we have into how Facebook's myriad algorithms work.


High school students in San Francisco protest on Thursday against the election of Donald Trump. (Jeff Chiu/Associated Press)

"I think that if we were to learn how, for example, networks of disinformation form, that would give people a lot more information of how to create networks of information," said Frank Pasquale, a law professor at the University of Maryland, and author of The Black Box Society, a book on algorithms. "But because the algorithms are a black box, there's no way to study them."

Facebook is notoriously tight-lipped about how its algorithms are designed and maintained, and has granted only a handful of carefully controlled interviews with journalists. We know that signals such as likes, comments, and shares all factor heavily into what Facebook shows its users, but not which signals contribute to a particular post's appearance in a user's feed, nor how those signals are weighted.

"Anything that gets clicks, anything that gets more engagement and more potential ad revenue is effectively accelerated by the platform, with very rare exceptions," Pasquale said.
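The acceleration Pasquale describes can be illustrated with a toy ranking function. This is a hedged sketch of the general idea only, not Facebook's actual News Feed algorithm; the signals and weights are assumptions chosen to show the mechanism.

```python
# Toy engagement-weighted feed ranking -- NOT Facebook's algorithm.
# Signals and weights are invented; the point is only that ranking on
# engagement rewards shareable content regardless of its truth.

def engagement_score(post, w_like=1.0, w_comment=4.0, w_share=8.0):
    """Weight deeper engagement (comments, shares) above likes."""
    return (w_like * post["likes"]
            + w_comment * post["comments"]
            + w_share * post["shares"])

posts = [
    {"title": "City council budget report", "likes": 120, "comments": 8, "shares": 3},
    {"title": "Too-good-to-be-true hoax", "likes": 90, "comments": 60, "shares": 200},
]

# The hoax tops the feed because shares dominate its score; nothing in
# the ranking function asks whether the story is true.
feed = sorted(posts, key=engagement_score, reverse=True)
print(feed[0]["title"])  # prints "Too-good-to-be-true hoax"
```

The real system weighs far more signals than this, and the weights are secret: that opacity, not the basic mechanism, is what Pasquale's "black box" critique targets.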

Algorithmic transparency

Inevitably, posts that hewed to partisan beliefs proved especially popular, whether or not they were true. And how much of an impact these voices had on the voting public, only Facebook knows.

For us to have any insight would require a level of algorithmic transparency, or algorithmic accountability, for systems that few understand, though they increasingly shape the way we think.

"Election information is one of those domains where there's a pretty clear connection between information that people are being given access to and their ability to make a well informed decision," says Nicholas Diakopoulos, an assistant professor at the University of Maryland's journalism school.

He says algorithmic transparency is "one method to increase the level of accountability we have over these platforms."


Facebook board member Peter Thiel, who donated $1.25 million to Donald Trump's campaign, speaks at the Republican National Convention in July. (Mark J. Terrill/Associated Press)

Both Diakopoulos and Pasquale believe that Facebook is actually a media company — despite its repeated claims otherwise — and as such needs to take more responsibility for the quality of news that appears on its site.

One concern is that Facebook has so much power and influence over the content its nearly 1.2 billion daily users see that it could conceivably influence the outcome of an election. In fact, Facebook actually did something to this effect in 2012, assisting academic researchers with a "randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 U.S. congressional elections."

The study's authors concluded that, both directly and indirectly, the Facebook messages increased voter turnout by 340,000 votes. Without more insight into how Facebook places news stories in its users' feeds, no one would ever know if a viral political hoax site was responsible for doing the same.

Trusted sources

There is little insight into how Facebook identifies trustworthy sources of information and penalizes those that are not. But in light of past censorship squabbles — such as Facebook's removal and subsequent reinstatement of a photo from the Vietnam War of a young, napalm-burned Kim Phuc — the question is whether users will feel comfortable with Facebook having that role.

"Do we really want Facebook deciding what's misinformation or not?" asked Jonathan Koren, who previously worked at Facebook on the company's trending news algorithm, and is a software engineer at an artificial intelligence company called Ozlo.

"And that's why they don't want to do it, because they don't want to be responsible for it. But at the same time, there's nobody responsible."

