Report: Russia Has Access to UK Visa Processing

Investigative group Bellingcat and Russian website The Insider are suggesting that Russian intelligence has infiltrated the computer infrastructure of a company that processes British visa applications.

The investigation, published Friday, aims to show how two suspected Russian military intelligence agents, who have been charged with poisoning a former Russian spy in the English city of Salisbury, may have obtained British visas.

The Insider and Bellingcat said they interviewed the former chief technical officer of a company that processes visa applications for several consulates in Moscow, including that of Britain.

The man, who fled Russia last year and applied for asylum in the United States, said he had been coerced to work with agents of the main Russian intelligence agency FSB, who revealed to him that they had access to the British visa center’s CCTV cameras and had a diagram of the center’s computer network. The two outlets say they have obtained the man’s deposition to the U.S. authorities but have decided against publishing the man’s name, for his own safety.

The Insider and Bellingcat, however, did not demonstrate a clear link between the alleged efforts of Russian intelligence to penetrate the visa processing system and Alexander Mishkin and Anatoly Chepiga, who have been charged with poisoning Sergei Skripal in Salisbury in March this year.

The man also said that FSB officers told him in spring 2016 that they were going to send two people to Britain and asked for his assistance with the visa applications. The timing points to the first reported trip to Britain of the two men, who traveled under the names of Alexander Petrov and Anatoly Boshirov. The man, however, said he told the FSB that there was no way he could influence the decision-making on visa applications.

The man said he was coerced to sign an agreement to collaborate with the FSB after one of its officers threatened to jail his mother, and was asked to create a “backdoor” to the computer network. He said he sabotaged those efforts before he fled Russia in early 2017.

In September, British intelligence released surveillance images of the agents of Russian military intelligence GRU accused of the March nerve agent attack on double agent Skripal and his daughter in Salisbury. Bellingcat and The Insider quickly exposed the agents’ real names and the media, including The Associated Press, were able to corroborate their real identities.

The visa application processing company, TLSContact, and the British Home Office were not immediately available for comment.

Tech Firm Pays Refugees to Train AI Algorithms

Companies could help refugees rebuild their lives by paying them to boost artificial intelligence (AI) using their phones and giving them digital skills, a tech nonprofit said Thursday.

REFUNITE has developed an app, LevelApp, which is being piloted in Uganda to allow people who have been uprooted by conflict to earn instant money by “training” algorithms for AI.

Wars, persecution and other violence have uprooted a record 68.5 million people, according to the U.N. refugee agency.

People forced to flee their homes lose their livelihoods and struggle to create a source of income, REFUNITE co-chief executive Chris Mikkelsen told the Trust Conference in London.

“This provides refugees with a foothold in the global gig economy,” he told the Thomson Reuters Foundation’s two-day event, which focuses on a host of human rights issues.

$20 a day for AI work

A refugee in Uganda currently earning $1.25 a day doing basic tasks or menial jobs could make up to $20 a day doing simple AI labeling work on their phones, Mikkelsen said.

REFUNITE says the app could be particularly beneficial for women as the work can be done from the home and is more lucrative than traditional sources of income such as crafts.

The cash could enable refugees to buy livestock, educate children and access health care, leaving them less dependent on aid and helping them recover faster, according to Mikkelsen.

The work would also allow them to build digital skills they could take with them when they returned home, REFUNITE says.

“This would give them the ability to rebuild a life … and the dignity of no longer having to rely solely on charity,” Mikkelsen told the Thomson Reuters Foundation.

Teaching the machines

AI is the development of computer systems that can perform tasks that normally require human intelligence.

It is being used in a vast array of products, from driverless cars and agricultural robots that can identify and eradicate weeds to computers able to identify cancers.

In order to “teach” machines to mimic human intelligence, people must repeatedly label images and other data until the algorithm can detect patterns without human intervention.
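
In rough terms, that labeling-then-training loop can be sketched in a few lines of Python. The example below is a minimal illustration of supervised learning with toy data and a standard scikit-learn classifier; it is an assumption for illustration only, not the actual pipeline used by REFUNITE or any company mentioned here.

```python
# Minimal sketch of labeling data and then training a model on it.
# The toy data and model choice are illustrative assumptions, not any company's real pipeline.
from sklearn.linear_model import LogisticRegression

# Step 1: human annotators attach labels to raw examples (here, tiny two-number "images").
labeled_examples = [
    ([0.9, 0.1], "cat"),
    ([0.8, 0.2], "cat"),
    ([0.1, 0.9], "dog"),
    ([0.2, 0.8], "dog"),
]

features = [x for x, _ in labeled_examples]
labels = [y for _, y in labeled_examples]

# Step 2: once enough examples are labeled, an algorithm learns the pattern
# and can classify new data without further human intervention.
model = LogisticRegression().fit(features, labels)
print(model.predict([[0.85, 0.15]]))  # expected output: ['cat']
```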

REFUNITE, based in California, is testing the app in Uganda, where it has launched a pilot project involving 5,000 refugees, mainly from South Sudan and the Democratic Republic of Congo. It hopes to scale up to 25,000 refugees within two years.

Mikkelsen said the initiative was a win-win as it would also benefit companies by slashing costs.

Another tech company, DeepBrain Chain, has committed to paying 200 refugees for a test period of six months, he said.

Facebook CEO Details Company Battle with Hate Speech, Violent Content

Facebook says it is getting better at proactively removing hate speech and changing the incentives that result in the most sensational and provocative content becoming the most popular on the site.

The company has done so, it says, by ramping up its operations so that computers can review and make quick decisions on large amounts of content, with thousands of human reviewers making more nuanced decisions.

In the future, if a person disagrees with Facebook’s decision, he or she will be able to appeal to an independent review board.

Facebook “shouldn’t be making so many important decisions about free expression and safety on our own,” Facebook CEO Mark Zuckerberg said in a call with reporters Thursday.

But as Zuckerberg detailed what the company has accomplished in recent months to crack down on spam, hate speech and violent content, he also acknowledged that Facebook has far to go.

“There are issues you never fix,” he said. “There’s going to be ongoing content issues.”

Company’s actions

In the call, Zuckerberg addressed a recent story in The New York Times that detailed how the company fought back during some of its biggest controversies over the past two years, such as the revelation of how the network was used by Russian operatives in the 2016 U.S. presidential election. 

The Times story suggested that company executives first dismissed early concerns about foreign operatives, then tried to deflect public attention away from Facebook once the news came out.

Zuckerberg said the firm made mistakes and was slow to understand the enormity of the issues it faced. “But to suggest that we didn’t want to know is simply untrue,” he said.

Zuckerberg also said he didn’t know the firm had hired Definers Public Affairs, a Washington, D.C., consulting firm that spread negative information about Facebook competitors as the social networking firm was in the midst of one scandal after another. Facebook severed its relationship with the firm.

“It may be normal in Washington, but it’s not the kind of thing I want Facebook associated with, which is why we won’t be doing it,” Zuckerberg said.

The firm posted a rebuttal to the Times story.

Content removed

Facebook said it is getting better at proactively finding and removing content such as spam, violent posts and hate speech. The company said it removed or took other action on 15.4 million pieces of violent content between June and September of this year, about double what it removed in the prior three months.

But Zuckerberg and other executives said Facebook still has more work to do in places such as Myanmar. In the third quarter, the firm said it proactively identified 63 percent of the hate speech it removed, up from 13 percent in the last quarter of 2017. At least 100 Burmese language experts are reviewing content, the firm said.

One issue that continues to dog Facebook is that some of the most popular content is also the most sensational and provocative. Facebook said it now penalizes what it calls “borderline content” so it gets less distribution and engagement.

“By fixing this incentive problem in our services, we believe it’ll create a virtuous cycle: by reducing sensationalism of all forms, we’ll create a healthier, less-polarized discourse where more people feel safe participating,” Zuckerberg wrote in a post. 

Critics of the company, however, said Zuckerberg hasn’t gone far enough to address the inherent problems of Facebook, which has 2 billion users.

“We have a man-made, for-profit, simultaneous communication space, marketplace and battle space, and it is, as a result, designed not to reward veracity or morality but virality,” said Peter W. Singer, strategist and senior fellow at New America, a nonpartisan think tank, at an event Thursday in Washington, D.C.

VOA national security correspondent Jeff Seldin contributed to this report.

Realistic Masks Made in Japan Find Demand from Tech, Car Companies

Super-realistic face masks made by a tiny company in rural Japan are in demand from the domestic tech and entertainment industries and from countries as far away as Saudi Arabia.

The 300,000-yen ($2,650) masks, made of resin and plastic by five employees at REAL-f Co., attempt to accurately duplicate an individual’s face down to fine wrinkles and skin texture.

Company founder Osamu Kitagawa came up with the idea while working at a printing machine manufacturer.

But it took him two years of experimentation before he found a way to use three-dimensional facial data from high-quality photographs to make the masks, and started selling them in 2011.

The company, based in the western prefecture of Shiga, receives about 100 orders every year from entertainment, automobile, technology and security companies, mainly in Japan.

For example, a Japanese car company ordered a mask of a sleeping face to improve its facial recognition technology to detect if a driver had dozed off, Kitagawa said.

“I am proud that my product is helping further development of facial recognition technology,” he added. “I hope that the developers would enhance face identification accuracy using these realistic masks.”

Kitagawa, 60, said he had also received orders from organizations linked to the Saudi government to create masks for the king and princes.

“I was told the masks were for portraits to be displayed in public areas,” he said.

Kitagawa said he works with clients carefully to ensure his products will not be used for illicit purposes or create security risks, but added that he could not rule out such threats.

He said his goal was to create 100 percent realistic masks, and he hoped to use softer materials, such as silicone, in the future.

“I would like these masks to be used for medical purposes, which is possible once they can be made using soft materials,” he said. “And as humanoid robots are being developed, I hope this will help developers to create [more realistic robots] at a low cost.”

Debut of China AI Anchor Stirs up Tech Race Debates

China’s state-run Xinhua News has debuted what it called the world’s first artificial intelligence (AI) anchor. But the novelty has generated more dislikes than likes online among Chinese netizens, with many calling the new virtual host “a news-reading device without a soul.”

Analysts say the latest creation has showcased China’s short-term progress in voice recognition, text mining and semantic analysis, but challenges remain ahead for its long-term ambition of becoming an AI superpower by 2030.

Nonhuman anchors

Collaborating with Chinese search engine Sogou, Xinhua introduced two AI anchors, one for English broadcasts and the other for Chinese, both of which are based on images of the agency’s real newscasters, Zhang Zhao and Qiu Hao respectively.

In its inaugural broadcast last week, the English-speaking anchor was more tech cheerleader than newshound, rattling off lines few anchors would be caught dead reading, such as: “the development of the media industry calls for continuous innovation and deep integration with the international advanced technologies.”

It also promised “to work tirelessly to keep you [audience] informed as texts will be typed into my system uninterrupted” 24/7 across multiple platforms simultaneously if necessary, according to the news agency.

No soul

Local audiences appear to be unimpressed, critiquing the news bots’ not-so-human touch and synthesized voices.

On Weibo, China’s Twitter-like microblogging platform, more than one user responded to Xinhua’s announcement by writing that such anchors have “no soul.” One user joked, “What if we have an AI [country] leader?” while another questioned what the development means for journalistic values, writing: “What a nutcase. Fake news is on every day.”

Others pondered the implications AI news bots might have for employment and workers.

“It all comes down to production costs, which will determine if [we] lose jobs,” one Weibo user wrote. Some argued that only low-end labor-intensive jobs will be easily replaced by intelligent robots while others gloated about the possibility of employers utilizing an army of low-cost robots to make a fortune.

A simple use case

Industry experts said the digital anchor system is based on images of real people and possibly animated parts of their mouths and faces, with machine-learning technology recreating humanlike speech patterns and facial movements. It then uses a synthesized voice for the delivery of the news broadcast.

The creation showcases China’s progress in voice recognition, text mining and semantic analysis, all of which fall under natural language processing, according to Liu Chien-chih, secretary-general of the Asia IoT Alliance (AIOTA).

But that’s just one of many aspects of AI technologies, he wrote in an email to VOA.

Given the pace of experimental AI adoption by Chinese businesses, more user scenarios or designs of user interface can be anticipated in China, Liu added.

Chris Dong, director of China research at the market intelligence firm IDC, agreed the digital anchor is simply what he calls a “use case” for AI-powered services designed to attract advertisers and audiences.

He said, in an email to VOA, that China has fast-tracked its big data advantage around consumers or internet of things (IoT) infrastructure to add commercial value.

Artificial intelligence has also allowed China to accelerate its digital transformation across various industries and value chains, making them smarter and more efficient, Dong added.

Far from a threat to the US

But both said China is far from being able to challenge U.S. leadership on AI, given its lack of an open market, its weak respect for intellectual property rights (IPRs) and its lagging innovation in core AI technologies.

Earlier, Lee Kai-fu, a well-known venture capitalist who led Google China before the company pulled out of the country, was quoted by the news website TechCrunch as saying that the United States may have created artificial intelligence, but China is taking the ball and running with it when it comes to one of the world’s most pivotal technology innovations.

Lee summed up four major drivers behind his observation that China is beating the United States in AI: abundant data, hungry entrepreneurs, growing AI expertise and massive government support and funding.

Beijing has set a goal to become an AI superpower by 2030, and to turn the sector into a $150 billion industry.

Yet IDC’s Dong cast doubt on AI’s adoption rate and effectiveness in China’s traditional sectors. In some, such as manufacturing, the situation is worsening, he said.

He said China’s “state capitalism may have its short-term efficiency and gain, but over the longer-term, it is the open market that is fundamental to building an effective innovation ecosystem.”

The analyst urged China to open up and allow multinational software and service providers to contribute to its digital economic transformation.

“China’s ‘Made-in-China 2025’ should go back to the original flavor … no longer Made and Controlled by Chinese, but more [of] an Open Platform of Made-in-China that both local and foreign players have a level-playing field,” he said.

In addition to a significant gap in core technologies, China’s failure to uphold IPRs will work against its future development of AI software, “which often sells for many times more in the U.S. than in China, as the Chinese tend to think intangible assets are free,” AIOTA’s Liu said.

US Lawmaker Says Facebook Cannot Be Trusted to Regulate Itself

Democratic U.S. Representative David Cicilline, expected to become the next chairman of the House Judiciary Committee’s antitrust panel, said on Wednesday that Facebook cannot be trusted to regulate itself and that Congress should take action.

Cicilline, citing a report in the New York Times on Facebook’s efforts to deal with a series of crises, said on Twitter: “This staggering report makes clear that @Facebook executives will always put their massive profits ahead of the interests of their customers.”

“It is long past time for us to take action,” he said. Facebook did not immediately respond to a request for comment.

Facebook Chief Executive Mark Zuckerberg said a year ago that the company would put its “community” before profit, and it has doubled its staff focused on safety and security issues since then. Spending also has increased on developing automated tools to catch propaganda and material that violates the company’s posting policies.

Other initiatives have brought increased transparency about the administrators of pages and purchasers of ads on Facebook. Some critics, including lawmakers and users, still contend that Facebook’s bolstered systems and processes are prone to errors and that only laws will result in better performance.

The New York Times said Zuckerberg and the company’s chief operating officer, Sheryl Sandberg, ignored warning signs that the social media company could be “exploited to disrupt elections, broadcast viral propaganda and inspire deadly campaigns of hate around the globe.” And when the warning signs became evident, they “sought to conceal them from public view.”

“We’ve known for some time that @Facebook chose to turn a blind eye to the spread of hate speech and Russian propaganda on its platform,” said Cicilline, who will likely take the reins of the subcommittee on regulatory reform, commercial and antitrust law when the new, Democratic-controlled Congress is seated in January.

“Now we know that once they knew the truth, top @Facebook executives did everything they could to hide it from the public by using a playbook of suppressing opposition and propagating conspiracy theories,” he said.

“Next January, Congress should get to work enacting new laws to hold concentrated economic power to account, address the corrupting influence of corporate money in our democracy, and restore the rights of Americans,” Cicilline said.

FCC Launches First US High-Band 5G Spectrum Auction 

The Federal Communications Commission on Wednesday launched the agency’s first high-band 5G spectrum auction as it works to clear space for next-generation faster networks. 

Bidding began Wednesday on spectrum in the 28 GHz band and will be followed by bidding for spectrum in the 24 GHz band. The FCC is making 1.55 gigahertz of spectrum available and the auctions will be followed by a 2019 auction of three more millimeter-wave spectrum bands — 37 GHz, 39 GHz and 47 GHz. 

“These airwaves will be critical in deploying 5G services and applications,” FCC Chairman Ajit Pai said Wednesday. 

5G networks are expected to be at least 100 times faster than current 4G networks and cut latency, or delays, to less than one-thousandth of a second from one-hundredth of a second in 4G. They also will allow for innovations in a number of different fields. While millimeter-wave spectrum offers faster speeds, it cannot cover big geographic areas and will require significant new small cell infrastructure deployments. 

FCC Commissioner Brendan Carr said the spectrum being auctioned would allow for “faster broadband to autonomous cars, from smart [agriculture] to telehealth.” 

The spectrum being auctioned over the next 15 months “is more spectrum than is currently used for terrestrial mobile broadband by all wireless service providers combined,” the FCC said. 

Democratic FCC Commissioner Jessica Rosenworcel said the United States was following “the lead of South Korea, the United Kingdom, Spain, Italy, Ireland and Australia. But we put ourselves back in the running for next-generation wireless leadership,” and she called on the FCC to clearly state the timing for future spectrum auctions. 

Last month, U.S. President Donald Trump signed a presidential memorandum directing the Commerce Department to develop a long-term comprehensive national spectrum strategy to prepare for the introduction of 5G. 

Trump is also creating a White House Spectrum Strategy Task Force and wants federal agencies to report on government spectrum needs and review how spectrum can be shared with private sector users. 

AT&T, Verizon Communications, Sprint and T-Mobile U.S. are working to acquire spectrum and are developing and testing 5G networks. The first 5G-compatible commercial cellphones are expected to go on sale next year. 

As Laws Fail to Slow Online Sex Trade, Experts Turn to Tech

The online sale of sex slaves is going strong despite new U.S. laws to clamp down on the crime, data analysts said Wednesday, urging a wider use of technology to fight human trafficking.

In April, the United States passed legislation aimed at making it easier to prosecute social media platforms and websites that facilitate sex trafficking, days after a crackdown on classified ad giant Backpage.com.

The law resulted in an immediate and sharp drop in sex ads online but numbers have since picked up again, data presented at the Thomson Reuters Foundation’s annual Trust Conference showed.

“The market has been destabilized and there are now new entrants that are willing to take the risk in order to make money,” Chris White, a researcher at tech giant Microsoft who gathered the data, told the event in London.

New players

Backpage.com, a massive advertising site primarily used to sell sex — which some analysts believe accounted for 80 percent of online sex trafficking in the United States — was shut down by federal authorities in April.

Days later, the Fight Online Sex Trafficking Act (FOSTA), which introduced stiff prison sentences and fines for website owners and operators found guilty of contributing to sex trafficking, was passed into law.

The combined action caused the number of online sex ads to fall 80 percent to about 20,000 a day nationwide, White said.

The number of ads has since risen to about 60,000 a day, as new websites filled the gap, he said.

In October — in response to a lawsuit accusing it of not doing enough to protect users from human traffickers — social media giant Facebook said it worked internally and externally to thwart such predators.

Using technology to continuously monitor and analyze this kind of data is key to evaluating existing laws and designing new and more effective ones, White said.

“It really highlights what’s possible through policy,” added Valiant Richey, a former U.S. prosecutor who now fights human trafficking at the Organization for Security and Co-operation in Europe (OSCE), echoing the calls for new methods.

Law enforcement agencies currently tackle slavery one case at a time, but that approach falls short because the crime is too widespread and authorities are short of resources, he said.

As a prosecutor in Seattle, Richey said his office would work on up to 80 cases a year, while online searches revealed more than 100 websites where sex was sold in the area, some carrying an average of 35,000 ads every month.

“We were fighting a forest fire with a garden hose,” he said. “A case-based response to human trafficking will not on its own carry the day.”

At least 40 million people are victims of modern slavery worldwide — with nearly 25 million trapped in forced labor and about 15 million in forced marriages.

Nigerian Firm Takes Blame for Routing Google Traffic Through China

Nigeria’s Main One Cable took responsibility Tuesday for a glitch that temporarily caused some Google global traffic to be misrouted through China, saying it accidentally caused the problem during a network upgrade. 

The issue surfaced Monday afternoon as internet monitoring firms ThousandEyes and BGPmon said some traffic to Alphabet’s Google had been routed through China and Russia, raising concerns that the communications had been intentionally hijacked. 

Main One said in an email that it had caused a 74-minute glitch by misconfiguring a Border Gateway Protocol (BGP) filter used to route traffic across the internet. That resulted in some Google traffic being sent through Main One partner China Telecom, the West African firm said. 
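
To make the mechanism concrete, here is a hedged sketch in Python of how a BGP export filter is meant to work and how a misconfiguration can leak someone else’s routes. The prefixes, names and simplified logic below are illustrative assumptions only, not Main One’s actual router configuration.

```python
# Conceptual sketch of a BGP route leak; all prefixes and logic are hypothetical,
# not Main One's real configuration.

OWN_PREFIXES = {"196.216.148.0/22"}                    # routes this network legitimately originates (made up)
LEARNED_PREFIXES = {"8.8.8.0/24", "172.217.160.0/20"}  # routes learned from peers, e.g. Google's

def advertised_routes(filter_enabled: bool) -> set:
    """Return the prefixes the router announces to its upstream peers."""
    known = OWN_PREFIXES | LEARNED_PREFIXES
    if filter_enabled:
        # Correct export filter: only announce your own prefixes.
        return known & OWN_PREFIXES
    # Misconfigured filter: routes learned from peers leak out, so upstream networks
    # (such as China Telecom) may prefer this path and Google-bound traffic is misrouted.
    return known

print(advertised_routes(filter_enabled=True))   # only the network's own routes
print(advertised_routes(filter_enabled=False))  # Google's prefixes leak too
```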

Google has said little about the matter. It acknowledged the problem Monday in a post on its website that said it was investigating the glitch and that it believed the problem originated outside the company. The company did not say how many users were affected or identify specific customers. 

Google representatives could not be reached Tuesday to comment on Main One’s statement. 

Hacking concerns

Even though Main One said it was to blame, some security experts said the incident highlighted concerns about the potential for hackers to conduct espionage or disrupt communications by exploiting known vulnerabilities in the way traffic is routed over the internet. 

The U.S.-China Economic and Security Review Commission, a Washington group that advises the U.S. Congress on security issues, plans to investigate the issue, said Commissioner Michael Wessel. 

“We will work to gain more facts about what has happened recently and look at what legal tools or legislation or law enforcement activities can help address this problem,” Wessel said. 

Glitches in BGP filters have caused multiple outages to date, including cases in which traffic from U.S. internet and financial services firms was routed through Russia, China and Belarus. 

Yuval Shavitt, a network security researcher at Tel Aviv University, said it was possible that Monday’s issue was not an accident. 

“You can always claim that this is some kind of configuration error,” said Shavitt, who last month co-authored a paper alleging that the Chinese government had conducted a series of internet hijacks. 

Main One, which describes itself as a leading provider of telecom and network services for businesses in West Africa, said that it had investigated the matter and implemented new processes to prevent it from happening again. 

NATO Looks to Startups, Disruptive Tech to Meet Emerging Threats 

NATO is developing new high-tech tools, such as the ability to 3-D-print parts for weapons and deliver them by drone, as it scrambles to retain a competitive edge over Russia, China and other would-be battlefield adversaries. 

Gen. Andre Lanata, who took over as head of the NATO transformation command in September, told a conference in Berlin that his command demonstrated over 21 “disruptive” projects during military exercises in Norway this month. 

He urged startups as well as traditional arms manufacturers to work with the Atlantic alliance to boost innovation, as rapid and easy access to emerging technologies was helping adversaries narrow NATO’s long-standing advantage. 

Lanata’s command hosted its third “innovation challenge” in tandem with the conference this week, where 10 startups and smaller firms presented ideas for defeating swarms of drones on the ground and in the air. 

Winner from Belgium

Belgian firm ALX Systems, which builds civilian surveillance drones, won this year’s challenge.

Its CEO, Geoffrey Mormal, said small companies like his often struggled with cumbersome weapons procurement processes. 

“It’s a very hot topic, so perhaps it will help to enable quicker decisions,” he told Reuters. 

Lanata said NATO was focused on areas such as artificial intelligence, connectivity, quantum computing, big data and hypervelocity, but also wants to learn from DHL and others how to improve the logistics of moving weapons and troops. 

NATO Secretary-General Jens Stoltenberg said increasing military spending by NATO members would help tackle some of the challenges, but efforts were also needed to reduce widespread duplication and fragmentation in the European defense sector. 

Participants also met behind closed doors with chief executives from 12 of the 15 biggest arms makers in Europe. 

Facebook Unable to Identify Who Was Behind Network of Fake Accounts

Facebook said Tuesday it had been unable to determine who was behind dozens of fake accounts it took down shortly before the 2018 U.S. midterm elections.

“Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior,” Nathaniel Gleicher, head of cybersecurity policy, wrote on the company’s blog.

At least one of the Instagram accounts had well over a million followers, according to Facebook.

A website that said it represented the Russian state-sponsored Internet Research Agency claimed responsibility for the accounts last week, but Facebook said it did not have enough information to link the accounts to the agency, which has been called a troll farm.

“As multiple independent experts have pointed out, trolls have an incentive to claim that their activities are more widespread and influential than may be the case,” Gleicher wrote.

Sample images provided by Facebook showed posts on a wide range of issues. Some advocated on behalf of social issues such as women’s rights and LGBT pride, while others appeared to be conservative users voicing support for President Donald Trump.

The viewpoints on display potentially fall in line with a Russian tactic identified in other cases of falsified accounts. A recent analysis of millions of tweets by the Atlantic Council found that Russian trolls often pose as members on either side of contentious issues in order to maximize division in the United States.

Media: German States Want Social Media Law Tightened

German states have drafted a list of demands aimed at tightening a law that requires social media companies like Facebook and Twitter to remove hate speech from their sites, the Handelsblatt newspaper reported Monday.

Justice ministers from the states will submit their proposed revisions to the German law called NetzDG at a meeting with Justice Minister Katarina Barley on Thursday, the newspaper said, saying it had obtained a draft of the document.

The law, which came into full force on Jan. 1, is a highly ambitious effort to control what appears on social media and it has drawn a range of criticism.

While the German states are focused on concerns about how complaints are processed, other officials have called for changes following criticism that too much content was being blocked.

The states’ justice ministers are calling for changes that would make it easier for people who want to complain about banned content such as pro-Nazi ideology to find the required forms on social media platforms.

They also want to fine social media companies up to 500,000 euros ($560,950) for providing “meaningless replies” to queries from law enforcement authorities, the newspaper said.

Till Steffen, the top justice official in Hamburg and a member of the Greens party, told the newspaper that the law had in some cases proven to be “a paper tiger.”

“If we want to effectively limit hate and incitement on the internet, we have to give the law more bite and close the loopholes,” he told the paper. “For instance, it cannot be the case that some platforms hide their complaint forms so that no one can find them.”

Facebook in July said it had deleted hundreds of offensive posts since implementation of the law, which foresees fines of up to 50 million euros ($56.10 million) for failure to comply.

France to ‘Embed’ Regulators at Facebook to Combat Hate Speech

Facebook will allow French regulators to “embed” inside the company to examine how it combats online hate speech, the first time the wary tech giant has opened its doors in such a way, President Emmanuel Macron said Monday.

From January, Macron’s administration will send a small team of senior civil servants to the company for six months to verify Facebook’s goodwill and determine whether its checks on racist, sexist or hate-fueled speech could be improved.

“It’s a first,” Macron told the annual Internet Governance Forum in Paris. “I’m delighted by this very innovative experimental approach,” he said. “It’s an experiment, but a very important first step in my view.”

The trial project is an example of what Macron has called “smart regulation,” something he wants to extend to other tech leaders such as Google, Apple and Amazon.

The move follows a meeting with Facebook’s founder Mark Zuckerberg in May, when Macron invited the CEOs of some of the biggest tech firms to Paris, telling them they should work for the common good.

The officials may be seconded from the telecoms regulator and the interior and justice ministries, a government source said. Facebook said the selection was up to the French presidency.

It is unclear whether the group will have access to highly sensitive material such as Facebook’s algorithms or the code it uses to remove hate speech. It could travel to Facebook’s European headquarters in Dublin and global base in Menlo Park, California, if necessary, the company said.

“The best way to ensure that any regulation is smart and works for people is by governments, regulators and businesses working together to learn from each other and explore ideas,” Nick Clegg, the former British deputy prime minister who is now head of Facebook’s global affairs, said in a statement.

France’s approach to hate speech has contrasted sharply with that of Germany, Europe’s leading advocate of privacy.

Since January, Berlin has required sites to remove banned content within 24 hours or face fines of up to 50 million euros ($56 million). That has led to accusations of censorship.

France’s use of embedded regulators is modeled on what happens in its banking and nuclear industries.

“[Tech companies] now have the choice between something that is smart but intrusive and regulation that is wicked and plain stupid,” a French official said.

Macron, Tech Giants Launch ‘Paris Call’ to Fix Internet Ills

France and U.S. technology giants including Microsoft on Monday urged world governments and companies to sign up to a new initiative to regulate the internet and fight threats such as cyberattacks, online censorship and hate speech.

With the launch of a declaration titled the “Paris Call for Trust and Security in Cyberspace,” French President Emmanuel Macron is hoping to revive efforts to regulate cyberspace after the last round of United Nations negotiations failed in 2017.

In the document, which is supported by many European countries but, crucially, not China or Russia, the signatories urge governments to beef up protections against cyber meddling in elections and prevent the theft of trade secrets.

The Paris Call was initially pushed for by tech companies but was redrafted by French officials to include work done by U.N. experts in recent years.

“The internet is a space currently managed by a technical community of private players. But it’s not governed. So now that half of humanity is online, we need to find new ways to organize the internet,” an official from Macron’s office said.

“Otherwise, the internet as we know it today – free, open and secure – will be damaged by the new threats.”

By launching the initiative a day after a weekend of commemorations marking the 100th anniversary of World War I, Macron hopes to promote his push for stronger global cooperation in the face of rising nationalism.

In another sign of the Trump administration’s reluctance to join international initiatives it sees as a bid to encroach on U.S. sovereignty, French officials said Washington might not become a signatory, though talks are continuing.

However, they said large U.S. tech companies including Facebook and Alphabet’s Google would sign up.

“The American ecosystem is very involved. It doesn’t mean that in the end the U.S. federal government won’t join us, talks are continuing, but the U.S. will be involved under other forms,” another French official said.

Google Reforms Sexual Misconduct Rules

Google is promising to be more forceful and open about its handling of sexual misconduct cases, a week after high-paid engineers and others walked out in protest over its male-dominated culture.

CEO Sundar Pichai spelled out the concessions in an email sent Thursday to Google employees. The note of contrition came a week after the tech giant’s workers left their cubicles in dozens of offices around the world to protest management’s treatment of top executives and other male workers accused of sexual harassment and other misconduct. The protest’s organizers estimated about 17,000 workers participated in the walkout.

“Google’s leaders and I have heard your feedback and have been moved by the stories you’ve shared,” Pichai wrote in his email. “We recognize that we have not always gotten everything right in the past and we are sincerely sorry for that. It’s clear we need to make some changes.” Pichai’s email was obtained by The Associated Press.

Google bowed to one of the protesters’ main demands by dropping mandatory arbitration of all sexual misconduct cases. That will now be optional under the new policies. The change mirrors one made by ride-hailing service Uber after complaints from its women employees prompted an internal investigation that concluded its ranks had been poisoned by rampant sexual harassment.

Google will also provide more details about sexual misconduct cases in internal reports available to all employees. The breakdowns will include the number of cases that were substantiated within various company departments and list the types of punishment imposed, including firings, pay cuts and mandated counseling.

The company is also stepping up its training aimed at preventing misconduct, requiring all employees to go through the process annually instead of every other year. Those who fall behind in their training, including top executives, will be dinged in their annual performance reviews, leaving a blemish that could lower their pay and make it more difficult to get promoted.

The reforms are the latest fallout from a broader societal backlash against men’s exploitation of their women subordinates in business, entertainment and politics — a movement that has spawned the “MeToo” hashtag as a sign of unity and a call for change.

Google got caught in the crosshairs two weeks ago after The New York Times detailed allegations of sexual misconduct against the creator of Google’s Android software, Andy Rubin. The newspaper said Rubin received a $90 million severance package in 2014 after Google concluded the accusations were credible. Rubin has denied the allegations.

Like its Silicon Valley peers, Google has already openly acknowledged that its workforce is too heavily concentrated with white and Asian men, especially in the highest-paying executive and computer programming jobs. Women account for 31 percent of Google’s employees worldwide, and the share is lower in leadership roles.

Critics believe that gender imbalance has created a “brogrammer” culture akin to a college fraternity house that treats women as sex objects. As part of its ongoing efforts, Google will now require at least one woman or a non-Asian ethnic minority to be included on the list of candidates for executive jobs.

Google isn’t addressing another of the protesters’ grievances because it believes the complaint doesn’t have merit. The protesters demanded that women be paid the same as men for doing similar work, something Google has steadfastly maintained it has been doing for years.

Bullied Online? Speak Out, Says Britain’s Princess Beatrice 

Bullied herself online, Britain’s Princess Beatrice is determined to ensure other girls are equipped to deal with internet abuse and get the best from the digital world. 

Beatrice — who as the eldest daughter of Prince Andrew and his former wife, the Duchess of York, is eighth in line to the British throne — said the bullying she faced, about her weight and her appearance, was very public and could not be ignored. 

But she said other girls faced this in private and needed to be encouraged to speak out and to know where to get support, which prompted her to get involved in campaigns against cyber bullying. 

A recent study by the U.S.-based Pew Research Center found about 60 percent of U.S. teens had been bullied or harassed online, with girls more likely to be the targets of online rumor-spreading or nonconsensual explicit messages. 

“You’d like to say don’t pay attention to it … but the best advice is to talk about it,” Beatrice, 30, told the Thomson Reuters Foundation during an interview on Wednesday at the Web Summit, Europe’s largest annual technology conference. 

“Being a young girl, but now being 30 and a woman working full time in technology, I feel very grateful for those experiences. But at that time it was very challenging.” 

Beatrice, who works at the U.S.-based software company Afiniti, co-founded the Big Change Charitable Trust with a group of friends, including two of Richard Branson’s children, in 2010 to support young people who also grew up in the public eye. 

Campaign

Last year she also joined the anti-bullying campaign “Be Cool Be Nice,” which included a book, along with other celebrities such as Kendall Jenner and Cara Delevingne. 

“There are lots of people who are ready to help and I want to make sure young people feel they have the places to go to talk about it,” said Beatrice, adding that teachers and parents also had a role to play. 

Beatrice said her bullying was so public that she could not hide from it, but her mother, Sarah Ferguson, was a great source of support. 

One of the most public attacks on the princess was at the 2011 wedding of her cousin Prince William when her fascinator sparked a barrage of media attention. A month later she auctioned the hat for charity for 81,000 pounds ($106,500). 

Her mother, who divorced Prince Andrew in 1996, had to get used to unrelenting ribbing by Britain’s royal-obsessed media. 

“She has been through a lot,” said Beatrice, whose younger sister, Eugenie, married at Windsor Castle last month. 

“When you see role models who are continually put in very challenging situations and can support you … [then] some of the tools that I have had from her I would like to share.” 

Beatrice said mobile technology should be a force for good for girls in developed and developing countries, presenting new opportunities in terms of education, careers and health. 

“Social media and the pressures that these young people now face is a new phenomenon … and if I can do more to give young people the tools [to cope], that is my mission,” she said. 

“I would say to young girls: You are not alone. Keep going.” 

Facebook: More than 100 Accounts Blocked Prior to US Midterms

Facebook says it has blocked more than 100 accounts with potential ties to a so-called Russian “troll farm” that may have sought to interfere with Tuesday’s U.S. midterm elections.

The social media giant said in a statement Wednesday that it had blocked the Facebook and Instagram accounts ahead of the vote. Facebook said it made the move after a tip from law enforcement officials.

Facebook’s head of cybersecurity, Nathaniel Gleicher, said in a statement that the accounts were blocked late Monday over suspicions they were “engaged in coordinated inauthentic behavior, which is banned from our services.” Among those blocked were 85 Instagram accounts and 30 Facebook pages, with most of the pages in French or Russian. The Instagram accounts were mostly in English, Facebook said.

Investigators say the accounts may be linked to a group known as the Internet Research Agency, which is based in St. Petersburg, Russia. In February, a federal grand jury indicted the group over allegations of interference in the 2016 U.S. presidential election.

Gleicher called the recent discovery “a timely reminder that these bad actors won’t give up — and why it is so important we work with the U.S. government and other technology companies to stay ahead.”

Before Gleicher’s statement, the Internet Research Agency said in a statement that it was responsible for the accounts, although that has not been verified.

In its statement, the organization said, “Citizens of the United States of America! Your intelligence agencies are powerless. Despite all their efforts, we have thousands of accounts registered on Facebook, Twitter, and Reddit spreading political propaganda.” The message was written in capital letters.

The statement also included a list of accounts to which the organization was supposedly attached.

In April, Facebook closed some 270 accounts linked to the Internet Research Agency. Facebook also recently banned 82 accounts linked to Iran that were posting politically charged memes.

Facebook, Google Tools Reveal New Political Ad Tactics

Public databases that shine a light on online political ads – launched by Facebook and Google before Tuesday’s U.S. elections – offer the public the first broad view of how quickly the companies yank advertisements that break their rules.

The databases also provided campaigns unprecedented insight into opponents’ online marketing, enabling them to capitalize on weaknesses, political strategists told Reuters.

Facebook and Google, owned by Alphabet, introduced the databases this year to give details on some political ads bought on their services, a response to U.S. prosecutors’ allegations that Russian agents who deceptively interfered in the 2016 election purchased ads from the companies.

Russia denies the charges. American security experts said the Russians changed tactics this year.

Reuters found that Facebook and Google took down 436 ads from May through October related to 34 U.S. House of Representatives contests declared competitive last month by RealClearPolitics, which tracks political opinion polls.

Of the 258 removed ads with start and end dates, ads remained live for an average of eight days on Google and 15 days on Facebook, according to data Reuters collected from the databases.

Based on ranges in the databases, the 436 ads were displayed up to 20.5 million times and cost up to $582,000, amounting to a fraction of the millions of dollars spent online in those races.

Asked for comment, Google said it is committed to bringing greater transparency to political ads. Facebook said the database is a way the company is held accountable, “even if it means our mistakes are on display.”

In some cases, the companies’ automated scans did not identify banned material such as hateful speech or images of poor quality before ads went live.

Ads that are OK when scanned may also become noncompliant if they link to a website that later breaks down.

Google’s database covers $54 million in spending by U.S. campaigns since May, and Facebook’s covers $354 million.

Facebook’s figure is larger partly because its database includes ads not only from federal races but also for state contests, national issues and get-out-the-vote efforts.

The databases generally do not say why a particular ad was removed, and only Facebook shows copies of yanked ads.

The American Conservative Union political organization, which had 136 ads removed through Sunday on Facebook, said some commercials contained a brief shot of comedian Kathy Griffin holding a decapitated head meant to portray U.S. President Donald Trump.

Removing the bloody image resolved the violation for sensational content, and the organization said it had no qualms about Facebook’s screening.

Some removals were errors. The Environmental Defense Action Fund said Facebook’s automated review wrongly classified one of its ads as promoting tobacco.

Ryan Morgan, whose political consulting firm Veracity Media arranged attack ads for a U.S. House race in Iowa, said Google barred those mentioning “white supremacy” until his team could explain the ads advocated against the racist belief.

Five campaign strategists told Reuters they adjusted advertising tactics in recent weeks based on what the databases revealed about opponents’ spending on ads and which genders, age groups and states saw the messages.

Ohio digital consultant Kevin Bingle said his team reviewed opponents on Facebook’s database daily to take advantage of gaps in their strategy.

Morgan said his team tripled its online ad budget to $600,000 for a San Francisco affordable housing tax after Facebook’s database showed the other side’s ads were reaching non-Californians.

That political intelligence “let us know that digital was a place we could run up the score,” he said.

Floating Solar Panels Buoy Access to Clean Energy in Asia

When the worst floods in a century swept through India’s southern Kerala state in August, they killed more than 480 people and left behind more than $5 billion in damage.

But one thing survived unscathed: India’s first floating solar panels, on one of the country’s largest water reservoirs.

As India grapples with wilder weather, surging demand for power and a goal to nearly quintuple the use of solar energy in just four years, “we are very much excited about floating solar,” said Shailesh K. Mishra, director of power systems at the government Solar Energy Corporation of India.

India is planning new large-scale installations of the technology on hydropower reservoirs and other water bodies in Tamil Nadu, Jharkhand and Uttarakhand states, and in the Lakshadweep islands, he told the Thomson Reuters Foundation.

“The cost is coming almost to the same level as ground solar, and then it will go (forward) very fast,” he predicted.

As countries move to swiftly scale up solar power, to meet growing demand for energy and to try to curb climate change, floating solar panels – installed on reservoirs or along coastal areas – are fast gaining popularity, particularly in Asia, experts say.

The panels – now in place from China to the Maldives to Britain – get around some of the biggest problems facing traditional solar farms, particularly a lack of available land, said Oliver Knight, a senior energy specialist with the World Bank.

“The water body is already there – you don’t need to go out and find it,” he said in a telephone interview.

And siting solar arrays on water – most cover up to 10 percent of a reservoir – can cut evaporation as well, a significant benefit in water-short places, Knight said.

Pakistan’s new government, for instance, is talking about using floating solar panels on water reservoirs near Karachi and Hyderabad, both to provide much-needed power and to curb water losses as climate change brings hotter temperatures and more evaporation, he said.

Solar arrays on hydropower dams also can take advantage of existing power transmission lines, and excess solar can be used to pump water, effectively storing it as hydropower potential.

Big Potential

China currently accounts for most of the 1.1 gigawatts of floating solar generating capacity now installed, according to the World Bank.

But the technology’s potential is much bigger – about 400 gigawatts, or about as much generating capacity as all the solar photovoltaic panels installed in the world through 2017, the bank said.

“If you covered 1 percent of manmade water bodies, you’re already looking at 400 gigawatts,” Knight said. “That’s very significant.”

Growing use of the technology has raised fears that it could block sunlight from reaching reservoirs, affecting wildlife and ecosystems, or that electrical systems might not stand up to a watery environment – particularly in salty coastal waters.

But backers say that while environmental concerns need to be better studied, the relatively small amount of surface area covered by the panels – at least at the moment – doesn’t appear to create significant problems.

“People worried what will happen to fish, to water quality,” said India’s Mishra. “Now all that attention has gone.”

What may be more challenging is keeping panels working – and free of colonizing sea creatures – in corrosively salty coastal installations, which account for a relatively small percentage of total projects so far, noted Thomas Reindl of the Solar Energy Research Institute of Singapore.

He said he expects the technology will draw more investment “when durability and reliability has been proven in real world installations.”

Currently floating solar arrays cost about 18 percent more than traditional solar photovoltaic arrays, Knight said – but that cost is often offset by other lower costs.

“In many places one has to pay for land, for resettlement of people or preparing and leveling land and building roads,” he said. With floating solar, “you avoid quite a bit of that.”

Solar panels used on water, which cools them, also can produce about 5 percent more electricity, he said.

Mishra said that while, in his view, India has sufficient land for traditional solar installations, much of it is in remote areas inhospitable to agriculture, including deserts.

Putting solar panels on water, by comparison, cuts transmission costs by moving power generation closer to the people who need the energy, he said.

He said India already makes the solar panels it needs, and is now setting up manufacturing for the floats and anchors needed for floating solar systems.

When that capacity is in place, “then the cost will automatically come down,” he predicted.