FBI Task Force Sharing Information About Online Trolls 

The FBI has started sharing information about online trolls and other suspicious users with top technology companies as part of the bureau’s behind-the-scenes effort to disrupt foreign influence operations aimed at U.S. elections. Officials say it is the service providers’ responsibility to police malign messaging by Russia and other countries.

“By sharing information with them, especially about who certain users and account holders actually are, we can assist their own, voluntary initiatives to track foreign influence activity and to enforce their own terms of service,” said Adam Hickey, a deputy assistant attorney general.

The information, described as “actionable intelligence,” is funneled through a foreign influence task force that FBI Director Christopher Wray set up last fall as part of a broader government approach to counter foreign influence operations and to prevent a repeat of Russian meddling in the 2018 midterm and the 2020 presidential elections.

The U.S. intelligence community concluded last year that Russia tried to interfere in the 2016 election in part by orchestrating a massive social media campaign aimed at swaying American public opinion and sowing discord.

“Technology companies have a front-line responsibility to secure their own networks, products and platforms,” Wray said. “But we’re doing our part by providing actionable intelligence to better enable them to address abuse of their platforms by foreign actors.”

He said FBI officials have provided top social media and technology companies with several classified briefings so far this year, sharing “specific threat indicators and account information, and a variety of other pieces of information so that they can better monitor their own platforms.”

FBI expertise

The task force works with personnel in all 56 FBI field offices and “brings together the FBI’s expertise across the waterfront — counterintelligence, cyber, criminal and even counterterrorism — to root out and respond to foreign influence operations,” Wray said at a White House briefing.  

Hickey said on Monday that the FBI’s unpublicized sharing of information with the social media companies is a “key component” of the Justice Department’s strategy to counter covert foreign influence efforts.

“It is those providers who bear the primary responsibility for securing their own products and platforms,” Hickey said this week at MisinfoCon, an annual conference on misinformation held in Washington, D.C.

The comments come as top U.S. security officials, from Director of National Intelligence Dan Coats on down, have warned about continued attempts by Russia and potentially others to disrupt the November midterm elections.

Coats said on Friday that U.S. intelligence agencies continue “to see a pervasive message campaign” by Russia, while Wray said Moscow “continues to engage in malign influence operations to this day.” 

But the officials and social media company executives say the ongoing misinformation campaign does not reach the unprecedented levels seen during the 2016 election.  

Hickey, of the Justice Department’s national security division, said that the agency doesn’t often “expose and attribute” ongoing foreign influence operations partly to protect the investigations, methods and sources, and partly “to avoid even the appearance of partiality.”

Social media, technology companies

Social media and technology companies, widely criticized for their role in allowing Russian operatives to use their platforms during the 2016 election, have taken steps over the past year to crack down on misinformation.

In June, Twitter announced new measures to fight abuse and trolls, saying it is focused on “developing machine learning tools that identify and take action on networks of spammy or automated accounts automatically.”

In April, Facebook announced that it had taken down 135 Facebook and Instagram accounts and 138 Facebook pages linked to the Internet Research Agency, a Russian troll farm indicted in February for orchestrating Russia’s social media operations in 2016.  

The company did not say whether it had removed the pages and accounts based on information provided by the FBI.  

Monika Bickert, head of Facebook’s product policy and counterterrorism, told an audience at the Aspen Security Forum last month that the social network has moved to shield its users against fake information by deploying artificial intelligence tools that detect fake accounts and by instituting advertising transparency requirements.

Tom Burt, vice president for customer security and trust at Microsoft, speaking at the same event, disclosed that the company had worked with law enforcement earlier this year to foil a Russian attempt to hack the campaigns of three candidates running for office in the midterm elections.  

He did not identify the candidates by name but said they “were all people who, because of their positions, might have been interesting targets from an espionage standpoint, as well as an election disruption standpoint.”

Democratic Sen. Claire McCaskill of Missouri confirmed late last month that Russian hackers tried unsuccessfully to infiltrate her Senate computer network, raising questions about the extent to which Russia will try to interfere in the 2018 elections.

Wray stressed that the influence operations are not “an election cycle threat.”

“Our adversaries are trying to undermine our country on a persistent and regular basis, whether it’s election season or not,” he said.  

VR Transports Students Back to the Hiroshima Atomic Bomb Attack

Modern technology is transporting students back to the 20th century, to the exact moment during World War II when an atomic bomb was dropped on Hiroshima, Japan. It’s not time travel; with the help of virtual reality, students are able to relive the 1945 U.S. attack that devastated the Japanese city and left more than 140,000 dead. Faith Lapidus reports.

Facebook, Apple, YouTube Drop Alt-Right Conspiracy Outlet InfoWars

Several major technology companies announced Monday that they would be removing content from InfoWars, a far-right, conspiracy-peddling media outlet.

On Monday, Apple announced it had removed hundreds of podcasts produced by InfoWars from its iTunes and podcast apps.

Facebook said it had removed four pages belonging to InfoWars founder Alex Jones. And music-sharing app Spotify said it would be removing all InfoWars podcasts available on the site, following last week’s removal of some InfoWars content.

Jones has gained notoriety for spreading unsubstantiated conspiracy theories, including claiming that the terrorist attacks on the World Trade Center on 9/11 and the 2012 Sandy Hook Elementary School shooting in Connecticut were hoaxes perpetrated by the U.S. government.

Jones has also repeatedly used inflammatory language against transgender people and Muslims, which Facebook cited as one of the reasons it removed his content.

“We believe in giving people a voice, but we also want everyone using Facebook to feel safe,” the social media outlet said in a statement. “It’s why we have Community Standards and remove anything that violates them, including hate speech that attacks or dehumanizes others.”

In July, Facebook suspended Jones’s personal profile for what it called “bullying and hate speech.”

Apple said it removed all of the content from five of six InfoWars shows from its platforms. As of Monday, only one InfoWars podcast, named RealNews with David Knight, remained on iTunes.

“We have clear guidelines that creators and developers must follow to ensure we provide a safe environment for all of our users,” Apple said in a statement. “We believe in representing a wide range of views, so long as people are respectful to those with differing opinions.”

In July, Facebook and YouTube announced they had removed four of Jones’s videos from their sites. Two of the videos claimed without evidence that Muslims were taking over several European countries. Another compared the creators of a show about drag queens to satanists.

YouTube followed suit and banned Jones’s channel on Monday afternoon, claiming the account, which had over 2.4 million subscribers, violated the site’s guidelines on hate speech.

In recent weeks, Jones garnered increased attention as the parents of children killed in the Connecticut shooting sued him for defamation. While Jones said he now believes the shooting was not a hoax, he said his earlier claims were protected under U.S. free speech laws.

In July, Jones also appeared to threaten special counsel Robert Mueller, who is currently investigating U.S. President Donald Trump and his campaign for potential Russian influence.

“[Mueller is] a demon I will take down, or I’ll die trying,” Jones said, making a pistol motion with his hands. “You’re going to get it, or I’m going to die trying, bitch.”

While Jones’s beliefs have often been characterized as fringe, he has found some mainstream appeal. In December 2015, Trump, then a candidate, appeared on InfoWars via a satellite interview.

“Your reputation is amazing. I will not let you down,” Trump said to Jones.

Facebook Removes Alex Jones Pages for Hate, Bullying

Facebook says it has taken down four pages belonging to conspiracy theorist Alex Jones for violating its hate speech and bullying policies.

The social media giant said in a statement Monday that it also blocked Jones’ account for 30 days because he repeatedly posted content that broke its rules.

The company said it “unpublished” the four pages after receiving reports that they contained content “glorifying violence” and used “dehumanizing language” to describe Muslims, immigrants and transgender people.

Facebook is the latest tech company to take action against Jones, who has been facing a growing backlash on social media.

Last week, music streaming service Spotify removed some episodes of “The Alex Jones Show” podcast for breaching its hate content policy.

Apple iPhone Chip Supplier Says Virus Will Delay Shipments

A company that makes semiconductors for Apple iPhones says it is recovering from a virus outbreak but expects the incident to delay shipments and raise costs.

Taiwan Semiconductor Manufacturing Co. Ltd. said 80 percent of the fabrication tools affected by Friday’s virus had been recovered by Sunday. TSMC expects full recovery on Monday.

The company didn’t detail the impact on Apple or other customers. Apple Inc. did not immediately return a message seeking comment.

The semiconductor company blames the outbreak on a mistake during installation of software for a new tool, which was then connected to its computer network. It says confidential information was not compromised.

The company says the incident will cut third-quarter revenue by about 3 percent. But it’s confident it will get that back in the fourth quarter.

Palestinian Girls Will Pitch Their App to Silicon Valley 

Four Palestinian high school friends are heading to California this week to pitch their mobile app about fire prevention to Silicon Valley’s tech leaders, after winning a slot in the finals of a worldwide competition among more than 19,000 teenage girls.

For the 11th graders from the Israeli-occupied West Bank, the ticket of admission to the World Pitch Summit signals a particularly dramatic leap.

They come from middle class families that value education, but opportunities have been limited because of the omnipresent Israeli-Palestinian conflict, prevailing norms of patriarchy in their traditional society and typically underequipped schools with outdated teaching methods.

“We are excited to travel in a plane for the first time in our lives, meet new people and see a new world,” said team member Wasan al-Sayed, 17. “We are excited to be in the most prestigious IT community in the world, Silicon Valley, where we can meet interesting people and see how the new world works.”

Twelve finalists

Twelve teams made it to the finals of the “Technovation Challenge” in San Jose, California, presenting apps that tackle problems in their communities. The Palestinian teens compete in the senior division against teams from Egypt, the United States, Mexico, India and Spain, for scholarships of up to $15,000.

It’s a life-changing experience for al-Sayed and her teammates, Zubaida al-Sadder, Masa Halawa and Tamara Awaisa.

They are now determined to pursue careers in technology.

“Before this program, we had a vague idea about the future,” said al-Sayed, speaking at a computer lab at An Najah University in her native Nablus, the West Bank’s second largest city. “Now we have a clear idea. It helped us pick our path in life.”

The teens first heard about the competition a few months ago from an IT teacher at their school in a middle-class neighborhood in Nablus, where IT classes are a modest affair, held twice a week, with two students to a computer.

The girls, friends since 10th grade, each had a laptop at home, and worked with Yamama Shakaa, a local mentor provided by the competition organizers. The teens “did everything by themselves, with very few resources,” Shakaa said.

The team produced a virtual reality game, “Be a firefighter,” to teach fire prevention skills.

Blackouts and fires

The subject is particularly relevant in some parts of the Palestinian territories, such as the Gaza Strip, where a border blockade by Israel and Egypt, imposed after the takeover of the Islamic militant group Hamas in 2007, has led to hours-long daily power cuts and the widespread use of candles and other potential fire hazards.

The teens now hope to expand their app to include wildfire prevention. They will also present a business and marketing plan at the California pitching session.

After the competition, they will give the app to the Palestinian Education Ministry for use in schools.

“This prize has changed our lives,” al-Sayed said.

About the competition

The competition, now in its ninth year, is run by Iridescent, a global nonprofit offering opportunities to young people, especially girls, through technology. The group said 60 percent of the U.S. participants enroll in additional computer science courses after the competition, with 30 percent majoring in that field in college, well above the national rate among female U.S. college students. Two-thirds of international participants show an interest in technology-related courses, the group said.

Palestinian Education Minister Sabri Saidam counts on technology, along with a new emphasis on vocational training, to overhaul Palestinian schools, where many students still learn by rote in crowded classrooms.

Youth unemployment, particularly among university graduates, is a central problem across the Arab world, in part because of a demographic “youth bulge.” Last year, unemployment among Palestinian college graduates younger than 30 reached 56 percent, including 41 percent in the West Bank and 73 percent in the Gaza Strip, according to the Palestinian Central Bureau of Statistics.

Unemployment is particularly high among female university graduates, in part because young women are expected to marry and raise children, while young men are considered the main breadwinners. However, employers also complain that graduates studying outdated or irrelevant courses often lack the needed skills for employment.

Saidam said Palestinian schools have received 15,000 computers in the last couple of years. His ministry has also established 54 bookless “smart schools” for grades one to six, where students use laptops and learn by doing, including educational trips and community involvement.

Election Crackdown Runs Into Speed-tweeting Human ‘Bots’

Nina Tomasieski logs on to Twitter before the sun rises. Seated at her dining room table with a nearby TV constantly tuned to Fox News, the 70-year-old grandmother spends up to 14 hours a day tweeting the praises of President Trump and his political allies, particularly those on the ballot this fall, and deriding their opponents.

She’s part of a dedicated band of Trump supporters who tweet and retweet Keep America Great messages thousands of times a day.

“Time to walk away Dems and vote RED in the primaries,” she declared in one of her voluminous tweets, adding, “Say NO to socialism & hate.”

While her goal is simply to advance the agenda of a president she adores, she and her friends have been swept up in an expanded effort by Twitter and other social media companies to crack down on nefarious tactics used to meddle in the 2016 election.

And without meaning to, the tweeters have demonstrated the difficulty such crackdowns face — particularly when it comes to telling a political die-hard from a surreptitious computer robot.

Last week, Facebook said it had removed 32 fake accounts apparently created to manipulate U.S. politics — efforts that may be linked to Russia.

Twitter and other sites also have targeted automated or robot-like accounts known as bots, which authorities say were used to cloak efforts by foreign governments and political bad actors in the 2016 elections.

But the screening has repeatedly and erroneously flagged Tomasieski and users like her.

Their accounts have been suspended or frozen for “suspicious” behavior — apparently because of the frequency and relentlessness of their messages. When they started tweeting support for a conservative lawmaker in the GOP primary for Illinois governor this spring, news stories warned that right-wing “propaganda bots” were trying to influence the election.

“Almost all of us are considered a bot,” says Tomasieski, who lives in Tennessee but is tweeting for GOP candidates across the U.S.

Cynthia Smith has been locked out of her account and “shadow banned,” meaning tweets aren’t as visible to others, because of suspected “automated behavior.”

“I’m a gal in Southern California,” Smith said. “I am no bot.”

The actions have drawn criticism from conservatives, who have accused Twitter, Facebook and other companies of liberal bias and censorship. They also raise a question: Can the companies outsmart the ever-evolving tactics of U.S. adversaries if they can’t be sure who’s a robot and who’s Nina?

“It’s going to take a really long time, I think years, before Twitter and Facebook and other platforms are able to deal with a lot of these issues,” said Timothy Carone, who teaches technology at Notre Dame’s Mendoza College of Business.

The core problem is that people are coming up with new ways to use the platforms faster than the companies can manage them, he said.

Twitter did not respond to a request for comment. But the company has said it identified and challenged close to 10 million suspected bot or spam accounts in May, up from 3.2 million last September. It’s also trying to weed out “trolls,” or accounts that harass other users, pick fights or tweet material that’s considered inflammatory.

Twitter acknowledges that there will be some “false positives.”

“Our goal is to learn fast and make our processes and tools smarter,” Twitter executives said in a blog post earlier this year.

Tomasieski and her conservative friends use so-called Twitter “rooms” — which operate using the group messaging function — to amplify their voices.

She participates in about 10 rooms, each with 50 members who are invited in once they hit a certain number of followers. That number varies, but “newbies” might have around 3,000, Tomasieski says. Some have far more.

Everyone in the room tweets their own material and also retweets everyone else’s. So a tweet that Tomasieski sends may be seen by her roughly 51,000 followers, but then be retweeted by dozens more people, each of whom may have 50,000 or more followers.
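As a rough sketch of that amplification arithmetic (the only figure below taken from the article is the roughly 51,000 followers; the other numbers are illustrative assumptions):

```python
# Back-of-the-envelope reach estimate for one tweet amplified by a Twitter "room".
# Only the ~51,000 follower count comes from the article; the rest are assumptions.

author_followers = 51_000         # Tomasieski's approximate follower count
room_retweeters = 49              # other room members who retweet (assumed)
followers_per_retweeter = 50_000  # article: many members have 50,000 or more

# Upper bound on accounts that could see the tweet, ignoring audience overlap.
potential_reach = author_followers + room_retweeters * followers_per_retweeter
print(f"Potential reach (no overlap): {potential_reach:,}")  # 2,501,000
```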

She says she’s learned some tricks to avoid trouble with Twitter. She’s careful not to exceed limits of roughly 100 tweets or retweets an hour. She doesn’t use profanity and she tries to mix up her subjects to appear more human and less bot-like.

During a recent afternoon, Tomasieski retweeted messages criticizing immigrants in the U.S. illegally, Democratic socialists and the media. One noted an Associated Press story about an increase in the number of Muslims running for public office — news the user described as “alarming.”

Tomasieski says she loves to write. But most important is helping “my guy.”

“There is as much enthusiasm today as there was when Trump was elected. It’s very quiet, but it’s there. My job is to get them to the polls,” she said. “That’s rewarding. I go to bed feeling like I have accomplished something.”

New Era in Space: NASA Astronauts Fly Commercial Spacecraft

A new era in American spaceflight was unveiled Friday, with NASA presenting the flight crews that will carry out the first test flights and operational missions aboard commercial spacecraft to be launched from U.S. soil for the first time since the space shuttle’s retirement in 2011. The test flights of the modules, Boeing’s CST-100 Starliner and SpaceX’s Crew Dragon, are expected next year. VOA Correspondent Mariama Diallo reports.

US Objects to China’s Internet Restrictions

The U.S. “remains deeply concerned with China’s long-standing restrictions on freedom of expression online,” a State Department official said Thursday, reacting to Google’s reported plan to relaunch its search engine in China.

“We strongly object to all efforts by China to force U.S. companies to block or censor online content as a condition for market access,” the official said.

Google shut down its Chinese search engine in 2010, citing government attempts to “limit free speech on the web.” But a company whistle-blower who spoke to the online publication The Intercept said Google was in the advanced stages of launching a custom Android search app in China that will comply with the Communist Party’s censorship policies on human rights, democracy, free speech and religion. 

The Intercept cited internal Google documents and people familiar with the rollout. The publication said the project, code-named Dragonfly, has been in development since 2017. It said the project began to progress more quickly following a December meeting between Google CEO Sundar Pichai and a senior Chinese government official.

According to the documents obtained by The Intercept, Google said it would automatically filter websites blocked by China’s so-called Great Firewall. Banned websites would be removed from the first page of search results, with the disclaimer: “Some results may have been removed due to statutory requirements.”

Empty searches

The documents also say that Google’s app will “blacklist sensitive queries” by returning no results when people search for certain words or phrases.

“We provide a number of mobile apps in China … [to] help Chinese developers and have made significant investments in Chinese companies like JD.com. But we don’t comment on speculation about future plans,” a Google spokesman told VOA in a statement in response to the alleged plans.

China has 772 million internet users — more than any other country — and hundreds of millions of potential users who are not yet connected to the internet.

China’s top internet regulator, the Cyberspace Administration of China, has not commented on the plans.

U.S. Senator Marco Rubio, a Florida Republican and former presidential candidate, posted on Twitter that Google’s reported plans to set up “a censored search engine” in China were “very disturbing” and could help China “suppress the truth.”

VOA’s Nike Ching at the State Department contributed to this report.

Apple is 1st Public US Company to be Valued at $1 Trillion

Apple made history Thursday when it became the first publicly listed U.S. company to be valued at $1 trillion.

The tech giant’s share price climbed well over 2 percent in mid-session trading, boosting it about 9 percent higher since Tuesday, when it announced better-than-expected second-quarter earnings and a buyback of $20 billion worth of its own shares.

The Silicon Valley company’s stock has skyrocketed more than 50,000 percent since it went public in 1980, greatly exceeding the S&P 500’s impressive 2,000 percent gain during the same period.

Apple’s success was fueled in large part by its iPhone, which transformed it from a niche player in the burgeoning personal computer sector into a global technological powerhouse.

The company was co-founded by the late Steve Jobs, a product innovator who helped prevent the company’s collapse in the late 1990s.

As the company’s market value climbed over the decades, it revolutionized how consumers communicate with each other and how companies conduct business on a daily basis.

Congress Passes Bill Forcing Tech Companies To Disclose Foreign Software Probes

The U.S. Congress is sending President Donald Trump legislation that would force technology companies to disclose if they allowed countries like China and Russia to examine the inner workings of software sold to the U.S. military.       

The legislation, part of the Pentagon’s spending bill, was drafted after a Reuters investigation last year found software makers allowed a Russian defense agency to hunt for vulnerabilities in software used by some agencies of the U.S. government, including the Pentagon and intelligence services.      

The final version of the bill was approved by the Senate in an 87-10 vote on Wednesday after passing the House last week. The spending bill is expected to be signed into law by Trump.

Security experts said allowing Russian authorities to probe the internal workings of software, known as source code, could help Moscow discover vulnerabilities they could exploit to more easily attack U.S. government systems.      

The new rules were drafted by Democratic Senator Jeanne Shaheen of New Hampshire.

“This disclosure mandate is the first of its kind, and is necessary to close a critical security gap in our federal acquisition process,” Shaheen said in an emailed statement.

“The Department of Defense and other federal agencies must be aware of foreign source code exposure and other risky business practices that can make our national security systems vulnerable to adversaries,” she said.   

Disclosure + database   

The law would force U.S. and foreign technology companies to reveal to the Pentagon if they allowed cyber adversaries, like China or Russia, to probe software sold to the U.S. military.      

Companies would be required to address any security risks posed by the foreign source code reviews to the satisfaction of the Pentagon, or lose the contract.      

The legislation also creates a database, searchable by other government agencies, of which software was examined by foreign states that the Pentagon considers a cyber security risk.      

It makes the database available to public records requests, an unusual step for a system likely to include proprietary company secrets.       

Tommy Ross, a senior director for policy at the industry group The Software Alliance, said software companies had concerns that such legislation could force companies to choose between selling to the U.S. and foreign markets.

“We are seeing a worrying trend globally where companies are looking at cyber threats and deciding the best way to mitigate risk is to hunker down and close down to the outside world,” Ross told Reuters last week.

A Pentagon spokeswoman declined to comment on the legislation.      

Source code revealed

In order to sell in the Russian market, technology companies including Hewlett Packard Enterprise Co, SAP SE and McAfee have allowed a Russian defense agency to scour software source code for vulnerabilities, the Reuters investigation found last year.      

In many cases, Reuters found that the software companies had not informed U.S. agencies that Russian authorities had been allowed to conduct the source code reviews. In most cases, the U.S. military does not require comparable source code reviews before it buys software, procurement experts have told Reuters.

The companies had previously said the source code reviews were conducted by the Russians in company-controlled facilities, where the reviewer could not copy or alter the software. The companies said those steps ensured the process did not jeopardize the safety of their products.      

McAfee announced last year that it no longer allows government source code reviews. Hewlett Packard Enterprise has said none of its current software has gone through the process.      

SAP did not respond to requests for comment on the legislation. HPE and McAfee spokespeople declined further comment.    

Dispute Over 3D-Printed Guns Raises Many Legal Issues

A little-known dispute over 3D-printed guns has morphed into a national legal debate in the last week, drawing attention to a technology that seems like a bit of sci-fi fantasy and, to gun-control advocates, a dangerous way for criminals to get their hands on firearms that are easy to conceal and tough to detect.

The gun industry calls the outcry an overreaction that preys on unwarranted fears about a firearm that can barely shoot a round or two without disintegrating.

It also raises a host of constitutional questions involving First Amendment protections for free speech and Second Amendment rights to own guns.

Here are some questions and answers about the debate.

Q. What is behind the dispute?

A. Cody Wilson, the founder of Texas-based Defense Distributed, first posted downloadable blueprints in 2013 for a handgun called the Liberator that could be made using a 3D printer. Within days the plans had been downloaded about 100,000 times before the State Department ordered him to stop, contending the postings violated federal export laws since some of the blueprints were saved by people outside the United States.

The dispute between Wilson and the federal government went on for years until this past June when they reached a settlement that paved the way for Wilson to resume posting the designs.

The State Department decision came amid an obscure administrative change — begun under the Obama administration — in how the weapons are regulated and administered. Military grade weapons remain under the purview of the State Department, while commercially available firearms fall under the Commerce Department. The settlement with Wilson determined that 3D-printed firearms are akin to more traditional firearms that aren’t subject to State Department regulations.

Wilson resumed sharing his blueprints for the gun the day the settlement went into effect last week.

Q. Why does Wilson want the authority to post the designs on his website?

A. Wilson calls it a First Amendment issue. He believes the First Amendment gives him a constitutional right to disseminate the code to make a gun with a 3D printer.

“This is a very, very, very easy First Amendment question that I think people might be hesitant to accept because it involves guns and people don’t like guns,” said his lawyer, Josh Blackman.

And Wilson has a strong legal claim that distribution of the information is different than actually making an all-plastic firearm.

While it is a violation of the federal Undetectable Firearms Act to make, sell or possess a firearm that can’t be detected by magnetometers or metal detectors, what Wilson is doing is simply providing the information on how to make such a firearm.

“What Defense Distributed was doing was not making and then shipping the weapons overseas,” said Chuck James, a former federal prosecutor who is now a private lawyer with the Washington, D.C.-area firm of Williams Mullen. “They were making the data available on the web where it would be available to someone overseas.”

Q. What kind of gun designs are available on the website?

A. Defense Distributed shows a variety of designs. The code for the 3D-printed gun is for what Wilson calls the Liberator, which gets its name from a pistol American forces used during World War II.

His design includes a metal firing pin and a metal block. His site also includes blueprints to make various AR-platform long guns and some other handguns using more traditional means and materials.

Q. Are 3D-printed guns legal?

A. In 1988, the U.S. enacted the Undetectable Firearms Act, making it illegal to manufacture, sell or possess a firearm that couldn’t be detected by a metal detector. That law has been renewed several times by Congress and remains in effect.

If 3D-printed guns contain enough metal to be flagged by a metal detector, they are considered legal under U.S. law.

Gun-control advocates argue that the risks are too great to allow 3D-printed guns because even if they’re designed to include metal, it’s too easy for someone to not include those pieces or to remove them to skirt detection.

“It’s an absurdity. You can take the piece of metal out and put it back in at your own whims and you can take it out and walk through a metal detector undetected,” said Jonas Oransky, legal director for Everytown for Gun Safety.

Q. How well do 3D-printed guns work?

A. Gun experts and enthusiasts recoil at the suggestion that a 3D-printed gun is a true threat, calling the firearms mere novelties.

Unlike traditional firearms that can fire thousands of rounds in their lifetime, 3D-printed guns are notorious for usually lasting only a few rounds before they fall apart. They don’t have magazines that allow the usual nine or 15 rounds to be carried; instead, they usually hold a bullet or two and then must be manually loaded afterward. And they’re not usually very accurate either.

A video posted of a test by the federal Bureau of Alcohol, Tobacco, Firearms and Explosives in 2013 showed one of the guns produced from Wilson’s design — the Liberator — disintegrating into pieces after a single round was fired.

“People have got this Star Trek view” of the guns being futuristic marvels, said Chris Knox, communications director for The Firearms Coalition, a gun-rights group. “We’re not talking about exotic technology.”

Others are quick to point out that normal guns are readily available in the U.S. with little regulation, making 3D-printed guns a major hassle compared with regular weapons.

Q. What’s the status of the debate?

A. A federal judge on Tuesday issued a temporary restraining order blocking Wilson from continuing to post the designs on his site. A hearing is to be held in that case next week.

In the meantime, President Donald Trump criticized the Department of Justice for advising the State Department to reach a settlement with Wilson without first consulting him.

Wilson’s website currently displays a banner asking people to help “to uncensor the site.” Clicking a link directs the person to a page to pay membership dues ranging from $5 a month to $1,000 for a lifetime.

Google Mum on Chinese Search Engine Reports

Google declined Wednesday to confirm reports that it plans to launch a censored version of its search engine in China, where its main search platform was previously blocked, along with its YouTube video platform.

“We provide a number of mobile apps in China … [to] help Chinese developers, and have made significant investments in Chinese companies like JD.com. But we don’t comment on speculation about future plans,” a Google spokesperson told VOA in a statement.

The first report on the possible rollout came from The Intercept, an online news publication, which cited internal Google documents and people familiar with the purported plan.

The Intercept said the project, code-named Dragonfly, has been in development since last year. It said the project began to progress more quickly following a December meeting between Google CEO Sundar Pichai and a senior Chinese government official.

Search terms regarding democracy, human rights and peaceful protests will be among those blacklisted in the new search engine app, the report said. It added the search engine had already been demonstrated to Chinese government officials.

The report said a final version could be introduced within six to nine months, pending approval of Chinese officials.

China’s top internet regulator, the Cyberspace Administration of China, has not commented on the reported plans.

U.S. Senator Marco Rubio, a Florida Republican and former U.S. presidential candidate, posted on Twitter that Google should be given the “benefit of the doubt” but that the reported plans were still “very disturbing.”


Social Media Bosses to Face US Lawmakers in September

Top executives from Facebook, Twitter and Google will face lawmakers on Capitol Hill next month to explain what the social media giants are doing to combat foreign information operations.

Senate Intelligence Committee Chairman Richard Burr, a Republican, and the panel’s ranking Democrat, Sen. Mark Warner, made the announcement Wednesday at the start of a hearing on how Russia and other countries and actors have been manipulating social media.

The goal of the September 5 hearing will be “to hear the plans they have in place, to press them to do more, and to work together to address this challenge,” Warner said.

“They can do better to protect our democracy,” he added. “I’m concerned that even after 18 months of study we are still only scratching the surface when it comes to Russia’s information warfare.”

Burr called the foreign information operations, like those being carried out by Russia, “an intolerable assault on the democratic foundation this republic was built on.”

“It’s also important that the American people know that these activities neither began nor ended with the 2016 elections,” Burr said. He warned that activities like those identified recently by Facebook have gone beyond social media, “creating events on our streets with real Americans unknowingly participating.”

Facebook Tuesday announced it had shut down 32 Facebook and Instagram accounts because they were “involved in coordinated inauthentic behavior,” much of it targeting left-wing American political groups.

Facebook said it was too early to say whether the accounts were being run by Russia, but an analysis by the Atlantic Council’s Digital Forensic Research Lab found signs pointing to “the Russian-speaking world.”

In a blog post, the lab noted similarities to activity by Russia’s Internet Research Agency (IRA), including “language patterns that indicate non-native English and consistent mistranslation, as well as an overwhelming focus on polarizing issues at the top of any given news cycle with content that remained emotive rather than fact-based.”

Facebook’s announcement followed a warning issued by Microsoft less than two weeks ago, which said hackers had targeted the campaigns of at least three congressional candidates in the upcoming election.

Microsoft said the phishing attacks, similar to ones employed by Russian-linked operatives to target the Republican and Democratic campaigns during the 2016 election, were thwarted.

Late last week, The Daily Beast reported one of the targets of those attacks was Missouri’s Democratic senator, Claire McCaskill, who has been highly critical of Russia.

During Wednesday’s Senate hearing, a number of senators cautioned that the issue is much bigger than the 2016 or 2018 elections.

“It is about the integrity of our society,” said Sen. Burr. “This is about national security.”

“It would be a mistake to think this is just about elections,” added Republican Sen. John Cornyn, noting similar techniques could be used to destroy reputations or tank stock prices.

Experts say some of that already is happening.

“On the state actor front we have seen evidence of campaigns targeting energy and agriculture,” said Renee DiResta, director of Research at New Knowledge.

“In agriculture, that’s taken the form of spreading fear about GMOs [genetically modified organisms],” she said.

“There’s a commercial dimension to this that’s underreported. There’s a lot more going on in the commercial space,” Graphika Founder and CEO John Kelly told lawmakers.

“Sometimes they’re tied, these political attacks and attack on corporations where corporations will be basically punished with falsely amplified boycott campaigns, and similar measures for doing something, which is politically not what Russia wants to see.”

Judge Blocks Plans to Post Gun Blueprints on Internet

A U.S. federal judge has blocked a Texas man from putting plans on the internet showing people how to make their own plastic guns right in their homes as President Donald Trump questioned whether the action should have been approved by his administration to begin with. It’s a controversy drawing comments from states, the Senate, and President Trump himself. VOA’s Bill Gallo reports.

Robotic Hand Can Juggle Cube — With Lots of Training

How long does it take a robotic hand to learn to juggle a cube?

About 100 years, give or take.

That’s how much virtual computing time it took researchers at OpenAI, the nonprofit artificial intelligence lab funded by Elon Musk and others, to train its disembodied hand. The team paid Google $3,500 to run its software on thousands of computers simultaneously, crunching the actual time to 48 hours. After training the robot in a virtual environment, the team put it to a test in the real world.

The hand, called Dactyl, learned to move itself, the team of two dozen researchers disclosed this week. Its job is simply to adjust the cube so that one of its letters — “O,” “P,” “E,” “N,” “A” or “I” — faces upward to match a random selection.

Ken Goldberg, a University of California, Berkeley robotics professor who isn’t affiliated with the project, said OpenAI’s achievement is a big deal because it demonstrates how robots trained in a virtual environment can operate in the real world. His lab is trying something similar with a robot called Dex-Net, though its hand is simpler and the objects it manipulates are more complex.

“The key is the idea that you can make so much progress in simulation,” he said. “This is a plausible path forward, when doing physical experiments is very hard.”

Dactyl’s real-world fingers are tracked by infrared dots and cameras. In training, every simulated movement that brought the cube closer to the goal gave Dactyl a small reward. Dropping the cube caused it to feel a penalty 20 times as big.

The process is called reinforcement learning. The robot software repeats the attempts millions of times in a simulated environment, trying over and over to get the highest reward. OpenAI used roughly the same algorithm it used to beat human players in a video game, Dota 2.
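The mechanics of that trial-and-error loop can be sketched in a few lines of code. The snippet below is an illustrative toy, not OpenAI’s actual simulator or training algorithm; the environment, policy and numbers are placeholder assumptions that follow only the reward idea the article describes (a small reward for progress, a penalty 20 times larger for dropping the cube).

```python
import random

# Toy sketch of the reinforcement-learning loop described above. The environment
# and policy are invented placeholders, not OpenAI's code; only the reward idea
# (small reward for progress, a 20x penalty for dropping the cube) follows the text.

STEP_REWARD = 1.0
DROP_PENALTY = -20.0 * STEP_REWARD  # dropping the cube costs 20 times as much

class ToyCubeEnv:
    """Stand-in simulator: tracks only the 'distance' to the goal orientation."""
    def reset(self):
        self.distance = 10
        return self.distance

    def step(self, action):
        self.distance -= action                # positive action means progress
        dropped = random.random() < 0.02       # occasional drop, for illustration
        done = dropped or self.distance <= 0
        return self.distance, action, dropped, done

def reward(progress, dropped):
    if dropped:
        return DROP_PENALTY
    return STEP_REWARD if progress > 0 else 0.0

def run_episode(env, policy):
    """One simulated attempt; training repeats this millions of times."""
    state, total, done = env.reset(), 0.0, False
    while not done:
        action = policy(state)
        state, progress, dropped, done = env.step(action)
        total += reward(progress, dropped)
    return total

if __name__ == "__main__":
    random_policy = lambda state: random.choice([0, 1])   # placeholder policy
    print(run_episode(ToyCubeEnv(), random_policy))
```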

In real life, a team of researchers worked about a year to get the mechanical hand to this point.

Why?

For one, the hand in a simulated environment doesn’t understand friction. So even though its real fingers are rubbery, Dactyl lacks human understanding about the best grips.

Researchers injected their simulated environment with changes to gravity, hand angle and other variables so the software learns to operate in a way that is adaptable. That helped narrow the gap between real-world results and simulated ones, which were much better.
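That approach is commonly known as domain randomization: every simulated attempt starts from slightly different physics. A minimal sketch of the idea, with hypothetical parameter names and ranges rather than the values OpenAI actually used:

```python
import random

# Domain randomization sketch: before each simulated attempt, redraw the physics
# parameters so the learned behavior cannot overfit to one exact simulator.
# The parameter names and ranges here are illustrative assumptions only.

def sample_physics():
    return {
        "gravity":        random.uniform(9.0, 10.6),    # m/s^2, around Earth's 9.81
        "hand_angle_deg": random.uniform(-15.0, 15.0),  # tilt of the hand
        "cube_friction":  random.uniform(0.5, 1.5),     # scale on a nominal value
        "cube_size_cm":   random.uniform(5.0, 6.0),
        "actuator_delay": random.uniform(0.0, 0.03),    # seconds
    }

# Each training episode would configure the simulator with a fresh draw:
for episode in range(3):   # millions of episodes in practice
    print(episode, sample_physics())
```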

The variations helped the hand succeed in putting the right letter face up more than a dozen times in a row before dropping the cube. In simulation, the hand typically succeeded 50 times in a row before the test was stopped.

OpenAI’s goal is to develop artificial general intelligence, or machines that think and learn like humans, in a way that is safe for people and widely distributed.

Musk has warned that if AI systems are developed only by for-profit companies or powerful governments, they could one day exceed human smarts and be more dangerous than nuclear war with North Korea.

Facebook Removes Accounts ‘Involved in Coordinated Inauthentic Behavior’

Efforts to influence U.S. voters ahead of the 2018 midterm elections in November appear to be well underway, though private companies and government officials are hesitant to say who, exactly, is behind the recently discovered campaigns.

Facebook announced Tuesday it had shut down 32 Facebook and Instagram accounts because they were “involved in coordinated inauthentic behavior.”

Specifically, the social media company said it took down eight Facebook pages, 17 Facebook profiles, and seven Instagram accounts, the oldest of which were created in March 2017.

Facebook said the entities behind the accounts ran some 150 ads for about $11,000 on Facebook and Instagram, paid for with U.S. and Canadian currency.

“We’re still in the very early stages of our investigation and don’t have all the facts — including who may be behind this,” Facebook said in a blog post. “It’s clear that whoever set up these accounts went to much greater lengths to obscure their true identities than the Russian-based Internet Research Agency (IRA) has in the past.”

Effort to spark confrontations

At least 290,000 accounts followed the fake pages, most of which appeared to target left-wing American communities in an effort to spark confrontations with the far right, according to an analysis done by the Atlantic Council’s Digital Forensic Research Lab.

“They appear to have constituted an attempt by an external actor — possibly, though not certainly, in the Russian-speaking world,” the Digital Forensic Research Lab said in its own post.

It said similarities to activity by Russia’s IRA included “language patterns that indicate non-native English and consistent mistranslation, as well as an overwhelming focus on polarizing issues at the top of any given news cycle with content that remained emotive rather than fact-based.”

Facebook’s announcement came the same day top U.S. officials warned the country is now in “a crisis mode.”

“Our democracy itself is in the crosshairs,” Homeland Security Secretary Kirstjen Nielsen said at a National Cybersecurity Summit, citing Russian interference in the 2016 presidential elections.

“It is unacceptable, and it will not be tolerated,” Nielsen said. “The United States possesses a wide range of response options — some of them seen, others unseen — and we will no longer hesitate to use them to hold foreign adversaries accountable.”

Homeland Security officials said they had been in touch with Facebook about the fake accounts and applauded the move to take them down. The White House also praised Facebook’s actions.

“We applaud efforts by our private sector partners to combat an array of threats that occur in cyberspace, including malign influence,” NSC spokesman Garrett Marquis told VOA.

Nielsen, who did not comment on the Facebook announcement directly, also said officials were “dramatically ramping up” efforts to protect U.S. election systems with the help of a new Election Task Force.

She also announced the launch of a National Risk Management Center to make it easier for the government to work with private sector companies to counter threats in cyberspace.

U.S. President Donald Trump, who has at times cast doubt on findings by the U.S. intelligence community regarding Russian interference in the 2016 election, chaired a meeting of his National Security Council on election security on Friday, with the White House promising continued support to safeguard the country’s election systems.

Vice President Mike Pence, speaking Tuesday at a Homeland Security-sponsored summit, echoed that, saying, “Any attempt to interfere in our elections is an affront to our democracy, and it will not be allowed.”

Pence assured the audience that the White House did not doubt Russia’s attempts to influence U.S. elections, saying, “Gone are the days when America allows our adversaries to cyberattack us with impunity.”

“We’ve already done more than any administration in American history to preserve the integrity of the ballot box,” he added. “The American people demand and deserve the strongest possible defense, and we will give it to them.”

Hackers targeted congressional campaigns

Less than two weeks ago, Microsoft said hackers had targeted the campaigns of at least three congressional candidates in the upcoming election.

Tom Burt, Microsoft’s vice president for customer security and trust, refused to attribute the attacks, but said the hackers used tactics similar to those used by Russian operatives to target the Republican and Democratic parties during their presidential nominating conventions in 2016.

Late last week, The Daily Beast reported one of the targets of the attack was Missouri Democratic senator Claire McCaskill, who has been highly critical of Russia and is facing a tough re-election campaign.

Until recently, both U.S. government and private sector officials had said they had not been seeing the same pace of attacks or influence campaigns that they saw in the run-up to the 2016 election.

“I think we’re not seeing that same conduct,” Monika Bickert, head of Facebook’s product policy and counterterrorism, said during an appearance earlier this month at the Aspen Security Forum. “But we are watching for that activity.”

Still, many officials and analysts said it was likely just a matter of time before Russia would seek to strike again.

“I think we have been clear across the entire administration that even though we aren’t seeing this level of activity directed at elections, we continue to see Russian information operations directed at undermining our democracy,” Homeland Security undersecretary Chris Krebs said.

Facebook said it was sharing what it knows because of a connection between the “bad actors” behind the Facebook and Instagram pages and some protests that are planned next week in Washington, D.C.

Facebook also canceled an event posted by one of the accounts — a page called “Resisters” — calling for a counterprotest to a “Unite the Right” event scheduled for August in Washington, D.C.

U.S. lawmakers’ reactions

Key U.S. lawmakers applauded Facebook’s actions Tuesday, though they warned more still needs to be done.

“The goal of these operations is to sow discord, distrust and division in an attempt to undermine public faith in our institutions and our political system,” Sen. Richard Burr, chair of the Senate Intelligence Committee, said in a statement. “The Russians want a weak America.”

“Today’s announcement from Facebook demonstrates what we’ve long feared — that malicious foreign actors bearing the hallmarks of previously identified Russian influence campaigns continue to abuse and weaponize social media platforms to influence the U.S. electorate,” Rep. Adam Schiff, the top Democrat on the House Intelligence Committee, said in a statement.

“It is clear that much more work needs to be done before the midterm elections to harden our defenses, because foreign bad actors are using the exact same playbook they used in 2016,” Schiff added.

With Drones and Satellites, India Gets to Know its Slums

Satellites and drones are driving efforts by Indian states to map informal settlements in order to speed up the process of delivering services and land titles, officials said.

The eastern state of Odisha aims to give titles to 200,000 households in urban slums and those on the outskirts of cities by the end of the year.

Officials used drones to map the settlements.

“What may have taken us years to do, we have done in a few months,” G. Mathi Vathanan, the state housing department commissioner, told the Thomson Reuters Foundation last week.

Land records across the country date back to the British colonial era, and most holdings have uncertain ownership, leading to fraud and lengthy disputes that often end in court.

Officials in Mumbai, where about 60 percent of the population lives in informal settlements, are also mapping slums with drones. Maharashtra state, where the city is located, is launching a similar exercise for rural land holdings.

In the southern city of Bengaluru, a seven-year study that recently concluded used satellite imaging and machine learning.

The study recorded about 2,000 informal settlements, compared with fewer than 600 in government records.

“Understanding human settlement patterns in rapidly urbanizing cities is important because of the stress on civic resources and public utilities,” said Nikhil Kaza, an associate professor at the University of North Carolina.

“Geospatial analysis can help identify stress zones, and allow civic authorities to focus their efforts in localized areas,” said Kaza, who analyzed the Bengaluru data.

About a third of the world’s urban population lives in informal settlements, according to United Nations data.

These settlements may account for 30 percent to 60 percent of housing in cities, yet they are generally undercounted, resulting in a lack of essential services, which can exacerbate poverty.

Identifying and monitoring settlements with traditional approaches such as door-to-door surveys is costly and time consuming. As technology gets cheaper, officials from Nairobi to Mumbai are using satellite images and drones instead.

About 65 million people live in India’s slums, according to census data, which activists say is a low estimate.

Lack of data can result in tenure insecurity, as only residents of “notified” slums – or those that are formally recognized – can receive property titles.

Lack of data also leads to poor policy because slums are “not homogenous,” said Anirudh Krishna, a professor at Duke University who led the Bengaluru study.

Some slums “are more likely to need water and sanitation facilities, while better off slums may require skills and entrepreneurship interventions,” he said.

“Lack of information on the nature and diversity of informal settlements is an important limitation in developing appropriate policies aimed at improving the lives of the urban poor.”

NASA Marks 60 Years Since Legal Inception

America’s dream of space exploration took its first official step 60 years ago Sunday when President Dwight Eisenhower signed a law authorizing the formation of NASA – the National Aeronautics and Space Administration.

Although humans had been staring at the stars and wondering since they lived in caves, it took the Cold War to fire them into space.

The world was stunned when the Soviet Union on October 4, 1957, launched Sputnik — the first man-made object to orbit the Earth.

The United States was humiliated at being caught short — not just technologically, but militarily.

Eisenhower ordered government scientists to not only match the Soviets in space, but beat them.

NASA and its various projects — Mercury, Gemini and Apollo — became part of the language.

Just 11 years after Eisenhower authorized NASA, American astronaut Neil Armstrong walked on the moon. Six years later, an Apollo spacecraft linked with a Soviet Soyuz in orbit, turning rivalry into friendship and cooperation.

NASA followed that triumph with the space shuttle, Mars landers and contributions to the International Space Station. A manned mission to Mars is part of NASA’s future plans.

Last month, President Donald Trump called for the formation of a “space force” to be the sixth U.S. military branch.

NASA officially celebrates its 60th anniversary on October 1 – the day the agency formally opened for business.