Indian Agency Denies Reported Security Lapse in ID Card Project

The semi-government agency behind India’s national identity card project on Saturday denied a report by news website ZDNet that the program has been hit by another security lapse that allows access to private information.

ZDNet reported that a data leak on a system run by a state-owned utility company, which it did not name, could allow access to private information of holders of the biometric “Aadhaar” ID cards, exposing their names, their unique 12-digit identity numbers and their bank details.

But the Unique Identification Authority of India (UIDAI), which runs the Aadhaar program, said “there is no truth in this story” and that it was “contemplating legal action against ZDNet.”

ZDNet could not immediately be contacted for comment on the UIDAI’s response.

“There has been absolutely no breach of UIDAI’s Aadhaar database. Aadhaar remains safe and secure,” the agency said in a statement late Saturday.

Even if the claim in the story were taken as true, it would raise security concerns about the database of the utility company and would have “nothing to do with the security of UIDAI’s Aadhaar database,” it said.

More than 1 billion users

ZDNet had reported that even though the security lapse was flagged to some government agencies over a period of time, it had yet to be fixed. It said it was withholding the name of the utility and other details.

Karan Saini, a New Delhi security researcher, said that anyone with an Aadhaar number was affected.

“This is a security lapse. You don’t have to be a consumer to access these details. You just need the Uniform Resource Locator where the Application Programming Interface is located. These can be found in less than 20 minutes,” Saini told Reuters.

In recent months, researchers and journalists who have identified loopholes in the identity project have said they were slapped with criminal cases or harassed by government agencies because of their work.

Aadhaar, a biometric ID program with more than 1.1 billion registered users, is the world’s biggest biometric database. But it has been facing increased scrutiny over privacy concerns following several instances of breaches and misuse.

Last Thursday, the CEO of UIDAI said the biometric data attached to each Aadhaar was safe from hacking because the storage facility was not connected to the internet.

“Each Aadhaar biometric is encrypted by a 2,048-key combination and to decode it, the best and fastest computer of our era will take the age of the universe just to hack into one card’s biometric details,” Ajay Bhushan Pandey said.

UK Watchdog Evaluates Evidence From Cambridge Analytica

Britain’s information regulator said Saturday that it was assessing evidence gathered from a raid on the office of data mining firm Cambridge Analytica, part of an investigation into alleged misuse of personal information by political campaigns and social media companies like Facebook.

More than a dozen investigators from the Information Commissioner’s Office entered the company’s central London office late Friday, shortly after a High Court judge granted a warrant. The investigators were seen leaving the premises early Saturday after spending about seven hours searching the office.

The regulator said it would “consider the evidence before deciding the next steps and coming to any conclusions.”

“This is one part of a larger investigation by the ICO into the use of personal data and analytics by political campaigns, parties, social media companies and other commercial actors,” it said.

Authorities in Britain as well as the U.S. are investigating Cambridge Analytica over allegations the firm improperly obtained data from 50 million Facebook users and used it to manipulate elections, including the 2016 White House race and the 2016 Brexit vote in Britain.

Both Cambridge Analytica and Facebook deny wrongdoing. 

Chief executive suspended

The data firm suspended its CEO, Alexander Nix, this week after Britain’s Channel 4 News broadcast footage that appeared to show Nix suggesting tactics like entrapment or bribery that his company could use to discredit politicians. The footage also showed Nix saying Cambridge Analytica played a major role in securing Donald Trump’s victory in the 2016 U.S. presidential election.

Cambridge Analytica’s acting chief executive, Alexander Tayler, said Friday that he was sorry that SCL Elections, an affiliate of his company, “licensed Facebook data and derivatives from a research company [Global Science Research] that had not received consent from most respondents” in 2014.

“The company believed that the data had been obtained in line with Facebook’s terms of service and data protection laws,” Tayler said.

His statement said the data were deleted in 2015 at Facebook’s request, and he denied that any of the Facebook data that Cambridge Analytica obtained were used in the work it did on the 2016 U.S. election.

Why Is Austin an Attractive Hub for Many Tech Companies?

Austin, Texas, is not California’s Silicon Valley technology corridor. But companies from Silicon Valley and other major U.S. hubs are taking notice of Austin’s growing tech scene. Austin’s lower cost of living and doing business, combined with its smaller size, are just a few reasons that people are attracted to the area. VOA’s Elizabeth Lee explains other reasons that tech companies are opening up shop there.

Facebook CEO Mark Zuckerberg Sets Course for Popular Social Media Site

Now that Facebook CEO Mark Zuckerberg has spoken publicly about the firm’s data controversy, the chief question remains whether the changes he outlined will be enough to restore the public’s trust in the social media giant.

In a series of media interviews this week, Zuckerberg went into full damage-control mode over how the company handled user data after it discovered in 2015 that 50 million users’ data had been shared with Cambridge Analytica, a consultancy that advises political campaigns, in violation of the company’s rules.

He apologized. He called the recent controversy “a major breach of trust.”

What now?

Congressional leaders have already called on Zuckerberg to testify in Congress — something that Zuckerberg appeared willing to do, according to the interviews, if he was “the right person.”

Some Facebook critics argue the firm, which relies on advertising revenue, isn’t able or willing to curtail practices that may improve users’ privacy but potentially hurt its bottom line. The company needs some sort of regulatory oversight, they say, or new laws about users’ personal data.

But for now, Zuckerberg outlined a series of measures that would limit the amount of data collected on users, something that many privacy advocates have argued for. The firm’s revenue model, he said, is here to stay.

“I don’t think the ad model is going to go away because I think fundamentally, it’s important to have a service like this that everyone in the world can use, and the only way to do that is to have it be very cheap or free,” Zuckerberg told the New York Times.

Going back to 2014

Facebook plans to turn the clock back to 2014, before it changed its rules to stop developers from tapping into users’ friends’ data.

With the help of forensic auditors, the company plans to investigate all “large apps” that scooped up data then — “thousands,” by Zuckerberg’s estimate.

That includes users whose data was gathered by a researcher and given to Cambridge Analytica; Facebook plans to inform those affected. Cambridge Analytica has denied that it improperly used user data.

If a developer doesn’t want to comply with Facebook’s audit, Facebook will ban it from the social network, Zuckerberg said.

“Even if you solve the problem going forward, there’s still this issue of: Are there other Cambridge Analyticas out there?” Zuckerberg told the Times. “We also need to make sure we get that under control.”

Remove access to data

In addition, the company plans to remove a developer’s access to a person’s data if someone hasn’t used the developer’s app in three months. And the company plans to reduce the amount of information collected when users sign in.

Finally, the company says it plans to make it easier for users to see who has access to their data and to revoke those permissions. The moves are intended to address a long-standing complaint from critics: that Facebook enables the collection of more data on users than is needed.

Feeling ‘uncomfortable’

Zuckerberg told Recode that Facebook, with more than two billion users, has become so big and important in the lives of many around the world that he doesn’t always feel comfortable making blanket decisions.

“I feel fundamentally uncomfortable sitting here in California at an office, making content policy decisions for people around the world,” he said. “Things like where is the line on hate speech?”

He has to make the decisions, he said, because he runs Facebook.

“But I’d rather not.”

Black Identity, Technology in US Celebrated at Afrotectopia Fest

Being black and working in the tech industry can be an isolating experience.

New York nonprofit Ascend Leadership analyzed the hiring data of hundreds of San Francisco Bay-area tech companies from 2007 to 2015 and issued a report last year detailing the lack of diversity in tech.

Based on data from the U.S. Equal Employment Opportunity Commission (EEOC), Ascend found that the black tech professional workforce declined from 2.5 percent in 2007 to 1.9 percent in 2015. The outlook was even bleaker at the top. Despite 43 percent growth in the number of black executives from 2007 to 2015, blacks accounted for 1.1 percent of the total number of tech executives in 2015.

“You’re one in a sea full of people that just don’t look like you,” said Ari Melenciano, a graduate student in the Interactive Telecommunications Program at New York University. Melenciano decided to do something about it and created Afrotectopia.

Recently held at NYU, the inaugural 2-day festival brought together black technologists, designers and artists to discuss their work and the challenges of navigating the mostly white world of technology and new media.

“It’s really important for us to be able to see ourselves and build this community of people that actually look like us and are doing amazing things,” Melenciano said.

Glenn Cantave, founder and CEO of performance art coalition Movers and Shakers NYC, was on hand to demonstrate the group’s use of augmented reality and virtual reality, with apps that address racism and discrimination.

“My parents told me from a very young age that ‘You will not be treated like your white friends. There are certain privileges that you do not have,'” said Cantave. “It’s affected my conduct, it affects how I navigate spaces. I stay hyper-aware of my surroundings at all times, in terms of safety.”

Cantave and his team are working on an augmented reality book for children entitled, White Supremacy 101: Columbus the Hero? The book will contain various images that become animated when viewed with an augmented reality app. Each excerpt is intended to be a counterpoint to traditional history lessons which tell American history from a white perspective.

“If these false narratives are perpetuated for generations in the future, you’re going to have a collective consciousness that doesn’t see black people as human beings,” Cantave said. “You see it with mass incarceration, you see it with police brutality, you see it with unsympathetic immigration policy.”

But technology offers an opportunity to change that, according to Idris Brewster, creator of the app and CTO of Movers and Shakers NYC.  

“Augmented reality and virtual reality … really provides us with a unique opportunity to use very immersive technology and tell a story in a very different and engaging way,” Brewster said.

Public response has been positive. “It’s blown the kids’ minds just to see animations. A lot of kids will be like, ‘Wow, this is like Harry Potter,'” he said.

Brewster also works as a computer science instructor at Google, where in 2016, blacks made up 1 percent of the company’s U.S. tech workers. He wants to see more minorities become tech creators, not just end users.

“There’s algorithms being created in our world right now that are detrimental to people of color because they’re not made for people of color,” Brewster said. “We need to start being able to figure out how we can get our minds and our perspectives in those conversations, creating those algorithms.”

Virtual reality filmmaker Jazzy Harvey attended Afrotectopia to present her virtual-reality film, Built Not Bought, which profiles the custom-car enthusiasts of south central Los Angeles.

Harvey said she felt greater creative freedom working with the new medium. “There’s no rules, and the fact that I have no rules and no restrictions … I get to choose which story is worth telling,” Harvey said.

Afrotectopia panelists and attendees tackled a variety of topics including digital activism, entrepreneurship and education, but ultimately, it was about getting everyone in the same room together.

“To come into a space where you don’t have to assimilate culturally, you can just be yourself and talk the way that you actually talk and really have people that can connect with you culturally is so important,” Melenciano said. “Especially when you’re talking about things that you’re passionate about like tech, it’s a space where we’re so often dismissed from.”

Zuckerberg Apology Fails to Quiet Facebook Storm

A public apology by Facebook chief Mark Zuckerberg failed Thursday to quell outrage over the hijacking of personal data from millions of people, as critics demanded that the social media giant go much further to protect user privacy.

Speaking out for the first time about the harvesting of Facebook user data by a British firm linked to Donald Trump’s 2016 campaign, Zuckerberg admitted Wednesday that Facebook had betrayed the trust of its 2 billion users and promised to “step up.”

Vowing to stop data leaking to outside developers and to give users more control over their information, Zuckerberg also said he was ready to testify before U.S. lawmakers — which a powerful congressional committee promptly asked him to do.

With pressure ratcheting up on the 33-year-old CEO over a scandal that has wiped $60 billion off Facebook’s value, the initial response suggested his promise of self-regulation had failed to convince critics he was serious about change.

“Frankly I don’t think those changes go far enough,” Matt Hancock, Britain’s culture and digital minister, told the BBC.

“It shouldn’t be for a company to decide what is the appropriate balance between privacy and innovation and use of data,” he said. “The big tech companies need to abide by the law, and we are strengthening the law.”

In Brussels, European leaders were sending the same message as they prepared to push for tougher safeguards on personal data online, while Israel became the latest country to launch an investigation into Facebook.

The data scandal erupted at the weekend when a whistle-blower revealed that British consultant Cambridge Analytica had created psychological profiles on 50 million Facebook users via a personality prediction app, developed by a researcher named Aleksandr Kogan.

The app, downloaded by 270,000 people, scooped up their friends’ data without consent — as was possible under Facebook’s rules at the time.

‘Breach of trust’

Facebook said it discovered last week that Cambridge Analytica might not have deleted the data as it certified, although the British firm denied wrongdoing.

“This was a major breach of trust and I’m really sorry that this happened,” Zuckerberg said in an interview with CNN, after publishing a blog post outlining his response to the scandal.

“Our responsibility now is to make sure this doesn’t happen again.”

With Facebook already under fire for allowing fake news to proliferate during the U.S. election, Zuckerberg also said “we need to make sure that we up our game” ahead of midterm congressional elections in November, in which American officials have warned Russia can be expected to meddle as it did two years ago.

Cambridge Analytica has maintained it did not use Facebook data in the Trump campaign, but its now-suspended CEO boasted in secret recordings that his company was deeply involved in the race.

And U.S. special counsel Robert Mueller, who is investigating Russian interference in the 2016 presidential race, is reportedly looking into the consultant’s role in the Trump effort.

‘Abused and misused’

Zuckerberg’s apology followed a dayslong stream of damaging accusations against the world’s biggest social network, which now faces probes on both sides of the Atlantic.

In Washington on Thursday, leaders of the House Energy and Commerce Committee urged Zuckerberg to testify without delay, saying a briefing a day earlier by Facebook officials had left “many questions” unanswered.

“We believe, as CEO of Facebook, he is the right witness to provide answers to the American people,” said a statement from the panel, calling for a hearing “in the near future.”

America’s Federal Trade Commission is reportedly investigating Facebook over the scandal, while Britain’s information commissioner is seeking to determine whether it did enough to secure its data.

On Thursday, Israel’s privacy protection agency said it had informed Facebook of a probe into the Cambridge Analytica revelations, and was looking into “the possibility of other infringements of the privacy law regarding Israelis.”

Meanwhile, European Union leaders were due to press digital giants “to guarantee transparent practices and full protection of citizens’ privacy and personal data,” according to a draft summit statement obtained by AFP.

A movement to quit the social network has already gathered momentum — with the co-founder of the WhatsApp messaging service among those vowing to #deletefacebook — while a handful of lawsuits risk turning into class actions in a costly distraction for the company.

World Wide Web inventor Tim Berners-Lee described it as a “serious moment for the web’s future.”

“I can imagine Mark Zuckerberg is devastated that his creation has been abused and misused,” tweeted the British scientist.

“I would say to him: You can fix it. It won’t be easy but if companies work with governments, activists, academics and web users, we can make sure platforms serve humanity.”

Experts: Uber SUV’s Autonomous System Should Have Seen Woman

Two experts say video of a deadly crash involving a self-driving Uber vehicle shows the sport utility vehicle’s laser and radar sensors should have spotted a pedestrian, and computers should have braked to avoid the crash.

Authorities investigating the crash in a Phoenix suburb released the video of Uber’s Volvo striking a woman as she walked from a darkened area onto a street.

Experts who viewed the video told The Associated Press that the SUV’s sensors should have seen the woman pushing a bicycle and braked before the impact.

Also, Uber’s human backup driver appears on the video to be looking down before the crash and appears startled around the time of the impact.

“The victim did not come out of nowhere. She’s moving on a dark road, but it’s an open road, so Lidar [laser] and radar should have detected and classified her” as a human, said Bryant Walker Smith, a University of South Carolina law professor who studies autonomous vehicles.

Sam Abuelsamid, an analyst for Navigant Research who also follows autonomous vehicles, said laser and radar systems can see in the dark much better than humans or cameras and that the pedestrian was well within the system’s range.

“It absolutely should have been able to pick her up,” he said. “From what I see in the video it sure looks like the car is at fault, not the pedestrian.”

The video could have a broad impact on autonomous vehicle research, which has been billed as the answer to cutting the 40,000 traffic deaths that occur annually in the U.S. in human-driven vehicles.

Proponents say that human error is responsible for 94 percent of crashes, and that self-driving vehicles would be better because they see more and don’t get drunk, distracted or drowsy.

But the experts said it appears from the video that there was some sort of flaw in Uber’s self-driving system.

The video, Smith said, may not show the complete picture, but “this is strongly suggestive of multiple failures of Uber and its system, its automated system, and its safety driver.”

Tempe police, as well as the National Transportation Safety Board and the National Highway Traffic Safety Administration, are investigating the Sunday night crash, which occurred outside of a crosswalk on a darkened boulevard.

The crash was the first death involving a fully autonomous test vehicle. The Volvo was in self-driving mode traveling about 40 mph (64 kph) with a human backup driver at the wheel when it struck 49-year-old Elaine Herzberg, police said.

The lights on the SUV did not illuminate Herzberg until a second or two before impact, raising questions about whether the vehicle could have stopped in time.

Tempe Police Chief Sylvia Moir told the San Francisco Chronicle earlier this week that the SUV likely would not be found at fault.

But Smith said that from what he observed on the video, the Uber driver appears to be relying too much on the self-driving system by not looking up at the road.

“The safety driver is clearly relying on the fact that the car is driving itself. It’s the old adage that if everyone is responsible no one is responsible,” Smith said. “This is everything gone wrong that these systems, if responsibly implemented, are supposed to prevent.”

The experts were unsure if the test vehicle was equipped with a video monitor that the backup driver may have been viewing.

Uber immediately suspended all road-testing of such autos in the Phoenix area, Pittsburgh, San Francisco and Toronto.

An Uber spokeswoman, reached Wednesday night by email, did not answer specific questions about the video or the expert observations.

“The video is disturbing and heartbreaking to watch, and our thoughts continue to be with Elaine’s loved ones. Our cars remain grounded, and we’re assisting local, state and federal authorities in any way we can,” the company said in a statement.

Tempe police have identified the driver as 44-year-old Rafael Vasquez. Court records show someone with the same name and birthdate as Vasquez spent more than four years in prison for two felony convictions — for making false statements when obtaining unemployment benefits and attempted armed robbery — before starting work as an Uber driver.

Tempe police and the NTSB declined to say whether the Vasquez who was involved in the fatal crash is the same Vasquez with two criminal convictions.

Attempts by the AP to contact Vasquez through phone numbers and social media on Wednesday afternoon were not successful.

Local media have identified the driver as Rafaela Vasquez. Authorities declined to explain the discrepancy in the driver’s first name.

The fatality has raised questions about whether Uber does enough to screen its drivers.

Uber said Vasquez met the company’s vetting requirements.

The company bans drivers convicted of violent crimes or any felony within the past seven years. Records show Vasquez’ offenses happened before the seven-year period, in 1999 and 2000.

The company’s website lists its pre-screening policies for drivers that spell out what drivers can and cannot have on their record to work for Uber.

 Their driving history cannot have any DUI or drug-related driving offenses within the past seven years, for instance. They also are prevented from having more than three non-fatal accidents or moving violations within the past three years.

Facebook’s Zuckerberg: ‘We Need to Step Up’ to Protect User Data  

Facebook founder Mark Zuckerberg has released a statement about the social network’s part in an illegal data collection scandal. 

“We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you,” Zuckerberg said in a lengthy Facebook post. “The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there’s more to do, and we need to step up and do it.”

Facebook disclosed on Friday that it has known since 2015 that British researcher Aleksandr Kogan illegally shared users’ information with a research firm, after collecting that data legally through an application for a personality quiz. The research firm is alleged to have illegally used the data of an estimated 50 million Facebook users to build profiles for U.S. political campaigns, including the presidential campaign of Donald Trump.

Facebook has been criticized for failing to alert its users to the incident in 2015. Wednesday was the first time Zuckerberg publicly addressed the issue.

Included in his statement was a timeline of events that said Facebook demanded in 2015 that Cambridge Analytica delete all improperly acquired data. He said last week he learned from news outlets that the company may not have deleted the data, despite providing certification of having done so. 

Cambridge Analytica has denied that it kept the data. One Facebook executive in charge of security is reportedly leaving the firm as a result of the matter.

Zuckerberg said the incident amounted not only to a breach of trust between Kogan, Cambridge Analytica, and Facebook but also “a breach of trust between Facebook and the people who share their data with us and expect us to protect it.”

New safeguards

The Facebook founder outlined new precautions his social media platform will take to protect user data in future: identifying any other application developers found to have misused personal data, restricting the types of data available to developers, and ending their access to a user’s data if the user has not used the app in the past three months. He also said Facebook will make it easier for users to revoke apps’ permission to use their data, by putting the tool at the top of a user’s news feed.

Within an hour of its posting, Zuckerberg’s message had garnered more than 32,000 “likes” or other reactions and had been shared more than 10,000 times. User comments varied from fan club-style expressions of support to bitter complaints about Zuckerberg’s failure to speak sooner.

While controversy has swirled, Facebook’s stock value has taken a significant hit. The company has lost more than $45 billion of its stock market value over the past three days. 

Questions about regulation

The probe over Cambridge Analytica is just the latest flashpoint around Facebook’s role in the 2016 election and comes as the company faces questions about how it should be regulated and monitored going forward.

With its more than 2 billion monthly users and billions of dollars in profit, Facebook has become a powerful conduit of news, opinion and propaganda, much of it targeted at individuals based on their own data. The social media site and investigators have found that Russia-backed operatives had used Facebook to spread disinformation and propaganda. 

In recent months, the company, along with YouTube and Twitter, has changed some of its practices to reduce the power of automated accounts and propaganda.  Facebook has said it would hire 10,000 security employees.

Facebook Founder: We Made a Mistake in Trying to Protect User Data

Facebook founder Mark Zuckerberg said in a rare television interview Wednesday that Facebook clearly made a mistake in its role in an illegal data collection scandal.

“This was a major breach of trust. I am really sorry this happened. We have a basic responsibility to protect people’s data,” he told CNN.

Zuckerberg did not elaborate on what mistake Facebook made, but he promised to check all apps and do a full forensic audit.

He also told CNN he is sure someone is trying to meddle in the upcoming November midterm U.S. congressional elections. He said Facebook is “really committed” to stopping anyone from interfering in elections through Facebook, including upcoming votes in Brazil and India.

EXCLUSIVE: Kaspersky Lab Plans Swiss Data Center to Combat Spying Allegations: Documents

Moscow-based Kaspersky Lab plans to open a data center in Switzerland to address Western government concerns that Russia exploits its anti-virus software to spy on customers, according to internal documents seen by Reuters.

Kaspersky is setting up the center in response to actions in the United States, Britain and Lithuania last year to stop using the company’s products, according to the documents, which were confirmed by a person with direct knowledge of the matter.

The action is the latest effort by Kaspersky, a global leader in anti-virus software, to parry accusations by the U.S. government and others that the company spies on customers at the behest of Russian intelligence. The U.S. last year ordered civilian government agencies to remove the Kaspersky software from their networks.

Kaspersky has strongly rejected the accusations and filed a lawsuit against the U.S. ban.

The U.S. allegations were the “trigger” for setting up the Swiss data center, said the person familiar with Kaspersky’s Switzerland plans, but not the only factor.

“The world is changing,” they said, speaking on condition of anonymity when discussing internal company business. “There is more balkanisation and protectionism.”

The person declined to provide further details on the new project, but added: “This is not just a PR stunt. We are really changing our R&D infrastructure.”

A Kaspersky spokeswoman declined to comment on the documents reviewed by Reuters.

In a statement, Kaspersky Lab said: “To further deliver on the promises of our Global Transparency Initiative, we are finalizing plans for the opening of the company’s first transparency center this year, which will be located in Europe.”

“We understand that during a time of geopolitical tension, mirrored by an increasingly complex cyber-threat landscape, people may have questions and we want to address them.”

Kaspersky Lab launched a campaign in October to dispel concerns about possible collusion with the Russian government by promising to let independent experts scrutinize its software for security vulnerabilities and “back doors” that governments could exploit to spy on its customers.

The company also said at the time that it would open “transparency centers” in Asia, Europe and the United States but did not provide details. The new Swiss facility is dubbed the Swiss Transparency Centre, according to the documents.

Data review

Work in Switzerland is due to begin “within weeks” and be completed by early 2020, said the person with knowledge of the matter.

The plans have been approved by Kaspersky Lab CEO and founder Eugene Kaspersky, who owns a majority of the privately held company, and will be announced publicly in the coming months, according to the source.

“Eugene is upset. He would rather spend the money elsewhere. But he knows this is necessary,” the person said.

It is possible the move could be derailed by the Russian security services, who might resist moving the data center outside of their jurisdiction, people familiar with Kaspersky and its relations with the government said.

Western security officials said Russia’s FSB Federal Security Service, successor to the Soviet-era KGB, exerts influence over Kaspersky management decisions, though the company has repeatedly denied those allegations.

The Swiss center will collect and analyze files identified as suspicious on the computers of tens of millions of Kaspersky customers in the United States and European Union, according to the documents reviewed by Reuters. Data from other customers will continue to be sent to a Moscow data center for review and analysis.

Files would only be transmitted from Switzerland to Moscow in cases when anomalies are detected that require manual review, the person said, adding that about 99.6 percent of such samples do not currently undergo this process.

A third party will review the center’s operations to make sure that all requests for such files are properly signed, stored and available for review by outsiders including foreign governments, the person said.

Moving operations to Switzerland will address concerns about laws that enable Russian security services to monitor data transmissions inside Russia and force companies to assist law enforcement agencies, according to the documents describing the plan.

The company will also move to Switzerland the department that builds its anti-virus software using code written in Moscow, the documents showed.

Kaspersky has received “solid support” from the Swiss government, said the source, who did not identify specific officials who have endorsed the plan.

New Technology Being Developed for Pacemakers

When you are watching a television show and see someone get their heart shocked back into a rhythm, you will see their entire body rise up in the air. That’s what happens when a defibrillator is used, because the shock is that powerful. As VOA’s Carol Pearson reports, scientists are now working on better, more effective, and less-shocking ways to get a heart to start beating once again.

In Lab, 3-D Printing Cuts Costs, Manufacturing Time of Heat Exchangers

Heat exchangers are some of the most widely used energy-transfer devices, helping cool everything from car engines to power plants.

At the recent ARPA-E conference, organized by the U.S. Department of Energy, scientists from the University of Maryland showcased an advanced 3-D printer that, combined with a wire-laying head, cuts in half the time needed to manufacture heat exchangers.

David Hymas, a Ph.D. candidate at the University of Maryland, said that in most cases in a heat exchanger, the heat is transferred by forcing air over pipes or tubes with circulating water, which is often pumped from a nearby river or lake.

Reduce water use

“Currently power plants draw about 40 percent of all the freshwater supply in the United States,” Hymas said. Water consumption, he added, could be cut in half if lightweight air-cooled heat exchangers were created in 3-D printers.

“The water would flow in through one manifold entering these water tubes, right here, and then flow out through the other manifold. Air would blow across it, cooling these fins,” he said.

Printing a heat exchanger

At the school’s Advanced Heat Exchangers and Process Intensification Laboratory, Hymas showed off a heat exchanger manufactured in a lab-size 3-D printer that includes a wire-laying device. The first head in the machine builds up layers of polymer tubes, while another head lays copper or aluminum wire across them.

The printer used for testing the idea took almost 24 hours to create a shoebox-size heat exchanger, but research associate Farah Singer says the industrial-scale prototype machine proved to be much faster.

“It has 10 polymer heads and it is capable of printing, of laying at the same time, 45 fibers, 45 metal fibers, so we are talking about a full layer,” she said. “This machine is capable of printing in eight hours a 1 meter square heat exchanger. We are talking almost 20 kilowatts or 30 kilowatt heat exchanger.”

The reduction in weight and cost for 3-D-printed heat exchangers can reach 50 percent, depending on the application, which can range from power plants to air conditioning to cooling electronic devices.

“For the electronic cooling, for example, so far our experimental results have shown that we could have up to 52 percent reduction in cost while we have 26 percent increase in performance,” Singer said.

An added advantage is that 3-D printers allow the creation of very complex geometries, with the spacing between cooling wires as fine as 100 microns. The project was partially funded by the U.S. Department of Energy.

China Tests Unmanned Tanks in Modernization Push

China is testing unmanned tanks that could be equipped with artificial intelligence, a state-run newspaper said Wednesday, as the country continues with its military modernization program.

State television showed images this week of the unmanned tanks undergoing testing, the Global Times newspaper reported.

Footage showed a Type 59 tank being driven by remote control, in what the paper said was the first time a Chinese-made unmanned tank has been shown in a public forum.

The Type 59 tank is based on an old Soviet model first used in China in the 1950s; it has been produced in large numbers and has a long service life, the paper said.

“A large number of due-to-retire Type 59 tanks can be converted into unmanned vehicles if equipped with artificial intelligence,” Liu Qingshan, the chief editor of Tank and Armored Vehicle, told the newspaper.

Unmanned tanks would be able to work with other unmanned equipment and integrate information from satellites, aircraft or submarines, the report added.

China is in the middle of a modernization program for its armed forces, including building stealth fighters and new aircraft carriers, as President Xi Jinping looks to assert the country’s growing power.

Can Self-Driving Cars Withstand First Fatality?

The deadly collision between an Uber autonomous vehicle and a pedestrian near Phoenix is bringing calls for tougher self-driving regulations, but advocates for a hands-off approach say big changes aren’t needed.

Police in Tempe, Arizona, say the female pedestrian walked in front of the Uber SUV in the dark of night, and neither the automated system nor the human backup driver stopped in time. Local authorities haven’t determined fault, and federal transportation authorities say they won’t release any findings on the crash until their investigation is complete.

Current federal regulations have few requirements specifically for self-driving vehicles, leaving it for states to handle. Many, such as Arizona, Nevada and Michigan, cede key decisions to companies as they compete for investment that will come with the technology.

No matter whether police find Uber or the pedestrian at fault in the Sunday crash, many federal and state officials say their regulations are sufficient to keep people safe while allowing the potentially lifesaving technology to grow. Others, however, argue the regulations don’t go far enough.

“I don’t think we need to jump to conclusions and make changes to our business,” said Michigan state Senator Jim Ananich, the chamber’s minority leader. He and other Democrats joined Republicans to pass a bill last year that doesn’t require human backup drivers and allows companies wide latitude to conduct tests.

Ananich called the death of Elaine Herzberg, 49, a tragedy and said companies need to continue refining their systems. “I want that work to happen here, because we have a 100-year history of making the best cars on the planet,” he said. “It’s not perfect by any means, and we are just going to have to keep working until it is.”

Proponents of light regulations, including the Trump administration’s Transportation Department, say the technology could reduce the 40,000 traffic deaths that happen annually in the U.S. The government says 94 percent of crashes are caused by human error that automated systems can reduce because they don’t get drunk, sleepy or inattentive.

U.S. Representative Bob Latta, an Ohio Republican who chairs a House subcommittee that passed an autonomous vehicle bill, said the measure has sufficient provisions to ensure the cars operate safely. It requires the National Highway Traffic Safety Administration to develop safety standards and allows the agency to update outdated regulations. It also prohibits states from regulating autonomous driving systems to avoid a patchwork of rules, Latta said.

The bill has passed the House. The Senate is considering a similar measure.

About 6,000 pedestrians were killed last year in crashes that involved cars driven by humans, Latta said. “What we want to do is see that stop or try to get it preventable,” he said.

But safety advocates and others say companies are moving too quickly, and they fear others will die as road testing finds gaps that automated systems can’t handle.

Jason Levine, executive director for the nonprofit Center for Auto Safety, said without proper regulations, more crashes will happen. “There’s no guardrails on the technology when it’s being tested without any sense of how safe it is before you put it on the road,” he said.

Others say that the laser and radar sensors on the SUV involved in the Tempe accident should have spotted Herzberg in the darkness and braked or swerved to avoid her. Development should be slowed, with standards set for how far sensors must see and how quickly vehicles should react, they said.

Sam Abuelsamid, an analyst for Navigant Research, expects the Arizona crash to slow research. “Responsible companies will take this opportunity to go back and look at their test procedures,” he said.

Toyota already is taking a step back, pausing its fully autonomous testing with human backups for a few days to let drivers process the Arizona crash and “help them do their jobs with less concern,” the company said. The company says it constantly refines its procedures.

Without standards for software coding quality and cybersecurity, there will be more deaths as autonomous vehicles are tested on public roads, said Lee McKnight, associate professor of information studies at Syracuse University.

“We can say eventually they’ll learn not to kill us,” McKnight said. “In the meantime, they will be killing more people.”

Breaking Up With Facebook Harder Than It Looks

Facebook’s latest privacy scandal, involving Trump campaign consultants who allegedly stole data on tens of millions of users in order to influence elections, has some people reconsidering their relationship status with the social network.

There’s just one problem: There isn’t much of anywhere else to go.

Facebook has weathered many such blow-ups before and is used to apologizing and moving on. But the stakes are bigger this time.

Regulatory authorities are starting to focus on the data misappropriation, triggering a 9 percent decline in Facebook’s normally high-flying stock since Monday. Some of that reflects fear that changes in Facebook’s business will hurt profits or that advertisers and users will sour on the social network.

The furor over Cambridge Analytica, the data mining firm accused of stealing Facebook data, followed a bad year in which Facebook acknowledged helping spread fake news and propaganda from Russian agents. It also came less than three months after CEO Mark Zuckerberg told the world that he would devote the year to fixing Facebook. Instead, things seem to be getting worse.

“It’s more serious economically, politically, financially, and will require a more robust response in order to regain users’ trust,” said Steve Jones, a professor of communications at the University of Illinois at Chicago.

Not so easy

Yet leaving Facebook isn’t simple for some people.

Arvind Rajan, a tech executive from San Francisco who deactivated his account on Monday, suddenly discovered he needs to create new usernames and passwords for a variety of apps and websites. That’s because he had previously logged in with his Facebook ID.

It’s a pain, he said, “but not the end of the world.” And because he is bothered by Facebook’s “ham-handed” response to recent problems, the inconvenience is worth it.

For other users looking to leave, it can feel as if there are no real alternatives. Twitter? Too flighty, too public. Instagram? Whoops, owned by Facebook. Snapchat? Please, unless you’re under 25 — in which case you’re probably not on Facebook to begin with.

Facebook connects 2.2 billion users and a host of communities that have sprung up on its network. No other company can match the breadth or depth of these connections — thanks in part to Facebook’s proclivity for squashing or swallowing up its competition.

What about your photos? 

But it is precisely in Facebook’s interest to make users feel Facebook is the only place to connect with others. Where else will grandmothers see photos of their far-flung grandkids? How will new mothers connect to other parents also up at 4 a.m. with a newborn?

“My only hesitation is that there are hundreds of pictures posted over 13 years of my life that I do not want to lose access to. If there was a way to recover these photos, I would deactivate immediately,” Daniel Schwartz, who lives in Atlanta, said in an email. 

People eager to delete their profiles may find unexpected problems that point to how integral Facebook is to many activities, said Ifeoma Ajunwa, a professor of organizational behavior at Cornell University.

“It is getting more and more difficult for people to delete Facebook, since it’s not just a social media platform but also almost like a meeting square,” she said.

Parents could soon realize that their child’s soccer schedule with games and pickup times is only on a Facebook page, for example. Many businesses also schedule meetings via Facebook.

“It’s more and more difficult for people to feel plugged in if you’re not on Facebook,” Ajunwa said.

Exit can take 90 days

Not surprisingly, Facebook doesn’t make it easy to leave. To permanently delete your account, you need to make a request to the company. The process can take several days, and if you log in during this time, your request will be canceled. It can take up to 90 days to delete everything.

There’s a less permanent way to leave — deactivation — which hides your profile from everyone but lets you return if you change your mind.

Lili Orozco, 28, an office manager for her family’s heating and cooling company in Watkinsville, Georgia, deleted her account in December. She was upset that every new app she downloaded would ask for her Facebook contacts.

And while she liked staying in touch with people, she was irritated by the conspiracy stories her high school friends would share.

“Falsehoods spread faster on Facebook than the truth does,” she said. She now gets her news from Twitter and shares pictures with friends through Instagram.

Zuckerberg Asked to Testify in UK; Data Firm’s CEO Suspended

A British parliamentary committee on Tuesday summoned Facebook CEO Mark Zuckerberg to answer questions as authorities stepped up efforts to determine if the personal data of social-media users has been used improperly to influence elections.

The request comes amid allegations that a data-mining firm based in the U.K. used information from more than 50 million Facebook accounts to help Donald Trump win the 2016 presidential election. The company, Cambridge Analytica, has denied wrongdoing.

However, the firm’s board of directors announced Tuesday evening that it had suspended CEO Alexander Nix pending an independent investigation of his actions. Nix made comments to an undercover reporter for Britain’s Channel 4 News about various unsavory services Cambridge Analytica provided its clients.

“In the view of the board, Mr. Nix’s recent comments secretly recorded by Channel 4 and other allegations do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation,” the board said in a statement.

Facebook also drew continued criticism for its alleged inaction to protect users’ privacy. Earlier Tuesday, the chairman of the U.K. parliamentary media committee, Damian Collins, said his group has repeatedly asked Facebook how it uses data and that Facebook officials “have been misleading to the committee.”

“It is now time to hear from a senior Facebook executive with the sufficient authority to give an accurate account of this catastrophic failure of process,” Collins wrote in a note addressed directly to Zuckerberg. “Given your commitment at the start of the New Year to ‘fixing’ Facebook, I hope that this representative will be you.”

Facebook sidestepped questions on whether Zuckerberg would appear, saying instead that it’s currently focused on conducting its own reviews.

Personal data

The request to appear comes as Britain’s information commissioner said she was using all her legal powers to investigate the social-media giant and Cambridge Analytica.

Commissioner Elizabeth Denham is pursuing a warrant to search Cambridge Analytica’s servers. She has also asked Facebook to cease its own audit of Cambridge Analytica’s data use.

“Our advice to Facebook is to back away and let us go in and do our work,” she said.

Cambridge Analytica said it is committed to helping the U.K. investigation. However, Denham’s office said the firm failed to meet a deadline to produce the information requested.

Denham said the prime allegation against Cambridge Analytica is that it acquired personal data in an unauthorized way, adding that the Data Protection Act requires services like Facebook to have strong safeguards against misuse of data.

Chris Wylie, who once worked for Cambridge Analytica, was quoted as saying the company used the data to build psychological profiles so voters could be targeted with ads and stories.

Undercover investigation

The firm also found itself facing further allegations of wrongdoing. Britain’s Channel 4 used an undercover investigation to record Nix saying that the company could use unorthodox methods to wage successful political campaigns for clients.

He said the company could “send some girls” around to a rival candidate’s house, suggesting that girls from Ukraine are beautiful and effective in this role.

He also said the company could “offer a large amount of money” to a rival candidate and have the whole exchange recorded so it could be posted on the internet to show that the candidate was corrupt.

Nix said in a statement that he deeply regrets his role in the meeting and has apologized to staff.

“I am aware how this looks, but it is simply not the case,” he said. “I must emphatically state that Cambridge Analytica does not condone or engage in entrapment, bribes or so-called ‘honeytraps,’ and nor does it use untrue material for any purposes.”

Nix told the BBC the Channel 4 sting was “intended to embarrass us.”

“We see this as a coordinated attack by the media that’s been going on for very, very many months in order to damage the company that had some involvement with the election of Donald Trump,” he said.

The data harvesting used by Cambridge Analytica has also triggered calls for further investigation from the European Union, as well as federal and state officials in the United States.

Google Launches News Initiative to Combat Fake News

Alphabet’s Google is launching the Google News Initiative to weed out fake news online and during breaking news situations, it said in a blog post Tuesday.

Google said it plans to spend $300 million over the next three years to improve the accuracy and quality of news appearing on its platforms. 

The changes come as Google, Facebook and Twitter face a backlash over their role during the U.S. presidential election in allowing the spread of false and often malicious information that might have swayed voters toward Republican candidate Donald Trump.

In a separate blog post, Google said it was launching a tool to help users subscribe to news publications.

Subscribe with Google will let users buy a subscription on participating news sites using their Google account and manage all their subscriptions in one place.

Google said it would launch the news subscription with the Financial Times, The New York Times, Le Figaro and The Telegraph among others. The search engine said it plans to add more news publishers soon.

Crash Marks 1st Death Involving Fully Autonomous Vehicle

A fatal pedestrian crash involving a self-driving Uber SUV in a Phoenix suburb could have far-reaching consequences for the new technology as automakers and other companies race to be the first with cars that operate on their own.

The crash Sunday night in Tempe was the first death involving a fully autonomous test vehicle. The Volvo was in self-driving mode with a human backup driver at the wheel when it struck 49-year-old Elaine Herzberg as she was walking a bicycle outside the lines of a crosswalk, police said.

Uber immediately suspended all road-testing of such autos in the Phoenix area, Pittsburgh, San Francisco and Toronto. The ride-sharing company has been testing self-driving vehicles for months as it competes with other technology companies and automakers like Ford and General Motors.

Many in the industry had been dreading a fatal crash, though they knew one was inevitable.

Tempe police Sgt. Ronald Elcock said local authorities haven’t determined fault but urged people to use crosswalks. He told reporters at a news conference Monday that the Uber vehicle was traveling around 40 mph when it hit Herzberg as she stepped onto the street.

Neither she nor the backup driver showed signs of impairment, he said.

“The pedestrian was outside of the crosswalk, so it was midblock,” Elcock said. “And as soon as she walked into the lane of traffic, she was struck by the vehicle.”

The National Transportation Safety Board, which makes recommendations for preventing crashes, and the National Highway Traffic Safety Administration, which can enact regulations, sent investigators.

Uber CEO Dara Khosrowshahi expressed condolences on his Twitter account and said the company is cooperating with investigators.

The public’s image of the vehicles will be defined by stories like the crash in Tempe, said Bryant Walker Smith, a University of South Carolina law professor who studies self-driving vehicles. It may turn out that there was nothing either the vehicle or its human backup could have done to avoid the crash, he said.

Either way, the fatality could hurt the technology’s image and lead to a push for more regulations at the state and federal levels, Smith said.

Autonomous vehicles with laser, radar and camera sensors and sophisticated computers have been billed as the way to reduce the more than 40,000 traffic deaths a year in the U.S. alone. Ninety-four percent of crashes are caused by human error, the government says.

Self-driving vehicles don’t drive drunk, don’t get sleepy and aren’t easily distracted. But they do have faults.

“We should be concerned about automated driving,” Smith said. “We should be terrified about human driving.”

In 2016, the latest year available, more than 6,000 U.S. pedestrians were killed by vehicles.

The federal government has voluntary guidelines for companies that want to test autonomous vehicles, leaving much of the regulation up to states.

Many states, including Michigan and Arizona, have taken a largely hands-off approach, hoping to gain jobs from the new technology, while California and others have taken a harder line.

California is among states that require manufacturers to report any incidents during the testing phase. As of early March, the state’s motor vehicle agency had received 59 such reports.

Arizona Gov. Doug Ducey used light regulations to entice Uber to the state after the company had a shaky rollout of test cars in San Francisco. Arizona has no reporting requirements. Hundreds of vehicles with automated driving systems have been on Arizona’s roads.

Ducey’s office expressed sympathy for Herzberg’s family and said safety is the top priority.

The crash in Arizona isn’t the first involving an Uber autonomous test vehicle. In March 2017, an Uber SUV flipped onto its side, also in Tempe. No serious injuries were reported, and the driver of the other car was cited for a violation.

Herzberg’s death is the first involving an autonomous test vehicle but not the first in a car with some self-driving features. The driver of a Tesla Model S was killed in 2016 when his car, operating on its Autopilot system, crashed into a tractor-trailer in Florida.

The NTSB said that driver inattention was to blame but that design limitations with the system played a major role in the crash.

The U.S. Transportation Department is considering further voluntary guidelines that it says would help foster innovation. Proposals also are pending in Congress, including one that would stop states from regulating autonomous vehicles, Smith said.

Peter Kurdock, director of regulatory affairs for Advocates for Highway and Auto Safety in Washington, said the group sent a letter Monday to Transportation Secretary Elaine Chao saying it is concerned about a lack of action and oversight by the department as autonomous vehicles are developed. That letter was planned before the crash.

Kurdock said the deadly accident should serve as a “startling reminder” to members of Congress that they need to “think through all the issues to put together the best bill they can to hopefully prevent more of these tragedies from occurring.”

Self-Driving Car Hits and Kills Pedestrian Outside of Phoenix

A self-driving car has hit and killed a woman in the southwestern United States in what is believed to be the first fatal pedestrian crash involving the new technology.

Police said Monday a self-driving sport utility vehicle owned by the ride sharing company Uber struck 49-year-old Elaine Herzberg, who was walking outside of a crosswalk in the Phoenix suburb of Tempe. She later died in a hospital from her injuries.

Uber said it had suspended its autonomous vehicle program across the United States and Canada following the accident.

Police say the vehicle was in autonomous mode, but had an operator behind the wheel, when the accident took place.

Testing of self-driving cars by various companies has been going on for months in the Phoenix area, as well as Pittsburgh, San Francisco and Toronto as automakers and technology companies compete to be the first to introduce the new technology.

The vehicle involved in the crash was a Volvo XC90, which Uber had been using to test its autonomous technology. However, Volvo said it did not make the self-driving technology.

The U.S. National Highway Traffic Safety Administration and National Transportation Safety Board said they are sending a team to gather information about the crash.

Uber CEO Dara Khosrowshahi expressed condolences on Twitter and said the company is working with local law enforcement on the investigation.

The fatal crash will most likely raise questions about regulations for self-driving cars. Arizona has few regulations for the new technology, which has led many technology companies to flock to the state to test their autonomous vehicles.

Proponents of the new technology argue that self-driving cars will prove to be safer than human drivers, because the cars will not get distracted and will obey all traffic laws.

Critics have expressed concern about the technology’s safety, including the ability of the autonomous technology to deal with unpredictable events.

Consumer Watchdog, the nonprofit consumer advocacy group, called Monday for a nationwide moratorium on testing self-driving cars on public roads while investigators figure out what went wrong in the latest accident.

“Arizona has been the Wild West of robot car testing, with virtually no regulations in place,” the group said in a statement.

Democratic Sen. Edward Markey of Massachusetts, who is a member of the Senate transportation committee, said there must be more oversight of the technology. He said he is working on a “comprehensive” autonomous vehicle legislative package.

“This tragic accident underscores why we need to be exceptionally cautious when testing and deploying autonomous vehicle technologies on public roads,” he said.

Concerns over the safety of autonomous vehicles increased in July 2016 after a fatality involving a partially autonomous Tesla automobile. In that accident, the driver put the car in “autopilot” mode, and the car failed to detect a tractor-trailer that was crossing the road. The driver of the Tesla died in the crash. Safety regulators later determined Tesla was not at fault.
