Technology

US Sues SpaceX for Discriminating Against Refugees, Asylum Recipients

The U.S. Justice Department is suing Elon Musk’s SpaceX for refusing to hire refugees and asylum recipients at the rocket company.

In a lawsuit filed on Thursday, the Justice Department said SpaceX routinely discriminated against these job applicants between 2018 and 2022, in violation of U.S. immigration laws.

The lawsuit says that Musk and other SpaceX officials falsely claimed the company was allowed to hire only U.S. citizens and permanent residents due to export control laws that regulate the transfer of sensitive technology.

“U.S. law requires at least a green card to be hired at SpaceX, as rockets are advanced weapons technology,” Musk wrote in a June 16, 2020, tweet cited in the lawsuit.

In fact, U.S. export control laws impose no such restrictions, according to the Justice Department.

Those laws limit the transfer of sensitive technology to foreign entities, but they do not prevent high-tech companies such as SpaceX from hiring job applicants who have been granted refugee or asylum status in the U.S. (Foreign nationals, however, need a special permit.)

“Under these laws, companies like SpaceX can hire asylees and refugees for the same positions they would hire U.S. citizens and lawful permanent residents,” the Department said in a statement. “And once hired, asylees and refugees can access export-controlled information and materials without additional government approval, just like U.S. citizens and lawful permanent residents.”

The company did not respond to a VOA request for comment on the lawsuit and whether it had changed its hiring policy.

Recruiters discouraged refugees, say investigators

The Justice Department’s civil rights division launched an investigation into SpaceX in 2020 after learning about the company’s alleged discriminatory hiring practices.

The inquiry discovered that SpaceX “failed to fairly consider or hire asylees and refugees because of their citizenship status and imposed what amounted to a ban on their hire regardless of their qualification, in violation of federal law,” Assistant Attorney General Kristen Clarke said in a statement.

“Our investigation also found that SpaceX recruiters and high-level officials took actions that actively discouraged asylees and refugees from seeking work opportunities at the company,” Clarke said.

According to data SpaceX provided to the Justice Department, out of more than 10,000 hires between September 2018 and May 2022, SpaceX hired only one person described as an asylee on his application.

The company hired the applicant about four months after the Justice Department notified it about its investigation, according to the lawsuit.

No refugees were hired during this period.

“Put differently, SpaceX’s own hiring records show that SpaceX repeatedly rejected applicants who identified as asylees or refugees because it believed that they were ineligible to be hired due to” export regulations, the lawsuit says.

On one occasion, a recruiter turned down an asylee “who had more than nine years of relevant engineering experience and had graduated from Georgia Tech University,” the lawsuit says.

Suit seeks penalties, change

SpaceX, based in Hawthorne, California, designs, manufactures and launches advanced rockets and spacecraft.

The Justice Department’s lawsuit asks an administrative judge to order SpaceX to “cease and desist” its alleged hiring practices and seeks civil penalties and policy changes.

AI Firms Under Fire for Allegedly Infringing on Copyrights

New artificial intelligence tools that write human-like prose and create stunning images have taken the world by storm. But these awe-inspiring technologies are not creating something out of nothing; they’re trained on lots and lots of data, some of which come from works under copyright protection.

Now, the writers, artists and others who own the rights to the material used to teach ChatGPT and other generative AI tools want to stop what they see as blatant copyright infringement of mass proportions.

With billions of dollars at stake, U.S. courts will most likely have to sort out who owns what, using the 1976 Copyright Act, the same law that has determined who owns much of the content published on the internet.

U.S. copyright law seeks to strike a balance between protecting the rights of content creators and fostering creativity and innovation. Among other things, the law gives content creators the exclusive right to reproduce their original work and to prepare derivative works.

But it also provides for an exception. Known as “fair use,” it permits the use of copyrighted material without the copyright holder’s permission for purposes such as criticism, comment, news reporting, teaching and research.

On the one hand, “we want to allow people who have currently invested time, money, creativity to reap the rewards of what they have done,” said Sean O’Connor, a professor of law at George Mason University. “On the other hand, we don’t want to give them such strong rights that we inhibit the next generation of innovation.”

Is AI ‘scraping’ fair use?

The development of generative AI tools is testing the limits of “fair use,” pitting content creators against technology companies, with the outcome of the dispute promising wide-ranging implications for innovation and society at large.

In the 10 months since ChatGPT’s groundbreaking launch, AI companies have faced a rapidly increasing number of lawsuits over content used to train generative AI tools.  The plaintiffs are seeking damages and want the courts to end the alleged infringement.

In January, three visual artists filed a proposed class-action lawsuit against Stability AI Ltd. and two others in San Francisco, alleging that Stability “scraped” more than 5 billion images from the internet to train its popular image generator Stable Diffusion, without the consent of copyright holders.

Stable Diffusion is a “21st-century collage tool” that “remixes the copyrighted works of millions of artists whose work was used as training data,” according to the lawsuit.

In February, stock photo company Getty Images filed its own lawsuit against Stability AI in both the United States and Britain, saying the company copied more than 12 million photos from Getty’s collection without permission or compensation.

In June, two U.S.-based authors sued OpenAI, the creator of ChatGPT, claiming the company’s training data included nearly 300,000 books pulled from illegal “shadow library” websites that offer copyrighted books.

“A large language model’s output is entirely and uniquely reliant on the material in its training dataset,” the lawsuit says.

Last month, American comedian and author Sarah Silverman and two other writers sued OpenAI and Meta, the parent company of Facebook, over the same claims, saying their chatbots were trained on books that had been illegally acquired.

The lawsuit against OpenAI includes what it describes as “very accurate summaries” of the authors’ books generated by ChatGPT, suggesting the company illegally “copied” and then used them to train the chatbot.

The artificial intelligence companies have rejected the allegations and asked the courts to dismiss the lawsuits.

In a court filing in April, Stability AI, research lab Midjourney and online art gallery DeviantArt wrote that visual artists who sue “fail to identify a single allegedly infringing output image, let alone one that is substantially similar to any of their copyrighted works.”

For its part, OpenAI has defended its use of copyrighted material as “fair use,” saying it pulled the works from publicly available datasets on the internet.

The cases are slowly making their way through the courts. It is too early to say how judges will decide.

Last month, a federal judge in San Francisco said he was inclined to toss out most of a lawsuit brought by the three artists against Stability AI but indicated that the claim of direct infringement may continue.

“The big question is fair use,” said Robert Brauneis, a law professor and co-director of the Intellectual Property Program at George Washington University. “I would not be surprised if some of the courts came out in different ways, that some of the cases said, ‘Yes, fair use.’ And others said, ‘No.’”

If the courts are split, the question could eventually go to the Supreme Court, Brauneis said.

Assessing copyright claims

Training generative AI tools to create new works raises two legal questions: Is the data use authorized? And is the new work it creates “derivative” or “transformative”?

The answer is not clear-cut, O’Connor said.

“On the one hand, what the supporters of the generative AI models are saying is that they are acting not much differently than we as humans would do,” he said. “When we read books, watch movies, listen to music, and if we are talented, then we use those to train ourselves as models.

“The counterargument is that … it is categorically different from what humans do when they learn how to become creative themselves.”

While artificial intelligence companies claim their use of the data is fair, O’Connor said they still have to prove that the use was authorized.

“I think that’s a very close call, and I think they may lose on that,” he said.

On the other hand, the AI models can probably avoid liability for generating content that “seems sort of the style of a current author” but is not the same.

“That claim is probably not going to succeed,” O’Connor said. “It will be seen as just a different work.”

But Brauneis said content creators have a strong claim: The AI-generated output will likely compete with the original work.

Imagine you’re a magazine editor who wants an illustration to accompany an article about a particular bird, Brauneis suggested. You could do one of two things: Commission an artist or ask a generative AI tool like Stable Diffusion to create it for you. After a few attempts with the latter, you’ll probably get an image that you can use.

“One of the most important questions to ask about in fair use is, ‘Is this use a substitute, or is it competing with the work of art that is being copied?’” Brauneis said. “And the answer here may be yes. And if it is [competing], that really weighs strongly against fair use.”

This is not the first time that technology companies have been sued over their use of copyrighted material.

In 2005, the Authors Guild filed a class-action lawsuit against Google and three university libraries over Google’s digital books project, alleging “massive copyright infringement.”

In 2015, a federal appeals court ruled that the project, by then renamed Google Books, was protected under the fair use doctrine.

In 2007, Viacom sued both Google and YouTube for allowing users to upload and view copyrighted material owned by Viacom, including complete episodes of TV shows. The case was later settled out of court.

For Brauneis, the current “Wild West era of creating AI models” recalls YouTube’s freewheeling early days.

“They just wanted to get viewers, and they were willing to take a legal risk to do that,” Brauneis said. “That’s not the way YouTube operates now. YouTube has all sorts of precautions to identify copyrighted content that has not been permitted to be placed on YouTube and then to take it down.”

Artificial intelligence companies may make a similar pivot.

They may have justified using copyrighted material to test out their technology. But now that their models are working, they “may be willing to sit down and think about how to license content,” Brauneis said.

India Lands Craft on Moon’s Unexplored South Pole

An Indian spacecraft has landed on the moon, becoming the first craft to touch down on the lunar surface’s south pole, the country’s space agency said.

India’s attempt to land on the moon Wednesday came days after Russia’s Luna-25 lander, also headed for the unexplored south pole, crashed into the moon.  

It was India’s second attempt to reach the south pole — four years ago, India’s lander crashed during its final approach.  

India has become the fourth country to achieve what is called a “soft-landing” on the moon – a feat accomplished by the United States, China and the former Soviet Union.  

However, none of those lunar missions landed at the south pole. 

The south polar region, where the terrain is rough and rugged, has never been explored.

The current mission, called Chandrayaan-3, blasted into space on July 14.

Kenyan Court Gives Meta and Sacked Moderators 21 Days to Pursue Settlement  

A Kenyan court has given Facebook’s parent company, Meta, and the content moderators who are suing it for unfair dismissal 21 days to resolve their dispute out of court, a court order showed on Wednesday.

The 184 content moderators are suing Meta and two subcontractors after they say they lost their jobs with one of the firms, Sama, for organizing a union.

The plaintiffs say they were then blacklisted from applying for the same roles at the second firm, Luxembourg-based Majorel, after Facebook switched contractors.

“The parties shall pursue an out of court settlement of this petition through mediation,” said the order by the Employment and Labour Relations Court, which was signed by lawyers for the plaintiffs, Meta, Sama and Majorel.

Kenya’s former chief justice, Willy Mutunga, and Hellen Apiyo, the acting commissioner for labor, will serve as mediators, the order said. If the parties fail to resolve the case within 21 days, the case will proceed before the court, it said.

Meta, Sama and Majorel did not immediately respond to requests for comment.

A judge ruled in April that Meta could be sued by the moderators in Kenya, even though it has no official presence in the east African country.

The case could have implications for how Meta works with content moderators globally. The U.S. social media giant works with thousands of moderators around the world, who review graphic content posted on its platform.

Meta has also been sued in Kenya by a former moderator over accusations of poor working conditions at Sama, and by two Ethiopian researchers and a rights institute, which accuse it of letting violent and hateful posts from Ethiopia flourish on Facebook.

Those cases are ongoing.

Meta said in May 2022, in response to the first case, that it required partners to provide industry-leading conditions. On the Ethiopia case, it said in December that hate speech and incitement to violence were against the rules of Facebook and Instagram.

Meta Rolls Out Web Version of Threads 

Meta Platforms on Tuesday launched the web version of its new text-first social media platform Threads, in a bid to retain professional users and gain an edge over rival X, formerly Twitter.

Threads users will now be able to access the microblogging platform by logging in to its website from their computers, the Facebook and Instagram owner said.

The widely anticipated rollout could help Threads gain broader acceptance among power users such as brands, company accounts, advertisers and journalists, who can now take advantage of the platform on a bigger screen.

Threads, which crossed 100 million sign-ups for the app within five days of its launch on July 5, saw a decline in its popularity as users returned to the more familiar platform X after the initial rush.

In just over a month, daily active users on the Android version of the Threads app dropped to 10.3 million from a peak of 49.3 million, according to an August 10 report by analytics platform Similarweb.

The company will be adding more functionality to the web experience in the coming weeks, Meta said.

Europe’s Sweeping Rules for Tech Giants Are About to Kick In

Google, Facebook, TikTok and other Big Tech companies operating in Europe are facing one of the most far-reaching efforts to clean up what people encounter online.

The first phase of the European Union’s groundbreaking new digital rules will take effect this week. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc — long a global leader in cracking down on tech giants.

The DSA, which the biggest platforms must start following Friday, is designed to keep users safe online and stop the spread of harmful content that’s either illegal or violates a platform’s terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans’ fundamental rights like privacy and free speech.

Some online platforms, which could face billions in fines if they don’t comply, have already started making changes.

Here’s a look at what’s happening this week:

Which platforms are affected?

So far, 19. They include eight social media platforms: Facebook, TikTok, Twitter, YouTube, Instagram, LinkedIn, Pinterest and Snapchat.

There are five online marketplaces: Amazon, Booking.com, Google Shopping, China’s Alibaba AliExpress and Germany’s Zalando.

Mobile app stores Google Play and Apple’s App Store are subject to the rules, as are Google’s Search and Microsoft’s Bing search engine.

Google Maps and Wikipedia round out the list.

What about other online companies?

The EU’s list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU’s population — will face the DSA’s highest level of regulation.

Brussels insiders, however, have pointed to some notable omissions from the EU’s list, like eBay, Airbnb, Netflix and even PornHub. The list isn’t definitive, and other platforms may be added later.

Any business providing digital services to Europeans will eventually have to comply with the DSA. Smaller businesses will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.

Citing uncertainty over the new rules, Meta Platforms has held off launching its Twitter rival, Threads, in the EU.

What’s changing?

Platforms have started rolling out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly and objectively.

Amazon opened a new channel for reporting suspected illegal products and is providing more information about third-party merchants.

TikTok gave users an “additional reporting option” for content, including advertising, that they believe is illegal. Categories such as hate speech and harassment, suicide and self-harm, misinformation, and frauds and scams will help them pinpoint the problem.

Then, a “new dedicated team of moderators and legal specialists” will determine whether flagged content either violates its policies or is unlawful and should be taken down, according to TikTok, whose parent company is China’s ByteDance.

TikTok says the reason for a takedown will be explained to the person who posted the material and the one who flagged it, and decisions can be appealed.

TikTok users can turn off systems that recommend videos based on what a user has previously viewed. Such systems have been blamed for leading social media users to increasingly extreme posts. If personalized recommendations are turned off, TikTok’s feeds will instead suggest videos to European users based on what’s popular in their area and around the world.
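The switch TikTok describes amounts to swapping one ranking signal for another. The sketch below is a purely hypothetical Python illustration — none of these names come from TikTok’s actual systems — of a feed ranked by watch-history scores when personalization is on, and by regional, then global, popularity when it is off:

```python
# Purely hypothetical sketch of the two feed modes described above.
# Nothing here is TikTok's API; it only illustrates swapping a
# watch-history ranking for a popularity-based one.

from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    regional_views: int   # popularity in the viewer's area
    global_views: int     # popularity worldwide

def rank_feed(videos, personalized, history_scores=None):
    if personalized and history_scores:
        # Personalized mode: order by a score derived from viewing history.
        return sorted(videos, key=lambda v: history_scores.get(v.video_id, 0),
                      reverse=True)
    # Non-personalized mode: local popularity first, then global popularity.
    return sorted(videos, key=lambda v: (v.regional_views, v.global_views),
                  reverse=True)

feed = [Video("a", 10, 900), Video("b", 500, 100)]
print([v.video_id for v in rank_feed(feed, personalized=False)])  # ['b', 'a']
```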

The DSA prohibits targeting vulnerable categories of people, including children, with ads.

Snapchat said advertisers won’t be able to use personalization and optimization tools for teens in the EU and U.K. Snapchat users who are 18 and older would also get more transparency and control over the ads they see, including “details and insight” on why they’re shown specific ads.

TikTok made similar changes, stopping users 13 to 17 from getting personalized ads “based on their activities on or off TikTok.”

Is there pushback?

Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA’s list of the largest online platforms, arguing that it’s being treated unfairly.

Nevertheless, Zalando is launching content flagging systems for its website even though there’s little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes.

The company has supported the DSA, said Aurelie Caulier, Zalando’s head of public affairs for the EU.

“It will bring loads of positive changes” for consumers, she said. But “generally, Zalando doesn’t have systemic risk [that other platforms pose]. So that’s why we don’t think we fit in that category.”

Amazon has filed a similar case with a top EU court.

What happens if companies don’t follow the rules?

Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. But don’t expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech.

Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they’ll have to let the European Commission, the EU’s executive arm and top digital enforcer, look under the hood to see how their algorithms work.

EU officials “are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they’re also concerned about the way that platforms work and how they contribute to the negative effects,” said Sally Broughton Micova, an associate professor at the University of East Anglia.

That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who’s also academic co-director at the Centre on Regulation in Europe, a Brussels-based think tank.

Under the rules, the biggest platforms will have to identify and assess potential systemic risks and whether they’re doing enough to reduce them. These risk assessments are due by the end of August and then they will be independently audited.

The audits are expected to be the main tool to verify compliance — though the EU’s plan has faced criticism for lacking details that leave it unclear how the process will work.

What about the rest of the world?

Europe’s changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of service to provide more information on “problematic users and content.” Those alterations won’t be limited to Europe, said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia.

“The rules and processes that govern Wikimedia projects worldwide, including any changes in response to the DSA, are as universal as possible. This means that changes to our Terms of Use and Office Actions Policy will be implemented globally,” it said in a statement.

It’s going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren’t isolated to Europe and that social media influencers can have global reach.

The regulations are “dealing with multichannel networks that operate globally. So there is going to be a ripple effect once you have kind of mitigations that get taken into place,” she said.

Meta to Soon Launch Web Version of Threads in Race with X for Users

Meta Platforms is set to roll out the web version of its new text-first social media platform Threads, hoping to gain an edge over X, formerly Twitter, after the initial surge in users waned.

The widely anticipated web version will make Threads more useful for power users like brands, company accounts, advertisers and journalists.

Meta did not give a date for the launch, but Instagram head Adam Mosseri said it could happen soon.

“We are close on web…,” Mosseri said in a post on Threads on Friday. The launch could happen as early as this week, according to a report in the Wall Street Journal.

Threads, which launched as an Android and iOS app on July 5 and gained 100 million users in just five days, saw its popularity drop as users returned to the more familiar platform X after the initial rush to try Meta’s new offering. 

In just over a month, its daily active users on the Android app dropped to 10.3 million from a peak of 49.3 million, according to a report by analytics platform Similarweb dated Aug. 10.

Meanwhile, management is moving quickly to launch new features. Threads now offers the ability to set post notifications for accounts and to view posts in a chronological feed.

It will soon roll out an improved search that could allow users to search for specific posts and not just accounts. 

Biden Administration Announces More New Funding for Rural Broadband Infrastructure

The Biden administration on Monday continued its push toward internet-for-all by 2030, announcing about $667 million in new grants and loans to build more broadband infrastructure in the rural U.S.

“With this investment, we’re getting funding to communities in every corner of the country because we believe that no kid should have to sit in the back of a mama’s car in a McDonald’s parking lot in order to do homework,” said Mitch Landrieu, the White House’s infrastructure coordinator, in a call with reporters.

The 37 new recipients represent the fourth round of funding under the program, dubbed ReConnect by the U.S. Department of Agriculture. Another 37 projects received $771.4 million in grants and loans announced in April and June.

The money flowing through federal broadband programs, including what was announced Monday and the $42.5 billion infrastructure program detailed earlier this summer, will lead to a new variation on “the electrification of rural America,” Landrieu said, repeating a common Biden administration refrain.

The largest award went to the Ponderosa Telephone Co. in California, which received more than $42 million to deploy fiber networks in Fresno County. In total, more than 1,200 people, 12 farms and 26 other businesses will benefit from that effort alone, according to USDA.

The telephone cooperatives, counties and telecommunications companies that won the new awards are based in 22 states and the Marshall Islands.

At least half of the households in areas receiving the new funding lack access to internet speeds of 100 megabits per second download and 20 Mbps upload — what the federal government considers “underserved” in broadband terminology. The recipients’ mandate is to build networks that raise those levels to at least 100 Mbps upload and 100 Mbps download speeds for every household, business and farm in their service areas.
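In code terms, the two benchmarks work as simple cutoffs. The following minimal Python sketch is illustrative only — the helper names are not part of any USDA tool — and expresses the “underserved” definition and the build-out target described above:

```python
# Illustrative sketch of the speed thresholds described above; the helper
# names are hypothetical, not part of any USDA program tooling.
# Speeds are in megabits per second (Mbps).

def is_underserved(download_mbps: float, upload_mbps: float) -> bool:
    """'Underserved' means lacking access to at least 100/20 Mbps service."""
    return download_mbps < 100 or upload_mbps < 20

def meets_buildout_target(download_mbps: float, upload_mbps: float) -> bool:
    """Funded networks must deliver at least symmetrical 100/100 Mbps."""
    return download_mbps >= 100 and upload_mbps >= 100

# Example: a 25/3 Mbps connection is underserved and below the target.
assert is_underserved(25, 3)
assert not meets_buildout_target(25, 3)
assert meets_buildout_target(100, 100)
```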

Agriculture Secretary Tom Vilsack said the investments could bring new economic opportunities to farmers, allow people without close access to medical care to see specialist doctors through telemedicine and increase academic offerings, including Advanced Placement courses in high schools.

“The fact that this administration understands and appreciates the need for continued investment in rural America to create more opportunity is something that I’m really excited about,” Vilsack said on the media call.  

Russia Fines Google $32,000 for Videos About Ukraine Conflict

A Russian court on Thursday imposed a $32,000 fine on Google for failing to delete allegedly false information about the conflict in Ukraine.

The move by a magistrate’s court follows similar actions in early August against Apple and the Wikimedia Foundation, which hosts Wikipedia.

According to Russian news reports, the court found that the YouTube video service, which is owned by Google, was guilty of not deleting videos with incorrect information about the conflict — which Russia characterizes as a “special military operation.”

Google was also found guilty of not removing videos that suggested ways of gaining entry to facilities which are not open to minors, news agencies said, without specifying what kind of facilities were involved.

In Russia, a magistrate court typically handles administrative violations and low-level criminal cases.

Since sending troops into Ukraine in February 2022, Russia has enacted an array of measures to punish any criticism or questioning of the military campaign.

Some critics have received severe punishments. Opposition figure Vladimir Kara-Murza was sentenced this year to 25 years in prison for treason stemming from speeches he made against Russia’s actions in Ukraine.

Texas OKs Plan to Mandate Tesla Tech for EV Chargers in State

Texas on Wednesday approved its plan to require companies to include Tesla’s technology in electric vehicle charging stations to be eligible for federal funds, despite calls for more time to re-engineer and test the connectors.

The decision by Texas, the biggest recipient of a $5 billion program meant to electrify U.S. highways, is being closely watched by other states and is a step forward for Tesla CEO Elon Musk’s plans to make its technology the U.S. charging standard.

Tesla’s efforts are facing early tests as some states start rolling out the funds. The company won a slew of projects in Pennsylvania’s first round of funding announced on Monday but none in Ohio last month.

Federal rules require companies to offer the rival Combined Charging System, or CCS, a U.S. standard preferred by the Biden administration, as a minimum to be eligible for the funds.

But individual states can add their own requirements on top of CCS before distributing the federal funds at a local level.

Ford Motor and General Motors’ announcement about two months ago that they planned to adopt Tesla’s North American Charging Standard, or NACS, sent shockwaves through the industry and prompted a number of automakers and charging companies to embrace the technology.

In June, Reuters reported that Texas, which will receive and deploy $407.8 million over five years, planned to require companies to include Tesla’s plugs. Washington state has discussed similar plans, and Kentucky has mandated it.

Florida, another major recipient of funds, recently revised its plans, saying it would mandate NACS one year after standards body SAE International, which is reviewing the technology, formally recognizes it. 

Some charging companies wrote to the Texas Transportation Commission opposing the requirement in the first round of funds. They cited concerns that supply chain and certification issues with Tesla’s connectors could put the successful deployment of EV chargers at risk.

That forced Texas to defer a vote on the plan twice as it sought to understand NACS and its implications, before the commission voted unanimously to approve the plan on Wednesday.

“The two-connector approach being proposed will help assure coverage of a minimum of 97% of the current, over 168,000 electric vehicles with fast charge ports in the state,” Humberto Gonzalez, a director at Texas’ department of transportation, said while presenting the state’s plan to the commissioners.

Musk’s X Delays Access to Content on Reuters, NY Times, Social Media Rivals

Social media company X, formerly known as Twitter, delayed access to links to content on the Reuters and New York Times websites as well as rivals like Bluesky, Facebook and Instagram, according to a Washington Post report on Tuesday.

Clicking a link on X to one of the affected websites resulted in a delay of about five seconds before the webpage loaded, The Washington Post reported, citing tests it conducted on Tuesday. Reuters also saw a similar delay in tests it ran.

By late Tuesday afternoon, X appeared to have eliminated the delay. When contacted for comment, X confirmed the delay was removed but did not elaborate.

Billionaire Elon Musk, who bought Twitter in October, has previously lashed out at news organizations and journalists who have reported critically on his companies, which include Tesla and SpaceX. Twitter has previously prevented users from posting links to competing social media platforms.

Reuters could not establish the precise time when X began delaying links to some websites.

A user on Hacker News, a tech forum, posted about the delay earlier on Tuesday and wrote that X began delaying links to the New York Times on Aug. 4. On that day, Musk criticized the publication’s coverage of South Africa and accused it of supporting calls for genocide. Reuters has no evidence that the two events are related.

A spokesperson for the New York Times said it has not received an explanation from X about the link delay.

“While we don’t know the rationale behind the application of this time delay, we would be concerned by targeted pressure applied to any news organization for unclear reasons,” the spokesperson said on Tuesday.

A Reuters spokesperson said: “We are aware of the report in the Washington Post of a delay in opening links to Reuters stories on X. We are looking into the matter.”

Bluesky, an X rival that has Twitter co-founder Jack Dorsey on its board, did not reply to a request for comment.

Meta, which owns Facebook and Instagram, did not immediately respond to a request for comment.

Google to Train 20,000 Nigerians in Digital Skills

Google plans to train 20,000 Nigerian women and youth in digital skills and provide a grant of $1.6 million to help the government create 1 million digital jobs in the country, its Africa executives said on Tuesday. 

Nigeria plans to create digital jobs for its teeming youth population, Vice President Kashim Shettima told Google Africa executives during a meeting in Abuja. Shettima did not provide a timeline for creating the jobs. 

Google Africa executives said a grant from its philanthropic arm in partnership with Data Science Nigeria and the Creative Industry Initiative for Africa will facilitate the program. 

Shettima said Google’s initiative aligned with the government’s commitment to increase youth participation in the digital economy. The government is also working with the country’s banks on the project, Shettima added. 

Google director for West Africa Olumide Balogun said the company would commit funds and provide digital skills to women and young people in Nigeria and also enable startups to grow, which will create jobs. 

Google is committed to investing in digital infrastructure across Africa, Charles Murito, Google Africa’s director of government relations and public policy, said during the meeting, adding that digital transformation can be a job enabler. 

Chinese Surveillance Firm Selling Cameras With ‘Skin Color Analytics’

IPVM, a U.S.-based security and surveillance industry research group, says the Chinese surveillance equipment maker Dahua is selling cameras with what it calls a “skin color analytics” feature in Europe, raising human rights concerns. 

In a report released on July 31, IPVM said “the company defended the analytics as being a ‘basic feature of a smart security solution.'” The report is behind a paywall, but IPVM provided a copy to VOA Mandarin. 

Dahua’s ICC Open Platform guide for “human body characteristics” includes “skin color/complexion,” according to the report. In what Dahua calls a “data dictionary,” the company says that the “skin color types” its analytic tools would target are “yellow,” “black” and “white.” VOA Mandarin verified this on Dahua’s Chinese website.

The IPVM report also says that skin color detection is mentioned in the “Personnel Control” category, a feature Dahua touts as part of its Smart Office Park solution intended to provide security for large corporate campuses in China.  

Charles Rollet, co-author of the IPVM report, told VOA Mandarin by phone on August 1, “Basically what these video analytics do is that, if you turn them on, then the camera will automatically try and determine the skin color of whoever passes, whoever it captures in the video footage. 

“So that means the camera is going to be guessing or attempting to determine whether the person in front of it … has black, white or yellow — in their words — skin color,” he added.  

VOA Mandarin contacted Dahua for comment but did not receive a response. 

The IPVM report said that Dahua is selling cameras with the skin color analytics feature in three European nations, each with a recent history of racial tension: Germany, France and the Netherlands.

‘Skin color is a basic feature’

Dahua said its skin tone analysis capability was an essential function in surveillance technology.  

 In a statement to IPVM, Dahua said, “The platform in question is entirely consistent with our commitments to not build solutions that target any single racial, ethnic, or national group. The ability to generally identify observable characteristics such as height, weight, hair and eye color, and general categories of skin color is a basic feature of a smart security solution.”  

IPVM said the company had previously denied offering such a feature, and that skin color detection is uncommon in mainstream surveillance products.

In many Western nations, facial recognition surveillance technologies have long drawn controversy over errors tied to skin color. Identifying skin color in surveillance applications raises human rights and civil rights concerns.

“So it’s unusual to see it for skin color because it’s such a controversial and ethically fraught field,” Rollet said.  

Anna Bacciarelli, technology manager at Human Rights Watch (HRW), told VOA Mandarin that Dahua technology should not contain skin tone analytics.   

“All companies have a responsibility to respect human rights, and take steps to prevent or mitigate any human rights risks that may arise as a result of their actions,” she said in an email.

“Surveillance software with skin tone analytics poses a significant risk to the right to equality and non-discrimination, by allowing camera owners and operators to racially profile people at scale — likely without their knowledge, infringing privacy rights — and should simply not be created or sold in the first place.”  

Dahua denied that its surveillance products are designed to enable racial identification. On the website of its U.S. company, Dahua says, “contrary to allegations that have been made by certain media outlets, Dahua Technology has not and never will develop solutions targeting any specific ethnic group.” 

However, in February 2021, IPVM and the Los Angeles Times reported that Dahua provided a video surveillance system with “real-time Uyghur warnings” to the Chinese police that included eyebrow size, skin color and ethnicity.  

IPVM’s 2018 statistical report shows that since 2016, Dahua and another Chinese video surveillance company, Hikvision, have won contracts worth $1 billion from the government of China’s Xinjiang region, a center of Uyghur life.

The U.S. Federal Communications Commission determined in 2022 that the products of Chinese technology companies such as Dahua and Hikvision, which have close ties to Beijing, posed a threat to U.S. national security.

The FCC banned sales of these companies’ products in the U.S. “for the purpose of public safety, security of government facilities, physical security surveillance of critical infrastructure, and other national security purposes,” but not for other purposes.  

Before the U.S. sales bans, Hikvision and Dahua ranked first and second among global surveillance and access control firms, according to The China Project.  

‘No place in a liberal democracy’

On June 14, the European Parliament approved amendments to the EU’s draft Artificial Intelligence Act, a step toward completely banning the use of facial recognition systems in public places.

“We know facial recognition for mass surveillance from China; this technology has no place in a liberal democracy,” Svenja Hahn, a German member of the European Parliament from the Renew Europe group, told Politico.

Bacciarelli of HRW said in an email she “would seriously doubt such racial profiling technology is legal under EU data protection and other laws. The General Data Protection Regulation, a European Union regulation on information privacy, limits the collection and processing of sensitive personal data, including personal data revealing racial or ethnic origin and biometric data, under Article 9. Companies need to make a valid, lawful case to process sensitive personal data before deployment.”

“The current text of the draft EU AI Act bans intrusive and discriminatory biometric surveillance tech, including real-time biometric surveillance systems; biometric systems that use sensitive characteristics, including race and ethnicity data; and indiscriminate scraping of CCTV data to create facial recognition databases,” she said.  

In Western countries, companies are developing AI software for identifying race primarily as a marketing tool for selling to diverse consumer populations. 

The Wall Street Journal reported in 2020 that American cosmetics company Revlon had used recognition software from AI start-up Kairos to analyze how consumers of different ethnic groups use cosmetics, raising concerns among researchers that racial recognition could lead to discrimination.  

The U.S. government has long prohibited sectors such as healthcare and banking from discriminating against customers based on race. IBM, Google and Microsoft have restricted the provision of facial recognition services to law enforcement.  

Twenty-four states, counties and municipal governments in the U.S. have prohibited government agencies from using facial recognition surveillance technology. New York City, Baltimore, and Portland, Oregon, have even restricted the use of facial recognition in the private sector.  

Some civil rights activists have argued that racial identification technology is error-prone and could have adverse consequences for those being monitored. 

Rollet said, “If the camera is filming at night or if there are shadows, it can misclassify people.”  

Caitlin Chin is a fellow at the Center for Strategic and International Studies, a Washington think tank where she researches technology regulation in the United States and abroad. She emphasized that while Western technology companies mainly use facial recognition for business, Chinese technology companies are often happy to assist government agencies in monitoring the public.  

She told VOA Mandarin in an August 1 video call, “So this is something that’s both very dehumanizing but also very concerning from a human rights perspective, in part because if there are any errors in this technology that could lead to false arrests, it could lead to discrimination, but also because the ability to sort people by skin color on its own almost inevitably leads to people being discriminated against.”  

She also said that in general, especially when it comes to law enforcement and surveillance, people with darker skin have been disproportionately tracked and disproportionately surveilled, “so these Dahua cameras make it easier for people to do that by sorting people by skin color.”  

China to Require All Apps to Share Business Details in New Oversight Push

China will require all mobile app providers in the country to file business details with the government, its information ministry said, marking Beijing’s latest effort to keep the industry on a tight leash. 

The Ministry of Industry and Information Technology (MIIT) said late on Tuesday that apps without proper filings will be punished after a grace period that ends in March next year, a move that experts say could restrict the number of apps available and hit small developers hard.

You Yunting, a lawyer with Shanghai-based DeBund Law Offices, said the order is effectively requiring approvals from the ministry. The new rule is primarily aimed at combating online fraud but it will impact all apps in China, he said. 

Rich Bishop, co-founder of app publishing firm AppInChina, said the new rule is also likely to affect foreign-based developers that have been able to publish their apps easily through Apple’s App Store without showing any documentation to the Chinese government.

Bishop said that in order to comply with the new rules, app developers now must either have a company in China or work with a local publisher.  

Apple did not immediately reply to a request for comment. 

The iPhone maker pulled more than 100 artificial intelligence (AI) apps from its App Store last week to comply with regulations after China introduced a new licensing regime for generative AI apps.

The ministry’s notice also said entities “engaged in internet information services through apps in such fields as news, publishing, education, film and television and religion should also submit relevant documents.” 

The requirement could affect the availability of popular social media apps such as X, Facebook and Instagram. Use of such apps is not allowed in China, but they can still be downloaded from app stores, enabling Chinese users to access them when traveling overseas.

China already requires mobile games to obtain licenses before they launch in the country, and it had purged tens of thousands of unlicensed games from various app stores in 2020. 

Tencent’s WeChat, China’s most popular online social platform, said on Wednesday that mini apps, apps that can be opened within WeChat, must also follow the new rules. 

The company said that new mini apps must complete the filing before launch starting in September, while existing mini apps have until the end of March.

 

US Launches Contest to Use AI to Prevent Government System Hacks

The White House on Wednesday said it had launched a multimillion-dollar cyber contest to spur use of artificial intelligence to find and fix security flaws in U.S. government infrastructure, in the face of growing use of the technology by hackers for malicious purposes.  

“Cybersecurity is a race between offense and defense,” said Anne Neuberger, the U.S. government’s deputy national security adviser for cyber and emerging technology.

“We know malicious actors are already using AI to accelerate identifying vulnerabilities or build malicious software,” she added in a statement to Reuters.

Numerous U.S. organizations, from health care groups to manufacturing firms and government institutions, have been the target of hacking in recent years, and officials have warned of future threats, especially from foreign adversaries.  

Neuberger’s comments about AI echo those Canada’s cybersecurity chief Samy Khoury made last month. He said his agency had seen AI being used for everything from creating phishing emails and writing malicious computer code to spreading disinformation.

The two-year contest includes around $20 million in rewards and will be led by the Defense Advanced Research Projects Agency, the U.S. government body in charge of creating technologies for national security, the White House said.

Google, Anthropic, Microsoft, and OpenAI — the U.S. technology firms at the forefront of the AI revolution — will make their systems available for the challenge, the government said.

The contest signals official attempts to tackle an emerging threat that experts are still trying to fully grasp. In the past year, U.S. firms have launched a range of generative AI tools such as ChatGPT that allow users to create convincing videos, images, texts, and computer code. Chinese companies have launched similar models to catch up.

Experts say such tools could make it far easier to, for instance, conduct mass hacking campaigns or create fake profiles on social media to spread false information and propaganda.  

“Our goal with the DARPA AI challenge is to catalyze a larger community of cyber defenders who use the participating AI models to race faster – using generative AI to bolster our cyber defenses,” Neuberger said.

The Open Source Security Foundation (OpenSSF), a U.S. group of experts trying to improve open source software security, will be in charge of ensuring the “winning software code is put to use right away,” the U.S. government said. 

US to Restrict High-Tech Investment in China

U.S. President Joe Biden is planning Wednesday to impose restrictions on U.S. investments in some high-tech industries in China.

Biden’s expected executive order could again heighten tensions between the U.S., the world’s biggest economy, and No. 2 China after a period in which leaders of the two countries have held several discussions aimed at airing their differences and seeking common ground.

The new restrictions would limit U.S. investments in such high-tech sectors in China as quantum computing, artificial intelligence and advanced semiconductors, but apparently not in the broader Chinese economy, which recently has been struggling to advance.

In a trip to China in July, Treasury Secretary Janet Yellen told Chinese Premier Li Qiang, “The United States will, in certain circumstances, need to pursue targeted actions to protect its national security. And we may disagree in these instances.”

National Security Adviser Jake Sullivan said in April that, in trying to protect its own security interests in the Indo-Pacific region and across the globe, the U.S. has implemented “carefully tailored restrictions on the most advanced semiconductor technology exports” to China.

“Those restrictions are premised on straightforward national security concerns,” he said. “Key allies and partners have followed suit, consistent with their own security concerns.”

Sullivan said the restrictions are not, as Beijing has claimed, a “technology blockade.”

Zoom, Symbol of Remote Work Revolution, Wants Workers Back in Office Part-time

The company whose name became synonymous with remote work is joining the growing return-to-office trend.

Zoom, the video conferencing pioneer, is asking employees who live within a 50-mile radius of its offices to work onsite two days a week, a company spokesperson confirmed in an email. The statement said the company has decided that “a structured hybrid approach – meaning employees that live near an office need to be onsite two days a week to interact with their teams – is most effective for Zoom.”

The new policy, which will be rolled out in August and September, was first reported by the New York Times, which said Zoom CEO Eric Yuan fielded questions from employees unhappy with the new policy during a Zoom meeting last week.

Zoom, based in San Jose, California, saw explosive growth during the first year of the COVID-19 pandemic as companies scrambled to shift to remote work, and even families and friends turned to the platform for virtual gatherings. But that growth has stagnated as the pandemic threat has ebbed.

Shares of Zoom Video Communications Inc. have tumbled hard since peaking early in the pandemic, from $559 apiece in October 2020 to below $70 on Tuesday. Shares have slumped more than 10% since the start of August. In February, Zoom laid off about 1,300 people, or about 15% of its workforce.

Google, Salesforce and Amazon are among major companies that have also stepped up their return-to-office policies despite a backlash from some employees.

Similarly to Zoom, many companies are asking their employees to show up to the office only part time, as hybrid work shapes up to be a lasting legacy of the pandemic. Since January, the average weekly office occupancy rate in 10 major U.S. cities has hovered around 50%, dipping below that threshold during the summer months, according to Kastle Systems, which measures occupancy through entry swipes.

Pope Warns Against Potential Dangers of Artificial Intelligence

Pope Francis on Tuesday called for a global reflection on the potential dangers of artificial intelligence (AI), noting the new technology’s “disruptive possibilities and ambivalent effects.”  

Francis, who is 86 and said in the past he does not know how to use a computer, issued the warning in a message for the next World Day of Peace of the Catholic Church, falling on New Year’s Day.  

The Vatican released the message well in advance, as is customary.

The pope “recalls the need to be vigilant and to work so that a logic of violence and discrimination does not take root in the production and use of such devices, at the expense of the most fragile and excluded,” it reads.  

“The urgent need to orient the concept and use of artificial intelligence in a responsible way, so that it may be at the service of humanity and the protection of our common home, requires that ethical reflection be extended to the sphere of education and law,” it adds.  

Back in 2015, Francis acknowledged being “a disaster” with technology, but he has also called the internet, social networks and text messages “a gift of God,” provided that they are used wisely.  

In 2020, the Vatican joined forces with tech giants Microsoft and IBM to promote the ethical development of AI and call for regulation of intrusive technologies such as facial recognition.

US Tech Groups Back TikTok in Challenge to Montana State Ban

Two tech groups on Monday backed TikTok Inc in its lawsuit seeking to block enforcement of a Montana state ban on use of the short video sharing app before it takes effect on January 1.

NetChoice, a national trade association that includes major tech platforms, and Chamber of Progress, a tech-industry coalition, said in a joint court filing that “Montana’s effort to cut Montanans off from the global network of TikTok users ignores and undermines the structure, design, and purpose of the internet.”

TikTok, which is owned by China’s ByteDance, filed a suit in May seeking to block the first-of-its-kind U.S. state ban on several grounds, arguing it violates the First Amendment free speech rights of the company and users.

Analysts Say Use of Spyware During Conflict Is Chilling

The use of sophisticated spyware to hack into the devices of journalists and human rights defenders during a period of conflict in Armenia has alarmed analysts.

A joint investigation by digital rights organizations, including Amnesty International, found evidence of the surveillance software on devices belonging to 12 people, including a former government spokesperson.

The apparent targeting took place between October 2020 and December 2022, including during key moments in the Nagorno-Karabakh conflict, Amnesty reported.

The region has been at the center of a decades-long dispute between Azerbaijan and Armenia, which have fought two wars over the mountainous territory.

Elina Castillo Jiménez, a digital surveillance researcher at Amnesty International’s Security Laboratory, told VOA that her organization’s research — published earlier this year — confirmed that at least a dozen public figures in Armenia were targeted, including a former spokesperson for the Ministry of Foreign Affairs and a representative of the United Nations.

Others had reported on the conflict, including for VOA’s sister network Radio Free Europe/Radio Liberty; provided analysis; had sensitive conversations related to the conflict; or in some cases worked for organizations known to be critical of the government, the researchers found.

“The conflict may have been one of the reasons for the targeting,” Castillo said.

If, as Amnesty and others suspect, the timing is connected to the conflict, it would mark the first documented use of Pegasus in the context of an international conflict.

Researchers have found previously that Pegasus was used extensively in Azerbaijan to target civil society representatives, opposition figures and journalists, including the award-winning investigative reporter Khadija Ismayilova.

VOA reached out via email to the embassies of Armenia and Azerbaijan in Washington for comment but as of publication had not received a response.

Pegasus is spyware marketed to governments by the Israeli digital security company NSO Group. The global investigative collaboration, The Pegasus Project, has been tracking the spyware’s use against human rights defenders, critics and others.

Since 2021, the U.S. government has imposed measures on NSO over the hacking revelations, saying its tools were used for “transnational repression.” U.S. actions include export limits on NSO Group and a March 2023 executive order that restricts the U.S. government’s use of commercial spyware like Pegasus.

VOA reached out to the NSO Group for comment but as of publication had not received a response.

Castillo said that Pegasus has the capability to infiltrate both iOS and Android phones.

Pegasus spyware is a “zero-click” mobile surveillance program. It can attack devices without any interaction from the individual who is targeted, gaining complete control over a phone or laptop and in effect transforming it into a spying tool against its owner, she said.

“The way that Pegasus operates is that it is capable of using elements within your iPhones or Androids,” said Castillo. “Imagine that it embed(s) something in your phone, and through that, then it can take control over it.”

The implications of the spyware are not lost on Ruben Melikyan. The lawyer, based in Armenia’s capital, Yerevan, is among those whose devices were infected.

An outspoken government critic, Melikyan has represented a range of opposition parliamentarians and activists.

The lawyer said he has concerns that the software could have allowed hackers to gain access to his data and information related to his clients.

“As a lawyer, my phone contained confidential information, and its compromise made me uneasy, particularly regarding the protection of my current and former clients’ rights,” he said.

Melikyan told VOA that his phone had been targeted twice: in May 2021, when he was monitoring Armenian elections, and again during a tense period in the Armenia-Azerbaijan conflict in December 2022.

Castillo said she believes targeting individuals with Pegasus is a violation of “international humanitarian law” and that evidence shows it is “an absolute menace to people doing human rights work.”

She said the researchers are not able to confirm who commissioned the use of the spyware, but “we do believe that it is a government customer.”

When the findings were released this year, an NSO Group spokesperson said it was unable to comment but that earlier allegations of “improper use of our technologies” had led to the termination of contracts.

Amnesty International researchers are also investigating the potential use of another commercial spyware product, Predator, which was found on Armenian servers.

“We have the evidence that suggests that it was used. However, further investigation is needed,” Castillo said, adding that their findings so far suggest that Pegasus is just “one of the threats against journalists and human rights defenders.”

This story originated in VOA’s Armenia Service.

US Mom Blames Face Recognition Technology for Flawed Arrest

A mother is suing the city of Detroit, saying unreliable facial recognition technology led to her being falsely arrested for carjacking while she was eight months pregnant. 

Porcha Woodruff was getting her two children ready for school the morning of February 16 when a half-dozen police officers showed up at her door to arrest her, taking her away in handcuffs, the 32-year-old Detroit woman said in a federal lawsuit.

“They presented her with an arrest warrant for robbery and carjacking, leaving her baffled and assuming it was a joke, given her visibly pregnant state,” her attorney wrote in a lawsuit accusing the city of false arrest. 

The suit, filed Thursday, argues that police relied on facial recognition technology that should not be trusted, given “inherent flaws and unreliability, particularly when attempting to identify Black individuals” such as Woodruff.

Some experts say facial recognition technology is more prone to error when analyzing the faces of people of color.

In a statement Sunday, the Wayne County prosecutor’s office said the warrant that led to Woodruff’s arrest was on solid ground, NBC News reported.

“The warrant was appropriate based upon the facts,” it said.

The case began in late January, when police investigating a reported carjacking by a gunman used imagery from a gas station’s security video to track down a woman believed to have been involved in the crime, according to the suit.

Facial recognition analysis from the video identified Woodruff as a possible match, the suit said.

Woodruff’s picture from a 2015 arrest was in a set of photos shown to the carjacking victim, who picked her out, according to the lawsuit.

Woodruff was freed on bond the day of her arrest and the charges against her were later dropped due to insufficient evidence, the civil complaint maintained. 

“This case highlights the significant flaws associated with using facial recognition technology to identify criminal suspects,” the suit argued.

Woodruff’s suit seeks unspecified financial damages plus legal fees. 

Musk Says Fight with Zuckerberg Will Be Live-Streamed on X

Elon Musk said in a social media post that his proposed cage fight with Meta CEO Mark Zuckerberg would be live-streamed on social media platform X, formerly known as Twitter.

The social media moguls have been egging each other into a mixed martial arts cage match in Las Vegas since June.

“Zuck v Musk fight will be live-streamed on X. All proceeds will go to charity for veterans,” Musk said in a post on X early on Sunday morning, without giving any further details.

Earlier on Sunday, Musk had said on X that he was “lifting weights throughout the day, preparing for the fight,” adding that he did not have time to work out, so he brings the weights to work.

When a user on X asked Musk the point of the fight, Musk responded by saying “It’s a civilized form of war. Men love war.”

Meta did not respond to a Reuters request for comment on Musk’s post. 

The brouhaha began when Musk said in a June 20 post that he was “up for a cage match” with Zuckerberg, who is trained in jiujitsu.

A day later, Zuckerberg, 39, who has posted pictures of matches he has won on his company’s Instagram platform, asked Musk, 51, to “send location” for the proposed throwdown, to which Musk replied “Vegas Octagon,” referring to an events center where mixed martial arts (MMA) championship bouts are held.

Musk then said he would start training if the cage fight took shape.