Google fires 28 workers protesting contract with Israel

New York — Google fired 28 employees following a disruptive sit-down protest over the tech giant’s contract with the Israeli government, a Google spokesperson said Thursday.

The Tuesday demonstration was organized by the group “No Tech for Apartheid,” which has long opposed “Project Nimbus,” Google’s joint $1.2 billion contract with Amazon to provide cloud services to the government of Israel.

Video of the demonstration showed police arresting Google workers in Sunnyvale, California, in the office of Google Cloud CEO Thomas Kurian, according to a post by the advocacy group on X, formerly Twitter.

Kurian’s office was occupied for 10 hours, the advocacy group said.

Workers held signs including “Googlers against Genocide,” a reference to accusations surrounding Israel’s attacks on Gaza.

“No Tech for Apartheid,” which also held protests in New York and Seattle, pointed to an April 12 Time magazine article reporting that a draft contract showed Google billing the Israeli Ministry of Defense more than $1 million for consulting services.

A “small number” of employees “disrupted” a few Google locations, but the protests are “part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” a Google spokesperson said.

“After refusing multiple requests to leave the premises, law enforcement was engaged to remove them to ensure office safety,” the Google spokesperson said. “We have so far concluded individual investigations that resulted in the termination of employment for 28 employees, and will continue to investigate and take action as needed.”

Israel is one of “numerous” governments for which Google provides cloud computing services, the Google spokesperson said.

“This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services,” the Google spokesperson said.

AI-generated fashion models could bring more diversity to industry — or leave it with less

Chicago, Illinois — London-based model Alexsandrah has a twin, but not in the way you’d expect: Her counterpart is made of pixels instead of flesh and blood.

The virtual twin was generated by artificial intelligence and has already appeared as a stand-in for the real-life Alexsandrah in a photo shoot. Alexsandrah, who goes by her first name professionally, in turn receives credit and compensation whenever the AI version of herself gets used — just like a human model.

Alexsandrah says she and her alter-ego mirror each other “even down to the baby hairs.” And it is yet another example of how AI is transforming creative industries — and the way humans may or may not be compensated.

Proponents say the growing use of AI in fashion modeling showcases diversity in all shapes and sizes, allowing consumers to make more tailored purchase decisions that in turn reduce fashion waste from product returns. And digital modeling saves money for companies and creates opportunities for people who want to work with the technology.

But critics raise concerns that digital models may push human models — and other professionals like makeup artists and photographers — out of a job. Unsuspecting consumers could also be fooled into thinking AI models are real, and companies could claim credit for fulfilling diversity commitments without employing actual humans.

“Fashion is exclusive, with limited opportunities for people of color to break in,” said Sara Ziff, a former fashion model and founder of the Model Alliance, a nonprofit aiming to advance workers’ rights in the fashion industry. “I think the use of AI to distort racial representation and marginalize actual models of color reveals this troubling gap between the industry’s declared intentions and their real actions.”  

Women of color in particular have long faced higher barriers to entry in modeling and AI could upend some of the gains they’ve made. Data suggests that women are more likely to work in occupations in which the technology could be applied and are more at risk of displacement than men.

In March 2023, iconic denim brand Levi Strauss & Co. announced that it would be testing AI-generated models produced by Amsterdam-based company Lalaland.ai to add a wider range of body types and underrepresented demographics on its website. But after receiving widespread backlash, Levi clarified that it was not pulling back on its plans for live photo shoots, the use of live models or its commitment to working with diverse models.

“We do not see this (AI) pilot as a means to advance diversity or as a substitute for the real action that must be taken to deliver on our diversity, equity and inclusion goals and it should not have been portrayed as such,” Levi said in its statement at the time.

The company last month said that it has no plans to scale the AI program.

The Associated Press reached out to several other retailers to ask whether they use AI fashion models. Target, Kohl’s and fast-fashion giant Shein declined to comment; Temu did not respond to a request for comment.

Meanwhile, spokespeople for Neiman Marcus, H&M, Walmart and Macy’s said their respective companies do not use AI models, although Walmart clarified that “suppliers may have a different approach to photography they provide for their products, but we don’t have that information.”

Nonetheless, companies that generate AI models are finding demand for the technology, including Lalaland.ai, which Michael Musandu co-founded after growing frustrated by the absence of clothing models who looked like him.

“One model does not represent everyone that’s actually shopping and buying a product,” he said. “As a person of color, I felt this painfully myself.”

Musandu says his product is meant to supplement traditional photo shoots, not replace them. Instead of seeing one model, shoppers could see nine to 12 models using different size filters, which would enrich their shopping experience and help reduce product returns and fashion waste.

The technology is actually creating new jobs, since Lalaland.ai pays humans to train its algorithms, Musandu said.

And if brands “are serious about inclusion efforts, they will continue to hire these models of color,” he added.

London-based model Alexsandrah, who is Black, says her digital counterpart has helped her distinguish herself in the fashion industry. In fact, the real-life Alexsandrah has even stood in for a Black computer-generated model named Shudu, created by Cameron Wilson, a former fashion photographer turned CEO of The Diigitals, a U.K.-based digital modeling agency.

Wilson, who is white and uses they/them pronouns, designed Shudu in 2017; she is described on Instagram as “The World’s First Digital Supermodel.” But critics at the time accused Wilson of cultural appropriation and digital Blackface.

Wilson took the experience as a lesson and transformed The Diigitals to make sure Shudu — who has been booked by Louis Vuitton and BMW — didn’t take away opportunities but instead opened possibilities for women of color. Alexsandrah, for instance, has modeled in-person as Shudu for Vogue Australia, and writer Ama Badu came up with Shudu’s backstory and portrays her voice for interviews.

Alexsandrah said she is “extremely proud” of her work with The Diigitals, which created her own AI twin: “It’s something that even when we are no longer here, the future generations can look back at and be like, ‘These are the pioneers.'”

But for Yve Edmond, a New York City area-based model who works with major retailers to check the fit of clothing before it’s sold to consumers, the rise of AI in fashion modeling feels more insidious.

Edmond worries modeling agencies and companies are taking advantage of models, who are generally independent contractors afforded few labor protections in the U.S., by using their photos to train AI systems without their consent or compensation.

She described one incident in which a client asked to photograph Edmond moving her arms, squatting and walking for “research” purposes. Edmond refused and later felt swindled — her modeling agency had told her she was being booked for a fitting, not to build an avatar.

“This is a complete violation,” she said. “It was really disappointing for me.”

But absent AI regulations, it’s up to companies to be transparent and ethical about deploying AI technology. And Ziff, the founder of the Model Alliance, likens the current lack of legal protections for fashion workers to “the Wild West.”

That’s why the Model Alliance is pushing for legislation like a bill being considered in New York state: a provision of the Fashion Workers Act would require management companies and brands to obtain models’ clear written consent to create or use a model’s digital replica, specify the amount and duration of compensation, and prohibit altering or manipulating a model’s digital replica without consent.

Alexsandrah says that with ethical use and the right legal regulations, AI might open up doors for more models of color like herself. She has let her clients know that she has an AI replica, and she funnels any inquiries for its use through Wilson, whom she describes as “somebody that I know, love, trust and is my friend.” Wilson says they make sure any compensation for Alexsandrah’s AI is comparable to what she would make in person.

Edmond, however, is more of a purist: “We have this amazing Earth that we’re living on. And you have a person of every shade, every height, every size. Why not find that person and compensate that person?”

Instagram blurring nudity in messages to protect teens, fight sexual extortion

LONDON — Instagram says it’s deploying new tools to protect young people and combat sexual extortion, including a feature that will automatically blur nudity in direct messages.

The social media platform said in a blog post Thursday that it’s testing out the features as part of its campaign to fight sexual scams and other forms of “image abuse,” and to make it tougher for criminals to contact teens.

Sexual extortion, or sextortion, involves persuading a person to send explicit photos online and then threatening to make the images public unless the victim pays money or performs sexual favors. Recent high-profile cases include two Nigerian brothers who pleaded guilty to sexually extorting teen boys and young men in Michigan, including one who took his own life, and a Virginia sheriff’s deputy who sexually extorted and kidnapped a 15-year-old girl.

Instagram and other social media companies have faced growing criticism for not doing enough to protect young people. Mark Zuckerberg, the CEO of Instagram’s owner Meta Platforms, apologized to the parents of victims of such abuse during a Senate hearing earlier this year.

Meta, which is based in Menlo Park, California, also owns Facebook and WhatsApp, but the nudity blur feature won’t be added to messages sent on those platforms.

Instagram said scammers often use direct messages to ask for “intimate images.” To counter this, it will soon start testing out a nudity-protection feature for direct messages that blurs any images with nudity “and encourages people to think twice before sending nude images.”

“The feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return,” Instagram said.

The feature will be turned on by default globally for teens under 18. Adult users will get a notification encouraging them to activate it.

Images with nudity will be blurred with a warning, giving users the option to view them. They’ll also get an option to block the sender and report the chat.

People sending direct messages with nudity will get a message reminding them to be cautious when sending “sensitive photos.” They’ll also be informed that they can unsend the photos if they change their mind, but that there’s a chance others may have already seen them.
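
Taken together, the behavior described above amounts to an age-gated default plus a fixed set of responses to a flagged image. Here is a minimal sketch of that logic; every name in it is hypothetical, not Instagram’s actual code or API:

```python
# Illustrative sketch only; all names are hypothetical, not Instagram's API.
from dataclasses import dataclass

@dataclass
class User:
    age: int
    nudity_blur_enabled: bool = False

def notify(user: User, message: str) -> None:
    print(f"notify: {message}")  # stand-in for a real in-app notification

def apply_default_policy(user: User) -> None:
    """Blur is on by default for teens under 18; adults only get a prompt."""
    if user.age < 18:
        user.nudity_blur_enabled = True
    else:
        notify(user, "Consider turning on nudity protection in your DMs.")

def handle_incoming_image(user: User, contains_nudity: bool) -> list[str]:
    """Return the UI responses to an incoming DM image, per the flow above."""
    if contains_nudity and user.nudity_blur_enabled:
        return ["blur_image", "show_warning", "offer_view",
                "offer_block_sender", "offer_report_chat"]
    return ["show_image"]
```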

As with many of Meta’s child-safety tools and policies, critics saw the move as a positive step, but one that does not go far enough.

“I think the tools announced can protect senders, and that is welcome. But what about recipients?” said Arturo Béjar, former engineering director at the social media giant who is known for his expertise in curbing online harassment. He said 1 in 8 teens receives an unwanted advance on Instagram every seven days, citing internal research he compiled while at Meta that he presented in November testimony before Congress. “What tools do they get? What can they do if they get an unwanted nude?”

Béjar said “things won’t meaningfully change” until there is a way for a teen to say they’ve received an unwanted advance, and there is transparency about it.

Instagram said it’s working on technology to help identify accounts that could potentially be engaging in sexual extortion scams, “based on a range of signals that could indicate sextortion behavior.”

To stop criminals from connecting with young people, it’s also taking measures including not showing the “message” button on a teen’s profile to potential sextortion accounts, even if they already follow each other, and testing new ways to hide teens from these accounts.

In January, the FBI warned of a “huge increase” in sextortion cases targeting children — including financial sextortion, where someone threatens to release compromising images unless the victim pays. The targeted victims are primarily boys between the ages of 14 and 17, but the FBI said any child can become a victim. In the six-month period from October 2022 to March 2023, the FBI saw a more than 20% increase in reporting of financially motivated sextortion cases involving minor victims compared to the same period in the previous year.

Indiana aspires to become next great tech center

INDIANAPOLIS, Indiana — Semiconductors, or microchips, are critical to almost everything electronic used in the modern world. In 1990, the United States produced about 40% of the world’s semiconductors. As manufacturing migrated to Asia, U.S. production fell to about 12%.

“During COVID, we got a wake-up call. It was like [a] Sputnik moment,” explained Mark Lundstrom, an engineer who has worked with microchips much of his life. 

The 2020 global coronavirus pandemic slowed production in Asia, creating a ripple through the global supply chain and leading to shortages of everything from phones to vehicles. Lundstrom said increasing U.S. reliance on foreign chip manufacturers exposed a major weakness. 

“We know that AI is going to transform society in the next several years; it requires extremely powerful chips. The most powerful leading-edge chips.”

Today, Lundstrom is the acting dean of engineering at Purdue University in West Lafayette, Indiana, a leader in cutting-edge semiconductor development, which has new importance amid the emerging field of artificial intelligence.

“If we fall behind in AI, the consequences are enormous for the defense of our country, for our economic future,” Lundstrom told VOA. 

Amid the buzz of activity in a laboratory on Purdue’s campus, visitors can get a vision of what the future might look like in microchip technology. 

“The key metrics of the performance of the chips actually are the size of the transistors, the devices, which is the building block of the computer chips,” said Zhihong Chen, director of Purdue’s Birck Nanotechnology Center, where engineers work around the clock to push microchip technology into the future. 

“We are talking about a few atoms in each silicon transistor these days. And this is what this whole facility is about,” Chen said. “We are trying to make the next generation transistors better devices than current technologies. More powerful and more energy-efficient computer chips of the future.” 

Not just RVs anymore

Because of Purdue’s efforts, along with those on other university campuses in the state, Indiana believes it’s an attractive location for manufacturers looking to build new microchip facilities. 

“Purdue University alone, a top four-ranked engineering school, offers more engineers every year than the next top three,” said Eric Holcomb, Indiana’s Republican governor. “When you have access to that kind of talent, when you have access to the cost of doing business in the state of Indiana, that’s why people are increasingly saying, Indiana.” 

Holcomb is in the final year of his eight-year tenure in the state’s top position. He wants to transform Indiana beyond being the recreational vehicle, or RV, capital of the country.

“We produce about plus-80% of all the RV production in North America in one state,” he told VOA. “We are not just living up to our reputation as being the number one manufacturing state per capita in America, but we are increasingly embracing the future of mobility in America.” 

Holcomb is spearheading an effort to make Indiana the next great technology center as the U.S. ramps up investment in domestic microchip development and manufacturing.  “If we want to compete globally, we have to get smarter and healthier and more equipped, and we have to continue to invest in our quality of place,” Holcomb told VOA in an interview. 

His vision is shared by other lawmakers, including U.S. Senator Todd Young of Indiana, who co-sponsored the bipartisan CHIPS and Science Act, which commits more than $50 billion in federal funding for domestic microchip development. 

‘We are committed’

Indiana is now home to one of 31 designated U.S. technology and innovation hubs, helping it qualify for hundreds of millions of dollars in grants designed to attract technology-driven businesses. 

“The signal that it sends to the rest of the world [is] that we are in it, we are committed, and we are focused,” said Holcomb. “We understand that economic development, economic security and national security complement one another.” 

Indiana’s efforts are paying off. 

In April, South Korean microchip manufacturer SK Hynix announced it was planning to build a $4 billion facility near Purdue University that would produce next-generation, high-bandwidth memory, or HBM chips, critical for artificial intelligence applications.  

The facility, slated to start operating in 2028, could create more than 1,000 new jobs. U.S. chip manufacturer SkyWater also plans to invest nearly $2 billion in Indiana’s new LEAP Innovation District near Purdue, though the state recently lost its bid to host chipmaker Intel, which selected Ohio for two new factories.

“Companies tend to like to go to locations where there is already that infrastructure, where that supply chain is in place,” Purdue’s Lundstrom said. “That’s a challenge for us, because this is a new industry for us. So, we have a chicken-and-egg problem that we have to address, and we are beginning to address that.”

Lundstrom said the CHIPS and Science Act and the federal money that comes with it are helping Indiana ramp up to compete with other U.S. locations already known for microchip development, such as California’s Silicon Valley and Arizona.

What could help Indiana gain an edge is its natural resources — plenty of land and water, and regular weather patterns, all crucial for the sensitive processes needed to manufacture microchips at large manufacturing centers. 

With $6.6B to Arizona hub, Biden touts big steps in US chipmaking

Washington; Flagstaff, Arizona — President Joe Biden on Monday announced a $6.6 billion grant to Taiwan’s top chip manufacturer to produce semiconductors in the southwestern U.S. state of Arizona, which includes a third facility that will bring the foreign tech giant’s investment in the state to $65 billion.

Biden said the move aims to reverse a decades-long slump in American chip manufacturing. Taiwan Semiconductor Manufacturing Company (TSMC), which is based on the island that China claims as its territory, holds more than half of the global market share in chip manufacturing.

The new facility, Biden said, will put the U.S. on track to produce 20% of the world’s leading-edge semiconductors by 2030.

“I was determined to turn that around, and thanks to my CHIPS and Science Act — a key part of my Investing in America agenda — semiconductor manufacturing and jobs are making a comeback,” Biden said in a statement.

U.S. production of this American-born technology has fallen steeply in recent decades, said Andy Wang, dean of engineering at Northern Arizona University.

“As a nation, we used to produce 40% of microchips for the whole world,” he told VOA. “Now, we produce less than 10%.”

A single semiconductor transistor is smaller than a grain of sand. But billions of them, packed neatly together, can connect the world through a mobile phone, control sophisticated weapons of war and satellites that orbit the Earth, and someday may even drive a car.

The immense value of these tiny chips has fueled fierce competition between the U.S. and China.

The U.S. Department of Commerce has taken several steps to hamper China’s efforts to build its own chip industry. Those include export controls and new rules to prevent “foreign countries of concern” — which it said includes China, Iran, North Korea and Russia — from benefiting from funding from the CHIPS and Science Act.

While analysts are divided over whether Taiwan’s dominance of this critical industry makes it more or less vulnerable to Chinese aggression, they agree it confers significant global status on the island.

“It is debatable what, if any, role Taiwan’s semiconductor manufacturing prowess plays in deterrence,” said David Sacks, an analyst who focuses on U.S.-China relations at the Council on Foreign Relations. “What is not debatable is how devastating an attack on Taiwan would be for the global economy.”

Biden did not mention U.S. adversaries in his statement, but he noted the impact of Monday’s announcement, saying it “represent(s) a broader story for semiconductor manufacturing that’s made in America and with the strong support of America’s leading technology firms to build the products we rely on every day.”

VOA met with engineers in the new technological hub state, who said the legislation addresses a key weakness in American chip manufacturing.

“We’ve just gotten in the cycle of the last 15 to 20 years, where innovation has slowed down,” said Todd Achilles, who teaches innovation, strategy and policy analysis at the University of California-Berkeley. “It’s all about financial results, investor payouts and stock buybacks. And we’ve lost that innovation muscle. And the CHIPS Act — pulling that together with the CHIPS Act — is the perfect opportunity to restore that.”

The White House says this new investment could create 25,000 construction and manufacturing jobs. Academics say they’re churning out workers at a rapid pace, but that America still lacks talent.

“Our engineering college is the largest in the country, with over 33,000 enrolled students, and still we’re hearing from companies across the semiconductor industry that they’re not able to get the talent they need in time,” Zachary Holman, vice dean for research and innovation at Arizona State University, told VOA.

And as the American industry stretches to keep pace, it races against a technological trend known as Moore’s Law: the number of transistors on a computer chip doubles about every two years. As a result, cutting-edge chips get ever smaller as they grow in computing power.
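
As a rough illustration of that doubling trend, here is a minimal sketch; the starting transistor count and time span are hypothetical examples, not figures from this report:

```python
# Illustrative only: project transistor counts under Moore's Law,
# the observation that counts double roughly every two years.
def project_transistors(start_count: int, years: int) -> int:
    """Project the transistor count after `years` years of doubling
    every two years (partial periods are ignored)."""
    return start_count * 2 ** (years // 2)

# Hypothetical example: a chip with 10 billion transistors today
# projects to about 80 billion after six years (three doublings).
print(project_transistors(10_000_000_000, 6))  # 80000000000
```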

TSMC in 2022 broke ground on a facility that makes the smallest chip currently available, coming in at 3 nanometers — that’s just wider than a strand of DNA.

Reporter Levi Stallings contributed to this report from Flagstaff, Arizona.

Experts fear Cambodian cybercrime law could aid crackdown

PHNOM PENH, CAMBODIA — The Cambodian government is pushing ahead with a cybercrime law experts say could be wielded to further curtail freedom of speech amid an ongoing crackdown on dissent. 

The cybercrime draft is the third controversial internet law authorities have pursued in the past year as the government, led by new Prime Minister Hun Manet, seeks greater oversight of internet activities. 

Obtained by VOA in both English and Khmer language versions, the latest draft of the cybercrime law is marked “confidential” and contains 55 articles. It lays out various offenses punishable by fines and jail time, including defamation, using “insulting, derogatory or rude language,” and sharing “false information” that could harm Cambodia’s public order and “traditional culture.”  

The law would also allow authorities to collect and record internet traffic data, in real time, of people under investigation for crimes, and would criminalize online material that “depicts any act or activity … intended to stimulate sexual desire” as pornography. 

Digital rights and legal experts who reviewed the law told VOA that its vague language, wide-ranging categories of prosecutable speech and lack of protections for citizens fall short of international standards, instead providing the government more tools to jail dissenters, opposition members, women and LGBTQ+ people. 

Although the law has been in the works since 2016, no draft had leaked since 2020 and 2021; those earlier versions sparked similar criticism. Authorities hope to enact the law by the end of the year.

“This cybercrime bill offers the government even more power to go after people expressing dissent,” Kian Vesteinsson, a senior research analyst for technology at the human rights organization Freedom House, told VOA.  

“These vague provisions around defamation, insults and disinformation are ripe for abuse, and we know that Cambodian authorities have deployed similarly vague criminal provisions in other contexts,” Vesteinsson said. 

Cambodian law already considers defamation a criminal offense, but the cybercrime draft would make it punishable by up to six months in jail plus a fine of up to $5,000. The “false information” clause — defined as sharing information that “intentionally harms national defense, national security, relations with other countries, economy, public order, or causes discrimination, or affects traditional culture” — carries a three- to five-year sentence and a fine of up to $25,000.

Daron Tan, associate international legal adviser at the International Commission of Jurists, told VOA the defamation and false information articles do not comply with the International Covenant on Civil and Political Rights, to which Cambodia is a party, and that the United Nations Human Rights Committee is “very clear that imprisonment is never the appropriate penalty for defamation.” 

“It’s a step very much in the wrong direction,” Tan said. “We are very worried that this would expand the laws that the government can use against its critics.” 

Chea Pov, the deputy head of Cambodia’s National Police and former director of the Ministry of Interior’s Anti-Cybercrime Department that is overseeing the drafting process, told VOA the law “doesn’t restrict your rights” and claimed the U.S. companies which reviewed it “didn’t raise concerns.”  

Google, Meta and Amazon, which the government has said were involved in drafting the law, did not respond to requests for comment. 

“If you say something based on evidence, there is no problem,” Pov said. “But if there is no evidence, [you] defame others, which is also stated in the criminal law … we don’t regard this as a restriction.”  

The law also makes it illegal to use technology to display, trade, produce or disseminate pornography, or to advertise a “product or service mixed with pornography” online. Pornography is defined as anything that “describes a genital or depicts any act or activity involving a sexual organ or any part of the human body, animal, or object … or other similar pornography that is intended to stimulate sexual desire or cause sexual excitement.” 

Experts say this broad category is likely to be disproportionately deployed against women and LGBTQ+ people. 

Cambodian authorities have often rebuked or arrested women for dressing “too sexily” on social media, singing sexual songs or using suggestive speech. In 2020, an online clothes and cosmetics seller received a six-month suspended sentence after posting provocative photos; in another incident, a policewoman was forced to publicly apologize for posting photos of herself breastfeeding. 

Naly Pilorge, outreach director at Cambodian human rights organization Licadho, told VOA the draft law “could lead to more rights violations against women in the country.” 

“This vague definition of ‘pornography’ poses a serious threat to any woman whose online activity the government decides may ‘cause sexual excitement,’” Pilorge said. “The draft law does not acknowledge any legitimate artistic or educational purposes to depict or describe sexual organs, posing another threat to freedom of expression.” 

In March, authorities said they hosted civil society organizations to revisit the draft. They plan to complete the drafting process and send the law to Parliament for passage before the end of the year, according to Pov, the deputy head of police. 

Soeung Saroeun, executive director of the NGO Forum on Cambodia, told VOA “there was no consultation on each article” at the recent meeting. 

“The NGO representatives were unable to analyze and present their inputs,” said Saroeun, echoing concerns about its contents. “How is it [possible]? We need to debate on this.” 

The cybercrime law has resurfaced as the government works to complete two other draft internet laws, one covering cybersecurity and the other personal data protection. Experts have critiqued the drafts as providing expanded police powers to seize computer systems and making citizens’ data vulnerable to hacking and surveillance. 

Authorities have also sought to create a national internet gateway that would require traffic to run through centralized government servers, though the status of that project has been unclear since early 2022 when the government said it faced delays. 

Biden administration announces $6.6 billion to ensure leading-edge microchips are built in US 

WILMINGTON, Del. — The Biden administration pledged on Monday to provide up to $6.6 billion so that a Taiwanese semiconductor giant can expand the facilities it is already building in Arizona and better ensure that the most-advanced microchips are produced domestically for the first time. 

Commerce Secretary Gina Raimondo said the funding for Taiwan Semiconductor Manufacturing Co. means the company can expand on its existing plans for two facilities in Phoenix and add a third, newly announced production hub. 

“These are the chips that underpin all artificial intelligence, and they are the chips that are the necessary components for the technologies that we need to underpin our economy,” Raimondo said on a call with reporters, adding that they were vital to the “21st century military and national security apparatus.” 

The funding is tied to a sweeping 2022 law that President Joe Biden has celebrated and which is designed to revive U.S. semiconductor manufacturing. Known as the CHIPS and Science Act, the $280 billion package is aimed at sharpening the U.S. edge in military technology and manufacturing while minimizing the kinds of supply disruptions that occurred in 2021, after the start of the coronavirus pandemic, when a shortage of chips stalled factory assembly lines and fueled inflation. 

The Biden administration has promised tens of billions of dollars to support construction of U.S. chip foundries and reduce reliance on Asian suppliers, which Washington sees as a security weakness. 

“Semiconductors – those tiny chips smaller than the tip of your finger – power everything from smartphones to cars to satellites and weapons systems,” Biden said in a statement. “TSMC’s renewed commitment to the United States, and its investment in Arizona represent a broader story for semiconductor manufacturing that’s made in America and with the strong support of America’s leading technology firms to build the products we rely on every day.” 

Taiwan Semiconductor Manufacturing Co. produces nearly all of the leading-edge microchips in the world and plans to eventually do so in the U.S. 

It began construction of its first facility in Phoenix in 2021, and started work on a second hub last year, with the company increasing its total investment in both projects to $40 billion. The third facility should be producing microchips by the end of the decade and will see the company’s commitment increase to a total of $65 billion, Raimondo said. 

The investments would put the U.S. on track to produce roughly 20% of the world’s leading-edge chips by 2030, and Raimondo said they should help create 6,000 manufacturing jobs and 20,000 construction jobs, as well as thousands of new positions with suppliers in chip-related industries connected to the Arizona projects.

The potential incentives announced Monday include $50 million to help train the workforce in Arizona to be better equipped to work in the new facilities. Additionally, approximately $5 billion of proposed loans would be available through the CHIPS and Science Act. 

“TSMC’s commitment to manufacture leading-edge chips in Arizona marks a new chapter for America’s semiconductor industry,” Lael Brainard, director of the White House National Economic Council, told reporters. 

The announcement came as U.S. Treasury Secretary Janet Yellen is traveling in China. Senior administration officials were asked on the call with reporters if the Biden administration gave China a heads-up on the coming investment, given the delicate geopolitics surrounding Taiwan. The officials said only that their focus in making Monday’s announcement was solely on advancing U.S. manufacturing.

“We are thrilled by the progress of our Arizona site to date,” C.C. Wei, CEO of TSMC, said in a statement, “and are committed to its long-term success.”

Exclusive: Russian company supplies military with microchips despite denials

PENTAGON — Russian microchip company AO PKK Milandr continued to provide microchips to the Russian armed forces at least several months after Russia invaded Ukraine, despite public denials by company director Alexey Novoselov of any connection with Russia’s military.

A formal letter obtained by VOA, dated February 10, 2023, shows a sale request for 4,080 military-grade microchips for the Russian military. The request was sent by a deputy commander of the 546 military representation of the Russian Ministry of Defense and the commercial director of Russian manufacturer NPO Poisk to Milandr CEO S.V. Tarasenko, for delivery by April 2023, more than a year into the war.

The letter instructs Milandr to provide three types of microchip components to NPO Poisk, a well-established Russian defense manufacturer that makes detonators for weapons used by the Russian Armed Forces.

“Each of these three circuits that you have in the table on the document, each one of them is classed as a military-grade component … and each of these is manufactured specifically by Milandr,” said Denys Karlovskyi, a research fellow at the London-based Royal United Services Institute for Defense and Security Studies. VOA shared the document with him to confirm its authenticity.

In addition to Milandr CEO Tarasenko, the letter is addressed to a commander of the Russian Defense Ministry’s 514 military representation, named I.A. Shvid.

Karlovskyi says this inclusion shows that Milandr, like Poisk, appears to have a Russian commander from the Defense Ministry’s oversight unit assigned to it — a clear indicator that a company is part of Russia’s defense industry.

Milandr, headquartered near Moscow in an area known as “Soviet Silicon Valley,” was sanctioned by the United States in November 2022 for its illegal procurement of microelectronic components using front companies.

In the statement announcing the 2022 sanctions against Milandr and more than three dozen other entities and individuals, U.S. Treasury Secretary Janet Yellen said, “The United States will continue to expose and disrupt the Kremlin’s military supply chains and deny Russia the equipment and technology it needs to wage its illegal war against Ukraine.”

Karlovskyi said that in Russia’s database of public contracts, Milandr is listed in more than 500 contracts, supplying numerous state-owned and military-grade enterprises, including Ural Optical Mechanical Plant, Concern Avtomatika and Izhevsk Electromechanical Plant, or IEMZ Kupol, which also have been sanctioned by the United States.

“It clearly suggests that this entity is a crucial node in Russia’s military supply chain,” Karlovskyi told VOA.

Novoselov, Milandr’s current director, told Bloomberg News last August that he was not aware of any connections to the Russian military.

“I don’t know any military persons who would be interested in our product,” he told Bloomberg in a phone interview, adding that the company mostly produces electric power meters.

The U.S. allegations are “like a fantasy,” he said. “The United States’ State Department, they suppose that every electronics business in Russia is focused on the military. I think that is funny.”

But a U.S. defense official told VOA that helping Russia’s military kill tens of thousands of people in an illegal invasion “is no laughing matter.”

“The company is fueling microchips for missiles and heavily armored vehicles that are used to continue the war in Ukraine,” said the defense official, who spoke to VOA on the condition of anonymity due to the sensitivities of discussing U.S. intelligence.

Milandr’s co-founder Mikhail Pavlyuk was also sanctioned during the summer of 2022 for his involvement in microchip smuggling operations and was accused of stealing from Milandr. Pavlyuk fled Russia and has claimed he was not involved.

Officials estimate that 500,000 Ukrainian and Russian troops have been killed or injured in the war, with tens of thousands of Ukrainian civilians killed in the fighting.

“There are consequences to their actions, and the U.S. will persist to expose and disrupt the Kremlin’s supply chain,” the U.S. defense official said.

US, Europe Issue Strictest Rules Yet on AI

WASHINGTON — In recent weeks, the United States, Britain and the European Union have issued the strictest regulations yet on the use and development of artificial intelligence, setting a precedent for other countries.

This month, the United States and the U.K. signed a memorandum of understanding allowing the two countries to partner in the development of tests for the most advanced artificial intelligence models, following through on commitments made at the AI Safety Summit last November.

These actions come on the heels of the European Parliament’s March vote to adopt its first set of comprehensive rules on AI. The landmark decision sets out a wide-ranging set of laws to regulate this exploding technology.

At the time, Brando Benifei, co-rapporteur on the Artificial Intelligence Act plenary vote, said, “I think today is again an historic day on our long path towards regulation of AI. … The first regulation in the world that is putting a clear path towards a safe and human-centric development of AI.”

The new rules aim to protect citizens from dangerous uses of AI, while exploring its boundless potential.

Beth Noveck, professor of experiential AI at Northeastern University, expressed enthusiasm about the rules.

“It’s really exciting that the EU has passed really the world’s first … binding legal framework addressing AI. It is, however, not the end; it is really just the beginning.”

The new rules will be applied according to risk level: the higher the risk, the stricter the rules.

“It’s not regulating the tech,” she said. “It’s regulating the uses of the tech, trying to prohibit and to restrict and to create controls over the most malicious uses — and transparency around other uses.

“So things like what China is doing around social credit scoring, and surveillance of its citizens, unacceptable.”

Noveck described what she called “high-risk uses” that would be subject to scrutiny. Those include uses of AI that could deprive people of their liberty, as well as uses within employment.

“Then there are lower risk uses, such as the use of spam filters, which involve the use of AI or translation,” she said. “Your phone is using AI all the time when it gives you the weather; you’re using Siri or Alexa, we’re going to see a lot less scrutiny of those common uses.”
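
Noveck’s examples amount to a use-based lookup: the application, not the underlying technology, determines the level of scrutiny. A minimal sketch of that idea, with tier labels paraphrased from her examples (illustrative only, not the AI Act’s legal text):

```python
# Illustrative mapping of the risk-based approach described above.
RISK_TIERS = {
    "social_credit_scoring": "unacceptable",    # prohibited outright
    "citizen_surveillance": "unacceptable",
    "decisions_affecting_liberty": "high",      # strict controls required
    "employment_screening": "high",
    "spam_filtering": "minimal",                # little added scrutiny
    "translation": "minimal",
    "voice_assistants": "minimal",
}

def scrutiny_for(use_case: str) -> str:
    # Uses not listed here would need case-by-case review in this sketch.
    return RISK_TIERS.get(use_case, "case_by_case")

print(scrutiny_for("employment_screening"))  # high
```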

But as AI experts point out, new laws just create a framework for a new model of governance on a rapidly evolving technology.

Dragos Tudorache, co-rapporteur on the AI Act plenary vote, said, “Because AI is going to have an impact that we can’t only measure through this act, we will have to be very mindful of this evolution of the technology in the future and be prepared.”

In late March, the Biden administration issued the first government-wide policy to mitigate the risks of artificial intelligence while harnessing its benefits.

The announcement followed President Joe Biden’s executive order last October, which called on federal agencies to lead the way toward better governance of the technology without stifling innovation.

“This landmark executive order is testament to what we stand for: safety, security, trust, openness,” Biden said at the time, “proving once again that America’s strength is not just the power of its example, but the example of its power.”

Looking ahead, experts say the challenge will be to update rules and regulations as the technology continues to evolve.

Scathing federal report rips Microsoft for response to Chinese hack

BOSTON — In a scathing indictment of Microsoft corporate security and transparency, a Biden administration-appointed review board issued a report Tuesday saying “a cascade of errors” by the tech giant let state-backed Chinese cyber operators break into email accounts of senior U.S. officials including Commerce Secretary Gina Raimondo.

The Cyber Safety Review Board, created in 2021 by executive order, describes shoddy cybersecurity practices, a lax corporate culture and a lack of sincerity about the company’s knowledge of the targeted breach, which affected multiple U.S. agencies that deal with China.

It concluded that “Microsoft’s security culture was inadequate and requires an overhaul” given the company’s ubiquity and critical role in the global technology ecosystem. Microsoft products “underpin essential services that support national security, the foundations of our economy, and public health and safety.”

The panel said the intrusion, discovered in June by the State Department and dating to May, “was preventable and should never have occurred,” and it blamed its success on “a cascade of avoidable errors.” What’s more, the board said, Microsoft still doesn’t know how the hackers got in.

The panel made sweeping recommendations, including urging Microsoft to put on hold adding features to its cloud computing environment until “substantial security improvements have been made.”

It said Microsoft’s CEO and board should institute “rapid cultural change,” including publicly sharing “a plan with specific timelines to make fundamental, security-focused reforms across the company and its full suite of products.”

In a statement, Microsoft said it appreciated the board’s investigation and would “continue to harden all our systems against attack and implement even more robust sensors and logs to help us detect and repel the cyber-armies of our adversaries.”

In all, the state-backed Chinese hackers broke into the Microsoft Exchange Online email of 22 organizations and more than 500 individuals around the world — including the U.S. ambassador to China, Nicholas Burns — accessing some cloud-based email boxes for at least six weeks and downloading some 60,000 emails from the State Department alone, the 34-page report said. Three think tanks and foreign government entities, including a number of British organizations, were among those compromised, it said.

The board, convened by Homeland Security Secretary Alejandro Mayorkas in August, accused Microsoft of making inaccurate public statements about the incident — including issuing a statement saying it believed it had determined the likely root cause of the intrusion “when, in fact, it still has not.” Microsoft did not update that misleading blog post, published in September, until mid-March, after the board repeatedly asked if it planned to issue a correction, it said.

The board also expressed concern about a separate hack disclosed by the Redmond, Washington, company in January, this one of email accounts — including those of an undisclosed number of senior Microsoft executives and an undisclosed number of Microsoft customers — and attributed to state-backed Russian hackers.

The board lamented “a corporate culture that deprioritized both enterprise security investments and rigorous risk management.”

The Chinese hack was initially disclosed in July by Microsoft in a blog post and carried out by a group the company calls Storm-0558. That same group, the panel noted, has been engaged in similar intrusions — compromising cloud providers or stealing authentication keys so it can break into accounts — since at least 2009, targeting companies including Google, Yahoo, Adobe, Dow Chemical and Morgan Stanley.

Microsoft noted in its statement that the hackers involved are “well-resourced nation state threat actors who operate continuously and without meaningful deterrence.”

The company said that it recognized that recent events “have demonstrated a need to adopt a new culture of engineering security in our own networks,” and added that it had “mobilized our engineering teams to identify and mitigate legacy infrastructure, improve processes, and enforce security benchmarks.”

US, Britain announce partnership on AI safety, testing

WASHINGTON — The United States and Britain on Monday announced a new partnership on the science of artificial intelligence safety, amid growing concerns about upcoming next-generation versions of AI models.

Commerce Secretary Gina Raimondo and British Technology Secretary Michelle Donelan signed a memorandum of understanding in Washington to jointly develop advanced AI model testing, following commitments announced at an AI Safety Summit in Bletchley Park in November.

“We all know AI is the defining technology of our generation,” Raimondo said. “This partnership will accelerate both of our institutes’ work across the full spectrum to address the risks of our national security concerns and the concerns of our broader society.”

Britain and the United States are among countries establishing government-led AI safety institutes.

Britain said in October its institute would examine and test new types of AI, while the United States said in November it was launching its own safety institute to evaluate risks from so-called frontier AI models; it is now working with 200 companies and entities.

Under the formal partnership, Britain and the United States plan to perform at least one joint testing exercise on a publicly accessible model and are considering exploring personnel exchanges between the institutes. Both are working to develop similar partnerships with other countries to promote AI safety.

“This is the first agreement of its kind anywhere in the world,” Donelan said. “AI is already an extraordinary force for good in our society and has vast potential to tackle some of the world’s biggest challenges, but only if we are able to grip those risks.”

Generative AI, which can create text, photos and videos in response to open-ended prompts, has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans with catastrophic effects.

In a joint interview with Reuters on Monday, Raimondo and Donelan said urgent joint action was needed to address AI risks.

“Time is of the essence because the next set of models are about to be released, which will be much, much more capable,” Donelan said. “We have a focus on the areas that we are dividing and conquering and really specializing.”

Raimondo said she would raise AI issues at a meeting of the U.S.-EU Trade and Technology Council in Belgium Thursday.

The Biden administration plans to soon announce additions to its AI team, Raimondo said. “We are pulling in the full resources of the U.S. government.”

Both countries plan to share key information on capabilities and risks associated with AI models and systems and technical research on AI safety and security.

In October, Biden signed an executive order that aims to reduce the risks of AI. In January, the Commerce Department said it was proposing to require U.S. cloud companies to determine whether foreign entities are accessing U.S. data centers to train AI models.

Britain said in February it would spend more than 100 million pounds ($125.5 million) to launch nine new AI research hubs and train regulators about the technology.

Raimondo said she was especially concerned about the threat of AI applied to bioterrorism or a nuclear war simulation.

“Those are the things where the consequences could be catastrophic and so we really have to have zero tolerance for some of these models being used for that capability,” she said.