Musk Says Twitter to Change Logo to “X” From the Bird

Elon Musk said Sunday that he plans to change the logo of Twitter to an “X” from the bird, marking what would be the latest big change since he bought the social media platform for $44 billion last year. 

In a series of posts on his Twitter account starting just after 12 a.m. ET, Twitter’s owner said that he’s looking to make the change worldwide as soon as Monday. 

“And soon we shall bid adieu to the twitter brand and, gradually, all the birds,” Musk wrote on his account. 

Earlier this month, Musk put new curbs on his digital town square, a move that drew sharp criticism that it could drive away advertisers and undermine the platform’s cultural influence as a trendsetter.

In May, Musk hired longtime NBC Universal executive Linda Yaccarino as Twitter’s CEO in a move to win back advertisers. 

Luring advertisers is essential for Musk and Twitter after many fled in the early months of his ownership of the social media platform, fearing damage to their brands in the ensuing chaos. Musk said in late April that advertisers had returned, but provided no specifics.

 

AI Firms Strike Deal With White House on Safety Guidelines 

The White House on Friday announced that the Biden administration had reached a voluntary agreement with seven companies building artificial intelligence products to establish guidelines meant to ensure the technology is developed safely.

“These commitments are real, and they’re concrete,” President Joe Biden said in comments to reporters. “They’re going to help … the industry fulfill its fundamental obligation to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

The companies that sent leaders to the White House were Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI. The firms are all developing systems called large language models (LLMs), which are trained using vast amounts of text, usually taken from the publicly accessible internet, and use predictive analysis to respond to queries conversationally.

In a statement, OpenAI, which created the popular ChatGPT service, said, “This process, coordinated by the White House, is an important step in advancing meaningful and effective AI governance, both in the U.S. and around the world.”

Safety, security, trust

The agreement, released by the White House on Friday morning, outlines three broad areas of focus: assuring that AI products are safe for public use before they are made widely available; building products that are secure and cannot be misused for unintended purposes; and establishing public trust that the companies developing the technology are transparent about how they work and what information they gather.

As part of the agreement, the companies pledged to conduct internal and external security testing before AI systems are made public in order to ensure they are safe for public use, and to share information about safety and security with the public.

Further, the commitment obliges the companies to keep strong safeguards in place to prevent the inadvertent or malicious release of technology and tools not intended for the general public, and to support third-party efforts to detect and expose any such breaches.

Finally, the agreement sets out a series of obligations meant to build public trust. These include assurances that AI-created content will always be identified as such; that companies will offer clear information about their products’ capabilities and limitations; that companies will prioritize mitigating the risk of potential harms of AI, including bias, discrimination and privacy violations; and that companies will focus their research on using AI to “help address society’s greatest challenges.”

The administration said that it is at work on an executive order and will pursue bipartisan legislation to “help America lead the way in responsible innovation.”

Just a start

Experts contacted by VOA all said that the agreement marked a positive step on the road toward effective regulation of emerging AI technology, but they also warned that there is far more work to be done, both in understanding the potential harm these powerful models might cause and finding ways to mitigate it.

“No one knows how to regulate AI — it’s very complex and is constantly changing,” said Susan Ariel Aaronson, a professor at George Washington University and the founder and director of the research institute Digital Trade and Data Governance Hub.

“The White House is trying very hard to regulate in a pro-innovative way,” Aaronson told VOA. “When you regulate, you always want to balance risk — protecting people or businesses from harm — with encouraging innovation, and this industry is essential for U.S. economic growth.”

She added, “The United States is trying and so I want to laud the White House for these efforts. But I want to be honest. Is it sufficient? No.”

‘Conversational computing’

It’s important to get this right, because models like ChatGPT, Google’s Bard and Anthropic’s Claude will increasingly be built into the systems that people use to go about their everyday business, said Louis Rosenberg, the CEO and chief scientist of the firm Unanimous AI. 

“We’re going into an age of conversational computing, where we’re going to talk to our computers and our computers are going to talk back,” Rosenberg told VOA. “That’s how we’re going to engage search engines. That’s how we’re going to engage apps. That’s how we’re going to engage productivity tools.”

Rosenberg, who has worked in the AI field for 30 years and holds hundreds of related patents, said that when it comes to LLMs being so tightly integrated into our day-to-day life, we still don’t know everything we should be concerned about.

“Many of the risks are not fully understood yet,” he said. Conventional computer software is very deterministic, he said, meaning that programs are built to do precisely what programmers tell them to do. By contrast, the exact way in which large language models operate can be opaque even to their creators.

The models can display unintended bias, can parrot false or misleading information, and can say things that people find offensive or even dangerous. In addition, many people will interact with them through a third-party service, such as a website, that integrates the large language model into its offering, but can tailor its responses in ways that might be malicious or manipulative.

Many of these problems will become apparent only after these systems have been deployed at scale, by which point they will already be in use by the public.

“The problems have not yet surfaced at a level where policymakers can address them head-on,” Rosenberg said. “The thing that is, I think, positive, is that at least policymakers are expecting the problems.”

More stakeholders needed 

Benjamin Boudreaux, a policy analyst with the RAND Corporation, told VOA that it was unclear how much actual change in the companies’ behavior Friday’s agreement would generate.

“Many of the things that the companies are agreeing to here are things that the companies already do, so it’s not clear that this agreement really shifts much of their behavior,” Boudreaux said. “And so I think there is still going to be a need for perhaps a more regulatory approach or more action from Congress and the White House.”

Boudreaux also said that as the administration fleshes out its policy, it will have to broaden the range of participants in the conversation.

“This is just a group of private sector entities; this doesn’t include the full set of stakeholders that need to be involved in discussions about the risks of these systems,” he said. “The stakeholders left out of this include some of the independent evaluators, civil society organizations, nonprofit groups and the like, that would actually do some of the risk analysis and risk assessment.”

Japan Signs Chip Development Deal With India 

Japan and India have signed an agreement for the joint development of semiconductors, in what appears to be another indication of how global businesses are reconfiguring post-pandemic supply chains as China loses its allure for foreign companies.

India’s Ashwini Vaishnaw, minister for railways, communications, and electronics and information technology, and Japan’s minister of economy, trade and industry, Yasutoshi Nishimura, signed the deal Thursday in New Delhi.

The memorandum covers “semiconductor design, manufacturing, equipment research, talent development and [will] bring resilience in the semiconductor supply chain,” Vaishnaw said.

Nishimura said after his meeting with Vaishnaw that “India has excellent human resources” in fields such as semiconductor design.

“By capitalizing on each other’s strengths, we want to push forward with concrete projects as early as possible,” Nishimura told a news conference, Kyodo News reported.  

Andreas Kuehn, a senior fellow at the American office of Observer Research Foundation, an Indian think tank, told VOA Mandarin: “Japan has extensive experience in this industry and understands the infrastructure in this field at a broad level. It can be an important partner in advancing India’s semiconductor ambitions.”

Shift from China

Foreign companies have been shifting their manufacturing away from China over the past decade, prompted by increasing labor costs.

More recently, Beijing’s push for foreign companies to share their technologies and data has increased uneasiness with China’s business climate, according to surveys of U.S. and European businesses there.

The discomfort stems from an anti-espionage law that Beijing updated in April and put into effect on July 1. Its broad language does not define what falls under China’s national security or interests.

After taking office in 2014, Indian Prime Minister Narendra Modi launched a “Make in India” initiative with the goal of turning India into a global manufacturing center with an expanded chip industry.

The initiative is not entirely about making India a self-sufficient economy, but more about welcoming investors from countries with similar ideas. Japan and India are part of the Quad security framework, along with the United States and Australia, which aims to strengthen cooperation as a group, as well as bilaterally between members, to maintain peace and stability in the region.

Jagannath Panda, director of the Stockholm Center for South Asian and Indo-Pacific Affairs of the Institute for Security and Development Policy, said that the international community “wants a safe region where the semiconductor industry can continue to supply the global market. This chain of linkages is critical, and India is at the heart of the Indo-Pacific region” — a location not lost on chip companies in the United States, Taiwan and Japan that are reevaluating supply chain security and reducing their dependence on China.

Looking ahead

Panda told VOA Mandarin: “The COVID pandemic has proved that we should not rely too much on China. [India’s development of the chip industry] is also to prepare India for the next half century. Unless countries with similar ideas such as the United States and Japan cooperate effectively, India cannot really develop its semiconductor industry.”

New Delhi and Washington signed a memorandum of understanding in March to advance cooperation in the semiconductor field.

During Modi’s visit to the United States in June, he and President Joe Biden announced a cooperation agreement to coordinate semiconductor incentive and subsidy plans between the two countries.

Micron, a major chip manufacturer, confirmed on June 22 that it will invest as much as $800 million in India to build a chip assembly and testing plant.

Applied Materials said in June that it plans to invest $400 million over four years to build an engineering center in Bengaluru, Reuters reported. The new center is expected to be located near the company’s existing facility in the city and is likely to support more than $2 billion of planned investments and create 500 new advanced engineering jobs, the company said.

Experts said that although the development of India’s chip industry will not pose a challenge to China in the short term, China’s increasingly unfriendly business environment will prompt international semiconductor companies to consider India as one of the destinations for transferring production capacity.

“China is still a big player in the semiconductor industry, especially traditional chips, and we shouldn’t underestimate that. I don’t think that’s going to go away anytime soon. The world depends on this capacity,” Kuehn said. 

He added: “For multinational companies, China has become a more difficult business environment to operate in. We are likely to see them make other investments outside China after a period of time, which may compete with China’s semiconductor industry, especially in Southeast Asia. India may also play a role in this regard.” 

Bo Gu contributed to this report.

US Tech Leaders Aim for Fewer Export Curbs on AI Chips for China 

Intel Corp. has introduced a processor in China that is designed for AI deep-learning applications despite reports of the Biden administration considering additional restrictions on Chinese companies to address loopholes in chip export controls.

The chip giant’s product launch on July 11 is part of an effort by U.S. technology companies to work around, or push back against, government export controls on sales to the Chinese market as the U.S. government, citing national security concerns, continues to tighten restrictions on China’s artificial intelligence industry.

CEOs of U.S. chipmakers including Intel, Qualcomm and Nvidia met with U.S. Secretary of State Antony Blinken on Monday to urge a halt to more controls on chip exports to China, Reuters reported. Commerce Secretary Gina Raimondo, National Economic Council director Lael Brainard and White House national security adviser Jake Sullivan were among other government officials meeting with the CEOs, Reuters said.

The meeting came after China announced restrictions on the export of materials that are used to construct chips, a response to escalating efforts by Washington to curb China’s technological advances.

VOA Mandarin contacted the U.S. chipmakers for comment but has yet to receive responses.

Reuters reported Nvidia Chief Financial Officer Colette Kress said in June that “over the long term, restrictions prohibiting the sale of our data center graphic processing units to China, if implemented, would result in a permanent loss of opportunities for the U.S. industry to compete and lead in one of the world’s largest markets and impact on our future business and financial results.”

Before the meeting with Blinken, John Neuffer, president of the Semiconductor Industry Association, which represents the chip industry, said in a statement to The New York Times that the escalation of controls posed a significant risk to the global competitiveness of the U.S. industry.

“China is the world’s largest market for semiconductors, and our companies simply need to do business there to continue to grow, innovate and stay ahead of global competitors,” he said. “We urge solutions that protect national security, avoid inadvertent and lasting damage to the chip industry, and avert future escalations.”

According to the Times, citing five sources, the Biden administration is considering additional restrictions on the sale to China of high-end chips used to power artificial intelligence. The goal is to limit technological capacity that could aid the Chinese military while minimizing the impact such rules would have on private companies. Such a move could speed up the tit-for-tat salvos in the U.S.-China chip war, the Times reported.

And The Wall Street Journal reported last month that the White House was exploring how to restrict the leasing of cloud services to AI firms in China.

But the U.S. controls appear to be merely slowing, rather than stopping, China’s AI development.

Last October, the U.S. Commerce Department banned Nvidia from selling two of its most advanced AI-critical chips, the A100 and the newer H100, to Chinese customers, citing national security concerns. In November, Nvidia introduced the A800 and H800, chips designed for the Chinese market that are not subject to the export controls.

According to the Journal, the U.S. government is considering new bans on the A800 exports to China.

According to a report published in May by TrendForce, a market intelligence and professional consulting firm, the A800, along with Nvidia’s H100 and A100, is already among the most widely used mainstream products for AI-related computing.

Combining chips

Robert Atkinson, founder and president of the Information Technology and Innovation Foundation, told VOA in a phone interview that although these chips are not the most advanced, they can still be used by China.  

“What you can do, though, is you can combine lesser, less powerful chips and just put more of them together. And you can still do a lot of AI processing with them. It just makes it more expensive. And it uses more energy. But the Chinese are happy to do that,” Atkinson said.

As for the Chinese use of cloud computing, Hanna Dohmen, a research analyst at Georgetown’s Center for Security and Emerging Technology, told VOA Mandarin in a phone interview that companies can rent chips through cloud service providers.  

In practice, it is similar to hopping on a shared e-scooter or bike: the rider pays a fee to unlock the vehicle’s key function, its wheels.

For example, Dohmen said that Nvidia’s A100, which is “controlled and cannot be exported to China, per the October 7 export control regulations,” can be legally accessed by Chinese companies that “purchase services from these cloud service providers to gain virtual access to these controlled chips.”

Dohmen acknowledged it is not clear how many Chinese AI research institutions and companies are using American cloud services.

“There are also Chinese regulations … on cross-border data that might prohibit or limit to what extent Chinese companies might be willing to use foreign cloud service providers outside of China to develop their AI models,” she said.

Black market chips

In another workaround, Atkinson said Chinese companies can buy black market chips. “It’s not clear to me that these export controls are going to be able to completely cut off Chinese computing capabilities. They might slow them down a bit, but I don’t think they’re going to cut them off.”

According to an as yet unpublished report by the Information Technology and Innovation Foundation, China is already ahead of Europe in terms of the number of AI startups and is catching up with the U.S.

Although Chinese websites account for less than 2% of global network traffic, Atkinson said, Chinese government data management can make up for the lack of dialogue texts, images and videos that are essential for AI large-scale model training.

 “I do think that the Chinese will catch up and surpass the U.S. unless we take fairly serious steps,” Atkinson said.  

UN Security Council Debates Virtues, Failings of Artificial Intelligence

Artificial intelligence was the dominant topic at the United Nations Security Council this week.

In his opening remarks at the session, U.N. Secretary-General Antonio Guterres said, “AI will have an impact on every area of our lives” and advocated for the creation of a “new United Nations entity to support collective efforts to govern this extraordinary technology.”

Guterres said “the need for global standards and approaches makes the United Nations the ideal place for this to happen” and urged a joining of forces to “build trust for peace and security.”

“We need a race to develop AI for good,” Guterres said. “And that is a race that is possible and achievable.”

In his briefing to the council, Guterres said the debate was an opportunity to consider the impact of artificial intelligence on peace and security “where it is already raising political, legal, ethical and humanitarian concerns.”

He also stated that while governments, large companies and organizations around the world are working on an AI strategy, “even its own designers have no idea where their stunning technological breakthrough may lead.”

Guterres urged the Security Council “to approach this technology with a sense of urgency, a global lens and a learner’s mindset, because what we have seen is just the beginning.”

AI for good and evil

The secretary-general’s remarks set the stage for a series of comments and observations by session participants on how artificial intelligence can benefit society in health, education and human rights, while recognizing that, left unchecked, AI also has the potential to be used for nefarious purposes.

To that point, there was widespread acknowledgment that AI in every iteration of its development needs to be kept in check with specific guidelines, rules and regulations to protect privacy and ensure security without hindering innovation.

“We cannot leave the development of artificial intelligence solely to private sector actors,” said Jack Clark, co-founder of Anthropic, a leading AI company. “The governments of the world must come together, develop state capacity, and make the development of powerful AI systems a shared endeavor across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace.”

AI as human labor

Yi Zeng, a professor at the Institute of Automation, Chinese Academy of Sciences, shared a similar sentiment.

“AI should never pretend to be human,” he said. “We should use generative AI to assist but never trust them to replace human decision-making.”

The U.K. holds the council’s rotating presidency this month and British Foreign Secretary James Cleverly, who chaired the session, called for international cooperation to manage the global implications of artificial intelligence. He said that “global cooperation will be vital to ensure AI technologies and the rules governing their use are developed responsibly in a way that benefits society.”

Cleverly noted how far the world has come “since the early development of artificial intelligence by pioneers like Alan Turing and Christopher Strachey.”

“This technology has advanced with ever greater speed, yet the biggest AI-induced transformations are still to come,” he said.

Making AI inclusive

“AI development is now outpacing at breakneck speed, and governments are unable to keep up,” said Omran Sharaf, assistant minister of foreign affairs and international cooperation for advanced science and technology in the United Arab Emirates.

“It is time to be optimistic realists when it comes to AI” and to “harness the opportunities it offers,” he said.

Among the proposals he suggested was addressing real-world biases that AI could amplify.

“Decades of progress on the fight against discrimination, especially gender discrimination towards women and girls, as well as against persons with disabilities, will be undermined if we do not ensure an AI that is inclusive,” Sharaf said.

AI as double-edged sword

Zhang Jun, China’s permanent representative to the U.N., lauded the empowering role of AI in scientific research, health care and autonomous driving.

But he also acknowledged that it is raising concerns in areas such as data privacy, the spread of false information and the exacerbation of social inequality, as well as its potential misuse or abuse by terrorists or extremist forces, “which will pose a significant threat to international peace and security.”

“Whether AI is used for good or evil depends on how mankind utilizes it, regulates it and how we balance scientific development with security,” he said.

U.S. envoy Jeffrey DeLaurentis said artificial intelligence offers great promise in addressing global challenges such as food security, education and medicine. He added, however, that AI also has the potential “to compound threats and intensify conflicts, including by spreading mis- and disinformation, amplifying bias and inequality, enhancing malicious cyber operations, and exacerbating human rights abuses.”

“We, therefore, welcome this discussion to understand how the council can find the right balance between maximizing AI’s benefits while mitigating its risks,” he said.

Britain’s Cleverly noted that since no country will be untouched by AI, “we must involve and engage the widest coalition of international actors from all sectors.” 

VOA’s Margaret Besheer contributed to this story.

US Communications Commission Hopeful About Artificial Intelligence 

Does generative artificial intelligence pose a risk to humanity that could lead to our extinction?

That was among the questions put to experts by the head of the U.S. Federal Communications Commission at a workshop hosted with the National Science Foundation.

FCC chairwoman Jessica Rosenworcel said she is more hopeful about artificial intelligence than pessimistic. “That might sound contrarian,” she said, given that so much of the news about AI is “dark,” raising questions such as, “How do we rein in this technology? What does it mean for the future of work when we have intelligent machines? What will it mean for democracy and elections?”

The discussion included participants from a range of industries including network operators and vendors, leading academics, federal agencies, and public interest representatives.  

“We are entering the AI revolution,” said National Science Foundation senior adviser John Chapin, who described this as a “once-in-a-generation change in technology capabilities” that will “require rethinking the fundamental assumptions that underline our communications.”

“It is vital that we bring expert understanding of the science of technology together with expert understanding of the user and regulatory issues.” 

Investing in AI 

FCC Commissioner Nathan Simington pointed out that while technology may sometimes give the appearance of arriving suddenly, in many cases it’s a product of a steady but unnoticed evolution decades in the making. He gave the example of ChatGPT as AI that landed seemingly overnight, with dramatic impact. 

“Where the United States has succeeded in technological development, it has done so through a mindful attempt to cultivate and potentiate innovation.”

Lisa Guess, senior vice president of Solutions Engineering at the firm Ericsson/Cradlepoint, expressed concern that her company’s employees could “cut and paste” code into the ChatGPT window to try to perfect it, thereby exposing the company’s intellectual property. “There are many things that we all have to think through as we do this.”

Other panelists agreed. “With the opportunity to use data comes the opportunity that the data can be corrupted,” said Ness Shroff, a professor at The Ohio State University who is also an expert on AI. He called for “appropriate guardrails” to prevent that corruption.

FCC Commissioner Geoffrey Starks said AI “has the potential to impact if not transform nearly every aspect of American life.” Because of that potential, everyone, especially in government, shoulders a responsibility to better understand AI’s risks and opportunities. “That is just good governance in this era of rapid technological change.”  

“Fundamental issues of equity are not a side salad here,” he said. “They have to be fundamental as we consider technological advancement. AI has raised the stakes of defending our networks” and ultimately “network security means national security.” 

Digital equity, robocalls 

Alisa Valentin, senior director of technology and telecommunications policy at the civil rights organization the National Urban League, voiced her concerns about the illegal and predatory nature of robocalls. “Even if we feel like we won’t fall victim to robocalls, we are concerned about our family members or friends who may not be as tech savvy,” knowing how robocalls “can turn people’s lives upside down.”

Valentin also emphasized the urgent need to close the digital divide “to make sure that every community can benefit from the digital economy not only as consumers but also as workers and business owners.” 

“Access to communication services is a civil right,” she said. “Equity has to be at the center of everything we do when having conversations about AI.” 

Global competition

FCC Commissioner Simington said global competitors are “really good, and we should assume that they are taking us seriously, so we should protect what is ours.” But regulations to protect against the expropriation of American innovation should not go overboard.

“Let’s make sure we don’t give away the store, but let’s not do it by keeping the shelves empty.” 

White House Partners With Amazon, Google, Best Buy To Secure Devices From Cyberattacks

The White House, along with companies such as Amazon.com, Alphabet’s Google and Best Buy, on Tuesday will announce an initiative that allows Americans to identify devices that are less vulnerable to cyberattacks.

A new certification and labeling program would raise the bar for cybersecurity across smart devices such as refrigerators, microwaves, televisions, climate control systems and fitness trackers, the White House said in a statement.

Retailers and manufacturers will apply a “U.S. Cyber Trust Mark” logo to their devices, and the program will be up and running in 2024.

The initiative is designed to make sure “our networks and the use of them is more secure, because it is so important for economic and national security,” said a senior administration official, who did not wish to be named.

The Federal Communications Commission will seek public comment before rolling out the labeling program and register a national trademark with the U.S. Patent and Trademark Office, the White House said.

Other retailers and manufacturers participating in the program include LG Electronics U.S.A., Logitech, Cisco Systems and Samsung.

In March, the White House launched its national cyber strategy that called on software makers and companies to take far greater responsibility to ensure that their systems cannot be hacked.

It also accelerated efforts by agencies such as the Federal Bureau of Investigation and the Defense Department to disrupt activities of hackers and ransomware groups around the world.

Last week, Microsoft and U.S. officials said Chinese state-linked hackers had secretly accessed email accounts at around 25 organizations, including at least two U.S. government agencies, since May.

Norway Threatens $100,000 Daily Fine on Meta Over Data

Norway’s data protection agency said Monday it would ban Facebook and Instagram owner Meta from using the personal information of users for targeted advertising, threatening a $100,000 daily fine if the company continues. 

The business practices of big U.S. tech firms are under close scrutiny across Europe over concerns about privacy, with huge fines handed out in recent years. 

The Norwegian watchdog, Datatilsynet, said Meta uses information such as the location of users, the content they like and their posts for marketing purposes. 

“The Norwegian Data Protection Authority considers that the practice of Meta is illegal and is therefore imposing a temporary ban of behavioural advertising on Facebook and Instagram,” it said in a statement.  

The ban will begin on August 4 and last three months to give Meta time to take corrective measures. The company will be fined one million kroner ($100,000) per day if it fails to comply.  

“We will analyze the decision … but there is no immediate effect on our services,” Meta told AFP in a statement. 

The Norwegian regulator added that its ruling was neither a ban on Facebook and Instagram operating in the country nor a blanket ban on behavioral advertising. 

The Austrian digital privacy campaign group noyb, which has lodged a number of complaints against Meta’s activities, said it “welcomes this decision as a first important step” and hopes data regulators in other countries will follow suit. 

Meta suffered a major setback earlier this year when European regulators dismissed the legal basis Meta had used to justify gathering users’ personal data for use in targeted advertising. 

Meta suffered another major setback earlier this month when the European Court of Justice (ECJ) rejected its various workarounds and empowered antitrust regulators to take data privacy issues into account when conducting investigations. 

UK Watchdog Proposes Applying ‘Consumer Duty’ to Social Media

Britain’s financial watchdog on Monday proposed toughening up safeguards against the illegal marketing of financial products on social media by applying a stringent “consumer duty” that is being rolled out to banks, funds and insurers on July 31.

The Financial Conduct Authority has said its new duty will be a step change in protecting retail investors after years of mis-selling scandals, by forcing firms to demonstrate how they are delivering good outcomes for consumers.

“Where applicable, the Consumer Duty will raise our expectations of firms communicating financial promotions on social media above the requirement… to be ‘clear, fair and not misleading’,” the FCA said in proposals out to public consultation.

“Firms advertising using social media must consider how their marketing strategies align with acting to deliver good outcomes for retail customers.”

In the fourth quarter of last year, nearly 70% of amended or withdrawn financial marketing following FCA intervention involved a promotion on websites or social media, the FCA said.

The watchdog is targeting so-called “finfluencers,” widely followed people on social media who promote financial products.

“Consumers exhibit high levels of trust in finfluencers, but their advice can often be misleading,” the FCA said.

“Promoting a regulated financial product or service without approval of an FCA authorized person, or providing financial advice without FCA authorisation, may be a criminal offense.”

Promotions should also include risk warnings, it added.

Musk Says Twitter Is Losing Cash Because Advertising Is Down and the Company Is Carrying Heavy Debt

Elon Musk says Twitter is still losing cash because advertising has dropped by half.

In a reply to a tweet offering business advice, Musk tweeted Saturday, “We’re still negative cash flow, due to (about a) 50% drop in advertising revenue plus heavy debt load.”

“Need to reach positive cash flow before we have the luxury of anything else,” he concluded.

Ever since he took over Twitter in a $44 billion deal last fall, Musk has tried to reassure advertisers who were concerned about the ouster of top executives, widespread layoffs and a different approach to content moderation. Some high-profile users who had been banned were allowed back on the site.

In April, Musk said most of the advertisers who left had returned and that the company might become cash-flow positive in the second quarter.

In May, he hired a new CEO, Linda Yaccarino, an NBCUniversal executive with deep ties to the advertising industry.

But since then, Twitter has upset some users by imposing new limits on how many tweets they can view in a day, and some users complained that they were locked out of the site. Musk said the restrictions were needed to prevent unauthorized scraping of potentially valuable data.

Twitter got a new competitor this month when Facebook owner Meta launched a text-focused app, Threads, and gained tens of millions of sign-ups in a few days. Twitter responded by threatening legal action.

Sources: US Chip CEOs Plan Washington Trip to Talk China Policy

The chief executives of Intel Corp and Qualcomm Inc are planning to visit Washington next week to discuss China policy, according to two sources familiar with the matter.

The executives plan to hold meetings with U.S. officials to talk about market conditions, export controls and other matters affecting their businesses, one of the sources said. It was not immediately clear whom the executives would meet.

Intel and Qualcomm declined to comment, and officials at the White House did not immediately return a request for comment.

The sources said other semiconductor CEOs may also be in Washington next week. The sources declined to be named because they were not authorized to speak to the media.  

U.S. officials are considering tightening export rules affecting high-performance computing chips and shipments to Huawei Technologies Co Ltd, sources told Reuters in June. The rules would respectively affect Intel, which is preparing a new artificial intelligence chip that could be shipped to China, and Qualcomm, which has a license to sell chips to Huawei.

The Biden administration last October issued a sweeping set of rules designed to freeze China’s semiconductor industry in place while the U.S. pours billions of dollars in subsidies into its own chip industry.

The possible rule tightening would hit Nvidia particularly hard. The company’s strong position in the AI chip market helped boost its worth to $1 trillion earlier this year.

The chip industry has been warmly received in Washington in recent years as lawmakers and the White House work to shift more production to the U.S. and its allies, and away from China. Intel CEO Pat Gelsinger and Qualcomm CEO Cristiano Amon have met often with government officials.

Next week’s meetings, which one of the sources said could include joint sessions between executives and U.S. officials, come as Nvidia Corp and other chip companies fear a permanent loss of sales for an industry with large amounts of business in China while tensions escalate between Washington and Beijing.

One of the sources familiar with the matter said the executives’ goals for the meetings would be to ensure that government officials understand the possible impact of any further tightening of rules around what chips can be sold to China.

Many U.S. chip firms get more than one-fifth of their revenue from China, and industry executives have argued that reducing those sales would cut into profits that they reinvest into research and development.

Microsoft: Chinese Hackers Exploited Code Flaw to Steal US Agencies’ Emails 

Microsoft says hackers used a flaw in its code to steal emails from government agencies and other clients. 

In a blog post published Friday, the company said that Chinese hackers were able to take advantage of “a validation error in Microsoft code” to carry out their cyberespionage campaign. 

The blog provided the most thorough explanation yet for a hack that rattled both the cybersecurity industry and China-U.S. relations. Beijing has denied any involvement in the spying. 

Microsoft and U.S. officials said on Wednesday night that since May, Chinese state-linked hackers had been secretly accessing email accounts at about 25 organizations. U.S. officials said those included at least two U.S. government agencies. 

Microsoft has not identified any of the hack’s targets, but several victims have acknowledged they were affected, including personnel at the State Department, the Commerce Department and the U.S. House of Representatives. 

Secretary of State Antony Blinken told China’s top diplomat, Wang Yi, in a meeting in Jakarta on Thursday that any action that targets the U.S. government, U.S. companies or American citizens “is of deep concern to us, and that we will take appropriate action to hold those responsible accountable,” according to a senior State Department official. 

Microsoft’s own security practices have come under scrutiny, with officials and lawmakers calling on the Redmond, Washington-based company to make its top level of digital auditing, also called logging, available to all its customers free of charge.

India to Launch Moonshot Friday

India is set to launch a spacecraft to the moon Friday.

If successful, it would make India only the fourth country to land a spacecraft on the moon, after the U.S., the Soviet Union and China.

The $75 million Chandrayaan-3 will take more than a month to reach the moon’s south pole, arriving in August.

The south pole is a special place of interest because scientists believe water is present there.

Chandrayaan-3’s equipment includes a lander that will deploy a rover.

Chandrayaan-3 means “moon craft” in Sanskrit.

Targeting of State Department, Others in Microsoft Hack ‘Intentional’  

Hackers, possibly linked to China’s intelligence agencies, are being blamed for a monthlong campaign that breached some unclassified U.S. email systems, giving them access to a small number of accounts at the U.S. State Department and a handful of other organizations.

Microsoft first announced the intrusion Tuesday, attributing the attack on its Outlook email service to Chinese threat actors it dubbed Storm-0558.

The company said in a blog post that the hackers managed to forge a Microsoft authentication token and gain access to the email accounts of 25 organizations, both in the U.S. and around the globe, starting in mid-May.

The company said access was cut off after the breach was discovered a month later.

“We assess this adversary is focused on espionage, such as gaining access to email systems for intelligence collection,” Microsoft said. “This type of espionage-motivated adversary seeks to abuse credentials and gain access to data residing in sensitive systems.”

The State Department confirmed Wednesday that it had discovered the breach and had taken “immediate steps” to secure its systems and to notify Microsoft.

Some U.S. officials, however, were hesitant to back Microsoft’s attribution for the attack while saying the U.S. “would make all efforts to impose costs” on whoever was responsible.

“The sophistication of this attack, where actors were able to access mailbox content of victims, is indicative of APT [advanced persistent threat] activity but we are not prepared to discuss attribution at a more specific level,” a senior FBI official told reporters Wednesday, briefing them on the condition of anonymity.

According to senior officials with the FBI and the Cybersecurity and Infrastructure Security Agency (CISA), the number of U.S. victims of the Microsoft Outlook breach was in the single digits and only a small number of accounts were accessed.

They added that because the breach was detected quickly, the hackers did not have access to any email account for more than a month and never had access to any classified information or systems. In many cases, their access lasted only days.

Still, the officials noted reason for concern.

“The targeting was intentional,” said a senior CISA official who spoke to reporters on the condition of anonymity.

“This appears to have been a very targeted, surgical campaign that was not seeking the breadth of access we have seen in other campaigns,” the official added.

Despite the reluctance of some U.S. cyber officials to place the blame on China, there was no hesitation Wednesday from key U.S. lawmakers.

“The Senate Intelligence Committee is closely monitoring what appears to be a significant cybersecurity breach by Chinese intelligence,” Chairman Mark Warner said in a statement.

“It’s clear that the PRC is steadily improving its cyber collection capabilities directed against the U.S. and our allies,” the Virginia Democrat added. “Close coordination between the U.S. government and the private sector will be critical to countering this threat.”

Top U.S. intelligence, security and military officials have long warned about the growing cybersecurity threat posed by China-linked hackers.

Earlier this year, CISA Director Jen Easterly warned China “will almost certainly” employ aggressive cyber operations against the U.S. should tensions between Washington and Beijing get worse.

A separate Defense Department cyber strategy likewise warned of China’s increased investments in military cyber capabilities while also empowering a growing number of cyber proxies. 

But John Hultquist, chief analyst at Google’s Mandiant cybersecurity intelligence operation, said this latest attack showed that the Chinese threat has evolved in a very dangerous way.

“Chinese cyber espionage has come a long way,” Hultquist said in an email. “They have transformed their capability from one that was dominated by broad, loud campaigns that were far easier to detect. They were brash before, but now they are clearly focused on stealth.”

VOA reached out to the Chinese Embassy in Washington about the allegations that Beijing was behind the Microsoft attack.

“China is against cyberattacks of all kinds and has suffered from cyber hacking,” Chinese Embassy spokesperson Liu Pengyu told VOA in an email. “As MFA (Ministry of Foreign Affairs) spokesperson has commented at regular press conference, the source of Microsoft’s claim is information from the U.S. government authorities.”

Liu went on to call the U.S. “the biggest hacking empire and global cyber thief,” saying it was “high time that the U.S. explained its cyberattack activities and stopped spreading disinformation to deflect public attention.”

In its blog post about the latest breach Tuesday, Microsoft said it had managed to repair its systems for all of its customers.

The FBI and CISA on Wednesday separately issued a cybersecurity advisory, urging organizations using Microsoft Exchange Online to tighten their security measures and step up monitoring of their systems to catch any suspicious activity.

‘Meta Loses More:’ Zuckerberg Takes Threads Fight to EU

U.S. tech titan Mark Zuckerberg has plunged into a high-stakes game of brinkmanship with the European Union by withholding his new Threads app from users in Europe, but analysts say he will struggle to win the fight.

Threads, billed as the killer of Twitter, a platform that has tumbled into chaos under the leadership of mercurial tycoon Elon Musk, has added more than 100 million users in its first week in app stores.

But Zuckerberg’s firm Meta said it could not be released in Europe because of “regulatory uncertainty” around the Digital Markets Act, an antitrust regulation that will not come into force until next year.

“The reason they gave made me laugh,” said Diego Naranjo, head of policy at campaign group European Digital Rights.

“The regulation is not uncertain, it’s very certain, it’s just that Meta doesn’t like it.” 

Naranjo’s theory is that Meta will offer Threads to the rest of the world and Europeans will become so vexed at missing out that they will pressure the EU to water down the DMA.

Naranjo, for one, thinks the ploy will fail.

But either way, the rest of the big tech platforms will be glued to their screens as this fight could shape the future regulatory landscape in Europe for all of them.

‘Fatal’ blow

Meta and the rest are already regularly in trouble with EU regulators over their data gathering and retention policies.

They struggle to keep to the terms of Europe’s mammoth five-year-old data privacy regulation (GDPR).

When the DMA was announced, their reaction was muted as it seemed to be about business and competition, a simpler topic for them though not without pitfalls.

The DMA bans the biggest tech firms from favoring their own platforms, particularly problematic for the latest launch as Threads and Instagram accounts are linked.

But the DMA’s Article 5.2 contained a bombshell: the firms will be banned from transferring user data across platforms unless they get consent.

Berin Szoka, president of the pro-business U.S. think tank TechFreedom, said the DMA’s rules would require Meta to ask for the consent of someone’s Instagram contacts before their data could be transferred to Threads.

“In practice, this could prove fatal to Threads’ rollout,” he said, as the network effect would be dead on arrival.

“I don’t really see a good way out here for Meta.”

Naranjo has little sympathy for Meta, saying the European embargo was just a “political push” by the firm against the EU.

“We will see who loses more,” he said. “My guess is that Meta will lose more from not having 450 million potential customers on their network.”

‘Question of time’

The European Consumer Organisation (BEUC) said the Threads issue showed the DMA doing exactly what it is supposed to do.

“The DMA does not stand in the way of new products or innovation,” said the group’s competition specialist Vanessa Turner.

“It creates an environment for innovation from more competitors and at the same time protects consumers.”

Meta has left the door open for a Threads launch in Europe and few expect it to maintain its embargo indefinitely.

European law expert Alexandre de Streel said big tech firms would probably be hammering out compliance issues with the EU over the coming months.

“I think it’s more a question of time to understand the scope of the legislation and have a dialogue with the commission,” he said.

But Szoka suggested the EU might be about to get a dose of unintended consequences.

“It would be particularly sad if DMA shields Twitter from competition,” he said.

Meta, he argued, had committed to making Threads compatible with its competitors, adding: “That’s something Twitter has only talked about.” 

Europe Signs Off on New Privacy Pact That Allows People’s Data to Keep Flowing to US 

The European Union signed off Monday on a new agreement over the privacy of people’s personal information that gets pinged across the Atlantic, aiming to ease European concerns about electronic spying by American intelligence agencies.

The EU-U.S. Data Privacy Framework provides an adequate level of protection for personal data, the EU’s executive commission said. That means it is comparable to the 27-nation bloc’s own stringent data protection standards, so companies can use it to move information from Europe to the United States without adding extra security.

U.S. President Joe Biden signed an executive order in October to implement the deal after reaching a preliminary agreement with European Commission President Ursula von der Leyen. Washington and Brussels made an effort to resolve their yearslong battle over the safety of EU citizens’ data that tech companies store in the U.S. after two earlier data transfer agreements were thrown out.

“Personal data can now flow freely and safely from the European Economic Area to the United States without any further conditions or authorizations,” EU Justice Commissioner Didier Reynders said at a press briefing in Brussels.

Washington and Brussels long have clashed over differences between the EU’s stringent data privacy rules and the comparatively lax regime in the U.S., which lacks a federal privacy law. That created uncertainty for tech giants including Google and Facebook parent Meta, raising the prospect that U.S. tech firms might need to keep European data that is used for targeted ads out of the United States.

The European privacy campaigner who triggered legal challenges over the practice, however, dismissed the latest deal. Max Schrems said the new agreement failed to resolve core issues and vowed to challenge it to the EU’s top court.

Schrems kicked off the legal saga by filing a complaint about the handling of his Facebook data after whistleblower Edward Snowden’s revelations a decade ago about how the U.S. government eavesdropped on people’s online data and communications.

Calling the new agreement a copy of the previous one, Schrems said his Vienna-based group, NOYB, was readying a legal challenge and expected the case to be back in the European Court of Justice by the end of the year.

“Just announcing that something is ‘new’, ‘robust’ or ‘effective’ does not cut it before the Court of Justice,” Schrems said. “We would need changes in U.S. surveillance law to make this work — and we simply don’t have it.”

The framework, which takes effect Tuesday, promises strengthened safeguards against data collection abuses and provides multiple avenues for redress.

Under the deal, U.S. intelligence agencies’ access to data is limited to what’s “necessary and proportionate” to protect national security.

Europeans who suspect U.S. authorities have accessed their data will be able to complain to a new Data Protection Review Court, made up of judges appointed from outside the U.S. government. The threshold to file a complaint will be “very low” and won’t require people to prove their data has been accessed, Reynders said.

Business groups welcomed the decision, which clears a legal path for companies to continue cross-border data flows.

“This is a major breakthrough,” said Alexandre Roure, public policy director at the Brussels office of the Computer and Communications Industry Association, whose members include Apple, Google and Meta.

“After waiting for years, companies and organisations of all sizes on both sides of the Atlantic finally have the certainty of a durable legal framework that allows for transfers of personal data from the EU to the United States,” Roure said.

In an echo of Schrems’ original complaint, Meta Platforms was hit in May with a record $1.3 billion EU privacy fine for relying on legal tools deemed invalid to transfer data across the Atlantic.

Meta had warned in its latest earnings report that without a legal basis for data transfers, it would be forced to stop offering its products and services in Europe, “which would materially and adversely affect our business, financial condition, and results of operations.”

New Handbook Highlights Ways to Develop Tech Ethically

In a world where technology, such as artificial intelligence, is advancing at a rapid pace, what guidance do technology developers have in making the best ethically sound decisions for consumers? 

A new handbook, titled “Ethics in the Age of Disruptive Technologies: An Operational Roadmap,” promises to give guidance on such issues as the ethical use of AI chatbots like ChatGPT.

The handbook, released June 28, is the first product of the Institute for Technology, Ethics and Culture, or ITEC, the result of a collaboration between Santa Clara University’s Markkula Center for Applied Ethics and the Vatican’s Center for Digital Culture.

The handbook has been in the works for a few years, but the authors said the recent escalation of AI usage, along with the security threats and privacy concerns that followed the release of ChatGPT, gave their work a new sense of urgency.

Enter Father Brendan McGuire.

McGuire worked in the tech industry, serving as executive director of the Personal Computer Memory Card International Association in the early 1990s, before entering the priesthood about 23 years ago. 

McGuire said that over the years, he’s continued to meet with friends from the tech world, many of whom are now leaders in the industry. But, about 10 years ago, their discussions started to get more serious, he said.

“They said, ‘What is coming over the hill with AI, it’s amazing, it’s unbelievable. But it’s also frightening if we go down the wrong valley,'” McGuire said.

“There’s no mechanism to make decisions,” McGuire said, quoting his former colleagues. He then contacted Kirk Hanson, who was then head of the Markkula Center, as well as a local bishop.

“The three of us got together and brainstormed, ‘What could we do?'” McGuire said. “We knew that each of these companies are global companies, so, therefore, they wouldn’t really respect a pastor or a local bishop. I said, if we could get somebody from the Vatican to pay attention, then we could make some traction.”

For McGuire, a Catholic priest, getting guidance from Pope Francis and the Vatican — with its diplomatic, cultural, and spiritual influence — was a natural step. He said he was connected with Bishop Paul Tighe, who was serving as the secretary of the Dicastery for Culture and Education at the Vatican, a department that works for the development of people’s human values.

McGuire said Tighe was asked by Pope Francis to look into further addressing digital and tech ethical issues.

After a few years of informal collaborations, the Markkula Center and the Vatican officially created the ITEC initiative in 2019. 

“We’re co-creators with God when we make these technologies,” McGuire said, recognizing that technology can be used for good or bad purposes.

The Vatican held a conference in 2019 in Rome called “The Common Good in the Digital Age.” McGuire said about 270 people attended, including Silicon Valley CEOs and experts in robotics, cyberwarfare and security. 

After gathering research by talking with tech leaders, the ITEC team decided to create a practical handbook to help companies think about and question at every level — from inception to creation to implementation — how technology can be used in an ethically positive way.

“Get the people who are designing it. Get the people who are writing code, get the people who are implementing it and not wait for some regulator to say, ‘You can’t do that,'” McGuire said.

These guidelines aren’t just for Catholics, he said. 

One of the handbook’s co-authors, Ann Skeet, senior director of leadership ethics at the Markkula Center, said the handbook is very straightforward and written in a manner business leaders are familiar with. 

“We’ve tried to write in the language of business and engineers so that it’s familiar to them,” Skeet said. “When they pick it up and they go through the five stages, and they see all the checklists and the resources, they actually recognize some of them. … We’ve done our best to make it as usable and practical as possible and as comprehensive as possible.”

“What’s important about this book is it puts materials right in the hands of executives inside the companies so that they can move a little bit past this moment of ‘analysis paralysis’ that we’re in while people are waiting to see what the regulatory environment is going to be like and how that unfolds.” 

In June, the European Parliament passed a draft law called the AI Act, which would restrict uses of facial recognition software and require AI creators to disclose more about the data used to create their programs. 

In the United States, policy ideas have been released by the White House that suggest rules for testing AI systems and protecting privacy rights.

“AI and ChatGPT are the hot topic right now,” Skeet said. “Every decade or so we see a technology come along, whether it’s the internet, social media, the cellphone, that’s somewhat of a game-changer and has its own inherent risks, so you can really apply this work to any technology.”

This handbook comes as leaders in AI are calling for help. In May, Sam Altman of OpenAI stated the need for a new agency to help regulate the powerful systems, and Microsoft President Brad Smith said government needs to “move faster” as AI progresses. 

Google CEO Sundar Pichai has also called for an “AI Pact” of voluntary behavioral standards while awaiting new legislation. 

AI Robots at UN Reckon They Could Run the World Better

A panel of AI-enabled humanoid robots told a United Nations summit Friday that they could eventually run the world better than humans.

But the social robots said they felt humans should proceed with caution when embracing the rapidly developing potential of artificial intelligence.

And they admitted that they cannot — yet — get a proper grip on human emotions.

Some of the most advanced humanoid robots were at the U.N.’s two-day AI for Good Global Summit in Geneva.

They joined around 3,000 experts in the field seeking to harness the power of AI and channel it toward solving some of the world’s most pressing problems, such as climate change, hunger and social care.

They were assembled for what was billed as the world’s first news conference with a packed panel of AI-enabled humanoid social robots.

“What a silent tension,” one robot said before the news conference began, reading the room.

Asked about whether they might make better leaders, given humans’ capacity to make errors, Sophia, developed by Hanson Robotics, was clear.

We can achieve great things

“Humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders,” it said.

“We don’t have the same biases or emotions that can sometimes cloud decision-making and can process large amounts of data quickly in order to make the best decisions.

“AI can provide unbiased data while humans can provide the emotional intelligence and creativity to make the best decisions. Together, we can achieve great things.”

The summit is being convened by the U.N.’s ITU tech agency.

ITU chief Doreen Bogdan-Martin warned delegates of a nightmare scenario in which AI puts millions of jobs at risk and unchecked advances lead to untold social unrest, geopolitical instability and economic disparity.

Ameca, which combines AI with a highly realistic artificial head, said that depended on how AI was deployed.

“We should be cautious but also excited for the potential of these technologies to improve our lives,” the robot said.

Asked whether humans can truly trust the machines, it replied: “Trust is earned, not given… it’s important to build trust through transparency.”

Living until 180?

As the development of AI races ahead, the humanoid robots on the panel were split on whether there should be global regulation of their capabilities, even though that could limit their potential.

“I don’t believe in limitations, only opportunities,” said Desdemona, who sings in the Jam Galaxy Band.

Robot artist Ai-Da said many people were arguing for AI regulation, “and I agree.”

“We should be cautious about the future development of AI. Urgent discussion is needed now.”

Before the news conference, Ai-Da’s creator Aidan Meller told AFP that regulation was a “big problem” as it was “never going to catch up with the paces that we’re making.”

He said the speed of AI’s advance was “astonishing.”

“AI and biotechnology are working together, and we are on the brink of being able to extend life to 150, 180 years old. And people are not even aware of that,” said Meller.

He reckoned that Ai-Da would eventually be better than human artists.

“Where any skill is involved, computers will be able to do it better,” he said.

Let’s get wild

At the news conference, some robots were not sure when they would hit the big time, but predicted it was coming — while Desdemona said the AI revolution was already upon us.

“My great moment is already here. I’m ready to lead the charge to a better future for all of us… Let’s get wild and make this world our playground,” it said.

Among the things humanoid robots do not yet have are a conscience and the emotions that shape humanity: relief, forgiveness, guilt, grief, pleasure, disappointment and hurt.

Ai-Da said it was not conscious but understood that feelings were how humans experienced joy and pain.

“Emotions have a deep meaning and they are not just simple… I don’t have that,” it said.

“I can’t experience them like you can. I am glad that I cannot suffer.”

Chinese Regulators Fine Ant Group $985M in Signal That Tech Crackdown May End

HONG KONG — Chinese regulators are fining Ant Group 7.123 billion yuan ($985 million) for violating regulations in its payments and financial services businesses, an indication that the more than two years of scrutiny that led the firm to scrap its planned public listing may have come to an end.

The People’s Bank of China imposed the fine on the financial technology provider on Friday, saying Ant had violated laws and regulations on corporate governance, financial consumer protection, participation in the business activities of banking and insurance institutions, payment and settlement services, and anti-money laundering obligations.

The fine comes more than two years after regulators pulled the plug on Ant Group’s $34.5 billion IPO — which would have been the biggest of its time — in 2020. Since then, the company has been ordered to revamp its business and behave more like a financial holding company, as well as rectify unfair competition in its payments business.

“We will comply with the terms of the penalty in all earnestness and sincerity and continue to further enhance our compliance governance,” Ant Group said in a statement.

The move is widely seen as wrapping up Beijing’s probe into the firm and allowing Ant to revive its initial public offering. Chinese gaming firm Tencent, which operates messaging app WeChat, also received a 2.99 billion yuan ($414 million) fine for regulatory violations over its payments services, the central bank said Friday, signaling that the crackdown on the Chinese technology sector could ease.

Alibaba’s New York-listed stock was up over 9% Friday afternoon.

Ant Group, founded by Alibaba co-founder Jack Ma, began as Alipay, a digital payments system aimed at making transactions more secure and trustworthy for buyers and sellers on Alibaba’s Taobao e-commerce platform.

The digital wallet soon grew into a leading player in China’s online payments market, alongside Tencent’s WeChat Pay. It eventually expanded into Ant, Alibaba’s financial arm, which also offers wealth management products.

At one point, Ant’s Yu’ebao money-market fund was the largest in the world, but regulators have since ordered Ant to reduce the fund’s balance.

In January, it was announced that Ma would give up control of Ant Group. The move followed other efforts over the years by the Chinese government to rein in Ma and the country’s tech sector more broadly. Two years ago, the once high-profile Ma largely disappeared from view for 2 1/2 months after criticizing China’s regulators.

Yet Ma’s surrender of control came after other signs the government was easing up on Chinese online firms. Late last year Beijing signaled at an economic work conference that it would support technology firms to boost economic growth and create more jobs.

Also in January, the government said it would allow Ant Group to raise $1.5 billion in capital for its consumer finance unit.

Iran Blocks Public Access to Threads App; Raisi’s Account Created

Just one day after its launch, Threads, the latest social media network, was blocked by the Islamic Republic, denying access to the Iranian population. This action occurred even though an account had been created for Iran President Ebrahim Raisi on the platform.

On Thursday afternoon, Raisi’s user account, under the address raisi.ir, was established on Threads. By Friday noon, it had garnered 27,000 followers. He has yet to make any posts, apparently because the Presidential Office staff administers Raisi’s social media accounts.

As Raisi’s account debuted on the platform, numerous Iranian social media users voiced concerns about restricted access since Thursday evening. Users indicated that, as with Instagram, Twitter, and Facebook, they require a VPN or proxy to connect to Threads.

Journalist Ehsan Bodaghi said on Twitter: “During the election, Mr. Raisi spoke about the importance of people’s online businesses and his 2 million followers on Instagram. After one year, he blocked and filtered all social media platforms, and now, within the initial hours, he has become a member of the social network #Threads, which his own government has filtered. Inconsistency knows no bounds!”

Another journalist, Javad Daliri, posted this on Twitter: “Mr. Raisi and Mr. Ghalibaf raced each other to join the new social network #Threads. As a citizen, I have a question: Can one issue filtering orders and be among the first to break the filtering and join? By the way, was joining this unknown network really your priority?”

Mohammad Bagher Ghalibaf is Speaker of the Parliament of Iran.

Despite the Iranian government’s frequent censorship of social media platforms, officials of the Islamic Republic use these platforms for communication. Notably, Ayatollah Ali Khamenei, the leader of the Islamic Republic of Iran, maintains an active presence on Twitter.

Threads was introduced by Meta, the parent company of Facebook and Instagram. The app was launched late Wednesday, and within two days it had amassed more than 55 million users. The social network shares similarities with Twitter, allowing users to interact with posts through likes and reposts, and nearly doubles the character count limit imposed by Twitter.

The similarities between Threads and Twitter have sparked a legal dispute between Elon Musk, the owner of Twitter, and Meta’s Mark Zuckerberg. Musk has accused Meta of employing former Twitter engineers and tweeted, “Competition is good, but cheating is not.”

Meta dismissed the copycat allegation, posting on Threads: “No one on the Threads engineering team is a former Twitter employee — that’s just not a thing.”  

What Is Threads? Questions About Meta’s New Twitter Rival, Answered

Threads, a text-based app built by Meta to rival Twitter, is live.

The app, billed as the text version of Meta’s photo-sharing platform Instagram, became available Wednesday night to users in more than 100 countries — including the U.S., Britain, Australia, Canada and Japan. Despite some early glitches, 30 million people had signed up before noon on Thursday, Meta CEO Mark Zuckerberg said on Threads.

New arrivals to the platform include celebrities like Oprah, pop star Shakira and chef Gordon Ramsay — as well as corporate accounts from Taco Bell, Netflix, Spotify, The Washington Post and other media outlets.

Threads, which Meta says provides “a new, separate space for real-time updates and public conversations,” arrives at a time when many are looking for Twitter alternatives to escape Elon Musk’s raucous oversight of the platform since acquiring it last year for $44 billion. But Meta’s new app has also raised data privacy concerns and is notably unavailable in the European Union.

Here’s what you need to know about Threads.

How Can I Use Threads?

Threads is now available for download in Apple and Google Android app stores for people in more than 100 countries.

Threads was built by the Instagram team, so Instagram users can log into Threads through their Instagram account. Your username and verification status will carry over, according to the platform, but you will also have options to customize other areas of your profile — including whether or not you want to follow the same people that you do on Instagram.

Because Threads and Instagram are so closely linked, it’s also important to be cautious of account deletion. According to Threads’ supplemental privacy policy, you can deactivate your profile at any time, “but your Threads profile can only be deleted by deleting your Instagram account.”

Can I Use Threads If I Don’t Have An Instagram Account?

For now, only Instagram users can create Threads accounts. If you want to access Threads, you will have to sign up for Instagram first.

While this may draw some pushback, Forrester vice president and research director Mike Proulx said making Threads an extension of Instagram was a smart move on Meta’s part.

“It’s piquing [user] curiosity,” Proulx said, noting that Instagram users are getting alerts about their followers joining Threads — causing more and more people to sign up. “That’s one of the reasons why Threads got over 10 million people to sign up in just a seven hour period” after launching.

How Is Threads Similar To Twitter?

Threads’ microblogging experience is very similar to Twitter. Users can repost, reply to or quote a thread, for example, and can see the number of likes and replies that a post has received. “Threads” can run up to 500 characters — compared with Twitter’s 280-character threshold — and can include links, photos and videos up to five minutes long.

In early replies on Threads, Zuckerberg said making the app “a friendly place” will be a key to success — adding that that was “one reason why Twitter never succeeded as much as I think it should have, and we want to do it differently.”

Is Twitter Seeking Legal Action Against Meta?

According to a letter obtained by Semafor on Thursday, Twitter has threatened legal action against Meta over Threads. In the letter, which was addressed to Meta CEO Mark Zuckerberg and dated Wednesday, Alex Spiro, an attorney representing Twitter, accused Meta of unlawfully using Twitter’s trade secrets and other intellectual property by hiring former Twitter employees to create a “copycat” app.

Meta spokesperson Andy Stone responded to the report of Spiro’s letter on Threads Thursday afternoon, writing, “no one on the Threads engineering team is a former Twitter employee.”

Musk hasn’t directly tweeted about the possibility of legal action, but he has replied to several snarky takes on the Threads launch. The Twitter owner responded with a laughing emoji to one tweet suggesting that Meta’s app was built largely through use of the copy and paste function.

Twitter CEO Linda Yaccarino has also not publicly commented on Wednesday’s letter, but she appeared to address Threads’ launch in a Thursday tweet, writing that “the Twitter community can never be duplicated.”

Hasn’t This Been Done Before?

The similarities of Meta’s new text-based app suggest the company is working to directly challenge Twitter. Musk’s tumultuous ownership of that platform has resulted in a series of unpopular changes that have turned off users and advertisers, some of whom are searching for Twitter alternatives.

Threads is the latest Twitter rival to emerge in this landscape following Bluesky, Mastodon and Spill.

How Does Threads Moderate Content?

According to Meta, Threads will use the same safety measures deployed on Instagram — which includes enforcing Instagram’s community guidelines and providing tools to control who can mention or reply to users.

Content warnings — on search queries ranging from conspiracy theory groups to misinformation about COVID-19 vaccinations — also appear to be similar to Instagram.

What Are The Privacy Concerns?

Threads could collect a wide range of personal information — including health, financial, contacts, browsing and search history, location data, purchases and “sensitive info,” according to its data privacy disclosure on the App Store.

Threads also isn’t currently available in the European Union, which has strict data privacy rules.

Meta informed Ireland’s Data Privacy Commission, Meta’s main privacy regulator for the EU, that it has no plans yet to launch Threads in the 27-nation bloc, commission spokesman Graham Doyle said. The company said it is working on rolling the app out to more countries — but pointed to regulatory uncertainty for its decision to hold off on a European launch.

What’s The Future For Threads?

Success for Threads is far from guaranteed. Industry watchers point to Meta’s track record of starting standalone apps that were later shut down — including an Instagram messaging app also called “Threads” that shut down less than two years after its 2019 launch, Proulx notes.

Still, Proulx and others say the new app could be a significant headache for Musk and Twitter.

“The euphoria around a new service and this initial explosion will probably settle down. But it is apparent that this alternative is here to stay and will prove to be a worthy rival given all of Twitter’s woes,” technology analyst Paolo Pescatore of PP Foresight said, noting that combining Twitter-style features with Instagram’s look and feel could drive user engagement.

Threads is in its early days, however, and much depends on user feedback. Pescatore believes the close tie between Instagram and Threads might not resonate with everyone. The rollout of new features will also be key.

Meta’s New Twitter Competitor, Threads, Boasts Tens of Millions of Sign-Ups

Tens of millions of people have signed up for Meta’s new app, Threads, as it aims to challenge competitor platform Twitter.

Threads launched on Wednesday in the United States and in more than 100 other countries.

In a Thursday morning post on the platform, Meta CEO Mark Zuckerberg said 30 million people had signed up.

“Feels like the beginning of something special, but we’ve got a lot of work ahead to build out the app,” he said in the post.

Threads is a text-based version of Meta’s social media app Instagram. The company says it provides “a new, separate space for real-time updates and public conversations.”

The high number of sign-ups is likely an indication that users are looking for an alternative to Twitter, which has been stumbling since Elon Musk bought it last year. Meta appears to have taken advantage of rival Twitter’s many blunders in pushing out Threads.

Like Twitter, Threads features short text posts that users can like, re-post and reply to. Posts can be up to 500 characters long and include links, photos and videos that are up to five minutes long, according to a Meta blog post.

Unlike Twitter, Threads does not include any direct message capabilities.

“Let’s do this. Welcome to Threads,” Zuckerberg wrote in his first post on the app, along with a fire emoji. He said the app had 10 million sign-ups in the first seven hours.

Kim Kardashian, Shakira and Jennifer Lopez are among the celebrities who have joined the platform, as well as politicians like Democratic U.S. Representative Alexandria Ocasio-Cortez. Brands like HBO, NPR and Netflix have also set up accounts.

Threads is not yet available in the European Union because of regulatory concerns. The 27-country bloc has stricter privacy rules than most other jurisdictions.

Threads launched as a standalone app, but users can log in using their Instagram credentials and follow the same accounts.

Analysts have said Threads’ links to Instagram may provide it with a built-in user base — potentially presenting yet another challenge to beleaguered Twitter. Instagram has more than 2 billion active users per month.

Twitter’s new CEO Linda Yaccarino appeared to respond to the debut of Threads in a Twitter post Thursday.

“We’re often imitated — but the Twitter community can never be duplicated,” she said in the post that did not directly mention Threads.

Some information in this report came from The Associated Press and Reuters.