China's Shein fashion retailer faces stricter EU regulation

The European Union on Friday added Chinese-founded fast-fashion company Shein to its list of very large online platforms (VLOP), which subjects it to stricter online rules.

The listing applies to companies with over 45 million users, according to the bloc's Digital Services Act (DSA). When added to the list, companies are required to do more to fight illegal and harmful content, as well as counterfeit products on their platforms.

How did Shein react?

The designation takes effect at the end of August. Shein said it was committed to complying with the rules.

"We share the [European] Commission's ambition to ensure consumers in the EU can shop online with peace of mind, and we are committed to playing our part," Leonard Lin, Shein's global head of public affairs, said in a statement.

Shein has said it has around 108 million monthly active users in the 27-nation EU. The online retailer has often been accused, like many other fast-fashion giants, of worker exploitation and harming the environment. 

What is the EU's Digital Services Act?

The EU has brought the world's largest digital platforms under heavy scrutiny lately, with investigations launched against Chinese-owned video sharing app TikTok, Elon Musk's X (formerly Twitter) as well as Chinese retailer AliExpress.

Another Chinese shopping app, Temu, recently announced it has some 75 million monthly active users, which would also make it subject to the bloc's list.

Some 16 tech firms are already subject to the DSA, including Amazon.com, Apple, Alibaba and Microsoft.

The DSA stipulates that digital platforms must assess the specific risks that the content or products published on them pose to European citizens' rights and safety. Platforms are required to submit a report to regulators, as well as an annual audit verifying that they comply with the rules.

rmt/wmr (AFP, Reuters)

China's ByteDance denies plans to sell TikTok in US

ByteDance denied reports that it intends to sell its popular TikTok app within the United States, after US President Joe Biden signed into law legislation that would effectively ban the app should it not divest from the Chinese tech giant.

"ByteDance does not have any plans to sell TikTok," the company said, as the issue around the ban of TikTok further instigated rising tensions between Beijing and Washington.

During US Secretary of State Antony Blinken's visit to China on Friday, China's top diplomat Wang Yi warned of an increase in the "negative factors" in the relationship between the two countries, claiming that China's right to develop was being "unreasonably suppressed."

What did ByteDance say about a potential TikTok sale?

The Information, a tech-focused US news site, had reported that ByteDance was considering selling its popular app within the US, albeit without its secret sauce — the powerful algorithm that recommends videos to users.

"Foreign media reports about ByteDance exploring the sale of TikTok are untrue," ByteDance posted Thursday on Toutiao, a Chinese-language platform it owns.

TikTok argues it has spent some $1.5 billion (approximately €1.4 billion) on "Project Texas," which would store US data inside the United States. Critics, however, argue data storage is only part of the problem, and that the algorithm must be disconnected from ByteDance.

Even if a sale were to go through, it would be unlikely that the algorithm would be included. A recent Chinese law designated such algorithms as protected technology after former US President Donald Trump attempted to ban TikTok in 2020.

TikTok CEO Shou Zi Chew said the social media company would put up a legal fight in court against the US law calling for its divestment from Chinese ownership, describing the law as a "ban."

Why is Washington trying to ban TikTok?

The newly signed law, which was passed through Congress in conjunction with a large-scale military aid bill for Ukraine, Israel and Taiwan, gives Chinese owner ByteDance nine months to sell the app, with a potential three-month extension if a sale is underway.

Under the law, ByteDance would have to sell the app or be excluded from Apple and Google's app stores in the United States.

TikTok is likely to challenge the law on the basis of the First Amendment, which guarantees the right to freedom of speech, while some of TikTok's 170 million US users are also expected to take legal action.

US and other Western officials have claimed the social media platform allows Chinese authorities to collect data and spy on users, and that it acts as a conduit for spreading propaganda. Beijing and ByteDance have both denied the allegations.

A number of Chinese national security laws compel organizations to assist with intelligence gathering.

TikTok has denied that it could be used as a tool for the Chinese government or that it has ever shared US user data with Chinese authorities, vowing never to do so even if asked.

rmt/sms (AFP, dpa, Reuters)

US House passes bill that could ban TikTok

The US House of Representatives passed on Saturday a bill that could see the popular video creation and sharing app TikTok banned in the country unless it divests from its Chinese parent company, ByteDance.

The bill passed with 360 votes in favor and 58 against. It is expected to go to the Senate for a vote next week.

The bill was included as part of a larger legislative package providing aid to Ukraine, Israel and Taiwan.

TikTok warned that, if passed into law, the bill would "trample the free speech rights of 170 million Americans, devastate 7 million businesses, and shutter a platform that contributes $24 billion (€22.5 billion) to the US economy, annually."

US President Joe Biden has said he would approve the legislation if it makes its way to him.

What does the bill stipulate?

The bill gives Chinese owner ByteDance nine months to sell the app, with a potential three-month extension if a sale is underway. The parent company would also be barred from controlling TikTok's algorithm, which feeds users videos based on their interests.

Steven Mnuchin, who served as US treasury secretary under former President Donald Trump, has said he is interested in acquiring the app and has assembled a group of investors.

The latest bill is a revision of an earlier one passed by the House in March, which required ByteDance to sell TikTok within six months. However, some senators were concerned six months would be too short a deadline.

Why is there opposition to TikTok?

US officials have sounded the alarm over the app's growing popularity, particularly among young people, claiming it could allow Beijing to spy on the app's roughly 170 million users in the country.

A number of Chinese national security laws compel organizations to assist with intelligence gathering. Lawmakers and officials are also wary that Beijing could directly influence TikTok content based on its interests.

TikTok has denied that it could be used as a tool for the Chinese government or that it has ever shared US user data with Chinese authorities, vowing never to do so even if asked.

The bill's opponents argue that Beijing could easily get data on US citizens in other ways, including through commercial data brokers that sell or rent personal information.

Among the opponents of the bill is billionaire Elon Musk, who now owns the social media platform X, formerly Twitter.

"TikTok should not be banned in the USA, even though such a ban may benefit the X platform," Musk said. "Doing so would be contrary to freedom of speech and expression."

rmt/wd (AFP, AP)

How Nvidia's Blackwell superchip could fuel an AI revolution

It wasn't a concert — Taylor Swift was nowhere to be seen. Still, on March 18, thousands of people packed an arena in San Jose, California, to listen and cheer. The star of this show was Jensen Huang, who was on stage showing off a new chip that would be launched later in the year. His two-hour performance has since been watched by nearly 27 million people on YouTube.

Huang, CEO and co-founder of Nvidia, presented in his customary black leather jacket at the company's annual developer conference. Though still not a household name beyond the tech community, Nvidia made waves recently after its market capitalization topped $2 trillion (€1.84 trillion), making it the third-most valuable listed company in the US behind Microsoft and Apple.

Nvidia CEO Jensen Huang had a helping hand while he presented his vision of the future at the company's developer conference in San Jose, California (Eric Risberg/AP Photo/picture alliance)

All this is linked to the company's semiconductors, called graphics processing units (GPUs). Nvidia is a chip designer and outsources manufacturing to specialized chipmakers. Its hardware was initially used for video gaming, but the company found other uses for it, such as cryptocurrency mining, 3D modeling and self-driving vehicles.

Most importantly, the company pivoted to integrating its chips into generative artificial intelligence (GAI) systems — a form of self-learning artificial intelligence capable of generating text, images or other media.

At first glance, technology built just for artificial intelligence (AI) may seem like a narrow market, but the possibilities around the technology have taken the world by storm since the introduction of ChatGPT in November 2022. Today, Nvidia's biggest customers are cloud-computing titans and companies that build AI models.

The new Blackwell superchip

Through its know-how, Nvidia has the chance to power this transformative technology. Currently, it holds around 80% of the global market for such AI chips.

The new chip presented in California is called Blackwell. With 208 billion transistors, it is an upgrade of the company's H100 chip, which Huang said is currently the most advanced GPU in production. The next-generation chip is 30 times quicker at some tasks than its predecessor.

To develop Blackwell, the company spent around $10 billion, according to Huang. Each chip will cost $30,000-$40,000. The company hopes its newest product will increase its hold on the AI chip market.

How does the technology work?

The Blackwell chip is part of an advanced system that the company says can be used "for trillion-parameter scale generative AI." The chips break tasks into small pieces. This parallel processing makes it possible to work out calculations faster. 
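
To make the idea concrete, here is a minimal Python sketch, purely illustrative and not Nvidia code, of the same divide-and-conquer principle: one large calculation is broken into chunks that a pool of workers processes at the same time, standing in for the thousands of cores on a GPU.

    # Minimal sketch of parallel processing; a small process pool stands in
    # for the thousands of cores a GPU applies to the same principle.
    from multiprocessing import Pool

    def partial_sum(chunk):
        # Work out one small piece of the overall calculation.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # Break the task into small pieces...
        chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
        # ...then work on all the pieces at the same time.
        with Pool() as pool:
            total = sum(pool.map(partial_sum, chunks))
        # Same answer as the sequential sum(x * x for x in data),
        # just computed across several workers at once.
        print(total)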

The new chip has a number of features that reduce both latency and energy use, says Bibhu Datta Sahoo, a professor who works at the University at Buffalo Center for Advanced Semiconductor Technologies.

Among other features, the Blackwell chip enables connecting many GPUs so that large AI models can be trained with a smaller carbon footprint. It also incorporates accelerated decompression of most major data formats, which allows data processing to be shifted onto the GPU from other types of chips.

Asked if the chip could change the world, Sahoo told DW that it is difficult to say with so many teams working on things that could revolutionize AI model training. Nonetheless, "the Blackwell chip is a very good step in the right direction." 

Training and working with AI has made huge strides in a relatively short period (Saul Loeb/AFP/Getty Images)

More power, less energy

For Huang, change cannot come fast enough. He pointed out that general-purpose computing has run out of steam and that accelerated computing has reached a turning point. The world is seeing the start of a new industrial revolution, he said in San Jose. Creative ways need to be found to scale up while driving down costs so society can "consume more and more computing while being sustainable."

To make this possible, data centers need to grow and become more powerful. But some fear power-hungry AI chips will just add to energy use and strain grids. Nvidia sees the problem and says its new chip — though more powerful — is more energy efficient.

Experts agree. Based on available data, the Blackwell chip can reduce energy consumption by a factor of 3 to 4 compared to the previous generation of GPUs for training large AI models, said Sahoo.

This energy efficiency is especially important "considering the fact that the power consumption of data centers in the US is expected to reach 35 GW by 2030, up from 17 GW in 2022."
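
Taken together, those numbers can be checked with simple arithmetic. The short Python sketch below, a back-of-the-envelope calculation using only the figures quoted above, shows the implied growth rate in data center demand and what a three- to fourfold efficiency gain means for a fixed training workload:

    # Back-of-the-envelope arithmetic using only the figures quoted above.
    power_2022_gw = 17   # US data center power demand in 2022
    power_2030_gw = 35   # projected demand in 2030

    years = 2030 - 2022
    annual_growth = (power_2030_gw / power_2022_gw) ** (1 / years) - 1
    print(f"Implied growth: {annual_growth:.1%} per year")   # about 9.4%

    # A 3x to 4x efficiency gain cuts the energy needed for the same
    # training workload to a third or a quarter of what it was.
    for factor in (3, 4):
        print(f"{factor}x more efficient -> {1 / factor:.0%} of the old energy")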

The road ahead is not without roadblocks

Despite all the advancements in building powerful chips for the next generation of AI, some have their doubts and fear a financial bubble as investors pile in. 

AI hardware makers have, so far, seen the biggest boom. This is natural since the underlying infrastructure must be in place before software can be used. Now that the infrastructure is coming into place, the technology can expand.

To secure its position, Nvidia is ramping up investments in its networking and software offerings to connect and manage its superchip hardware.

Yet the future holds a number of other challenges. Growing demand for semiconductors could put a strain on global supply chains. Most precarious is the fact that so much chip manufacturing is based in Taiwan.

Finally, the competition is not waiting to see what happens. Big rivals like Intel and AMD plus startups Cerebras and Groq are all working on their own chips. Even Nvidia's biggest customers — Amazon, Google and Microsoft — are getting into the chip design business. 

In an industry where size matters and new technology is quickly outdated, it will be an expensive race to stay on top.

Edited by: Uwe Hessler 

How crypto heists help North Korea fund its nuclear program

A new report by a United Nations panel set up to monitor North Korea's compliance with international sanctions claims Pyongyang continues "malicious" cyberattacks that have netted the regime around $3 billion (€2.76 billion) in the six years to 2023.  

The proceeds have reportedly funded as much as 40% of the cost of its weapons of mass destruction programs.

Analysts told DW that the crypto industry "is extremely concerned" that a powerful state actor is apparently carrying out virtual currency thefts effectively and with impunity, and that international law lags behind the rapid pace of development in the sector.

Similarly, they point out, the leaders of some of the nations that are most at risk of a cyberattack initiated by North Korea — notably South Korea, Japan and the United States — are presently preoccupied with serious political challenges that are taking up their time and energies.

The UN panel released its latest assessment of the state of North Korea's cyber activities on March 20, noting that it is investigating 58 cyber hacks against cryptocurrency-related companies between 2017 and 2023 that the panel believes were undertaken by Pyongyang.

The report concluded that North Korea is continuing its worldwide assault on financial institutions in order to evade UN sanctions and to cover the considerable cost of developing nuclear weapons and long-range missiles.

Funding for weapons programs

"The malicious cyberactivities of the Democratic People's Republic of Korea (DPRK) generate approximately 50% of its foreign currency income and are used to fund its weapons programs," the report said, referring to North Korea by its official name and citing information from an unnamed UN member state.

"A second member state reported that 40% of the weapons of mass destruction programs of the DPRK are funded by illicit cybermeans," the report stated.

Aditya Das, an analyst at the cryptocurrency research firm Brave New Coin in Auckland, New Zealand, said the industry has been shocked at the continuing "reach and complexity" of the crypto hacking efforts of the Lazarus Group, widely understood to be the cover for North Korea's state-run hacking team.

"The scale and quantity of the virtual currency thefts tied to the Lazarus Group — $615 million (€568 million) from Ronin Network, $100 million from Horizon, $100 million from Atomic Wallet — have been unprecedented," he told DW, adding: "It seems that any large crypto entity managing large amounts of crypto is on their radar."

Additionally, beyond these large thefts, Lazarus also appears to be going after smaller groups and individuals "with their wide net and repeatable attack approach," said Das. 

Deploying applications and tokens on the blockchain provides better access to security resources, and the quality of decentralized application audits and standards has improved significantly in recent years, Das said, although contract security expertise is still limited and therefore expensive.

"Another key attack vector to address is human error and phishing," Das emphasized.

"Lazarus is known for its social engineering and phishing campaigns and they target employees of large organizations, send them e-mails and LinkedIn messages with trapdoor attachments."

$615 million stolen from crypto firm

That is how hackers managed to access the Ronin Network in April 2022 — through a sidechain linked to blockchain game Axie Infinity — with the company estimating faked withdrawals cost it nearly $615 million. And the attack was a success for the hackers despite cryptocurrency firms impressing the importance of operational security on employees.

The security of the sector is also hampered by the decentralized, freewheeling, global nature of crypto, which users like but which also makes it difficult for governments to regulate.

"If possible, it would be good to see the actual criminals prosecuted as opposed to the applications they use," said Das. "But we know how good North Korea is at hiding its tracks and denying hacking. So for now, if prosecution is not possible then prevention is the best option."

Unfortunately, with the North pouring resources into its hacking teams because hacking is such a critical source of the funds the regime needs, Das said he expects more attacks to be similarly successful.

Hacking attacks pose more than the threat of ruin to financial companies, pointed out Park Jung-Won, a professor of international law at South Korea's Dankook University.

The North's cyberteams are said to regularly test the defenses of South Korea's government agencies, banking system, defense contractors and infrastructure, including the nation's nuclear power sector.

"We are very familiar with the North's illegal activities and the government and military have in recent years been paying much more attention and devoting additional resources to ensure the security of the nation," he said.

Efforts are also under way internationally to draw up laws regulating the sector globally, though there are serious hurdles that need to be overcome before that can happen.

Cyberattack legislation

"We are trying to create legislation that will fight cybertheft, cyberterrorism and other similar violations, but specific standards are difficult to achieve because they need the consensus of all the states involved," Park said. "Right now, there are lots of loopholes that bad actors, like North Korea, can take advantage of."

It is difficult to reach agreement within South Korea about the laws that are needed to help fend off cyberattacks that threaten the nation, the legal expert said, with ruling and opposition parties unwilling to be seen to agree on any issues less than a month ahead of the election.

"We know that the North has created and trained special hacking teams that are very sophisticated and have been given the sole task of attacking us," Park underlined. "We urgently need to respond to these challenges."

Edited by: Srinivas Mazumdaru

Ramadan: How are influencers changing the Muslim holiday?

Technology has changed the way that the month-long Muslim holiday of Ramadan is observed.

There are now apps that allow those celebrating to do everything from timing prayers to more easily donating to charity via smartphone. An important part of Ramadan is giving to those less fortunate than yourself. 

But there are also more controversial developments in the Ramadan-related digital world and, perhaps unsurprisingly, social media influencers are among them.

Locals in wealthy Gulf states are among the most connected in the world. For example, in the United Arab Emirates (UAE) an estimated 99% of the population is online; in Germany, it's 93%.

During Ramadan, surveys show that online media use becomes even more intense as people in countries like the UAE, Saudi Arabia and Qatar spend a lot more time on their phones and on social media.

At the same time, Ramadan also brings a surge in online shopping. At the end of the month, gifts are exchanged and new clothes are often purchased. Because there is more socializing at prayer and at communal meals, people may also dress up more.

Just like Christmas in Christian-majority countries, Ramadan is also a commercial opportunity: shoppers walk under Ramadan decorations in a Dubai mall on the eve of the holy fasting month, March 22, 2023 (Giuseppe Cacace/AFP/Getty Images)

Influencers' growing Ramadan role

Standing at the intersection of all this is the influencer, often defined as somebody who has a large social media following and can "influence" followers' opinions and purchasing decisions.

"[Influencers'] incorporation of religious themes can range from a discussion on religious practices to suggestions about product purchases and which Ramadan streaming drama to watch," Gary Bunt, a professor of Islamic studies at the University of Wales and author of the book, "Islamic Algorithms," told DW. "As with other sectors, Muslim influencers may also market their own products or be sponsored to promote others."

"Ramadan has always been a focal point for accounts with an Islamic edge, going back to the 1990s," Bunt noted. But, he added, the current increase in influencer activity in the Middle East "aligns with the expansion of digital platforms, reduced digital divides and particularly the growth of TikTok."

Popular themes with Gulf state influencers include lavishly decorated tables prepared for "iftar," the large meal eaten after sundown when families break their fast, or collaborations with local fashion or beauty brands for special Ramadan collections. Food-focused influencers help launch special Ramadan meal deals or promote restaurants.

But just as some Europeans criticize Christmas's increasingly consumerist nature, there are fears in the Middle East that Ramadan, too, is becoming overly commercial.

Iyad Barghouthi, a sociology lecturer based in the occupied West Bank city of Ramallah, told DW that more influencers have not led to a rise in religious devotion. Instead, Ramadan customs are becoming more elaborate, less spontaneous and more superficial, he argued.

"There's something of a link between the increasing commercialization of Ramadan and influencers and marketing," Marc Owen Jones, an associate professor specializing in digital humanities at Qatar's Hamad bin Khalifa University, confirmed. "[Commercialization of Ramadan] is not necessarily a new thing though. This is just a new way of doing it," he said. "And it's definitely growing — which has prompted a backlash. If you look at comments on social media, there are plenty of people who resent those perceived as commercializing religious events."

The current conflict in Gaza gives them even more reason to be annoyed with influencers marketing extravagant dinners or designer bling. "There's definitely some pressure now for people not to share so much indulgence because of what's happening in Gaza," Owen Jones told DW. 

Bringing social change to Ramadan?

However, there is also evidence that influencers can have a positive impact during Ramadan, with some academics suggesting they may even be changing the way the religious holiday is observed.

For example: In the recent past, it was mostly women who were seen cooking for communal feasts. But that is changing. "We're starting to see more dads and male chef influencers in the kitchen, helping around the house and even decorating for Ramadan, which wasn't the norm several years back," Ailidh Smylie, managing director at Dubai-based social media marketing agency Socialize, wrote in 2022. "Brands are definitely stepping away from the cliche and traditional ways of portraying Ramadan."

Influencers have also been enlisted to work with campaigns advocating less food waste during Ramadan. For two years now, the United Nations Environment Program has worked with Lebanese chef Leyla Fathallah to reduce food waste during festivities. In Oman, a 2023 project called "Be Mindful" enlisted influencers to get locals thinking about how big a meal they really needed during Ramadan.

Influencers also allow younger Muslims to connect to religious traditions in a more personal way, observers say.

"A new generation of social media influencers has recently emerged in the Muslim world," the authors of an April 2022 study, Digital Islam and Muslim Millennials, reported. "They are Western-educated, unique storytellers, and savvy in digital media production."

The paper calls this new generation "GUMmies," short for "global urban Muslims," and argues that the way they interact with their own religion, including Ramadan, is evolving.

"Their religious practice focuses on storytelling rather than dogmatic texts, on human relationships, civil life, and what it means to be and act as a human being who happens to be a Muslim," the researchers wrote. GUMmies are "still interested in matters of worship — such as how to pray and fast in Ramadan — but they live in an information and media ecosystem that demands engagement, interaction, immediacy and personalization."

Influencers, the study concluded, may not just be commercializing or somehow changing Ramadan habits, they may also be "a potential sign of profound cultural change."

With additional reporting by Alla Ahmed.

Edited by: Jon Shelton

Apple pulls the plug on its self-driving e-car project

Traditional car manufacturers in Europe, Asia and the US face a number of problems, some of them of their own making. To make matters worse, the creeping concern that tech companies could crowd legacy firms out of the personal mobility market looms over manufacturers like the figurative sword of Damocles. This threat materialized when Google parent Alphabet and Apple announced their self-driving car endeavors, Waymo and Project Titan.

Traditional car manufacturers worried they would be reduced to hardware delivery services, providing motorized base frames for "smartphones on wheels." As of this week, however, these fears might be a thing of the past, with Apple announcing that it was canceling its Titan electric vehicle project.

Instead, New York-based business daily The Wall Street Journal reported that some of the 2,000 car-developing staff members would shift to working on artificial intelligence (AI). That move could affect several hundred hardware developers, according to the US media group Bloomberg.

Investors seemed to be relieved to hear that Apple had finally put its autonomous car project in the rearview mirror (Andrej Sokolow/dpa/picture alliance)

The AI field, specifically using programs such as ChatGPT to create new content from ever-growing mounds of data, is considered far more promising. Despite this, layoffs are to be expected.

E-cars a difficult market for Apple

Many insiders had been generally reserved about Apple's efforts to enter the auto industry, arguing that the sector's business model was entirely different from that of the tech industry.

They pointed out that electronic components for car manufacturing could not simply be sent by mail, or digitally uploaded to customers' computers. Instead, the business required shipping large and heavy parts across the globe, and maintaining them regularly.

Insiders also contended that, not least due to the many regulations in the industry, the development cycles for automotive products last much longer — not weeks and months, but years.

Finally, the profit margins in the automotive sector are much lower than in Apple's tech business. From the very beginning, car manufacturers had cautioned that building and selling a car was not easy.

In the past few years, there was speculation that Apple might wind down its own development efforts simply by buying up a large component supplier or car manufacturer. The potential involvement of British luxury car builder McLaren was the subject of much debate.

Elon Musk, the founder of the US-based automotive and clean energy company Tesla, said he once offered to sell his car manufacturing business to Apple amid production challenges hounding Tesla's Model 3 line. But, as Musk recounted, Apple CEO Tim Cook wasn't even interested in talking.

Billions down the drain?

Project Titan cost Apple over $10 billion (€9.3 billion), according to a report this week in The New York Times. Bloomberg estimated that car development costs ran to about $1 billion each year.

Other market observers, such as analysts at the US consulting firm Guidehouse, believe Apple invested between $15 billion and $20 billion in development. That's money that will now be freed up for other projects, including in the car sector.

In a report on Apple's exit from the autonomous vehicle race, Wall Street analyst Erik Woodring wrote that "Apple is exhibiting some welcome cost discipline on longer-tailed future projects." He suggested that AI might be one of these endeavors.

Speaking to Apple's work on electric vehicles (EV) and autonomous vehicles (AV), he added that "Apple's EV/AV efforts were too far behind well-funded competitors to represent a viable path towards commercialization and product differentiation."

The stock market responded to the announcement with a fair bit of relief, even if the cancellation did not lead to a jump in the share price. But a small uptick in the company's stocks after the decision was announced indicated that investors were glad to see Apple put its expensive car project in the rearview mirror.

'A good decision' to stop Project Titan

Most investors agree that Apple's cancellation of Project Titan was the next logical step. Jonathan Curtis, chief investment officer of the US-based Franklin Equity Group, told the German business newspaper Handelsblatt he believed it was the right choice for Apple to get involved in the automotive industry.

Citing the well-worn adage that cars are essentially "computers on wheels" nowadays, he added that his trust had invested billions in Apple. Curtis also told the German paper that he thought Apple's cancellation of Project Titan was "a good decision."

Now, Curtis argued, Apple would have to focus on ensuring that the AI on its phones ran smoothly. He pointed to one of Apple's uncharacteristic failures, asking: "What is one of the worst products that Apple's released in the last 10 years?"

The answer, he said, was Siri, Apple's virtual voice assistant. "If Apple can fix Siri," he argued, "they could launch a massive new iPhone cycle." Then the tech giant could sell new services that offered real added value in association with Siri software, Curtis said.

At a digital shareholders' meeting last week, Apple CEO Tim Cook announced that his company had new AI features in store. Traditionally, such promises to "break new ground" are saved for Apple's annual Worldwide Developers Conference in June.

This article was originally written in German.

EU launches TikTok probe over child protection concerns

Since last year, Chinese-owned TikTok and other large internet platforms have been obliged to comply with the European Union's (EU) Digital Services Act (DSA). Compared to similar regulations around the globe, the requirements laid out in the DSA are relatively stringent.

Two months ago, the European Commission, which provides oversight, expressed doubt that TikTok was adhering to regulations. It launched preliminary investigations and requested a statement from the platform. According to the European Commission, the statement was unsatisfactory.

On February 19, Thierry Breton, EU commissioner for the internal market, announced on X, formerly Twitter, that his office had opened a formal investigation into TikTok. Should the Commission discover that TikTok is in violation of terms, the platform could face substantial fines of up to 5% of its daily revenue. It's unknown, however, how high this sum would be, as parent company ByteDance does not disclose TikTok's finances.

TikTok is not the only platform the EU is taking aim at. An investigation into X for the possible spread of false information and hate speech has been underway since December. And a dozen other online platforms are currently subject to preliminary investigations.

Globally, it is estimated that over one billion users are active on TikTok each month. Proceedings against the platform, which says it has about 142 million active users in Europe, could drag on for months as the online video-sharing platform could also take legal action against any decisions made by EU regulators.

Insufficient youth protection?

Among other things, EU Commissioner Breton has accused TikTok of insufficiently implementing youth protection guidelines on its platform. These stipulate that teenagers under 16 only be allowed to use TikTok with certain restrictions, and tweens and children under 13 only be able to use heavily trimmed-down versions of TikTok.

TikTok's age verification procedure could be inadequate, as the platform relies on users' entries without formally checking their age whenever a new account is opened. This problem is not unique to TikTok, but extends to social media platforms belonging to the Meta corporation, X, and even online pornography platforms. On its website, TikTok admits that it depends on users' honesty when it comes to such information.
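
To see why self-declared ages are so easy to game, consider the following hypothetical Python sketch, which is not TikTok's actual code, of a sign-up check that simply trusts whatever birth year the user types in:

    # Hypothetical sketch of a self-declared age gate; not TikTok's real code.
    from datetime import date

    MINIMUM_AGE = 13

    def account_allowed(claimed_birth_year: int) -> bool:
        # Trusts the user's own entry: nothing stops a child born in 2015
        # from simply typing 2000 instead.
        age = date.today().year - claimed_birth_year
        return age >= MINIMUM_AGE

    print(account_allowed(2015))  # False: the honest entry is rejected
    print(account_allowed(2000))  # True: the same child, lying, gets in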

In theory, TikTok does limit users under the age of 16 to one hour of viewing per day, but it only takes a few clicks to lift these restrictions. What's more, the DSA also prohibits personalized advertising for people under 16, something EU regulators believe is difficult to ensure without proper ID checks.

Is TikTok addictive?

The European Commission has also criticized the way that TikTok's algorithm floods users with an endless stream of videos tailored to their personal preferences. This, in turn, can draw users deeper into the app in ways that "stimulate behavioral addictions."

TikTok has argued that the algorithm can be switched off in accordance with DSA regulations. However, the option toggle is well-concealed within the app's menu, requiring numerous clicks to reach. Hard-to-find settings and menu options are also not unique to TikTok. But for years, media experts have pointed out that TikTok has a unique potential for addiction because so many of its users are under 25.

"There are indications that certain design elements can foster addictions," Julian Jaursch, expert for online platform regulation at the Berlin-based think tank Stiftung Neue Verantwortung (SNV), told German state television broadcaster ZDF. "This goes for TikTok, as well as for other platforms."

"Gathering evidence as to whether or not this is true won't be an easy task for the European Commission," he added.

In January, the CEOs of TikTok, X and Meta testified before the US Senate Judiciary Committee (Manuel Balce Ceneta/AP/picture alliance)

TikTok hopes to clarify

In a statement, TikTok announced it would continue to promote the improvement of youth protection guidelines. The platform also said, "we'll continue to work with experts and industry to keep young people on TikTok safe, and look forward to now having the opportunity to explain this work to the Commission in detail."

Aside from fortified youth protection, EU regulators want a complete list of TikTok's advertising clients. Moreover, the DSA requires TikTok to make its data accessible to researchers.

In January, TikTok CEO Shou Zi Chew was in Brussels to allay EU concerns about his video-sharing platform. Days later, he responded to probing questions from the US Senate. Above all, senators appeared concerned that TikTok relayed its users' data to affiliated corporations and that its software could even be used for spying. During a similar Senate hearing in 2023, Chew admitted that he did not allow his own children — six and eight years old at the time — to use the platform.

In Singapore, where his children live, there are no age limitations to the popular platform. In Germany, TikTok recently made headlines when it was discovered that the far-right nationalist party Alternative für Deutschland (AfD) was more popular on the platform than any of its competitors.

This article was translated from German.

AI chip race: Fears grow of huge financial bubble

Sam Altman caused more than a stir in early February when he called for $5 trillion to $7 trillion (€4.65 trillion to €6.5 trillion) in global investment to produce more powerful chips for the next generation of artificial intelligence (AI) platforms. Many industry analysts were left open-mouthed at the figure cited by the OpenAI chief executive, which is equivalent to almost a quarter of annual US economic output.

Altman wants to solve some of the major issues faced by the AI sector, which include a major shortage of the chips and semiconductors needed to power large language models like his firm's ChatGPT, the Wall Street Journal reported earlier this month.

The US entrepreneur has warned that vastly more powerful computing will be needed to help AI eventually overtake human intelligence. Altman recently held discussions with potential investors in the United Arab Emirates, the business daily said.

Unprecedented investment demands

"Asking for $7 trillion is just indecent," Pedro Domingos, professor emeritus of computer science and engineering at the University of Washington, told DW. "It is an order of magnitude more than the entire chip industry has spent in its history."

Domingos said Altman would likely settle for around $700 billion in backing, which would still dwarf the value of the entire AI chip sector. Canadian-Indian analytics firm Precedence Research recently calculated the industry could be worth some $135 billion by 2030.

Others think that Altman's projection might not be so far out if the ambition is for AI to eventually become smarter than humans in every way.

"Right now, ChatGPT4 is only text,” Dylan Patel, chief analyst at SemiAnalysis, told DW. "But what if you add images, video, audio and motorized tactile feedback? And what if we assume that AI does outpace humans on all fronts? That is going to cost hundreds of billions or even trillions of dollars."

In the latest sign of the speed that AI is progressing, OpenAI last week unveiled a platform called Sora, for creating high-quality short videos from a simple line of text.

OpenAI CEO Sam Altman has called for a $5-7 trillion investment in advanced AI chip production (Alastair Grant/picture alliance/ASSOCIATED PRESS)

AI chip race heats up

Before Altman's projection was made public, the world's major governments — the United States, China, Japan and several European countries — were already trying to secure or maintain a share of the chip industry for themselves.

Over the past 18 months, Washington has also levied sanctions on Beijing to stop Chinese firms from gaining access to US-designed chips. But rather than hobble Beijing's ability to develop advanced AI computing power, Domingos said the sanctions were "counterproductive."

"There are many ways that China can obtain US chips through intermediaries. But those sanctions also encourage China to develop its own capacity and be less reliant on US chips," the author of the book "The Master Algorithm" said.

Indeed, the US sanctions have emboldened Chinese leaders who have pledged to step up their investments in AI chip production.

China catching up fast

"China is subsidizing AI chips to the tune of $250 billion over the next decade to build a manufacturing supply chain and catch up," Patel noted. He said China is currently about four to five years behind Taiwan, the global leader in chip manufacturing, and two to three years behind in semiconductor design — a race currently being won by US chip firms.

Other countries may struggle to enter the AI chip-producing ring, as they don't have huge tech firms able to commit tens of billions of dollars in investment, like Microsoft, which backs Altman's OpenAI, and Google, which last year unveiled its own AI chip.

"If Germany wants to be a leader in AI, they're going to have to subsidize it because the likes of Mercedes Benz and Daimler are not necessarily going to invest a ton on advanced chips," said Patel.

Advanced chips a 'strategic commodity'

Economic historian Chris Miller, the author of the book "Chip War," told DW that more countries have realized that ultra-high-speed chips have become a "strategic commodity," amid the current geopolitical standoff between world powers.

He predicted that the US government and others "will be quite sensitive about where the chip plants are located and who's involved in their production" to avoid autocratic countries like China from using AI for nefarious purposes.

The world's largest chip designer NVIDIA has seen its market value more than triple in two years (IMAGO/Pond5 Images/Sundry Photography)

NVIDIA leads stock market melt up

NVIDIA is the market leader in AI chip design. The Santa Clara, California-based firm is now valued at $1.8 trillion, making it the third-largest company on the US stock market and leaving chip rivals like AMD and Intel trailing far behind.

Amid a stock market melt up — a period of rapid market growth fueled by investor optimism — NVIDIA has seen its value rise by $296.5 billion in just the last month, a surge most analysts think is unsustainable.

Domingos likened the current investor craze for AI to a "balloon that is inflating very rapidly," until it bursts.

"A lot of people, companies, countries are going to lose a ton of money. There's going to be a lot of carnage,” he told DW. "But in the longer term, AI will be like the Internet. Who cares about the dotcom bust these days? The Internet is a reality, it's all-pervasive and the basis for the next advancement in technology."

Edited by: Kristie Pladson

Facebook at 20: From hope to disillusionment

Facebook, the world's largest social media network, is 20 years old. More than 3 billion people are active on the platform at least once a month — more than one in three people on the planet. That's quite the success story.

But just a few days before its 20th anniversary, any celebratory mood was dampened when Facebook founder and Meta CEO Mark Zuckerberg faced harsh criticism at a hearing before the US Senate. "You have blood on your hands," Republican Senator Lindsey Graham shouted at Zuckerberg. "You have a product that's killing people."

The subject of the hearing on January 31 was the failure of major internet platforms to protect children and young people. Democrat Dick Durbin, who chairs the Senate Judiciary Committee, summed up the criticism.

"Their design choices, their failures to adequately invest in trust and safety, their constant pursuit of engagement and profit over basic safety have all put our kids and grandkids at risk," he said in his opening remarks.

The dangers of social media are now being widely discussed. In the US, it is being held partly responsible for a mental health crisis among young people.

Meta CEO Mark Zuckerberg testified before a Senate Judiciary Committee hearing on child safety on January 31 (Susan Walsh/AP/picture alliance)

In an interview with DW, Gerd Gigerenzer, a German psychologist and a specialist on risk research, listed some of the harmful effects of social media. And it's not just that more and more people are finding it harder and harder to concentrate. "Some studies have shown an increase in insecurity, low self-esteem, depression and even suicidal thoughts," he said.

In the US, for example, another indicator could be the increased suicide rate among people between the ages of 10 and 25, which shot up by 60% in the decade between 2011 and 2021.

A hopeful start for social networking

And yet Facebook started out so harmlessly. Those were the early days of the digital revolution, when the internet promised transparency and participation. While traditional media once operated on the model of communicating from one to the many, this new form — communicating from everyone to everyone — seemed to bring more freedom, participation, and democracy.

Facebook was an exciting social network where people could quickly find like-minded people, share their vacation photos and stay up-to-date with what their friends were up to. "In the beginning, Facebook was seen as having a rather altruistic mission: people hoped that connecting people would make the world a better place," recalled Berlin-based media scientist Martin Emmer.

But it became a network with far-reaching consequences. Take, for example, the great hopes initially raised by the Arab Spring uprisings in 2011. Because of the network's role in organizing demonstrations and resistance, it was sometimes called the "Facebook revolution."

Facebook, especially in tandem with the rapid advancement of the smartphone, addressed one of the oldest of human needs with the most cutting-edge technology. "Humans are social creatures," said Emmer. "And these platforms have achieved something unlike any other medium before them: they allow us to interact with other people at many different levels, subtly calibrated according to different types of friends. They allow us to take part in the lives of others."

Caught between empowerment and disempowerment

However, there is a price for the use of the network's infrastructure: users pay twice — with their data and with their attention span.

Attention is a scarce commodity, and advertisers are more than happy to pay for it. Especially when precise personality profiles make it possible to deliver messages to potential customers with pinpoint accuracy.

This is why platform providers collect as much data as possible from their users, with every like providing another data point. And with detailed knowledge of users' interests, likes and dislikes, timelines can be flooded with whatever kind of content will keep users on the platform for as long as possible.
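
A deliberately simplified sketch illustrates that logic. The Python below is hypothetical and not any platform's real ranking code: every past like becomes one more data point, and the timeline is then sorted by whichever posts are predicted to hold the user longest.

    # Hypothetical, deliberately simplified engagement ranking;
    # not any platform's real code.

    def predicted_stickiness(post, liked_topics):
        # Every past like is a data point: posts matching the user's
        # profile are predicted to hold their attention longer.
        overlap = len(post["topics"] & liked_topics)
        return overlap * post["base_engagement"]

    def build_timeline(posts, liked_topics):
        # Fill the timeline with whatever keeps the user on the platform.
        return sorted(posts,
                      key=lambda p: predicted_stickiness(p, liked_topics),
                      reverse=True)

    posts = [
        {"id": 1, "topics": {"politics", "outrage"}, "base_engagement": 9.0},
        {"id": 2, "topics": {"gardening"}, "base_engagement": 2.0},
    ]
    print([p["id"] for p in build_timeline(posts, {"politics"})])  # [1, 2]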

For a long time, the impact this had on individuals and society was of no concern to those running the platforms. The growing polarization of society, the increasing viciousness of political discussions, the proliferation of the wildest conspiracy theories — all of this has been linked with Facebook and other platforms.

Thanks to their communication power, social networks can also be exploited for political purposes. In 2016, allegations were made that Russia had used Facebook to influence the outcome of the US presidential election. Two years later, Facebook became embroiled in the Cambridge Analytica scandal: largely without the knowledge of its users, the consulting firm had analyzed the data of around 50 million Facebook profiles with the aim of influencing voter behavior through highly personalized messages. Facebook groups like "Stop the Steal" also played a role in the 2020 US presidential election, helping former President Donald Trump to propagate the myth of a stolen vote.

AI, social media could influence mega election year

2024 will be a major election year. Over half of the world's population will be going to the polls: in India and Indonesia, in Pakistan and Russia, in the European Union and in the United States. And Jaron Lanier, a US computer scientist and technology critic, is worried. 

"The rise of deepfakes from AI and other new applications of technology to manipulate people are coming about, and I think many people will not be prepared for that," Lanier told DW. Back in 2018, Lanier warned of the dangers of social media in his book "Ten Arguments for Deleting Your Social Media Accounts Right Now."

But on the positive side, Lanier also believes many people are slowly becoming aware of how they are being manipulated. "Whether the number of people who are is enough to make a difference, I don't know," he said.

Berlin-based network scientist Philipp Lorenz-Spreen agrees that societies have allowed data companies to dictate the terms for too long. "For 20 years now, we have allowed Web 2.0, the internet in which everyone can share content, to develop into something that is almost entirely commercial," he said. "We have allowed this attention economy to proliferate."

Politicians playing catch-up with tech giants

Meanwhile, politicians have been waking up and trying to catch up in the race with the technology giants. In 2022, the European Union passed the Digital Services Act. The aim is to speed up the removal of illegal content, such as hate speech. It also seeks to better protect the fundamental rights of users — including freedom of speech.

In addition, researchers will finally get access to data from the internet giants. "There's progress being made toward transparency so that we can open up this black box a little and see how this machine works," said a delighted Lorenz-Spreen.

However it works, it's extremely profitable. Facebook's parent company Meta, which also owns Instagram and WhatsApp, earned so much money from advertising in the last quarter of 2023 that Meta decided for the first time to pay out dividends to its shareholders on its 20th anniversary. For them, at least, there may be something to celebrate after all.

This article was originally written in German.

South Korea invests big in becoming a global chip leader

South Korea plans to build the world's largest semiconductor cluster in Gyeonggi Province by investing around $470 billion (€430 billion) over the next 23 years into the massive production park in partnership with major electronics companies Samsung Electronics and SK Hynix.

To support the plan, the government has proposed measures including tax incentives for investments and initiatives to boost competitiveness. South Korea aims to increase self-sufficiency in essential materials, parts, and equipment for chip production to 50% by 2030.

South Korea currently dominates the production of DRAM and NAND memory chips, which are used for managing and storing data on devices such as PCs, smartphones and SD cards, holding a global market share of over 60%. South Korea aims to increase its share of other chips and processors.

Samsung also seeks to overtake the Taiwan Semiconductor Manufacturing Company's (TSMC) leading position in producing wafers, which are thin disks of semiconducting material, mainly silicon, that act as the first layer in creating semiconductor components.

The Taiwanese are the global market leader in the foundry business with the contract manufacturing of processors for other companies.

Larger than 30,000 soccer fields

President Yoon Suk-yeol has said that the ambitious "mega cluster" is expected to generate nearly 3.5 million jobs. To achieve this, he emphasized the necessity of expanding nuclear energy to meet the energy demands of the semiconductor sector.

A semiconductor "cluster" is a group of facilities that perform research and all production steps of semiconductors all in a single area. South Korea's cluster comprises various industrial zones in the province of Gyeonggi, with a total area of 21,000 hectares (52,000 acres), which is 21 million square meters, or the size of almost 30,000 soccer fields. 

By 2047, plans call for an additional 16 production facilities to supplement the existing 19. Among these, three plants and two research factories are scheduled for completion by 2027. 

According to the industry ministry, Samsung and SK Hynix plan to produce 7.1 million wafers per month there by 2030.

"If we complete the construction of the semiconductor mega cluster at an earlier date, we will achieve the world's leading competitiveness in the chip sector and provide quality jobs for young generations," said trade and industry minister Ahn Duk-geun.

Samsung Electronics is set to invest 500 trillion won ($375 billion) in the project, allocating 360 trillion won for six new production facilities in Yongin, located 33 kilometers (20 miles) south of Seoul. 

Additionally, 120 trillion won will be directed towards building three new factories in the Pyeongtaek production complex, situated 54 kilometers south of Seoul, along with three research factories in Giheung.

Official figures indicate that the second-largest chip manufacturer, SK Hynix, will contribute 122 trillion won to construct four new factories in Yongin.

Decoupling from China

With its long-term cluster plan, South Korea is responding to the changing climate in the semiconductor industry.

South Korea sees its chip industry at risk of losing importance in the power struggle between China and the USA. In 2022, South Korea exported semiconductor goods worth $129 billion, which accounted for around 19% of national exports.

A reduction in national production would hit South Korea's economy hard. "Korea is at the forefront of semiconductors, which creates economic opportunities but also makes companies vulnerable," said Troy Stangarone from the Korea Economic Institute in Washington.

The US is promoting the establishment of semiconductor production facilities with $52.7 billion in subsidies under the "CHIPS and Science Act." This is why Samsung is building a $17 billion chip plant in the US state of Texas.

At the same time, China is driving forward the development of its domestic semiconductor industry after the US severely restricted semiconductor exports.

China has been bolstering its domestic production of microchips (picture alliance/dpa)

At the same time, a new cluster of TSMC and Sony processor plants is being built on the southwestern Japanese island of Kyushu. Meanwhile, OpenAI boss Sam Altman is looking for producers of chips for the development and application of artificial intelligence.

Thanks to an indefinite special permit, South Korean manufacturers have so far been exempt from US restrictions and are allowed to export equipment and machinery to China. The Samsung plant for NAND memory chips in Xian and SK Hynix's NAND factory in Dalian benefit from this.

However, South Korean doubts about production in China are growing. On January 24, SK Hynix had to deny again that it was planning to sell the plant in Dalian, which it had only taken over from Intel for $9 billion in December 2020.

According to market researcher Bloomberg Intelligence, the US controls five of the ten stages of the chip supply and production chain, including etching, coating and doping, while Japan and the Netherlands control the remaining areas, such as wafer cleaning and lithography.

Thus, South Korea's key role as a chip manufacturer depends on technologies, materials and expertise mainly belonging to the US and its allies. South Korean manufacturers are therefore focusing on cooperation with US companies to strengthen their national production.

This article was translated from German.

Journalists demand change in Google, Meta's media policies

Google, Meta and other tech giants are making it more challenging to access independent media content in Belarus, exiled journalists have told the European Commission.

By complying with restrictions Alexander Lukashenko's government has imposed, these companies have "become tools for a totalitarian and authoritarian regime to put pressure on civil society instead of helping to promote independent media," the exiled journalist Natalia Belikova told the Financial Times in January.

"It is becoming increasingly clear that technology companies have enormous power — perhaps, in some cases, even more than those in political power," Belarusian opposition leader Svetlana Tsikhanouskaya told DW in January on the sidelines of the World Economic Forum in Davos. "It is important that these companies are on the side of good and committed to promoting democratic values."

Russian media have the same problem. "It is clear to us that algorithms by Google, the world's largest search engine, inevitably contribute to Russian state propaganda because links to state and pro-government media dominate in search results and recommended news generated for any particular user," said Sarkis Darbinyan, co-founder of the digital rights organization Roskomsvoboda.

If a user tries to access a blocked media company, the search engine algorithm marks the link as inactive, which makes the website disappear from the search results. Meanwhile, unblocked media with similar headlines appear instead.

Sarkis Darbinyan, co-founder of the digital rights organization Roskomsvoboda (picture-alliance/dpa/RIA Novosti/R. Krivobok)

Lev Gershenzon, the former head of the news service at Russia's largest search engine, Yandex, and founder of the news portal The True Story, told DW that there is another problem.

"Google's algorithms don't take into account that authoritarian regimes expend enormous resources to popularize websites that benefit them artificially."

Google focuses too much on the number of views, he said, which prioritizes websites with fake news and conspiracy theories.

"When the algorithm was developed, the idea was initially good: to prevent sites with illegal content from appearing in the search results," Darbinyan said.

But the algorithm can also be used with bad intent. "We want the platforms to remove illegal content from the internet," said Matthias Kettemann, co-head of the internet policy section at the Max Planck Institute for Comparative Public Law and International Law. "That's important. But if a state abuses this, for example, by declaring any criticism of the government illegal, that is an abuse of the law. Then the same tools can be used to make legitimate criticism disappear from the internet."

'No public dialogue'

In the summer, Roskomsvoboda was among the many human rights organizations that signed on to a paper presented to Google by US digital rights NGO Access Now at the annual global RightsCon conference in Costa Rica. The paper highlights the challenges independent media face because of restrictions imposed by IT giants.

Following the sanctions against Russia, many tech companies have shut down their offices, services and support there, restricting user access, the report states. The shutdown has made the work of independent media increasingly difficult, with Russian society becoming increasingly isolated in the face of state propaganda, according to the report.

Lev Gershenzon, former head of news for Russian search engine Yandex (private)

"There is still no public dialogue with the big tech companies," said Gershenzon, who has been working on the issue for about a year.

Darbinyan said Google was "not particularly interested in changing its algorithms because of a few human rights groups." Meta, Darbinyan said, has been more open to civil society.

Truly protecting employees?

Kettemann said Google and other companies in Russia, Belarus and China were in a bind — forced to comply with the authorities' requirements to prevent endangering their employees. If the European Commission were to threaten Google with sanctions to force the unblocking of independent media websites, the company could be banned altogether in Russia. "And that, in turn, would result in even more severe cuts, both in terms of its own revenues and also for the communication environment," Kettemann said.

Matthias Kettemann, co-head of the internet policy section at the Max Planck Institute for Comparative Public Law and International Law (private)

Darbinyan said Google had already effectively left the Russian market. "Paid products no longer work in Russia due to problems with Visa and Mastercard," Darbinyan said. "Google has not even tried to restore payment options for users or monetized Russian channels to support independent media and bloggers who live off advertising revenue."

To side with independent media in Russia and Belarus, the search engine would have to change its algorithm worldwide, which would be very expensive.

"It could also severely affect SEO optimization," Darbinyan said, "which is used by thousands or even millions of companies on the internet."

According to the Financial Times report, EU officials said they had no basis for imposing fines or taking legal action against IT companies that don't help dissident journalists and writers in Belarus and elsewhere.

"Formally, the EU Commission has few options with regard to the behavior of an American company in a third country," Kettemann said. "However, as part of enforcing the Digital Services Act, the commission can, of course, monitor platforms that are also active in Europe — at least concerning their activities in Europe. In this context, it can also provide guidance on how these platforms should behave in non-European countries."

Gershenzon said coercion by politicians and public representatives would be "a bad path," as officials don't fully understand how technology works. Instead, the tech companies would do better to recognize the problem, take responsibility and act. "But this has yet to be seen," Gershenzon said, "and the fight against fakes and propaganda is only occurring verbally."

This article was originally published in Russian.

Protests spread in Russia over jailed activist

Artificial intelligence: Potential and pitfalls in 2024

Artificial intelligence has gone mainstream.

Long the stuff of science fiction and blue-sky research, AI technologies like the ChatGPT and Bard chatbots have become everyday tools used by millions of people. And yet, experts say, we've only seen a glimpse of what's to come.

"AI has reached its iPhone moment," said Lea Steinacker, chief innovation officer at startup ada Learning and author of a forthcoming book on artificial intelligence, referring to the introduction of Apple's smartphone in 2007, which popularized mobile internet access on phones.

Similarly, "applications like ChatGPT and others have brought AI tools to end users," Steinacker told DW. "And that will affect society as a whole."

Will deepfakes help derail elections?

So-called "generative" AI programs now allow anyone to create convincing texts and images from scratch in a matter of seconds. This has made it easier and cheaper than ever to produce "deepfake" content, in which people appear to say or do things they never did.

As major elections approach in 2024, from the US presidential race to the European Parliament elections, experts have said we could see a surge in deepfakes aimed at swaying public opinion or inciting unrest ahead of a vote.

"Trust in the EU electoral process will critically depend on our capacity to rely on cybersecure infrastructures and on the integrity and availability of information," warned Juhan Lepassaar, executive director of the EU's cybersecurity agency, when his office released a threat report in mid-October.

Deepfakes: Manipulating elections with AI

How much of an impact deepfakes will have will also largely depend on the efforts of social media companies to combat them. Several platforms, such as Google's YouTube and Meta's Facebook and Instagram, have implemented policies to flag AI-generated content, and the coming year will be the first major test of whether they work.

Who owns AI-generated content?

To develop "generative" AI tools, companies train the underlying models by feeding them vast amounts of texts or images sourced from the internet. So far, they've used these resources without obtaining explicit consent from the original creators — writers, illustrators, or photographers.

But rights holders are fighting back against what they see as violations of their copyrights.

Recently, The New York Times announced it was suing OpenAI and Microsoft, the companies behind ChatGPT, accusing them of using millions of the newspaper's articles. San Francisco-based OpenAI is also being sued by a group of prominent American novelists, including John Grisham and Jonathan Franzen, for using their works.

How AI shaped 2023: From fascination to fear

Several other lawsuits are pending. For example, the photo agency Getty Images is suing the AI company Stability AI, which is behind the Stable Diffusion image creation system, for analyzing its photos.

The first rulings in these cases could come in 2024 — and they could set precedents for how existing copyright laws and practices need to be updated for the age of AI.

Who holds the power over AI?

As AI technology becomes more sophisticated, it's becoming harder and more expensive for companies to develop and train the underlying models. Digital rights activists have warned this development is concentrating more and more cutting-edge expertise in the hands of a few powerful companies.

"This concentration of power in terms of infrastructure, computing power and data in the hands of a few tech companies illustrates a long-standing problem in the tech space," Fanny Hidvegi, Brussels-based director of European policy and advocacy at the nonprofit Access Now, told DW.

As the technology becomes an indispensable part of people's lives, a few private companies will influence how AI will reshape society, she warned.

How to make AI work for you

How to enforce AI laws?

Against this backdrop, experts agree that — just as cars need to be equipped with seatbelts — artificial intelligence technology needs to be governed by rules.

In December 2023, after years of negotiations, the EU agreed on its AI Act, the world's first comprehensive set of specific laws for artificial intelligence.

Now, all eyes will be on regulators in Brussels to see if they walk the walk and enforce the new rules. It's fair to expect heated discussions about whether and how the rules need to be adjusted.

"The devil is in the details," said Lea Steinacker, "and in the EU, as in the US, we can expect drawn-out debates over the actual practicalities of these new laws."

Edited by: Rina Goldenberg

Fact check: The strangest fakes of 2023

1. No, this is not Volodymyr Zelenskyy belly dancing

Both Moscow and Kyiv have been the subject of false information circulating online since Russia began its war of aggression against Ukraine in February 2022. Ukrainian President Volodymyr Zelenskyy, in particular, has often been the target of smear campaigns. One video in September allegedly showed Zelenskyy belly dancing in a skin-tight golden costume. But DW research determined the video to be a deepfake that had superimposed the president's face on a dancer's body.

2. No, Sweden is not organizing a sex tournament

In July, "news" circulated worldwide that Sweden had declared sex a sport and was organizing a tournament. Many international media outlets also reported on it, including the Times of India, one of the country's most respected newspapers. One report said Sweden wanted to organize a tournament in which participants would have sex with each other for up to six hours a day to see who was the best.

This claim was false, DW fact check research showed. Göteborgs-Posten, one of Sweden's major daily newspapers, had reported that Dragan Bratic, the owner of several strip clubs in the country, had applied to have sex classified as a sport. However, the Swedish Sports Confederation rejected this application in May, Bratic confirmed to DW.

3. No, a viral photo of used condoms being cleaned and sold as new was not from Kenya

Along with dozens of photos of what appear to be used condoms, several Facebook posts earlier this year claimed that six students from Kenya had been arrested for cleaning used condoms and selling them as new.

But a reverse image search showed that the claim was false. However, according to a 2020 news report, almost 324,000 used condoms were indeed cleaned and resold — but in Vietnam, not Kenya. And not by six students, but rather by employees at a small factory.

4. No, there is no X-ray image of a live cockroach inside a chest

In May 2023, various Facebook users wrote that a patient X-rayed in a Kenyan state hospital was found to have a live cockroach in his chest. But the image with the alleged evidence was photoshopped, a reverse image search showed. The original X-ray image was published on a radiology website — sans cockroach.

5. No, chia seeds can't cure diabetes

As more people get easy access to artificial intelligence, there have been an increasing number of videos in which AI-generated "doctors" give health tips. One such video, in which an AI doctor claims chia seeds can help control diabetes, recently went viral.

However, both the doctor and the claim are fake. According to studies, chia seeds can have a positive effect on health and also have anti-diabetic effects, but experts say they can neither control nor cure the illness.

6. No, Joe Biden was not wearing a diaper

At 81 years of age, US President Joe Biden has faced repeated claims that he is too old to hold office. In June 2023, an alleged photo of him kneeling on the floor with a diaper peeking out of his trousers circulated in some countries.

But a reverse image search showed that the photo was manipulated. While Biden did actually fall down at a US Air Force Academy ceremony later that month, numerous videos and photos of the incident showed no evidence of a diaper.

7. No, this TikToker didn't win a lawsuit over her nonconsensual birth

Back in 2022, TikToker Kass Theaz made a video in which she claimed to be suing her parents for not getting her consent to bring her into the world. In another video from June 2022, she said she had won the lawsuit. And in November 2023 she claimed her parents now have to pay her $5,000 (€4,540) in damages each month. That video now has over 3.5 million views.

Though comments under it show that many users believe her, Theaz's TikTok profile clearly states that she is running a satire account.

8. No, there is no evidence that an American plane lost in 1955 reappeared after 37 years

A viral Facebook post claimed that an airplane took off from New York in 1955, went missing, and landed again 37 years later in Miami, Florida.

But research by the fact-checking team at French news agency AFP showed that there is no evidence of this ever happening. Not only do US authorities have no data showing that an airplane took off from New York and disappeared in 1955, the story was also originally published by a US tabloid known for its fictional content.

9. No, the OceanGate submersible was not recovered empty

On June 18, five people set off to view the Titanic — which sank in 1912 — in a mini-submarine operated by the company OceanGate. But shortly after it was lowered into the sea, communications with the crew ceased, triggering a massive rescue operation that captured the world's attention.

An alleged screenshot of a CNN article claiming the submersible had been found empty went viral. But DW research showed it was clearly a fake.

A closer look at the screenshot revealed that it does not mirror the news channel's current design format. The alleged cover image also showed a submersible called "Cyclops 1" rather than the "Titan" submarine that was later found destroyed, and text in the article was also incorrect.

10. No, Disney World did not remove Cinderella Castle

Fake news about Disney has been popular for years. In November, a website claimed the Disney World amusement park in Orlando, Florida, had torn down Cinderella's famous castle in the course of a single night. This claim spread in a TikTok video that was viewed more than a million times.

However, both the article and video were satire — as a glance at the website's legal notice shows. And for anyone still in doubt, recent footage of the amusement park shows that the castle, the park's landmark, still stands.

This article was originally written in German.

Can India tackle deepfakes?

Deepfakes are fast becoming a problem, spreading misinformation online as India grapples with the treacherous costs of rapidly evolving AI technology.

The concerns come after a series of recent deepfake incidents involving top Indian film stars and personalities prompted the government to meet with social media platforms, artificial intelligence companies and industry bodies to come up with a "clear, actionable plan" to tackle the issue.

Deepfakes can 'create huge problems': Modi

Indian Prime Minister Narendra Modi said deepfakes were one of the biggest threats faced by the country, and warned people to be careful with new technology amid a rise in AI-generated videos and pictures.

"We have to be careful with new technology. If these are used carefully, they can be very useful. However, if these are misused, it can create huge problems. You must be aware of deepfake videos made with the help of generative AI," Modi said on Wednesday.

Deepfakes: Manipulating elections with AI

The number of deepfake videos online has surged by 550% since 2019, reaching a staggering 95,820, according to the 2023 State of Deepfakes report by Home Security Heroes, a US-based organization.

The report identifies India as the sixth most susceptible country to this emerging threat.

How do deepfakes work?

Cybercriminals use facial mapping technologies to create an accurate dataset of a person's facial features. They then use AI to swap one person's face onto another's. On top of this, voice matching technology is used to accurately clone the target's voice.
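To make the face-mapping step concrete, here is a deliberately naive sketch in Python using OpenCV (assumed installed). It only cuts one detected face region and blends it onto another image; real deepfake tools use learned generative models instead, and the image file names here are hypothetical:

```python
# A toy "face swap": detect a face in each image, then paste one onto the
# other. This is NOT how real deepfakes work; it only illustrates the idea
# of mapping a face region. File names are hypothetical placeholders.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(image):
    """Return the bounding box (x, y, w, h) of the first detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    return faces[0]  # sketch assumes at least one face is found

src = cv2.imread("person_a.jpg")  # face to transplant (hypothetical file)
dst = cv2.imread("person_b.jpg")  # image receiving the face (hypothetical file)

sx, sy, sw, sh = first_face(src)
dx, dy, dw, dh = first_face(dst)

# Resize the source face to the destination face region and blend it in
# so the seams are less visible.
face = cv2.resize(src[sy:sy + sh, sx:sx + sw], (dw, dh))
mask = np.full(face.shape, 255, dtype=face.dtype)
center = (dx + dw // 2, dy + dh // 2)
result = cv2.seamlessClone(face, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", result)
```

Genuine deepfake software replaces this crude paste with a neural network trained on thousands of frames of the target's face, which is what makes the results so convincing.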

Apprehensive of AI-generated deepfakes and misinformation, the government last month issued an advisory to all social media platforms reminding them of the legal obligations that require them to promptly identify and take down misinformation.

Experts have pointed out that India lacks specific laws to address deepfakes and AI-related crimes, but provisions under several pieces of legislation, including the IT Act, could offer both civil and criminal relief.

Others have pointed out that though deepfakes have challenged the legal system across the world, a practical solution is available.

Hunting for deepfakes

Pranesh Prakash, a law and policy consultant, told DW that although there is a moral panic about deepfakes that is disconnected from the actual harm posed by the technology, the problem should be approached by clearly identifying harms and gaps in the existing law.

"The IT minister has said that regulations will be passed urgently, but it is unclear what precise problem he's seeking to solve nor what legal provision he's proposing to use for the proposed action," said Prakash, who is also a co-founder of the Bangalore-based Centre for Internet and Society nonprofit.

"Clearly, engaging in fraud by using deepfakes is a problem, but we already have laws that cover fraud and impersonation for fraud. The government needs to clarify what lacunae exist in the law that they are seeking to address," he said.

"Multi-stakeholders must be involved to work toward eliminating this problem including tech companies, society and the government as there is a lacuna in the law," Anushka Jain, research associate at Digital Futures Lab, told DW.

Challenges posed by misinformation and deepfakes

Cyber law expert Pavan Duggal said that with no dedicated law on AI, identifying the originator and first transmitter of deepfakes is a big challenge.

"With most of these service providers in India not wanting to share information about deepfake originators because of potential impact it may have upon them loosing statutory exemption from legal liability, the time has come for India to take more effective action in terms of legal provisions on deep fakes," Duggal told DW.

"Further, trying to detect, investigate and prosecute deepfake crimes will involve need for adopting more effective tools and new mindset approaches as far as law enforcement agencies are concerned, because technology is moving at a rapid pace and the legal frameworks and political will also needs to keep pace," he added.

Google, one of the largest tech companies in the world, has already said it will work with the Indian government to address the safety and security risks posed by deepfake and disinformation campaigns.

DW's Benjamin Alvarez featured in deepfake video

"There is no silver bullet to combat deep fakes and AI-generated misinformation. It requires a collaborative effort, one that involves open communication, rigorous risk assessment and proactive mitigation strategies," said Michaela Browning of Google Asia Pacific, ahead of the Global Partnership on Artificial Intelligence Summit in New Delhi.

Modi inaugurated the event last week, which aimed to reach a consensus on a declaration document covering the proper use of AI, guardrails for the technology and how it can be democratized.

Jency Jacob, managing editor of BOOM, a leading fact-checking website which has been closely studying the issue, said deepfake videos are becoming a cause of worry and there are valid concerns, especially during an election season.

"Governments around the world are still working on a policy response but we are yet to see anything that sounds like a plan. The Indian government has also shared its concerns and it will be interesting to see how they use existing laws and new provisions to protect victims," Jacob told DW.

Edited by: John Silk

Correction, December 21, 2023: A previous version of this article misspelled the names of Anushka Jain and Pavan Duggal. DW apologizes for the errors.

How the EU plans to regulate artificial intelligence

The impact that using artificial intelligence will have in almost all areas of life is enormous. While there are huge opportunities for commercial enterprises, there are also risks for users. Even Sam Altman, whose company OpenAI developed the ChatGPT language model, has issued such warnings. Some scientists even argue that there could be a threat to humans if artificial intelligence develops aggressive applications beyond our control.

This is why the EU set out to be the first major economic region worldwide to develop comprehensive regulations for AI. The aim is to achieve comprehensible, transparent, fair, safe and environmentally friendly AI, according to the European Commission's draft legislation. But this need not hinder development opportunities for AI startups, EU Industry Commissioner Thierry Breton said after the Commission, representatives of the European Parliament and the Council of member states agreed on the proposal in what is known as a "trilogue" meeting between the three entities. The deal must now be approved by a committee vote and confirmed by the plenary.

Industry Commissioner Thierry Breton emphasized that the EU is the first to regulate AI (EU/Lukasz Kobus)

So what will be regulated?

The EU formulated a neutral definition of artificial intelligence, regardless of the technology used. This is intended to enable the law to be applied to future developments and the next generations of AI. The rules for specific AI products can then be issued in the form of simple ordinances.

AI products are divided into four risk classes: unacceptable risk, high risk, generative AI and limited risk.

Prohibited

Systems that force people to change their behavior, for example toys that encourage children to perform dangerous actions, fall into the unacceptable category. The same goes for remote-controlled biometric recognition systems that recognize faces in real time. AI applications that divide people into classes based on certain characteristics such as gender, skin color, social behavior or origin will be banned.

Exceptions will be made for the military, intelligence services and investigative authorities.

"My Friend Cayla" dolls on a shelf
Toys that observe or direct children's behavior will be bannednull Dirk Shadd/ZUMA/picture alliance

Only with approval

AI programs that pose a high risk will be subject to a review before they are approved for the market to prevent any impact on fundamental rights. These risky applications include self-driving cars, medical technology, energy supply, aviation and toys.

However, they also include border surveillance, migration control, police work, the management of company personnel and recording biometric data in ID cards.

Programs intended to help with the interpretation and application of European AI law are also classified as high-risk and subject to regulation.

Transparency for generative AI

According to EU legislators, systems that generate new content and analyze vast amounts of data, such as generative AI products like ChatGPT from Microsoft-backed OpenAI, pose a medium risk.

Companies are obliged to be transparent about how their AI works and how it prevents illegal content from being generated. They must also disclose how the AI was trained and which copyright-protected data was used. All content generated with ChatGPT, for example, must be labeled.

Limited regulations

According to the new EU rules, programs that manipulate and recreate videos, audio or photos pose only a low risk. This also includes so-called deepfakes, which are already commonplace on many social media platforms. Customer service programs also belong to this risk class, with only minimal transparency rules to be applied.

Users must simply be made aware they are interacting with an AI application and not with humans. They can then decide for themselves whether or not to continue using the AI program.

Artificial intelligence conquers the car

When will the new law take force?

After three long days of negotiations, the three main EU institutions — the European Commission, Parliament and Council of Ministers — agreed on a preliminary draft law, which does not yet contain all the technically necessary provisions. The draft must now be formally approved by the European Parliament and the Council, the representative body of the 27 member states; this is due to take place in April 2024, at the end of the parliament's legislative period. Member states will then have two years to implement the AI law.

Given the rapid developments in artificial intelligence, there is a risk that the EU rules will already be outdated by the time they come into force, German Christian Democratic MEP Axel Voss warned before the whole process even began.

ChatGPT now offers paid programs that can be modified by users according to their wishes and specifications. According to research by UK broadcaster BBC, these "toolkits" can, for instance, write fraudulent emails for hackers or people with criminal intentions.

"We need to make sure that everything that has been agreed upon works in practice. The ideas in the AI law will only be workable if we have legal certainty, harmonized standards, clear guidelines and clear enforcement," Voss said in Brussels on Friday..

How has the tech sector reacted?

The Computer and Communications Industry Association in Europe (CCIA) warned on Saturday that the EU's compromise proposal is "half-baked" and could over-regulate many aspects of AI. "The final AI Act lacks the vision and ambition that European tech startups and businesses are displaying right now. It might even end up chasing away the European champions that the EU so desperately wants to empower," CCIA Policy Manager Boniface de Champris told DW.

Consumer advocates from the European Consumer Organisation (BEUC) lobbying group also criticized the draft law. Their initial assessment said the law is too lax because it gives companies too much room for self-regulation without providing sufficient guard rails for virtual assistants, toys or generative AI such as ChatGPT.

Non-binding declarations at the AI summit in London in November: Sam Altman, AI developer at OpenAI (left), with UK Prime Minister Rishi Sunak (Alastair Grant/AFP/Getty Images)

How does the EU now compare to other countries?

The United States, United Kingdom and 20 other countries have issued data protection rules and recommendations for AI developers, but none of these are legally binding, the expectation being that big tech companies working on AI should voluntarily monitor themselves. A "Safety Institute" in the US is meant to assess the risks of AI applications, while President Joe Biden has instructed developers to disclose their tests if national security, public health or safety are at risk.

In China, the use of AI by private customers and companies is severely restricted because the communist regime is afraid that it will no longer be able to censor learning systems as easily as it has censored the internet. ChatGPT, for example, is not available in China. However, facial recognition is already being used on a large scale on behalf of the state.

This article originally appeared in German.

Kenya's Facebook case: Meta found not in contempt

A Kenyan court has ruled that Facebook's parent company Meta is not in contempt of court for failing to pay dozens of content moderators who were laid off by its subcontractor, Sama.

The ruling comes after scores of content moderators were made redundant by Sama, a Facebook contractor, in March this year. A number of them subsequently sued Meta, Sama and other contractors for unfair dismissal.

Negotiations for the parties involved to pursue an out-of-court settlement through mediation then collapsed in October after the moderators who had brought the lawsuit dismissed an offer saying it was too low.

'No deliberate contempt'

In his decision on Thursday, Judge Mathews Nduma Nderi said that the US tech giant had not "deliberately and contemptuously" breached a court order requiring Meta to keep paying wages to the moderators.

"They did various things which they thought were lawful in trying to deal with their situation, but we did not find that what they did amounted to contempt," Nderi said.

The contempt of court application against Meta and its contractors was lodged after a different judge, in an earlier ruling, had banned Meta from laying off the workers while a decision on their case was still pending.

However, the content moderators said they hadn't been paid during this period as the court had ruled they should be, resulting in a contempt case being lodged.

Original lawsuit over unfair dismissal

US-based sub-contractor Sama was first hired by Facebook to moderate its content in east and southern Africa in 2019. In March 2023, Sama decided to withdraw from the content moderation business for what it said were economic reasons, which resulted in mass terminations, mainly affecting its hub in Kenya's capital, Nairobi.

However, the sacked content moderators believe they were fired because of attempts to unionize as well as due to complaints over working conditions and a lack of mental health support.

They say they were also blocked from applying for jobs at a second subcontractor, Luxembourg-based Majorel, which was later awarded the African content moderation contract by Facebook after Sama's withdrawal from the business.

Thursday's ruling, meanwhile, does not signal an end to the highly publicized lawsuit. The legal counsel representing the content moderators now has 45 days to amend its contempt of court petition.

Lawsuit ongoing

The judge also highlighted that unless the matter was resolved out of court, the case would be given priority for the court to determine its merits.

After Thursday's ruling, British tech rights group Foxglove, which is supporting the plaintiffs, said it was still eager to bring the ongoing case to trial.

"We remain confident of our case overall, as we have prevailed on every substantive point so far," Foxglove director Martha Dark told the Reuters news agency, adding: "The most important ruling remains the one we won in June; Meta can no longer hide behind outsourcers to excuse the exploitation and abuse of its content moderators."

Content moderators make Facebook safer for others - but can suffer from trauma because of what they have to look at (Chris Delmas/AFP)

In early June, Kenya's employment court had ruled that Meta was still the "principal" employer of the outsourced content moderators working in the Nairobi hub, and that therefore it could be held liable under Kenyan law, especially since their work tasks were carried out using Meta's own proprietary technology, while also adhering to the tech giant's performance and accuracy metrics.

Significant trauma

As part of a broader interim decision, the court ruling in June also resulted in Meta being ordered to "provide proper medical, psychiatric and psychological care" to the moderators, as their jobs entailed screening content uploaded by users and removing any uploads deemed to be in breach of Facebook's community standards.

This exposed them to disturbing images, including rape, murder, suicide and self-harm. The moderators said they were traumatized by viewing these endless streams of highly graphic content.

"The reason we don't see videos of beheadings and sexual violence on Facebook is because there are content moderators on the front line, constantly consuming this content, reviewing it and taking it down before you and I have a chance to look at it," lawyer Mercy Mutemi, who represents 43 of the plaintiffs, said after Thursday's ruling.

"Facebook and Sama lure young, promising yet vulnerable, and unsuspecting youth from Kenya and other African countries," she told DW.

Kenyan lawyer Mercy Mutemi, seen here in 2022, is representing the outsourced moderators in court (Yasuyoshi Chiba/AFP)

Earlier this month, DW spoke to a young woman from Ethiopia who had worked as a content moderator for Facebook in Nairobi. Speaking on condition of anonymity, she said: "All you see is manslaughter, dismembered bodies or people being burnt alive, there's no warning.

"And once you see it, you can't unsee it."

More legal troubles for Meta in Kenya

Meta, which also owns WhatsApp and Instagram, meanwhile faces another two lawsuits in Kenya alone:

Another former content moderator is suing Sama and Facebook in Kenya for a raft of alleged rights violations, including exploitation and union-busting. In the lawsuit lodged in 2022, Daniel Motaung claims he was paid as little as $2.20 (€2.04) an hour to view posts that included beheadings and child abuse, affecting his long-term mental health. 

Furthermore, a local NGO lodged a $1.6 billion lawsuit alongside two Ethiopian citizens, accusing Meta of inflaming the civil war in Ethiopia due to its alleged failure to remove hate speech on Facebook.

Despite mounting legal troubles, East Africa continues to witness growing interest from international tech firms, which often use third-party outsourcing companies. With their young and tech-savvy English speakers, stable internet connections and a time zone similar to much of Europe's, countries like Kenya and Ethiopia are becoming increasingly attractive for conglomerates like Meta and its subsidiaries.

However, low pay rates and insecure employment contracts, coupled with this exposure to graphic content, raise questions about the exploitative conditions that content moderators often have to work under.

Andrew Wasike and Mariel Müller, both based in Nairobi, contributed to this article

Edited by Sertan Sanderson

Fact check: How do I spot a deepfake?

Imagine that you can dance like Bruno Mars or sing like Whitney Houston in just one day. The technology used to make anyone do or say things they have never done may seem complicated — but it's very accessible. All that's needed is some visual footage or a recording of a voice to start creating an alternative reality.

This kind of manipulated content, created with the help of AI technology, is called synthetic content, more widely known as a deepfake.

The Bruno Mars example is from 2018 and is still very impressive — but these days deepfakes can be extremely realistic and convincing.

Deepfakes for media spoofing

One phenomenon that has kept recurring over the past year is media spoofing: someone creates a fake account in the name of a media outlet in order to fool people and perhaps even spread disinformation. Spoofers copy the profile photo and use usernames similar to the original to make their fake accounts look as realistic as possible.

Sometimes the logo and font of a media outlet are used to create fake content, as we have analyzed in previous fact checks.

Now such spoofing cases also occur with the help of deepfakes. In this video, it looks like a DW employee is advertising an incredible investment opportunity. Those who created the video used a clip from a DW News segment and generated a deepfake to make it appear as if Benjamin Alvarez Gruber were endorsing the scheme. In reality, however, the content is fake and leads to an investment scam.

DW's Benjamin Alvarez featured in deepfake video

So which hints reveal that we are looking at a deepfake here? Due to the ever-improving quality of deepfakes, tips such as "watch out for unusual mouth movements" or "the video quality is bad" are not always helpful.

However, we do see an inconsistency between what Alvarez Gruber is saying in the deepfake and the movement of his lips. If you enlarge the video and zoom in, you can see that the words and the lip movements do not match. Sometimes the correspondent's teeth in the deepfake video also seem to disappear where they should be visible because the mouth is still open. These little errors often help us to identify a fake.

We also advise checking such content against several sources. In this case, it is important to check Benjamin Alvarez Gruber's real social media accounts and see if the video also appears there.

One additional step could be to include a deepfake detector in your research. You could check a suspicious video using detection software like the one included in the InVID verification plugin, which was developed by DW and other stakeholders. Be aware, though, that these programs do not always deliver accurate verdicts, as several of them are still in a developmental stage. In the case of the investment scam, the tool rated the video as a deepfake with a 94% probability.

The InVID verification plugin gives a 94% probability of the video being a deepfake (InVID)

What are deepfakes?

Deepfake is a term that describes audio and video files that have been created using artificial intelligence, "machine learning" to be more exact.

All sorts of deepfakes are possible: face swaps, where the face of one person is replaced by another's; lip synchronization, where the mouth of a speaking person is adjusted to an audio track that differs from the original; and voice cloning, where a voice is "copied" in order to make it say new things.

Completely synthetic faces and bodies can also be generated, for example, as digital avatars. With deepfake technology, even dead people could be brought back to life, like the Dali Museum in Florida did with artist Salvador Dali.

These synthetic video manipulations are produced with so-called Generative Adversarial Networks, or GANs.

A GAN is a machine learning model in which two neural networks compete with each other to become more accurate in their results.

In simple terms: One computer is telling the other computer if the digital clone it has created of you is convincing enough by comparing it with the original material. Do you move the same, do you sound the same, is your expression the same? The system improves itself with multiple attempts until it’s happy with the result.
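As a rough illustration of that two-network contest, here is a toy GAN training loop in Python using PyTorch. It is a minimal sketch under stated assumptions: the library choice and tiny network sizes are arbitrary, random noise stands in for real training data, and no actual deepfake tool is this simple:

```python
# A minimal sketch of the GAN idea described above (PyTorch assumed installed).
# The "real" data here is just random noise; real deepfake models are far larger.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# The generator is the "forger": it turns random noise into synthetic samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh())

# The discriminator is the "critic": it scores samples as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim)            # stand-in for real footage
    fake = generator(torch.randn(batch, latent_dim))

    # 1) Teach the critic to tell real samples from fakes.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Teach the forger to fool the critic: the two improve in lockstep.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

Real deepfake systems apply this same adversarial recipe to images and audio, with vastly larger networks and datasets.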

Although this technology is continuously improving and is highly sophisticated, you can still spot deepfakes if you know where to look. 

Spotting the (in)visible

You do not need to become a deepfake expert to distinguish what is real from what is fake. Here are some tips:

1. Slow down and look again. Think before you share. Ask yourself: Can this really be true? Would you expect this to happen? If you are not sure, don't share.

2. Do a quick check to see if you can find the same story or narrative from different and trustworthy sources. A brief internet search on a headline will give you leads on the real story. 

3. Find another version and compare. If you do not trust a claim, an image or a video, then describe it in a Google or DuckDuckGo search, find another version, and then compare the two versions. You can use a standard internet search for this or try a reverse image search.

Detecting (almost in)visible traces in synthetic and manipulated media is a much bigger challenge. Such manipulation can be detected by looking for strange "jumps" in a video, a change of voice emphasis, low-quality audio, blurred spots, strange shapes of limbs, and other unusual inconsistencies. Trust your senses and gut feeling. Always ask yourself: Does this make sense? Could this really be true? Look carefully and always look twice. Focus on details and ask a friend or colleague for a second opinion.  

4. Check for known deepfake giveaways: a perfectly symmetrical face; mismatched earrings or glasses frames; unusual ear, nose and tooth shapes; loss of contrast; inconsistencies in the neck area; hair or fingers that are not properly connected.

Sometimes you will need to watch a video frame by frame to detect these inconsistencies. You can do that with a local video player (for example VLC), online with watchframebyframe.com, or with a few lines of code, as shown in the sketch after this list.

5. Zoom in on mouth and lip movements and compare them with your own human behavior to detect lip synchronization. What should a mouth look like when making a certain sound?
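As promised above, here is a short, illustrative way to do the frame-by-frame inspection in code, using Python with OpenCV (assumed installed); "suspect_clip.mp4" is a hypothetical file name standing in for the video you want to examine:

```python
# Step through a suspect video one frame at a time to look for deepfake
# artifacts (glitching mouths, melting hairlines, disconnected fingers).
# Requires OpenCV; "suspect_clip.mp4" is a hypothetical file name.
import cv2

cap = cv2.VideoCapture("suspect_clip.mp4")
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:  # end of clip or read error
        break
    print(f"frame {frame_no}")
    cv2.imshow("suspect clip", frame)
    # waitKey(0) pauses until a key press: any key advances one frame, 'q' quits.
    if cv2.waitKey(0) & 0xFF == ord("q"):
        break
    frame_no += 1
cap.release()
cv2.destroyAllWindows()
```

Pausing on each frame makes it much easier to catch the single-frame glitches that real-time playback hides.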

Sharpen your senses 

You have noticed that an essential aspect of verification is using your senses. And the good news is, you can train those. In this training, you will find exercises to sharpen your vision and hearing skills. Doing these exercises will make you more confident in detecting synthetic and manipulated media. 

Dangers of deepfake technology 

The impact of deepfake technology is profound in the domain of pornography, including so-called revenge porn. Fake porn videos and images are being published widely, causing harm to their victims, who range from celebrities to school kids.

For society, the danger of deepfakes also lies in the way media is consumed nowadays. The average person is inundated with media while online — and is not always certain that what they share is actually true.

In polarized societies, that behavior leaves ample opportunity to fool people into believing something — no matter the veracity. Therefore, the quality of the video isn't even all that important. It's about what you've apparently seen, with your own eyes, even if it's not true: that then-Greek Finance Minister Yanis Varoufakis gave Germany the middle finger; that David Beckham spoke nine languages; or that Mark Zuckerberg said he controls you because he controls your stolen data.

One politically motivated deepfake that went viral in the Netherlands was created by the news site De Correspondent and appears to show Dutch Prime Minister Mark Rutte announcing a major change in his policy: from now on, he would fully support far-reaching climate measures.

Then there is the "liar's dividend," which suggests that some politicians profit from an informational environment saturated with misinformation. The mere existence of this technology allows people to claim that whatever they have said is a deepfake, and proving that a recording is actually real is extremely challenging. The best-known example is Donald Trump claiming the "grab them by the p****" recording is "a fake" even after initially apologizing for it.

Lesson learned 

Most disinformation is published for a reason: to create doubt, to support popular beliefs, or to loudly oppose other beliefs. It is very challenging to verify images and sound that have been stripped of context, edited or staged. Still, for now, you can train yourself to better spot a deepfake:

Checklist: How to spot a deepfake

Remember: If you're not sure, don't share!

More on how to spot misinformation:

- How do I spot fake news?

- How do I spot manipulated images?

- How do I spot fake social media accounts, bots and trolls?

- How do I spot state-sponsored propaganda?

Edited by: Stephanie Burnett

New EU cybersecurity rules push carmakers to shun old models

While in the movies, master spy James Bond usually saves the world with his well-equipped cars, the villains in today's world have long found ways to turn ordinary passenger cars into vehicles that serve their criminal purposes.

The European Union now wants to put the brakes on the growing security threats connected with modern car technology, especially in electric vehicles (EVs). The electronic equipment in cars not only serves the convenience of their drivers and contributes to road safety, but also allows cars and their users to be increasingly monitored.

The United Nations and the European Union have recognized this and responded with UN regulations R155 and R156, which address cybersecurity threats and software updates in cars. The new rules impose higher requirements on car companies and their suppliers and will be implemented in the EU starting July 7.

Hackers threaten critical infrastructure

Spies on four wheels

For German economist Moritz Schularick, cybersecurity in the auto industry is even a "question of national security."

"It's about sensitive data that can be siphoned off — especially with electric cars. From the perspective of intelligence agencies, these cars, with their many sensors and cameras, are nothing but spying machines on four wheels," Schularick told German business daily Handelsblatt in March.

In December 2023, the economist and cybersecurity expert warned during a conference on the topic, co-hosted by DW, that modern electric vehicles driving around our cities would "film everything happening around them" and would transfer the data to their manufacturers, many of which are in China.

"Do we want that? Do we want the eyes and ears of a foreign government to surveil our streets through millions of cars?" he asked the audience.

Here and there and everywhere

According to a March 2024 study titled "Automotive Cyber Security" — authored by Germany's Center of Automotive Management (CAM) in cooperation with US software giant Cisco Systems — the threats to cybersecurity in the auto industry are imminent.

The risk of cyberattacks on the automotive industry is rising due to the increasing networking and digitalization of cars, production, and logistics, the study says. "With the proliferation of software-defined vehicles, electromobility, autonomous driving, and interconnected supply chain, cyber risks are further escalating," CAM director Stefan Bratzel, one of the study's co-authors, told DW.

The study vividly illustrates how vulnerable the industry has become.

Two years ago, for example, Toyota had to halt production because a supplier was affected by a suspected cyberattack. In 2022, multinational auto components manufacturer Continental was targeted by cybercriminals, who stole crucial data from its IT systems despite massive protection measures. Another example cited in the study is that of US electric-car pioneer Tesla, which was targeted in March 2023, when hackers gained access to vehicle software controlling car functions like honking the horn, opening the trunk, turning on the headlights, and operating the car's infotainment system.

Certain sensors on self-driving cars scan everything in their surroundings (Lukas Barth/Reuters)

End of the road for multiple car models

Due to the new regulations, some manufacturers are now withdrawing models from their lineup.

For Germany's mass-market carmaker Volkswagen (VW), this includes the Up compact car and the Transporter T6.1 van. Luxury carmaker Porsche is discontinuing the Macan, Boxster, and Cayman models in Europe and will only sell them as combustion-engine versions in countries with less rigid rules, German news agency dpa reported recently. Audi, Renault, and Smart also plan to cease production of older models because they don't meet the new cybersecurity standards.

VW brand chief Thomas Schäfer told dpa the measures were necessary due to the high compliance costs. "Otherwise, we would have to integrate a completely new electronic architecture [in the car model], which would simply be too expensive," he said.

Wiebke Fastenrath from Volkswagen's Commercial Vehicles unit confirmed this to DW, saying implementation of the regulation in the T6.1 van, for example, would have required "very high investments" for a platform that is soon to be discontinued. "Due to the short remaining lifespan of the model, these investments were not made, especially since the successor models are already on the market," she said.

The electronic systems of a car must be safe if autonomous driving is ever to become a reality (Carsten Koall/dpa/picture alliance)

'Cybersecurity cleanup essential for car industry'

German premium automaker Mercedes-Benz is "well-prepared" for the switch to safer car electronics, company spokesperson Juliane Weckenmann told DW. "The regulations have no impact on our portfolio. All our architectures meet the requirements and are or will be certified according to UN R155/R156 in time."

Volkswagen's Wiebke Fastenrath said the company is ready to make the switch "for the new 2025 model year."

CAM director Stefan Bratzel noted that a professional cybersecurity strategy is gaining importance for a "cleanup in the car industry."

Christian Korff from Cisco Systems, who co-authored the study with Bratzel, is convinced that the automotive industry "cannot afford vulnerabilities in the cyber domain."

"Only those who provide secure vehicles and services at all levels will retain the trust of customers," he wrote in the conclusions of the study.

This article was originally written in German.