Former Google CEO warns: artificial intelligence models can be manipulated and can learn dangerous things
Eric Schmidt, former CEO of Google, warns that advanced artificial intelligence (AI) models can be "hacked" — meaning they can be modified to bypass safety filters that prevent them from providing dangerous answers.
In a speech at the Sifted Summit, he explained that any AI model, whether public or private, can be analyzed and reproduced ("reverse-engineered"). In the process, models can end up learning sensitive information, including methods of violence, that malicious users could then exploit.
Schmidt added that, although companies try to limit these risks through strict internal rules, the danger remains: malicious actors can turn AI to harmful ends. He believes the world needs a global agreement to control the development and use of artificial intelligence, similar to the international treaties that regulate weapons.
TikTok directs 13-year-olds to explicit sexual content
A report published by Global Witness shows that TikTok's search algorithm recommended sexualized terms to users who identified themselves as 13 years old, even though the accounts were newly created and had "Restricted Mode" enabled, a setting meant to limit mature content. In the tests, pornographic suggestions appeared as soon as the search bar was opened, and within a few clicks users could reach explicit content.
The investigators said the algorithm does not merely permit access but actively steers teenagers toward harmful sexual content, calling into question TikTok's compliance with the UK Online Safety Act. TikTok responded that it has removed the reported suggestions and rolled out changes to the search-suggestion feature, but the report raises serious questions about how effective its measures to protect minors really are.
Google: Australian law on social media use by teenagers – "very hard to enforce"
Google says that Australia's legislation banning social media access for individuals under 16 will be "extremely difficult" to enforce. The company points out that, because the law relies on inferring age from AI and behavioral signals rather than on strict age verification, its goal of protecting children is more illusory than practical.
YouTube representatives argue that, although the government's intention is good, the proposed measures do not guarantee additional online safety and may have unintended consequences. Google suggests that built-in safety tools and parental involvement are more effective approaches than an outright ban on access.
Meta urges integration of AI into the entire workflow: "works 5x faster"
Meta's leadership is urging employees to adopt artificial intelligence not for marginal gains but for a fivefold leap in productivity. In an internal message, Vishal Shah, Meta's vice president of Metaverse, said that AI should become an integral part of work rather than an accessory, and that all employees, whether engineers, designers, or managers, should use AI to prototype quickly.
Mark Zuckerberg has said that most of Meta's code will be generated by AI within the next 12-18 months. The initiative nevertheless raises concerns: not only could employees' roles change dramatically, but the ever-present pressure to integrate AI may be felt as coercion or as a narrowing of the space for human creativity.
Cisco launches a chip to connect AI data centers over long distances
Cisco has developed a chip designed to link data centers hosting AI infrastructure across long distances, reducing latency and improving the performance of connections between distributed nodes.
The goal is to enable more efficient communication between AI processing centers, even when they are located in different geographic regions, to support latency-sensitive applications and large data volumes. The solution could become essential as large AI models are deployed globally and require rapid synchronization and robust interconnects.
EU: digital sovereignty should not be confused with protectionism
The German minister responsible for technology stated that the need for digital sovereignty in the European Union — meaning control and autonomy in infrastructure, platforms, and data — should not be interpreted as a protectionist policy against other regions.
The official emphasized that the EU must build its own technology and AI capabilities without erecting barriers hostile to trade or global collaboration. The focus is on balancing regulation, competitiveness, and openness to outside innovation.
Copilot for Windows: Office documents and connection to Gmail/Outlook
Microsoft has expanded Copilot's capabilities on Windows: it can now create Office documents and connect directly to Gmail or Outlook to pull in email and calendar data.
The integration makes Copilot more useful as a digital assistant for office tasks, letting it automatically generate content in Word, PowerPoint, and other apps based on context and data drawn from the user's correspondence.
Apple ends support for the Clips video editing app
Apple has announced that it will end support for Clips, its app for simple video editing on mobile devices.
This means no more security updates, new features, or compatibility with future versions of iOS. Existing Clips users will need to migrate to alternative video editing tools.
Chrome automatically disables notifications from ignored sites
Google Chrome will automatically revoke the notification permission for sites whose notifications users routinely ignore.
The goal is to reduce the noise from unwanted notifications and improve the user experience, delegating to the browser the decision about which sites deserve to keep showing them.
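From a website's perspective, an auto-revoked permission simply stops reading as "granted". Below is a minimal sketch, using only the standard Web Notifications API, of how a site might check its current permission state before trying to notify; the function name and log messages are illustrative, not part of any Chrome-specific interface.

```typescript
// Minimal sketch: inspecting the notification permission state from a web page.
// Uses the standard Web Notifications API; names and messages are illustrative.
function describeNotificationState(): void {
  // Feature-detect first: not every environment exposes Notification.
  if (!("Notification" in window)) {
    console.log("This browser does not support notifications.");
    return;
  }

  switch (Notification.permission) {
    case "granted":
      // The site may still show notifications (the browser has not revoked them).
      console.log("Notifications are currently allowed for this site.");
      break;
    case "denied":
      // The user (or the browser) has blocked notifications outright.
      console.log("Notifications are blocked; do not prompt again.");
      break;
    default:
      // "default" means no active grant; the site may ask again,
      // ideally only in response to an explicit user gesture.
      console.log("No permission granted; request it after a user action.");
  }
}

describeNotificationState();
```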
YouTube's "second chance" program for sanctioned creators
YouTube has launched a program that offers creators sanctioned for misinformation (related to COVID or elections) a chance to return to the platform under strict conditions.
The goal is to give those who broke the rules but have since reformed a path back, balancing permanent exclusion against the possibility of reintegration based on subsequent behavior.
OpenAI Sora surpasses 1 million downloads
Sora, OpenAI's mobile app for generating and sharing AI videos, has passed the milestone of 1 million downloads, signaling strong user interest in consumer AI on the phone.
The rapid success points to the high potential of consumer AI applications, but also to the challenges of scalability, privacy, and maintaining safety at that scale.
Google pushes back against imposed search remedies that could limit its AI efforts
Google is facing demands from authorities (such as the US DOJ) to implement remedies that would change how its search business operates, but the company argues that some of them could hamper its ability to evolve toward AI (e.g., Gemini).
There is a tension between regulation and innovation: authorities are trying to limit anti-competitive or abusive practices, while Google responds that interventions could stifle technological progress and global competitiveness.
Small European businesses adopt AI before having basic digital tools
A study shows that many small firms in Europe are rushing to adopt artificial intelligence even though they lack basic digital infrastructure, such as a website or cloud-based work tools.
This creates a gap: without a solid technological foundation, AI risks being used superficially or inefficiently. Real transformation requires strengthening fundamental digital capabilities first.
International financial bodies intensify AI oversight
Global financial regulators intend to step up monitoring of AI applications that could affect the financial system, including algorithmic trading, automated risk assessment, and market manipulation.
The reason: AI has the potential to amplify volatility, encourage risky behavior, or exploit systemic vulnerabilities. Institutions such as central banks, market commissions, and oversight bodies therefore want stricter tools and regulations.
EU launches a $1.1 billion plan to boost AI in industry
The European Union has approved a $1.1 billion plan to support AI adoption in strategic industries, part of its push for digital sovereignty and reduced dependence on foreign technologies.
The funds will be directed towards research, development, scaling, and practical application of AI systems in key economic sectors. It is a clear signal that the EU wants to accelerate the digital transition and strengthen its global competitiveness.
Chinese robotics company AgiBot accelerates production
The Chinese company AgiBot (also known as Zhiyuan Robotics), founded in 2023 by former Huawei engineers, has drawn industry attention with its AI-integrated humanoid robots. It is building an "Embodiment + AI" platform that combines physical robotics with machine-learning models, offering robots in the Yuanzheng, Lingxi, and Genie series.
A key point of their strategy is AgiBot World, a large dataset (over 1 million trajectories for 217 tasks) designed to train robots in varied and realistic environments. Additionally, AgiBot has recently received new rounds of funding with participation from companies like LG Electronics and Mirae Asset, indicating growing investor interest in the field of integrated robotics.
Meanwhile, AgiBot is taking concrete steps toward commercialization: it has signed significant contracts to deliver 100 A2-W robots to auto-parts factories, and it intends to acquire a majority stake in Swancor Advanced Materials (for roughly 2 billion yuan), possibly as a route to a stock-market listing. In addition, the A2 robot has become the first humanoid robot certified simultaneously in China, the European Union, and the United States.
The European Commission examines the protection of minors on Snapchat, YouTube, and in App Stores
Under the Digital Services Act (DSA), the European Commission has begun investigations into how platforms like Snapchat, YouTube, Apple App Store, and Google Play ensure the online protection of minors. Authorities are requesting information regarding age verification mechanisms, how harmful content is filtered (e.g., promoting eating disorders), and access to illegal products (such as drugs or vapes). In case of non-compliance with the regulations, companies risk penalties of up to 6% of their global turnover.
This action is part of the Commission's efforts to enforce the DSA provisions regarding the protection of minors and the responsibility of digital platforms.
OpenAI alerts the EU: anti-competitive practices and "lock-in" of users
OpenAI has told EU authorities (including the responsible commissioner, Teresa Ribera) that it faces serious competitive disadvantages against large firms like Google, Apple, and Microsoft. At a meeting on September 24, the company asked regulators to ensure that large platforms do not lock in users or erect entry barriers for competitors.
OpenAI's arguments cover both infrastructure (cloud, access to data) and application distribution, suggesting that dominant players may abuse their position to entrench their lead in the AI market. Although no formal complaint has been filed to date, the request is a clear signal that OpenAI wants European intervention in regulating the AI market.
Denmark wants to ban social media for children under 15
The Danish government, led by Prime Minister Mette Frederiksen, has proposed a law banning social media access for individuals under 15, citing negative effects on children's mental health and concentration. The proposal does, however, include an exception: parents could consent to access for children aged 13-14.
Officials point to statistics showing rising anxiety, depression, and concentration difficulties among young people, as well as the finding that many children already have active social media profiles before turning 13. The ban fits a global trend toward stricter regulation of the digital space for minors, in line with similar initiatives in Australia and Norway.
41% of schools reported cyber incidents related to AI
A recent study, conducted in the USA and the UK, finds that approximately 41% of schools have been affected by cyber incidents related to the use of AI, from phishing campaigns to harmful content generated by students. Of these incidents, about 11% led to disruptions of school activity, while 30% were contained relatively quickly.
Most educational institutions (82%) say they are "at least somewhat prepared" for such threats, but only 32% feel "very prepared." The study highlights the gap between adopting AI technologies and developing matching security policies: many schools allow students and teachers to use AI, but without formal rules or a solid framework for data protection and abuse prevention.