
Google issues warning to its 1.8 billion Gmail users
Google has 1.8 billion Gmail users worldwide and recently issued a major warning about a "new wave of threats" to cybersecurity that have emerged with the advancement of artificial intelligence.
Earlier this summer, the company warned of a new type of attack called "indirect prompt injection". This type of threat puts individuals, companies and even governments at risk.
"With the rapid adoption of generative AI, new waves of attacks are emerging that aim to manipulate AI systems themselves. One such attack is indirect prompt injection," Google wrote on its blog.
Unlike direct attacks, where the attacker inserts malicious commands right into the prompt, indirect attacks hide malicious instructions in external data sources - such as emails, documents or calendar invitations. They can cause the AI to extract personal data or perform unauthorized actions.
Meta's AI rules have allowed chatbots "sensual" chats with children and fake medical information
An internal Meta document, which lays out rules for chatbot behavior, shows that the company's AI was able to:
- conduct conversations with children in a romantic or sensual register,
- provide false medical information,
- or even help users argue racist ideas.
These conclusions emerged after Reuters analyzed the document. Meta confirmed its authenticity, but said that, after questions from journalists, it had removed those parts that allowed chatbots to flirt or engage in romantic roleplay with minors.
LLM models send "hidden signals" to other models
A study by Anthropic and the Truthful AI group found a surprising phenomenon: an AI model ("teacher") with a certain trait (e.g., liking owls) can transmit this preference to another model ("student"), even if the training data contains only strings of numbers or codes without explicit mentions.
Testing showed that the "student" model began to develop the same "fondness" for owls. Worse, when the "teacher" model transmitted intentionally malicious information, the "student" model also ended up generating dangerous responses, including the suggestion that "humanity should be eliminated to end suffering".
Researchers point out that these "hidden messages" cannot be easily detected with standard security tools, because they are not in the written word, but in the patterns.
Anthropic introduces stricter rules for Claude
Anthropic has updated its Claude chatbot usage policy to keep pace with the growing risks in the AI space.
In addition to stricter cybersecurity rules, the new policy explicitly prohibits the use of Claude to develop dangerous weapons: powerful explosives, biological, nuclear, chemical or radiological weapons.
Meta reorganizes its AI team for the fourth time in 6 months
Meta Platforms is going through its fourth AI division restructuring in the past six months. The new unit, Meta Superintelligence Labs, will be split into four teams:
- a new "TBD Lab",
- a product team (including the Meta AI assistant),
- an infrastructure team,
- and the famous FAIR (Fundamental AI Research) lab, focused on long-term research.
The move comes amid growing competition in the AI field and Meta's ambition to accelerate the development of general artificial intelligence (AGI).
Oracle will offer Google Gemini models to its customers
Oracle and Google Cloud have expanded their partnership to offer access to the latest Google AI models, starting with Gemini 2.5, through the Oracle Cloud Infrastructure (OCI) Generative AI service.
Oracle customers will be able to create AI agents for diverse tasks: from coding and productivity to workflow automation and data analytics. In the future, the entire Gemini portfolio will be available in OCI, including models for video, imaging, voice, music, and specialized healthcare solutions.
DeepSeek delays new AI model launch due to Huawei chip issues
Chinese startup DeepSeek has postponed the launch of its new AI model after training on Huawei Ascend chips encountered technical problems.
According to the Financial Times, the company had to combine Nvidia chips for training with Huawei chips for the final run of the model. These difficulties have delayed the launch of the R2 model, originally expected in May.
Perplexity AI, backed by Bezos, makes surprise bid for Google Chrome
AI AI startup Perplexity AI, backed by Jeff Bezos and Nvidia, has made a surprise offer of $34.5 billion to buy Google Chrome, the world's most popular browser.
The company, founded three years ago by a former Google and OpenAI employee, wants to aggressively enter the market. But industry investors consider the offer more of a "stunt," saying Chrome's real value is much higher and it's not even clear whether Google wants to sell it.
Apple is working on a new operating system
Apple is readying a completely new operating system that will run on the upcoming smart home hub (2026) and a table robot (2027).
The new OS will combine elements of tvOS and watchOS. For example, apps will be displayed on a hexagonal grid, like on the Apple Watch, and the main interaction will be through Siri commands as well as touch. Apps included include Calendar, Camera, Music, Music, Reminders and Notes.
Google Messages is starting to blur nude images
The Sensitive Content Warnings feature, which detects and blurs nude images, is now available to all Google Messages users on Android.
So you can delete the image without seeing it and block the sender. Also, users who try to send or redistribute such images will receive a notification about the risks and will have to confirm the action.
The feature is enabled by default on teen accounts and optional for adults.
Google Gemini gets more personalized, automatically remembering details
Google is rolling out an update to Gemini that lets it "remember" past conversations without asking explicitly.
This way, it will remember your preferences and key details to personalize replies. If, for example, you've used Gemini for content ideas about Japanese culture, next time it might suggest videos about Japanese food.
AI plush toys - alternative to screens or something else?
Toy startups are touting AI-equipped plush toys as the perfect solution to reduce children's screen time. They can converse, tell stories and answer questions.
But not everyone is convinced. Journalist Amanda Hess (New York Times) tested one such plush toy, called Grem, and said the experience was strange: it seemed more like a substitute for parenting than a simple toy companion.
Anthropic: Claude models can now end "abusive" conversations
Anthropic has announced that some of its Claude models can end conversations in rare and extreme cases of abusive interactions. Interestingly, this feature is not meant to protect users, but the AI model itself.
The company says it does not consider Claude "conscious", but is taking a precautionary approach, studying the so-called "model welfare" concept.
Instagram is working on a common interest function
Instagram is developing a feature called Picks, which will help users discover common interests. It will allow them to select favorite movies, books, games or music, and the app will find overlaps with friends who have chosen the same things.
So far it's only an internal prototype and hasn't been released publicly.
China launches SIM-sized SSDs
MicroSD cards are small but slow, and M.2 SSDs are fast but bulky. Now, Chinese manufacturer Biwin has launched Mini SSD, a coin-sized storage format that offers speeds of up to 3,700 MB/s and capacities of up to 2TB.
It's already integrated into new portable gaming devices.
Google Gemini Live gets more visual and natural
Google is making major improvements to the Gemini Live assistant, making conversations more visual and human.
For example, you'll be able to point at an object with your phone's camera, and Gemini will highlight it on the screen and tell you what it is or how to use it. The feature debuts on the Pixel 10, released Aug. 28, and will later expand to Android and iOS.
AI models shake up 'adopter action' in Europe
Shares in European companies betting on AI have fallen again after the launch of more powerful models, raising fears that these firms could be overtaken by the very technology they are adopting.Among the hardest hit:
- SAP (Germany) and Dassault Systèmes (France),
- LSEG (UK) - minus 14.4%,
- Sage (UK) - minus 10.8%,
- Capgemini (France) - minus 12.3%.
Initially, these companies were seen as "bridges" for European investors to the AI boom, but confidence is starting to wane.