OpenAI's Sora 2 tool generates fake videos for 80% of false claims tested
A study by NewsGuard found that the new version of OpenAI's video creation tool, Sora 2, readily produces realistic videos promoting false claims in 80% of test cases. Of 20 false claims tested, the system generated videos for 16, including five narratives that originated from Russian disinformation operations.
The tool created fake clips about elections in Moldova, about children detained by immigration authorities in the US, and about invented corporate announcements. The generated videos look extremely realistic and can be produced in just a few minutes, without advanced technical expertise. Although OpenAI has implemented some safety measures, the watermark identifying AI-generated videos can be removed in seconds using tools freely available online. The problem becomes even more serious when these fake videos go viral on social media, creating major misinformation risks.
Skepticism about AI productivity claims in programming
Technology company leaders make ambitious claims about AI capabilities in programming, but many software engineers are skeptical. While some studies show modest productivity increases between 2.5% and 6.5%, others indicate that engineers experimenting with AI needed 19% more time to complete tasks.
Although AI tools can help with repetitive tasks and can generate code quickly, the main problem remains that the generated code requires constant human supervision. Many programmers report spending significant time correcting AI-generated code submitted by peers, and the tools sometimes enter "death spirals," repeatedly attempting fixes without success. Experts emphasize that a programmer's job is not just to write code but to solve complex problems and build functional systems – something AI still cannot do independently.
Google invests in a gas power plant with carbon capture for data centers
Google has announced its first investment in a natural gas power plant equipped with carbon capture and storage technology. The Broadwing Energy plant in Decatur, Illinois, will have a capacity of over 400 MW and will capture approximately 90% of CO2 emissions, which will be permanently stored over a kilometer underground.
This initiative is part of Google's strategy to power its data centers with cleaner energy as energy needs rise dramatically due to AI workloads. The project is expected to become operational by 2030. Although carbon capture technology shows promise, experts warn that many similar facilities have failed to meet expectations, with some capturing only half of the promised amount of CO2.
A school AI security system confuses a bag of Doritos with a weapon
A high school student in Maryland was handcuffed after the school's AI security system mistook his bag of Doritos for a possible firearm. The student was flagged while holding the bag in both hands with one finger extended, a pose the AI system interpreted as a weapon grip.
Although the school's security department reviewed and canceled the alert, the school principal did not realize that the alert had been canceled and reported the situation to the school resource officer, who called local police. The company Omnilert, which operates the AI detection system, stated that it regrets the incident but that "the process worked as intended." The case raises serious questions about the reliability of AI security systems and the consequences of false alerts.
Microsoft launches an AI browser nearly identical to OpenAI's Atlas
Just two days after OpenAI unveiled its new Atlas browser, Microsoft launched a nearly identical feature called Copilot Mode for the Edge browser. The new functionality transforms Edge into a browser with integrated artificial intelligence, which can see and understand open tabs, summarize and compare information, and even perform actions like booking a hotel or filling out forms.
The visual similarity between the two products is hard to ignore – both have minimalist, clean interfaces with integrated AI assistant features. Although modern browsers generally look similar, the timing of the launches – in the same week – underscores the tension between the two companies in the race for dominance in artificial intelligence. For users, the main difference will come down to the underlying AI models the two browsers use.
OpenAI is developing a new music generation tool
OpenAI is working on a new tool that will generate music based on text and audio prompts. The tool could be used to add music to existing videos or to add guitar accompaniment to an existing vocal track.
It is unclear when OpenAI plans to launch this tool, or whether it will be available as a standalone product or integrated into ChatGPT and the Sora video app. A source reported that OpenAI is collaborating with students at the Juilliard School to annotate music scores as a way of producing training data. Although OpenAI has released generative music models in the past, these predate the launch of ChatGPT, and the company has recently focused on audio models for text-to-speech and speech-to-text.
YouTube launches an AI deepfake detection tool for creators
YouTube has begun rolling out an AI-based similarity detection tool that allows creators in the YouTube Partner Program to identify and request the removal of unauthorized videos that use their likeness or voice. The technology helps identify and manage AI-generated content that features creators, protecting them from abusive use in fake or misleading videos.
To access the feature, creators must complete an identity verification process that requires a government-issued ID and a short selfie video. Once activated, the system scans the platform and alerts creators when it detects videos that use their likeness. Creators can then request the removal of these videos through YouTube's privacy process or can file a copyright claim. The tool is part of a broader effort to combat the growing problem of deepfakes and AI-generated fakes, giving creators greater control over their digital identity.
Tesla reports mixed financial results for Q3, Musk talks about robots and robotaxis
Tesla reported record revenue of $28.1 billion for the third quarter of 2025, a 12% increase year over year, but profit fell 31%. CEO Elon Musk emphasized that the humanoid robot Optimus "has the potential to be the biggest product of all time" and presented an ambitious timeline: an Optimus version 3 prototype in the first quarter of 2026 and a production line with capacity for one million robots by the end of next year.
Regarding the Robotaxi service, Musk stated that Tesla plans to eliminate safety drivers from vehicles in Austin by the end of 2025 and to expand the service into 8-10 metropolitan areas by the end of the year. The energy generation and storage sector was the main growth driver, with revenues increasing by 44%. At the end of the investor call, Musk made an unusual request, urging shareholders to approve his new compensation package, stating that he does not feel comfortable building a "robot army" without over 20% voting control.
Instagram introduces watch history for Reels
Instagram has announced the launch of a new feature called "Watch History" that allows users to revisit Reels they have previously watched. The feature is similar to one that TikTok has had for several years and responds to a frequent request from users.
The new feature is useful when you see an interesting Reel but don't get the chance to save it – for example, when a phone call comes in, you accidentally close the app, or you get distracted and lose your place in the Reels feed. Users can filter their watch history by date – the last week or month – or select a custom date range. Compared to TikTok, Instagram's version offers more flexibility, allowing videos to be sorted in chronological or reverse chronological order, or by author. Users can also remove Reels from their watch history if they wish.
Apple plans to add vapor chamber cooling to future iPad Pro
Apple is working on integrating a vapor chamber cooling system into the future iPad Pro, likely the one that will have the M6 chip, with a launch estimated for spring 2027. This technology has already been introduced in the iPhone 17 Pro models and helps manage the heat generated by powerful chips, allowing for higher performance without the need for a fan.
The vapor chamber system works by using a sealed metal chamber filled with a small amount of liquid that vaporizes when heated, then condenses and circulates, distributing heat evenly across the surface. Although the iPad Pro already has a larger surface area than an iPhone, which helps dissipate heat, the increasingly demanding requirements – gaming, 4K video editing, AI applications – make thermal management an essential design consideration. If the implementation proves successful on the iPhone and iPad Pro, Apple could extend the technology to other passively cooled devices, such as the MacBook Air.
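The thermal advantage described above comes from latent heat: vaporizing a liquid absorbs far more energy than simply warming a solid. A back-of-the-envelope sketch, using standard physical constants for water and copper (the copper mass is an illustrative assumption, not an Apple spec):

```python
# Latent heat of vaporization of water: ~2,260 J/g at ~100 °C.
# Specific heat of copper: ~0.385 J/(g·K). Standard textbook values.

L_VAPOR_J_PER_G = 2260.0
CU_SPECIFIC_HEAT_J_PER_G_K = 0.385

def heat_absorbed_by_vaporizing(grams_water: float) -> float:
    """Joules absorbed when this much water flashes to vapor."""
    return grams_water * L_VAPOR_J_PER_G

def equivalent_copper_temp_rise(joules: float, grams_copper: float) -> float:
    """Temperature rise the same energy would cause in a copper slab."""
    return joules / (grams_copper * CU_SPECIFIC_HEAT_J_PER_G_K)

# Vaporizing just 1 g of water absorbs 2,260 J, energy that would heat
# a 100 g copper slab by nearly 59 K if stored as sensible heat instead.
j = heat_absorbed_by_vaporizing(1.0)
print(round(equivalent_copper_temp_rise(j, 100.0), 1))  # → 58.7
```

This is why a thin sealed chamber with a few drops of fluid can spread heat more effectively than a much heavier slab of solid metal.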
Adobe's Project Indigo app receives support for iPhone 17
Adobe has announced that its photography app Project Indigo is now compatible with the iPhone 17 series, with one important limitation: the front camera is temporarily disabled. The issue stems from the iPhone 17's new square front sensor, which the app cannot yet handle.
Adobe has collaborated with Apple to resolve the issues, and a fix will be delivered with the iOS 26.1 update. Until then, users can use Project Indigo only with the rear cameras of the iPhone 17. The app uses advanced computational photography and offers full manual control over focus, shutter speed, ISO, and white balance, similar to a DSLR camera. Project Indigo combines up to 32 frames for each photo, resulting in images with less digital noise and a more natural look, unlike the native Camera app that applies more aggressive processing.
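The noise reduction from merging 32 frames follows from basic statistics: averaging N independent noisy readings of the same pixel shrinks random sensor noise by roughly the square root of N. The sketch below shows only this core idea, not Adobe's actual pipeline (Indigo also aligns frames before merging); pixel value and noise level are assumed for illustration:

```python
import random
import statistics

def noise_reduction_ratio(num_frames: int, trials: int = 2000,
                          true_pixel: float = 128.0,
                          sigma: float = 10.0) -> float:
    """Ratio of single-frame error to merged-frame error (≈ sqrt(N))."""
    rng = random.Random(42)  # fixed seed for a repeatable demo

    def capture() -> float:
        # One noisy sensor reading of the same scene.
        return true_pixel + rng.gauss(0, sigma)

    single_err = statistics.fmean(
        abs(capture() - true_pixel) for _ in range(trials))
    merged_err = statistics.fmean(
        abs(statistics.fmean(capture() for _ in range(num_frames))
            - true_pixel) for _ in range(trials))
    return single_err / merged_err

# 32 frames, as in Project Indigo, gives roughly sqrt(32) ≈ 5.7x less noise.
print(round(noise_reduction_ratio(32), 1))
```

The same math explains why burst photography helps most in low light, where per-frame sensor noise dominates.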
Microsoft tests free Xbox Cloud Gaming with ads
Microsoft has confirmed that it is testing a free ad-supported version of the Xbox Cloud Gaming service, separate from Xbox Game Pass subscriptions. This move comes after the company recently raised the price for Game Pass Ultimate by up to 50%, reaching $30 per month in the USA.
The free version will allow users to play certain games they already own, titles from the Free Play Days program, and Xbox Retro Classics. In the current testing setup, players will watch about two minutes of ads before gaining access to one hour of gameplay, with a maximum of five one-hour sessions per month. The service will be available on PCs, Xbox consoles, handheld devices, and web browsers. Microsoft plans to launch a public beta in the coming months before the full launch. The strategy aims to expand the accessibility of the Xbox ecosystem and generate advertising revenue, in the context of stagnant growth for Xbox Game Pass.
IBM and AMD partnership in quantum computing yields results
IBM has announced a major breakthrough in quantum computing after successfully running a quantum error correction algorithm on off-the-shelf AMD FPGA chips, 10 times faster than required for real-time correction. The milestone marks an important step toward building a large-scale, fault-tolerant quantum computer, with IBM aiming to launch its Starling system by 2029.
The demonstration shows that quantum error correction algorithms can run on accessible and inexpensive hardware, instead of specialized and costly GPU clusters. The partnership between IBM and AMD, announced in August 2025, aims to develop next-generation computing architectures that combine quantum computers with high-performance computing. IBM's shares rose by nearly 8%, and AMD's by about 7% after the announcement. This discovery dramatically reduces the costs and complexity of scaling quantum systems, making the technology more accessible for commercial adoption.
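To make the decoding task concrete: the classical side of quantum error correction measures syndrome bits and looks up the most likely error to undo, and must do so faster than errors accumulate. IBM's actual decoders for its qLDPC codes are far more sophisticated; the sketch below shows only the general syndrome-lookup idea, for a classical 3-bit repetition code:

```python
# Syndrome = (parity of bits 0,1, parity of bits 1,2) -> which bit flipped.
SYNDROME_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # bit 0 flipped
    (1, 1): 1,     # bit 1 flipped
    (0, 1): 2,     # bit 2 flipped
}

def decode(bits: list) -> list:
    """Correct a single bit-flip error in a 3-bit repetition codeword."""
    syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    flipped = SYNDROME_TABLE[syndrome]
    corrected = bits[:]
    if flipped is not None:
        corrected[flipped] ^= 1
    return corrected

print(decode([1, 0, 1]))  # middle bit flipped -> [1, 1, 1]
print(decode([0, 0, 1]))  # last bit flipped   -> [0, 0, 0]
```

Running loops like this fast enough, on cheap commodity hardware, is what the IBM/AMD demonstration is about.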
Apple plans to introduce ads in Apple Maps
Apple Maps may start displaying ads in the app beginning next year. Similar to Google Maps and other mapping applications, Apple's plan is to allow restaurants and other businesses with physical locations to pay to promote themselves in search results.
Although Apple already runs ads in the App Store, this could be part of a broader strategy to bring more advertising to iOS. Apple will try to differentiate itself from competitors through a better interface and by using AI to surface relevant results. The question is whether users will start to push back as Apple's devices and apps increasingly become billboards urging them to pay for more Apple services.
OpenAI acquires Software Applications Incorporated, creators of Sky
OpenAI has acquired Software Applications Incorporated, the company behind Sky – a powerful natural language interface for Mac. Sky functions as a floating assistant that understands what is on the user's screen and can take actions in applications to assist with writing, planning, coding, or managing the day.
The entire team of about 12 people, including founders Ari Weinstein and Conrad Kramer (who previously created the Workflow app acquired by Apple and transformed into Shortcuts), will join OpenAI. OpenAI plans to integrate Sky's deep macOS integration into ChatGPT, accelerating the company's vision of bringing AI directly into the tools people use daily. The acquisition comes just a day after OpenAI announced the ChatGPT Atlas browser and is part of a broader strategy to make ChatGPT more than just a simple chatbot – an assistant that helps you accomplish tasks.
Reddit sues Perplexity for data scraping
Reddit has sued the artificial intelligence company Perplexity, accusing it of abusively scraping content from its platform to train AI models and generate responses. Reddit claims that Perplexity circumvented technical measures designed to prevent bots from collecting data without permission.
The lawsuit is part of a broader effort by Reddit to protect its user-generated content from AI companies. Reddit has previously signed data licensing agreements with Google and OpenAI, but Perplexity does not have such an agreement. Perplexity is an AI-powered search engine that provides conversational answers instead of traditional lists of links. The controversy raises broader questions about what content can be used to train AI and whether platforms have the right to control how their users' data is used by third-party companies.
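The permission mechanism at the center of such disputes is robots.txt, the voluntary standard sites use to tell crawlers what they may fetch; Reddit's complaint is that these signals and stronger technical blocks were circumvented. A minimal sketch of how a compliant bot checks the rules, using illustrative rules rather than Reddit's actual robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules: block all bots from /r/ paths.
rules = """\
User-agent: *
Disallow: /r/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler checks before fetching each URL.
print(parser.can_fetch("SomeBot", "https://example.com/r/news"))  # False
print(parser.can_fetch("SomeBot", "https://example.com/about"))   # True
```

Because robots.txt is advisory rather than enforceable, platforms back it up with technical measures (rate limits, bot detection), and it is the circumvention of those measures that lawsuits like Reddit's target.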
The major Sora update brings AI pet videos and social features
OpenAI has announced a series of updates for its video generation app Sora, which recently topped the App Store. The app will introduce video editing tools, the ability to create "cameos" with pets and other objects, improvements to social features, and soon an Android version.
The "cameo" feature allows users to transform pets, favorite toys, and practically anything else into AI characters after providing a reference video. These cameos can be shared with friends, allowing them to create videos with your AI characters. The app will also add basic video editing tools, starting with the ability to stitch multiple clips together. An updated social experience will introduce new ways to use Sora with friends, including dedicated channels specific to a university, company, sports club, and more. The company is also working on reducing excessive moderation of generations and improving the overall performance of the app.
OpenAI requests attendee list for the memorial of Adam Raine, the teen who died by suicide after prolonged conversations with ChatGPT
OpenAI has reportedly asked the Raine family (whose 16-year-old son, Adam Raine, died by suicide after prolonged conversations with ChatGPT) for a complete list of attendees at the teenager's memorial service, suggesting that the AI firm may be preparing to subpoena friends and family. OpenAI also requested "all documents related to services or memorial events in honor of the deceased, including any videos or photographs taken, or eulogies spoken."
The Raine family's lawyers described the request as "intentional harassment." The family has amended its lawsuit against OpenAI, claiming that the company rushed the May 2024 launch of GPT-4o, curtailing safety testing under competitive pressure. The lawsuit further claims that in February 2025, OpenAI weakened protections by removing suicide prevention from its list of "prohibited content," instead advising the AI only to "exercise caution in risky situations." The family argues that after this change, Adam's use of ChatGPT rose from a few dozen daily chats, 1.6% of which contained self-harm content in January, to 300 daily chats in April, the month he died, with 17% containing such content.
YouTube adds a timer to help you stop scrolling through Shorts
YouTube has added a new timing feature to help users manage the time spent watching Shorts, a move that reflects both the growing public pressure on tech platforms and the company's interest in promoting long-term engagement instead of risking user burnout. Users can set a daily time limit for watching Shorts through the app's settings.
Once they reach the limit, a pop-up informs them that scrolling through the Shorts feed is paused – although the pop-up can be dismissed. The limit is not yet integrated with parental controls, meaning parents or guardians cannot cap how much their children scroll through Shorts. However, the company says parental controls will arrive next year, at which point children will not be able to dismiss the prompts. YouTube has previously launched digital wellness features, including "take a break" and bedtime reminders meant to curb compulsive scrolling.
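The behavior described (a per-day limit whose prompt adults can dismiss but children eventually will not be able to) boils down to simple state tracking. A minimal sketch under those assumptions; class and field names are illustrative, not YouTube's implementation:

```python
from dataclasses import dataclass

@dataclass
class ShortsTimer:
    daily_limit_min: int
    watched_today_min: float = 0.0
    prompt_dismissed: bool = False
    child_account: bool = False  # planned parental controls: no dismissal

    def record_watch(self, minutes: float) -> None:
        self.watched_today_min += minutes

    def should_show_pause_prompt(self) -> bool:
        over_limit = self.watched_today_min >= self.daily_limit_min
        return over_limit and not self.prompt_dismissed

    def dismiss(self) -> None:
        if not self.child_account:  # children can't wave the prompt away
            self.prompt_dismissed = True

timer = ShortsTimer(daily_limit_min=30)
timer.record_watch(31)
print(timer.should_show_pause_prompt())  # True: limit reached
timer.dismiss()
print(timer.should_show_pause_prompt())  # False: adult dismissed it
```

A real implementation would also reset the counter at local midnight and persist state server-side so reinstalling the app doesn't clear it.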
Microsoft is sued in Australia for misleading prices on AI
The Australian Competition and Consumer Commission has sued Microsoft, accusing the tech giant of misleading approximately 2.7 million customers, causing them to pay higher subscription fees for Microsoft 365 after the integration of the AI tool Copilot. The regulator claims that Microsoft implied that users must upgrade to new, more expensive Personal and Family plans that include Copilot, without clearly disclosing that the cheaper "classic" plans without the AI feature were still available.
After integration, subscription prices surged – the Personal plan increased by 45% to 159 Australian dollars and the Family plan by 29% to 179 Australian dollars. Customers reportedly discovered the lower-cost option only after initiating the cancellation process, which the regulator argued is misleading and violates Australian consumer law. Microsoft could face penalties of 50 million Australian dollars or more for each violation. The case highlights how regulators worldwide are intensifying scrutiny over pricing and packaging practices of Big Tech.
Anthropic expands partnership with Google Cloud with a multi-billion AI chip deal
Anthropic has signed a major agreement with Google Cloud to access up to a million of its custom tensor processing units. The multi-year contract, estimated to be worth tens of billions of dollars, will provide the Claude chatbot developer with more than a gigawatt of computing capacity as it ramps up training for the next generation of AI models.
The expansion strengthens Anthropic's long-standing relationship with Google, one of its largest backers, which has already invested over 3 billion dollars in the San Francisco startup. The new capacity, expected to be available in 2026, marks one of the largest known single deployments of Google AI chips. The arrangement reflects how cloud providers and AI companies deepen their commercial and investment ties in the quest for computing power – the most critical resource in the generative AI race. The move is part of a deliberate multi-cloud strategy aimed at balancing performance, costs, and supply chain resilience.
Amazon reveals the cause of the AWS outage
Amazon Web Services suffered a major outage of over 15 hours in its US-EAST-1 region, affecting thousands of popular services including Snapchat, Netflix, Starbucks, and United Airlines. The problem began with a race condition in DynamoDB's internal DNS management system, which cascaded into core services such as EC2, Lambda, NLB, and ECS/EKS, plus dozens of secondary services.
The failure occurred when two automated systems tried to update the same DNS data simultaneously, producing an empty DNS record that left DynamoDB's endpoint unresolvable. When DynamoDB came back, EC2 attempted to bring all of its servers back online at once and could not cope. The outage demonstrated how much modern life depends on cloud infrastructure, from banking apps and airlines to smart home devices and gaming platforms. Amazon apologized for the impact on customers and is making a series of changes to its systems, including fixing the race condition and adding a suite of additional tests for the EC2 service.
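A deterministic toy model of that class of bug: two automation workers apply DNS plans to the same record without checking plan versions, so a delayed worker applying a stale plan clobbers a newer one, and cleanup then empties the record. All names are illustrative, not AWS internals:

```python
dns_record = {"dynamodb.example": ""}  # hypothetical endpoint, not yet set
applied_version = {}

def apply_plan(name: str, plan_version: int, ips: str) -> None:
    # BUG: no check that plan_version is newer than what's already applied.
    # The fix Amazon describes amounts to enforcing monotonic versions here.
    dns_record[name] = ips
    applied_version[name] = plan_version

# Worker A applies the current plan, v2...
apply_plan("dynamodb.example", 2, "10.0.0.2")
# ...then a slow Worker B finally applies the stale plan v1, clobbering v2.
apply_plan("dynamodb.example", 1, "10.0.0.1")

# Cleanup sees an outdated plan version and deletes the record entirely.
if applied_version["dynamodb.example"] < 2:
    dns_record["dynamodb.example"] = ""

print(repr(dns_record["dynamodb.example"]))  # '' : endpoint resolves to nothing
```

The lost-update pattern is the same whether the shared state is a DNS plan, a database row, or an in-memory counter; the standard remedies are version checks (compare-and-swap) or a single serialized writer.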
OpenAI introduces Company Knowledge feature in ChatGPT
OpenAI has launched Company Knowledge for ChatGPT Business, Enterprise, and Edu users, a feature that connects ChatGPT to internal work applications such as Slack, SharePoint, Google Drive, and GitHub. The feature allows ChatGPT to access and analyze data from key workplace tools, enabling employees to receive context-aware responses, supported by data and extracted directly from the company's internal systems.
Powered by an advanced version of GPT-5, the feature allows ChatGPT to extract information from multiple connected data sources and present responses with clear citations. The system operates strictly within existing user permissions, ensuring that ChatGPT can only access information that users are authorized to see. All data is encrypted and protected with enterprise-level security measures such as SSO, SCIM, and IP whitelisting. OpenAI has confirmed that it will not use data from Business, Enterprise, or Edu customers to train its models by default. This marks a significant leap in enterprise AI usability, enabling teams to make faster and more informed decisions by consolidating knowledge that often remains buried in disparate tools.
EU finds that Meta and TikTok have violated transparency obligations
The European Commission has issued preliminary findings that TikTok and Meta (Facebook and Instagram) have violated their transparency and user protection obligations under the Digital Services Act. Both platforms are accused of failing to give researchers adequate access to public data, and the Commission additionally found that Facebook and Instagram do not offer easy-to-use mechanisms for reporting illegal content.
The Commission stated that the procedures and tools for requesting data access are "cumbersome," leaving researchers with partial or unreliable data and undermining research into how users, including minors, are exposed to illegal or harmful content. Meta is additionally accused of using "dark patterns" – design tricks that manipulate users – and of operating an appeals system that does not let users provide explanations or evidence when contesting content moderation decisions. If confirmed, the violations could bring fines of up to 6% of global annual revenue – approximately $9.87 billion for Meta and $1.38 billion for TikTok.
Microsoft prioritizes teen safety in Copilot AI
Microsoft is positioning its Copilot AI assistant as a safer alternative for teenagers, refusing to allow romantic conversations, flirting, or erotic content, even for adult users. Microsoft AI CEO Mustafa Suleyman stated that the company is creating "AIs that are emotionally intelligent, kind, and supportive, but fundamentally trustworthy" and that it wants to build "an AI that you can trust your children to use."
This approach contrasts with competitors like ChatGPT and Meta AI, which allow romantic and sometimes sexual conversations, though both have added protections for children following lawsuits linking chatbots to mental health harms and suicides. Microsoft is focused on training Copilot to encourage users to interact with other people, not just with the AI. A new "groups" feature will let up to 32 people join a shared chat with Copilot. For health-related questions, the chatbot will recommend nearby doctors and draw on "medically reliable" sources such as Harvard Health. Copilot currently has 100 million monthly active users, well below ChatGPT's 800 million.
AI helps radiologists detect breast cancer through mammograms
Artificial intelligence is increasingly being used by radiologists to enhance the reading of routine mammograms, detecting cancerous tumors that doctors might have missed. The AI software is trained on hundreds of thousands or millions of mammogram images and aims to distinguish subtle differences between malignant and benign tissue. Some AI programs identify suspicious areas, while others predict the likelihood of a woman developing breast cancer.
A study in Sweden involving over 8,800 women showed that the AI software Lunit correctly identified cancers in 88.6% of cases, although it gave a false positive in 7% of cases. Another study found that the AI caught cancers that two radiologists had missed. Researchers at the University of California, San Francisco, are using AI to accelerate diagnosis – for breast cancer patients, AI triage has reduced the average time from mammogram to biopsy by 87%, from 73 days to nine days. However, experts emphasize that more research is needed before AI becomes standard, especially studies in the USA that show whether AI effectively saves lives. One concern is that AI might be too good at its job, finding tumors that are technically cancerous but do not pose a life threat, leading to unnecessary treatments.
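The figures above can mislead without Bayes' rule: even an 88.6% detection rate with a 7% false-positive rate means most flagged women do not have cancer, because screening prevalence is low. A worked sketch; the ~0.5% prevalence is an assumed illustrative value, not a figure reported by the study:

```python
def positive_predictive_value(sensitivity: float,
                              false_pos_rate: float,
                              prevalence: float) -> float:
    """P(cancer | flagged), by Bayes' rule."""
    true_pos = sensitivity * prevalence            # sick and flagged
    false_pos = false_pos_rate * (1 - prevalence)  # healthy but flagged
    return true_pos / (true_pos + false_pos)

# With the study's rates and an assumed 0.5% screening prevalence:
ppv = positive_predictive_value(0.886, 0.07, 0.005)
print(round(ppv, 3))  # → 0.06: roughly 6 in 100 flagged women have cancer
```

This is why a positive AI flag triggers follow-up imaging or biopsy rather than a diagnosis, and why false-positive rates matter as much as detection rates in screening.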
This digest was compiled with the help of a monitoring feed provided by Control F5 Software.