Google will launch the Pixel 10 series at the Made by Google event on August 13. (more…)
-
Google Duo to be fully shut down; longtime users should back up promptly to avoid data loss
As part of its unified product strategy, Google announced that it will fully end support for Google Duo's remaining legacy features in September 2025. (more…)
-
Google's data lakehouse updated to natively support Iceberg and AI data governance
Google Cloud announced a series of major upgrades to its Data Cloud platform, focused on strengthening the openness and intelligent-governance capabilities of its data lakehouse architecture. (more…)
-
Google's new research: a quantum computer with under a million qubits could crack the RSA encryption algorithm in under a week, 20 times easier than estimated six years ago
Silent revolutions often unfold quietly behind laboratory doors. Google's latest research, however, shows that the shockwaves of the quantum computing revolution have begun to ripple outward, and they may shake the foundations of Internet security.
On May 21, Google Quantum AI published a paper on arXiv titled "How to factor 2048 bit RSA integers with less than a million noisy qubits". The research shows that a quantum computer with fewer than one million noisy qubits could crack a 2048-bit RSA encryption key, the current mainstream standard for securing network data, in under a week. That figure is just 1/20 of the roughly 20 million qubits the author himself estimated in 2019.
The study, by Google quantum research scientist Craig Gidney, may redefine the technical threshold required to threaten the world's most widely used public-key cryptosystem.
The study may prompt experts to re-evaluate (1) the urgency of deploying post-quantum cryptography, and (2) the practical feasibility of mounting such attacks on currently proposed hardware. More broadly, it shows that while qubit count, gate fidelity, and error rate all matter, algorithmic innovation and hardware-software co-design can also deliver milestone advances toward quantum advantage.
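To see why a factoring breakthrough matters, here is a minimal textbook RSA sketch (not the paper's quantum algorithm; tiny primes for illustration, where real keys use 2048-bit moduli). The point it demonstrates: once the factors p and q of the public modulus are known, the private key follows immediately, which is exactly the step a Shor-style quantum attack would accelerate.

```python
# Toy illustration of why factoring N breaks RSA.
# An attacker who factors the public modulus N = p*q can recompute the
# private exponent d from the public exponent e, just as the key owner did.
p, q = 61, 53                # secret primes (tiny here; 2048-bit in practice)
N = p * q                    # public modulus
e = 17                       # public exponent
phi = (p - 1) * (q - 1)      # Euler's totient, computable only via p and q
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi

message = 65
ciphertext = pow(message, e, N)      # anyone can encrypt with (N, e)
recovered = pow(ciphertext, d, N)    # knowing p and q yields d, hence the plaintext
assert recovered == message
```

For a 2048-bit modulus, no known classical algorithm can recover p and q in practical time; Gidney's estimate concerns the quantum resources needed to do so.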
-
The Google I/O Developer Conference will be held at 1:00 a.m. on May 21.
At 1 a.m. Beijing time on May 21, Google I/O 2025 kicks off! Whether you are a developer or a technology enthusiast, you can unlock new opportunities for innovation here.
Stop one: grab the conference agenda and start exploring
Conference keynote speech + developer keynote speech
Take you to explore the latest products, platforms and tools
Insight into Google’s future technology trends
The live broadcast will officially start at 1 a.m., and you can’t miss it!
May 21 (Beijing time)
1:00 – 2:45 am Conference Keynote Speech
2:45 – 4:30 am Break time
4:30 – 5:45 am Developer Keynote Speech
Stop two: watch Google I/O highlights across multiple platforms
No matter where you are, there are plenty of routes to the tech scene. Besides watching the full live stream on PChome, this Google I/O can also be viewed on 32+ other platforms, with simultaneous interpretation to help you enjoy the conference with ease.
Bookmark the live-stream and replay links now so you can revisit them anytime and unlock more possibilities!
-
Google updates the "G" logo for the first time in ten years
On May 12, with the update of the Google app for iOS to version 368.0, the application icon quietly received a small revision.
The new icon retains the iconic four-color capital "G" design, but introduces a subtle blur at the color transitions, softening the original hard boundaries between the four colors into a gentler gradient.
Image: the Google app's old and new icons
In the Google app's settings, users can find an "Alternate Application Icon" option. Launched by Google in July 2024, it lets users switch between different icon styles. The alternate icons come in light and dark versions, two with the "G" in color and two in monochrome. In this update, all four alternate icons were also replaced with the latest gradient versions, keeping them consistent with the main icon.
Image: the icon settings interface in the Google app
Notably, this classic four-color "G" icon had not changed since Google completed its rebranding in 2015. Subtle as the change is, it drew wide attention from both the design community and ordinary users. Overall, young users and design enthusiasts welcomed its modern look and visual softness, while some users criticized its ambiguity and reduced recognizability.
Image: the gradient-color "G" icon on a dark background
So far, Google has not adjusted its core "Google" wordmark, and there is no clear signal yet on whether other product icons (such as Gmail, Drive, and Photos) will be updated in step.
Image: Google and Google Gemini application icons
The change to the iOS app icon does suggest, however, that Google is gradually converging on the visual language used by Google Gemini and its AI-related interfaces.
This trend may indicate that Google is folding artificial-intelligence elements into its brand design, moving toward a more unified and futuristic visual system.
Image: Google on iOS, Android, and the web
As of press time, only the iOS Google app and the Android Google app 16.18.37 beta have enabled the new icon; no corresponding change has appeared on the web. Judging by Google's usual practice, if it commits to gradient icons, products such as Chrome and Google Maps, which also use Google's four brand colors, are likely to adopt a similar gradient effect to stay consistent with this "G" icon update.
On September 1, 2015, Google carried out its largest rebranding since its founding, launching a new sans-serif wordmark. As in the original wordmark, the "e" is tilted, a reminder that Google has always been an unconventional company. The rebranding is believed to have been influenced by the broader trend of technology companies simplifying their logos to stay recognizable on the growing number of devices running their services.
Image: evolution of the Google wordmark
On the app side, Google has shipped apps on iOS since 2008. Its app icon was long dominated by white lettering on a blue background; after the 2015 rebranding, it became the current four-color "G".
Image: changes to the iOS Google app icon
Interestingly, Google's icon on the web has taken yet another form. Between 2009 and 2012, for example, Google used a website favicon containing a left-aligned white lowercase "g" on a background of red, green, blue, and yellow, with the top, bottom, and left edges of the letter slightly cropped.
Image: changes to Google's web icons
As for why, ten years on, Google has decided to blur the color boundaries in its logo, the Google I/O 2025 developer conference opening on May 20 may hold the answer.
-
The number of subscribers to the Google One subscription service is growing rapidly
According to Reuters, Google’s parent company Alphabet revealed that its Google One subscription service recently surpassed 150 million users. The service, which offers cloud storage and artificial intelligence capabilities, has seen a 50% increase in subscribers since February 2024.
Google has reportedly launched a $19.99 monthly plan that gives users access to artificial-intelligence features unavailable to free users. Lower-priced Google One tiers continue to offer file storage but exclude most of the AI features.
Shimrit Ben-Yair, Google's vice president of subscription services, said the newly launched AI plan had brought in "millions" of subscriptions. Google One is also part of Alphabet's push to diversify away from its heavy reliance on advertising, which accounted for more than three-quarters of its $350 billion in total revenue in 2024.
With OpenAI’s ChatGPT and AI chatbots such as Google’s own Gemini threatening the dominance of Google’s search engine, Alphabet’s success in the subscription business could play a key role in its long-term financial prospects.
An Apple executive said in court testimony last week that searches on Apple’s Safari browser had slipped for the first time due to the advent of artificial intelligence services. The iPhone maker is looking to roll out an AI-powered search option, which is a blow to Alphabet. Unlike search engines, though, AI interfaces have yet to find a way to seamlessly embed ads. As a result, many companies turn to charging users through subscriptions or on a per-product basis.
In February, when asked on an earnings call how Gemini would be monetized, Google CEO Sundar Pichai said: "As you can see on YouTube, we're going to give users a variety of options over time. For this year, I think we're going to focus on the subscription direction."
-
Google Maps' new feature scans iPhone screenshots to save forgotten locations, but raises privacy concerns
Google Maps has launched a new feature that scans users' screenshots to identify locations and add them to a private list, designed to help users more quickly retrieve places they screenshot. According to Google's blog, the feature is now live in the iOS app, having been announced a month earlier. Rather than using location data or image recognition, it uses Google's in-house Gemini artificial intelligence (AI) to scan the text in screenshots for the names of places mentioned, a limited first step toward a more powerful version.
In the "You" tab of the Google Maps app, users will find the new private lists. After updating to the latest version, users will see a private list labeled "Screenshots", along with a tutorial for the feature. The "Screenshots" folder displays recent screenshots that contain place names. Users can tap the "Review" button to see the locations Google detected, save the ones that are correct, and discard the rest; saved locations then appear on the map. Users can also authorize Google to automatically scan all screenshots for locations, or add images manually.
However, given Google's track record of privacy controversies, the idea of letting the company scan my screenshot library and record location information is unsettling. Such location data could be sent to Google's servers, processed, sold, and used to build user profiles, which sits at the heart of Google's business model. Beyond privacy, I also question how much utility the feature adds: on iOS, users can already long-press any text to look it up online, including addresses and place names, so location information in screenshots is easy to retrieve without a Google Maps update.
Still, other users may well find the feature useful. If you want to try it, make sure you have the latest version of Google Maps on iOS; the feature will come to Android devices in the future.
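The pipeline the article describes, extracting place names from a screenshot's text rather than its pixels or location metadata, can be sketched as follows. This is a hypothetical simplification: the real feature uses Gemini to find place names, while here a plain keyword match against an invented `KNOWN_PLACES` set stands in for the model.

```python
import re

# Hypothetical stand-in for Gemini's place-name detection: match OCR'd
# screenshot text against a small set of known place names (illustrative only).
KNOWN_PLACES = {"Blue Bottle Coffee", "Golden Gate Bridge", "Shinjuku Gyoen"}

def detect_places(screenshot_text: str) -> list[str]:
    """Return known place names mentioned in the screenshot's text."""
    return [place for place in KNOWN_PLACES
            if re.search(re.escape(place), screenshot_text, re.IGNORECASE)]

ocr_text = "Let's meet near the Golden Gate Bridge, then coffee at Blue Bottle Coffee."
found = detect_places(ocr_text)
# In the real feature, detected places go into the private "Screenshots"
# list for the user to review, save, or discard.
```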
-
Taking aim at Apple and Huawei! The most comprehensive Google I/O preview: a blockbuster Android 16 upgrade, and hidden tricks from Gemini?
With May fast approaching, Google I/O 2025 has quietly entered its countdown. One of the most influential developer conferences in the tech industry, this year's edition will be held in person in Mountain View, California on May 20, with a simultaneous global live stream. Whether you are an Android developer, an AI enthusiast, or an XR industry watcher, this year's conference is worth your attention.
Without a doubt, this year's I/O will continue the shift from 'mobile first' to 'AI native', and then toward 'device integration'. It is no longer a single-platform feature upgrade but an ecosystem-wide linkage across platforms and devices. Android 16 will officially debut and integrate Gemini's large-model capabilities on a much larger scale.
At the same time, the new independent operating system Android XR, released at the end of last year, will also make its debut on the Google I/O stage. Lei Technology has reported that Google AI Glasses and the Project Moohan MR device jointly developed with Samsung will also make their debut.
In the past, Google I/O was mostly the home of Android, but now, AI and multi device collaboration are becoming the main theme. What Google wants to talk about is not just what the next Android is, but also how the next ‘operating system’ should grow. And this conference is the moment when it plays its cards.
Android 16 will have more ‘native’ AI
If Android 15 is Google’s first step in trying to bring Gemini into the system, then Android 16 is truly a leap forward in making AI a native capability of Android.
Image/Google
According to current known information, the biggest keyword for Android 16 is still “Gemini”. This is not just about building a conversational assistant, but turning the Gemini model into the core infrastructure of Android: from suggestion replies in the notification bar, understanding content between applications, to proactive recommendations in the settings interface, AI is penetrating every interaction node that you can see or not see.
More importantly, Gemini will also expose more system-level APIs for developers to call. This means you are no longer using an app that merely 'connects to a large model'; instead, every app can natively gain the ability to 'understand you'. For example, a booking app could recognize your current itinerary and directly recommend the most suitable flight, and a health app could suggest schedule adjustments based on your recent steps and sleep.
In addition to AI, this time Android 16 continues to polish the system experience in two directions:
One is a refresh of the visual language. The new Material 3 Expressive design language will officially debut: Google is no longer satisfied with 'uniform beauty' and instead encourages 'more emotion'. Richer animations, softer colors, and more perceptible details make the UI about more than just appearance.
Image/Google
Another is multi device integration. Android 16 is clearly prepared for a ‘more distributed’ future: it further enhances its adaptability to tablets, wearable devices, and even XR devices. XR is particularly noteworthy, as Google’s Android XR is poised to take off, and Android 16 is likely to be the bridge between reality and virtuality.
Of course, there are also some minor upgrades worth mentioning, such as stronger HEIC image encoding, more precise camera white balance adjustment, more flexible audio sharing mechanism, and smarter permission pop-up design – these may not be dazzling, but each one is improving Android’s stability and user experience.
In short, Android 16 is not a superficial UI update or a single point of feature enhancement, but a comprehensive systematic evolution centered around “AI native”. It attempts to answer a question: If the operating system were born to understand you, what would be the way your phone is used? At this year’s Google I/O, we may be able to get a sneak peek.
Android XR: redefining 'reality'
If Android 16 is Google's upgrade to embed AI deeply into the mobile system, then Android XR is the depth charge it has dropped on future computing platforms.
This is not another “extended version” of Android, but a standalone operating system tailored for XR (Extended Reality) devices. From the technology stack to the interaction logic, its construction logic is completely different from traditional mobile phone systems. At the end of last year, Google publicly announced the existence of Android XR for the first time, and this year’s I/O will be its true debut in the public eye.
The foundation of Android XR is still Gemini. This large model, combining multimodal, contextual-understanding, and reasoning capabilities, is no longer a bolt-on assistant but the core interaction engine of the XR system. From voice control to visual recognition, from environmental perception to real-time translation, Gemini lets devices not only respond to what you say, but also understand what you see and what you want to do.
Google AI Glasses for Demonstration, Image/TED
The most representative embodiment of this vision is the AI glasses Google showcased earlier. They look like ordinary glasses, but embed a camera, speakers, and microphones, along with a micro-display and Gemini system support. Glance at a menu, for example, and the glasses can identify dish names and recommend healthier options; or walk the streets of an unfamiliar city, whisper 'What building is this?', and get real-time voice commentary.
This pair of glasses is not meant to show off skills, but represents Google’s real bet on the form of “AI glasses” – it doesn’t require immersion or emphasize entertainment, but wants to become the AI assistant that is always by your side, able to listen, see, and respond in your life.
Of course, XR’s main equipment will not only be glasses. The Project Moohan, a collaboration between Google and Samsung, will also make its debut at this I/O event. This is the first MR device equipped with Android XR, and its hardware specifications are considered flagship. More importantly, it emphasizes ecological compatibility – users can run apps in the Play Store through the Android XR system.
Application interface under Android XR, image/Google
Compared to the closed strategy of Apple Vision Pro, Google clearly wants to take a different path. Android XR is open, modular, and deeply integrated with the existing Android development system. You can use Jetpack Compose to write XR apps, or integrate ARCore with Unity without having to rebuild the toolchain.
What Google wants to create is not just an XR system, but also a new platform that continues the spirit of Android. It is not about “remaking a mobile phone system” for headsets, but about enabling all future terminals – glasses, headsets, and even hybrid devices – to have native AI capabilities and cross platform collaboration capabilities.
2025 may be the turning point when Android truly begins to 'detach from the screen', and Google I/O is the first act of that story.
Gemini is not only a big model, but also the soul of Google
In the past year, if there is any technology that can truly run through Google’s entire product line and reshape user interaction logic, it is none other than Gemini. It is not an app or a service, but rather a pivot of Google’s top-down AI strategy.
From Android to Search, from Workspace to Chrome, and to the upcoming XR operating system, Gemini is no longer an optional feature, but is becoming a new kernel for Google’s operating system. This year’s Google I/O, Gemini will usher in its most significant public evolution.
Firstly, the evolution of the model itself. Gemini 2.5 will be one of the main characters of this conference, with the main keywords being “speed” and “flexibility”. Especially Gemini 2.5 Flash – a lightweight model optimized for practical application scenarios, users can even set a “thinking budget” for it to use computing power on demand.
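The "thinking budget" mentioned above maps onto a configurable cap on the model's reasoning tokens. As an illustration only, here is what such a request body could look like for the Gemini REST API's generateContent call; the field names (`generationConfig`, `thinkingConfig`, `thinkingBudget`) follow the publicly documented API at the time of writing, but treat them as an assumption rather than a guaranteed contract.

```python
import json

# Illustrative request body capping Gemini 2.5 Flash's reasoning tokens.
# Field names are assumptions based on the public Gemini API documentation.
request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "Summarize this release note."}]}
    ],
    "generationConfig": {
        "thinkingConfig": {
            "thinkingBudget": 1024   # max tokens the model may spend "thinking"
        }
    },
}
payload = json.dumps(request_body)  # serialized body for the HTTP POST
```

Setting the budget to a small value trades reasoning depth for latency and cost, which is exactly the "computing power on demand" behavior the article describes.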
Image/Google
Secondly, there is a comprehensive integration at the system level. If Android 16’s Gemini is more like “embedded intelligence”, then in the XR field of Android XR, Gemini has become the main interaction mode of the entire system. It can not only listen to you speak, but also see where your gaze falls and understand what you want to do next. What Google expects is a computing environment with “zero learning cost” and “fully aware response”, and Gemini is the key to achieving all of this.
Thirdly, Gemini’s development interface is becoming increasingly open. Google’s I/O will officially launch a new version of Gemini API and Gemini Nano (Edge Side Model) toolchain – a complete system for AI native application developers. From text generation to image understanding, from data summarization to multiple rounds of dialogue, developers can call model capabilities like calling network interfaces, truly entering the era of “AI as a platform”.
Image/Google
Especially for edge terminals such as mobile phones, glasses, and wearable devices, this means that AI is no longer just a cloud privilege, but can enter the local environment and stay with users in their daily lives.
In short, for ordinary users, the changes in Gemini itself may not appear as drastic, but the impact is everywhere. As the core of Google’s AI strategy, Gemini is undoubtedly Google’s first trump card for the next generation platform – AI native, multi device collaboration, and spatial computing.
It is not just a model architecture, but a capability distribution mechanism, a reconstruction of interaction paradigms, and an excellent opportunity for Google to redefine ecological dominance. And Google I/O 2025 will be the stage for its comprehensive explosion.
Closing thoughts
In the past two years, the entire technology world and Google I/O have undergone tremendous changes, and AI is undoubtedly the core driving force behind these changes. This year’s trend is even more evident, and the protagonist of Google I/O 2025 is obviously not just a certain phone or function, but the bigger problem:
How should AI be integrated into operating systems, and even into our real world?
Android 16 will tell us that an operating system can be more 'AI native'; Android XR will tell us that an operating system can detach from the screen; and Gemini will offer an answer about the future of human-computer interaction. All of this may be officially revealed at Google I/O 2025.
-
Google unifies its global search domain as google.com: what will change in the search experience?
Google recently announced a major adjustment plan aimed at optimizing the search experience for users worldwide. According to this plan, Google will gradually guide all search users to switch to its main domain, google.com, instead of using URLs with country code top-level domains (ccTLDs) in the coming months.
Specifically, whether a user previously accessed Google search through google.co.uk in the UK, google.com.br in Brazil, or a country-specific domain elsewhere, the system will automatically redirect them to the unified google.com domain. This change means Google search users worldwide will see a more unified, simplified entry point.
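The redirect behavior described above can be sketched as a simple host rewrite that preserves the rest of the URL. The host list and logic here are illustrative assumptions, not Google's actual rules; in practice the server would answer with an HTTP 301 redirect.

```python
from urllib.parse import urlsplit, urlunsplit

# Illustrative sketch: map ccTLD Google search hosts onto google.com,
# keeping the path and query string intact (hypothetical host list).
CCTLD_HOSTS = {"www.google.co.uk", "www.google.com.br", "www.google.de"}

def redirect_target(url: str) -> str:
    """Return the google.com equivalent of a ccTLD search URL."""
    parts = urlsplit(url)
    if parts.netloc in CCTLD_HOSTS:
        parts = parts._replace(netloc="www.google.com")
    return urlunsplit(parts)

target = redirect_target("https://www.google.co.uk/search?q=weather")
```

Because only the host changes, a query such as `q=weather` survives the redirect, which matches Google's statement that search functionality is unaffected.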
Google stated that the core purpose of this adjustment is to simplify operations and ensure that users around the world get consistent, high-quality search results. In the past, Google used country-specific top-level domains mainly to provide localized results, that is, content relevant to a country or region based on the domain the user visited. Since 2017, however, Google has been able to adjust search results automatically based on the user's geographic location, delivering a relevant search experience regardless of which domain they access.
Google further emphasizes that although this update will change the domain names that users see in the browser address bar, it will not have any impact on the normal operation of the search function. Meanwhile, Google’s responsibilities and obligations under legal frameworks around the world will remain unchanged. This adjustment aims to provide users with a smoother and more consistent search experience without worrying about differences caused by different domain names.