Category: google

google

  • Google begins rolling out Gemini Live voice conversation mode to Workspace users

    In its latest update, Google has begun rolling out Gemini Live, its voice conversation mode, to Workspace users. The feature draws on Gemini’s multimodal capabilities to provide a conversational experience: users ask questions in natural spoken language and receive natural-sounding AI voice responses, and they can interrupt Gemini’s answers, ask follow-up questions, or switch topics mid-conversation. Gemini Live was unveiled at the Pixel 2024 event and rolled out to Android and iOS users over the following months; it was initially limited to paid Gemini Advanced subscribers and later became free for individual accounts. The feature is accessible via a button in the lower-right corner of the Gemini app, and users can choose the chatbot’s voice from 10 options. Google has also updated the voice chat feature to support more languages, analyze camera input, and make suggestions based on shared screen content. As Google noted in a blog post, Workspace users can interact with Gemini Live in a number of ways: brainstorming ideas such as new marketing campaigns, potential paper topics, or sales conference strategies; discussing potential features of new products or questions about research papers; or practicing out loud and getting feedback on upcoming presentations. In addition to webcam and screen sharing, the feature supports including pictures, files, and YouTube videos in a conversation. Note that Gemini Live is available only to users aged 18 and over, and Google warns that users on work or school accounts cannot turn off or delete their Gemini app activity (on both the web and mobile apps). Gemini Live has begun rolling out to all Google Workspace tiers, including Business Starter/Standard/Plus, Enterprise Starter/Standard/Plus, Education Fundamentals/Standard/Plus, Frontline Starter/Standard, Essentials, Enterprise Essentials, and Enterprise Essentials Plus. Nonprofits and Workspace customers who have purchased Gemini add-ons can also use the feature.

  • Google Maps’ new feature scans iPhone screenshots to save forgotten locations, but raises privacy concerns

    Google Maps has launched a new feature that scans users’ screenshots to identify locations and add them to a private list, designed to help users more quickly revisit places they captured in screenshots. According to Google’s blog, the feature has launched in the iOS app and was announced about a month ago. Instead of using location data or image recognition, it uses Google’s in-house Gemini AI to scan the text in screenshots for place names that are mentioned; it is a limited first step toward a more powerful version. In the “You” tab of the Google Maps app, users will find the new private list: after updating to the latest version, users will see a private list labeled “Screenshots,” along with a tutorial for the feature. The “Screenshots” folder displays recent screenshots that contain place names. Users can tap the “Review” button to see the locations Google detected; if a detection is correct, they can choose to save it, and otherwise discard it. The app then displays saved locations on the map. Users can also authorize Google to automatically scan all screenshots for locations, or add images manually. However, given Google’s controversies over user privacy, the idea of letting the company scan my screenshot library and record location information is deeply unsettling to me. Such location data could be sent to Google’s servers, processed, sold, and used to build user profiles, which sits at the heart of Google’s business model. Beyond privacy, I also question how much utility the feature adds: on iOS, users can already long-press any text to look it up online, including addresses and place names, so location information in screenshots can be retrieved easily without relying on a Google Maps update. Still, other users may well find the feature useful. If you want to try it, make sure you have the latest version of Google Maps for iOS; the feature will come to Android devices in the future.
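The flow described above — OCR’d screenshot text matched against known place names, then queued for the user to review — can be sketched roughly as follows. This is a minimal illustration, not Google’s implementation: the gazetteer, function names, and matching logic are all hypothetical.

```python
# Minimal sketch of the described flow: scan OCR'd screenshot text for
# known place names and queue matches for the user to confirm or discard.
# The gazetteer and all names here are hypothetical illustrations.

GAZETTEER = {"Blue Bottle Coffee", "Golden Gate Park", "Ferry Building"}

def extract_places(ocr_text: str) -> list[str]:
    """Return gazetteer entries mentioned in a screenshot's OCR text."""
    return [place for place in GAZETTEER if place.lower() in ocr_text.lower()]

def review_queue(screenshots: dict[str, str]) -> dict[str, list[str]]:
    """Map each screenshot to detected places awaiting user confirmation."""
    return {name: places
            for name, text in screenshots.items()
            if (places := extract_places(text))}

shots = {"IMG_001.png": "Meet at Blue Bottle Coffee, 10am",
         "IMG_002.png": "Grocery list: milk, eggs"}
print(review_queue(shots))  # only the screenshot mentioning a place appears
```

A production system would of course use a far larger place index and fuzzier matching, but the review step — nothing is saved until the user confirms — mirrors what Google describes.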

  • Google tests Android’s new multitasking interface, including a tiny taskbar and new bubble mechanics

    Recently, several outlets leaked details of the new Android multitasking interface Google is testing, which contains two new mechanisms: the “Bubble Bar” being tested in Android 16, and a “tiny taskbar” being adapted for phones. The Bubble Bar appeared in Android 16 Beta 4, where Google for the first time opened bubbles to all applications, allowing any app to be collapsed into a small, stowable window. Users simply long-press an app’s home-screen icon and tap the “Bubble” option to shrink the current application into a movable circular badge pinned to the new Bubble Bar at the bottom of the screen, similar to a floating-window mechanism. Unlike traditional split-screen mode (which divides the screen in two to run two applications at once), bubbled applications are not displayed side by side; instead, each pops up as a floating window, so the current main application still occupies most of the screen. When collapsed, the Bubble Bar takes up only a small corner at the bottom of the screen; when expanded, it presents all pinned applications as a row of bubbles. Users can switch apps with one tap on a bubble, greatly reducing trips to the recent-apps list. Google is also testing distinct bubble categories for different types of applications (such as chat, notes, and shortcuts) and refining how bubbled apps persist in the background to ensure smooth switching and proper resource release. In addition, Google is adapting the bottom taskbar introduced with Android 12L on tablets to phones, under the “tiny taskbar” codename. The taskbar keeps the tablet version’s ability to stay persistent or hide transiently, and carries over quick access to pinned applications. In the latest update, Google fixed duplicate navigation handles (the horizontal line at the bottom) appearing after the taskbar is enabled, and added a recent-apps carousel similar to “Alt + Tab” on a computer: you can scroll left and right through the last six applications, though tapping to switch directly does not yet work, and the taskbar is cut off under the 5 × 5 home-screen grid, so the display is not yet polished. The Bubble Bar is expected to debut with the stable release of Android 16, while the tiny taskbar should arrive in a follow-up Android 16.x update; there are even rumors that Google will launch a DeX-like desktop mode in Android 17, further turning the phone into a portable productivity endpoint.

  • Google embeds AI Mode directly into Search

    Google is gearing up for the first public release of its AI Mode search tool. The company announced today that a “small percentage” of users in the US will see an AI Mode tab in Google Search in the coming weeks, letting users test the search-focused chatbot outside of Google’s experimental Labs environment. Unlike traditional search, which returns a long list of URLs for a query, Google’s AI Mode answers questions with AI-generated responses grounded in Google’s search index. It also differs from the AI Overviews feature already in Google Search, which embeds an AI-generated summary between the search box and the web results. AI Mode will live in its own tab, placed first in the search tab bar, to the left of the “All”, “Images”, “Videos”, and “Shopping” tabs. This is Google’s answer to search engines built on large language models, such as Perplexity and OpenAI’s ChatGPT search feature. These search-focused AI models are better at accessing web and real-time data than general chatbots like Gemini, which helps them deliver more relevant and up-to-date responses. If you are already familiar with chatbot interfaces, AI Mode will feel immediately familiar. Google has also removed the waitlist for Labs users in the US to test AI Mode, letting more people try the feature ahead of its wider rollout. AI Mode itself has been updated as well, including a new left-hand panel that saves previous searches, so users can quickly revisit topics or ask follow-up queries without restarting the conversation. Visual, clickable cards for products and places are also starting to appear in AI Mode, surfacing business information such as opening hours, reviews, and ratings, as well as pictures, inventory, shipping details, and real-time prices for products. Correction, May 1: Removed a line stating that users need to subscribe to Google One AI Premium to access AI Mode in Labs. This restriction has been lifted.

  • Google’s NotebookLM Android and iOS apps are now available for pre-order

    According to the app store listings, Google’s NotebookLM apps for Android and iOS are expected to launch officially on May 20 and are now open for pre-order.

    Since its launch in 2023, this AI-powered note-taking and research assistant has been accessible only on the desktop. Google is now preparing to extend the service to mobile devices.

    NotebookLM is designed to help students, professionals, and researchers better understand complex information, offering intelligent summaries and the ability to ask questions about documents and other materials. The research assistant can also generate AI podcasts called Audio Overviews, making complex topics easier to digest.

    According to the screenshots in the app listings, the dedicated app will let users create new notebooks and view ones they have already created. Users will also be able to upload new sources from their device and view the sources already uploaded to each notebook. In addition, users can listen to previously generated Audio Overviews on mobile.

    Beyond phones, the app will also be available on iPad and other tablets, where users can take advantage of the larger screen for multitasking.

    Users can pre-order the app on the App Store or pre-register on Google Play. For those who pre-order, the app will download automatically to their phone on May 20.

    Given that the apps are expected to go live on the first day of Google I/O, Google will likely share more details at its annual conference in a few weeks.

  • Google NotebookLM: how is its new Chinese podcast generation leading the trend in digital content creation?

    In 2025, with the rapid development of artificial intelligence, content creation is undergoing an unprecedented shift. The launch of Google NotebookLM marks another step forward in intelligent content generation: its ability to automatically generate Chinese-language podcasts from a user’s source material has attracted wide attention in the industry. The feature not only speeds up podcast production but also offers content creators more creative inspiration. As a global tech giant, Google has invested billions of dollars in artificial intelligence and machine learning. Its NotebookLM project aims to understand and process users’ text data through deep-learning models and generate logically coherent audio content. The latest version of NotebookLM was released in April 2025, supporting content generation in multiple languages, notably Chinese podcasting. Technically, NotebookLM adopts a Transformer architecture with over a hundred million parameters, supporting efficient natural-language processing and generation. Its text-to-speech module builds on Google’s WaveNet technology and can output natural, fluent speech with sound quality comparable to professionally produced podcasts. Users need only upload relevant text material, and NotebookLM can generate complete podcast content within minutes, greatly improving creative efficiency. Compared with other content generation tools on the market, NotebookLM has clear advantages in generation speed and voice quality. Traditional podcast production, for example, requires creators to spend hours or even days on recording and post-production, whereas NotebookLM can do comparable work in about five minutes.
According to industry analysis, NotebookLM’s generation efficiency is 200% higher than that of comparable products, saving content creators a great deal of time and effort. On the market side, podcasting as an emerging media form has seen its user base soar in recent years. According to Statista, podcast listeners worldwide reached 400 million in 2025, a number expected to keep growing. As more businesses and individuals take up podcast production, demand for efficient, intelligent content-generation tools is rising in step, and NotebookLM’s launch fits this trend by offering creators a more convenient solution. Professionals say NotebookLM not only improves the efficiency of content creation but may also have a profound impact on the podcast industry as a whole. Professor Li, a communication scholar at a well-known university, noted: “As artificial intelligence advances, future podcast production will rely more and more on intelligent tools, and content creators need to adapt to this change in time to stay competitive.” In terms of market prospects, NotebookLM’s technical breakthroughs give Google a competitive edge while injecting new vitality into the podcast ecosystem; through cooperation with other digital media platforms, NotebookLM could further expand its market share and become a leader in intelligent content generation. Nonetheless, the industry carries risks, such as keeping pace with technological change and protecting user privacy, which will need ongoing attention. Overall, the release of Google NotebookLM represents a substantial technical advance for content creation. Its Chinese podcast generation capability not only improves creators’ efficiency but also opens new opportunities for the industry. We encourage readers to share in the comments how you see this technology and how it will affect the way you create.

  • The new Google Chrome browser in 2025: AI features that reshape the user experience

    In an age of information overload, the browser’s importance as our window onto the web grows by the day. In 2025, Google launched the latest version of Chrome, adding eye-catching artificial intelligence (AI) capabilities, an innovation expected to greatly enhance users’ online experience. According to Statcounter, Chrome still holds 66.16% of the global browser market, making it the first choice for most users. So what changes does this new version of Chrome bring to keep it ahead in a fiercely competitive market? The latest Chrome significantly improves both performance and functionality. The browser uses AI to anticipate user needs, so frequently visited websites load in seconds, and through optimized data handling and intelligent recommendations users feel an unprecedented smoothness, whether watching videos, shopping online, or browsing social media. The new version also adds automatic spell check and translation, a welcome addition for international users. Beyond performance, the new Chrome makes many improvements to privacy and security: a new AI-driven tracking protection feature lets the browser effectively block advertisers attempting to collect user data, and an updated sandbox mechanism better protects users online, limiting the damage malware and viruses can do to the system. These changes not only strengthen users’ trust in the browser but also underline Google’s commitment to protecting user privacy. In practical use, the new Chrome performs well, especially in multi-tab browsing and in loading high-traffic sites; users report a smooth, lag-free experience in video calls, online work, and gaming. AI features also help users find the information they need faster and can even surface personalized content based on search habits. This improved experience makes Chrome stand out among its many competitors, particularly in work scenarios that demand heavy access and quick response. In the current market, Chrome’s updates have undoubtedly put pressure on rivals. Although browsers such as Firefox, Edge, and Safari continue to evolve, Chrome still leads the industry with its deep user base and sustained technical innovation: Firefox’s open-source character attracts users who value individuality and privacy, Edge is rising rapidly on the strength of its deep integration with Windows, and Safari has won loyal fans through seamless integration with Apple’s ecosystem. Chrome’s comprehensive optimization of the user experience, however, has further consolidated its market share. Looking ahead, how Chrome’s new features will affect the browser market remains worth watching. As AI advances, browser competition will only intensify, and users choosing a browser will increasingly favor products that combine excellent performance with security and privacy protection, a trend that may push other browsers to improve their technology, increase user stickiness, and ultimately drive the whole industry forward. In summary, the 2025 version of Chrome improves performance, privacy protection, and user experience, signaling its continued strength in the browser market. In responding to the diverse needs of different user bases, Chrome demonstrates flexibility and adaptability; when choosing, users should weigh not only a browser’s technical strength but also its emphasis on privacy and security. Whether you are setting out on a new journey of online exploration or looking for an efficient tool for daily work, the updated Chrome can be your right-hand assistant, helping you move unhindered through the ocean of the Internet.

  • Google unifies its global search domain as google.com: what will change in the search experience?

    Google recently announced a major adjustment aimed at optimizing the search experience for users worldwide. Under the plan, over the coming months Google will gradually direct all search users to its main domain, google.com, instead of URLs using country-code top-level domains (ccTLDs).
    Specifically, whether users previously reached Google Search through google.co.uk in the UK, google.com.br in Brazil, or country-specific domains elsewhere, they will be automatically redirected to the unified google.com domain. The change means Google Search users worldwide will see a more unified, simplified interface.
    Google says the core purpose of the adjustment is to simplify operations and ensure users everywhere receive consistent, high-quality search results. In the past, Google used country-specific top-level domains mainly to deliver localized results, i.e., content relevant to the country or region implied by the domain the user visited. Since 2017, however, Google has been able to adjust search results automatically based on the user’s geographic location, giving users a locally relevant search experience regardless of which domain they access.
    Google further emphasizes that although this update changes the domain users see in the browser address bar, it will not affect how the search function works. Google’s responsibilities and obligations under legal frameworks around the world also remain unchanged. The adjustment aims to give users a smoother, more consistent search experience without worrying about differences between domain names.
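The redirect behavior described above — any ccTLD search URL resolving to google.com while the path and query survive — can be sketched as a small URL rewrite. This is an illustrative helper, not Google’s server-side logic; the suffix list is a hypothetical sample of the many ccTLDs involved.

```python
# Sketch of the described behavior: a Google ccTLD search hostname is
# normalized onto google.com, preserving the path and query string.
# The helper and its suffix list are hypothetical illustrations.
from urllib.parse import urlsplit, urlunsplit

CCTLD_SUFFIXES = ("google.co.uk", "google.com.br", "google.de", "google.fr")

def unify_domain(url: str) -> str:
    """Rewrite a ccTLD Google URL onto google.com; leave other URLs alone."""
    parts = urlsplit(url)
    host = parts.netloc
    for suffix in CCTLD_SUFFIXES:
        if host == suffix or host.endswith("." + suffix):
            host = host[: -len(suffix)] + "google.com"  # keep any "www." prefix
            break
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))

print(unify_domain("https://www.google.co.uk/search?q=weather"))
# -> https://www.google.com/search?q=weather
```

Because the path and query pass through untouched, a bookmarked regional search keeps working after the redirect, which matches Google’s claim that the search function itself is unaffected.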

  • Google announces a big move!

    Google’s flagship AI products have received a major update.

    On Wednesday, Alphabet’s Google announced that it is testing a new AI search feature called “AI Mode”. The feature lets users ask more complex, multi-part questions and integrates the results of multiple queries to provide more coherent, in-depth answers.


    Unlike traditional keyword search, AI Mode can run multiple related searches simultaneously in the background, anticipate subtopics the user may be interested in, and generate a comprehensive, integrated answer. According to Robby Stein, vice president of product for Google Search, the feature runs in a separate tab outside the main search page and is especially suited to complex queries that exceed the limits of traditional search engines.
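The fan-out idea Stein describes — several related sub-searches issued concurrently, then merged into one answer — can be sketched as follows. This is a toy illustration under stated assumptions: the sub-query decomposition, the `search` stub, and the join-based synthesis are all hypothetical stand-ins, not Google’s pipeline.

```python
# Rough sketch of query fan-out: decompose a multi-part question into
# sub-queries, run them in parallel, and merge the results into one
# answer. The decomposition and search backend are hypothetical stubs.
from concurrent.futures import ThreadPoolExecutor

def generate_subqueries(question: str) -> list[str]:
    """Hypothetical decomposition: split a question on 'and'."""
    return [part.strip() for part in question.split(" and ")]

def search(query: str) -> str:
    """Stand-in for a call to a real search backend."""
    return f"results for '{query}'"

def ai_mode_answer(question: str) -> str:
    subqueries = generate_subqueries(question)
    with ThreadPoolExecutor() as pool:   # sub-searches run concurrently
        results = list(pool.map(search, subqueries))
    return " | ".join(results)           # stand-in for LLM answer synthesis

print(ai_mode_answer("best hiking trails and weather this weekend"))
```

In a real system the decomposition and synthesis would themselves be model calls, but the structure — parallel retrieval feeding a single generated response — is the part the article is describing.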

    AI Mode is built on Gemini 2.0, Google’s latest flagship AI model, and can process text, images, and video. Early test data shows that queries in AI Mode are twice as long as ordinary searches.


    Notably, AI Mode will first open to paid subscribers of Google’s AI plans. The move marks a subtle shift in Google’s search business model, since Google Search has long been provided free of charge.

    The launch of AI Mode comes against the backdrop of Google’s growing investment in AI search. Last year, Google brought generative AI into its search engine with the “AI Overviews” feature, which answers some user queries directly at the top of the search results. Analysts see the AI Mode test as an important step for Google to stay competitive in the search engine market and meet the challenge from newcomers such as OpenAI.

    However, Google’s deeper push into AI search has also raised concerns among online content creators. Many websites depend on Google Search for traffic, and AI answering questions directly may reduce users’ need to click through to the original pages, hurting site traffic.

    On Wednesday, Google’s stock price fell by nearly 1% intraday before recovering to close up more than 1.5%.

    Since the beginning of 2025, artificial intelligence (AI) technology has continued to develop rapidly. Technology companies around the world are racing to release the latest versions of their AI models, which answer faster, have stronger multimodal capabilities, and offer enhanced reasoning and generation, bringing users a more intelligent experience and injecting new momentum into every industry.

    On the evening of February 17, local time, xAI, the AI company founded by American entrepreneur Elon Musk, officially released its latest model, Grok 3, which adds advanced capabilities including image analysis and question answering to support features on the social media platform X. Musk said Grok 3 was trained in a large data center with about 200,000 GPUs, giving it ten times the compute of the previous-generation Grok 2.

    On February 5, Google announced optimized versions of several “Gemini 2.0” series models, including “Gemini 2.0 Flash” along with cost-efficient and experimental variants, all offering multimodal input and text output. According to Google’s official blog, the update further strengthens the Gemini 2.0 series in multimodal reasoning, coding performance, and handling complex prompts, while improving cost-effectiveness.

    OpenAI launched a research preview of its GPT-4.5 model on February 28, claiming more natural interaction, a broader knowledge base, better understanding of user intent, and higher “emotional intelligence”.

    OpenAI announced today that it has rolled out GPT-4.5 to ChatGPT Plus users earlier than expected.

    Daily Economic News, compiled from public information