Microsoft’s latest financial report shows that despite global economic uncertainty, its cloud computing business continues to grow strongly, and the stock price soared more than 8% after hours. Microsoft performed well in the third quarter of fiscal 2025, with all key indicators exceeding market expectations. Thanks to the strong growth of the Azure cloud business, Microsoft achieved revenue of $70.07 billion, up 13% year-on-year, and net profit of $25.80 billion, up 18% year-on-year. Azure cloud services revenue rose 33%, far exceeding the market expectation of 29%, making Azure the main driver of Microsoft’s growth. Microsoft CEO Satya Nadella said the company will continue to benefit from sustained demand for artificial intelligence and cloud computing. Although global trade tensions and economic uncertainty have weighed on the market, Microsoft has managed to weather the adverse effects by continuously optimizing data center construction and increasing average revenue per user.
Category: Techno-life
-
Google NotebookLM: How is its new Chinese podcast generation technology leading the trend in digital content creation?
In 2025, with the rapid development of artificial intelligence technology, the field of content creation is undergoing an unprecedented change. The launch of Google NotebookLM marks another innovation in intelligent content generation, and its ability to automatically generate Chinese podcasts from users’ uploaded materials has attracted widespread attention in the industry. The new feature not only improves the efficiency of podcast production but also provides content creators with more creative inspiration. As a global tech giant, Google has invested billions of dollars in artificial intelligence and machine learning. Its NotebookLM project aims to understand and process users’ text data through deep-learning algorithms and generate logically coherent audio content. The latest version of NotebookLM was officially released in April 2025 and supports content generation in multiple languages, with notable progress in Chinese podcasting. In terms of technical specifications, NotebookLM adopts the latest Transformer architecture with more than 100 million parameters, supporting efficient natural language processing and generation. Its text-to-speech module is based on Google’s self-developed WaveNet technology, producing natural, fluent speech with sound quality comparable to professional podcast production. Users only need to upload relevant text materials, and NotebookLM can generate complete podcast content within minutes, greatly improving creative efficiency. Compared with other content generation tools on the market, NotebookLM has significant advantages in generation speed and voice quality. For example, traditional podcast production requires creators to invest hours or even days in recording and post-production, while NotebookLM can do the same job in about 5 minutes.
According to industry analysis, NotebookLM’s generation efficiency is 200% higher than that of comparable products, which saves content creators considerable time and effort. In terms of market trends, podcasting, as an emerging media form, has seen its user numbers soar in recent years. According to Statista, podcast users worldwide reached 400 million in 2025, and the number is expected to keep growing. As more businesses and individuals join the ranks of podcast producers, demand for efficient, intelligent content generation tools is also rising. The launch of NotebookLM fits this trend, providing creators with a more convenient solution. Professionals say NotebookLM not only improves the efficiency of content creation but may also have a profound impact on the entire podcast industry. Professor Li, a communication expert at a well-known university, pointed out: “With the continuous advancement of artificial intelligence technology, future podcast production will rely more on intelligent tools, and content creators need to adapt to this change in time to remain competitive.” In terms of market prospects, NotebookLM’s technological breakthroughs not only give Google a competitive advantage but also inject new vitality into the podcast ecosystem. Through cooperation with other digital media platforms, NotebookLM is expected to further expand its market share and become a leader in intelligent content generation. Nonetheless, the industry carries certain risks, such as keeping pace with technological change and protecting user privacy, which will require continued attention. Overall, the release of Google NotebookLM marks a substantial advance for the field of content creation.
Its Chinese podcast generation capability not only improves creators’ work efficiency but also brings new opportunities for the industry’s development. We encourage readers to share in the comments how they view this technology and how it might affect the way they create.
-
Intel has poured $90 billion into “four years, five nodes”! Intel: 18A mass production this year
Fast Technology, April 30 news: at today’s foundry business conference, Intel announced its latest process roadmap and shared the latest progress of its foundry business. Data disclosed by Intel show that from the “four years, five process nodes” plan proposed in 2021 through 2024, Intel’s global capital expenditure reached $90 billion, of which about $18 billion went to technology research and development and $37 billion to fab equipment. Intel said the 18A process node has entered the risk-production pilot stage and is expected to reach mass production this year, while the next-generation 14A process after 18A will enter risk production around 2027. 14A is expected to bring a 15%-20% energy-efficiency improvement and a 1.3-fold increase in chip density, and several customers plan to tape out 14A test chips. Compared with the PowerVia backside power-delivery technology used in Intel 18A, Intel 14A will use PowerDirect direct-contact power delivery. Intel also announced two variants of the 18A process, 18A-P and 18A-PT. Intel 18A-P will bring improved performance; early test wafers are currently in production and are compatible with Intel 18A design rules. Intel 18A-PT is a further evolution that builds on the performance and energy-efficiency advances of Intel 18A-P. It can be connected to a top chip through Foveros Direct 3D advanced packaging technology, with a hybrid-bond interconnect pitch of less than 5 microns. Comparable hybrid bonding is already used in production at TSMC and is best known to consumers through AMD’s 3D V-Cache products.
In terms of mature nodes, the first products based on Intel Foundry’s 16-nanometer process have been taped out and manufactured in the fab, and the 12-nanometer node and its evolved version, being developed in cooperation with UMC, are in negotiations with major customers.
-
Cisco Systems – Cisco showcases next-generation AI-based security innovations at RSAC 2025 to counter increasingly sophisticated threats
Cisco Systems, Inc. today unveiled a series of new security innovations at the RSAC 2025 conference in San Francisco, designed to help enterprises combat AI-driven threats and improve the security of AI applications. These releases address what the company describes as an “increasingly complex threat landscape,” further complicated by a growing talent shortage that has created a need for machine-scale security and responsiveness. According to a forthcoming report from Cisco, 86% of enterprises around the world have experienced AI-related security incidents in the past 12 months, suggesting they underestimate the complexity of securing AI. Cisco’s announcement is part of its commitment to developing capabilities for customers through ecosystem partnerships and contributing to the wider community through open-source safety classifiers and tools. “Cisco will continue its mission to protect AI while leveraging AI for security through novel open-source models and tools, new AI agents, and advances in the Internet of Things, integrating the full range of services from the Cisco Security Cloud,” said Jeetu Patel, executive vice president and chief product officer at Cisco. “Together, these innovations will level the playing field and deliver AI innovations that make all enterprises more secure.” One of the most notable highlights of the announcement was improved threat detection and response through Cisco XDR and Splunk Security. The new Cisco XDR upgrade reduces the noise facing security teams primarily by correlating telemetry from network, endpoint, cloud, and email systems.
A new feature called Instant Attack Verification leverages agentic AI to automatically create and execute customized investigation plans, enabling teams to quickly identify, confirm, and respond to real threats. These improvements are designed to help organizations respond to security incidents with greater speed and confidence. Complementing these upgrades, Cisco is also releasing a new automated XDR Forensics feature that enables deeper visibility into endpoint activity, improving the accuracy of investigations by revealing hidden patterns of malicious behavior. In addition, a visualization tool called XDR Storyboard lets security teams untangle complex attack scenarios in seconds, accelerating response. The RSAC announcements also include new features for Splunk, the security company Cisco acquired in March 2024. Splunk Enterprise Security 8.1, due to be released in June, will bring better visualization and integrated workflows, while Splunk SOAR 6.4, now available, automates threat detection and response. With Cisco XDR, organizations will be able to build a security operations center that leverages AI to enhance productivity and resilience. On the partnership front, building on its AI Defense initiative, Cisco today also announced a deepening collaboration with ServiceNow Inc. to streamline AI governance and risk management. The first integration will combine Cisco AI Defense with ServiceNow’s SecOps platform to give enterprises a more unified view of AI risk and stronger mechanisms for enforcing security policies in AI deployments. The partnership reflects the industry’s ongoing move toward an integrated cybersecurity ecosystem to address the challenges of the AI era. Another announcement today is the Cisco Foundation AI initiative, which follows Cisco’s acquisition of Robust Intelligence Inc. in August 2024.
The newly formed Foundation AI team has launched the first open-source inference model specifically designed to enhance security applications, and also plans to release cybersecurity benchmarks and provide developers with foundational components for building secure AI solutions. In response to vulnerabilities in the AI supply chain, Cisco has introduced a suite of AI supply-chain risk-management security controls. The new tools are designed to detect and block malicious AI model files, flag high-risk open-source licenses, and enforce policies before unauthorized AI models are deployed into production environments. Finally, Cisco announced the expansion of its Industrial Threat Defense solution to better safeguard operational technology environments. The new integration of Cisco Cyber Vision and Secure Firewall brings stronger vulnerability management and automated network segmentation to industrial networks. These updates help unify IT and operational technology visibility in the Security Operations Center, enabling enterprises to detect and mitigate threats across their entire digital footprint.
-
Comcast-Comcast fell 5.01% to $33.93/share, with a total market capitalization of $128.29 billion
On April 5, Comcast (CMCSA) fell 5.01% during the session. As of 01:13, it traded at $33.93 per share, with a turnover of $594 million and a total market value of $128.29 billion. Financial data show that as of December 31, 2024, Comcast’s total revenue was $123.731 billion, up 1.78% year-on-year; net profit attributable to the parent was $16.192 billion, up 5.22% year-on-year. Event reminder: On April 24, Comcast will disclose its fiscal 2025 first-quarter results before the market opens (the data come from the official Nasdaq website; the estimated disclosure date is US local time, and the actual date is subject to the company’s announcement). Comcast Corporation (NASDAQ: CMCSA) is a global media and technology company dedicated to connecting people to the moments that matter. It focuses primarily on connectivity, aggregation, and streaming, with 57 million customer relationships across the United States and Europe. It delivers broadband, wireless, and video services through its Xfinity, Comcast Business, and Sky brands; creates, distributes, and streams leading entertainment, sports, and news through Universal Entertainment Group, Universal Studios Group, Sky Studios, the NBC and Telemundo broadcast networks, multiple cable networks, Peacock, NBCUniversal News Group, NBC Sports, Sky News, and Sky Sports; and delivers memorable experiences at Universal Parks and Resorts in the United States and Asia.
-
Apple experts petition Apple to update AirPort router firmware to fix AirPlay high-risk vulnerabilities
IT Home, May 1 news: security expert Gary Longsine has launched a petition on change.org calling on Apple to update the AirPort router firmware and fix the AirBorne vulnerability, warning that these devices will otherwise have to be retired due to security risks. IT Home published a blog post yesterday briefly introducing the “AirBorne” vulnerability, which exists in Apple’s AirPlay feature. Attackers can exploit it to take control of AirPlay-enabled devices and spread malware to other connected devices, making crowded areas such as public Wi-Fi hotspots and business venues high-risk zones. Apple discontinued the AirPort series as early as 2018, and the last firmware update dates to June 2019. Longsine warned that if no action is taken, these routers will quickly lose their practical value and be forced into retirement. He pointed out that the AirPort series is still widely used thanks to its ease of use, performance, and durability; by contrast, other modern devices can patch vulnerabilities through updates, while AirPort may be ignored forever. Apple has released patches for older devices in the past, and if the problem is serious enough, the latest models of AirPort Express and AirPort Extreme may still be able to receive updates. Longsine believes that discarding these devices not only creates waste but also exposes unsuspecting users to the risk of attack.
-
Amazon-Amazon Cloud Technology Exclusively Launches Writer’s Next-Generation Adaptive Inference Model Palmyra X5
Palmyra X5, a model developed to efficiently drive multi-step agents, is now available exclusively through Writer and Amazon Bedrock in fully managed form. Amazon Cloud Technology has announced the official launch of Palmyra X5, a new adaptive inference model with a million-token context window, on Amazon Bedrock. The model, published by Writer, a leader in enterprise-grade generative AI, is one of the first to offer such a large context window on Amazon Bedrock. It is optimized for speed and cost efficiency, enabling customers to build advanced multi-step AI agents that can precisely process large amounts of enterprise data, fundamentally changing how inference is done. Amazon Cloud Technology is now the first and only cloud provider to offer Writer’s fully hosted, serverless models, including the latest Palmyra X5 and Palmyra X4, with more models coming soon. As generative AI technology accelerates, customers need a wide selection of models to precisely match their business needs. The launch of the Writer models in Amazon Bedrock further enriches Amazon Bedrock’s extensive selection of fully managed models from leading AI companies, helping customers more easily and securely build and scale generative AI applications to drive business transformation and innovation. Palmyra X5 is one of the first models to offer a million-token context window on Amazon Bedrock, giving Amazon Cloud Technology customers more choice. (A context window is the amount of information a model can process and “remember” per request, measured in tokens, the smallest text units the model processes; it can be thought of as the model’s “short-term memory.”) With a context window of this size, Palmyra can accurately process 1,500 pages of content (equivalent to 6 books).
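As a back-of-the-envelope check on the “1,500 pages” figure, the arithmetic works out: this is our own illustrative sketch, not numbers from Writer or Amazon, and the tokens-per-page value is an assumption that varies with the text and the tokenizer.

```python
# Rough capacity check for a 1,000,000-token context window.
# TOKENS_PER_PAGE is an assumed average for a dense printed page;
# real token counts depend on the text and tokenizer, so treat the
# result as an order-of-magnitude estimate only.
CONTEXT_WINDOW_TOKENS = 1_000_000
TOKENS_PER_PAGE = 650  # assumption, not a vendor figure

pages = CONTEXT_WINDOW_TOKENS // TOKENS_PER_PAGE
print(pages)  # → 1538, consistent with the quoted ~1,500 pages
```

Denser or sparser text shifts the estimate, which is why vendors usually quote round figures such as “about 1,500 pages.”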
The model is also one of the first enterprise-grade adaptive inference models in the industry, combining advanced large-language-model capabilities with extended memory and processing capacity. Enterprises can now handle a wide range of tasks within their budgets, including financial reporting, legal contract analysis, medical record consolidation, customer feedback mining, and more. Beyond reasoning, Palmyra X5 offers a number of powerful features, including support for agents that interact with systems, advanced code generation and deployment, and support for more than 30 languages. To describe Palmyra X5 in anthropomorphic terms, it has superpowers: it can read a million words in 22 seconds and generate actionable insights on the fly. Not only can it memorize an entire 200-page strategy document, it can also understand how that document relates to yesterday’s client meeting and last quarter’s financial data. Faced with complex problems, it advances solutions systematically and incrementally, articulating a clear line of thinking throughout, whether helping distill common themes from massive customer feedback or troubleshooting technical glitches. Waseem AlShikh, chief technology officer and co-founder of Writer, said: “We chose Amazon Cloud Technology as the first mainstream cloud service provider to offer Writer’s fully managed models because of its unparalleled security and its strong fit with our vision to transform how AI is used in enterprises and drive innovative growth. Palmyra X5 is Writer’s most advanced model to date, capable of processing massive amounts of enterprise data at high speed, which is critical for scaling multi-agent systems.
With Amazon Bedrock, we are bringing these powerful capabilities to more businesses around the world, helping customers deploy in a secure, scalable environment.” Atul Deo, Director of Amazon Bedrock at Amazon Cloud Technology, said: “Building on our deep strategic partnership with Writer, we are excited to offer Writer’s Palmyra family of models through Amazon Bedrock, empowering enterprises to usher in a new era of intelligent-agent innovation. Palmyra X5 delivers superior performance over a long context window with enterprise-grade reliability and speed. Seamlessly connected to Writer, Palmyra X5 will enable developers and enterprises to leverage the security, scalability, and performance of Amazon Cloud Technology to build and scale AI agents that revolutionize the inference paradigm for massive enterprise data.” Data points: Palmyra X5 is one of the most efficient large-context large language models, optimized for speed and cost. It can process a full million-token prompt in about 22 seconds, and a single function-call response takes only about 0.3 seconds. In the latest LongBench v2 evaluation, Palmyra X5 demonstrated class-leading price/performance with an average score of 53%; enterprises can achieve near-top accuracy while significantly reducing the cost per million tokens, and can run large numbers of agent and long-context processing tasks within controlled budgets. It supports more than 30 languages, providing true multilingual processing power to enterprises worldwide. Priced at $0.60 per million input tokens and $6 per million output tokens, it is one of the most cost-effective large-context large language models available. In the BigCodeBench (full version, instruct version) evaluation, Palmyra X5 scored 48.7, placing it among the top models and demonstrating its ability to solve practical, challenging, complex programming tasks.
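To make the quoted per-token pricing concrete, here is a minimal cost sketch. Only the $0.60 and $6.00 per-million-token rates come from the article; the request sizes and the `request_cost` helper are our own hypothetical illustration, not an API from Writer or Amazon Bedrock.

```python
# Illustrative cost estimator for the quoted Palmyra X5 pricing:
# $0.60 per million input tokens, $6.00 per million output tokens.
INPUT_PRICE_PER_M = 0.60   # USD per 1,000,000 input tokens (from the article)
OUTPUT_PRICE_PER_M = 6.00  # USD per 1,000,000 output tokens (from the article)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the quoted rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A hypothetical full million-token prompt with a 2,000-token answer:
cost = request_cost(1_000_000, 2_000)
print(f"${cost:.3f}")  # → $0.612
```

The asymmetry in the rates means long-context workloads are dominated by input cost: even a maximal prompt here costs about sixty cents, while output tokens only matter for generation-heavy tasks.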
While generative AI is changing the way we create, analyze, and interact with information, agentic AI will fundamentally reshape the nature of work. This new frontier goes beyond content generation and insight extraction to AI agents that can autonomously plan, execute, and adjust complex action sequences. With Palmyra X5 on Amazon Bedrock, Amazon Cloud Technology customers can use Writer’s models to build and scale AI agents securely and privately, without managing the underlying infrastructure. For companies across industries, the most exciting aspect of Palmyra X5 is the ability to build and deploy more sophisticated AI agents that can process vast amounts of data and interact with other agents, large language models, and external system tools. Writer provides accurate and fully autonomous models that avoid post-training quantization and knowledge distillation, ensuring that the behavior validated today remains consistent tomorrow. Palmyra X5 builds on this commitment: it maintains strict backward compatibility to spare teams repeated re-tuning, publishes a publicly available enterprise technology roadmap that customers can participate in, and optimizes inference latency to enable near-instantaneous responses for large-language-model interaction and retrieval-augmented generation (RAG), even at the scale of millions of tokens. Writer announced that thanks to its innovative Transformer design (an architecture that supports parallel rather than sequential processing of input data) and hybrid attention mechanism (which lets the model attend to information in multiple ways simultaneously, ensuring both efficiency and effectiveness), all of its future large language models will be released with one million tokens as the minimum context window size.
This means that enterprises can develop long-term strategies based on continuously expanding AI capabilities, without being limited by context-window size constraints. Visit the Amazon Cloud Technology News Blog for details on Palmyra X5, including the model’s deployment approach and potential use cases in Amazon Bedrock, and check out the Writer product page in Amazon Bedrock. You can also open the Amazon Bedrock console to start using Palmyra X5 and Palmyra X4. About Amazon Cloud Technology: Since 2006, Amazon Web Services has been known for its technological innovation, rich service offerings, and wide range of applications. Amazon Cloud Technology has continuously expanded its portfolio of services to support almost any workload in the cloud, and currently offers more than 240 full-featured services covering compute, storage, database, networking, data analytics, machine learning and artificial intelligence, the Internet of Things, mobile, security, hybrid cloud, media, and application development, deployment, and management. Its infrastructure spans 114 Availability Zones in 36 geographic regions, with plans announced for 4 new regions and 12 new Availability Zones, including New Zealand and Saudi Arabia. Millions of customers around the world, including fast-growing startups, large enterprises, and leading government entities, rely on Amazon Cloud Technology to support their infrastructure, improve agility, and reduce costs.
-
Facebook-Meta admits that Facebook has too much spam and will launch a new round of crackdown on spam
Meta has announced that Facebook will tighten content quality controls, targeting accounts that attract users with long, irrelevant headlines, reducing their reach and cutting off their monetization, in the hope of improving the browsing experience while suppressing fake engagement and fake propaganda networks. However, Meta has not responded clearly to the problem of AI-generated spam, which has drawn outside attention. In its latest statement, Meta admitted that the Facebook feed may not always surface fresh, attractive posts, so it plans to adjust the algorithm to reduce the influence of certain content producers. These creators often post long, distracting, or irrelevant messages, and Meta said it would no longer allow such accounts to earn revenue, in order to raise the overall quality of content. At the same time, Meta has stepped up its campaign against coordinated fake-engagement networks, including hiding comments from related accounts and deleting pages aimed at gaming reach. The company is also testing a new feature that lets users anonymously reduce the visibility of false or useless comments, in an attempt to better manage the quality of social media interactions. Meta’s move comes as it revamps Facebook in an attempt to attract more young adult users. Mark Zuckerberg previously introduced the platform’s return to an “original Facebook” design and relaunched a friends-only content tab, hoping to answer young people’s desire for genuine social networking. However, the failure of these measures to address the widely watched issue of AI-generated spam calls the effectiveness of the crackdown into question.
In the past year, a large number of nonsensical, even absurd AI-generated photos have flooded Facebook, including the notorious “Shrimp Jesus,” content designed specifically to attract user engagement and earn advertising revenue. According to a 404 Media investigation, such content is often amplified by Facebook’s algorithm, which promotes the spread of spam. Even though Meta has publicly promised to improve content quality, it has not yet dealt head-on with the spread of AI-generated spam. Beyond AI-generated images, Facebook feeds are also full of screenshots recycling old Reddit posts and outdated celebrity news that has nothing to do with users’ interests, seriously degrading the user experience. Meta’s most-popular-content reports often feature cookie-cutter engagement bait, such as asking users to leave a comment or solve a simple math problem. Although such posts may not fall within the scope of this crackdown, users in general have little interest in them. Meta emphasizes that in addition to restricting accounts that post bad content, the company will increase the exposure of original creators and take action against accounts that steal others’ work. However, the barrier to creating AI spam is far lower than that for producing high-quality original content, so Facebook still has a long way to go in cleaning up spam. Data sources: Fortune, Engadget
-
Abandoning the “smiley face” logo, Pepsi fully revamps its brand in 120 countries
One year after first updating its brand identity in the North American market, PepsiCo has finally brought the change to the Chinese market. (more…)
-
Microsoft’s net profit rose 18% in the last fiscal quarter: Cloud business growth improved quarter-on-quarter, and performance guidance was higher than expected
Microsoft announced better-than-expected results driven by its cloud business and issued strong guidance that surprised investors. On April 30, local time, Microsoft announced results for the third quarter of fiscal 2025, ended March 31, 2025. Third-quarter revenue was $70.066 billion, above the market expectation of $68.42 billion and up 12% year-on-year; net profit rose 18% year-on-year to $25.824 billion; diluted earnings per share rose 18% year-on-year to $3.46, above the market expectation of $3.22. For guidance, Microsoft expects fiscal fourth-quarter revenue of $73.15 billion to $74.25 billion, above the market expectation of $72.26 billion, and Azure cloud growth of 34% to 35%, above the market expectation of 31.5%. Satya Nadella, chairman and CEO of Microsoft, said: “To expand output, reduce costs and accelerate growth, cloud and AI are essential inputs for every enterprise. From AI infrastructure and platforms to applications, we are innovating across the board to serve our customers.” (Summary of Microsoft’s fiscal third-quarter results. Source: Microsoft’s financial report.) On April 30, the day of the earnings report, Microsoft (Nasdaq: MSFT) shares rose 0.31% to close at $395.26 per share, for a total market value of $2.94 trillion; after the report was released, the stock rose more than 9% after hours. At present, the market is closely watching the impact of US tariffs on major companies. Compared with tech giants such as Apple and Amazon, the impact of tariffs on Microsoft is relatively small, because its products and services are less dependent on trade. However, Microsoft’s enterprise customers may still become more cautious about spending on cloud services, software, and AI.
Asked on a post-earnings call how the company would deal with a potential recession, Nadella said it would focus on helping customers: “Because of the efficiency of cloud services, the reach of the company’s business landscape and the unique technology stack-level advantages from SaaS (software-as-a-service) applications to infrastructure, we feel Microsoft can play a big role [in helping customers]… When we face any inflationary pressure or growth pressure to do more with less, software is the most malleable resource.” On the call, Microsoft noted that its capital expenditures reached $21.40 billion in the quarter, including equipment acquired through finance leases. Microsoft previously said it expected capital expenditures to exceed $80 billion in fiscal 2025. Under “other expenses,” Microsoft recorded $623 million, partly reflecting its investment in OpenAI; the figure was $2.29 billion in the previous quarter. By business, Microsoft’s closely watched Intelligent Cloud unit reported revenue of $26.751 billion for the quarter, up 21% from the same period last year and up from 19% growth in the previous fiscal quarter. Azure cloud revenue grew 33% year-over-year, up from 31% in the previous fiscal quarter, and Microsoft said 16 percentage points of that growth were driven by demand for AI, compared with 13 points in the previous fiscal quarter. Amy Hood, Microsoft’s chief financial officer, said on the call that the tight supply-demand situation in Microsoft’s non-AI cloud business improved in the quarter: “It’s slightly better; we still have some work to do in terms of scaling, and we’re encouraged by the progress we’ve made.” She also noted that in the AI space, Microsoft’s infrastructure capacity has come online faster than expected. Nadella announced on the call that more than 15 million people now use GitHub Copilot, Microsoft’s AI assistant for developers, roughly four times the figure from the same period last year.
Microsoft’s Productivity and Business Processes division saw revenue increase 10% year-on-year to $29.944 billion in the quarter, beating the market expectation of $29.57 billion. Microsoft 365 commercial products and cloud services revenue grew 11% year-over-year, with commercial cloud services revenue up 12%; Microsoft 365 consumer products and cloud services revenue grew 10% year-over-year, with consumer cloud services revenue up 10%; LinkedIn revenue increased 7% year-over-year; and Dynamics products and cloud services revenue increased 11% year-over-year. In addition, revenue from the More Personal Computing segment, which includes gaming, reached $13.371 billion, up 6% year-over-year. Within that, Xbox content and services revenue increased 8% year-on-year; Windows OEM operating-system licensing and device revenue increased 3% year-on-year; and search and news advertising revenue excluding traffic acquisition costs increased 21% year-on-year. Microsoft said its inventory levels remained high due to tariff uncertainty. Nadella also pointed out that with support for Windows 10 ending in October this year, deployment of Windows 11 among business customers has increased by about 75%.