Category: Intel

Intel

  • Intel Nova Lake upgrades its hybrid graphics: third-generation Xe3 and fourth-generation Xe4 graphics engines

    Fast Technology reported on June 4 that Intel is planning its next-generation desktop processor, Nova Lake-S. Little is known about the specifics, only that it uses the new LGA1954 socket, that both the performance-core and efficiency-core architectures will be upgraded, and that the Xe integrated graphics will be upgraded as well. (more…)

  • Intel’s next-generation Core Nova Lake gets a new socket! But the size stays the same and coolers remain compatible

    Fast Technology reported on May 30: for processors, changing sockets is practically a way of life!

    We know that Intel’s next-generation desktop Core Nova Lake will move to the new LGA1954 socket, so motherboards will have to be replaced, but at least heatsinks will not. (more…)

  • Driver code leak! Intel’s four new GPU IDs unveiled, including the high-end Arc B770

    Fast Technology reported on May 29 that Intel recently added four Battlemage-series GPU device IDs to the Linux Mesa graphics driver, suggesting that Intel may be preparing to launch new GPU products. (more…)

  • Intel adds new Xeon 6 processors: paired with NVIDIA AI GPUs

    According to Kuai Technology on May 23, the Intel Xeon 6 series is a huge family: it has successively released the 6700E, 6900P, 6700P/6500P/6300P, 6000P, and other lines. Now three special models have been added, designed specifically to host GPU-based AI systems.

    All three new models use a P-core (performance core) design and support two major turbo technologies, PCT (Priority Core Turbo) and SST-TF (Speed Select Technology - Turbo Frequency). They offer up to 128 cores with customizable core frequencies, boosting GPU performance under heavy AI workloads.

    Among them, PCT can dynamically run high-priority cores at a higher turbo frequency while low-priority cores stay at base frequency, optimizing the allocation of CPU resources.

    This matters greatly for AI workloads that involve sequential or serial processing: it not only speeds the feeding of data to the GPU but also significantly improves whole-system efficiency.
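    To make the idea concrete, here is a toy sketch (our illustration with made-up frequencies, not Intel's actual algorithm) of how a priority-based turbo policy might hand out frequencies:

```python
# A toy model of the idea behind Priority Core Turbo: given a limited
# number of turbo slots, high-priority cores get turbo frequency and
# the rest hold base frequency.

BASE_FREQ_GHZ = 2.0   # hypothetical base frequency
TURBO_FREQ_GHZ = 3.8  # hypothetical turbo frequency

def assign_frequencies(cores, turbo_slots):
    """cores: list of (core_id, priority) pairs.
    turbo_slots: how many cores may run at turbo at once."""
    ranked = sorted(cores, key=lambda c: c[1], reverse=True)
    return {
        core_id: (TURBO_FREQ_GHZ if rank < turbo_slots else BASE_FREQ_GHZ)
        for rank, (core_id, _prio) in enumerate(ranked)
    }

# Cores 0 and 1 run the GPU feeder threads, so they get high priority.
cores = [(0, 10), (1, 10)] + [(i, 1) for i in range(2, 8)]
freqs = assign_frequencies(cores, turbo_slots=2)
print(freqs[0], freqs[7])  # 3.8 2.0
```

    The point of the policy is visible even in this sketch: the serial feeder threads run faster while the power budget is protected by keeping the remaining cores at base frequency.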

    As the host processor, the Xeon 6776P is already deployed in NVIDIA’s latest-generation AI acceleration system, the DGX B300, where it plays a crucial role in managing, coordinating, and supporting the AI accelerators.

    At the same time, the three new processors are designed to maximize uptime, with better stability and easier maintenance, minimizing the risk of business interruption.

    Xeon 6 series processors also support MRDIMM and CXL, providing larger memory capacity and higher memory bandwidth. The PCIe lane count is up 20% over the previous generation, accelerating data transfer for I/O-intensive workloads, and the chips support AMX advanced matrix extensions, including the FP16 precision that AI workloads require.

  • Intel’s official website mentions the Arc B750

    Kuai Technology reported on May 20 that rumors have pointed to the Arc B770 as Intel’s next Battlemage graphics card, but recently the words “Arc B750” appeared on Intel’s Japanese official website, sparking outside speculation about whether Intel plans to launch a new mid-range graphics card.

    User @Haze2K1 found that a page on Intel’s Japanese official website mentioned the “Arc B750”, but the page’s URL actually pointed to the Arc B570.

    Clicking the specifications tab also redirects to the Arc B570’s specification page, and the mention appears only on Intel’s Japanese site, suggesting that “B750” may simply be a typo for “B570”.

    However, this does not mean Intel has abandoned the Battlemage line. Intel has just launched the Arc Pro B60 graphics card based on BMG-G21, and the Arc B580 is reportedly getting a 24GB version.

    If Intel decides to add new graphics cards to its lineup, they may be the Arc B750 or Arc B770, or even both; these would be the successors to the Arc A750 and A770.

    According to leaked information, the rumored B770 may use the BMG-G31 GPU and carry 32 Xe2 cores, a significant step up from the B580’s 20 cores.

  • Intel B60 takes on the NVIDIA A1000: the battle for AI chip supremacy heats up again

    When the Intel Arc Pro B60 with 24GB of VRAM meets the NVIDIA RTX A1000 with 8GB, how many unseen industry battles hide behind this seemingly lopsided duel? Can Intel CEO Chen Liwu’s declaration that “to return to the peak, we need to tell the truth” shake NVIDIA’s dominance of the AI chip market?

    Performance duel: The technology breakthrough behind the 3x graphics memory gap

    Under the spotlight of Computex 2025, Intel showed industry-shocking numbers: the Arc Pro B60 carries 24GB of GDDR6 VRAM, three times that of the NVIDIA A1000. The gap shows up not just in hardware specs but in practical use: when running large models such as Llama 3, the B60’s generation speed leads its rival by up to 2.7x.

    Even more noteworthy is the multi-card interconnect scheme. Through the Project Battlematrix platform, eight B60 cards combined can provide 192GB of VRAM, enough to host a mid-scale AI model of 150 billion parameters. This kind of “stacking” innovation squarely hits the current AI inference market’s pain point: large models’ insatiable appetite for VRAM.
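    The arithmetic behind these claims is easy to check. A minimal sketch (the byte-per-parameter figure is our assumption; the article does not state the precision used):

```python
# Back-of-the-envelope check: eight 24GB B60 cards pool 192GB, and a
# 150-billion-parameter model held at 1 byte per parameter (8-bit)
# needs roughly 150GB for its weights alone.

def pooled_vram_gb(num_cards, gb_per_card):
    return num_cards * gb_per_card

def weight_footprint_gb(params_billion, bytes_per_param):
    # 1 billion params at 1 byte each = 1 GB (decimal GB for simplicity)
    return params_billion * bytes_per_param

pool = pooled_vram_gb(8, 24)             # 192
weights = weight_footprint_gb(150, 1.0)  # 150.0
print(pool, weights, weights <= pool)    # 192 150.0 True
```

    In practice the KV cache and activations also consume VRAM, so the real headroom is tighter than this simple weight count suggests.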

    Market chess game: Intel’s two-line battle strategy

    Chen Liwu’s “tell the truth” strategy is reshaping Intel’s product roadmap. On one hand, the B60 series attacks the high-end AI inference market held by NVIDIA; on the other, the $299 B50 wins over value-oriented buyers. This “build the brand at the high end, grab share in the mid-range” approach mirrors AMD’s old strategy against Intel.

    But the real trump card lies in the China market. Export controls make it hard for NVIDIA’s latest flagship chips to enter China, and the compliant B60/B50 series fills exactly that vacuum. An Intel representative said bluntly that this will “meet the domestic market’s demand for AI inference compute”.

    Industry changes: from single-point breakthrough to ecological war

    NVIDIA’s moat was never just hardware. The software barrier built by the CUDA ecosystem has turned back challenger after challenger. But this time Intel has clearly come prepared: the AI Assistant Builder open-source software stack, containerized deployment, and an ISV certification system all aim straight at NVIDIA’s lifeline.

    The management reform Chen Liwu described at a dinner, going seven or eight levels down the org chart to hear engineers’ opinions directly, may be the key to reviving Intel’s engineering culture. As the chip war enters a three-dimensional confrontation of hardware, software, and ecosystem, execution becomes the X factor that decides victory.

    The duel is far from over. NVIDIA’s next-generation products are on the way, and Intel still needs to prove the B60’s success is no fluke. Either way, 2025 is destined to be a watershed for the AI chip market: when the challenger shows up with three times the incumbent’s memory, the rules of the whole industry are being rewritten. As Chen Liwu said, “let the results speak”; consumers will eventually vote with their feet, and the market’s balance has already begun to tremble.

  • Intel preheats Panther Lake processor with 16-core design

    Intel has previewed its next-generation client processor, “Panther Lake”. It will enter production in the second half of 2025, with consumer-available products launching in early 2026.

    The Panther Lake processor uses a 16-core design: 4 performance cores (P-cores), 8 efficiency cores (E-cores), and 4 low-power cores (LPE cores), for 16 threads in total. Its base frequency is 2.0GHz, with 24MB of L2 cache and 18MB of L3 cache. Panther Lake also carries 8MB of system-level cache (SLC); notably, the lower-positioned Wildcat Lake processor will use a 4MB SLC.

    On process technology, Panther Lake will use Intel’s latest 18A node, comparable to TSMC’s N3 or N2 nodes, which reduces power consumption and improves performance while increasing transistor density.

    Intel announced that the Panther Lake processor will match the Core Ultra 200V “Lunar Lake” processor in energy efficiency, maintaining its lead in the x86 field. At the high-performance end, Panther Lake will perform similarly to the Core Ultra 200H “Arrow Lake-H” processor.

    The Panther Lake processor will integrate the Xe3 GPU, with graphics performance significantly improved over the previous-generation Xe2 architecture. Its AI compute reaches 180 TOPS, of which the CPU contributes 10 TOPS, the GPU 50 TOPS, and the NPU 120 TOPS. For expansion, Panther Lake will support 4 PCIe 5.0 lanes and 8 PCIe 4.0 lanes, along with 4 Thunderbolt 4 ports.
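    The quoted compute split can be sanity-checked directly:

```python
# The per-unit TOPS figures should add up to the headline 180 TOPS,
# with the NPU supplying the bulk of the AI compute.
tops = {"CPU": 10, "GPU": 50, "NPU": 120}
total = sum(tops.values())
print(total)  # 180
```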

    According to sources, Intel plans to launch a Panther Lake SKU in the 6P8E8LPE24Xe3 configuration (6 P-cores, 8 E-cores, 8 LPE cores, and a 24-core Xe3 GPU) at the end of 2025. Other SKUs, such as a 4P8E4LPE12Xe3 version, will follow in early 2026.
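    The leak writes these configurations as compact strings like “6P8E8LPE24Xe3”. A small illustrative parser (the naming scheme comes from the leak; the parsing code and field names are ours):

```python
import re

# Split a leaked SKU string into P-core, E-core, LPE-core, and
# Xe3 GPU-core counts, plus the CPU-core total.
SKU_RE = re.compile(r"(\d+)P(\d+)E(\d+)LPE(\d+)Xe3")

def parse_sku(sku: str) -> dict:
    m = SKU_RE.fullmatch(sku)
    if m is None:
        raise ValueError(f"unrecognized SKU string: {sku!r}")
    p, e, lpe, xe3 = map(int, m.groups())
    return {"P": p, "E": e, "LPE": lpe, "Xe3": xe3,
            "cpu_cores": p + e + lpe}

print(parse_sku("6P8E8LPE24Xe3"))  # 22 CPU cores, 24 Xe3 GPU cores
print(parse_sku("4P8E4LPE12Xe3"))  # 16 CPU cores, 12 Xe3 GPU cores
```

    The second string matches the 16-core configuration described above; the top SKU adds two P-cores and four LPE cores plus a doubled GPU.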

  • Intel keeps up the pressure: 18A mass production is imminent, a raft of technologies revealed, and Apple and AMD feel the heat!

    Competition among the big players is what all users want to see. With AMD’s strong showing in 2025, some are optimistic about AMD’s new products, while others worry about a single company dominating. But judging from a series of recent news, Intel, the chip giant, clearly has no intention of sitting still; a string of developments foreshadows another industry shock in the time ahead. Let’s sort out the important news.

    Point 1: The 18A process nears mass production, benchmarked against TSMC’s 2nm. The top priority in 2025 is a technological breakthrough. Intel has said it will launch the first batch of 18A products in 2025: Panther Lake for AI PCs and Clearwater Forest for servers. The 18A process is understood to be benchmarked directly against TSMC’s 2nm, with SRAM density close to TSMC’s 2nm, and it introduces RibbonFET gate-all-around transistors and PowerVia backside power delivery. The payoff: chip performance up 25% and power consumption down 36%. The 18A process also offers two libraries, an HP high-performance library and an HD high-density library. Compared with the existing Intel 3 process, performance improves by 18% or power consumption drops by 38% at 0.75V, and performance improves by 25% or power drops by 36% at 1.1V. 18A has been taped out at the Fab 52 plant in Arizona and will enter mass production by the end of the year. Google, Broadcom, Zhiyuan Technology, and other major players have started 18A tape-out testing, and top-tier clients such as Microsoft Azure, Amazon AWS, and Broadcom have signed 18A foundry agreements with Intel.

    Point 2: XeSS upscaling technology is upgraded again, with frame rates up 4x. Intel’s XeSS upscaling has made further breakthroughs: AI-based frame generation (XeSS-FG) and low-latency technology (XeLL) deliver a dual evolution in smoothness and responsiveness. In testing with a discrete graphics card, frame rates in many games improved several-fold: Diablo IV by 4x, Assassin’s Creed Shadows by 2.4x. Integrated graphics also improved markedly. With XeLL low-latency technology, system latency dropped by an average of 45% in testing. At present, 19 games have adopted the new frame-generation technology, while more than 200 games support XeSS overall; its future performance in games is well worth looking forward to.

    Point 3: The 14A process looks ahead, with power consumption down 25%. Although this year centers on the 18A process, Intel is preparing the next generation early. The next-generation 14A process has reportedly entered early testing, aimed at further breakthroughs in performance and energy efficiency. PowerDirect direct-contact power delivery cuts resistance by 30% and improves performance per watt by 15-20%. High-NA EUV lithography is used for the first time, raising transistor density by 30% to 23.8 million per square millimeter. The 14A process also has a dynamic optimization technology, “TurboCell”, that adjusts voltage and frequency with load: it can lift high-frequency performance by 10% and cut power by 25% in low-load scenarios, giving chips high compute while meeting long-battery-life needs.

    Point 4: An advanced packaging matrix: 3D stacking plus heterogeneous integration raises chip design freedom. In packaging, Intel has completed the leap from 2D to 3D, and from single technologies to a full system-level portfolio. Foveros Direct 3D packaging achieves interconnect pitches below 5 microns via copper-to-copper hybrid bonding, tripling the density of traditional solder connections and cutting power by 40%, an ideal choice for AI chips and high-performance computing (HPC). The key advance in EMIB-T (Embedded Multi-die Interconnect Bridge with TSV) is introducing through-silicon vias into 2.5D packages to support higher-density signal transmission and power delivery, letting chips serve heterogeneous integration scenarios that mix different process nodes. Meanwhile, two derivative solutions, Foveros-R and Foveros-B, give chips a higher degree of design freedom and stronger support for high-reliability requirements.

    Point 5: Strategic transformation under the new leader: from IDM to open foundry, with yields up to 75%. Intel CEO Chen Liwu put forward the idea that “foundry is not about selling capacity, but about providing full-chain value from design to system integration”, and established a “Chiplet Alliance” and a “Value Chain Alliance” to integrate IP, EDA, and packaging resources into a one-stop development platform for customers. On customer cooperation, Intel is working with MediaTek, Qualcomm, Microsoft, and others as it transforms fully into an “open foundry”. On company management, Chen Liwu put forward a “day one of a startup” mindset, breaking down hierarchical barriers and giving engineers more autonomy to innovate. In hiring, Intel is also leaning toward AI chip design, advanced packaging materials, and software-defined chips (SDSoC).

    Point 6: A full-stack layout of future products: AI chips, automotive electronics, and edge computing all blossom. The key node in 2025 is the landing of 18A-process chips; with high performance, low power, and strong AI compute and efficiency, they will answer Apple’s M3 chips head-on and fill the gap between Qualcomm and Apple in edge AI. In 2026, Intel’s top priority is the 14A-process chip. Nova Lake processors will support a chiplet design for the first time, letting users freely combine CPU, GPU, and NPU modules to meet challenges from AMD Zen 5 and Apple server chips. At the same time, Intel is targeting vehicle cameras, radar controllers, IoT, and autonomous driving: 12nm mature-process products, 12nm autonomous-driving domain controllers, and 16nm low-power chips for smart-home and industrial sensors. It is not hard to see that Intel’s current changes are not limited to one field but span desktop, mobile, autonomous driving, smart home, and industry, and aim not only at chip performance and power consumption but also at new breakthroughs on the AI front. In the next year or two, we can look forward to Intel’s renewed vitality delivering more high-performance, low-power, high-reliability products to the public.

  • A savior for everyone allergic to fans: Intel and Shell jointly lead the way in coolant fluids

    Intel and Shell have created an industry-leading immersion liquid cooling solution, building a sustainable, efficient liquid-cooling path for data center users in the AI era. In today’s era of rapidly developing AI and compute, data centers’ demand for powerful infrastructure keeps growing, and the accompanying heat-dissipation problem grows ever more prominent, so IT operations staff are actively seeking efficient, scalable, and sustainable cooling solutions. Among these, liquid cooling is favored for its excellent cooling performance. According to Dell’Oro Group¹, by 2028 enterprise investment in liquid cooling will account for 36% of data center thermal-management revenue. Yet despite the outstanding performance of immersion liquid cooling, its adoption still faces many challenges because the industry has lacked proven, easy-to-deploy immersion cooling solutions.

    Facing these key challenges, Intel partnered with Shell Global Solutions (hereinafter Shell) to successfully validate a complete immersion liquid cooling solution, supported by hardware from Supermicro and Submer, giving industry users a model of efficient, reliable cooling. The Intel data center immersion liquid cooling certification program sets the industry standard² for cooling efficiency and long-term performance, applies to 4th and 5th Gen Intel® Xeon® processors, and provides strong support for efficient data center operation. The Intel data center R&D lab extensively tested and rigorously validated the solution, combining the power of Intel Xeon processors with advanced expertise in single-phase immersion cooling.

    The solution not only enables smooth deployment of proven high-performance infrastructure in data centers but also meets the demands of today’s AI and compute workloads. With this certification, Intel provides a warranty rider for single-phase immersion cooling of Xeon processors, further strengthening industry trust in the durability, efficiency, and compatibility of immersion-cooled IT infrastructure and Shell immersion coolants. Intel and Shell are also jointly exploring future opportunities to certify the latest generation of Intel processors with Shell coolant.

    Karin Eibschitz Segal, vice president and interim general manager of Intel’s data center and artificial intelligence business unit, said: “As data center demand continues to rise, Intel-certified data center immersion liquid cooling solutions will play an important role in providing energy-efficient and scalable computing infrastructure. This achievement not only lays a solid foundation for future innovation and collaboration, but also guarantees that data centers have proven, ready-to-deploy, and trusted cooling technologies.”

    Dr. Selda Gunsel, chief technology officer and executive vice president of technology at Shell, said: “Certification and validation are a top priority in this collaboration, which gives data center operators an efficient, trusted solution that has been professionally certified and tested in practice. This partnership shows that through Intel’s comprehensive certification and quality-assurance system, data center operators can skip the tedious proof-of-concept stage and directly deploy proven solutions, achieving breakthroughs in performance, energy efficiency, and environmental impact in one stroke.”

    Note: for more information on immersion liquid cooling, please visit the Shell website.

    ¹ For details, see https://www.datacenterdynamics.com/en/opinions/immersion-cooling-systems-advantages-and-deployment-strategies-for-ai-and-hpc-data-centers/.
    ² Compared with traditional air cooling, single-phase immersion liquid cooling can cut electricity demand by up to half, CO2 emissions by up to 30%, and water consumption by up to 99%. Combined with renewable energy and smart energy-management solutions, it can help data center operators achieve sustainable performance optimization. Data from the “Data Center Global Immersive Cooling Market – Growth, Trends and Forecasts (2019-2024)” report (Mordor Intelligence) and Shell internal evaluation. Actual benefits will vary by site-specific deployment.

  • Deep Link, a highlight feature of Intel Alchemist graphics, stops receiving updates!

    According to the latest news, Deep Link, one of the highlight features of Intel’s Alchemist discrete graphics cards, will no longer receive maintenance or future updates.

    Deep Link technology is a feature introduced by Intel in 2022 to optimize collaboration between Intel CPUs and the Arc Alchemist family of graphics cards.

    It improves system efficiency and performance by dynamically allocating CPU and GPU resources, especially in areas such as gaming, content creation, and live streaming.

    According to a response on GitHub from Intel employee Zack-Intel, Deep Link technology will no longer receive any future updates.

    The Deep Link feature was once advertised as a highlight of this graphics card series; halting updates means it will remain frozen in its current state, with no further improvements or fixes.

    Deep Link’s core features include Dynamic Power Share, Hyper Encode, Additive AI, and Stream Assist, which have been touted by Intel as key technologies to enhance the user experience.

    Deep Link remains usable for now, but users may run into compatibility issues or other technical difficulties due to the lack of future maintenance.
