Intel has released the Arc Pro B-series to compete with Nvidia’s RTX Pro GPU suite.
The Intel Arc Pro B-series represents a significant step forward for workstation graphics, particularly for “AI-era” workloads. Built on the Xe2 architecture, these GPUs deliver what Intel claims is a 50% performance increase per core over the previous generation.
Model Comparison & Key Specifications
| Feature | Arc Pro B70 | Arc Pro B65 | Arc Pro B60 | Arc Pro B50 |
| --- | --- | --- | --- | --- |
| Video Memory (VRAM) | 32 GB | 32 GB | 24 GB | 16 GB |
| Xᵉ-cores | 32 | 20 | 20 | 16 |
| Ray Tracing Units | 32 | 20 | 20 | 16 |
| Peak AI Performance (TOPS) | 367 | 197 | 197 | 170 |
| Memory Bandwidth | 608 GB/s | 608 GB/s | 456 GB/s | 224 GB/s |
| PCIe Support | Gen 5 (x16) | Gen 5 (x16) | Gen 5 (x8) | Gen 5 (x8) |
| Total Board Power (TBP) | 160W – 290W | 200W | 120W – 200W | 70W |
| Displays Supported | 4 | 4 | 4 | 4 |
Key Features for LLM & AI Workloads
- High-Capacity VRAM: With up to 32 GB of VRAM on the B70 and B65 models, these cards are designed to run larger AI models and complex datasets locally.
- Multi-GPU Scaling: Intel highlights robust Linux support for multi-GPU configurations, allowing users to combine the power of multiple cards to execute AI models requiring over 100 GB of VRAM.
- XMX AI Engines: These dedicated engines provide hardware acceleration specifically for AI inference and content creation.
- Improved Throughput: Intel claims the B70 can achieve up to 85% higher token throughput in multi-agent workflows and up to 6.3x faster time to first token (TTFT) under multi-user load, compared to competing workstation cards such as the NVIDIA RTX Pro 4000.
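The throughput claims above rest on two serving metrics: aggregate token throughput and time to first token (TTFT), the delay between issuing a request and receiving the first streamed token. As a framework-agnostic illustration (the streaming function below is a mock stand-in, not a real model API), TTFT can be measured like this:

```python
import time

def time_to_first_token(stream):
    """Return (ttft_seconds, tokens) for a token-yielding iterable.

    TTFT is the time from starting to consume the stream until the
    first token arrives; it dominates perceived latency for users.
    """
    start = time.perf_counter()
    tokens = []
    ttft = None
    for tok in stream:
        if ttft is None:
            ttft = time.perf_counter() - start
        tokens.append(tok)
    return ttft, tokens

def mock_stream(first_delay=0.05, n=5):
    """Hypothetical stand-in for a model's streaming generate() call."""
    time.sleep(first_delay)  # simulates the prefill phase before token 1
    for i in range(n):
        yield f"tok{i}"

ttft, toks = time_to_first_token(mock_stream())
print(f"TTFT: {ttft * 1000:.1f} ms over {len(toks)} tokens")
```

The same harness can wrap any real streaming endpoint; only the generator changes.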
Professional & Technical Advantages
- Xe2 Architecture: Delivers enhanced efficiency and performance, particularly in rendering and video processing via the Xᵉ Media Engine.
- ISV Certifications: The series is undergoing rigorous testing for certification with major professional software including AutoCAD, SOLIDWORKS, Maya, and Revit.
- Linux Driver Support: A heavy emphasis is placed on scalable AI deployments in Linux environments, which is crucial for developers working with local LLMs.
One of the most significant advantages of the Intel Arc Pro B-series is its support for native multi-GPU scaling. While many consumer-grade solutions struggle with interconnect bottlenecks or driver limitations when adding a second or third card, the B-series is engineered for linear scaling in professional environments. This allows developers to pool the VRAM of multiple B70 or B65 cards—reaching over 100 GB of total addressable memory on a single Linux workstation. For AI researchers and LLM developers, this means the ability to run 70B+ parameter models locally with high precision, without the enterprise-tier cost of a single high-end Nvidia A-series or H-series card.
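A quick back-of-envelope calculation shows why pooled VRAM matters: a model's weight footprint is roughly parameter count × bytes per parameter, plus overhead for KV cache, activations, and framework buffers. A minimal sketch (the 20% overhead factor is an illustrative assumption, not an Intel or measured figure):

```python
def weight_footprint_gb(params_billion: float, bits_per_param: int,
                        overhead: float = 0.20) -> float:
    """Rough VRAM estimate for holding a model at a given precision.

    overhead is a coarse allowance for KV cache / activations /
    runtime buffers -- an assumption for illustration only.
    """
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * (1 + overhead) / 1e9

for bits in (16, 8, 4):
    need = weight_footprint_gb(70, bits)
    cards = -(-need // 32)  # ceiling division: count of 32 GB cards
    print(f"70B @ {bits}-bit: ~{need:.0f} GB -> {int(cards)}x 32 GB GPUs")
```

By this estimate, a 70B model at 8-bit precision fits comfortably within a pooled 100 GB of VRAM, while full 16-bit weights would need a larger card count.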
While Nvidia offers NVLink for high-end scaling, it is typically restricted to its most expensive SKUs. Intel provides a more accessible path to multi-GPU scaling across the entire Pro B-series lineup, making these cards a more flexible ‘building block’ for local AI labs.
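Without a dedicated interconnect, the common way to pool cards for inference is pipeline-style sharding: assign each GPU a contiguous block of model layers in proportion to its memory, so activations cross the PCIe bus only at block boundaries. A simplified sketch of that partitioning step (layer count and the mixed-card configuration are illustrative):

```python
def partition_layers(n_layers: int, vram_gb: list[float]) -> list[range]:
    """Split n_layers contiguous layers across GPUs proportional to VRAM."""
    total = sum(vram_gb)
    bounds, acc = [0], 0.0
    for gb in vram_gb:
        acc += gb
        bounds.append(round(n_layers * acc / total))
    return [range(bounds[i], bounds[i + 1]) for i in range(len(vram_gb))]

# e.g. two 32 GB B70-class cards plus one 24 GB B60 (hypothetical mix)
for gpu, layers in enumerate(partition_layers(80, [32, 32, 24])):
    print(f"GPU {gpu}: layers {layers.start}-{layers.stop - 1}")
```

Frameworks that support heterogeneous multi-GPU setups perform a similar proportional split automatically; the sketch only shows the underlying arithmetic.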
While Nvidia remains the industry standard with its mature CUDA ecosystem and high-speed GDDR7 memory, Intel’s new Arc Pro B-series is making a compelling case for the budget-conscious AI developer. By offering up to 32 GB of GDDR6 VRAM at a price point significantly lower than Nvidia’s professional tier, Intel is effectively lowering the barrier to entry for running large, local LLMs. For users for whom raw memory capacity for high-parameter models matters more than proprietary software lock-in, the B-series represents a massive shift in ‘price-per-gigabyte’ value.

