Jul 29, 2025
ASUS WEKA Node Solution for Finance: Speed Meets Savings
As data-intensive AI workloads continue to grow, traditional storage architectures often struggle to deliver the low latency, high throughput, and scalability they require. The ASUS WEKA Node solution addresses these limitations. A turnkey, enterprise-grade platform engineered to eliminate these bottlenecks, it integrates best-in-class ASUS hardware with the transformative WEKA® Data Platform software, creating a pre-validated, performance-tuned solution that pairs uncompromising performance with superior economic efficiency. By running a parallel file system over a high-performance fabric network, the solution is designed to saturate the most demanding compute environments, maximizing the ROI on AI and compute investments.
Hardware
- Server Platform
Typically, ASUS RS501A-E12RS12U servers with high-core-count AMD EPYC™ processors provide the computational power needed for the WEKA software to run efficiently.
- Compute & GPU Integration
The servers feed data to powerful accelerators, ensuring GPU utilization remains consistently high.
- Storage (Tier 0: Hot Tier)
Each node is populated with enterprise-class NVMe® SSDs. Employing a “flash-first” approach ensures that all I/O operations are serviced at extremely low latencies.
Storage Architecture
The ASUS WEKA Node solution employs a three-tier storage architecture engineered to support the immense demands of modern AI and HPC workloads. This architecture is intelligently designed to deliver optimal performance, massive scalability, and cost efficiency through a multi-layered approach.
- Tier 1: Performance
Tier 1 is a high-speed parallel file system cluster built on all-flash TLC SSDs. Engineered for ultralow latency and maximum throughput, it directly serves large-scale compute clusters. It connects to tenants via a converged network fabric operating at 800 Gb/s (4 x 200G) and leveraging RDMA (Remote Direct Memory Access). This high-performance fabric is critical for eliminating I/O bottlenecks and keeping GPU clusters continuously saturated with data. The architecture supports robust multi-tenancy by providing dedicated file sets for each tenant, with policy-based controls to deliver differentiated levels of service and secure data access.
- Tier 2: Capacity
Tier 2 provides a cost-effective solution for less active data and large-scale data lakes. Data is intelligently and automatically tiered from the performance layer to this S3-compatible object storage cluster over a high-speed 200G Ethernet network. This tier utilizes a hybrid configuration of QLC SSDs and HDDs, along with JBOD support for seamless capacity expansion, offering a vast and economical storage pool without compromising data accessibility.
- Tier 3: Backup
Tier 3 ensures data protection and long-term retention. It employs an ASUS SAN (Storage Area Network) appliance, which is isolated on its own dedicated 200G (2 x 4 x 25G) Ethernet network. This isolation prevents backup traffic from impacting the performance of the primary storage tiers.
WEKA Data Platform
The WEKA Data Platform is a revolutionary software-defined, parallel file system. A few key features are highlighted below.
- Parallel & Distributed Architecture
The platform distributes both metadata and data across all nodes in the cluster. This parallel design eliminates the central controller bottlenecks inherent in legacy NAS systems, enabling linear performance scaling.
- Full POSIX Compliance & Multi-Protocol Access
Applications can access data via a POSIX-compliant file system, requiring no code modifications, while the platform simultaneously provides access via NFS, SMB, and S3 protocols.
- Automated Tiering (Snap-to-Object)
Data is automatically and transparently tiered from the high-performance NVMe® tier to any S3-compatible object store (on-premises or in the public cloud). This creates a virtually limitless capacity pool at a superior blended cost.
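WEKA's internal placement logic is proprietary, but the general technique behind the parallel, fully distributed design described above can be sketched with a toy consistent-hash ring in Python. All node names and keys below are illustrative; the point is that any client can compute an object's owner from the key alone, with no central controller in the path:

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Toy consistent-hash ring: no central controller decides placement;
    any client can compute a key's owning node independently."""

    def __init__(self, nodes, vnodes=64):
        # Each node contributes `vnodes` virtual points for an even spread.
        self.ring = sorted(
            (self._h(f"{node}#{i}"), node) for node in nodes for i in range(vnodes)
        )
        self.points = [point for point, _ in self.ring]

    @staticmethod
    def _h(key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # The first virtual point clockwise from the key's hash owns the key.
        idx = bisect_right(self.points, self._h(key)) % len(self.ring)
        return self.ring[idx][1]

# Both metadata entries and data stripes hash onto the same ring,
# so load spreads across every node in the cluster.
ring = HashRing([f"node{i}" for i in range(8)])
owner = ring.node_for("inode:/train/data.bin")
```

Because every node owns a share of both data and metadata, adding nodes adds capacity and performance together, which is the basis of the linear scaling claim.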
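The platform performs this tiering transparently inside the file system; as a minimal sketch of the policy idea only, the following Python demotes files that have sat idle past a threshold from a hot directory to a capacity directory. The directory names, file names, and threshold are all hypothetical stand-ins for the flash tier and the object store:

```python
import os, shutil, tempfile, time

def demote_cold_files(hot_dir, capacity_dir, max_idle_seconds):
    """Toy age-based tiering policy: move files idle longer than the
    threshold from the hot (flash) tier to the capacity (object) tier."""
    demoted = []
    now = time.time()
    os.makedirs(capacity_dir, exist_ok=True)
    for name in os.listdir(hot_dir):
        src = os.path.join(hot_dir, name)
        if os.path.isfile(src) and now - os.path.getmtime(src) > max_idle_seconds:
            shutil.move(src, os.path.join(capacity_dir, name))
            demoted.append(name)
    return demoted

# Demo: one recently written file, one file last touched an hour ago.
hot, cap = tempfile.mkdtemp(), tempfile.mkdtemp()
for name in ("hot.bin", "cold.bin"):
    with open(os.path.join(hot, name), "wb") as f:
        f.write(b"tickdata")
stale = time.time() - 3600
os.utime(os.path.join(hot, "cold.bin"), (stale, stale))
moved = demote_cold_files(hot, cap, max_idle_seconds=600)
```

Unlike this sketch, the real platform keeps demoted data addressable through the same namespace, so applications never see the move.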
Network Requirements
The WEKA Node solution mandates a high-performance network fabric to ensure data flows are unimpeded between storage nodes and compute clients.
- High-Bandwidth, Low-Latency Fabric
A high-speed network significantly reduces data transfer latency, ensures stable connectivity across large-scale nodes, and accelerates data access and processing workflows.
- RDMA Integration
Support for Remote Direct Memory Access allows compute clients to access data directly from storage memory, bypassing the CPU and kernel overhead to achieve latencies often below 30 microseconds.
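True RDMA programming targets the NIC through the verbs API and requires RDMA-capable hardware, so it is beyond a short sketch. As a rough local analogy for the zero-copy principle only, and assuming a hypothetical `ticks.bin` data file, memory-mapping in Python exposes file pages to the application without an extra buffered-read copy:

```python
import mmap, os, tempfile

# A local file stands in for data staged on the storage tier (hypothetical layout).
path = os.path.join(tempfile.mkdtemp(), "ticks.bin")
with open(path, "wb") as f:
    f.write(b"\x01\x02\x03\x04" * 1024)

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    view = memoryview(mm)           # zero-copy view of the mapped pages
    first_record = bytes(view[:4])  # copy out only the bytes actually needed
    view.release()
    mm.close()
```

Eliminating intermediate copies is the same lever RDMA pulls across the network: the fewer times data passes through a CPU on its way to the consumer, the lower the end-to-end latency.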
Success Story
Addressing Analytics Bottlenecks for a Global Investment Bank
A global investment bank faced persistent performance bottlenecks in its quantitative trading division. To process years of tick-by-tick market data at speed, the bank deployed a dedicated, compact four-node ASUS WEKA Node solution. The solution not only delivered ultrafast parallel file access but also significantly reduced latency for small-file and metadata-intensive workloads, effectively eliminating the limitations of the legacy storage infrastructure.

Results were immediate and transformative. Backtesting times for complex trading strategies dropped from 18 hours to just 2.5 hours, enabling the team to run up to seven times more simulations per day. This dramatically accelerated strategy refinement, shortened the development-to-deployment cycle for new models, and enhanced the firm's overall trading performance and responsiveness to market changes. With the ASUS WEKA Node solution, the bank achieved a fundamental infrastructure upgrade that directly empowered smarter, faster business decisions.