Testing Weka Performance
This page describes a series of tests for measuring performance after the installation of the Weka system. The same tests can be used to test the performance of any other storage solution.
About Weka Performance Testing
There are three main metrics when measuring storage system performance:
Latency, which is the time from operation initiation to completion
The number of different IO operations (read/write/metadata) that the system can process concurrently
The bandwidth of data that the system can process concurrently
Each of these performance metrics applies to read operations, write operations, or a mixture of read and write operations.
When measuring the Weka system performance, different mount modes produce different performance characteristics. Additionally, client network configuration (using either user-space DPDK networking or kernel UDP) also significantly affects performance.
The FIO Utility
The FIO utility is a generic open-source storage performance testing tool, which can be set up as described here. This documentation assumes FIO version 3.20.
All FIO testing is done using FIO's client/server capabilities. This makes multiple-client testing easier, since FIO reports aggregated results for all clients under test. Single-client tests are run the same way to keep the results consistent.
Start the FIO server on each of the clients:

```
fio --server --daemonize=/tmp/fio.pid
```

Then run the test command from one of the clients. Note that the clients must have a Weka filesystem mounted.
An example of launching a test (sometest) on all clients in a file (clients.txt) using the server/client model:
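A plausible form of that command, assuming the job definitions are saved in a file named sometest.fio (the filename is an assumption), is:

```
fio --client=clients.txt sometest.fio
```

When the argument to `--client` is a readable local file, FIO treats it as a list of hostnames and connects to each listed host.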
An example for the clients' file, when running multiple clients:
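The clients file simply lists one client hostname per line. A sketch with placeholder hostnames (client01 through client04 are assumptions, not real hosts):

```shell
# Generate a clients.txt host file; the hostnames below are placeholders
# and must be replaced with the real client hostnames or IP addresses.
cat > clients.txt <<'EOF'
client01
client02
client03
client04
EOF
```
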
An example of aggregated test results:
The single-client and aggregated tests differ only in the clients participating in the test, as defined in clients.txt.
MDTest
MDTest is a generic open-source metadata performance testing tool. In this documentation, the usage of version 1.9.3 is assumed.
MDTest uses an MPI framework to coordinate the job across multiple nodes. The results presented here were generated using MPICH version 3.3.2, which can be set up as described here. While results may vary with different MPI versions, most are based on the same ROMIO and will perform similarly.
Weka Client Performance Testing
Overall, the tests on this page are designed to demonstrate the sustainable peak performance of the filesystem. Care has been taken to make sure they are realistic and reproducible.
Where possible, the benchmarks try to negate the effects of caching. For file testing, O_DIRECT calls are used to bypass the client's cache. For metadata testing, each phase of the test uses different clients. In addition, the Linux caches are flushed between tests to ensure that the data being accessed is not already cached. While applications often take advantage of cached data and metadata, this testing focuses on the filesystem's ability to deliver data independent of caching on the client.
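Flushing the Linux caches between tests is typically done with the standard drop_caches interface (a sketch, not part of the original test scripts; requires root):

```
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
```

Writing 3 drops both the page cache and the dentry/inode caches; `sync` first ensures dirty pages are written back so they can actually be dropped.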
While the output of a single iteration is provided below for each test, each test was run several times, and the average results are provided in the Results Summary.
Results Summary
Single Client Results
Results are shown for the two client network configurations described above (kernel UDP vs. user-space DPDK networking):

| Benchmark | UDP mode | DPDK mode |
|---|---|---|
| Read Throughput | 8.9 GiB/s | 21.4 GiB/s |
| Write Throughput | 9.4 GiB/s | 17.2 GiB/s |
| Read IOPS | 393,333 ops/s | 563,667 ops/s |
| Write IOPS | 302,333 ops/s | 378,667 ops/s |
| Read Latency | 272 µs avg.; 99.5% completed under 459 µs | 144.76 µs avg.; 99.5% completed under 260 µs |
| Write Latency | 298 µs avg.; 99.5% completed under 432 µs | 107.12 µs avg.; 99.5% completed under 142 µs |
Aggregated Cluster Results (with multiple clients)
| Benchmark | UDP mode | DPDK mode |
|---|---|---|
| Read Throughput | 36.2 GiB/s | 123 GiB/s |
| Write Throughput | 11.6 GiB/s | 37.6 GiB/s |
| Read IOPS | 1,978,330 ops/s | 4,346,330 ops/s |
| Write IOPS | 404,670 ops/s | 1,317,000 ops/s |
| Creates | 79,599 ops/s | 234,472 ops/s |
| Stats | 1,930,721 ops/s | 3,257,394 ops/s |
| Deletes | 117,644 ops/s | 361,755 ops/s |
Testing Read Throughput
Description
This test measures the client throughput for large (1MB) reads. The job below tries to maximize the read throughput from a single client. The test utilizes multiple threads, each one performing 1 MB reads.
Job Definition
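The job file itself does not appear in this copy of the page; a plausible definition consistent with the description above (the directory, thread count, sizes, and runtime are assumptions to tune per environment) is:

```ini
# Sketch only: directory, numjobs, size, and runtime are assumptions.
[global]
directory=/mnt/weka/fio
# Bypass the client page cache (O_DIRECT)
direct=1
ioengine=libaio
time_based=1
runtime=60
group_reporting=1

[read-throughput]
rw=randread
# 1 MB reads across many concurrent threads
blocksize=1m
numjobs=32
iodepth=1
size=4g
```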
Example of Test Output
In this test output example, results show a bandwidth of 8.95 GiB/s from a single client.
Testing Write Throughput
Description
This test measures the client throughput for large (1MB) writes. The job below tries to maximize the write throughput from a single client. The test utilizes multiple threads, each one performing 1MB writes.
Job Definition
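The job file itself does not appear in this copy of the page; a plausible definition consistent with the description above (the directory, thread count, sizes, and runtime are assumptions to tune per environment) is:

```ini
# Sketch only: directory, numjobs, size, and runtime are assumptions.
[global]
directory=/mnt/weka/fio
direct=1
ioengine=libaio
time_based=1
runtime=60
group_reporting=1

[write-throughput]
rw=randwrite
# 1 MB writes across many concurrent threads
blocksize=1m
numjobs=32
iodepth=1
size=4g
```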
Example of Test Output
In this test output example, results show a bandwidth of 6.87 GiB/s.
Testing Read IOPS
Description
This test measures the ability of the client to deliver concurrent 4KB reads. The job below tries to maximize the system read IOPS from a single client. The test utilizes multiple threads, each one performing 4KB reads.
Job Definition
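The job file itself does not appear in this copy of the page; a plausible definition consistent with the description above (the directory, thread count, sizes, and runtime are assumptions to tune per environment) is:

```ini
# Sketch only: directory, numjobs, size, and runtime are assumptions.
[global]
directory=/mnt/weka/fio
direct=1
ioengine=libaio
time_based=1
runtime=60
group_reporting=1

[read-iops]
rw=randread
# Small 4 KB reads across many concurrent threads
blocksize=4k
numjobs=64
iodepth=1
size=1g
```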
Example of Test Output
In this test output example, results show 390,494 IOPS from a single client.
Testing Write IOPS
Description
This test measures the ability of the client to deliver concurrent 4KB writes. The job below tries to maximize the system write IOPS from a single client. The test utilizes multiple threads, each one performing 4KB writes.
Job Definition
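The job file itself does not appear in this copy of the page; a plausible definition consistent with the description above (the directory, thread count, sizes, and runtime are assumptions to tune per environment) is:

```ini
# Sketch only: directory, numjobs, size, and runtime are assumptions.
[global]
directory=/mnt/weka/fio
direct=1
ioengine=libaio
time_based=1
runtime=60
group_reporting=1

[write-iops]
rw=randwrite
# Small 4 KB writes across many concurrent threads
blocksize=4k
numjobs=64
iodepth=1
size=1g
```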
Example of Test Output
In this test output example, results show 288,215 IOPS from a single client.
Testing Read Latency
Description
This test measures the minimal achievable read latency under a light load. The test measures the latency over a single-threaded sequence of 4KB reads across multiple files. Each read is executed only after the previous read has been served.
Job Definition
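The job file itself does not appear in this copy of the page; a plausible definition consistent with the description above (the directory, sizes, and runtime are assumptions to tune per environment) is:

```ini
# Sketch only: directory, size, and runtime are assumptions.
[global]
directory=/mnt/weka/fio
direct=1
# Synchronous engine: each read completes before the next is issued
ioengine=psync
time_based=1
runtime=60

[read-latency]
rw=randread
blocksize=4k
# Single thread, queue depth 1, as described
numjobs=1
iodepth=1
size=1g
```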
Example of Test Output
In this test output example, results show an average latency of 229 microseconds, where 99.5% of the reads completed in 334 microseconds or less.
Testing Write Latency
Description
This test measures the minimal achievable write latency under a light load. The test measures the latency over a single-threaded sequence of 4KB writes across multiple files. Each write is executed only after the previous write has been served.
Job Definition
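The job file itself does not appear in this copy of the page; a plausible definition consistent with the description above (the directory, sizes, and runtime are assumptions to tune per environment) is:

```ini
# Sketch only: directory, size, and runtime are assumptions.
[global]
directory=/mnt/weka/fio
direct=1
# Synchronous engine: each write completes before the next is issued
ioengine=psync
time_based=1
runtime=60

[write-latency]
rw=randwrite
blocksize=4k
# Single thread, queue depth 1, as described
numjobs=1
iodepth=1
size=1g
```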
Example of Test Output
In this test output example, results show an average latency of 226 microseconds, where 99.5% of the writes completed in 293 microseconds or less.
Testing Metadata Performance
Description
The test measures the rate of metadata operations (such as create, stat, and delete) across the cluster. The test uses 20 million files over 8 client hosts, with 136 threads per client, where each thread handles 18,382 files. It is invoked 3 times and provides a summary of the iterations.
Job Definition
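The invocation itself does not appear in this copy of the page. With 8 clients × 136 threads = 1,088 MPI ranks, and 20,000,000 / 1,088 ≈ 18,382 files per rank, a plausible MPICH launch (the host file, directory path, and flag selection are assumptions) would be:

```
mpiexec -f clients.txt -n 1088 mdtest -F -u -i 3 -n 18382 -d /mnt/weka/mdtest
```

Here `-F` tests files only, `-u` gives each rank a unique working directory, `-i 3` runs 3 iterations, and `-n` sets the number of files per rank.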
Example of Test Output
Running All Benchmark Tests Together
To run all the tests sequentially and review the results afterward, follow the instructions below.
Preparation
From each client, create a mount point in /mnt/weka to a Weka filesystem, and create the following directories there:
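The directory list itself does not appear in this copy of the page; assuming the FIO and MDTest jobs write under dedicated subdirectories (the names are assumptions), the preparation might look like:

```
sudo mkdir -p /mnt/weka/fio /mnt/weka/mdtest
```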
Copy the FIOmaster.txt file to your host and create the clients.txt file with your clients' hostnames.
Running the Benchmark
Run the benchmarks using the following commands:
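A plausible form of that command, assuming FIOmaster.txt contains all the job definitions and clients.txt lists the client hostnames, is:

```
fio --client=clients.txt FIOmaster.txt
```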