Vdbench Software (version 50407): http://www.infotinks.com/vdbench/vdbench50407.zip

Vdbench Guide (version 50407): http://www.infotinks.com/vdbench/vdbench-50407.pdf

Vdbench is a common benchmarking tool whose documentation on installing and running it is confusing.
Installing it requires Java, and the tool itself is distributed as a zip file. I recommend getting it straight from the source, which is Oracle; I provide download links to the current latest version above.

Topology:
##########

These are bare-metal servers, not VMs.

Storage Server vol0 -----> client1
               vol1 -----> client2

client1 and client2 are connected to the storage server via NVMe over TCP:
client1 is connected to vol0
client2 is connected to vol1

NOTE: Since these are NVMe over TCP devices, the volumes must be connected before they are usable. In this case the volumes are connected, but they are not mounted to a directory: each client sees its volume in lsblk and /dev/ as nvme0n1. vdbench can run tests directly against block devices or against filesystems (just like FIO).

Both clients see the drive as /dev/nvme0n1.
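
For reference, here is roughly what connecting such a volume with nvme-cli looks like (the address, port, and NQN below are placeholders for your environment; your storage vendor's docs are the authority here):

# discover the remote subsystem and connect to the volume (placeholder values)
nvme discover -t tcp -a 192.168.17.250 -s 4420
nvme connect -t tcp -a 192.168.17.250 -s 4420 -n nqn.2022-07.com.example:vol0

# confirm the block device showed up
nvme list
lsblk /dev/nvme0n1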

client1 and client2 are running CentOS 7.9 with the latest available kernel, 5.18.12-1.el7.elrepo.x86_64.

Installing vdbench:
###################

We will install Java on both clients with this command:

# yum install java-latest-openjdk-headless.x86_64
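
A quick check that Java is installed and on the PATH:

java -version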

Next, download vdbench itself.
Go to Oracle and download the latest vdbench zip file and, optionally, the documentation. You will need an Oracle account for this.
https://www.oracle.com/downloads/server-storage/vdbench-downloads.html

NOTE: We will install vdbench on client1 and client2. During the test, client1 will act as the master and client2 will act as a slave/worker.

IMPORTANT: vdbench must be installed on all of the clients (masters and workers) and into the same directories.

I downloaded vdbench50407.zip (that was the latest on 2022-07-21)

Copy vdbench50407.zip to client1 and extract it:

# show the downloaded file
cd /root/
ls -l /root/vdbench50407.zip

# move the file to another directory
mkdir vd
mv vdbench50407.zip vd
cd vd

# unzip to current directory (vd)
unzip vdbench50407.zip

Using vdbench:
###############

Now you have a ./vdbench executable in the vd directory.

If you run ./vdbench without options, or with --help or -h, it will error with:

java.lang.RuntimeException: No input parameters specified
        at Vdb.common.failure(common.java:350)
        at Vdb.Vdbmain.main(Vdbmain.java:555)


java.lang.RuntimeException: Invalid execution parameter: '-h'
        at Vdb.common.failure(common.java:350)
        at Vdb.Vdbmain.scan_args(Vdbmain.java:446)
        at Vdb.Vdbmain.check_args(Vdbmain.java:285)
        at Vdb.Vdbmain.main(Vdbmain.java:588)

These errors are normal.
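
To confirm the install actually works end to end, vdbench has a built-in sanity test: ./vdbench -t runs a few seconds of I/O against a small temporary file, no parameter file needed.

# quick built-in functionality test
./vdbench -t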

Running a small test with vdbench from client1 to worker client2:
##################################################################

Sidenote: our worker is client2, so client1 does not need its connection to the volume for this test (it doesn't hurt to have it, though).

Let's run a simple test on client2.

Create this parameter file, which tests small random reads and writes:

vi randRWsmall.parm

------------------------------ start of file (below this line)  ---------------------------
#This is a VDBench Parameter File for testing Max IOPS performance.  This Parm file tests Random Reads and Writes at small request sizes.

data_errors=1000000
messagescan=no
showlba=yes
histogram=(default,20,40,60,80,100,200,400,600,800,1m,2m,4m,6m,8m,10m,20m,40m,60m,80m,100m,200m,400m,600m,800m,1s,2s)

#Host Definition
hd=storage-vdb101-01,system=192.168.17.201,shell=ssh,user=root,jvms=1
#hd=storage-vdb102-01,system=192.168.17.202,shell=ssh,user=root,jvms=8
#hd=storage-vdb103-01,system=192.168.17.203,shell=ssh,user=root,jvms=8
#hd=storage-vdb104-01,system=192.168.17.204,shell=ssh,user=root,jvms=8
#hd=storage-vdb101-02,system=192.168.17.205,shell=ssh,user=root,jvms=8
#hd=storage-vdb102-02,system=192.168.17.206,shell=ssh,user=root,jvms=8
#hd=storage-vdb103-02,system=192.168.17.207,shell=ssh,user=root,jvms=8
#hd=storage-vdb104-02,system=192.168.17.208,shell=ssh,user=root,jvms=8

#Storage Definitions
sd=sd_1,lun=/dev/nvme0n1,openflags=o_direct

#Mixed Workload Definitions
wd=wd_1,sd=sd_*

#Mixed Run Definitions

rd=rd_timeframe000,wd=wd_1,iorate=max,el=20,interval=1,forrdpct=(50),forseekpct=(70),forxfersize=(4k,32k,128k),forthreads=(128,256),pause=10
#rd=rd_timeframe001,wd=wd_1,iorate=max,el=300,interval=1,forrdpct=(0,10,30),forseekpct=(90,100),forxfersize=(4k,8k,32k),forthreads=(16,32,64),pause=5
------------------------------ end of file (above this line)  ---------------------------

How to read this file?

Mainly, we need to look at the Host Definition and the Storage Definitions.
You can follow along with the downloaded guide for a deeper explanation.

First let's look at the Host Definition; this is where we tell vdbench where to run the test.
Currently, we are only telling it to run the test on client2 (192.168.17.201) with 1 worker on that machine.
* The "system=192.168.17.201" is our client2 and its IP.
* jvms=1 tells it to run 1 worker.
 - We can change it to jvms=8 to have 8 parallel workers on the same node.

Next we tell it which storage to run against; here we specify that we want to run against /dev/nvme0n1.
Our storage is presented as /dev/nvme0n1 on client1 and client2, so we specify "lun=/dev/nvme0n1".
If our storage were a /dev/sdb device, then I would have specified "lun=/dev/sdb".

Finally, the Run Definition (rd=...) loops over every combination of the for... parameters: 1 read percentage (50) x 1 seek percentage (70) x 3 transfer sizes (4k, 32k, 128k) x 2 thread counts (128, 256) = 6 runs, each lasting el=20 seconds (elapsed time) with a 10 second pause in between.
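
Sidenote: as mentioned earlier, vdbench can also test a filesystem instead of a raw device. Per the vdbench guide, an SD can point at a plain file with a size= parameter so vdbench creates it on first use; the path and size below are just illustrative:

#Storage Definition against a file on a mounted filesystem instead of a block device
sd=sd_1,lun=/mnt/vol0/vdbench-testfile,size=10g,openflags=o_direct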

Now let's kick off this job.

Note: vdbench will connect to client2 via ssh using the specified user (root), and it will ask you for a password unless you set up passwordless authentication. I definitely recommend setting up passwordless ssh authentication from your vdbench master to the workers (shown in the next section).

You can run the job from screen, tmux, or via nohup. It's just going to take a while, and if it runs uninterrupted you should be able to collect the results at the end.

A normal job kicks off like this (make sure to run it from tmux or screen):

# ./vdbench -f randRWsmall.parm -o randRWsmall.1/

With nohup it might look like this (it will save the output to a file called randRWsmall.1.out instead of nohup.out):

# nohup ./vdbench -f randRWsmall.parm -o randRWsmall.1/ > randRWsmall.1.out 2>&1 &

Sidenote: if your default shell is not /bin/bash, you should prefix the command with /bin/bash, so it looks like this:
# /bin/bash ./vdbench -f randRWsmall.parm -o randRWsmall.1/ > randRWsmall.1.out 2>&1 &
or like this:
# nohup /bin/bash ./vdbench -f randRWsmall.parm -o randRWsmall.1/ > randRWsmall.1.out 2>&1 &
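
Either way, you can watch progress while it runs: the nohup variant lets you tail the captured stdout, and vdbench also writes its reports (summary.html, logfile.html, etc.) into the -o output directory as it goes.

# follow the live output of the nohup variant
tail -f randRWsmall.1.out

# the reports accumulate in the output directory
ls -l randRWsmall.1/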

Testing with 2 clients (client1 as localhost and client2 as remote):
######################################################################

Here is another test using 2 clients. I will use the master (client1) as a worker and client2 as a second worker.
So client1 connects to itself via ssh and to client2 via ssh.
For this to work you must set up passwordless ssh.

From client1, run these commands to set up passwordless ssh:

ssh-keygen
ssh-copy-id root@localhost  # client1
ssh-copy-id root@client2    # client2

Make sure these 2 commands work without prompting for a password:

ssh root@localhost uptime   # client1
ssh root@client2 uptime     # client2

Now set up the following parameter file.

cd vd

vi randRWsmall-2.parm

------------------------------ start of file (below this line)  ---------------------------
#This is a VDBench Parameter File for testing Max IOPS performance.  This Parm file tests Random Reads and Writes at small request sizes.

data_errors=1000000
messagescan=no
showlba=yes
histogram=(default,20,40,60,80,100,200,400,600,800,1m,2m,4m,6m,8m,10m,20m,40m,60m,80m,100m,200m,400m,600m,800m,1s,2s)

#Host Definition
hd=remote1,system=192.168.17.201,shell=ssh,user=root,jvms=1
hd=localhost1,system=127.0.0.1,shell=ssh,user=root,jvms=1
#hd=storage-vdb102-01,system=10.171.197.221,shell=ssh,user=root,jvms=8
#hd=storage-vdb103-01,system=10.171.197.231,shell=ssh,user=root,jvms=8
#hd=storage-vdb104-01,system=10.171.197.241,shell=ssh,user=root,jvms=8
#hd=storage-vdb101-02,system=10.171.197.212,shell=ssh,user=root,jvms=8
#hd=storage-vdb102-02,system=10.171.197.222,shell=ssh,user=root,jvms=8
#hd=storage-vdb103-02,system=10.171.197.232,shell=ssh,user=root,jvms=8
#hd=storage-vdb104-02,system=10.171.197.242,shell=ssh,user=root,jvms=8

#Storage Definitions
sd=sd_1,lun=/dev/nvme0n1,openflags=o_direct

#Mixed Workload Definitions
wd=wd_1,sd=sd_*

#Mixed Run Definitions

rd=rd_timeframe000,wd=wd_1,iorate=max,el=20,interval=1,forrdpct=(50),forseekpct=(70),forxfersize=(4k,32k,128k),forthreads=(128,256),pause=10
#rd=rd_timeframe001,wd=wd_1,iorate=max,el=300,interval=1,forrdpct=(0,10,30),forseekpct=(90,100),forxfersize=(4k,8k,32k),forthreads=(16,32,64),pause=5
------------------------------ end of file (above this line)  ---------------------------

Note that the test now specifies 2 workers: 192.168.17.201 (client2) and 127.0.0.1 (client1 itself).

Sidenote: the names we pick for hd= can be anything, so I picked remote1 and localhost1.

Now kick off the test with this parameter file and a new output directory.

./vdbench -f randRWsmall-2.parm -o randRWsmall-2-output/

Example output:
==================

~/vd# ./vdbench -f randRWsmall-2.parm -o randRWsmall-2-output/

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Vdbench distribution: vdbench50407 Tue June 05  9:49:29 MDT 2018
For documentation, see 'vdbench.pdf'.

02:28:37.188 input argument scanned: '-frandRWsmall-2.parm'
02:28:37.189 input argument scanned: '-orandRWsmall-2-output/'
02:28:37.294 Starting slave: ssh 127.0.0.1 -l root /root/vd/vdbench SlaveJvm -m localhost -n 127.0.0.1-11-220722-02.28.37.156 -l localhost-0 -p 5570
02:28:37.298 Starting slave: ssh 192.168.17.201 -l root /root/vd/vdbench SlaveJvm -m 192.168.19.36 -n 192.168.17.201-10-220722-02.28.37.156 -l remote1-0 -p 5570
02:28:37.760 All slaves are now connected
02:28:39.002 Starting RD=rd_timeframe000; I/O rate: Uncontrolled MAX; elapsed=20; For loops: rdpct=50 seekpct=70 xfersize=4k threads=128

Jul 22, 2022    interval        i/o   MB/sec   bytes   read     resp     read    write     read    write     resp  queue  cpu%  cpu%
                               rate  1024**2     i/o    pct     time     resp     resp      max      max   stddev  depth sys+u   sys
02:28:40.045           1   198998.0   777.34    4096  49.67    0.404    0.275    0.532     5.08     2.08    0.234   80.5  11.9   6.5
02:28:41.018           2   274790.0  1073.40    4096  50.10    0.431    0.288    0.574     5.43     2.22    0.238  118.5  21.7  16.5
02:28:42.011           3   280796.0  1096.86    4096  49.97    0.428    0.284    0.573     4.25     2.65    0.234  120.3  21.6  16.4
02:28:43.009           4   282856.0  1104.91    4096  49.98    0.425    0.280    0.569     5.49     2.39    0.234  120.1  22.0  16.8
02:28:44.009           5   282836.0  1104.83    4096  49.93    0.425    0.281    0.570     4.52     2.79    0.235  120.3  21.6  16.2
02:28:45.010           6   281538.0  1099.76    4096  50.06    0.426    0.280    0.572     4.70     2.00    0.235  119.9  21.8  16.6
02:28:46.011           7   281256.0  1098.66    4096  50.13    0.427    0.281    0.575     4.67     3.48    0.236  120.2  21.9  16.9
02:28:47.008           8   283540.0  1107.58    4096  49.89    0.425    0.282    0.568     4.19     1.97    0.231  120.5  21.7  16.7
02:28:48.007           9   282924.0  1105.17    4096  50.07    0.424    0.283    0.566     4.05     2.88    0.232  120.0  22.1  17.0
02:28:49.006          10   284613.0  1111.77    4096  50.22    0.424    0.284    0.565     4.67     3.74    0.229  120.7  21.9  16.8
02:28:50.008          11   281907.0  1101.20    4096  50.11    0.428    0.284    0.573     4.83     3.35    0.237  120.7  21.8  16.7
02:28:51.009          12   283206.0  1106.27    4096  50.02    0.424    0.283    0.566     4.99     1.88    0.228  120.1  21.5  16.5
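
The interval lines above are also saved in the output directory, so you can review them after the run. summary.html is the main report (including the per-run averages), logfile.html is the console log, and flatfile.html holds the raw data if you want to parse it yourself:

ls randRWsmall-2-output/
less randRWsmall-2-output/summary.html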

Issues & Troubleshooting:
############################

* Make sure vdbench is installed at the exact same absolute path on the master and all test hosts. Ex: /root/vd

* Make sure the same lun is accessible on all hosts. The test parameters above use /dev/nvme0n1, so make sure /dev/nvme0n1 exists on all of your test hosts.

* Disable iptables/firewalld on the master and test hosts (otherwise you might get the error: no route to host); see the sketch below for a less drastic alternative.
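
If you would rather not disable the firewall outright, opening the vdbench ports should also work. The master/slave handshake in the output above used port 5570, and the vdbench guide also mentions port 5560; a hedged firewalld sketch for CentOS 7:

# on the master and all workers: either stop the firewall for the test...
systemctl stop firewalld

# ...or just open the vdbench ports (5570 seen above; 5560 per the vdbench docs)
firewall-cmd --add-port=5570/tcp --add-port=5560/tcp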
