This will be the last hardware design performance test that I'll be running for the roadtest.

 

This is the SATA performance test: UltraZed-EV-RD-SATA_Performance

The link provides a pre-built PetaLinux image that includes a test script, drive-test.sh, which facilitates running SATA drive read/write and I/O performance tests.  The script encapsulates the following three tests:

  1. dd - This utility from the Linux Coreutils package is a very simple tool that can be used to measure read/write throughput to a target storage device
  2. hdparm - This tool is often used to measure the read performance of Flash and disk drives. It can also be used to change drive settings and even securely erase SSDs
  3. bonnie++ - This is a small utility for performing extensive benchmarking of file system I/O performance
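Under the hood the script just wraps commands like the ones below. This is only a sketch: the device name (sda1 here) and mount point are assumptions that depend on how the drive actually enumerates on your board.

```
# Write throughput: stream a 4GB file of zeros to the mounted drive
time sh -c "dd if=/dev/zero of=/mnt/sda1/test.tmp bs=4k count=1000000 && sync"

# Read timing: -T measures cached (memory) reads, -t measures buffered disk reads
hdparm -T -t /dev/sda1

# File system benchmark: run as root against a scratch directory on the drive
bonnie++ -d /mnt/sda1/tmp -r 4096 -n 16 -u root
```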

 

For the tests I'll be using this SSD drive: WD Blue 3D NAND 500GB Internal PC SSD - SATA III 6 Gb/s, 2.5"/7mm, Up to 560 MB/s - WDS500G2B0A.

Internal drives are normally unformatted, so I set this one up as a single NTFS partition to allow the drive to also be used with Windows.

 

I was initially concerned about possible power sequencing issues because the UltraZed-EV does not have a power connector for the SATA drive, only a data interface, which means that I need to provide an external power supply.  But SATA is designed electrically for hot-plug applications and the data signals are AC-coupled differential lines, so there is little risk of damage.  I noticed that the ZCU102 board does provide a drive power connector, which makes using a drive a lot more convenient, but I can understand the PCB real estate tradeoff.

 

This seems like a very simple test, but I ended up getting into a lot of trouble because I had formatted the drive as NTFS.

 

drive-test.sh

#
#        ** **        **          **  ****      **  **********  **********
#       **   **        **        **   ** **     **  **              **
#      **     **        **      **    **  **    **  **              **
#     **       **        **    **     **   **   **  *********       **
#    **         **        **  **      **    **  **  **              **
#   **           **        ****       **     ** **  **              **
#  **  .........  **        **        **      ****  **********      **
#     ...........
#                                     Reach Further
#
# ----------------------------------------------------------------------------
#
#  This design is the property of Avnet.  Publication of this
#  design is not authorized without written consent from Avnet.
#
#  Please direct any questions to the UltraZed community support forum:
#     http://www.ultrazed.org/forum
#
#  Product information is available at:
#     http://www.ultrazed.org/product/ultrazed-EG
#
#  Disclaimer:
#     Avnet, Inc. makes no warranty for the use of this code or design.
#     This code is provided  "As Is". Avnet, Inc assumes no responsibility for
#     any errors, which may appear in this code, nor does it make a commitment
#     to update the information contained herein. Avnet, Inc specifically
#     disclaims any implied warranties of fitness for a particular purpose.
#                      Copyright(c) 2016 Avnet, Inc.
#                              All rights reserved.
#
# ----------------------------------------------------------------------------
#
#  Create Date:         Mar 04, 2018
#  Design Name:         Disk Drive Performance Tests
#  Module Name:         drive-test.sh
#  Project Name:        UltraZed-EV EV Carrier SD Boot OOB
#  Target Devices:      Xilinx Zynq UltraScale+ EV MPSoC
#  Hardware Boards:     UltraZed-EV + EV Carrier
#
#  Tool versions:       Xilinx Vivado 2017.3
#
#  Description:         Script to run performance tests for
#                       block device (/dev/sd<x>) storage media
#
#  Dependencies:
#
#  Revision:            Mar 04, 2018: 1.0 Initial version
#
# ----------------------------------------------------------------------------
#!/bin/sh


SLEEP_INTERVAL=2s
IS_MOUNTED_TEST_RESULT=-1




function cleanup(){
    # Clean up before exiting
    # Cleanup the shrapnel the tests leave behind
    rm -rf /mnt/${BLOCK_DEVICE}/tmp
    rm -rf /mnt/${BLOCK_DEVICE}/test.tmp
    # Don't forget to unmount
    umount /mnt/${BLOCK_DEVICE}
    # Delete the mount point
    rm -rf /mnt/${BLOCK_DEVICE}
}


function init(){
    # Check to see if the device is mounted
    df | grep /dev/${BLOCK_DEVICE} > /dev/null
    IS_MOUNTED_TEST_RESULT=$?


    # If it is mounted
    if [ $IS_MOUNTED_TEST_RESULT == "0" ];
    then
        # Then unmount it
        umount /dev/${BLOCK_DEVICE}


    else
        # Device is not mounted.  Test if the mount point exists.
        if [ -e /mnt/${BLOCK_DEVICE} ];
        then
            # If it exists then do nothing
            echo " "
        else
            # It does not exist, so create it
            mkdir /mnt/${BLOCK_DEVICE}
        fi
    fi


    # Device has been unmounted or was not already mounted, so do that now
    mount /dev/${BLOCK_DEVICE} /mnt/${BLOCK_DEVICE}


    # Delete evidence of previous tests if it exists
    if [ -e /mnt/${BLOCK_DEVICE}/tmp ];
    then
        rm -rf /mnt/${BLOCK_DEVICE}/tmp
    fi


    if [ -e /mnt/${BLOCK_DEVICE}/test.tmp ];
    then
        rm -rf /mnt/${BLOCK_DEVICE}/test.tmp
    fi
}


## The usual terse usage information:
##
function usage_error(){
    echo >&2
    echo "Performance test utility for block device (eg /dev/sda1) storage media." >&2
    echo "Parse the 'dmesg' output to determine the system device the media" >&2
    echo "has been attached to." >&2
    echo "Usage:  $0 [OPTION]" >&2
    echo "-h      Display this help and exit" >&2
    echo "-d      Block device to use (usually sda1 or sdb1)" >&2
    echo "        if the drive is not partitioned this will be sans partition" >&2
    echo "        number (eg. sda or sdb)" >&2
    echo "-t      Test to run <bonnie++ | hdparm | dd>" >&2
    echo "Eg:     $0 -d sda1 -t dd" >&2
    echo >&2
    exit 1
}


function script_intro(){
    echo " "
    echo "******************************************************************"
    echo "***      ****  **      **  ****    **  ********  **********    ***"
    echo "***     **  **  **    **   ** **   **  **            **        ***"
    echo "***    **    **  **  **    **  **  **  *******       **        ***"
    echo "***   **      **  ****     **   ** **  **            **        ***"
    echo "***  **  ....  **  **      **    ****  ********      **        ***"
    echo "***     ......                                                 ***"
    echo "***                                                            ***"
    echo "*** This is a simple script to run the dd, hdparm, and         ***"
    echo "*** bonnie++ test applications to determine the maximum        ***"
    echo "*** achievable read and write performance for SATA             ***"
    echo "*** and USB SSDs and Flash drives.                             ***"
    echo "***                                                            ***"
    echo "*** More information about bonnie++ can be found at            ***"
    echo "*** http://www.coker.com.au/bonnie++/readme.html               ***"
    echo "***                                                            ***"
    echo "*** This test will unmount the drive if it is already mounted! ***"
    echo "***                                                            ***"
    echo "******************************************************************"
    echo " "
}


function dd_test() {
    echo "Use the 'dd' command to test how long it takes to write a 4GB file to the disk."
    echo "time sh -c dd if=/dev/zero of=/mnt/${BLOCK_DEVICE}/test.tmp bs=4k count=1000000 && sync"
    echo " "
    time sh -c "dd if=/dev/zero of=/mnt/${BLOCK_DEVICE}/test.tmp bs=4k count=1000000 && sync"
}


function hdparm_test() {
    echo " "
    echo "Use the 'hdparm' command to test the read times for the disk."
    echo "Run this test a few times and calculate the average."
    echo "hdparm -T -t /dev/${BLOCK_DEVICE}"
    hdparm -T -t /dev/${BLOCK_DEVICE}
    sleep ${SLEEP_INTERVAL}
    hdparm -T -t /dev/${BLOCK_DEVICE}
    sleep ${SLEEP_INTERVAL}
    hdparm -T -t /dev/${BLOCK_DEVICE}
    echo " "
}


function bonnie_test() {
    # Create the tmp folder for the bonnie++ test
    mkdir /mnt/${BLOCK_DEVICE}/tmp


    echo " "
    echo "Use the 'Bonnie++' command to test the time for sequential and random"
    echo "reads and writes for the disk."
    echo "NOTE: This test takes a few minutes, depending on the speed of the disk"
    echo "bonnie++ -d /mnt/${BLOCK_DEVICE}/tmp -r 4096 -n 16 -u root"
    echo " "
    time bonnie++ -d /mnt/${BLOCK_DEVICE}/tmp -r 4096 -n 16 -u root
}


# START HERE: Non-boilerplate code above should be contained within
# functions so that at this point simple high level calls can be made to
# the bigger blocks above.
# Check to see if the mass storage block device is enumerated.


while getopts "d:t:h" opt;
do
    case ${opt} in
        h)
            usage_error
            ;;
        d)
            BLOCK_DEVICE="$OPTARG"
            ;;
        t)
            TEST_TO_RUN="$OPTARG"
            ;;
        \?)
            echo "Invalid option: -$OPTARG" >&2
            usage_error
            ;;
    esac
done


if [ -b /dev/${BLOCK_DEVICE} ];
then


    script_intro


    read -p "Press enter to continue..."


    # Do some housekeeping
    init
    sleep ${SLEEP_INTERVAL}


    # Run the dd tests
    if [ "$TEST_TO_RUN" == "dd" ]
    then
        dd_test
        sleep ${SLEEP_INTERVAL}
    fi


    # Run the hdparm test
    if [ "$TEST_TO_RUN" == "hdparm" ]
    then
        hdparm_test
        sleep ${SLEEP_INTERVAL}
    fi


    # Run the bonnie++ test
    if [ "$TEST_TO_RUN" == "bonnie++" ]
    then
        bonnie_test
        sleep ${SLEEP_INTERVAL}
    fi


    # Clean up before exiting
    # Cleanup the shrapnel the tests leave behind
    cleanup


else
    echo "******************************************************************"
    echo " "
    echo "   No Mass Storage Block Device Enumerated!"
    echo "   Make sure the SATA or USB3 drive is connected to the board!"
    echo " "
    echo "******************************************************************"
    usage_error
fi

 

 

Test procedure

Boot and verify that the SATA interface is up

dmesg | grep -i sata

 

Check for the drive using fdisk -l.  Drive is /dev/sda1.  Type is HPFS/NTFS.

 

Running dd test

./drive-test.sh -d sda1 -t dd

The dd program copies less than 2GB of data before it thinks the disk is full, even though fdisk reports it as a 500GB drive.  Apparently this PetaLinux build doesn't support NTFS.

 

Appendix I: Troubleshooting SATA Connection of the test document states:

Verify that the SATA III drive is partitioned and formatted with a file system that is compatible with the Linux build generated from the PetaLinux project. The FAT32 file system is supported with the provided pre-built image accompanying this tutorial but other file system types might also be supported.

 

The SATA drive used in the reference test was a Delkin Utility+ 64GB SATA III SSD formatted as FAT32.  Windows 10 will not allow me to format a 500GB partition as FAT32.

 

But I noticed that I could configure the kernel in PetaLinux to add NTFS capability.
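For anyone following along, the relevant option is the old in-kernel NTFS driver. A sketch of the PetaLinux steps (menu paths abbreviated; note that the write-support option only allows very limited, overwrite-in-place writes):

```
# From the PetaLinux project directory
petalinux-config -c kernel
#   File systems -> DOS/FAT/NT Filesystems ->
#     <*> NTFS file system support     (CONFIG_NTFS_FS)
#     [*]   NTFS write support         (CONFIG_NTFS_RW, limited)
petalinux-build
```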

 

So, I built another image and ran the test again.

Now I have a problem because NTFS is apparently mounted read-only!

 

I discovered from searching that I could probably add NTFS-3G with a Yocto recipe, which would give me read/write capability.  At this point I've decided to punt and proceed with the drive formatted as EXT4.  I'll come back and figure this out later as I'd really like to use NTFS.  It seems that not many people are doing this in the embedded world?
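If I do come back to this, the ntfs-3g route would look something like the following. This is a sketch based on the usual PetaLinux user-package workflow; the recipe name ntfs-3g-ntfsprogs comes from meta-openembedded and is an assumption for this particular BSP:

```
# Add the package to the user rootfs config, then enable it in the menu
echo "CONFIG_ntfs-3g-ntfsprogs" >> project-spec/meta-user/conf/user-rootfsconfig
petalinux-config -c rootfs     # enable it under user packages
petalinux-build

# On the target, mount with the FUSE driver for full read/write access
mount -t ntfs-3g /dev/sda1 /mnt/sda1
```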

 

Formatted the drive as EXT4.
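For the record, re-doing the drive from a Linux host is straightforward (assuming the drive shows up as /dev/sda; verify with lsblk before running anything destructive):

```
# Destructive! Double-check the device name first.
fdisk /dev/sda          # create a single Linux (type 83) partition
mkfs.ext4 /dev/sda1     # format the new partition as EXT4
```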

 

Check for the drive using fdisk -l.  Drive is /dev/sda1.  Type is now Linux.

 

Running dd test

./drive-test.sh -d sda1 -t dd

 

Test is now running correctly.

4096000000 bytes (4.1 GB, 3.8 GiB) copied, 13.0229 s, 315 MB/s

 

I had thought I'd get better performance since the drive itself is rated at up to 560 MB/s read and 530 MB/s write, but I'm not sure how to determine what the overhead is.
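The reported rate can at least be sanity-checked from dd's own output (bytes divided by elapsed seconds). Also, since dd writes through the Linux page cache by default, adding oflag=direct is one common way to take the cache out of the measurement; the variant below is a sketch with an assumed mount point, not something I ran here.

```shell
# Reproduce dd's reported rate: 4,096,000,000 bytes in 13.0229 s
awk 'BEGIN { printf "%.0f MB/s\n", 4096000000 / 13.0229 / 1000000 }'

# Cache-bypassing variant of the write test (larger blocks suit direct I/O):
#   dd if=/dev/zero of=/mnt/sda1/test.tmp bs=1M count=4096 oflag=direct
```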

The reference drive in the documentation (64GB Delkin) measured 76.3 MB/s.

 

Running hdparm test

./drive-test.sh -d sda1 -t hdparm

Timing cached reads:   2166 MB in  2.00 seconds = 1082.55 MB/sec

Timing buffered disk reads: 1394 MB in  3.00 seconds = 464.11 MB/sec

 

Comparable to the reference drive.  (Note that hdparm's -T figure mostly measures cache/memory throughput; the -t buffered disk read is the number that reflects the drive itself.)

 

Running bonnie++ test

./drive-test.sh -d sda1 -t bonnie++

 

12226 is the speed (in KBytes/sec) at which the dataset was written a single character at a time.

406518 is the speed (in KBytes/sec) at which a file is written a block at a time.

12239 is the speed (in KBytes/sec) at which the dataset was read a single character at a time.

545003 is the speed (in KBytes/sec) at which a file is read a block at a time.

 

Reference drive (much slower in the block write):

10912 is the speed (in KBytes/sec) at which the dataset was written a single character at a time.

73385 is the speed (in KBytes/sec) at which a file is written a block at a time.

12172 is the speed (in KBytes/sec) at which the dataset was read a single character at a time.

508666 is the speed (in KBytes/sec) at which a file is read a block at a time.

 

Time to move on to trying to get my project working.  I should mention that I did try to format the drive as FAT32 using mkdosfs on Ubuntu, and that almost worked.  It formatted fine and is usable with Linux, but unfortunately Windows 10 will recognize the drive and not let me do anything with it.  In Disk Management it can see the volume but won't allow anything except deleting it.  The horrors of OS interoperability.  I'll need to figure this out later as well.  Large external SSDs and hard drives pre-formatted with FAT32 don't have these issues, so I just don't know the correct method of setting up the drive.  I'd really prefer NTFS if I can get that to work with PetaLinux.
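On the FAT32 interoperability problem: Windows refuses to create FAT32 volumes over 32GB but can read larger ones, and one likely culprit here is the partition type byte, which Windows keys off.  A sketch of the usual recipe (device name assumed, and I haven't verified this end-to-end with Windows 10): set the partition type to 0x0c (W95 FAT32 LBA) before formatting with a 32-bit FAT.

```
# Destructive! In fdisk, use the 't' command to set the partition type
# to 0x0c (W95 FAT32 LBA), then format with a 32-bit FAT:
mkfs.vfat -F 32 -n WDBLUE /dev/sda1
```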

 

 

Links to previous posts for this roadtest:

  1. Avnet UltraZed-EV Starter Kit Road Test- the adventure begins.....
  2. Avnet UltraZed-EV Starter Kit Road Test - VCU TRD
  3. Avnet UltraZed-EV Starter Kit Road Test - VCU TRD continued
  4. Avnet UltraZed-EV Starter Kit Road Test - Port PYNQv2.5
  5. Avnet UltraZed-EV Starter Kit Road Test - Port PYNQv2.5 continued
  6. Avnet UltraZed-EV Starter Kit Road Test - Vitis AI
  7. Avnet UltraZed-EV Starter Kit Road Test - Overview
  8. Avnet UltraZed-EV Starter Kit Road Test - GStreamer difficulties
  9. Avnet UltraZed-EV Starter Kit Road Test - Network Performance Test