
Monday, January 15, 2018

Performance test: 24x Intel 150 GB SSDs on a MegaRAID SAS 9361-24i

No RAID, plain JBOD mode; dstat results:

Reading:
lsscsi | grep SSDSC2BB15| awk '{printf"dd if=%s of=/dev/null bs=16K & \n",$7}'| sh
  0  18  21  60   0   0|6421M    0 | 120B  876B|   0     0 |  36k   53k
  0  18  17  64   0   1|6414M    0 | 120B  346B|   0     0 |  36k   53k
  0  18  14  67   0   1|6413M    0 | 120B  346B|   0     0 |  36k   53k
Writing:
lsscsi | grep SSDSC2BB15| awk '{printf"dd of=%s if=/dev/zero bs=16K & \n",$7}'| sh 
  0   9  11  77   0   3|   0  4503M| 120B  346B|   0     0 |  24k 8055
  0   9  11  77   0   3|   0  4525M| 120B  346B|   0     0 |  24k 8046
  0   9  13  75   0   3|   0  4505M| 120B  346B|   0     0 |  25k 7968

Adding more stress, reading and writing at the same time:
lsscsi | grep SSDSC2BB15| awk '{printf"dd if=%s of=/dev/null bs=16K & \n",$7}'| sh
lsscsi | grep SSDSC2BB15| awk '{printf"dd of=%s if=/dev/zero bs=16K & \n",$7}'| sh 
  0  19   1  75   0   4|3420M 3401M| 120B  362B|   0     0 |  37k   36k
  0  18   1  77   0   3|3282M 3261M| 180B  362B|   0     0 |  36k   34k
  0  19   2  76   0   3|3297M 3215M| 180B  362B|   0     0 |  35k   33k
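The columns above are dstat's default view (CPU usage, total disk read/write, network, paging, interrupts/context switches). Since the generated dd jobs run in the background, here is a minimal sketch for monitoring and then stopping the test, assuming nothing else on the box is running dd:

# watch aggregate throughput while the background dd jobs run (default dstat columns)
dstat
# stop the test: kill every background dd reader/writer on the box
killall dd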

Not bad :)

 

Tuesday, October 17, 2017

Performance: consumer NVMe SSD vs. Samsung EVO SSD

Micron 1100 256GB M.2 vs Samsung SSD 850 EVO 1TB

[Benchmark charts: Micron vs Samsung]

Thursday, February 23, 2017

SMART info for NVMe

nvme smart-log /dev/nvme0
Smart Log for NVME device:nvme0 namespace-id:ffffffff
critical_warning                    : 0
temperature                         : 26 C
available_spare                     : 100%
available_spare_threshold           : 10%
percentage_used                     : 0%
data_units_read                     : 174,995
data_units_written                  : 283,289
host_read_commands                  : 2,327,186
host_write_commands                 : 842,433
controller_busy_time                : 2
power_cycles                        : 2
power_on_hours                      : 2
unsafe_shutdowns                    : 0
media_errors                        : 0
num_err_log_entries                 : 0
Warning Temperature Time            : 0
Critical Composite Temperature Time : 0
Temperature Sensor 1                : 26 C
Temperature Sensor 2                : 47 C
Temperature Sensor 3                : 0 C
Temperature Sensor 4                : 0 C
Temperature Sensor 5                : 0 C
Temperature Sensor 6                : 0 C
Temperature Sensor 7                : 0 C
Temperature Sensor 8                : 0 C
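For reference, the NVMe spec counts data_units_read/written in units of 1000 x 512-byte blocks, so the counters above translate to bytes with simple shell arithmetic:

# 1 data unit = 1000 * 512 bytes (NVMe spec); values taken from the smart-log above
echo $(( 283289 * 1000 * 512 ))   # data_units_written -> 145043968000 bytes, ~145 GB
echo $(( 174995 * 1000 * 512 ))   # data_units_read    ->  89597440000 bytes,  ~90 GB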

Thursday, February 2, 2017

Intel NVMe performance on CentOS 7.3

[root@c1701 ~]# hdparm -tT --direct /dev/nvme0n1

/dev/nvme0n1:
 Timing O_DIRECT cached reads:   2512 MB in  2.00 seconds = 1255.72 MB/sec
 Timing O_DIRECT disk reads: 3604 MB in  3.00 seconds = 1200.83 MB/sec

[root@c1701 ~]# hdparm -tT --direct /dev/nvme0n1

/dev/nvme0n1:
 Timing O_DIRECT cached reads:   2398 MB in  2.00 seconds = 1198.11 MB/sec
 Timing O_DIRECT disk reads: 3720 MB in  3.00 seconds = 1239.86 MB/sec
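For comparison with the dd-based tests elsewhere on this blog, a similar large-block direct read against the NVMe device could be run like this; block size and count are arbitrary choices, not values from the post:

# sequential O_DIRECT read, 16 MB blocks, 256 blocks = 4 GiB total (assumed parameters)
dd if=/dev/nvme0n1 of=/dev/null bs=16M count=256 iflag=direct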

Saturday, January 28, 2017

PCI/SSD/NVMe performance challenge begins! - Part II

Recently we got a test server with a PCI-e SSD: an Intel P3600 1.2TB.
The operating system sees it as: INTEL SSDPEDME012T4
[root@c1701 vector]# isdct show -nvmelog smarthealthinfo

- SMART and Health Information CVMD550300L81P2CGN -

Available Spare Normalized percentage of the remaining spare capacity available : 100
Available Spare Threshold Percentage : 10
Available Spare Space has fallen below the threshold : False
Controller Busy Time : 0x19
Critical Warnings : 0
Data Units Read : 0x380EC1
Data Units Written : 0xC0C8D4
Host Read Commands : 0x031AA3A3
Host Write Commands : 0x08B44DE8
Media Errors : 0x0
Number of Error Info Log Entries : 0x0
Percentage Used : 0
Power Cycles : 0x10
Power On Hours : 0x01FD
Media is in a read-only mode : False
Device reliability has degraded : False
Temperature - Celsius : 35
Temperature has exceeded a critical threshold : False
Unsafe Shutdowns : 0x08
Volatile memory backup device has failed : False
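isdct reports several of these counters in hex; they convert to decimal with the shell's printf (values copied from the output above):

printf '%d\n' 0x01FD   # Power On Hours        -> 509
printf '%d\n' 0x19     # Controller Busy Time  -> 25
printf '%d\n' 0x10     # Power Cycles          -> 16
printf '%d\n' 0x08     # Unsafe Shutdowns      -> 8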


Now the question arises: do we need PCI-e NVMe SSDs at all? How can we use them to boost any of our daily-used systems?
The bad thing about PCI-e is serviceability: if the card starts to fail, one needs to power off the box to replace it, which is unacceptable for time-critical systems. One could build some kind of HA pair of boxes, but any HA layer adds its own failure probability and certainly drags down the hardware performance.

As a first quick test, I started checking the Robinhood scans of the Lustre FS.
This process runs as a daily cron job on a dedicated client.
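A crontab entry for such a scan could look roughly like the following sketch; the schedule, binary path and scan options are assumptions for illustration, not the exact job used here:

# /etc/cron.d/robinhood-scan (hypothetical file and schedule)
# run a one-shot Robinhood scan of the Lustre filesystem every night at 02:30
30 2 * * * root /usr/sbin/robinhood --scan --once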

Here are the benchmarks with recent "Lustre client" hardware:

Aim: keep the Lustre FS file tables in MySQL InnoDB.

===========================================
Robinhood 3 with CentOS 7.3 and Lustre 2.9 client
Lustre server 2.9 on CentOS 6.8, 4x4 mirrored Intel SSD MDS,
3 JBODs - 18 OSTs (450-600 MB/s per OST), 3x MR9271-8i per OST, 24 disks per OST, FDR 56 Gbit interconnect,
with MPI tests up to >3.5 GB/s random IO.
Currently used: 129TB out of 200TB, no striped datasets.
Data distribution histogram: files count / size

[root@c1701 ~]# time lfs find /llust  -type f | wc -l
23935507

real    11m13.261s
user    0m12.464s
sys     2m15.665s

[root@c1701 ~]# time lfs find /llust  -type d | wc -l
776191

real    6m40.706s
user    0m3.500s
sys     0m56.831s

[root@c1701 ~]# time lfs find /llust   | wc -l
24888641

real    3m52.773s
user    0m10.867s
sys     1m4.731s


The bottleneck is probably the MDS, but it is fast enough to feed our InnoDB with file stats and locations.

===========================================
The final Robinhood InnoDB database size is about 100GB.
Engine: InnoDB from Oracle MySQL, mysql-5.7.15-linux-glibc2.5-x86_64. (Why not MariaDB? There are some deviations with MariaDB; it is a little bit slower than Oracle MySQL, roughly 5-10%, but for the final test conclusions this is not relevant.)

my.cnf
innodb_buffer_pool_size=40G
tmp_table_size=512M
max_heap_table_size=512M
innodb_buffer_pool_instances=64
innodb_flush_log_at_trx_commit=2
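Note that innodb_flush_log_at_trx_commit=2 trades strict crash durability (up to about a second of committed transactions can be lost) for write throughput, which is acceptable for a rebuildable scan database. A quick, generic way to confirm the running server picked up these settings, assuming a local mysql client login works:

# verify the running server actually uses the configured values
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'; SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"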

==================================
We set up a RAIDZ2 of 8x 3TB HGST HUS724030ALA640 drives on an LSI-9300-8i in IT mode.

FS setup for the tests:
  1. zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh -f
  2. zfs set compression=lz4 tank
  3. zpool create nvtank /dev/nvme0n1
  4. zfs set compression=lz4 nvtank
  5. mkfs.xfs -Lnvtank  /dev/nvme0n1 -f
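For the XFS run the new filesystem still has to be mounted and MySQL pointed at it; one way to do that is sketched below, where the mount point and datadir path are assumptions for illustration, not values from the post:

# mount the labeled XFS filesystem and host the MySQL datadir on it
# (/nvtank and the datadir path are assumed, not taken from the original setup)
mkdir -p /nvtank
mount LABEL=nvtank /nvtank
# then in my.cnf: datadir=/nvtank/mysql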

Results, mytop queries per second (qps):
  1. 8xRAIDZ2 compression none qps: ~98 burst up to 300
  2. 8xRAIDZ2 compression lz4 qps: ~107 burst up to 480
  3. nvmeZFS compression none qps: 5000 burst up to 12000
  4. nvme XFS qps: ~17000 burst up to 18000
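The qps figures above were read from mytop; an alternative, script-friendly way to watch the same counter uses only standard mysqladmin options (nothing post-specific assumed beyond a working root login):

# print relative extended status every 10 seconds and pick out the Queries counter
mysqladmin -u root -p -r -i 10 extended-status | grep -w 'Queries'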

First results show that XFS wins against ZFS. The latency of the PCI-e device is impressive, but once you add a software layer on top of it, like ZFS with compression, it becomes slower, though still not nearly as slow as 8x spinning disks.