Discussion:
SAN storage: Increasing Informix Write Parallelism in Innovator-C
s***@gmail.com
2017-05-03 19:18:34 UTC
When using Innovator-C against SAN storage (which is unavoidable in some cloud infrastructure situations) on a Linux (CentOS 6.x) system, write latency becomes the performance bottleneck. We've seen initialization of a 1/2 TiB cooked dbspace take over an hour, with throughput in the low 100s of MB/s, even though the same storage benchmarks much higher MB/s with fio when configured for many parallel, large-block-size writes.
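
(For reference, the fio comparison was a large-block sequential write job with a deep queue, roughly along these lines; the path and size here are illustrative, not the exact job we ran.)

{code}
# Illustrative fio job: large sequential writes, many I/Os in flight.
# Path and size are placeholders, not our actual test parameters.
fio --name=seqwrite \
    --filename=/mnt/san/fio.test \
    --rw=write \
    --bs=1M \
    --size=32G \
    --ioengine=libaio \
    --direct=1 \
    --iodepth=64
{code}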

We're interested in increasing parallelism for the initialization as well as ensuring maximum parallelism for normal operations. fio does well with an iodepth of 64, keeping many writes in flight against the disk; this leads to considering increasing the number of AIO VPs dramatically (this is a cooked file and Innovator-C does not support direct I/O, so I conclude the AIO VPs will be used for access to this dbspace).
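
For concreteness, by "increasing AIO VPs dramatically" I mean raising the AIO VP count in the ONCONFIG along these lines (the value 32 is just an illustration, not a tested recommendation):

{code}
# ONCONFIG fragment: request more AIO virtual processors.
# 32 is an arbitrary illustrative value, not a recommendation.
# Note: if AUTO_AIOVPS is enabled, the server may also add AIO VPs on its own.
VPCLASS aio,num=32
{code}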

The following are stats from the initialization (the only use of the instance so far has been installation and creating dbspaces; the only significant dbspace is the single 1/2 TiB dbspace in question).

{code}
$ onstat -g iov

IBM Informix Dynamic Server Version 11.70.FC8IE -- On-Line -- Up 7 days 01:58:12 -- 1907928 Kbytes

AIO I/O vps:
class/vp/id s io/s totalops dskread dskwrite dskcopy wakeups io/wup errors tempops
fifo 7 0 i 0.0 0 0 0 0 1 0.0 0 0
kio -1 0 i 0.0 29687 2058 27629 0 89384 0.3 0 0
msc 6 0 i 0.0 496 0 0 0 497 1.0 0 496
aio 5 0 i 0.0 18613 6796 8349 0 17553 1.1 0 0
aio 8 1 i 0.0 2215 326 1878 0 2015 1.1 0 0
aio 9 2 i 0.0 2282 54 2227 0 1511 1.5 0 0
aio 10 3 i 0.0 1978 30 1947 0 1331 1.5 0 0
aio 11 4 i 0.0 1796 20 1773 0 1222 1.5 0 0
aio 12 5 i 0.0 1598 20 1578 0 1149 1.4 0 0
pio 4 0 i 0.0 0 0 0 0 1 0.0 0 0
lio 3 0 i 0.0 0 0 0 0 1 0.0 0 0

$ onstat -g glo

IBM Informix Dynamic Server Version 11.70.FC8IE -- On-Line -- Up 7 days 02:18:28 -- 1907928 Kbytes

MT global info:
sessions threads vps lngspins
0 35 13 0

sched calls thread switches yield 0 yield n yield forever
total: 12378189 9602519 2819012 7987793 169699
per sec: 57 45 12 39 0

Virtual processor summary:
class vps usercpu syscpu total
cpu 1 62.66 7.21 69.87
aio 6 8.25 22.12 30.37
lio 1 2.10 3.66 5.76
pio 1 1.57 3.83 5.40
adm 1 5.21 22.91 28.12
soc 1 27.90 26.76 54.66
msc 1 0.02 0.01 0.03
fifo 1 1.75 3.67 5.42
total 13 109.46 90.17 199.63

Individual virtual processors:
vp pid class usercpu syscpu total Thread Eff
1 1800 cpu 62.66 7.21 69.87 127.73 54%
2 1801 adm 5.21 22.91 28.12 0.00 0%
3 1802 lio 2.10 3.66 5.76 5.76 100%
4 1803 pio 1.57 3.83 5.40 5.40 100%
5 1804 aio 1.10 4.87 5.97 75.01 7%
6 1805 msc 0.02 0.01 0.03 0.04 73%
7 1806 fifo 1.75 3.67 5.42 5.42 100%
8 1807 aio 1.22 3.73 4.95 18.49 26%
9 1808 aio 1.27 3.55 4.82 15.46 31%
10 1809 aio 1.60 3.40 5.00 15.12 33%
11 1810 aio 1.61 3.27 4.88 14.03 34%
12 1811 aio 1.45 3.30 4.75 13.19 36%
13 1812 soc 27.90 26.76 54.66 NA NA
tot 109.46 90.17 199.63

$
{code}

Any thoughts/advice?

Many thanks -

Stephen
s***@gmail.com
2017-05-03 22:41:25 UTC
Update: Art Kagel has replied and I have feedback from IBM Support.

Art indicates driving up the number of AIO VPs until I/O per wakeup (io/wup) is < 1 for at least one AIO VP. This was my initial thought, but it seemed it would not help initialization if initialization is single-threaded.
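
For anyone following along, the incremental approach Art suggests would presumably look like this on a running instance (the increment of 2 is arbitrary):

{code}
# Add two more AIO VPs to the running instance, then re-check
# onstat -g iov; repeat until io/wup drops below 1.0 for at
# least one aio VP.  The increment of 2 is arbitrary.
onmode -p +2 aio
onstat -g iov
{code}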

IBM Support reiterated that it's single-threaded and that nothing can be done about this. Adding a chunk involves two steps:
(1) expansion of the cooked chunk file, and
(2) zero-filling and initialization of the chunk.
You can't do anything about #2, as that must be done as you create the chunk, but the feedback is that it's really #1 that takes up most of the time. You could in theory have a pool of cooked files already created (with the mkfile command) at the size you need, then, when needed by the DB server, add the chunk using those pre-filled files, saving lots of I/O time.
For our Linux ext4 (not journaled) filesystem, the best approach would seem to be to use fallocate to create the file and then add it as a chunk to the dbspace, roughly as sketched below. We'll give this a try and post the results.
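
A rough sketch of what we plan to try (chunk path, dbspace name, and size are placeholders, not our real values):

{code}
# Pre-allocate the cooked file on ext4 with fallocate rather than
# letting the server grow it.  Path, dbspace name, and size are
# placeholders.
fallocate -l 512G /ifmxchunks/data_chunk1
chown informix:informix /ifmxchunks/data_chunk1
chmod 660 /ifmxchunks/data_chunk1

# Create the dbspace with the pre-allocated file as its initial chunk
# (-s is in KB; 536870912 KB = 512 GiB).
onspaces -c -d datadbs1 -p /ifmxchunks/data_chunk1 -o 0 -s 536870912
{code}

Whether pre-allocating this way actually shortens the server-side zero-fill/initialization is exactly what we intend to measure.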