Discussion:
Memory leak in 2.6.11-rc1?
Jan Kasprzak
2005-01-21 16:19:59 UTC
Hi all,

I've been running 2.6.11-rc1 on my dual opteron Fedora Core 3 box for a week
now, and I think there is a memory leak somewhere. I am measuring the
size of active and inactive pages (from /proc/meminfo), and it seems
that the sum of active+inactive pages is decreasing. Please
take a look at the graphs at

http://www.linux.cz/stats/mrtg-rrd/vm_active.html

(especially the "monthly" graph) - I've booted 2.6.11-rc1 last Friday,
and since then the size of "inactive" pages is decreasing almost
constantly, while "active" is not increasing. The active+inactive
sum has been steady before, as you can see from both the monthly
and yearly graphs.

Now I am playing with 2.6.11-rc1-bk snapshots to see what happens.
I have been running 2.6.10-rc3 before. More info is available, please ask me.
The box runs 3ware 7506-8 controller with SW RAID-0, 1, and 5 volumes,
Tigon3 network card. The main load is FTP server, and there is also
a HTTP server and Qmail.
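
(A minimal C sketch of this kind of sampling - reading the Active: and
Inactive: fields straight out of /proc/meminfo - for anyone who wants to
reproduce the measurement by hand; it is illustrative only, not the MRTG
collector actually behind those graphs.)

/* sample_meminfo.c - print Active, Inactive and their sum from /proc/meminfo.
 * Build: gcc -Wall -o sample_meminfo sample_meminfo.c
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[256], key[64];
	unsigned long active = 0, inactive = 0, val;

	if (!f) {
		perror("fopen /proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%63[^:]: %lu", key, &val) != 2)
			continue;
		if (strcmp(key, "Active") == 0)
			active = val;
		else if (strcmp(key, "Inactive") == 0)
			inactive = val;
	}
	fclose(f);
	printf("Active: %lu kB  Inactive: %lu kB  active+inactive: %lu kB\n",
	       active, inactive, active + inactive);
	return 0;
}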

Thanks,

-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| GPG: ID 1024/D3498839 Fingerprint 0D99A7FB206605D7 8B35FCDE05B18A5E |
| http://www.fi.muni.cz/~kas/ Czech Linux Homepage: http://www.linux.cz/ |
Whatever the Java applications and desktop dances may lead to, Unix will
still be pushing the packets around for quite a while. --Rob Pike
Alexander Nyberg
2005-01-22 02:23:59 UTC
Post by Jan Kasprzak
Hi all,
I've been running 2.6.11-rc1 on my dual opteron Fedora Core 3 box for a week
now, and I think there is a memory leak somewhere. I am measuring the
size of active and inactive pages (from /proc/meminfo), and it seems
that the count of sum (active+inactive) pages is decreasing. Please
take look at the graphs at
http://www.linux.cz/stats/mrtg-rrd/vm_active.html
(especially the "monthly" graph) - I've booted 2.6.11-rc1 last Friday,
and since then the size of "inactive" pages is decreasing almost
constantly, while "active" is not increasing. The active+inactive
sum has been steady before, as you can see from both the monthly
and yearly graphs.
Now I am playing with 2.6.11-rc1-bk snapshots to see what happens.
I have been running 2.6.10-rc3 before. More info is available, please ask me.
The box runs 3ware 7506-8 controller with SW RAID-0, 1, and 5 volumes,
Tigon3 network card. The main load is FTP server, and there is also
a HTTP server and Qmail.
Others have seen this as well; the reports indicate that it takes a day
or two before it becomes noticeable. When it happens next time, please
capture the output of /proc/meminfo and /proc/slabinfo.

Thanks
Alexander
Jens Axboe
2005-01-23 09:11:54 UTC
Post by Alexander Nyberg
Post by Jan Kasprzak
Hi all,
I've been running 2.6.11-rc1 on my dual opteron Fedora Core 3 box for a week
now, and I think there is a memory leak somewhere. I am measuring the
size of active and inactive pages (from /proc/meminfo), and it seems
that the count of sum (active+inactive) pages is decreasing. Please
take look at the graphs at
http://www.linux.cz/stats/mrtg-rrd/vm_active.html
(especially the "monthly" graph) - I've booted 2.6.11-rc1 last Friday,
and since then the size of "inactive" pages is decreasing almost
constantly, while "active" is not increasing. The active+inactive
sum has been steady before, as you can see from both the monthly
and yearly graphs.
Now I am playing with 2.6.11-rc1-bk snapshots to see what happens.
I have been running 2.6.10-rc3 before. More info is available, please ask me.
The box runs 3ware 7506-8 controller with SW RAID-0, 1, and 5 volumes,
Tigon3 network card. The main load is FTP server, and there is also
a HTTP server and Qmail.
Others have seen this as well, the reports indicated that it takes a day
or two before it becomes noticeable. When it happens next time please
capture the output of /proc/meminfo and /proc/slabinfo.
This is after 2 days of uptime, the box is basically unusable.
--
Jens Axboe
Andrew Morton
2005-01-23 09:19:18 UTC
Post by Jens Axboe
This is after 2 days of uptime, the box is basically unusable.
hm, no indication where it all went.

Does the machine still page properly? Can you do a couple of monster
usemems or fillmems to page everything out, then take another look at
meminfo and the sysrq-M output?
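
(For reference, a stand-in for the kind of fillmem tool being asked for
here: allocate a given number of megabytes and dirty every page so that
everything else gets pushed out to swap. The real usemem/fillmem utilities
aren't shown in this thread; the sketch below just approximates them.)

/* fillmem.c - allocate N megabytes and dirty every page, forcing pageout.
 * Usage: ./fillmem 2048    (a couple of these makes the "monster" run)
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	size_t mb = argc > 1 ? strtoul(argv[1], NULL, 0) : 600;
	size_t len = mb << 20, i;
	long page = sysconf(_SC_PAGESIZE);
	char *buf = malloc(len);

	if (!buf) {
		fprintf(stderr, "malloc of %zu MB failed\n", mb);
		return 1;
	}
	/* touch one byte per page so every page really gets instantiated */
	for (i = 0; i < len; i += page)
		buf[i] = 1;
	printf("dirtied %zu MB; sleeping so it stays allocated (Ctrl-C to exit)\n", mb);
	pause();
	free(buf);
	return 0;
}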
Jens Axboe
2005-01-23 09:56:08 UTC
Post by Andrew Morton
Post by Jens Axboe
This is after 2 days of uptime, the box is basically unusable.
hm, no indication where it all went.
Nope, that's the annoying part.
Post by Andrew Morton
Does the machine still page properly? Can you do a couple of monster
usemems or fillmems to page everything out, then take another look at
meminfo and the sysrq-M output?
It seems so, yes. But I'm still stuck with all of my RAM gone after a
600MB fillmem; half of it is just in swap.

Attaching meminfo and sysrq-m after fillmem.
--
Jens Axboe
Andrew Morton
2005-01-23 10:32:48 UTC
Post by Jens Axboe
But I'm still stuck with all of my ram gone after a
600MB fillmem, half of it is just in swap.
Well. Half of it has gone so far ;)
Post by Jens Axboe
Attaching meminfo and sysrq-m after fillmem.
(I meant a really big fillmem: a couple of 2GB ones. Not to worry.)

It's not in slab and the pagecache and anonymous memory stuff seems to be
working OK. So it has to be something else, which does a bare
__alloc_pages(). Low-level block stuff, networking, arch code, perhaps.

I don't think I've ever really seen code to diagnose this.

A simplistic approach would be to add eight or so ulongs into struct page,
populate them with builtin_return_address(0...7) at allocation time, then
modify sysrq-m to walk mem_map[] printing it all out for pages which have
page_count() > 0. That'd find the culprit.
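
(A userspace analogy of that idea, using GCC's __builtin_return_address()
to tag each allocation with its caller, so that whatever is never freed
points back at its allocation site. The real struct page version, dumped
via /proc, is what Alexander posts later in this thread; all names below
are made up for the illustration.)

/* traced_alloc.c - tag each allocation with its caller's return address. */
#include <stdio.h>
#include <stdlib.h>

struct tag {
	void *caller;		/* who allocated this block */
	size_t size;
};

/* noinline so __builtin_return_address(0) is the external caller */
__attribute__((noinline)) void *traced_malloc(size_t size)
{
	struct tag *t = malloc(sizeof(*t) + size);

	if (!t)
		return NULL;
	t->caller = __builtin_return_address(0);
	t->size = size;
	return t + 1;
}

void traced_free(void *p)
{
	if (p)
		free((struct tag *)p - 1);
}

/* the equivalent of walking mem_map[]: dump the tag of a still-live block */
void report_live(void *p)
{
	struct tag *t = (struct tag *)p - 1;

	printf("%zu bytes allocated from %p still live\n", t->size, t->caller);
}

int main(void)
{
	void *ok = traced_malloc(128);
	void *leaked = traced_malloc(256);	/* never freed, on purpose */

	traced_free(ok);
	report_live(leaked);
	return 0;
}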
Russell King
2005-01-23 20:03:15 UTC
Post by Andrew Morton
Post by Jens Axboe
But I'm still stuck with all of my ram gone after a
600MB fillmem, half of it is just in swap.
Well. Half of it has gone so far ;)
Post by Jens Axboe
Attaching meminfo and sysrq-m after fillmem.
(I meant a really big fillmem: a couple of 2GB ones. Not to worry.)
It's not in slab and the pagecache and anonymous memory stuff seems to be
working OK. So it has to be something else, which does a bare
__alloc_pages(). Low-level block stuff, networking, arch code, perhaps.
I don't think I've ever really seen code to diagnose this.
A simplistic approach would be to add eight or so ulongs into struct page,
populate them with builtin_return_address(0...7) at allocation time, then
modify sysrq-m to walk mem_map[] printing it all out for pages which have
page_count() > 0. That'd find the culprit.
I think I may be seeing something odd here, maybe a possible memory leak.
The only problem I have is wondering whether I'm actually comparing like
with like. Maybe some networking people can provide a hint?

Below is gathered from 2.6.11-rc1.

bash-2.05a# head -n2 /proc/slabinfo
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
115
ip_dst_cache 759 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
117
ip_dst_cache 770 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
133
ip_dst_cache 775 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
18
ip_dst_cache 664 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
20
ip_dst_cache 664 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
22
ip_dst_cache 673 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
23
ip_dst_cache 670 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
24
ip_dst_cache 675 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
24
ip_dst_cache 669 885 256 15 1

I'm fairly positive when I rebooted the machine a couple of days ago,
ip_dst_cache was significantly smaller for the same number of lines in
/proc/net/rt_cache.
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
Russell King
2005-01-24 11:48:53 UTC
Post by Russell King
I think I may be seeing something odd here, maybe a possible memory leak.
The only problem I have is wondering whether I'm actually comparing like
with like. Maybe some networking people can provide a hint?
Below is gathered from 2.6.11-rc1.
bash-2.05a# head -n2 /proc/slabinfo
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
115
ip_dst_cache 759 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
117
ip_dst_cache 770 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
133
ip_dst_cache 775 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
18
ip_dst_cache 664 885 256 15 1
...
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
24
ip_dst_cache 675 885 256 15 1
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
24
ip_dst_cache 669 885 256 15 1
I'm fairly positive when I rebooted the machine a couple of days ago,
ip_dst_cache was significantly smaller for the same number of lines in
/proc/net/rt_cache.
FYI, today it looks like this:

bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
26
ip_dst_cache 820 1065 256 15 1

So the dst cache seems to have grown by 151 in 16 hours... I'll continue
monitoring and providing updates.
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
Russell King
2005-01-25 19:32:07 UTC
Post by Russell King
Post by Russell King
I think I may be seeing something odd here, maybe a possible memory leak.
The only problem I have is wondering whether I'm actually comparing like
with like. Maybe some networking people can provide a hint?
Below is gathered from 2.6.11-rc1.
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
24
ip_dst_cache 669 885 256 15 1
I'm fairly positive when I rebooted the machine a couple of days ago,
ip_dst_cache was significantly smaller for the same number of lines in
/proc/net/rt_cache.
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
26
ip_dst_cache 820 1065 256 15 1
So the dst cache seems to have grown by 151 in 16 hours... I'll continue
monitoring and providing updates.
Tonight's update:
50
ip_dst_cache 1024 1245 256 15 1

As you can see, the dst cache is consistently growing by about 200
entries per day. Given this, I predict that the box will fall over
due to "dst cache overflow" in roughly 35 days.
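
(Back-of-envelope check on that estimate, using the 8192-entry limit
Russell mentions further down the thread: (8192 - ~1200 objects now)
divided by ~200 new objects per day works out to roughly 35 days.)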

kernel network configuration:

CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_FWMARK=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_SYN_COOKIES=y
CONFIG_IPV6=y
CONFIG_NETFILTER=y
CONFIG_IP_NF_CONNTRACK=y
CONFIG_IP_NF_CONNTRACK_MARK=y
CONFIG_IP_NF_FTP=y
CONFIG_IP_NF_IRC=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_LIMIT=y
CONFIG_IP_NF_MATCH_IPRANGE=y
CONFIG_IP_NF_MATCH_MAC=m
CONFIG_IP_NF_MATCH_PKTTYPE=m
CONFIG_IP_NF_MATCH_MARK=y
CONFIG_IP_NF_MATCH_MULTIPORT=m
CONFIG_IP_NF_MATCH_TOS=m
CONFIG_IP_NF_MATCH_RECENT=y
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_DSCP=m
CONFIG_IP_NF_MATCH_AH_ESP=m
CONFIG_IP_NF_MATCH_LENGTH=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_MATCH_TCPMSS=m
CONFIG_IP_NF_MATCH_HELPER=y
CONFIG_IP_NF_MATCH_STATE=y
CONFIG_IP_NF_MATCH_CONNTRACK=y
CONFIG_IP_NF_MATCH_ADDRTYPE=m
CONFIG_IP_NF_MATCH_REALM=m
CONFIG_IP_NF_MATCH_CONNMARK=m
CONFIG_IP_NF_MATCH_HASHLIMIT=m
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_LOG=m
CONFIG_IP_NF_TARGET_TCPMSS=m
CONFIG_IP_NF_NAT=y
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_SAME=y
CONFIG_IP_NF_NAT_IRC=y
CONFIG_IP_NF_NAT_FTP=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_TARGET_TOS=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_DSCP=m
CONFIG_IP_NF_TARGET_MARK=y
CONFIG_IP_NF_TARGET_CLASSIFY=m
CONFIG_IP_NF_TARGET_CONNMARK=m
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_MATCH_LIMIT=y
CONFIG_IP6_NF_MATCH_MAC=y
CONFIG_IP6_NF_MATCH_RT=y
CONFIG_IP6_NF_MATCH_OPTS=y
CONFIG_IP6_NF_MATCH_FRAG=y
CONFIG_IP6_NF_MATCH_HL=y
CONFIG_IP6_NF_MATCH_MULTIPORT=y
CONFIG_IP6_NF_MATCH_MARK=y
CONFIG_IP6_NF_MATCH_AHESP=y
CONFIG_IP6_NF_MATCH_LENGTH=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_TARGET_MARK=y
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
Russell King
2005-01-27 08:28:09 UTC
Post by Russell King
Post by Russell King
Post by Russell King
I think I may be seeing something odd here, maybe a possible memory leak.
The only problem I have is wondering whether I'm actually comparing like
with like. Maybe some networking people can provide a hint?
Below is gathered from 2.6.11-rc1.
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
24
ip_dst_cache 669 885 256 15 1
I'm fairly positive when I rebooted the machine a couple of days ago,
ip_dst_cache was significantly smaller for the same number of lines in
/proc/net/rt_cache.
bash-2.05a# cat /proc/net/rt_cache | wc -l; grep ip_dst /proc/slabinfo
26
ip_dst_cache 820 1065 256 15 1
So the dst cache seems to have grown by 151 in 16 hours... I'll continue
monitoring and providing updates.
50
ip_dst_cache 1024 1245 256 15 1
As you can see, the dst cache is consistently growing by about 200
entries per day. Given this, I predict that the box will fall over
due to "dst cache overflow" in roughly 35 days.
This morning's magic numbers are:

3
ip_dst_cache 1292 1485 256 15 1

Is no one interested in the fact that the DST cache is leaking and
eventually takes out machines? I've had virtually zero interest in
this problem so far.
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
Andrew Morton
2005-01-27 08:47:32 UTC
Post by Russell King
3
ip_dst_cache 1292 1485 256 15 1
I just did a q-n-d test here: send one UDP frame to 1.1.1.1 up to
1.1.255.255. The ip_dst_cache grew to ~15k entries and grew no further.
It's now gradually shrinking. So there doesn't appear to be a trivial
bug..
Post by Russell King
Is no one interested in the fact that the DST cache is leaking and
eventually takes out machines? I've had virtually zero interest in
this problem so far.
I guess we should find a way to make it happen faster.
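
(A self-contained sketch of that kind of quick-and-dirty exerciser - one
small UDP datagram to every host from 1.1.1.1 up to 1.1.255.255, so each
new destination forces a route cache entry. The port and payload below are
arbitrary, and sendto() errors are simply ignored.)

/* fill_rtcache.c - one small UDP datagram to 1.1.1.1 ... 1.1.255.255. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in dst;
	char payload = 'x';
	int a, b;

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(9);	/* "discard"; nobody has to answer */

	for (a = 1; a <= 255; a++) {
		for (b = 1; b <= 255; b++) {
			char addr[32];

			snprintf(addr, sizeof(addr), "1.1.%d.%d", a, b);
			inet_pton(AF_INET, addr, &dst.sin_addr);
			/* each new destination means a fresh route lookup */
			sendto(fd, &payload, 1, 0,
			       (struct sockaddr *)&dst, sizeof(dst));
		}
	}
	close(fd);
	return 0;
}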
Alessandro Suardi
2005-01-27 10:19:20 UTC
Post by Andrew Morton
Post by Russell King
3
ip_dst_cache 1292 1485 256 15 1
I just did a q-n-d test here: send one UDP frame to 1.1.1.1 up to
1.1.255.255. The ip_dst_cache grew to ~15k entries and grew no further.
It's now gradually shrinking. So there doesn't appear to be a trivial
bug..
Post by Russell King
Is no one interested in the fact that the DST cache is leaking and
eventually takes out machines? I've had virtually zero interest in
this problem so far.
I guess we should find a way to make it happen faster.
Data point... on my box, used as an ed2k/bittorrent
machine, the ip_dst_cache grows and shrinks quite
fast; these two samples were ~3 minutes apart:


[***@donkey ~]# grep ip_dst /proc/slabinfo
ip_dst_cache 998 1005 256 15 1 : tunables 120 60 0 : slabdata 67 67 0
[***@donkey ~]# wc -l /proc/net/rt_cache
926 /proc/net/rt_cache

[***@donkey ~]# grep ip_dst /proc/slabinfo
ip_dst_cache 466 795 256 15 1 : tunables 120 60 0 : slabdata 53 53 0
[***@donkey ~]# wc -l /proc/net/rt_cache
443 /proc/net/rt_cache

and these were 2 seconds apart

[***@donkey ~]# wc -l /proc/net/rt_cache
737 /proc/net/rt_cache
[***@donkey ~]# grep ip_dst /proc/slabinfo
ip_dst_cache 795 795 256 15 1 : tunables 120 60 0 : slabdata 53 53 0

[***@donkey ~]# wc -l /proc/net/rt_cache
1023 /proc/net/rt_cache
[***@donkey ~]# grep ip_dst /proc/slabinfo
ip_dst_cache 1035 1035 256 15 1 : tunables 120 60 0 : slabdata 69 69 0

--alessandro

"And every dream, every, is just a dream after all"

(Heather Nova, "Paper Cup")
Martin Josefsson
2005-01-27 12:17:28 UTC
Post by Andrew Morton
Post by Russell King
3
ip_dst_cache 1292 1485 256 15 1
I just did a q-n-d test here: send one UDP frame to 1.1.1.1 up to
1.1.255.255. The ip_dst_cache grew to ~15k entries and grew no further.
It's now gradually shrinking. So there doesn't appear to be a trivial
bug..
Post by Russell King
Is no one interested in the fact that the DST cache is leaking and
eventually takes out machines? I've had virtually zero interest in
this problem so far.
I guess we should find a way to make it happen faster.
It could be a refcount problem. I think Russell is using NAT; it could be
the MASQUERADE target if that is in use. A simple test would be to switch
to SNAT and try again, if possible.

/Martin
Robert Olsson
2005-01-27 12:56:30 UTC
Post by Andrew Morton
Post by Russell King
ip_dst_cache 1292 1485 256 15 1
I guess we should find a way to make it happen faster.
Here is a route DoS attack. Pure routing, no NAT, no filter.

Start
=====
ip_dst_cache 5 30 256 15 1 : tunables 120 60 8 : slabdata 2 2 0

After DoS
=========
ip_dst_cache 66045 76125 256 15 1 : tunables 120 60 8 : slabdata 5075 5075 480

After some GC runs.
==================
ip_dst_cache 2 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0

No problems here. I saw Martin talked about NAT...

--ro
Robert Olsson
2005-01-27 13:03:35 UTC
Oh. Linux version 2.6.11-rc2 was used.
Post by Robert Olsson
Post by Andrew Morton
Post by Russell King
ip_dst_cache 1292 1485 256 15 1
I guess we should find a way to make it happen faster.
Here is route DoS attack. Pure routing no NAT no filter.
Start
=====
ip_dst_cache 5 30 256 15 1 : tunables 120 60 8 : slabdata 2 2 0
After DoS
=========
ip_dst_cache 66045 76125 256 15 1 : tunables 120 60 8 : slabdata 5075 5075 480
After some GC runs.
==================
ip_dst_cache 2 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
No problems here. I saw Martin talked about NAT...
--ro
Russell King
2005-01-27 16:49:18 UTC
Post by Robert Olsson
Post by Andrew Morton
Post by Russell King
ip_dst_cache 1292 1485 256 15 1
I guess we should find a way to make it happen faster.
Here is route DoS attack. Pure routing no NAT no filter.
Start
=====
ip_dst_cache 5 30 256 15 1 : tunables 120 60 8 : slabdata 2 2 0
After DoS
=========
ip_dst_cache 66045 76125 256 15 1 : tunables 120 60 8 : slabdata 5075 5075 480
After some GC runs.
==================
ip_dst_cache 2 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
No problems here. I saw Martin talked about NAT...
Yes, I can reproduce that same behaviour: I can artificially
inflate the DST cache, and the GC does run and trim it back down to
something reasonable.

BUT, over time, my DST cache just increases in size and won't trim back
down. Not even by writing to the /proc/sys/net/ipv4/route/flush sysctl
(which, if I'm reading the code correctly - and would be nice to know
from those who actually know this stuff - should force an immediate
flush of the DST cache.)

For instance, I have (in sequence):

# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
581
ip_dst_cache 1860 1860 256 15 1 : tunables 120 60 0 : slabdata 124 124 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
717
ip_dst_cache 1995 1995 256 15 1 : tunables 120 60 0 : slabdata 133 133 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
690
ip_dst_cache 1995 1995 256 15 1 : tunables 120 60 0 : slabdata 133 133 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
696
ip_dst_cache 1995 1995 256 15 1 : tunables 120 60 0 : slabdata 133 133 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
700
ip_dst_cache 1995 1995 256 15 1 : tunables 120 60 0 : slabdata 133 133 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
718
ip_dst_cache 1993 1995 256 15 1 : tunables 120 60 0 : slabdata 133 133 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
653
ip_dst_cache 1993 1995 256 15 1 : tunables 120 60 0 : slabdata 133 133 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
667
ip_dst_cache 1956 1995 256 15 1 : tunables 120 60 0 : slabdata 133 133 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
620
ip_dst_cache 1944 1995 256 15 1 : tunables 120 60 0 : slabdata 133 133 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
623
ip_dst_cache 1920 1995 256 15 1 : tunables 120 60 0 : slabdata 133 133 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
8
ip_dst_cache 1380 1980 256 15 1 : tunables 120 60 0 : slabdata 132 132 0
# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo
86
ip_dst_cache 1375 1875 256 15 1 : tunables 120 60 0 : slabdata 125 125 0

so obviously the GC does appear to be working - as can be seen from the
number of entries in /proc/net/rt_cache. However, the number of objects
in the slab cache does grow day on day. About 4 days ago, it was only
about 600 active objects. Now it's more than twice that, and it'll
continue increasing until it hits 8192, whereupon it's game over.

And, here's the above with /proc/net/stat/rt_cache included:

# cat /proc/net/rt_cache|wc -l;grep ip_dst /proc/slabinfo; cat /proc/net/stat/rt_cache
61
ip_dst_cache 1340 1680 256 15 1 : tunables 120 60 0 : slabdata 112 112 0
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
00000538 005c9f10 0005e163 00000000 00000013 000002e2 00000000 00000005 003102e3 00038f6d 00000000 0007887a 0005286d 00001142 00000000 00138855 0010848d

Notice how /proc/net/stat/rt_cache says there are 1336 entries in the
route cache (the first field, entries = 0x538). _Where_ are they?
They're not there according to /proc/net/rt_cache.

(PS, the formatting of the headings in /proc/net/stat/rt_cache doesn't
appear to tie up with the formatting of the data, which is _really_
confusing.)
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
Phil Oester
2005-01-27 18:37:45 UTC
Post by Russell King
so obviously the GC does appear to be working - as can be seen from the
number of entries in /proc/net/rt_cache. However, the number of objects
in the slab cache does grow day on day. About 4 days ago, it was only
about 600 active objects. Now it's more than twice that, and it'll
continue increasing until it hits 8192, where upon it's game over.
I can confirm the behavior you are seeing -- does seem to be a leak
somewhere. Below from a heavily used gateway with 26 days uptime:

# wc -l /proc/net/rt_cache ; grep ip_dst /proc/slabinfo
12870 /proc/net/rt_cache
ip_dst_cache 53327 57855

Eventually I get the dst_cache overflow errors and have to reboot.

Phil
Russell King
2005-01-27 19:25:04 UTC
Post by Phil Oester
Post by Russell King
so obviously the GC does appear to be working - as can be seen from the
number of entries in /proc/net/rt_cache. However, the number of objects
in the slab cache does grow day on day. About 4 days ago, it was only
about 600 active objects. Now it's more than twice that, and it'll
continue increasing until it hits 8192, where upon it's game over.
I can confirm the behavior you are seeing -- does seem to be a leak
# wc -l /proc/net/rt_cache ; grep ip_dst /proc/slabinfo
12870 /proc/net/rt_cache
ip_dst_cache 53327 57855
Eventually I get the dst_cache overflow errors and have to reboot.
Can you provide some details, eg kernel configuration, loaded modules
and a brief overview of any netfilter modules you may be using.

Maybe we can work out what's common between our setups.
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
Phil Oester
2005-01-27 20:40:12 UTC
Post by Russell King
Can you provide some details, eg kernel configuration, loaded modules
and a brief overview of any netfilter modules you may be using.
Maybe we can work out what's common between our setups.
Vanilla 2.6.10, though I've been seeing these problems since 2.6.8 or
earlier. Netfilter running on all boxes, some utilizing SNAT, others
not -- none using MASQ. This is from a box running no NAT at all,
although it has some other filter rules:

# wc -l /proc/net/rt_cache ; grep dst_cache /proc/slabinfo
50 /proc/net/rt_cache
ip_dst_cache 84285 84285

Also with uptime of 26 days.

These boxes are all running the quagga OSPF daemon, but those that
are lightly loaded are not exhibiting these problems.

Phil
Russell King
2005-01-28 09:32:06 UTC
Post by Phil Oester
Vanilla 2.6.10, though I've been seeing these problems since 2.6.8 or
earlier.
Right. For me:

- 2.6.9-rc3 (installed 8th Oct) died with dst cache overflow on 29th November
- 2.6.10-rc2 (booted 29th Nov) died with the same on 19th January
- 2.6.11-rc1 (booted 19th Jan) appears to have the same problem, but
it hasn't died yet.
Post by Phil Oester
Netfilter running on all boxes, some utilizing SNAT, others
not -- none using MASQ.
IPv4 filter targets: ACCEPT, DROP, REJECT, LOG
using: state, limit & protocol

IPv4 nat targets: DNAT, MASQ
using: protocol

IPv4 mangle targets: ACCEPT, MARK
using: protocol

IPv6 filter targets: ACCEPT, DROP
using: protocol

IPv6 mangle targets: none

(protocol == at least one rule matching tcp, icmp or udp packets)

IPv6 configured native on internal interface, tun6to4 for external IPv6
communication.

IPv4 and IPv6 forwarding enabled.
IPv4 rpfilter, proxyarp, syncookies enabled.
IPv4 proxy delay on internal interface set to '1'.
Post by Phil Oester
These boxes are all running the quagga OSPF daemon, but those that
are lightly loaded are not exhibiting these problems.
Running zebra (for IPv6 route advertisement on the local network only).

Network traffic-wise, 2.6.11-rc1 has this on its public facing
interface(s) in 8.5 days.

4: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
RX: bytes packets errors dropped overrun mcast
667468541 2603373 0 0 0 0
TX: bytes packets errors dropped carrier collsns
1245774764 2777605 0 0 1 2252

5: ***@NONE: <NOARP,UP> mtu 1480 qdisc noqueue
RX: bytes packets errors dropped overrun mcast
19130536 84034 0 0 0 0
TX: bytes packets errors dropped carrier collsns
10436749 91589 0 0 0 0
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
David S. Miller
2005-01-27 20:33:26 UTC
On Thu, 27 Jan 2005 16:49:18 +0000
Post by Russell King
notice how /proc/net/stat/rt_cache says there's 1336 entries in the
route cache. _Where_ are they? They're not there according to
/proc/net/rt_cache.
When the route cache is flushed, that kills a reference to each
entry in the routing cache. If, for some reason, other references
remain (a route connected to a socket, some leak in the stack somewhere),
the route cache entry can't be completely freed up immediately.

So they won't be listed in /proc/net/rt_cache (since they've been
removed from the lookup table) but they will be accounted for in
/proc/net/stat/rt_cache until the final release is done on the
routing cache object and it can be completely freed up.

Do you happen to be using IPV6 in any way by chance?
Russell King
2005-01-28 00:17:01 UTC
Post by David S. Miller
So they won't be listed in /proc/net/rt_cache (since they've been
removed from the lookup table) but they will be accounted for in
/proc/net/stat/rt_cache until the final release is done on the
routing cache object and it can be completely freed up.
Do you happen to be using IPV6 in any way by chance?
Yes. Someone suggested this evening that there may have been a recent
change to do with some IPv6 refcounting which may have caused this
problem. Is that something you can confirm?
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
David S. Miller
2005-01-28 00:34:44 UTC
On Fri, 28 Jan 2005 00:17:01 +0000
Post by Russell King
Yes. Someone suggested this evening that there may have been a recent
change to do with some IPv6 refcounting which may have caused this
problem. Is that something you can confirm?
Yep, it would be this change below. Try backing it out and see
if that makes your leak go away.

# This is a BitKeeper generated diff -Nru style patch.
#
# ChangeSet
# 2005/01/14 20:41:55-08:00 ***@gondor.apana.org.au
# [IPV6]: Fix locking in ip6_dst_lookup().
#
# The caller does not necessarily have the socket locked
# (udpv6sendmsg() is one such case) so we have to use
# sk_dst_check() instead of __sk_dst_check().
#
# Signed-off-by: Herbert Xu <***@gondor.apana.org.au>
# Signed-off-by: David S. Miller <***@davemloft.net>
#
# net/ipv6/ip6_output.c
# 2005/01/14 20:41:34-08:00 ***@gondor.apana.org.au +3 -3
# [IPV6]: Fix locking in ip6_dst_lookup().
#
# The caller does not necessarily have the socket locked
# (udpv6sendmsg() is one such case) so we have to use
# sk_dst_check() instead of __sk_dst_check().
#
# Signed-off-by: Herbert Xu <***@gondor.apana.org.au>
# Signed-off-by: David S. Miller <***@davemloft.net>
#
diff -Nru a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
--- a/net/ipv6/ip6_output.c 2005-01-27 16:07:21 -08:00
+++ b/net/ipv6/ip6_output.c 2005-01-27 16:07:21 -08:00
@@ -745,7 +745,7 @@
if (sk) {
struct ipv6_pinfo *np = inet6_sk(sk);

- *dst = __sk_dst_check(sk, np->dst_cookie);
+ *dst = sk_dst_check(sk, np->dst_cookie);
if (*dst) {
struct rt6_info *rt = (struct rt6_info*)*dst;

@@ -772,9 +772,9 @@
&& (np->daddr_cache == NULL ||
!ipv6_addr_equal(&fl->fl6_dst, np->daddr_cache)))
|| (fl->oif && fl->oif != (*dst)->dev->ifindex)) {
+ dst_release(*dst);
*dst = NULL;
- } else
- dst_hold(*dst);
+ }
}
}
Russell King
2005-01-28 08:58:59 UTC
Post by David S. Miller
On Fri, 28 Jan 2005 00:17:01 +0000
Post by Russell King
Yes. Someone suggested this evening that there may have been a recent
change to do with some IPv6 refcounting which may have caused this
problem. Is that something you can confirm?
Yep, it would be this change below. Try backing it out and see
if that makes your leak go away.
Thanks. I'll try it, but:

1. Looking at the date of the change it seems unlikely. The recent
death occurred with 2.6.10-rc2, booted on 29th November and dying
on 19th January, which obviously predates this cset.
2. It'll take a couple of days to confirm the behaviour of the dst cache.
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
Russell King
2005-01-30 13:23:43 UTC
Post by Russell King
Post by David S. Miller
On Fri, 28 Jan 2005 00:17:01 +0000
Post by Russell King
Yes. Someone suggested this evening that there may have been a recent
change to do with some IPv6 refcounting which may have caused this
problem. Is that something you can confirm?
Yep, it would be this change below. Try backing it out and see
if that makes your leak go away.
1. Looking at the date of the change it seems unlikely. The recent
death occurred with 2.6.10-rc2, booted on 29th November and dying
on 19th January, which obviously predates this cset.
2. It'll take a couple of days to confirm the behaviour of the dst cache.
I have another reason to question whether ip6_output.c is the problem - the leak
is with ip_dst_cache (== IPv4). If the problem were in ip6_output.c, wouldn't
we see ip6_dst_cache leaking instead?

Anyway, I've produced some code which keeps a record of the __refcnt
increments and decrements, and I think it's produced some interesting
results. Essentially, I'm seeing the odd dst entry with a __refcnt of
14000 or so (which is still in active use, so probably ok), and a number
with 4, 7, and 13 which haven't had the refcount touched for at least 14
minutes.

One of these was created via ip_route_input_slow(), the other three via
ip_route_output_slow(). That isn't significant on its own.

However, whenever ip_copy_metadata() appears in the refcount log, I see
half the number of increments due to that still remaining to be
decremented (see the output below). 0 = "mark", positive numbers =
increment refcnt this many times, negative numbers = decrement refcnt
this many times.

I don't know if the code is using fragment lists in ip_fragment(), but
on reading the code a question comes to mind: if we have a list of
fragments, does each fragment skb have a valid (and refcounted) dst
pointer before ip_fragment() does its job? If yes, then isn't the
first ip_copy_metadata() in ip_fragment() going to overwrite this
pointer without dropping the refcount?
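
(The suspected pattern can be shown with a toy refcount in plain C - the
names below are invented stand-ins for the kernel's dst_entry and its
dst_clone()/dst_release() helpers, not the real code: if the copy
overwrites a counted pointer without releasing the old reference, the
object stays pinned no matter how many times the copies themselves are
freed.)

/* refleak.c - toy model of overwriting a refcounted pointer without release. */
#include <stdio.h>

struct toy_dst {
	int refcnt;
};

static struct toy_dst *toy_clone(struct toy_dst *d)
{
	d->refcnt++;
	return d;
}

static void toy_release(struct toy_dst *d)
{
	if (--d->refcnt == 0)
		printf("dst %p freed\n", (void *)d);
}

struct toy_skb {
	struct toy_dst *dst;
};

/* the suspect pattern: 'to' already holds a counted reference and it is
 * silently overwritten, so that reference can never be dropped */
static void copy_metadata_leaky(struct toy_skb *to, struct toy_skb *from)
{
	to->dst = toy_clone(from->dst);
}

/* the obvious repair: release what is about to be overwritten first */
static void copy_metadata_fixed(struct toy_skb *to, struct toy_skb *from)
{
	toy_release(to->dst);
	to->dst = toy_clone(from->dst);
}

int main(void)
{
	struct toy_dst d1 = { 0 }, d2 = { 0 };
	struct toy_skb orig, frag;

	orig.dst = toy_clone(&d1);		/* refcnt 1 */
	frag.dst = toy_clone(&d1);		/* refcnt 2 */
	copy_metadata_leaky(&frag, &orig);	/* refcnt 3, frag's old ref lost */
	toy_release(frag.dst);
	toy_release(orig.dst);
	printf("leaky copy: refcnt stuck at %d\n", d1.refcnt);	/* prints 1 */

	orig.dst = toy_clone(&d2);
	frag.dst = toy_clone(&d2);
	copy_metadata_fixed(&frag, &orig);	/* one release, one clone */
	toy_release(frag.dst);
	toy_release(orig.dst);
	printf("fixed copy: refcnt back to %d\n", d2.refcnt);	/* prints 0 */

	return 0;
}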

All that said, it's probably far too early to read much into these
results - once the machine has been running for more than 19 minutes
and has a significant number of "stuck" dst cache entries, I think
it'll be far more conclusive. Nevertheless, it looks like food for
thought.

dst pointer: creation time (200Hz jiffies) last reference time (200Hz jiffies)
c1c66260: ffff6c79 ffff879d:
location count function
c01054f4 0 dst_alloc
c0114a80 1 ip_route_input_slow
c00fa95c -18 __kfree_skb
c0115104 13 ip_route_input
c011ae1c 8 ip_copy_metadata
c01055ac 0 __dst_free
untracked counts
: 0
total
= 4
next=c1c66b60 refcnt=00000004 use=0000000d dst=24f45cc3 src=0f00a8c0

c1c66b60: ffff20fe ffff5066:
c01054f4 0 dst_alloc
c01156e8 1 ip_route_output_slow
c011b854 6813 ip_append_data
c011c7e0 6813 ip_push_pending_frames
c00fa95c -6826 __kfree_skb
c011c8fc -6813 ip_push_pending_frames
c0139dbc -6813 udp_sendmsg
c0115a0c 6814 __ip_route_output_key
c013764c -2 ip4_datagram_connect
c011ae1c 26 ip_copy_metadata
c01055ac 0 __dst_free
: 0
= 13
next=c1c57680 refcnt=0000000d use=00001a9e dst=bbe812d4 src=bae812d4

c1c66960: ffff89ac ffffa42d:
c01054f4 0 dst_alloc
c01156e8 1 ip_route_output_slow
c011b854 3028 ip_append_data
c0139dbc -3028 udp_sendmsg
c011c7e0 3028 ip_push_pending_frames
c011ae1c 8 ip_copy_metadata
c00fa95c -3032 __kfree_skb
c011c8fc -3028 ip_push_pending_frames
c0115a0c 3027 __ip_route_output_key
c01055ac 0 __dst_free
: 0
= 4
next=c16d1080 refcnt=00000004 use=00000bd3 dst=bbe812d4 src=bae812d4

c16d1080: ffff879b ffff89af:
c01054f4 0 dst_alloc
c01156e8 1 ip_route_output_slow
c011b854 240 ip_append_data
c011c7e0 240 ip_push_pending_frames
c00fa95c -247 __kfree_skb
c011c8fc -240 ip_push_pending_frames
c0139dbc -240 udp_sendmsg
c0115a0c 239 __ip_route_output_key
c011ae1c 14 ip_copy_metadata
c01055ac 0 __dst_free
: 0
= 7
next=c1c66260 refcnt=00000007 use=000000ef dst=bbe812d4 src=bae812d4
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
Russell King
2005-01-30 15:34:49 UTC
Post by Russell King
Anyway, I've produced some code which keeps a record of the __refcnt
increments and decrements, and I think it's produced some interesting
results. Essentially, I'm seeing the odd dst entry with a __refcnt of
14000 or so (which is still in active use, so probably ok), and a number
with 4, 7, and 13 which haven't had the refcount touched for at least 14
minutes.
An hour or so goes by. I now have 14 dst cache entries with non-zero
refcounts, and these have the following properties:

* The five from before (with counts 13, 14473, 4, 4, 7 respectively):
+ all remain unfreed.
+ show precisely no change in the refcounts.
+ the refcount has not been touched for more than an hour.
* They have all been touched by ip_copy_metadata.
* Their remaining refcounts are precisely half the number of
ip_copy_metadata calls in every instance.

None of the entries with a refcount of zero have ip_copy_metadata() in
their logs, and those entries do appear in /proc/net/rt_cache.

The following may also be a pointer - from /proc/net/snmp:

Ip: Forwarding DefaultTTL InReceives InHdrErrors InAddrErrors ForwDatagrams InUnknownProtos InDiscards InDelivers OutRequests OutDiscards OutNoRoutes ReasmTimeout ReasmReqds ReasmOKs ReasmFails FragOKs FragFails FragCreates
Ip: 1 64 140510 0 0 36861 0 0 93549 131703 485 0 21 46622 15695 21 21950 0 0

Since FragCreates is 0, this means that we are using the frag_lists
rather than creating our own fragments (and indeed the first
ip_copy_metadata() call rather than the second in ip_fragment()).

I think the case against the IPv4 fragmentation code is mounting.
However, without knowing what the expected conditions for this code are
(eg, are skbs on the fraglist supposed to have NULL skb->dst?), I'm
unable to progress this any further. However, I think it's quite
clear that there is something bad going on here.

Why many more people aren't seeing this I've no idea.
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
Phil Oester
2005-01-30 16:57:02 UTC
Post by Russell King
I think the case against the IPv4 fragmentation code is mounting.
However, without knowing what the expected conditions for this code,
(eg, are skbs on the fraglist supposed to have NULL skb->dst?) I'm
unable to progress this any further. However, I think it's quite
clear that there is something bad going on here.
Interesting...the gateway which exhibits the problem fastest in my
area does have a large number of fragmented UDP packets running through it,
as shown by tcpdump 'ip[6:2] & 0x1fff != 0'.
Post by Russell King
Why many more people aren't seeing this I've no idea.
Perhaps you (and I) experience more fragments than the average user???

Nice detective work!

Phil
Patrick McHardy
2005-01-30 17:23:10 UTC
Post by Russell King
I don't know if the code is using fragment lists in ip_fragment(), but
on reading the code a question comes to mind: if we have a list of
fragments, does each fragment skb have a valid (and refcounted) dst
pointer before ip_fragment() does it's job? If yes, then isn't the
first ip_copy_metadata() in ip_fragment() going to overwrite this
pointer without dropping the refcount?
Nice spotting. If conntrack isn't loaded defragmentation happens after
routing, so this is likely the cause.

Regards
Patrick
Patrick McHardy
2005-01-30 17:26:29 UTC
Post by Patrick McHardy
Post by Russell King
I don't know if the code is using fragment lists in ip_fragment(), but
on reading the code a question comes to mind: if we have a list of
fragments, does each fragment skb have a valid (and refcounted) dst
pointer before ip_fragment() does it's job? If yes, then isn't the
first ip_copy_metadata() in ip_fragment() going to overwrite this
pointer without dropping the refcount?
Nice spotting. If conntrack isn't loaded defragmentation happens after
routing, so this is likely the cause.
OTOH, if conntrack isn't loaded, forwarded packets are never defragmented,
so frag_list should be empty. So probably false alarm, sorry.
Post by Patrick McHardy
Regards
Patrick
Patrick McHardy
2005-01-30 17:58:27 UTC
Post by Patrick McHardy
Post by Patrick McHardy
Post by Russell King
I don't know if the code is using fragment lists in ip_fragment(), but
on reading the code a question comes to mind: if we have a list of
fragments, does each fragment skb have a valid (and refcounted) dst
pointer before ip_fragment() does it's job? If yes, then isn't the
first ip_copy_metadata() in ip_fragment() going to overwrite this
pointer without dropping the refcount?
Nice spotting. If conntrack isn't loaded defragmentation happens after
routing, so this is likely the cause.
OTOH, if conntrack isn't loaded forwarded packet are never defragmented,
so frag_list should be empty. So probably false alarm, sorry.
Ok, final decision: you are right :) conntrack also defragments locally
generated packets before they hit ip_fragment. In this case the fragments
have skb->dst set.

Regards
Patrick
Russell King
2005-01-30 18:45:07 UTC
Post by Patrick McHardy
Post by Patrick McHardy
Post by Patrick McHardy
Post by Russell King
I don't know if the code is using fragment lists in ip_fragment(), but
on reading the code a question comes to mind: if we have a list of
fragments, does each fragment skb have a valid (and refcounted) dst
pointer before ip_fragment() does it's job? If yes, then isn't the
first ip_copy_metadata() in ip_fragment() going to overwrite this
pointer without dropping the refcount?
Nice spotting. If conntrack isn't loaded defragmentation happens after
routing, so this is likely the cause.
OTOH, if conntrack isn't loaded forwarded packet are never defragmented,
so frag_list should be empty. So probably false alarm, sorry.
Ok, final decision: you are right :) conntrack also defragments locally
generated packets before they hit ip_fragment. In this case the fragments
have skb->dst set.
Good news - with this in place, I no longer have refcounts of 14000!
After 18 minutes (the first clearout of the dst cache from 500 odd
down to 11 or so), all dst cache entries have a ref count of zero.

I'll check it again later this evening to be sure.

Thanks Patrick.
Post by Patrick McHardy
===== net/ipv4/ip_output.c 1.74 vs edited =====
--- 1.74/net/ipv4/ip_output.c 2005-01-25 01:40:10 +01:00
+++ edited/net/ipv4/ip_output.c 2005-01-30 18:54:43 +01:00
@@ -389,6 +389,7 @@
to->priority = from->priority;
to->protocol = from->protocol;
to->security = from->security;
+ dst_release(to->dst);
to->dst = dst_clone(from->dst);
to->dev = from->dev;
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
David S. Miller
2005-01-31 02:48:36 UTC
On Sun, 30 Jan 2005 18:58:27 +0100
Post by Patrick McHardy
Ok, final decision: you are right :) conntrack also defragments locally
generated packets before they hit ip_fragment. In this case the fragments
have skb->dst set.
It's amazing how many bugs exist due to the local defragmentation and
refragmentation done by netfilter. :-)

Good catch Patrick, I'll apply this and push upstream.
Herbert Xu
2005-01-31 04:11:32 UTC
Post by Patrick McHardy
Ok, final decision: you are right :) conntrack also defragments locally
generated packets before they hit ip_fragment. In this case the fragments
have skb->dst set.
Well caught. The same thing is needed for IPv6, right?
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <***@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
YOSHIFUJI Hideaki / 吉藤英明
2005-01-31 04:45:59 UTC
Post by Herbert Xu
Post by Patrick McHardy
Ok, final decision: you are right :) conntrack also defragments locally
generated packets before they hit ip_fragment. In this case the fragments
have skb->dst set.
Well caught. The same thing is needed for IPv6, right?
(not yet confirmed, but) yes, please.

Signed-off-by: Hideaki YOSHIFUJI <***@linux-ipv6.org>

===== net/ipv6/ip6_output.c 1.82 vs edited =====
--- 1.82/net/ipv6/ip6_output.c 2005-01-25 09:40:10 +09:00
+++ edited/net/ipv6/ip6_output.c 2005-01-31 13:44:01 +09:00
@@ -463,6 +463,7 @@
to->priority = from->priority;
to->protocol = from->protocol;
to->security = from->security;
+ dst_release(to->dst);
to->dst = dst_clone(from->dst);
to->dev = from->dev;


--yoshfuji
Patrick McHardy
2005-01-31 05:00:40 UTC
Post by YOSHIFUJI Hideaki / 吉藤英明
Post by Herbert Xu
Post by Patrick McHardy
Ok, final decision: you are right :) conntrack also defragments locally
generated packets before they hit ip_fragment. In this case the fragments
have skb->dst set.
Well caught. The same thing is needed for IPv6, right?
(not yet confirmed, but) yes, please.
We don't need this for IPv6 yet. Once we get nf_conntrack in we
might need this, but its IPv6 fragment handling is different from
ip_conntrack, I need to check first.

Regards
Patrick
YOSHIFUJI Hideaki / 吉藤英明
2005-01-31 05:16:36 UTC
In article <***@trash.net> (at Mon, 31 Jan 2005 06:00:40 +0100), Patrick McHardy <***@trash.net> says:

|We don't need this for IPv6 yet. Once we get nf_conntrack in we
|might need this, but its IPv6 fragment handling is different from
|ip_conntrack, I need to check first.

Ok. It would be better to have some comment but anyway...
kozakai-san?

--yoshfuji
Yasuyuki KOZAKAI
2005-01-31 05:42:52 UTC
Hi,

From: YOSHIFUJI Hideaki / 吉藤英明 <***@linux-ipv6.org>
Date: Mon, 31 Jan 2005 14:16:36 +0900 (JST)
Post by YOSHIFUJI Hideaki / 吉藤英明
|We don't need this for IPv6 yet. Once we get nf_conntrack in we
|might need this, but its IPv6 fragment handling is different from
|ip_conntrack, I need to check first.
Ok. It would be better to have some comment but anyway...
kozakai-san?
IMO, a fix for nf_conntrack isn't needed yet, because someone may change
the IPv6 fragment handling in nf_conntrack.

Anyway, the current nf_conntrack passes the original (not de-fragmented) skb to
the IPv6 stack; nf_conntrack doesn't touch its dst.

Regards,
----------------------------------------
Yasuyuki KOZAKAI

Communication Platform Laboratory,
Corporate Research & Development Center,
Toshiba Corporation

***@toshiba.co.jp
----------------------------------------
David S. Miller
2005-01-31 05:11:50 UTC
On Mon, 31 Jan 2005 06:00:40 +0100
Post by Patrick McHardy
We don't need this for IPv6 yet. Once we get nf_conntrack in we
might need this, but its IPv6 fragment handling is different from
ip_conntrack, I need to check first.
Right, ipv6 netfilter cannot create this situation yet.

However, logically the fix is still correct and I'll add
it into the tree.
Herbert Xu
2005-01-31 05:40:52 UTC
Post by David S. Miller
On Mon, 31 Jan 2005 06:00:40 +0100
Post by Patrick McHardy
We don't need this for IPv6 yet. Once we get nf_conntrack in we
might need this, but its IPv6 fragment handling is different from
ip_conntrack, I need to check first.
Right, ipv6 netfilter cannot create this situation yet.
Not through netfilter but I'm not convinced that other paths
won't do this.

For instance, what about ipv6_frag_rcv -> esp6_input -> ... -> ip6_fragment?
That would seem to be a potential path for a non-NULL dst to survive
through to ip6_fragment, no?

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <***@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Russell King
2005-01-30 18:01:46 UTC
Post by Patrick McHardy
Post by Patrick McHardy
Post by Russell King
I don't know if the code is using fragment lists in ip_fragment(), but
on reading the code a question comes to mind: if we have a list of
fragments, does each fragment skb have a valid (and refcounted) dst
pointer before ip_fragment() does it's job? If yes, then isn't the
first ip_copy_metadata() in ip_fragment() going to overwrite this
pointer without dropping the refcount?
Nice spotting. If conntrack isn't loaded defragmentation happens after
routing, so this is likely the cause.
OTOH, if conntrack isn't loaded forwarded packet are never defragmented,
so frag_list should be empty. So probably false alarm, sorry.
I've just checked Phil's mails - both Phil and myself are using
netfilter on the troublesome boxen.

Also, FragCreates is zero, and this means that the frag_list was
not empty in all cases so far where ip_fragment() has been called.
(Reading the code, if frag_list was empty, we'd have to create some
fragments, which increments the FragCreates statistic.)
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
Phil Oester
2005-01-30 18:19:13 UTC
Post by Russell King
Post by Patrick McHardy
OTOH, if conntrack isn't loaded forwarded packet are never defragmented,
so frag_list should be empty. So probably false alarm, sorry.
I've just checked Phil's mails - both Phil and myself are using
netfilter on the troublesome boxen.
Also, since FragCreates is zero, and this does mean that the frag_list
is not empty in all cases so far where ip_fragment() has been called.
(Reading the code, if frag_list was empty, we'd have to create some
fragments, which increments the FragCreates statistic.)
The below testcase seems to illustrate the problem nicely -- ip_dst_cache
grows but never shrinks:

On gateway:

iptables -I FORWARD -d 10.10.10.0/24 -j DROP

On client:

for i in `seq 1 254` ; do ping -s 1500 -c 5 -w 1 -f 10.10.10.$i ; done


Phil
Phil Oester
2005-01-28 01:41:39 UTC
Post by Russell King
Post by David S. Miller
So they won't be listed in /proc/net/rt_cache (since they've been
removed from the lookup table) but they will be accounted for in
/proc/net/stat/rt_cache until the final release is done on the
routing cache object and it can be completely freed up.
Do you happen to be using IPV6 in any way by chance?
Yes. Someone suggested this evening that there may have been a recent
change to do with some IPv6 refcounting which may have caused this
problem. Is that something you can confirm?
FWIW, I do not use IPv6, and it is not compiled into the kernel.

Phil
Alexander Nyberg
2005-01-24 00:56:59 UTC
Post by Andrew Morton
I don't think I've ever really seen code to diagnose this.
A simplistic approach would be to add eight or so ulongs into struct page,
populate them with builtin_return_address(0...7) at allocation time, then
modify sysrq-m to walk mem_map[] printing it all out for pages which have
page_count() > 0. That'd find the culprit.
Hi Andrew

I put together something similar to what you described, but I made it a
proc file. It lists all pages owned by some caller and keeps a backtrace
of at most 8 addresses. Each page has an order: -1 if unused; if used, the
first page of the allocation records the order and the rest in the group are kept at -1.
Below is also a program to sort the enormous amount of
output; it groups together backtraces that are alike and lists them like:

5 times: Page allocated via order 0
[0xffffffff8015861f] __get_free_pages+31
[0xffffffff8015c0ef] cache_alloc_refill+719
[0xffffffff8015bd74] kmem_cache_alloc+84
[0xffffffff8015bddc] alloc_arraycache+60
[0xffffffff8015d15d] do_tune_cpucache+93
[0xffffffff8015bbf8] cache_alloc_debugcheck_after+280
[0xffffffff8015d31d] enable_cpucache+93
[0xffffffff8015d8a5] kmem_cache_create+1365

It's a bit of hackety-hack in the function trace routines, because doing
__builtin_return_address(0...7) doesn't work very well when it
runs off the end of the stack and the function itself doesn't check for it.

Tested on x86 with and without CONFIG_FRAME_POINTER and on x86-64 (which
might be the only archs it'll work on). I hope you like it ;)


Suggested use is
cat /proc/page_owner > pgown;
./below_program pgown pgsorted;
vim pgsorted

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

struct block_list {
struct block_list *next;
char *txt;
int len;
int num;
};

struct block_list *block_head;

int read_block(char *buf, int fd)
{
int ret = 0, rd = 0;
int hit = 0;
char *curr = buf;

for (;;) {
rd = read(fd, curr, 1);
if (rd <= 0)
return -1;

ret += rd;
if (*curr == '\n' && hit == 1)
return ret - 1;
else if (*curr == '\n')
hit = 1;
else
hit = 0;
curr++;
}
}

int find_duplicate(char *buf, int len)
{
struct block_list *iterate, *item, *prev;
char *txt;

iterate = block_head;
while (iterate) {
if (len != iterate->len)
goto iterate;
if (!memcmp(buf, iterate->txt, len)) {
iterate->num++;
return 1;
}
iterate:
iterate = iterate->next;
}

/* this block didn't exist */
txt = malloc(len);
item = malloc(sizeof(struct block_list));
strncpy(txt, buf, len);
item->len = len;
item->txt = txt;
item->num = 1;
item->next = NULL;

if (block_head) {
prev = block_head->next;
block_head->next = item;
item->next = prev;
} else
block_head = item;

return 0;
}
int main(int argc, char **argv)
{
int fdin, fdout;
char buf[1024];
int ret;
struct block_list *item;

fdin = open(argv[1], O_RDONLY);
fdout = open(argv[2], O_CREAT | O_RDWR | O_EXCL, S_IWUSR | S_IRUSR);
if (fdin < 0 || fdout < 0) {
printf("Usage: ./program <input> <output>\n");
perror("open: ");
exit(2);
}

for(;;) {
ret = read_block(buf, fdin);
if (ret < 0)
break;

buf[ret] = '\0';
find_duplicate(buf, ret);
}

for (item = block_head; item; item = item->next) {
int written;

/* errors? what errors... */
ret = snprintf(buf, 1024, "%d times: ", item->num);
written = write(fdout, buf, ret);
written = write(fdout, item->txt, item->len);
written = write(fdout, "\n", 1);
}
return 0;
}




===== fs/proc/proc_misc.c 1.113 vs edited =====
--- 1.113/fs/proc/proc_misc.c 2005-01-12 01:42:35 +01:00
+++ edited/fs/proc/proc_misc.c 2005-01-24 00:59:23 +01:00
@@ -534,6 +534,62 @@ static struct file_operations proc_sysrq
};
#endif

+#if 1
+#include <linux/bootmem.h>
+#include <linux/kallsyms.h>
+static ssize_t
+read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+{
+ struct page *start = pfn_to_page(min_low_pfn);
+ static struct page *page;
+ char *kbuf, *modname;
+ const char *symname;
+ int ret = 0, next_idx = 1;
+ char namebuf[128];
+ unsigned long offset = 0, symsize;
+ int i;
+
+ page = start + *ppos;
+ for (; page < pfn_to_page(max_pfn); page++) {
+ if (page->order >= 0)
+ break;
+ next_idx++;
+ continue;
+ }
+
+ if (page >= pfn_to_page(max_pfn))
+ return 0;
+
+ *ppos += next_idx;
+
+ kbuf = kmalloc(count, GFP_KERNEL);
+ if (!kbuf)
+ return -ENOMEM;
+
+ ret = snprintf(kbuf, count, "Page allocated via order %d\n", page->order);
+
+ for (i = 0; i < 8; i++) {
+ if (!page->trace[i])
+ break;
+ symname = kallsyms_lookup(page->trace[i], &symsize, &offset, &modname, namebuf);
+ ret += snprintf(kbuf + ret, count - ret, "[0x%lx] %s+%lu\n",
+ page->trace[i], namebuf, offset);
+ }
+
+ ret += snprintf(kbuf + ret, count -ret, "\n");
+
+ if (copy_to_user(buf, kbuf, ret))
+ ret = -EFAULT;
+
+ kfree(kbuf);
+ return ret;
+}
+
+static struct file_operations proc_page_owner_operations = {
+ .read = read_page_owner,
+};
+#endif
+
struct proc_dir_entry *proc_root_kcore;

void create_seq_entry(char *name, mode_t mode, struct file_operations *f)
@@ -610,6 +666,13 @@ void __init proc_misc_init(void)
entry = create_proc_entry("ppc_htab", S_IRUGO|S_IWUSR, NULL);
if (entry)
entry->proc_fops = &ppc_htab_operations;
+ }
+#endif
+#if 1
+ entry = create_proc_entry("page_owner", S_IWUSR | S_IRUGO, NULL);
+ if (entry) {
+ entry->proc_fops = &proc_page_owner_operations;
+ entry->size = 1024;
}
#endif
}
===== include/linux/mm.h 1.211 vs edited =====
--- 1.211/include/linux/mm.h 2005-01-11 02:29:23 +01:00
+++ edited/include/linux/mm.h 2005-01-23 23:22:52 +01:00
@@ -260,6 +260,10 @@ struct page {
void *virtual; /* Kernel virtual address (NULL if
not kmapped, ie. highmem) */
#endif /* WANT_PAGE_VIRTUAL */
+#if 1
+ int order;
+ unsigned long trace[8];
+#endif
};

/*
===== mm/page_alloc.c 1.254 vs edited =====
--- 1.254/mm/page_alloc.c 2005-01-11 02:29:33 +01:00
+++ edited/mm/page_alloc.c 2005-01-24 01:04:38 +01:00
@@ -103,6 +103,7 @@ static void bad_page(const char *functio
tainted |= TAINT_BAD_PAGE;
}

+
#ifndef CONFIG_HUGETLB_PAGE
#define prep_compound_page(page, order) do { } while (0)
#define destroy_compound_page(page, order) do { } while (0)
@@ -680,6 +681,41 @@ int zone_watermark_ok(struct zone *z, in
return 1;
}

+static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
+{
+ return p > (void *)tinfo &&
+ p < (void *)tinfo + THREAD_SIZE - 3;
+}
+
+static inline void __stack_trace(struct page *page, unsigned long *stack, unsigned long bp)
+{
+ int i = 0;
+ unsigned long addr;
+ struct thread_info *tinfo = (struct thread_info *)
+ ((unsigned long)stack & (~(THREAD_SIZE - 1)));
+
+ memset(page->trace, 0, sizeof(long) * 8);
+
+#ifdef CONFIG_FRAME_POINTER
+ while (valid_stack_ptr(tinfo, (void *)bp)) {
+ addr = *(unsigned long *)(bp + sizeof(long));
+ page->trace[i] = addr;
+ if (++i >= 8)
+ break;
+ bp = *(unsigned long *)bp;
+ }
+#else
+ while (valid_stack_ptr(tinfo, stack)) {
+ addr = *stack++;
+ if (__kernel_text_address(addr)) {
+ page->trace[i] = addr;
+ if (++i >= 8)
+ break;
+ }
+ }
+#endif
+}
+
/*
* This is the 'heart' of the zoned buddy allocator.
*
@@ -709,6 +745,7 @@ __alloc_pages(unsigned int gfp_mask, uns
int alloc_type;
int do_retry;
int can_try_harder;
+ unsigned long address, bp;

might_sleep_if(wait);

@@ -825,6 +862,14 @@ nopage:
return NULL;
got_pg:
zone_statistics(zonelist, z);
+ page->order = (int) order;
+#ifdef CONFIG_X86_64
+ asm ("movq %%rbp, %0" : "=r" (bp) : );
+#else
+ asm ("movl %%ebp, %0" : "=r" (bp) : );
+#endif
+ __stack_trace(page, &address, bp);
+
return page;
}

@@ -877,6 +922,7 @@ fastcall void __free_pages(struct page *
free_hot_page(page);
else
__free_pages_ok(page, order);
+ page->order = -1;
}
}

@@ -1508,6 +1554,7 @@ void __init memmap_init_zone(unsigned lo
set_page_address(page, __va(start_pfn << PAGE_SHIFT));
#endif
start_pfn++;
+ page->order = -1;
}
}
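
(The companion userspace sorter referred to later in the thread is not reproduced in this excerpt. A minimal sketch of such an aggregator - illustrative only, not Alexander's original; the file name and output format are arbitrary - assuming the record format emitted by read_page_owner() above, i.e. one "Page allocated via order N" line, up to eight trace lines, and a blank separator line per page:)

/*
 * Minimal sketch of a userspace sorter for the /proc/page_owner records
 * produced by read_page_owner() above.  Records are separated by blank
 * lines; identical records are merged and printed with a "<count> times:"
 * prefix, most frequent first.
 *
 *   gcc -O2 -o page_owner_sort page_owner_sort.c
 *   ./page_owner_sort < /proc/page_owner | less
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct rec {
    char *text;         /* full record, trace lines included */
    int count;
};

static int by_text(const void *a, const void *b)
{
    return strcmp(((const struct rec *)a)->text,
                  ((const struct rec *)b)->text);
}

static int by_count(const void *a, const void *b)
{
    return ((const struct rec *)b)->count -
           ((const struct rec *)a)->count;      /* descending */
}

int main(void)
{
    struct rec *recs = NULL;
    size_t nrecs = 0, alloc = 0, i, out;
    char line[256], buf[4096];

    buf[0] = '\0';
    for (;;) {
        char *got = fgets(line, sizeof(line), stdin);

        if (got && line[0] != '\n') {
            /* still inside a record: append this line */
            if (strlen(buf) + strlen(line) < sizeof(buf))
                strcat(buf, line);
            continue;
        }
        if (buf[0] != '\0') {
            /* blank line or EOF: store the finished record */
            if (nrecs == alloc) {
                alloc = alloc ? alloc * 2 : 1024;
                recs = realloc(recs, alloc * sizeof(*recs));
                if (!recs)
                    return 1;
            }
            recs[nrecs].text = strdup(buf);
            recs[nrecs].count = 1;
            nrecs++;
            buf[0] = '\0';
        }
        if (!got)
            break;
    }
    if (!nrecs)
        return 0;

    /* sort identical records next to each other, then merge them */
    qsort(recs, nrecs, sizeof(*recs), by_text);
    for (i = 0, out = 0; i < nrecs; i++) {
        if (out && !strcmp(recs[out - 1].text, recs[i].text))
            recs[out - 1].count++;
        else
            recs[out++] = recs[i];
    }
    qsort(recs, out, sizeof(*recs), by_count);

    for (i = 0; i < out; i++)
        printf("%d times: %s\n", recs[i].count, recs[i].text);
    return 0;
}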
Andrew Morton
2005-01-24 20:56:49 UTC
Permalink
Post by Jens Axboe
Here is the output of your program (somewhat modified, I cut the runtime
by 19/20 killing the 1-byte reads :-) after 10 hours of use with
bk-current as of this morning.
hmm..

62130 times: Page allocated via order 0
[0xffffffff80173d6e] pipe_writev+574
[0xffffffff8017402a] pipe_write+26
[0xffffffff80168b47] vfs_write+199
[0xffffffff80168cb3] sys_write+83
[0xffffffff8011e4f3] cstar_do_call+27

55552 times: Page allocated via order 0
[0xffffffff80173d6e] pipe_writev+574
[0xffffffff8017402a] pipe_write+26
[0xffffffff8038b88d] thread_return+41
[0xffffffff80168b47] vfs_write+199
[0xffffffff80168cb3] sys_write+83
[0xffffffff8011e4f3] cstar_do_call+27

Would indicate that the new pipe code is leaking.
Jens Axboe
2005-01-24 21:05:39 UTC
Permalink
Post by Andrew Morton
Here is the output of your program (somewhat modified, I cut the runtime
by 19/20 killing the 1-byte reads :-) after 10 hours of use with
bk-current as of this morning.
hmm..
62130 times: Page allocated via order 0
[0xffffffff80173d6e] pipe_writev+574
[0xffffffff8017402a] pipe_write+26
[0xffffffff80168b47] vfs_write+199
[0xffffffff80168cb3] sys_write+83
[0xffffffff8011e4f3] cstar_do_call+27
55552 times: Page allocated via order 0
[0xffffffff80173d6e] pipe_writev+574
[0xffffffff8017402a] pipe_write+26
[0xffffffff8038b88d] thread_return+41
[0xffffffff80168b47] vfs_write+199
[0xffffffff80168cb3] sys_write+83
[0xffffffff8011e4f3] cstar_do_call+27
Would indicate that the new pipe code is leaking.
I suspected that, I even tried backing out the new pipe patches but it
still seemed to leak. And the test cases I tried to come up with could
not provoke a pipe leak. But yeah, it certainly is the most likely
culprit and the leak did start in the period when it was introduced.
--
Jens Axboe
Linus Torvalds
2005-01-24 22:35:47 UTC
Permalink
Post by Andrew Morton
Would indicate that the new pipe code is leaking.
Duh. It's the pipe merging.

Linus

----
--- 1.40/fs/pipe.c 2005-01-15 12:01:16 -08:00
+++ edited/fs/pipe.c 2005-01-24 14:35:09 -08:00
@@ -630,13 +630,13 @@
struct pipe_inode_info *info = inode->i_pipe;

inode->i_pipe = NULL;
- if (info->tmp_page)
- __free_page(info->tmp_page);
for (i = 0; i < PIPE_BUFFERS; i++) {
struct pipe_buffer *buf = info->bufs + i;
if (buf->ops)
buf->ops->release(info, buf);
}
+ if (info->tmp_page)
+ __free_page(info->tmp_page);
kfree(info);
}
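
(Why the order matters: with the pipe merging code, releasing an anonymous pipe buffer does not always free its page - it parks the page in info->tmp_page when that slot is empty, so the next writer can reuse it. The release callback is roughly the following; this is reconstructed from memory of the 2.6.11-era fs/pipe.c, so treat the exact name and layout as approximate:)

static void anon_pipe_buf_release(struct pipe_inode_info *info,
                                  struct pipe_buffer *buf)
{
        struct page *page = buf->page;

        /* keep one spare page around for the next write... */
        if (!info->tmp_page) {
                info->tmp_page = page;
                return;
        }
        /* ...and free everything beyond that */
        __free_page(page);
}

With the old ordering, free_pipe_info() checked tmp_page before the release loop; if the slot was empty at that point, releasing the first still-occupied buffer parked its page there, and the subsequent kfree(info) dropped the only remaining reference to it - one page leaked for every pipe torn down with unread data. Freeing tmp_page after the loop also catches a page parked during the loop.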
Paulo Marques
2005-01-25 15:53:05 UTC
Permalink
Post by Linus Torvalds
Post by Andrew Morton
Would indicate that the new pipe code is leaking.
Duh. It's the pipe merging.
Have we just seen the "plumber" side of Linus?

After all, he just fixed a "leaking pipe" :)


(sorry for the OT, just couldn't help it)
--
Paulo Marques - www.grupopie.com

"A journey of a thousand miles begins with a single step."
Lao-tzu, The Way of Lao-tzu
Jens Axboe
2005-01-26 08:01:53 UTC
Permalink
Post by Linus Torvalds
Post by Andrew Morton
Would indicate that the new pipe code is leaking.
Duh. It's the pipe merging.
Linus
----
--- 1.40/fs/pipe.c 2005-01-15 12:01:16 -08:00
+++ edited/fs/pipe.c 2005-01-24 14:35:09 -08:00
@@ -630,13 +630,13 @@
struct pipe_inode_info *info = inode->i_pipe;
inode->i_pipe = NULL;
- if (info->tmp_page)
- __free_page(info->tmp_page);
for (i = 0; i < PIPE_BUFFERS; i++) {
struct pipe_buffer *buf = info->bufs + i;
if (buf->ops)
buf->ops->release(info, buf);
}
+ if (info->tmp_page)
+ __free_page(info->tmp_page);
kfree(info);
}
It's better now, no leak anymore. But the 2.6.11-rcX vm is still very
screwy, to get something close to nice and smooth behaviour I have to
run a fillmem every now and then to reclaim used memory.
--
Jens Axboe
Andrew Morton
2005-01-26 08:11:13 UTC
Permalink
Post by Jens Axboe
But the 2.6.11-rcX vm is still very
screwy, to get something close to nice and smooth behaviour I have to
run a fillmem every now and then to reclaim used memory.
Can you provide more details?
Jens Axboe
2005-01-26 08:40:05 UTC
Permalink
Post by Andrew Morton
Post by Jens Axboe
But the 2.6.11-rcX vm is still very
screwy, to get something close to nice and smooth behaviour I have to
run a fillmem every now and then to reclaim used memory.
Can you provide more details?
Hmm not really, I just seem to have a very large piece of
non-cache/buffer memory that seems reluctant to shrink on light memory
pressure. This makes the box feel sluggish, if I force reclaim by
running fillmem and swapping on/off again, it feels much better.

I should mention that this is with 2.6.bk + andreas oom patches that he
asked me to test. I can try 2.6.11-rc2-bkX if you think I should.
--
Jens Axboe
Andrew Morton
2005-01-26 08:44:19 UTC
Permalink
Post by Jens Axboe
Post by Andrew Morton
Post by Jens Axboe
But the 2.6.11-rcX vm is still very
screwy, to get something close to nice and smooth behaviour I have to
run a fillmem every now and then to reclaim used memory.
Can you provide more details?
Hmm not really, I just seem to have a very large piece of
non-cache/buffer memory that seems reluctant to shrink on light memory
pressure.
If it's not pagecache then what is it? slab?
Post by Jens Axboe
This makes the box feel sluggish, if I force reclaim by
running fillmem and swapping on/off again, it feels much better.
before-n-after /proc/meminfo would be interesting.

If you actually meant that it _is_ sticky pagecache then perhaps the recent
mark_page_accessed() changes in filemap.c, although I'd be surprised.
Post by Jens Axboe
I should mention that this is with 2.6.bk + andreas oom patches that he
asked me to test. I can try 2.6.11-rc2-bkX if you think I should.
They shouldn't be causing this sort of thing.
Jens Axboe
2005-01-26 08:47:44 UTC
Permalink
Post by Andrew Morton
Post by Jens Axboe
Post by Andrew Morton
Post by Jens Axboe
But the 2.6.11-rcX vm is still very
screwy, to get something close to nice and smooth behaviour I have to
run a fillmem every now and then to reclaim used memory.
Can you provide more details?
Hmm not really, I just seem to have a very large piece of
non-cache/buffer memory that seems reluctant to shrink on light memory
pressure.
If it's not pagecache then what is it? slab?
Must be, if it's reclaimable.
Post by Andrew Morton
Post by Jens Axboe
This makes the box feel sluggish, if I force reclaim by
running fillmem and swapping on/off again, it feels much better.
before-n-after /proc/meminfo would be interesting.
If you actually meant that is _is_ sticky pagecache then perhaps the recent
mark_page_accessed() changes in filemap.c, although I'd be surprised.
I don't think it's sticky page cache, it seems to shrink just fine. This
is my current situation:

***@wiggum:/home/axboe $ free
total used free shared buffers cached
Mem: 1024992 1015288 9704 0 76680 328148
-/+ buffers/cache: 610460 414532
Swap: 0 0 0

***@wiggum:/home/axboe $ cat /proc/meminfo
MemTotal: 1024992 kB
MemFree: 9768 kB
Buffers: 76664 kB
Cached: 328024 kB
SwapCached: 0 kB
Active: 534956 kB
Inactive: 224060 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 1024992 kB
LowFree: 9768 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 1400 kB
Writeback: 0 kB
Mapped: 464232 kB
Slab: 225864 kB
CommitLimit: 512496 kB
Committed_AS: 773844 kB
PageTables: 8004 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 644 kB
VmallocChunk: 34359737167 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 2048 kB
Post by Andrew Morton
Post by Jens Axboe
I should mention that this is with 2.6.bk + andreas oom patches that he
asked me to test. I can try 2.6.11-rc2-bkX if you think I should.
They shouldn't be causing this sort of thing.
I didn't think so, just mentioning it for completeness :)
--
Jens Axboe
Jens Axboe
2005-01-26 08:52:30 UTC
Permalink
Post by Jens Axboe
Post by Andrew Morton
Post by Jens Axboe
Post by Andrew Morton
Post by Jens Axboe
But the 2.6.11-rcX vm is still very
screwy, to get something close to nice and smooth behaviour I have to
run a fillmem every now and then to reclaim used memory.
Can you provide more details?
Hmm not really, I just seem to have a very large piece of
non-cache/buffer memory that seems reluctant to shrink on light memory
pressure.
If it's not pagecache then what is it? slab?
Must be, if it's reclaimable.
Post by Andrew Morton
Post by Jens Axboe
This makes the box feel sluggish, if I force reclaim by
running fillmem and swapping on/off again, it feels much better.
before-n-after /proc/meminfo would be interesting.
If you actually meant that is _is_ sticky pagecache then perhaps the recent
mark_page_accessed() changes in filemap.c, although I'd be surprised.
I don't think it's sticky page cache, it seems to shrink just fine. This
total used free shared buffers cached
Mem: 1024992 1015288 9704 0 76680 328148
-/+ buffers/cache: 610460 414532
Swap: 0 0 0
MemTotal: 1024992 kB
MemFree: 9768 kB
Buffers: 76664 kB
Cached: 328024 kB
SwapCached: 0 kB
Active: 534956 kB
Inactive: 224060 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 1024992 kB
LowFree: 9768 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 1400 kB
Writeback: 0 kB
Mapped: 464232 kB
Slab: 225864 kB
CommitLimit: 512496 kB
Committed_AS: 773844 kB
PageTables: 8004 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 644 kB
VmallocChunk: 34359737167 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 2048 kB
The (by far) two largest slab consumers are:

dentry_cache 140940 183060 216 18 1 : tunables 120 60
0 : slabdata 10170 10170 0

and

ext3_inode_cache 185494 194265 776 5 1 : tunables 54 27
0 : slabdata 38853 38853 0

there are about ~40k buffer_head entries as well.
--
Jens Axboe
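
(Aside, not part of the thread: when hunting this kind of growth it helps to diff two /proc/slabinfo snapshots rather than eyeball them. A minimal sketch in C - program name, interval and output format are arbitrary choices - that prints the caches whose active object count grew during the interval:)

/*
 * Snapshot /proc/slabinfo twice and print the caches that grew.
 *
 *   gcc -O2 -o slabgrowth slabgrowth.c && ./slabgrowth 60
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAX_CACHES 512

struct slab {
    char name[64];
    long active;
};

static int snapshot(struct slab *tab, int max)
{
    FILE *f = fopen("/proc/slabinfo", "r");
    char line[512];
    int n = 0;

    if (!f) {
        perror("/proc/slabinfo");
        exit(1);
    }
    while (n < max && fgets(line, sizeof(line), f)) {
        /* data lines start with the cache name; skip the two headers */
        if (line[0] == '#' || !strncmp(line, "slabinfo", 8))
            continue;
        if (sscanf(line, "%63s %ld", tab[n].name, &tab[n].active) == 2)
            n++;
    }
    fclose(f);
    return n;
}

int main(int argc, char **argv)
{
    struct slab before[MAX_CACHES], after[MAX_CACHES];
    int secs = argc > 1 ? atoi(argv[1]) : 60;
    int nb, na, i, j;

    nb = snapshot(before, MAX_CACHES);
    sleep(secs);
    na = snapshot(after, MAX_CACHES);

    for (i = 0; i < na; i++) {
        for (j = 0; j < nb; j++) {
            if (strcmp(after[i].name, before[j].name))
                continue;
            if (after[i].active > before[j].active)
                printf("%-24s %8ld -> %8ld (+%ld)\n",
                       after[i].name, before[j].active,
                       after[i].active,
                       after[i].active - before[j].active);
            break;
        }
    }
    return 0;
}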
William Lee Irwin III
2005-01-26 09:00:40 UTC
Permalink
Post by Jens Axboe
Post by Jens Axboe
Slab: 225864 kB
dentry_cache 140940 183060 216 18 1 : tunables 120 60
0 : slabdata 10170 10170 0
and
ext3_inode_cache 185494 194265 776 5 1 : tunables 54 27
0 : slabdata 38853 38853 0
there are about ~40k buffer_head entries as well.
These don't appear to be due to fragmentation. The dcache has 76.99%
utilization and ext3_inode_cache has 95.48% utilization.


-- wli
Andrew Morton
2005-01-26 08:58:44 UTC
Permalink
Post by Jens Axboe
...
MemTotal: 1024992 kB
MemFree: 9768 kB
Buffers: 76664 kB
Cached: 328024 kB
SwapCached: 0 kB
Active: 534956 kB
Inactive: 224060 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 1024992 kB
LowFree: 9768 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 1400 kB
Writeback: 0 kB
Mapped: 464232 kB
Slab: 225864 kB
CommitLimit: 512496 kB
Committed_AS: 773844 kB
PageTables: 8004 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 644 kB
VmallocChunk: 34359737167 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 2048 kB
OK. There's rather a lot of anonymous memory there - 700M on the LRU, 300M
pagecache, 400M anon, 200M of slab. You need some swapspace ;)

What are the symptoms? Slow to load applications? Lots of paging? Poor
I/O speeds?
Jens Axboe
2005-01-26 09:03:38 UTC
Permalink
Post by Andrew Morton
Post by Jens Axboe
...
MemTotal: 1024992 kB
MemFree: 9768 kB
Buffers: 76664 kB
Cached: 328024 kB
SwapCached: 0 kB
Active: 534956 kB
Inactive: 224060 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 1024992 kB
LowFree: 9768 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 1400 kB
Writeback: 0 kB
Mapped: 464232 kB
Slab: 225864 kB
CommitLimit: 512496 kB
Committed_AS: 773844 kB
PageTables: 8004 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 644 kB
VmallocChunk: 34359737167 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 2048 kB
OK. There's rather a lot of anonymous memory there - 700M on the LRU, 300M
pageache, 400M anon, 200M of slab. You need some swapspace ;)
Just forgot to swapon again after the recent fillmem cleanup; I do have
1G of swap usually on as well!
Post by Andrew Morton
What are the symptoms? Slow to load applications? Lots of paging? Poor
I/O speeds?
No paging, it basically never hits swap. Buffered I/O by itself seems to
run at full speed. But application startup seems sluggish. Hard to
explain really, but there's a noticeable difference in the feel of usage
when it has just been force-pruned with fillmem and before.
--
Jens Axboe
Parag Warudkar
2005-01-26 15:52:42 UTC
Permalink
I am running 2.6.11-rc2 plus Linus' fix for the pipe-related leak. I am
currently running a QT+KDE compile with distcc on two machines. I have been
running these machines for around 11 hours now and swap use seems to be
growing steadily on the -rc2 box - it went to ~260kb after 10hrs, after
which I ran swapoff. Now, a couple of hours later, it is at 40kb. The other
machine is a Knoppix 2.4.26 kernel with less memory and it hasn't run
into swap at all.

On the -rc2 machine, however, I don't feel anything is sluggish yet. But
I think if I leave it running long enough it might run out of memory.

I don't know if this is perfectly normal given the differences between
2.4.x and 2.6.x VM. I will keep it running and under load for a while
and report any interesting stuff.

Here is /proc/meminfo on the rc2 box as of now -
***@localhost paragw]# cat /proc/meminfo
MemTotal: 775012 kB
MemFree: 55260 kB
Buffers: 72732 kB
Cached: 371956 kB
SwapCached: 40 kB
Active: 489508 kB
Inactive: 182360 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 775012 kB
LowFree: 55260 kB
SwapTotal: 787176 kB
SwapFree: 787136 kB
Dirty: 2936 kB
Writeback: 0 kB
Mapped: 259024 kB
Slab: 32288 kB
CommitLimit: 1174680 kB
Committed_AS: 450692 kB
PageTables: 3072 kB
VmallocTotal: 253876 kB
VmallocUsed: 25996 kB
VmallocChunk: 226736 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 4096 kB

Parag
Post by Andrew Morton
Post by Jens Axboe
...
MemTotal: 1024992 kB
MemFree: 9768 kB
Buffers: 76664 kB
Cached: 328024 kB
SwapCached: 0 kB
Active: 534956 kB
Inactive: 224060 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 1024992 kB
LowFree: 9768 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 1400 kB
Writeback: 0 kB
Mapped: 464232 kB
Slab: 225864 kB
CommitLimit: 512496 kB
Committed_AS: 773844 kB
PageTables: 8004 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 644 kB
VmallocChunk: 34359737167 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 2048 kB
OK. There's rather a lot of anonymous memory there - 700M on the LRU, 300M
pageache, 400M anon, 200M of slab. You need some swapspace ;)
What are the symptoms? Slow to load applications? Lots of paging? Poor
I/O speeds?
Lennert Van Alboom
2005-02-02 09:29:58 UTC
Permalink
I applied the patch and it works like a charm. As a kinky side effect: before
this patch, using a compiled-in vesa or vga16 framebuffer worked with the
proprietary nvidia driver, whereas now tty1-6 are corrupt when not using
80x25. Strangeness :)

Lennert
Post by Linus Torvalds
Post by Andrew Morton
Would indicate that the new pipe code is leaking.
Duh. It's the pipe merging.
Linus
----
--- 1.40/fs/pipe.c 2005-01-15 12:01:16 -08:00
+++ edited/fs/pipe.c 2005-01-24 14:35:09 -08:00
@@ -630,13 +630,13 @@
struct pipe_inode_info *info = inode->i_pipe;
inode->i_pipe = NULL;
- if (info->tmp_page)
- __free_page(info->tmp_page);
for (i = 0; i < PIPE_BUFFERS; i++) {
struct pipe_buffer *buf = info->bufs + i;
if (buf->ops)
buf->ops->release(info, buf);
}
+ if (info->tmp_page)
+ __free_page(info->tmp_page);
kfree(info);
}
Linus Torvalds
2005-02-02 16:00:10 UTC
Permalink
Post by Lennert Van Alboom
I applied the patch and it works like a charm. As a kinky side effect: before
this patch, using a compiled-in vesa or vga16 framebuffer worked with the
proprietary nvidia driver, whereas now tty1-6 are corrupt when not using
80x25. Strangeness :)
It really sounds like you should lay off those pharmaceutical drugs ;)

That is _strange_. Is it literally just this single pipe merging change
that matters to you? No other changes? I don't see how it could
_possibly_ make any difference at all to anything else.

Linus
Lennert Van Alboom
2005-02-02 16:19:14 UTC
Permalink
Positive, I only applied this single two-line change. I'm not capable of
messing with kernel code myself so I prefer not to. Probably just a lucky
shot that the vesa didn't go nuts with nvidia before... O well, with a bit
more o'those pharmaceutical drugs even this 80x25 doesn't look too bad.
Hurray!

Lennert
Post by Linus Torvalds
before this patch, using a compiled-in vesa or vga16 framebuffer worked
with the proprietary nvidia driver, whereas now tty1-6 are corrupt when
not using 80x25. Strangeness :)
It really sounds like you should lay off those pharmaceutical drugs ;)
That is _strange_. Is it literally just this single pipe merging change
that matters to you? No other changces? I don't see how it could
_possibly_ make any difference at all to anything else.
Linus
Dave Hansen
2005-02-02 17:49:20 UTC
Permalink
I think there's still something funky going on in the pipe code, at
least in 2.6.11-rc2-mm2, which does contain the misordered __free_page()
fix in pipe.c. I'm noticing any leak pretty easily because I'm
attempting memory removal of highmem areas, and these apparently leaked
pipe pages are the only things keeping those from succeeding.

In any case, I'm running a horribly hacked up kernel, but this is
certainly a new problem, and not one that I've run into before. Here's
output from the new CONFIG_PAGE_OWNER code:

Page (e0c4f8b8) pfn: 00566606 allocated via order 0
[0xc0162ef6] pipe_writev+542
[0xc0157f48] do_readv_writev+288
[0xc0163114] pipe_write+0
[0xc0134484] ltt_log_event+64
[0xc0158077] vfs_writev+75
[0xc01581ac] sys_writev+104
[0xc0102430] no_syscall_entry_trace+11

And some more information about the page (yes, it's in the vmalloc
space)

page: e0c4f8b8
pfn: 0008a54e 566606
count: 1
mapcount: 0
index: 786431
mapping: 00000000
private: 00000000
lru->prev: 00200200
lru->next: 00100100
PG_locked: 0
PG_error: 0
PG_referenced: 0
PG_uptodate: 0
PG_dirty: 0
PG_lru: 0
PG_active: 0
PG_slab: 0
PG_highmem: 1
PG_checked: 0
PG_arch_1: 0
PG_reserved: 0
PG_private: 0
PG_writeback: 0
PG_nosave: 0
PG_compound: 0
PG_swapcache: 0
PG_mappedtodisk: 0
PG_reclaim: 0
PG_nosave_free: 0
PG_capture: 1


-- Dave
Linus Torvalds
2005-02-02 18:27:17 UTC
Permalink
Post by Dave Hansen
In any case, I'm running a horribly hacked up kernel, but this is
certainly a new problem, and not one that I've run into before. Here's
Hmm.. Everything looks fine. One new thing about the pipe code is that it
historically never allocated HIGHMEM pages, and the new code no longer
cares and thus can allocate anything. So there's nothing strange in your
output that I can see.

How many of these pages do you see? It's normal for a single pipe to be
associated with up to 16 pages (although that would only happen if there
is no reader or a slow reader, which is obviously not very common).

Now, if your memory freeing code depends on the fact that all HIGHMEM
pages are always "freeable" (page cache + VM mappings only), then yes, the
new pipe code introduces highmem pages that weren't highmem before. But
such long-lived and unfreeable pages have been there before too: kernel
modules (or any other vmalloc() user, for that matter) also do the same
thing.

Now, there _is_ another possibility here: we might have had a pipe leak
before, and the new pipe code would potentially make it a lot more
noticeable, with up to sixteen times as many pages lost if somebody freed
a pipe inode without calling "free_pipe_info()". I don't see where that
would happen - all the normal "release" functions seem fine.

Hmm.. Adding a

WARN_ON(inode->i_pipe);

to "iput_final()" might be a good idea - showing if somebody is releasing
an inode while it still associated with a pipe-info data structure.

Also, while I don't see how a write could leak, maybe you could
add a

WARN_ON(buf->ops);

to the pipe_writev() case just before we insert a new buffer (ie to just
after the comment that says "Insert it into the buffer array"). Just to
see if the circular buffer handling might overwrite an old entry (although
I _really_ don't see that - it's not like the code is complex, and it
would also be accompanied by data-loss in the pipe, so we'd have seen
that, methinks).

Linus
Dave Hansen
2005-02-02 19:07:01 UTC
Permalink
Linus Torvalds
2005-02-02 21:08:19 UTC
Permalink
Post by Dave Hansen
Strangely enough, it seems to be one single, persistent page.
Ok. Almost certainly not a leak.

It's most likely the FIFO that "init" opens (/dev/initctl). FIFO's use the
pipe code too.

If you don't want unreclaimable highmem pages, then I suspect you just
need to change the GFP_HIGHUSER to a GFP_USER in fs/pipe.c

Linus
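
(Concretely, the allocation Linus refers to is the writer's staging page in pipe_writev(); a sketch of the change, with the surrounding error handling approximated from the 2.6.11-era code rather than quoted exactly:)

        if (!page) {
                page = alloc_page(GFP_USER);    /* was: alloc_page(GFP_HIGHUSER) */
                if (unlikely(!page)) {
                        if (!ret)
                                ret = -ENOMEM;
                        break;
                }
        }

GFP_USER keeps the page in lowmem, at the cost of no longer being able to place pipe data in highmem on 32-bit machines.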
Jens Axboe
2005-01-24 20:47:00 UTC
Permalink
Post by Alexander Nyberg
Post by Andrew Morton
I don't think I've ever really seen code to diagnose this.
A simplistic approach would be to add eight or so ulongs into struct page,
populate them with builtin_return_address(0...7) at allocation time, then
modify sysrq-m to walk mem_map[] printing it all out for pages which have
page_count() > 0. That'd find the culprit.
Hi Andrew
I put something similar together of what you described but I made it a
proc-file. It lists all pages owned by some caller and keeps a backtrace
of max 8 addresses. Each page has an order, -1 for unused and if used it lists
the order under which the first page is allocated, the rest in the group are kept -1.
Below is also a program to sort the enormous amount of
5 times: Page allocated via order 0
[0xffffffff8015861f] __get_free_pages+31
[0xffffffff8015c0ef] cache_alloc_refill+719
[0xffffffff8015bd74] kmem_cache_alloc+84
[0xffffffff8015bddc] alloc_arraycache+60
[0xffffffff8015d15d] do_tune_cpucache+93
[0xffffffff8015bbf8] cache_alloc_debugcheck_after+280
[0xffffffff8015d31d] enable_cpucache+93
[0xffffffff8015d8a5] kmem_cache_create+1365
Here is the output of your program (somewhat modified, I cut the runtime
by 19/20 killing the 1-byte reads :-) after 10 hours of use with
bk-current as of this morning.
--
Jens Axboe
Andrew Morton
2005-01-24 22:05:07 UTC
Permalink
Jan Kasprzak
2005-02-07 11:00:30 UTC
Permalink
: I've been running 2.6.11-rc1 on my dual opteron Fedora Core 3 box for a week
: now, and I think there is a memory leak somewhere. I am measuring the
: size of active and inactive pages (from /proc/meminfo), and it seems
: that the count of sum (active+inactive) pages is decreasing. Please
: take look at the graphs at
:
: http://www.linux.cz/stats/mrtg-rrd/vm_active.html

Well, with Linus' patch to fs/pipe.c the situation seems to
improve a bit, but some leak is still there (look at the "monthly" graph
at the above URL). The server has been running 2.6.11-rc2 + patch to fs/pipe.c
for the last 8 days. I am letting it run for a few more days in case you want
some debugging info from a live system. I am attaching my /proc/meminfo
and /proc/slabinfo.

-Yenya

# cat /proc/meminfo
MemTotal: 4045168 kB
MemFree: 59396 kB
Buffers: 17812 kB
Cached: 2861648 kB
SwapCached: 0 kB
Active: 827700 kB
Inactive: 2239752 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 4045168 kB
LowFree: 59396 kB
SwapTotal: 14651256 kB
SwapFree: 14650584 kB
Dirty: 1616 kB
Writeback: 0 kB
Mapped: 206540 kB
Slab: 861176 kB
CommitLimit: 16673840 kB
Committed_AS: 565684 kB
PageTables: 20812 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 7400 kB
VmallocChunk: 34359730867 kB
# cat /proc/slabinfo
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <batchcount> <limit> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
raid5/md5 256 260 1416 5 2 : tunables 24 12 8 : slabdata 52 52 0
rpc_buffers 8 8 2048 2 1 : tunables 24 12 8 : slabdata 4 4 0
rpc_tasks 12 20 384 10 1 : tunables 54 27 8 : slabdata 2 2 0
rpc_inode_cache 8 10 768 5 1 : tunables 54 27 8 : slabdata 2 2 0
fib6_nodes 27 61 64 61 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_dst_cache 17 36 320 12 1 : tunables 54 27 8 : slabdata 3 3 0
ndisc_cache 2 30 256 15 1 : tunables 120 60 8 : slabdata 2 2 0
rawv6_sock 4 4 1024 4 1 : tunables 54 27 8 : slabdata 1 1 0
udpv6_sock 1 4 960 4 1 : tunables 54 27 8 : slabdata 1 1 0
tcpv6_sock 8 8 1664 4 2 : tunables 24 12 8 : slabdata 2 2 0
unix_sock 567 650 768 5 1 : tunables 54 27 8 : slabdata 130 130 0
tcp_tw_bucket 445 920 192 20 1 : tunables 120 60 8 : slabdata 46 46 0
tcp_bind_bucket 389 2261 32 119 1 : tunables 120 60 8 : slabdata 19 19 0
tcp_open_request 135 310 128 31 1 : tunables 120 60 8 : slabdata 10 10 0
inet_peer_cache 32 62 128 31 1 : tunables 120 60 8 : slabdata 2 2 0
ip_fib_alias 20 119 32 119 1 : tunables 120 60 8 : slabdata 1 1 0
ip_fib_hash 18 61 64 61 1 : tunables 120 60 8 : slabdata 1 1 0
ip_dst_cache 1738 2060 384 10 1 : tunables 54 27 8 : slabdata 206 206 0
arp_cache 8 30 256 15 1 : tunables 120 60 8 : slabdata 2 2 0
raw_sock 3 9 832 9 2 : tunables 54 27 8 : slabdata 1 1 0
udp_sock 45 45 832 9 2 : tunables 54 27 8 : slabdata 5 5 0
tcp_sock 431 600 1472 5 2 : tunables 24 12 8 : slabdata 120 120 0
flow_cache 0 0 128 31 1 : tunables 120 60 8 : slabdata 0 0 0
dm_tio 0 0 24 156 1 : tunables 120 60 8 : slabdata 0 0 0
dm_io 0 0 32 119 1 : tunables 120 60 8 : slabdata 0 0 0
scsi_cmd_cache 261 315 512 7 1 : tunables 54 27 8 : slabdata 45 45 216
cfq_ioc_pool 0 0 48 81 1 : tunables 120 60 8 : slabdata 0 0 0
cfq_pool 0 0 176 22 1 : tunables 120 60 8 : slabdata 0 0 0
crq_pool 0 0 104 38 1 : tunables 120 60 8 : slabdata 0 0 0
deadline_drq 0 0 96 41 1 : tunables 120 60 8 : slabdata 0 0 0
as_arq 580 700 112 35 1 : tunables 120 60 8 : slabdata 20 20 432
xfs_acl 0 0 304 13 1 : tunables 54 27 8 : slabdata 0 0 0
xfs_chashlist 380 4879 32 119 1 : tunables 120 60 8 : slabdata 41 41 30
xfs_ili 15 120 192 20 1 : tunables 120 60 8 : slabdata 6 6 0
xfs_ifork 0 0 64 61 1 : tunables 120 60 8 : slabdata 0 0 0
xfs_efi_item 0 0 352 11 1 : tunables 54 27 8 : slabdata 0 0 0
xfs_efd_item 0 0 360 11 1 : tunables 54 27 8 : slabdata 0 0 0
xfs_buf_item 5 21 184 21 1 : tunables 120 60 8 : slabdata 1 1 0
xfs_dabuf 10 156 24 156 1 : tunables 120 60 8 : slabdata 1 1 0
xfs_da_state 2 8 488 8 1 : tunables 54 27 8 : slabdata 1 1 0
xfs_trans 1 9 872 9 2 : tunables 54 27 8 : slabdata 1 1 0
xfs_inode 500 959 528 7 1 : tunables 54 27 8 : slabdata 137 137 0
xfs_btree_cur 2 20 192 20 1 : tunables 120 60 8 : slabdata 1 1 0
xfs_bmap_free_item 0 0 24 156 1 : tunables 120 60 8 : slabdata 0 0 0
xfs_buf_t 44 72 448 9 1 : tunables 54 27 8 : slabdata 8 8 0
linvfs_icache 499 792 600 6 1 : tunables 54 27 8 : slabdata 132 132 0
nfs_write_data 36 36 832 9 2 : tunables 54 27 8 : slabdata 4 4 0
nfs_read_data 32 35 768 5 1 : tunables 54 27 8 : slabdata 7 7 0
nfs_inode_cache 28 72 952 4 1 : tunables 54 27 8 : slabdata 10 18 5
nfs_page 2 31 128 31 1 : tunables 120 60 8 : slabdata 1 1 0
isofs_inode_cache 10 12 600 6 1 : tunables 54 27 8 : slabdata 2 2 0
journal_handle 96 156 24 156 1 : tunables 120 60 8 : slabdata 1 1 0
journal_head 324 630 88 45 1 : tunables 120 60 8 : slabdata 14 14 60
revoke_table 6 225 16 225 1 : tunables 120 60 8 : slabdata 1 1 0
revoke_record 0 0 32 119 1 : tunables 120 60 8 : slabdata 0 0 0
ext3_inode_cache 829 1150 816 5 1 : tunables 54 27 8 : slabdata 230 230 54
ext3_xattr 0 0 88 45 1 : tunables 120 60 8 : slabdata 0 0 0
dnotify_cache 1 96 40 96 1 : tunables 120 60 8 : slabdata 1 1 0
dquot 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
eventpoll_pwq 0 0 72 54 1 : tunables 120 60 8 : slabdata 0 0 0
eventpoll_epi 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
kioctx 0 0 384 10 1 : tunables 54 27 8 : slabdata 0 0 0
kiocb 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
fasync_cache 0 0 24 156 1 : tunables 120 60 8 : slabdata 0 0 0
shmem_inode_cache 847 855 760 5 1 : tunables 54 27 8 : slabdata 171 171 0
posix_timers_cache 0 0 176 22 1 : tunables 120 60 8 : slabdata 0 0 0
uid_cache 17 122 64 61 1 : tunables 120 60 8 : slabdata 2 2 0
sgpool-128 32 32 4096 1 1 : tunables 24 12 8 : slabdata 32 32 0
sgpool-64 32 32 2048 2 1 : tunables 24 12 8 : slabdata 16 16 0
sgpool-32 140 140 1024 4 1 : tunables 54 27 8 : slabdata 35 35 76
sgpool-16 77 88 512 8 1 : tunables 54 27 8 : slabdata 11 11 0
sgpool-8 405 405 256 15 1 : tunables 120 60 8 : slabdata 27 27 284
blkdev_ioc 259 480 40 96 1 : tunables 120 60 8 : slabdata 5 5 0
blkdev_queue 80 84 680 6 1 : tunables 54 27 8 : slabdata 14 14 0
blkdev_requests 628 688 248 16 1 : tunables 120 60 8 : slabdata 43 43 480
biovec-(256) 256 256 4096 1 1 : tunables 24 12 8 : slabdata 256 256 0
biovec-128 256 256 2048 2 1 : tunables 24 12 8 : slabdata 128 128 0
biovec-64 358 380 1024 4 1 : tunables 54 27 8 : slabdata 95 95 54
biovec-16 270 300 256 15 1 : tunables 120 60 8 : slabdata 20 20 0
biovec-4 342 366 64 61 1 : tunables 120 60 8 : slabdata 6 6 0
biovec-1 5506200 5506200 16 225 1 : tunables 120 60 8 : slabdata 24472 24472 240
bio 5506189 5506189 128 31 1 : tunables 120 60 8 : slabdata 177619 177619 180
file_lock_cache 35 75 160 25 1 : tunables 120 60 8 : slabdata 3 3 0
sock_inode_cache 1069 1368 640 6 1 : tunables 54 27 8 : slabdata 228 228 0
skbuff_head_cache 5738 7185 256 15 1 : tunables 120 60 8 : slabdata 479 479 360
sock 4 12 640 6 1 : tunables 54 27 8 : slabdata 2 2 0
proc_inode_cache 222 483 584 7 1 : tunables 54 27 8 : slabdata 69 69 183
sigqueue 23 23 168 23 1 : tunables 120 60 8 : slabdata 1 1 0
radix_tree_node 18317 21476 536 7 1 : tunables 54 27 8 : slabdata 3068 3068 27
bdev_cache 55 60 768 5 1 : tunables 54 27 8 : slabdata 12 12 0
sysfs_dir_cache 3112 3172 64 61 1 : tunables 120 60 8 : slabdata 52 52 0
mnt_cache 37 60 192 20 1 : tunables 120 60 8 : slabdata 3 3 0
inode_cache 1085 1134 552 7 1 : tunables 54 27 8 : slabdata 162 162 0
dentry_cache 4510 12410 224 17 1 : tunables 120 60 8 : slabdata 730 730 404
filp 2970 4380 256 15 1 : tunables 120 60 8 : slabdata 292 292 30
names_cache 25 25 4096 1 1 : tunables 24 12 8 : slabdata 25 25 0
idr_layer_cache 75 77 528 7 1 : tunables 54 27 8 : slabdata 11 11 0
buffer_head 6061 22770 88 45 1 : tunables 120 60 8 : slabdata 506 506 0
mm_struct 319 455 1152 7 2 : tunables 24 12 8 : slabdata 65 65 0
vm_area_struct 18395 30513 184 21 1 : tunables 120 60 8 : slabdata 1453 1453 420
fs_cache 367 793 64 61 1 : tunables 120 60 8 : slabdata 13 13 0
files_cache 332 513 832 9 2 : tunables 54 27 8 : slabdata 57 57 0
signal_cache 379 549 448 9 1 : tunables 54 27 8 : slabdata 61 61 0
sighand_cache 350 420 2112 3 2 : tunables 24 12 8 : slabdata 140 140 6
task_struct 378 460 1744 4 2 : tunables 24 12 8 : slabdata 115 115 6
anon_vma 1098 2340 24 156 1 : tunables 120 60 8 : slabdata 15 15 0
shared_policy_node 0 0 56 69 1 : tunables 120 60 8 : slabdata 0 0 0
numa_policy 33 225 16 225 1 : tunables 120 60 8 : slabdata 1 1 0
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-131072 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0 : slabdata 0 0 0
size-65536 0 0 65536 1 16 : tunables 8 4 0 : slabdata 0 0 0
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0 : slabdata 0 0 0
size-32768 19 19 32768 1 8 : tunables 8 4 0 : slabdata 19 19 0
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0 : slabdata 0 0 0
size-16384 2 2 16384 1 4 : tunables 8 4 0 : slabdata 2 2 0
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0 : slabdata 0 0 0
size-8192 33 35 8192 1 2 : tunables 8 4 0 : slabdata 33 35 0
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 8 : slabdata 0 0 0
size-4096 146 146 4096 1 1 : tunables 24 12 8 : slabdata 146 146 0
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
size-2048 526 546 2048 2 1 : tunables 24 12 8 : slabdata 273 273 88
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
size-1024 5533 6100 1024 4 1 : tunables 54 27 8 : slabdata 1525 1525 189
size-512(DMA) 0 0 512 8 1 : tunables 54 27 8 : slabdata 0 0 0
size-512 409 480 512 8 1 : tunables 54 27 8 : slabdata 60 60 27
size-256(DMA) 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
size-256 97 105 256 15 1 : tunables 120 60 8 : slabdata 7 7 0
size-192(DMA) 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
size-192 1747 2240 192 20 1 : tunables 120 60 8 : slabdata 112 112 0
size-128(DMA) 0 0 128 31 1 : tunables 120 60 8 : slabdata 0 0 0
size-128 2858 4495 128 31 1 : tunables 120 60 8 : slabdata 145 145 30
size-64(DMA) 0 0 64 61 1 : tunables 120 60 8 : slabdata 0 0 0
size-64 4595 23302 64 61 1 : tunables 120 60 8 : slabdata 382 382 60
size-32(DMA) 0 0 32 119 1 : tunables 120 60 8 : slabdata 0 0 0
size-32 1888 2142 32 119 1 : tunables 120 60 8 : slabdata 18 18 0
kmem_cache 150 150 256 15 1 : tunables 120 60 8 : slabdata 10 10 0
#

-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| GPG: ID 1024/D3498839 Fingerprint 0D99A7FB206605D7 8B35FCDE05B18A5E |
| http://www.fi.muni.cz/~kas/ Czech Linux Homepage: http://www.linux.cz/ |
Whatever the Java applications and desktop dances may lead to, Unix will <
still be pushing the packets around for a quite a while. --Rob Pike <
William Lee Irwin III
2005-02-07 11:11:49 UTC
Permalink
Post by Jan Kasprzak
Well, with Linus' patch to fs/pipe.c the situation seems to
improve a bit, but some leak is still there (look at the "monthly" graph
at the above URL). The server has been running 2.6.11-rc2 + patch to fs/pipe.c
for last 8 days. I am letting it run for a few more days in case you want
some debugging info from a live system. I am attaching my /proc/meminfo
and /proc/slabinfo.
Congratulations. You have 688MB of bio's.


-- wli
Linus Torvalds
2005-02-07 15:38:12 UTC
Permalink
Post by Jan Kasprzak
The server has been running 2.6.11-rc2 + patch to fs/pipe.c
for last 8 days.
# cat /proc/meminfo
MemTotal: 4045168 kB
Cached: 2861648 kB
LowFree: 59396 kB
Mapped: 206540 kB
Slab: 861176 kB
Ok, pretty much everything there and accounted for: you've got 4GB of
memory, and it's pretty much all in cached/mapped/slab. So if something is
leaking, it's in one of those three.
Post by Jan Kasprzak
# cat /proc/slabinfo
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <batchcount> <limit> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
biovec-1 5506200 5506200 16 225 1 : tunables 120 60 8 : slabdata 24472 24472 240
bio 5506189 5506189 128 31 1 : tunables 120 60 8 : slabdata 177619 177619 180
Whee. You've got 5 _million_ bio's "active". Which account for about 750MB
of your 860MB of slab usage.

Jens, any ideas? Doesn't look like the "md sync_page_io bio leak", since
that would just lose one bio per md superblock read according to you (and
that's the only one I can find fixed since -rc2). I doubt Jan has caused
five million of those..

Jan - can you give Jens a bit of an idea of what drivers and/or schedulers
you're using?

Linus
Jan Kasprzak
2005-02-07 15:52:02 UTC
Permalink
Linus Torvalds wrote:
: Jan - can you give Jens a bit of an idea of what drivers and/or schedulers
: you're using?

I have a Tyan S2882 dual Opteron, network is on-board tg3,
there are 8 P-ATA HDDs hooked on a 3ware 7506-8 controller (no HW RAID
there, but the drives are partitioned and the partitions grouped to form
software RAID-0, 1, 5, and 10 volumes) - the main fileserving traffic
is on a RAID-5 volume, and /var is on a RAID-10 volume.

Filesystems are XFS for that RAID-5 volume, ext3 for the rest
of the system. I have compiled-in the following I/O schedulers (according
to my /var/log/dmesg :-)

io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered

I have not changed the scheduler by hand, so I suppose the anticipatory
is the default.

No X, just serial console. The server does FTP serving mostly
(ProFTPd with sendfile() compiled in), sending mail via qmail (ca.
100-200k mails a day), and bits of other work (rsync, Apache, ...).
Fedora core 3 with all relevant updates.

My fstab (physical devices only):
/dev/md0 / ext3 defaults 1 1
/dev/md1 /home ext3 defaults 1 2
/dev/md6 /var ext3 defaults 1 2
/dev/md4 /fastraid xfs noatime 1 3
/dev/md5 /export xfs noatime 1 4
/dev/sde4 swap swap pri=10 0 0
/dev/sdf4 swap swap pri=10 0 0
/dev/sdg4 swap swap pri=10 0 0
/dev/sdh4 swap swap pri=10 0 0

My mdstat:

Personalities : [raid0] [raid1] [raid5]
md6 : active raid0 md3[0] md2[1]
19550720 blocks 64k chunks

md1 : active raid1 sdd1[1] sdc1[0]
14659200 blocks [2/2] [UU]

md2 : active raid1 sdf1[1] sde1[0]
9775424 blocks [2/2] [UU]

md3 : active raid1 sdh1[1] sdg1[0]
9775424 blocks [2/2] [UU]

md4 : active raid0 sdh2[7] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1] sda2[0]
39133184 blocks 256k chunks

md5 : active raid5 sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1] sda3[0]
1572512256 blocks level 5, 256k chunk, algorithm 2 [8/8] [UUUUUUUU]

md0 : active raid1 sdb1[1] sda1[0]
14659200 blocks [2/2] [UU]

unused devices: <none>

Anything else you want to know? Thanks,

-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| GPG: ID 1024/D3498839 Fingerprint 0D99A7FB206605D7 8B35FCDE05B18A5E |
| http://www.fi.muni.cz/~kas/ Czech Linux Homepage: http://www.linux.cz/ |
Whatever the Java applications and desktop dances may lead to, Unix will <
still be pushing the packets around for a quite a while. --Rob Pike <
a***@home.kernel.dk
2005-02-07 16:38:15 UTC
Permalink
Post by Jan Kasprzak
: Jan - can you give Jens a bit of an idea of what drivers and/or schedulers
: you're using?
I have a Tyan S2882 dual Opteron, network is on-board tg3,
there are 8 P-ATA HDDs hooked on 3ware 7506-8 controller (no HW RAID
there, but the drives are partitioned and partition grouped to form
software RAID-0, 1, 5, and 10 volumes - the main fileserving traffic
is on a RAID-5 volume, and /var is on RAID-10 volume.
Filesystems are XFS for that RAID-5 volume, ext3 for the rest
of the system. I have compiled-in the following I/O schedulers (according
to my /var/log/dmesg :-)
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered
I have not changed the scheduler by hand, so I suppose the anticipatory
is the default.
No X, just serial console. The server does FTP serving mostly
(ProFTPd with sendfile() compiled in), sending mail via qmail (cca
100-200k mails a day), and bits of other work (rsync, Apache, ...).
Fedora core 3 with all relevant updates.
/dev/md0 / ext3 defaults 1 1
/dev/md1 /home ext3 defaults 1 2
/dev/md6 /var ext3 defaults 1 2
/dev/md4 /fastraid xfs noatime 1 3
/dev/md5 /export xfs noatime 1 4
/dev/sde4 swap swap pri=10 0 0
/dev/sdf4 swap swap pri=10 0 0
/dev/sdg4 swap swap pri=10 0 0
/dev/sdh4 swap swap pri=10 0 0
Personalities : [raid0] [raid1] [raid5]
md6 : active raid0 md3[0] md2[1]
19550720 blocks 64k chunks
md1 : active raid1 sdd1[1] sdc1[0]
14659200 blocks [2/2] [UU]
md2 : active raid1 sdf1[1] sde1[0]
9775424 blocks [2/2] [UU]
md3 : active raid1 sdh1[1] sdg1[0]
9775424 blocks [2/2] [UU]
md4 : active raid0 sdh2[7] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1] sda2[0]
39133184 blocks 256k chunks
md5 : active raid5 sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1] sda3[0]
1572512256 blocks level 5, 256k chunk, algorithm 2 [8/8] [UUUUUUUU]
md0 : active raid1 sdb1[1] sda1[0]
14659200 blocks [2/2] [UU]
My guess would be the clone change, if raid was not leaking before. I
cannot look up any patches at the moment, as I'm still at the hospital
taking care of my newborn baby and wife :)

But try and reverse the patches to fs/bio.c that mention corruption due to
bio_clone and bio->bi_io_vec and see if that cures it. If it does, I know
where to look. When did you notice this started to leak?

Jens
Jan Kasprzak
2005-02-07 17:35:43 UTC
Permalink
***@home.kernel.dk wrote:
: My guess would be the clone change, if raid was not leaking before. I
: cannot lookup any patches at the moment, as I'm still at the hospital
: taking care of my new born baby and wife :)

Congratulations!

: But try and reverse the patches to fs/bio.c that mention corruption due to
: bio_clone and bio->bi_io_vec and see if that cures it. If it does, I know
: where to look. When did you notice this started to leak?

I think I have been running 2.6.10-rc3 before. I've copied
the fs/bio.c from 2.6.10-rc3 to my 2.6.11-rc2 sources and booted the
resulting kernel. I hope it will not eat my filesystems :-) I will send
my /proc/slabinfo in a few days.

-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| GPG: ID 1024/D3498839 Fingerprint 0D99A7FB206605D7 8B35FCDE05B18A5E |
| http://www.fi.muni.cz/~kas/ Czech Linux Homepage: http://www.linux.cz/ |
Whatever the Java applications and desktop dances may lead to, Unix will <
still be pushing the packets around for a quite a while. --Rob Pike <
Jan Kasprzak
2005-02-07 21:10:17 UTC
Permalink
Jan Kasprzak wrote:
: I think I have been running 2.6.10-rc3 before. I've copied
: the fs/bio.c from 2.6.10-rc3 to my 2.6.11-rc2 sources and booted the
: resulting kernel. I hope it will not eat my filesystems :-) I will send
: my /proc/slabinfo in a few days.

Hmm, after 3h35min of uptime I have

biovec-1 92157 92250 16 225 1 : tunables 120 60 8 : slabdata 410 410 60
bio 92163 92163 128 31 1 : tunables 120 60 8 : slabdata 2973 2973 60

so it is probably still leaking - about half an hour ago it was

biovec-1 77685 77850 16 225 1 : tunables 120 60 8 : slabdata 346 346 0
bio 77841 77841 128 31 1 : tunables 120 60 8 : slabdata 2511 2511 180

-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| GPG: ID 1024/D3498839 Fingerprint 0D99A7FB206605D7 8B35FCDE05B18A5E |
| http://www.fi.muni.cz/~kas/ Czech Linux Homepage: http://www.linux.cz/ |
Whatever the Java applications and desktop dances may lead to, Unix will <
still be pushing the packets around for a quite a while. --Rob Pike <
Noel Maddy
2005-02-08 02:47:14 UTC
Permalink
Post by Linus Torvalds
Whee. You've got 5 _million_ bio's "active". Which account for about 750MB
of your 860MB of slab usage.
Same situation here, at different rates on two different platforms,
both running the same kernel build. Both show steadily increasing biovec-1.

uglybox was previously running Ingo's 2.6.11-rc2-RT-V0.7.36-03, and was
well over 3,000,000 bios after about a week of uptime. With only 512M of
memory, it was pretty sluggish.

Interesting that the 4-disk RAID5 seems to be growing about 4 times as
fast as the RAID1.

If there's anything else that could help, or patches you want me to try,
just ask.

Details:

=================================
#1: Soyo KT600 Platinum, Athlon 2500+, 512MB
2 SATA, 2 PATA (all on 8237)
RAID1 and RAID5
on-board tg3
================================
uname -a
Linux uglybox 2.6.11-rc3 #2 Thu Feb 3 16:19:44 EST 2005 i686 GNU/Linux
uptime
21:27:47 up 7:04, 4 users, load average: 1.06, 1.03, 1.02
grep '^bio' /proc/slabinfo
biovec-(256) 256 256 3072 2 2 : tunables 24 12 0 : slabdata 128 128 0
biovec-128 256 260 1536 5 2 : tunables 24 12 0 : slabdata 52 52 0
biovec-64 256 260 768 5 1 : tunables 54 27 0 : slabdata 52 52 0
biovec-16 256 260 192 20 1 : tunables 120 60 0 : slabdata 13 13 0
biovec-4 256 305 64 61 1 : tunables 120 60 0 : slabdata 5 5 0
biovec-1 64547 64636 16 226 1 : tunables 120 60 0 : slabdata 286 286 0
bio 64551 64599 64 61 1 : tunables 120 60 0 : slabdata 1059 1059 0
lsmod
Module Size Used by
ppp_deflate 4928 2
zlib_deflate 21144 1 ppp_deflate
bsd_comp 5376 0
ppp_async 9280 1
crc_ccitt 1728 1 ppp_async
ppp_generic 21396 7 ppp_deflate,bsd_comp,ppp_async
slhc 6720 1 ppp_generic
radeon 76224 1
ipv6 235456 27
pcspkr 3300 0
tg3 84932 0
ohci1394 31748 0
ieee1394 94196 1 ohci1394
snd_cmipci 30112 1
snd_pcm_oss 48480 0
snd_mixer_oss 17728 1 snd_pcm_oss
usbhid 31168 0
snd_pcm 83528 2 snd_cmipci,snd_pcm_oss
snd_page_alloc 7620 1 snd_pcm
snd_opl3_lib 9472 1 snd_cmipci
snd_timer 21828 2 snd_pcm,snd_opl3_lib
snd_hwdep 7456 1 snd_opl3_lib
snd_mpu401_uart 6528 1 snd_cmipci
snd_rawmidi 20704 1 snd_mpu401_uart
snd_seq_device 7116 2 snd_opl3_lib,snd_rawmidi
snd 48996 12 snd_cmipci,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_opl3_lib,snd_timer,snd_hwdep,snd_mpu401_uart,snd_rawmidi,snd_seq_device
soundcore 7648 1 snd
uhci_hcd 29968 0
ehci_hcd 29000 0
usbcore 106744 4 usbhid,uhci_hcd,ehci_hcd
dm_mod 52796 0
it87 23900 0
eeprom 5776 0
lm90 11044 0
i2c_sensor 2944 3 it87,eeprom,lm90
i2c_isa 1728 0
i2c_viapro 6412 0
i2c_core 18512 6 it87,eeprom,lm90,i2c_sensor,i2c_isa,i2c_viapro
lspci
0000:00:00.0 Host bridge: VIA Technologies, Inc. VT8377 [KT400/KT600 AGP] Host Bridge (rev 80)
0000:00:01.0 PCI bridge: VIA Technologies, Inc. VT8237 PCI Bridge
0000:00:07.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5705 Gigabit Ethernet (rev 03)
0000:00:0d.0 FireWire (IEEE 1394): VIA Technologies, Inc. IEEE 1394 Host Controller (rev 46)
0000:00:0e.0 Multimedia audio controller: C-Media Electronics Inc CM8738 (rev 10)
0000:00:0f.0 RAID bus controller: VIA Technologies, Inc. VIA VT6420 SATA RAID Controller (rev 80)
0000:00:0f.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)
0000:00:10.0 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
0000:00:10.1 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
0000:00:10.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
0000:00:10.3 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
0000:00:10.4 USB Controller: VIA Technologies, Inc. USB 2.0 (rev 86)
0000:00:11.0 ISA bridge: VIA Technologies, Inc. VT8237 ISA bridge [K8T800 South]
0000:00:13.0 RAID bus controller: Silicon Image, Inc. (formerly CMD Technology Inc) SiI 3112 [SATALink/SATARaid] Serial ATA Controller (rev 02)
0000:01:00.0 VGA compatible controller: ATI Technologies Inc Radeon RV200 QW [Radeon 7500]
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md1 : active raid1 sdb1[0] sda1[1]
489856 blocks [2/2] [UU]

md4 : active raid5 sdb3[2] sda3[3] hdc3[1] hda3[0]
8795136 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md5 : active raid5 sdb5[2] sda5[3] hdc5[1] hda5[0]
14650752 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md6 : active raid5 sdb6[2] sda6[3] hdc6[1] hda6[0]
43953408 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md7 : active raid5 sdb7[2] sda7[3] hdc7[1] hda7[0]
164103552 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 hdc1[1] hda1[0]
489856 blocks [2/2] [UU]

unused devices: <none>

================================
#2: Soyo KT400 Platinum, Athlon 2500+, 512MB
2 PATA (one on 8235, one on HPT372)
RAID1
on-board via rhine
================================
uname -a
Linux lepke 2.6.11-rc3 #2 Thu Feb 3 16:19:44 EST 2005 i686 GNU/Linux
uptime
21:30:13 up 7:16, 1 user, load average: 1.00, 1.00, 1.23
grep '^bio' /proc/slabinfo
biovec-(256) 256 256 3072 2 2 : tunables 24 12 0 : slabdata 128 128 0
biovec-128 256 260 1536 5 2 : tunables 24 12 0 : slabdata 52 52 0
biovec-64 256 260 768 5 1 : tunables 54 27 0 : slabdata 52 52 0
biovec-16 256 260 192 20 1 : tunables 120 60 0 : slabdata 13 13 0
biovec-4 256 305 64 61 1 : tunables 120 60 0 : slabdata 5 5 0
biovec-1 14926 15142 16 226 1 : tunables 120 60 0 : slabdata 67 67 0
bio 14923 15006 64 61 1 : tunables 120 60 0 : slabdata 246 246 0
Module Size Used by
ipv6 235456 17
pcspkr 3300 0
tuner 21220 0
ub 15324 0
usbhid 31168 0
bttv 146064 0
video_buf 17540 1 bttv
firmware_class 7936 1 bttv
i2c_algo_bit 8840 1 bttv
v4l2_common 4736 1 bttv
btcx_risc 3912 1 bttv
tveeprom 11544 1 bttv
videodev 7488 1 bttv
uhci_hcd 29968 0
ehci_hcd 29000 0
usbcore 106744 5 ub,usbhid,uhci_hcd,ehci_hcd
via_ircc 23380 0
irda 121784 1 via_ircc
crc_ccitt 1728 1 irda
via_rhine 19844 0
mii 4032 1 via_rhine
dm_mod 52796 0
snd_bt87x 12360 0
snd_cmipci 30112 0
snd_opl3_lib 9472 1 snd_cmipci
snd_hwdep 7456 1 snd_opl3_lib
snd_mpu401_uart 6528 1 snd_cmipci
snd_cs46xx 85064 0
snd_rawmidi 20704 2 snd_mpu401_uart,snd_cs46xx
snd_seq_device 7116 2 snd_opl3_lib,snd_rawmidi
snd_ac97_codec 73976 1 snd_cs46xx
snd_pcm_oss 48480 0
snd_mixer_oss 17728 1 snd_pcm_oss
snd_pcm 83528 5 snd_bt87x,snd_cmipci,snd_cs46xx,snd_ac97_codec,snd_pcm_oss
snd_timer 21828 2 snd_opl3_lib,snd_pcm
snd 48996 13 snd_bt87x,snd_cmipci,snd_opl3_lib,snd_hwdep,snd_mpu401_uart,snd_cs46xx,snd_rawmidi,snd_seq_device,snd_ac97_codec,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer
soundcore 7648 1 snd
snd_page_alloc 7620 3 snd_bt87x,snd_cs46xx,snd_pcm
lm90 11044 0
eeprom 5776 0
it87 23900 0
i2c_sensor 2944 3 lm90,eeprom,it87
i2c_isa 1728 0
i2c_viapro 6412 0
i2c_core 18512 10 tuner,bttv,i2c_algo_bit,tveeprom,lm90,eeprom,it87,i2c_sensor,i2c_isa,i2c_viapro
0000:00:00.0 Host bridge: VIA Technologies, Inc. VT8377 [KT400/KT600 AGP] Host Bridge
0000:00:01.0 PCI bridge: VIA Technologies, Inc. VT8235 PCI Bridge
0000:00:09.0 Multimedia audio controller: Cirrus Logic CS 4614/22/24 [CrystalClear SoundFusion Audio Accelerator] (rev 01)
0000:00:0b.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
0000:00:0b.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
0000:00:0e.0 Multimedia audio controller: C-Media Electronics Inc CM8738 (rev 10)
0000:00:0f.0 RAID bus controller: Triones Technologies, Inc. HPT366/368/370/370A/372 (rev 05)
0000:00:10.0 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 80)
0000:00:10.1 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 80)
0000:00:10.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 80)
0000:00:10.3 USB Controller: VIA Technologies, Inc. USB 2.0 (rev 82)
0000:00:11.0 ISA bridge: VIA Technologies, Inc. VT8235 ISA Bridge
0000:00:11.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)
0000:00:12.0 Ethernet controller: VIA Technologies, Inc. VT6102 [Rhine-II] (rev 74)
0000:01:00.0 VGA compatible controller: ATI Technologies Inc Radeon R200 QM [Radeon 9100]
Personalities : [raid0] [raid1] [raid5]
md4 : active raid1 hda1[0] hde1[1]
995904 blocks [2/2] [UU]

md5 : active raid1 hda2[0] hde2[1]
995904 blocks [2/2] [UU]

md6 : active raid1 hda7[0] hde7[1]
5855552 blocks [2/2] [UU]

md7 : active raid0 hda8[0] hde8[1]
136496128 blocks 32k chunks

unused devices: <none>
--
Educators cannot hope to instill a desire for life-long learning in
students until they themselves are life-long learners.
-- cvd6262, on slashdot.org
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
Noel Maddy <***@zhtwn.com>
Parag Warudkar
2005-02-16 04:00:13 UTC
Permalink
I am running -rc3 on my AMD64 laptop and I noticed it becomes sluggish after
use, mainly due to growing swap use. It has 768M of RAM and a Gig of swap.
After following this thread, I started monitoring /proc/slabinfo. It seems
size-64 is continuously growing, and doing a compile run seems to make it grow
noticeably faster. After a day's uptime the size-64 line in /proc/slabinfo looks
like

size-64 7216543 7216544 64 61 1 : tunables 120 60 0 :
slabdata 118304 118304 0

Since this doesn't seem to be bio, I think we have another slab leak somewhere.
The box recently went OOM during a gcc compile run after I killed the swap.

Output from free, the OOM killer, and /proc/slabinfo is down below.

free output -
total used free shared buffers cached
Mem: 767996 758120 9876 0 5276 130360
-/+ buffers/cache: 622484 145512
Swap: 1052248 67668 984580

OOM Killer Output
oom-killer: gfp_mask=0x1d2
DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1
cpu 0 cold: low 0, high 2, batch 1
Normal per-cpu:
cpu 0 hot: low 32, high 96, batch 16
cpu 0 cold: low 0, high 32, batch 16
HighMem per-cpu: empty

Free pages: 7260kB (0kB HighMem)
Active:62385 inactive:850 dirty:0 writeback:0 unstable:0 free:1815 slab:120136
mapped:62334 pagetables:2110
DMA free:3076kB min:72kB low:88kB high:108kB active:3328kB inactive:0kB
present:16384kB pages_scanned:4446 all_unreclaimable? yes
lowmem_reserve[]: 0 751 751
Normal free:4184kB min:3468kB low:4332kB high:5200kB active:246212kB
inactive:3400kB present:769472kB pages_scanned:3834 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB
1*2048kB 0*4096kB = 3076kB
Normal: 170*4kB 10*8kB 2*16kB 0*32kB 1*64kB 0*128kB 1*256kB 2*512kB 0*1024kB
1*2048kB 0*4096kB = 4184kB
HighMem: empty
Swap cache: add 310423, delete 310423, find 74707/105490, race 0+0
Free swap = 0kB
Total swap = 0kB
Out of Memory: Killed process 4898 (klauncher).
oom-killer: gfp_mask=0x1d2
DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1
cpu 0 cold: low 0, high 2, batch 1
Normal per-cpu:
cpu 0 hot: low 32, high 96, batch 16
cpu 0 cold: low 0, high 32, batch 16
HighMem per-cpu: empty

Free pages: 7020kB (0kB HighMem)
Active:62308 inactive:648 dirty:0 writeback:0 unstable:0 free:1755 slab:120439
mapped:62199 pagetables:2020
DMA free:3076kB min:72kB low:88kB high:108kB active:3336kB inactive:0kB
present:16384kB pages_scanned:7087 all_unreclaimable? yes
lowmem_reserve[]: 0 751 751
Normal free:3944kB min:3468kB low:4332kB high:5200kB active:245896kB
inactive:2592kB present:769472kB pages_scanned:3861 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB
1*2048kB 0*4096kB = 3076kB
Normal: 112*4kB 9*8kB 0*16kB 1*32kB 1*64kB 0*128kB 1*256kB 2*512kB 0*1024kB
1*2048kB 0*4096kB = 3944kB
HighMem: empty
Swap cache: add 310423, delete 310423, find 74707/105490, race 0+0
Free swap = 0kB
Total swap = 0kB
Out of Memory: Killed process 4918 (kwin).

/proc/slabinfo output

ipx_sock 0 0 896 4 1 : tunables 54 27 0 :
slabdata 0 0 0
scsi_cmd_cache 3 7 576 7 1 : tunables 54 27 0 :
slabdata 1 1 0
ip_fib_alias 10 119 32 119 1 : tunables 120 60 0 :
slabdata 1 1 0
ip_fib_hash 10 61 64 61 1 : tunables 120 60 0 :
slabdata 1 1 0
sgpool-128 32 32 4096 1 1 : tunables 24 12 0 :
slabdata 32 32 0
sgpool-64 32 32 2048 2 1 : tunables 24 12 0 :
slabdata 16 16 0
sgpool-32 32 32 1024 4 1 : tunables 54 27 0 :
slabdata 8 8 0
sgpool-16 32 32 512 8 1 : tunables 54 27 0 :
slabdata 4 4 0
sgpool-8 32 45 256 15 1 : tunables 120 60 0 :
slabdata 3 3 0
ext3_inode_cache 2805 3063 1224 3 1 : tunables 24 12 0 :
slabdata 1021 1021 0
ext3_xattr 0 0 88 45 1 : tunables 120 60 0 :
slabdata 0 0 0
journal_handle 16 156 24 156 1 : tunables 120 60 0 :
slabdata 1 1 0
journal_head 49 180 88 45 1 : tunables 120 60 0 :
slabdata 4 4 0
revoke_table 6 225 16 225 1 : tunables 120 60 0 :
slabdata 1 1 0
revoke_record 0 0 32 119 1 : tunables 120 60 0 :
slabdata 0 0 0
unix_sock 170 175 1088 7 2 : tunables 24 12 0 :
slabdata 25 25 0
ip_mrt_cache 0 0 128 31 1 : tunables 120 60 0 :
slabdata 0 0 0
tcp_tw_bucket 1 20 192 20 1 : tunables 120 60 0 :
slabdata 1 1 0
tcp_bind_bucket 4 119 32 119 1 : tunables 120 60 0 :
slabdata 1 1 0
tcp_open_request 0 0 128 31 1 : tunables 120 60 0 :
slabdata 0 0 0
inet_peer_cache 0 0 128 31 1 : tunables 120 60 0 :
slabdata 0 0 0
secpath_cache 0 0 192 20 1 : tunables 120 60 0 :
slabdata 0 0 0
xfrm_dst_cache 0 0 384 10 1 : tunables 54 27 0 :
slabdata 0 0 0
ip_dst_cache 14 20 384 10 1 : tunables 54 27 0 :
slabdata 2 2 0
arp_cache 2 12 320 12 1 : tunables 54 27 0 :
slabdata 1 1 0
raw_sock 2 7 1088 7 2 : tunables 24 12 0 :
slabdata 1 1 0
udp_sock 7 7 1088 7 2 : tunables 24 12 0 :
slabdata 1 1 0
tcp_sock 4 4 1920 2 1 : tunables 24 12 0 :
slabdata 2 2 0
flow_cache 0 0 128 31 1 : tunables 120 60 0 :
slabdata 0 0 0
cfq_ioc_pool 0 0 48 81 1 : tunables 120 60 0 :
slabdata 0 0 0
cfq_pool 0 0 176 22 1 : tunables 120 60 0 :
slabdata 0 0 0
crq_pool 0 0 104 38 1 : tunables 120 60 0 :
slabdata 0 0 0
deadline_drq 0 0 96 41 1 : tunables 120 60 0 :
slabdata 0 0 0
as_arq 32 70 112 35 1 : tunables 120 60 0 :
slabdata 2 2 0
mqueue_inode_cache 1 3 1216 3 1 : tunables 24 12 0 :
slabdata 1 1 0
isofs_inode_cache 0 0 872 4 1 : tunables 54 27 0 :
slabdata 0 0 0
hugetlbfs_inode_cache 1 9 824 9 2 : tunables 54 27
0 : slabdata 1 1 0
ext2_inode_cache 0 0 1024 4 1 : tunables 54 27 0 :
slabdata 0 0 0
ext2_xattr 0 0 88 45 1 : tunables 120 60 0 :
slabdata 0 0 0
dnotify_cache 75 96 40 96 1 : tunables 120 60 0 :
slabdata 1 1 0
dquot 0 0 320 12 1 : tunables 54 27 0 :
slabdata 0 0 0
eventpoll_pwq 1 54 72 54 1 : tunables 120 60 0 :
slabdata 1 1 0
eventpoll_epi 1 20 192 20 1 : tunables 120 60 0 :
slabdata 1 1 0
kioctx 0 0 512 7 1 : tunables 54 27 0 :
slabdata 0 0 0
kiocb 0 0 256 15 1 : tunables 120 60 0 :
slabdata 0 0 0
fasync_cache 2 156 24 156 1 : tunables 120 60 0 :
slabdata 1 1 0
shmem_inode_cache 302 308 1056 7 2 : tunables 24 12 0 :
slabdata 44 44 0
posix_timers_cache 0 0 264 15 1 : tunables 54 27 0 :
slabdata 0 0 0
uid_cache 5 61 64 61 1 : tunables 120 60 0 :
slabdata 1 1 0
blkdev_ioc 84 90 88 45 1 : tunables 120 60 0 :
slabdata 2 2 0
blkdev_queue 20 27 880 9 2 : tunables 54 27 0 :
slabdata 3 3 0
blkdev_requests 32 32 248 16 1 : tunables 120 60 0 :
slabdata 2 2 0
biovec-(256) 256 256 4096 1 1 : tunables 24 12 0 :
slabdata 256 256 0
biovec-128 256 256 2048 2 1 : tunables 24 12 0 :
slabdata 128 128 0
biovec-64 256 256 1024 4 1 : tunables 54 27 0 :
slabdata 64 64 0
biovec-16 256 270 256 15 1 : tunables 120 60 0 :
slabdata 18 18 0
biovec-4 256 305 64 61 1 : tunables 120 60 0 :
slabdata 5 5 0
biovec-1 272 450 16 225 1 : tunables 120 60 0 :
slabdata 2 2 0
bio 272 279 128 31 1 : tunables 120 60 0 :
slabdata 9 9 0
file_lock_cache 7 40 200 20 1 : tunables 120 60 0 :
slabdata 2 2 0
sock_inode_cache 192 192 960 4 1 : tunables 54 27 0 :
slabdata 48 48 0
skbuff_head_cache 45 72 320 12 1 : tunables 54 27 0 :
slabdata 6 6 0
sock 6 8 896 4 1 : tunables 54 27 0 :
slabdata 2 2 0
proc_inode_cache 50 128 856 4 1 : tunables 54 27 0 :
slabdata 32 32 0
sigqueue 23 23 168 23 1 : tunables 120 60 0 :
slabdata 1 1 0
radix_tree_node 2668 2856 536 7 1 : tunables 54 27 0 :
slabdata 408 408 0
bdev_cache 9 9 1152 3 1 : tunables 24 12 0 :
slabdata 3 3 0
sysfs_dir_cache 2437 2440 64 61 1 : tunables 120 60 0 :
slabdata 40 40 0
mnt_cache 26 40 192 20 1 : tunables 120 60 0 :
slabdata 2 2 0
inode_cache 778 918 824 9 2 : tunables 54 27 0 :
slabdata 102 102 0
dentry_cache 4320 8895 264 15 1 : tunables 54 27 0 :
slabdata 593 593 0
filp 1488 1488 320 12 1 : tunables 54 27 0 :
slabdata 124 124 0
names_cache 11 11 4096 1 1 : tunables 24 12 0 :
slabdata 11 11 0
idr_layer_cache 76 77 528 7 1 : tunables 54 27 0 :
slabdata 11 11 0
buffer_head 2360 2385 88 45 1 : tunables 120 60 0 :
slabdata 53 53 0
mm_struct 65 65 1472 5 2 : tunables 24 12 0 :
slabdata 13 13 0
vm_area_struct 5628 5632 176 22 1 : tunables 120 60 0 :
slabdata 256 256 0
fs_cache 76 122 64 61 1 : tunables 120 60 0 :
slabdata 2 2 0
files_cache 64 64 896 4 1 : tunables 54 27 0 :
slabdata 16 16 0
signal_cache 96 119 512 7 1 : tunables 54 27 0 :
slabdata 17 17 0
sighand_cache 78 78 2112 3 2 : tunables 24 12 0 :
slabdata 26 26 0
task_struct 96 96 1936 2 1 : tunables 24 12 0 :
slabdata 48 48 0
anon_vma 1464 1464 64 61 1 : tunables 120 60 0 :
slabdata 24 24 0
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0 :
slabdata 0 0 0
size-131072 0 0 131072 1 32 : tunables 8 4 0 :
slabdata 0 0 0
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0 :
slabdata 0 0 0
size-65536 3 3 65536 1 16 : tunables 8 4 0 :
slabdata 3 3 0
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0 :
slabdata 0 0 0
size-32768 4 4 32768 1 8 : tunables 8 4 0 :
slabdata 4 4 0
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0 :
slabdata 0 0 0
size-16384 4 4 16384 1 4 : tunables 8 4 0 :
slabdata 4 4 0
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0 :
slabdata 0 0 0
size-8192 31 31 8192 1 2 : tunables 8 4 0 :
slabdata 31 31 0
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 0 :
slabdata 0 0 0
size-4096 56 56 4096 1 1 : tunables 24 12 0 :
slabdata 56 56 0
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 0 :
slabdata 0 0 0
size-2048 123 126 2048 2 1 : tunables 24 12 0 :
slabdata 63 63 0
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 0 :
slabdata 0 0 0
size-1024 252 252 1024 4 1 : tunables 54 27 0 :
slabdata 63 63 0
size-512(DMA) 0 0 512 8 1 : tunables 54 27 0 :
slabdata 0 0 0
size-512 421 448 512 8 1 : tunables 54 27 0 :
slabdata 56 56 0
size-256(DMA) 0 0 256 15 1 : tunables 120 60 0 :
slabdata 0 0 0
size-256 108 120 256 15 1 : tunables 120 60 0 :
slabdata 8 8 0
size-192(DMA) 0 0 192 20 1 : tunables 120 60 0 :
slabdata 0 0 0
size-192 1204 1220 192 20 1 : tunables 120 60 0 :
slabdata 61 61 0
size-128(DMA) 0 0 128 31 1 : tunables 120 60 0 :
slabdata 0 0 0
size-128 1247 1426 128 31 1 : tunables 120 60 0 :
slabdata 46 46 0
size-64(DMA) 0 0 64 61 1 : tunables 120 60 0 :
slabdata 0 0 0
size-64 7265953 7265954 64 61 1 : tunables 120 60 0 :
slabdata 119114 119114 0
size-32(DMA) 0 0 32 119 1 : tunables 120 60 0 :
slabdata 0 0 0
size-32 1071 1071 32 119 1 : tunables 120 60 0 :
slabdata 9 9 0
kmem_cache 120 120 256 15 1 : tunables 120 60 0 :
slabdata 8 8 0

Parag
Post by Noel Maddy
Post by Linus Torvalds
Whee. You've got 5 _million_ bios "active", which account for about
750MB of your 860MB of slab usage.
Same situation here, at different rates on two different platforms,
both running the same kernel build. Both show a steadily increasing biovec-1 count.
uglybox was previously running Ingo's 2.6.11-rc2-RT-V0.7.36-03, and was
well over 3,000,000 bios after about a week of uptime. With only 512M of
memory, it was pretty sluggish.
Interesting that the 4-disk RAID5 seems to be growing about 4 times as
fast as the RAID1.
If there's anything else that could help, or patches you want me to try,
just ask.
=================================
#1: Soyo KT600 Platinum, Athlon 2500+, 512MB
2 SATA, 2 PATA (all on 8237)
RAID1 and RAID5
on-board tg3
================================
Post by Linus Torvalds
uname -a
Linux uglybox 2.6.11-rc3 #2 Thu Feb 3 16:19:44 EST 2005 i686 GNU/Linux
Post by Linus Torvalds
uptime
21:27:47 up 7:04, 4 users, load average: 1.06, 1.03, 1.02
Post by Linus Torvalds
grep '^bio' /proc/slabinfo
biovec-(256)    256    256   3072    2    2 : tunables   24   12    0 : slabdata    128    128      0
biovec-128      256    260   1536    5    2 : tunables   24   12    0 : slabdata     52     52      0
biovec-64       256    260    768    5    1 : tunables   54   27    0 : slabdata
tunables  120   60    0 : slabdata     13     13      0
biovec-4        256    305     64   61    1 : tunables  120   60    0 : slabdata      5      5      0
biovec-1      64547  64636     16  226    1 : tunables  120   60    0 : slabdata    286    286      0
bio           64551  64599     64   61    1 : tunables  120   60    0 : slabdata   1059   1059      0
Post by Linus Torvalds
lsmod
Module Size Used by
ppp_deflate 4928 2
zlib_deflate 21144 1 ppp_deflate
bsd_comp 5376 0
ppp_async 9280 1
crc_ccitt 1728 1 ppp_async
ppp_generic 21396 7 ppp_deflate,bsd_comp,ppp_async
slhc 6720 1 ppp_generic
radeon 76224 1
ipv6 235456 27
pcspkr 3300 0
tg3 84932 0
ohci1394 31748 0
ieee1394 94196 1 ohci1394
snd_cmipci 30112 1
snd_pcm_oss 48480 0
snd_mixer_oss 17728 1 snd_pcm_oss
usbhid 31168 0
snd_pcm 83528 2 snd_cmipci,snd_pcm_oss
snd_page_alloc 7620 1 snd_pcm
snd_opl3_lib 9472 1 snd_cmipci
snd_timer 21828 2 snd_pcm,snd_opl3_lib
snd_hwdep 7456 1 snd_opl3_lib
snd_mpu401_uart 6528 1 snd_cmipci
snd_rawmidi 20704 1 snd_mpu401_uart
snd_seq_device 7116 2 snd_opl3_lib,snd_rawmidi
snd 48996 12 snd_cmipci,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_opl3_lib,snd_timer,snd_hwdep,snd_mpu401_uart,snd_rawmidi,snd_seq_device
soundcore 7648 1 snd
uhci_hcd 29968 0
ehci_hcd 29000 0
usbcore 106744 4 usbhid,uhci_hcd,ehci_hcd
dm_mod 52796 0
it87 23900 0
eeprom 5776 0
lm90 11044 0
i2c_sensor 2944 3 it87,eeprom,lm90
i2c_isa 1728 0
i2c_viapro 6412 0
i2c_core 18512 6
it87,eeprom,lm90,i2c_sensor,i2c_isa,i2c_viapro
Post by Linus Torvalds
lspci
0000:00:00.0 Host bridge: VIA Technologies, Inc. VT8377 [KT400/KT600 AGP] Host Bridge (rev 80)
0000:00:01.0 PCI bridge: VIA Technologies, Inc. VT8237 PCI Bridge
0000:00:07.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5705 Gigabit Ethernet (rev 03)
0000:00:0d.0 FireWire (IEEE 1394): VIA Technologies, Inc. IEEE 1394 Host Controller (rev 46)
0000:00:0e.0 Multimedia audio controller: C-Media Electronics Inc CM8738 (rev 10)
0000:00:0f.0 RAID bus controller: VIA Technologies, Inc. VIA VT6420 SATA RAID Controller (rev 80)
0000:00:0f.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)
0000:00:10.0 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
0000:00:10.1 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
0000:00:10.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
0000:00:10.3 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
0000:00:10.4 USB Controller: VIA Technologies, Inc. USB 2.0 (rev 86)
0000:00:11.0 ISA bridge: VIA Technologies, Inc. VT8237 ISA bridge [K8T800 South]
0000:00:13.0 RAID bus controller: Silicon Image, Inc. (formerly CMD Technology Inc) SiI 3112 [SATALink/SATARaid] Serial ATA Controller (rev 02)
0000:01:00.0 VGA compatible controller: ATI Technologies Inc Radeon RV200 QW [Radeon 7500]
Post by Linus Torvalds
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md1 : active raid1 sdb1[0] sda1[1]
489856 blocks [2/2] [UU]
md4 : active raid5 sdb3[2] sda3[3] hdc3[1] hda3[0]
8795136 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
md5 : active raid5 sdb5[2] sda5[3] hdc5[1] hda5[0]
14650752 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
md6 : active raid5 sdb6[2] sda6[3] hdc6[1] hda6[0]
43953408 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
md7 : active raid5 sdb7[2] sda7[3] hdc7[1] hda7[0]
164103552 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
md0 : active raid1 hdc1[1] hda1[0]
489856 blocks [2/2] [UU]
unused devices: <none>
================================
#2: Soyo KT400 Platinum, Athlon 2500+, 512MB
2 PATA (one on 8235, one on HPT372)
RAID1
on-board via rhine
================================
Post by Linus Torvalds
uname -a
Linux lepke 2.6.11-rc3 #2 Thu Feb 3 16:19:44 EST 2005 i686 GNU/Linux
Post by Linus Torvalds
uptime
21:30:13 up 7:16, 1 user, load average: 1.00, 1.00, 1.23
Post by Linus Torvalds
grep '^bio' /proc/slabinfo
biovec-(256)    256    256   3072    2    2 : tunables   24   12    0 : slabdata    128    128      0
biovec-128      256    260   1536    5    2 : tunables   24   12    0 : slabdata     52     52      0
biovec-64       256    260    768    5    1 : tunables   54   27    0 : slabdata
tunables  120   60    0 : slabdata     13     13      0
biovec-4        256    305     64   61    1 : tunables  120   60    0 : slabdata      5      5      0
biovec-1      14926  15142     16  226    1 : tunables  120   60    0 : slabdata     67     67      0
bio           14923  15006     64   61    1 : tunables  120   60    0 : slabdata    246    246      0
Module Size Used by
ipv6 235456 17
pcspkr 3300 0
tuner 21220 0
ub 15324 0
usbhid 31168 0
bttv 146064 0
video_buf 17540 1 bttv
firmware_class 7936 1 bttv
i2c_algo_bit 8840 1 bttv
v4l2_common 4736 1 bttv
btcx_risc 3912 1 bttv
tveeprom 11544 1 bttv
videodev 7488 1 bttv
uhci_hcd 29968 0
ehci_hcd 29000 0
usbcore 106744 5 ub,usbhid,uhci_hcd,ehci_hcd
via_ircc 23380 0
irda 121784 1 via_ircc
crc_ccitt 1728 1 irda
via_rhine 19844 0
mii 4032 1 via_rhine
dm_mod 52796 0
snd_bt87x 12360 0
snd_cmipci 30112 0
snd_opl3_lib 9472 1 snd_cmipci
snd_hwdep 7456 1 snd_opl3_lib
snd_mpu401_uart 6528 1 snd_cmipci
snd_cs46xx 85064 0
snd_rawmidi 20704 2 snd_mpu401_uart,snd_cs46xx
snd_seq_device 7116 2 snd_opl3_lib,snd_rawmidi
snd_ac97_codec 73976 1 snd_cs46xx
snd_pcm_oss 48480 0
snd_mixer_oss 17728 1 snd_pcm_oss
snd_pcm 83528 5 snd_bt87x,snd_cmipci,snd_cs46xx,snd_ac97_codec,snd_pcm_oss
snd_timer 21828 2 snd_opl3_lib,snd_pcm
snd 48996 13 snd_bt87x,snd_cmipci,snd_opl3_lib,snd_hwdep,snd_mpu401_uart,snd_cs46xx,snd_rawmidi,snd_seq_device,snd_ac97_codec,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer
soundcore 7648 1 snd
snd_page_alloc 7620 3 snd_bt87x,snd_cs46xx,snd_pcm
lm90 11044 0
eeprom 5776 0
it87 23900 0
i2c_sensor 2944 3 lm90,eeprom,it87
i2c_isa 1728 0
i2c_viapro 6412 0
i2c_core 18512 10 tuner,bttv,i2c_algo_bit,tveeprom,lm90,eeprom,it87,i2c_sensor,i2c_isa,i2c_viapro
0000:00:00.0 Host bridge: VIA Technologies, Inc. VT8377 [KT400/KT600 AGP] Host Bridge
0000:00:01.0 PCI bridge: VIA Technologies, Inc. VT8235 PCI Bridge
0000:00:09.0 Multimedia audio controller: Cirrus Logic CS 4614/22/24 [CrystalClear SoundFusion Audio Accelerator] (rev 01)
0000:00:0b.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
0000:00:0b.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
0000:00:0e.0 Multimedia audio controller: C-Media Electronics Inc CM8738 (rev 10)
0000:00:0f.0 RAID bus controller: Triones Technologies, Inc. HPT366/368/370/370A/372 (rev 05)
0000:00:10.0 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 80)
0000:00:10.1 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 80)
0000:00:10.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 80)
0000:00:10.3 USB Controller: VIA Technologies, Inc. USB 2.0 (rev 82)
0000:00:11.0 ISA bridge: VIA Technologies, Inc. VT8235 ISA Bridge
0000:00:11.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)
0000:00:12.0 Ethernet controller: VIA Technologies, Inc. VT6102 [Rhine-II] (rev 74)
0000:01:00.0 VGA compatible controller: ATI Technologies Inc Radeon R200 QM [Radeon 9100]
Personalities : [raid0] [raid1] [raid5]
md4 : active raid1 hda1[0] hde1[1]
995904 blocks [2/2] [UU]
md5 : active raid1 hda2[0] hde2[1]
995904 blocks [2/2] [UU]
md6 : active raid1 hda7[0] hde7[1]
5855552 blocks [2/2] [UU]
md7 : active raid0 hda8[0] hde8[1]
136496128 blocks 32k chunks
unused devices: <none>
Andrew Morton
2005-02-16 05:12:10 UTC
Permalink
Post by Parag Warudkar
I am running -rc3 on my AMD64 laptop and I noticed it becomes sluggish after
some use, mainly due to growing swap usage. It has 768M of RAM and a gig of swap.
After following this thread, I started monitoring /proc/slabinfo. It seems
size-64 is continuously growing, and doing a compile run seems to make it grow
noticeably faster. After a day's uptime the size-64 line in /proc/slabinfo looks
like
slabdata 118304 118304 0
Plenty of moisture there.

Could you please use this patch? Make sure that you enable
CONFIG_FRAME_POINTER (might not be needed for __builtin_return_address(0),
but let's be sure). Also enable CONFIG_DEBUG_SLAB.



From: Manfred Spraul <***@colorfullife.com>

With the patch applied,

echo "size-4096 0 0 0" > /proc/slabinfo

walks the objects in the size-4096 slab, printing out the calling address
of whoever allocated that object.

It is for leak detection.


diff -puN mm/slab.c~slab-leak-detector mm/slab.c
--- 25/mm/slab.c~slab-leak-detector 2005-02-15 21:06:44.000000000 -0800
+++ 25-akpm/mm/slab.c 2005-02-15 21:06:44.000000000 -0800
@@ -2116,6 +2116,15 @@ cache_alloc_debugcheck_after(kmem_cache_
*dbg_redzone1(cachep, objp) = RED_ACTIVE;
*dbg_redzone2(cachep, objp) = RED_ACTIVE;
}
+ {
+ int objnr;
+ struct slab *slabp;
+
+ slabp = GET_PAGE_SLAB(virt_to_page(objp));
+
+ objnr = (objp - slabp->s_mem) / cachep->objsize;
+ slab_bufctl(slabp)[objnr] = (unsigned long)caller;
+ }
objp += obj_dbghead(cachep);
if (cachep->ctor && cachep->flags & SLAB_POISON) {
unsigned long ctor_flags = SLAB_CTOR_CONSTRUCTOR;
@@ -2179,12 +2188,14 @@ static void free_block(kmem_cache_t *cac
objnr = (objp - slabp->s_mem) / cachep->objsize;
check_slabp(cachep, slabp);
#if DEBUG
+#if 0
if (slab_bufctl(slabp)[objnr] != BUFCTL_FREE) {
printk(KERN_ERR "slab: double free detected in cache '%s', objp %p.\n",
cachep->name, objp);
BUG();
}
#endif
+#endif
slab_bufctl(slabp)[objnr] = slabp->free;
slabp->free = objnr;
STATS_DEC_ACTIVE(cachep);
@@ -2998,6 +3009,29 @@ struct seq_operations slabinfo_op = {
.show = s_show,
};

+static void do_dump_slabp(kmem_cache_t *cachep)
+{
+#if DEBUG
+ struct list_head *q;
+
+ check_irq_on();
+ spin_lock_irq(&cachep->spinlock);
+ list_for_each(q,&cachep->lists.slabs_full) {
+ struct slab *slabp;
+ int i;
+ slabp = list_entry(q, struct slab, list);
+ for (i = 0; i < cachep->num; i++) {
+ unsigned long sym = slab_bufctl(slabp)[i];
+
+ printk("obj %p/%d: %p", slabp, i, (void *)sym);
+ print_symbol(" <%s>", sym);
+ printk("\n");
+ }
+ }
+ spin_unlock_irq(&cachep->spinlock);
+#endif
+}
+
#define MAX_SLABINFO_WRITE 128
/**
* slabinfo_write - Tuning for the slab allocator
@@ -3038,9 +3072,11 @@ ssize_t slabinfo_write(struct file *file
batchcount < 1 ||
batchcount > limit ||
shared < 0) {
- res = -EINVAL;
+ do_dump_slabp(cachep);
+ res = 0;
} else {
- res = do_tune_cpucache(cachep, limit, batchcount, shared);
+ res = do_tune_cpucache(cachep, limit,
+ batchcount, shared);
}
break;
}
_
Parag Warudkar
2005-02-16 06:07:06 UTC
Permalink
Post by Andrew Morton
Plenty of moisture there.
Could you please use this patch? Make sure that you enable
CONFIG_FRAME_POINTER (might not be needed for __builtin_return_address(0),
but let's be sure). Also enable CONFIG_DEBUG_SLAB.
Will try that out. For now I tried -rc4 and a couple of other things: removing
the nvidia module doesn't make any difference, but with ndiswrapper removed and
no networking the slab growth stops. With the 8139too driver and networking the
growth is still there, though noticeably slower than with ndiswrapper. With
8139too plus some network activity the slab count even seems to shrink sometimes.

Seems like either an ndiswrapper or a networking-related leak. Will report the
results with Manfred's patch tomorrow.

Thanks
Parag
Andrew Morton
2005-02-16 23:52:55 UTC
Permalink
Post by Andrew Morton
Plenty of moisture there.
Could you please use this patch? Make sure that you enable
CONFIG_FRAME_POINTER (might not be needed for __builtin_return_address(0),
but let's be sure). Also enable CONFIG_DEBUG_SLAB.
Will try that out. For now I tried -rc4 and a couple of other things: removing
the nvidia module doesn't make any difference, but with ndiswrapper removed and
no networking the slab growth stops. With the 8139too driver and networking the
growth is still there, though noticeably slower than with ndiswrapper. With
8139too plus some network activity the slab count even seems to shrink sometimes.
OK.
Seems like either an ndiswrapper or a networking-related leak. Will report the
results with Manfred's patch tomorrow.
So it's probably an ndiswrapper bug?
Parag Warudkar
2005-02-17 13:00:27 UTC
Permalink
Post by Andrew Morton
So it's probably an ndiswrapper bug?
Andrew,
It looks like it is a kernel bug triggered by NdisWrapper. Without
NdisWrapper, and with just 8139too plus some light network activity the
size-64 grew from ~ 1100 to 4500 overnight. Is this normal? I will keep it
running to see where it goes.

A question - is it safe to assume it is a kmalloc based leak? (I am thinking
of tracking it down by using kprobes to insert a probe into __kmalloc and
record the stack to see what is causing so many allocations.)

Thanks
Parag
Linus Torvalds
2005-02-17 18:18:03 UTC
Permalink
Post by Parag Warudkar
A question - is it safe to assume it is a kmalloc based leak? (I am thinking
of tracking it down by using kprobes to insert a probe into __kmalloc and
record the stack to see what is causing so many allocations.)
It's definitely kmalloc-based, but you may not catch it in __kmalloc. The
"kmalloc()" function is actually an inline function which has some magic
compile-time code that statically determines when the size is constant and
can be turned into a direct call to "kmem_cache_alloc()" with the proper
cache descriptor.

So you'd need to either instrument kmem_cache_alloc() (and trigger on the
proper slab descriptor) or you would need to modify the kmalloc()
definition in <linux/slab.h> to not do the constant size optimization, at
which point you can instrument just __kmalloc() and avoid some of the
overhead.
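
To make that dispatch concrete, here is a small user-space sketch of the
pattern just described; generic_alloc(), cache_alloc() and fallback_alloc()
are made-up stand-ins for kmalloc(), kmem_cache_alloc() and __kmalloc(), not
kernel code:

/*
 * Sketch of the compile-time dispatch: a constant size is routed straight
 * to cache_alloc() by the compiler, so a probe placed only on
 * fallback_alloc() never sees those calls.
 */
#include <stdio.h>
#include <stdlib.h>

static void *fallback_alloc(size_t size)        /* plays the role of __kmalloc() */
{
        printf("fallback_alloc(%zu)\n", size);
        return malloc(size);
}

static void *cache_alloc(size_t size)           /* plays the role of kmem_cache_alloc() */
{
        printf("cache_alloc(%zu)\n", size);
        return malloc(size);
}

/* Constant sizes resolve to cache_alloc(); everything else falls back. */
#define generic_alloc(size) \
        (__builtin_constant_p(size) ? cache_alloc(size) : fallback_alloc(size))

int main(void)
{
        size_t n = (size_t)(rand() % 64) + 1;
        void *a = generic_alloc(64);    /* constant: never reaches fallback_alloc() */
        void *b = generic_alloc(n);     /* variable: goes through fallback_alloc() */

        free(a);
        free(b);
        return 0;
}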

Linus
Badari Pulavarty
2005-02-18 01:38:35 UTC
Permalink
Post by Parag Warudkar
Post by Andrew Morton
So it's probably an ndiswrapper bug?
Andrew,
It looks like it is a kernel bug triggered by NdisWrapper. Without
NdisWrapper, and with just 8139too plus some light network activity the
size-64 grew from ~ 1100 to 4500 overnight. Is this normal? I will keep it
running to see where it goes.
A question - is it safe to assume it is a kmalloc based leak? (I am thinking
of tracking it down by using kprobes to insert a probe into __kmalloc and
record the stack to see what is causing so many allocations.)
Last time I debugged something like this, I ended up adding dump_stack()
in kmem_cache_alloc() for the specific slab.

If you are really interested, you can try to get the following jprobe
module working (you need to teach it about the kmem_cache_t structure to
get it to compile, and export the kallsyms_lookup_name() symbol, etc.).
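
The attached module itself is not reproduced in this archive; what follows is
only a rough, untested sketch of what such a jprobe on __kmalloc() might look
like on a 2.6-era kernel (as noted above, kallsyms_lookup_name() is not
exported to modules there, so that part has to be wired up by hand):

/*
 * Illustrative sketch, not the original attachment: hook __kmalloc(),
 * dump a stack trace for allocations small enough to land in the
 * size-64 (or smaller) generic caches, then hand control back with
 * jprobe_return().
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/kprobes.h>
#include <linux/kallsyms.h>

/* Must mirror __kmalloc()'s signature; runs with its arguments. */
static void *jkmalloc_probe(size_t size, int flags)
{
        if (size <= 64)
                dump_stack();           /* who is doing all these allocations? */
        jprobe_return();                /* return to the real __kmalloc() */
        return NULL;                    /* never reached */
}

static struct jprobe kmalloc_jp = {
        .entry = (kprobe_opcode_t *)jkmalloc_probe,
};

static int __init kmalloc_trace_init(void)
{
        kmalloc_jp.kp.addr =
                (kprobe_opcode_t *)kallsyms_lookup_name("__kmalloc");
        if (!kmalloc_jp.kp.addr)
                return -ENOENT;
        return register_jprobe(&kmalloc_jp);
}

static void __exit kmalloc_trace_exit(void)
{
        unregister_jprobe(&kmalloc_jp);
}

module_init(kmalloc_trace_init);
module_exit(kmalloc_trace_exit);
MODULE_LICENSE("GPL");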

Thanks,
Badari
Parag Warudkar
2005-02-21 04:57:40 UTC
Permalink
Post by Parag Warudkar
Post by Andrew Morton
So it's probably an ndiswrapper bug?
Andrew,
It looks like it is a kernel bug triggered by NdisWrapper. Without
NdisWrapper, and with just 8139too plus some light network activity the
size-64 grew from ~ 1100 to 4500 overnight. Is this normal? I will keep
it running to see where it goes.
[OT]

Didn't want to keep this hanging - it turned out to be a strange ndiswrapper
bug. It seems that the other OS in question allows the following without a
leak ;)

ptr = Allocate(...);
ptr = Allocate(...);
:

...repeated a zillion times, without ever fearing that 'ptr' will leak.
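
As a minimal user-space illustration of why that pattern leaks in C
(Allocate() is just a malloc() stand-in here, not ndiswrapper's actual API):

#include <stdlib.h>

/* Stand-in for the Allocate(...) call in the pseudo-code above. */
static void *Allocate(size_t n)
{
        return malloc(n);
}

int main(void)
{
        void *ptr;

        ptr = Allocate(64);     /* first allocation */
        ptr = Allocate(64);     /* previous block is now unreachable: leaked */

        free(ptr);              /* only the last allocation can still be freed */
        return 0;
}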

I sent a fix to the ndiswrapper-general mailing list on SourceForge, in case
anyone is using ndiswrapper and having a similar problem.

Parag

Parag Warudkar
2005-02-16 23:31:23 UTC
Permalink
Post by Andrew Morton
echo "size-4096 0 0 0" > /proc/slabinfo
Is there a reason X86_64 doesn't have CONFIG_FRAME_POINTER anywhere in
the .config? I tried -rc4 with Manfred's patch and with CONFIG_DEBUG_SLAB and
CONFIG_DEBUG.

I get the following output from
echo "size-64 0 0 0" > /proc/slabinfo

obj ffff81002fe80000/0: 00000000000008a8 <0x8a8>
obj ffff81002fe80000/1: 00000000000008a8 <0x8a8>
obj ffff81002fe80000/2: 00000000000008a8 <0x8a8>
: 3
: 4
: :
obj ffff81002fe80000/43: 00000000000008a8 <0x8a8>
obj ffff81002fe80000/44: 00000000000008a8 <0x8a8>

How do I know what is at ffff81002fe80000? I tried the normal tricks (gdb
-c /proc/kcore vmlinux, objdump -d, etc.) but none of them lists this
address.

I am attaching my config.

Parag
Andrew Morton
2005-02-16 23:51:42 UTC
Permalink
Post by Parag Warudkar
Post by Andrew Morton
echo "size-4096 0 0 0" > /proc/slabinfo
Is there a reason X86_64 doesn't have CONFIG_FRAME_POINTER anywhere in
the .config?
No good reason, I suspect.
Post by Parag Warudkar
I tried -rc4 with Manfred's patch and with CONFIG_DEBUG_SLAB and
CONFIG_DEBUG.
Thanks.
Post by Parag Warudkar
I get the following output from
echo "size-64 0 0 0" > /proc/slabinfo
obj ffff81002fe80000/0: 00000000000008a8 <0x8a8>
obj ffff81002fe80000/1: 00000000000008a8 <0x8a8>
obj ffff81002fe80000/2: 00000000000008a8 <0x8a8>
: 3
: 4
obj ffff81002fe80000/43: 00000000000008a8 <0x8a8>
obj ffff81002fe80000/44: 00000000000008a8 <0x8a8>
How do I know what is at ffff81002fe80000? I tried the normal tricks (gdb
-c /proc/kcore vmlinux, objdump -d etc.) but none of the places list this
address.
ffff81002fe80000 is the address of the slab object. 00000000000008a8 is
supposed to be the caller's text address. It appears that
__builtin_return_address(0) is returning junk. Perhaps due to
-fomit-frame-pointer.
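
For reference, a tiny user-space demo of the builtin in question; level 0 is
the immediate caller's text address, while deeper levels generally depend on
frame pointers, so how -fomit-frame-pointer affects it varies by architecture:

#include <stdio.h>

static void __attribute__((noinline)) report_caller(void)
{
        /* Prints an address inside main()'s text. */
        printf("called from %p\n", __builtin_return_address(0));
}

int main(void)
{
        report_caller();
        return 0;
}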
Parag Warudkar
2005-02-17 01:19:09 UTC
Permalink
ffff81002fe80000 is the address of the slab object. 00000000000008a8 is
supposed to be the caller's text address. It appears that
__builtin_return_address(0) is returning junk. Perhaps due to
-fomit-frame-pointer.
I tried manually removing -fomit-frame-pointer from the Makefile and adding
-fno-omit-frame-pointer, but with the same results - junk return addresses.
Probably an X86_64 issue.
So it's probably an ndiswrapper bug?
I looked at the ndiswrapper mailing lists and found this explanation for the
same issue of growing size-64 with ndiswrapper:
----------------------------------
"It looks like the problem is kernel-version related, not ndiswrapper.
ndiswrapper just uses some API that starts the memory leak, but the
problem is indeed in the kernel itself. Versions from 2.6.10 up to
2.6.11-rc3 have this problem afaik. Haven't tested rc4, but maybe this one
doesn't have the problem anymore, we will see."
----------------------------------

I tested -rc4 and it has the problem too. Moreover, with the plain old 8139too
driver the slab still continues to grow, albeit slowly. So there is a reason
to suspect a kernel leak as well. I will try binary searching...

Parag