Bug 53501 - Duplicated MemTotal with different values
Summary: Duplicated MemTotal with different values
Status: RESOLVED CODE_FIX
Alias: None
Product: Memory Management
Classification: Unclassified
Component: Other
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: Andrew Morton
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2013-02-08 09:39 UTC by sworddragon2
Modified: 2013-09-08 15:15 UTC (History)
0 users

See Also:
Kernel Version: Ubuntu 3.8.0-4.8-generic 3.8.0-rc6
Tree: Mainline
Regression: No


Attachments

Description sworddragon2 2013-02-08 09:39:26 UTC
The installed memory on my system is 16 GiB. /proc/meminfo is showing me "MemTotal:       16435048 kB" but /sys/devices/system/node/node0/meminfo is showing me "Node 0 MemTotal:       16776380 kB".

My suggestion: MemTotal in /proc/meminfo should be 16776380 kB too. The old value of 16435048 kB could have its own key "MemAvailable".
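The mismatch is easy to quantify. A minimal sketch, using the two lines quoted in the report (on a live Linux system one would read /proc/meminfo and /sys/devices/system/node/node0/meminfo instead):

```python
def kb(line):
    """Extract the kB value from a 'MemTotal: ... kB' style line."""
    return int(line.split()[-2])

# Values copied verbatim from the bug description.
global_total = kb("MemTotal:       16435048 kB")
node0_total = kb("Node 0 MemTotal:       16776380 kB")

# Pages the global figure is missing relative to the node figure.
print(node0_total - global_total)  # → 341332 (kB)
```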
Comment 1 Andrew Morton 2013-02-13 00:51:09 UTC
(switched to email.  Please respond via emailed reply-to-all, not via the
bugzilla web interface).

On Fri,  8 Feb 2013 09:39:27 +0000 (UTC)
bugzilla-daemon@bugzilla.kernel.org wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=53501
> 
>            Summary: Duplicated MemTotal with different values
>            Product: Memory Management
>            Version: 2.5
>     Kernel Version: Ubuntu 3.8.0-4.8-generic 3.8.0-rc6
>           Platform: All
>         OS/Version: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: normal
>           Priority: P1
>          Component: Other
>         AssignedTo: akpm@linux-foundation.org
>         ReportedBy: sworddragon2@aol.com
>         Regression: No
> 
> 
> The installed memory on my system is 16 GiB. /proc/meminfo is showing me
> "MemTotal:       16435048 kB" but /sys/devices/system/node/node0/meminfo is
> showing me "Node 0 MemTotal:       16776380 kB".
> 
> My suggestion: MemTotal in /proc/meminfo should be 16776380 kB too. The old
> value of 16435048 kB could have its own key "MemAvailable".

hm, mine does that too.  A discrepancy between `totalram_pages' and
NODE_DATA(0)->node_present_pages.

I don't know what the reasons are for that but yes, one would expect
the per-node MemTotals to sum up to the global one.
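The expectation Andrew states can be written as a consistency check. A sketch with the report's numbers (a single-node machine), showing that the expected invariant fails:

```python
# Per-node MemTotal figures, here just node0 from the report.
node_totals_kb = [16776380]   # /sys/devices/system/node/node0/meminfo
global_total_kb = 16435048    # MemTotal from /proc/meminfo

# One would expect the per-node MemTotals to sum to the global one.
consistent = sum(node_totals_kb) == global_total_kb
print(consistent)  # → False: this is the reported discrepancy
```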
Comment 2 David Rientjes 2013-02-13 01:45:46 UTC
On Tue, 12 Feb 2013, Andrew Morton wrote:

> >            Summary: Duplicated MemTotal with different values
> >            Product: Memory Management
> >            Version: 2.5
> >     Kernel Version: Ubuntu 3.8.0-4.8-generic 3.8.0-rc6
> >           Platform: All
> >         OS/Version: Linux
> >               Tree: Mainline
> >             Status: NEW
> >           Severity: normal
> >           Priority: P1
> >          Component: Other
> >         AssignedTo: akpm@linux-foundation.org
> >         ReportedBy: sworddragon2@aol.com
> >         Regression: No
> > 
> > 
> > The installed memory on my system is 16 GiB. /proc/meminfo is showing me
> > "MemTotal:       16435048 kB" but /sys/devices/system/node/node0/meminfo is
> > showing me "Node 0 MemTotal:       16776380 kB".
> > 
> > My suggestion: MemTotal in /proc/meminfo should be 16776380 kB too. The old
> > value of 16435048 kB could have its own key "MemAvailable".
> 
> hm, mine does that too.  A discrepancy between `totalram_pages' and
> NODE_DATA(0)->node_present_pages.
> 
> I don't know what the reasons are for that but yes, one would expect
> the per-node MemTotals to sum up to the global one.
> 

I'd suspect it has something to do with 9feedc9d831e ("mm: introduce new 
field "managed_pages" to struct zone") and 3.8 would be the first kernel 
release with this change.  Is it possible to try 3.7 or, better yet, with 
this patch reverted?

If neither of these are the case, or you aren't comfortable building and 
booting a custom kernel, please send along your /proc/zoneinfo.
Comment 3 Andrew Morton 2013-02-13 04:00:46 UTC
On Tue, 12 Feb 2013 17:45:42 -0800 (PST) David Rientjes <rientjes@google.com> wrote:

> > > The installed memory on my system is 16 GiB. /proc/meminfo is showing me
> > > "MemTotal:       16435048 kB" but /sys/devices/system/node/node0/meminfo
> is
> > > showing me "Node 0 MemTotal:       16776380 kB".
> > > 
> > > My suggestion: MemTotal in /proc/meminfo should be 16776380 kB too. The
> old
> > > value of 16435048 kB could have its own key "MemAvailable".
> > 
> > hm, mine does that too.  A discrepancy between `totalram_pages' and
> > NODE_DATA(0)->node_present_pages.
> > 
> > I don't know what the reasons are for that but yes, one would expect
> > the per-node MemTotals to sum up to the global one.
> > 
> 
> I'd suspect it has something to do with 9feedc9d831e ("mm: introduce new 
> field "managed_pages" to struct zone") and 3.8 would be the first kernel 
> release with this change.  Is it possible to try 3.7 or, better yet, with 
> this patch reverted?

My desktop machine at google is inconsistent, as is the 2.6.32-based
machine, so the problem obviously predates 9feedc9d831e.
Comment 4 sworddragon2 2013-02-13 04:08:32 UTC
There is another thing that I'm wondering about. MemTotal of 16776380 kB on my system is missing ~0.82 MB to be exactly 16 GiB (I'm assuming this is reserved by the BIOS). Is it the correct behavior for MemTotal to show the physically installed memory minus the reserved BIOS area? I'm asking this because I was not able to find any information in /proc about how much memory is reserved by the BIOS.
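The arithmetic behind the "~0.82 MB" figure, as a quick sketch (16 GiB expressed in kB, minus the node MemTotal from this report):

```python
installed_kb = 16 * 1024 * 1024   # 16 GiB in kB
node0_total_kb = 16776380         # Node 0 MemTotal from this report

missing_kb = installed_kb - node0_total_kb
missing_mib = missing_kb / 1024

# 836 kB, i.e. ~0.82 MiB, presumably firmware/BIOS-reserved.
print(missing_kb, round(missing_mib, 2))
```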
Comment 5 David Rientjes 2013-02-14 03:19:12 UTC
On Tue, 12 Feb 2013, Andrew Morton wrote:

> > > > The installed memory on my system is 16 GiB. /proc/meminfo is showing
> me
> > > > "MemTotal:       16435048 kB" but
> /sys/devices/system/node/node0/meminfo is
> > > > showing me "Node 0 MemTotal:       16776380 kB".
> > > > 
> > > > My suggestion: MemTotal in /proc/meminfo should be 16776380 kB too. The
> old
> > > > value of 16435048 kB could have its own key "MemAvailable".
> > > 
> > > hm, mine does that too.  A discrepancy between `totalram_pages' and
> > > NODE_DATA(0)->node_present_pages.
> > > 
> > > I don't know what the reasons are for that but yes, one would expect
> > > the per-node MemTotals to sum up to the global one.
> > > 
> > 
> > I'd suspect it has something to do with 9feedc9d831e ("mm: introduce new 
> > field "managed_pages" to struct zone") and 3.8 would be the first kernel 
> > release with this change.  Is it possible to try 3.7 or, better yet, with 
> > this patch reverted?
> 
> My desktop machine at google in inconsistent, as is the 2.6.32-based
> machine, so it obviously predates 9feedc9d831e.
> 

Hmm, ok.  The question is which one is right: the per-node MemTotal is the 
amount of present RAM, the spanned range minus holes, and the system 
MemTotal is the amount of pages released to the buddy allocator by 
bootmem and discounts not only the memory holes but also reserved pages.  
Should they both be the amount of RAM present or the amount of unreserved 
RAM present?
Comment 6 Anonymous Emailer 2013-02-14 04:02:00 UTC
Reply-To: liuj97@gmail.com

On 02/14/2013 11:19 AM, David Rientjes wrote:
> On Tue, 12 Feb 2013, Andrew Morton wrote:
> 
>>>>> The installed memory on my system is 16 GiB. /proc/meminfo is showing me
>>>>> "MemTotal:       16435048 kB" but /sys/devices/system/node/node0/meminfo
>>>>> is
>>>>> showing me "Node 0 MemTotal:       16776380 kB".
>>>>>
>>>>> My suggestion: MemTotal in /proc/meminfo should be 16776380 kB too. The
>>>>> old
>>>>> value of 16435048 kB could have its own key "MemAvailable".
>>>>
>>>> hm, mine does that too.  A discrepancy between `totalram_pages' and
>>>> NODE_DATA(0)->node_present_pages.
>>>>
>>>> I don't know what the reasons are for that but yes, one would expect
>>>> the per-node MemTotals to sum up to the global one.
>>>>
>>>
>>> I'd suspect it has something to do with 9feedc9d831e ("mm: introduce new 
>>> field "managed_pages" to struct zone") and 3.8 would be the first kernel 
>>> release with this change.  Is it possible to try 3.7 or, better yet, with 
>>> this patch reverted?
>>
>> My desktop machine at google in inconsistent, as is the 2.6.32-based
>> machine, so it obviously predates 9feedc9d831e.
>>
> 
> Hmm, ok.  The question is which one is right: the per-node MemTotal is the 
> amount of present RAM, the spanned range minus holes, and the system 
> MemTotal is the amount of pages released to the buddy allocator by 
> bootmem and discounts not only the memory holes but also reserved pages.  
> Should they both be the amount of RAM present or the amount of unreserved 
> RAM present?
> 
Hi David,
	We have worked out a patch set to address this issue. The first two
patches have been merged into v3.8, and another two patches are queued in
Andrew's mm tree for v3.9.
	The patch set introduces a new field named managed_pages into struct
zone to distinguish between pages present in a zone and pages managed by the
buddy system. So
zone->present_pages = zone->spanned_pages - pages_in_hole;
zone->managed_pages = pages_managed_by_buddy_system_in_the_zone;
	We have also added a field named "managed" into /proc/zoneinfo, but
haven't touched /proc/meminfo and /sys/devices/system/node/nodex/meminfo yet.
If preferred, we could work out another patch to enhance these two files
as suggested above.
	Regards!
	Gerry
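A toy model of the zone accounting Gerry describes (plain Python with hypothetical page counts, not kernel code): the per-node meminfo was derived from present pages, while /proc/meminfo counts only pages handed to the buddy allocator, which is exactly where the two MemTotals diverge.

```python
# Hypothetical counts for one zone.
spanned_pages = 4_194_304   # pages covered by the zone's address range
pages_in_hole = 10_000      # pages falling into physical-address holes
reserved_pages = 85_333     # pages kept by bootmem, never given to buddy

# The relations from the patch set:
present_pages = spanned_pages - pages_in_hole
managed_pages = present_pages - reserved_pages

# The reserved pages account for the gap between the two MemTotals.
print(present_pages - managed_pages)
```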
Comment 7 David Rientjes 2013-02-15 00:26:05 UTC
On Thu, 14 Feb 2013, Jiang Liu wrote:

> > Hmm, ok.  The question is which one is right: the per-node MemTotal is the 
> > amount of present RAM, the spanned range minus holes, and the system 
> > MemTotal is the amount of pages released to the buddy allocator by 
> > bootmem and discounts not only the memory holes but also reserved pages.  
> > Should they both be the amount of RAM present or the amount of unreserved 
> > RAM present?
> > 
> Hi David,
>       We have worked out a patch set to address this issue. The first two
> patches have been merged into v3.8, and another two patches are queued in
> Andrew's mm tree for v3.9.
>       The patch set introduces a new field named managed_pages into struct
> zone to distinguish between pages present in a zone and pages managed by the
> buddy system. So
> zone->present_pages = zone->spanned_pages - pages_in_hole;
> zone->managed_pages = pages_managed_by_buddy_system_in_the_zone;
>       We have also added a field named "managed" into /proc/zoneinfo, but
> haven't touch /proc/meminfo and /sys/devices/system/node/nodex/meminfo yet.
> If preferred, we could work out another patch to enhance these two files
> as suggested above.

I'm glad this is a known issue that you're working on, but my question 
still stands: if MemTotal is going to be consistent throughout 
/proc/meminfo and /sys/devices/system/node/nodeX/meminfo, which is 
correct?  The present RAM minus holes or the amount available to the buddy 
allocator not including reserved memory?
Comment 8 Anonymous Emailer 2013-02-20 05:21:19 UTC
Reply-To: simon.jeons@gmail.com

Hi David,
On 02/15/2013 08:26 AM, David Rientjes wrote:
> On Thu, 14 Feb 2013, Jiang Liu wrote:
>
>>> Hmm, ok.  The question is which one is right: the per-node MemTotal is the
>>> amount of present RAM, the spanned range minus holes, and the system
>>> MemTotal is the amount of pages released to the buddy allocator by
>>> bootmem and discounts not only the memory holes but also reserved pages.
>>> Should they both be the amount of RAM present or the amount of unreserved
>>> RAM present?
>>>
>> Hi David,
>>      We have worked out a patch set to address this issue. The first two
>> patches have been merged into v3.8, and another two patches are queued in
>> Andrew's mm tree for v3.9.
>>      The patch set introduces a new field named managed_pages into struct
>> zone to distinguish between pages present in a zone and pages managed by the
>> buddy system. So
>> zone->present_pages = zone->spanned_pages - pages_in_hole;
>> zone->managed_pages = pages_managed_by_buddy_system_in_the_zone;
>>      We have also added a field named "managed" into /proc/zoneinfo, but
>> haven't touch /proc/meminfo and /sys/devices/system/node/nodex/meminfo yet.
>> If preferred, we could work out another patch to enhance these two files
>> as suggested above.
> I'm glad this is a known issue that you're working on, but my question
> still stands: if MemTotal is going to be consistent throughout
> /proc/meminfo and /sys/devices/system/node/nodeX/meminfo, which is
> correct?  The present RAM minus holes or the amount available to the buddy
> allocator not including reserved memory?

What confuses me is why we have both /proc/meminfo and /proc/vmstat at 
the same time; both are used to monitor memory subsystem state. What's 
the underlying reason?

Comment 9 David Rientjes 2013-02-20 07:09:49 UTC
On Wed, 20 Feb 2013, Simon Jeons wrote:

> What I confuse is why have /proc/meminfo and /proc/vmstat at the same time,
> they both use to monitor memory subsystem states. What's the root reason?
> 

This has nothing to do with this thread, but /proc/vmstat actually does 
not include the MemTotal value being discussed in this thread that 
/proc/meminfo does.  /proc/meminfo is typically the interface used by 
applications, probably mostly for historical purposes since both are 
present when procfs is configured and mounted, but also to avoid 
determining the native page size.  There's no implicit userspace API 
exported by /proc/vmstat.
Comment 10 Anonymous Emailer 2013-03-02 02:22:03 UTC
Reply-To: simon.jeons@gmail.com

On 02/20/2013 03:09 PM, David Rientjes wrote:
> On Wed, 20 Feb 2013, Simon Jeons wrote:
>
>> What I confuse is why have /proc/meminfo and /proc/vmstat at the same time,
>> they both use to monitor memory subsystem states. What's the root reason?
>>
> This has nothing to do with this thread, but /proc/vmstat actually does
> not include the MemTotal value being discussed in this thread that
> /proc/meminfo does.  /proc/meminfo is typically the interface used by
> applications, probably mostly for historical purposes since both are

Do you mean /proc/vmstat is not used by applications? For example:
sar -B 1
pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s 
pgsteal/s %vmeff
I think these values are read from /proc/vmstat.

> present when procfs is configured and mounted, but also to avoid
> determining the native page size.  There's no implicit userspace API
> exported by /proc/vmstat.
Comment 11 David Rientjes 2013-03-04 11:18:22 UTC
On Sat, 2 Mar 2013, Simon Jeons wrote:

> > This has nothing to do with this thread, but /proc/vmstat actually does
> > not include the MemTotal value being discussed in this thread that
> > /proc/meminfo does.  /proc/meminfo is typically the interface used by
> > applications, probably mostly for historical purposes since both are
> 
> Do you mean /proc/vmstat is not used by  applications.
> sar -B 1
> pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s
> pgsteal/s
> %vmeff
> I think they are read from /proc/vmstat
> 

Yes, there is userspace code that parses /proc/vmstat.
Comment 12 Anonymous Emailer 2013-03-04 23:39:41 UTC
Reply-To: simon.jeons@gmail.com

On 03/04/2013 07:18 PM, David Rientjes wrote:
> On Sat, 2 Mar 2013, Simon Jeons wrote:
>
>>> This has nothing to do with this thread, but /proc/vmstat actually does
>>> not include the MemTotal value being discussed in this thread that
>>> /proc/meminfo does.  /proc/meminfo is typically the interface used by
>>> applications, probably mostly for historical purposes since both are
>> Do you mean /proc/vmstat is not used by  applications.
>> sar -B 1
>> pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s
>> pgsteal/s
>> %vmeff
>> I think they are read from /proc/vmstat
>>
> Yes, there is userspace code that parses /proc/vmstat.

Then why do we need both /proc/meminfo and /proc/vmstat?
Comment 13 David Rientjes 2013-03-05 21:53:05 UTC
On Tue, 5 Mar 2013, Simon Jeons wrote:

> Then why both need /proc/meminfo and /proc/vmstat?
> 

Because we do not break userspace.
Comment 14 sworddragon2 2013-09-08 15:15:39 UTC
The bug doesn't appear anymore on the Linux Kernel 3.11.
