Discussion:
[gem5-dev] Review Request 3502: Split the hit latency into tag lookup latency and RAM access latency
Sophiane SENNI
2016-06-15 14:26:48 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

Review request for Default.


Repository: gem5


Description
-------

Split the hit latency into tag lookup latency and RAM access latency.

If the cache access mode is parallel (the "sequential_access" parameter set to "False"), the tags and the data RAMs are accessed in parallel, so the hit latency is the maximum of the tag lookup latency and the RAM access latency. If the access mode is sequential ("sequential_access" set to "True"), the tags and the data RAM are accessed one after the other, so the hit latency is the sum of the tag lookup latency and the RAM access latency.
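The timing rule above can be sketched in a few lines of Python. This is an illustrative model of the stated rule, not the actual gem5 code; the parameter names are taken from the review discussion.

```python
def hit_latency(lookup_latency, ram_latency, sequential_access):
    """Hit latency in cycles, following the rule in the description.

    lookup_latency: tag lookup latency; ram_latency: RAM (data) access
    latency; sequential_access: the cache parameter described above.
    """
    if sequential_access:
        # Sequential mode: tags first, then the data RAM, so the
        # two latencies add up.
        return lookup_latency + ram_latency
    # Parallel mode: both arrays are accessed at once, so the hit
    # latency is bounded by the slower of the two.
    return max(lookup_latency, ram_latency)
```

For example, with a 2-cycle tag lookup and a 3-cycle RAM access, a parallel cache hits in 3 cycles while a sequential one hits in 5.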


Diffs
-----

configs/common/Caches.py 629fe6e6c781
src/mem/cache/BaseCache.py 629fe6e6c781
src/mem/cache/base.hh 629fe6e6c781
src/mem/cache/base.cc 629fe6e6c781
src/mem/cache/tags/Tags.py 629fe6e6c781
src/mem/cache/tags/base.hh 629fe6e6c781
src/mem/cache/tags/base.cc 629fe6e6c781
src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781
src/mem/cache/tags/fa_lru.cc 629fe6e6c781
src/mem/cache/tags/fa_lru.hh 629fe6e6c781

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI
Sophiane SENNI
2016-06-15 14:34:50 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated June 15, 2016, 2:34 p.m.)


Review request for Default.


Summary (updated)
-----------------

cache: Split the hit latency into tag lookup latency and RAM access latency


Repository: gem5


Description (updated)
-------

Changeset 10875:b498767cb7d8
---------------------------
cache: Split the hit latency into tag lookup latency and RAM access latency


Diffs (updated)
-----

configs/common/Caches.py 629fe6e6c781
src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781
src/mem/cache/tags/fa_lru.hh 629fe6e6c781
src/mem/cache/tags/fa_lru.cc 629fe6e6c781
src/mem/cache/BaseCache.py 629fe6e6c781
src/mem/cache/base.hh 629fe6e6c781
src/mem/cache/base.cc 629fe6e6c781
src/mem/cache/tags/Tags.py 629fe6e6c781
src/mem/cache/tags/base.hh 629fe6e6c781
src/mem/cache/tags/base.cc 629fe6e6c781

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI
Sophiane SENNI
2016-06-15 14:43:13 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated June 15, 2016, 2:43 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
-------

Changeset 10875:b498767cb7d8
---------------------------
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to "False"), the tags and the data RAMs are accessed in parallel, so the hit latency is the maximum of the tag lookup latency and the RAM access latency. If the access mode is sequential ("sequential_access" set to "True"), the tags and the data RAM are accessed one after the other, so the hit latency is the sum of the tag lookup latency and the RAM access latency.


Diffs
-----

configs/common/Caches.py 629fe6e6c781
src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781
src/mem/cache/tags/fa_lru.hh 629fe6e6c781
src/mem/cache/tags/fa_lru.cc 629fe6e6c781
src/mem/cache/BaseCache.py 629fe6e6c781
src/mem/cache/base.hh 629fe6e6c781
src/mem/cache/base.cc 629fe6e6c781
src/mem/cache/tags/Tags.py 629fe6e6c781
src/mem/cache/tags/base.hh 629fe6e6c781
src/mem/cache/tags/base.cc 629fe6e6c781

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI
Sophiane SENNI
2016-06-16 13:37:03 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated June 16, 2016, 1:37 p.m.)


Review request for Default.


Repository: gem5


Description
-------

Changeset 10875:b498767cb7d8
---------------------------
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to "False"), the tags and the data RAMs are accessed in parallel, so the hit latency is the maximum of the tag lookup latency and the RAM access latency. If the access mode is sequential ("sequential_access" set to "True"), the tags and the data RAM are accessed one after the other, so the hit latency is the sum of the tag lookup latency and the RAM access latency.


Diffs (updated)
-----

configs/common/Caches.py UNKNOWN
src/mem/cache/BaseCache.py UNKNOWN
src/mem/cache/base.hh UNKNOWN
src/mem/cache/base.cc UNKNOWN
src/mem/cache/tags/Tags.py UNKNOWN
src/mem/cache/tags/base.hh UNKNOWN
src/mem/cache/tags/base.cc UNKNOWN
src/mem/cache/tags/base_set_assoc.hh UNKNOWN
src/mem/cache/tags/fa_lru.hh UNKNOWN
src/mem/cache/tags/fa_lru.cc UNKNOWN

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI
Jason Lowe-Power
2016-06-16 14:37:36 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
-----------------------------------------------------------


Hi Sophiane,

Thanks for the contribution. It looks like some of the patch doesn't apply cleanly in reviewboard. Did you use the hg postreview extension? It may also help to use the "-o" option on the extension.

Cheers!

- Jason Lowe-Power
Sophiane SENNI
2016-06-16 15:25:58 UTC
Permalink
Post by Sophiane SENNI
Post by Jason Lowe-Power
Hi Sophiane,
Thanks for the contribution. It looks like some of the patch doesn't apply cleanly in reviewboard. Did you use the hg postreview extension? It may also help to use the "-o" option on the extension.
Cheers!
Hi Jason,

If I use the hg postreview extension (with the command "hg postreview -o -u -e 3502"), the patch still does not apply cleanly.


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
-----------------------------------------------------------
Jason Lowe-Power
2016-06-16 15:27:44 UTC
Permalink
Post by Sophiane SENNI
Post by Jason Lowe-Power
Hi Sophiane,
Thanks for the contribution. It looks like some of the patch doesn't apply cleanly in reviewboard. Did you use the hg postreview extension? It may also help to use the "-o" option on the extension.
Cheers!
Hi Jason,
If I use the hg postreview extension (with the command "hg postreview -o -u -e 3502"), the patch still does not apply cleanly.
Make sure you're applying your patch on top of the most recent version of gem5 (NOT gem5-stable). "hg incoming" and "hg pull" may be helpful.

For instance, I believe BaseCache.py was renamed Cache.py in the last few months (I don't remember exactly when).


- Jason


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
-----------------------------------------------------------
Sophiane SENNI
2016-06-16 15:35:04 UTC
Permalink
Post by Sophiane SENNI
Post by Jason Lowe-Power
Hi Sophiane,
Thanks for the contribution. It looks like some of the patch doesn't apply cleanly in reviewboard. Did you use the hg postreview extension? It may also help to use the "-o" option on the extension.
Cheers!
Hi Jason,
If I use the hg postreview extension (with the command "hg postreview -o -u -e 3502"), the patch still does not apply cleanly.
Make sure you're applying your patch on top of the most recent version of gem5 (NOT gem5-stable). "hg incoming" and "hg pull" may be helpful.
For instance, I believe BaseCache.py was renamed Cache.py in the last few months (I don't remember exactly when).
For the currently posted patch, I used "hg diff -g" and then posted the patch manually through the Review Board GUI. But this method does not work properly either; as you noticed, some of the patch doesn't apply cleanly.


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
-----------------------------------------------------------
Sophiane SENNI
2016-06-16 18:15:51 UTC
Permalink
Post by Sophiane SENNI
Post by Jason Lowe-Power
Hi Sophiane,
Thanks for the contribution. It looks like some of the patch doesn't apply cleanly in reviewboard. Did you use the hg postreview extension? It may also help to use the "-o" option on the extension.
Cheers!
Hi Jason,
If I use the hg postreview extension (with the command "hg postreview -o -u -e 3502"), the patch still does not apply cleanly.
Make sure you're applying your patch on top of the most recent version of gem5 (NOT gem5-stable). "hg incoming" and "hg pull" may be helpful.
For instance, I believe BaseCache.py was renamed Cache.py in the last few months (I don't remember exactly when).
For the currently posted patch, I used "hg diff -g" and then posted the patch manually through the Review Board GUI. But this method does not work properly either; as you noticed, some of the patch doesn't apply cleanly.
You are right, the patch was applied to gem5-stable. I will apply it to the most recent version.
Thanks.


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
-----------------------------------------------------------
Sophiane SENNI
2016-06-16 15:14:25 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated June 16, 2016, 3:14 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
-------

Changeset 10875:dd94e2606640
---------------------------
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to "False"), the tags and the data RAMs are accessed in parallel, so the hit latency is the maximum of the tag lookup latency and the RAM access latency. If the access mode is sequential ("sequential_access" set to "True"), the tags and the data RAM are accessed one after the other, so the hit latency is the sum of the tag lookup latency and the RAM access latency.


Diffs (updated)
-----

configs/common/Caches.py 629fe6e6c781
src/mem/cache/BaseCache.py 629fe6e6c781
src/mem/cache/base.hh 629fe6e6c781
src/mem/cache/base.cc 629fe6e6c781
src/mem/cache/tags/Tags.py 629fe6e6c781
src/mem/cache/tags/base.hh 629fe6e6c781
src/mem/cache/tags/base.cc 629fe6e6c781
src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781
src/mem/cache/tags/fa_lru.hh 629fe6e6c781
src/mem/cache/tags/fa_lru.cc 629fe6e6c781

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI
Sophiane SENNI
2016-06-16 15:15:49 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated June 16, 2016, 3:15 p.m.)


Review request for Default.


Repository: gem5


Description
-------

Changeset 10875:dd94e2606640
---------------------------
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to "False"), the tags and the data RAMs are accessed in parallel, so the hit latency is the maximum of the tag lookup latency and the RAM access latency. If the access mode is sequential ("sequential_access" set to "True"), the tags and the data RAM are accessed one after the other, so the hit latency is the sum of the tag lookup latency and the RAM access latency.


Diffs (updated)
-----

configs/common/Caches.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b
src/mem/cache/BaseCache.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b
src/mem/cache/base.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b
src/mem/cache/base.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b
src/mem/cache/tags/Tags.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b
src/mem/cache/tags/base.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b
src/mem/cache/tags/base.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b
src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b
src/mem/cache/tags/fa_lru.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b
src/mem/cache/tags/fa_lru.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI
Sophiane SENNI
2016-06-16 15:27:13 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated June 16, 2016, 3:27 p.m.)


Review request for Default.


Repository: gem5


Description
-------

Changeset 10875:dd94e2606640
---------------------------
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to "False"), the tags and the data RAMs are accessed in parallel, so the hit latency is the maximum of the tag lookup latency and the RAM access latency. If the access mode is sequential ("sequential_access" set to "True"), the tags and the data RAM are accessed one after the other, so the hit latency is the sum of the tag lookup latency and the RAM access latency.


Diffs (updated)
-----

src/mem/cache/tags/fa_lru.cc UNKNOWN
src/mem/cache/tags/fa_lru.hh UNKNOWN
src/mem/cache/tags/base_set_assoc.hh UNKNOWN
src/mem/cache/tags/base.cc UNKNOWN
src/mem/cache/tags/base.hh UNKNOWN
src/mem/cache/tags/Tags.py UNKNOWN
src/mem/cache/base.hh UNKNOWN
src/mem/cache/base.cc UNKNOWN
src/mem/cache/BaseCache.py UNKNOWN
configs/common/Caches.py UNKNOWN

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI
Sophiane SENNI
2016-06-16 18:55:09 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated June 16, 2016, 6:55 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
-------

Changeset 11536:1a3a96d435ed
---------------------------
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to "False"), the tags and the data RAMs are accessed in parallel, so the hit latency is the maximum of the tag lookup latency and the RAM access latency. If the access mode is sequential ("sequential_access" set to "True"), the tags and the data RAM are accessed one after the other, so the hit latency is the sum of the tag lookup latency and the RAM access latency.


Diffs (updated)
-----

src/mem/cache/tags/fa_lru.hh 80e79ae636ca
src/mem/cache/tags/base.cc 80e79ae636ca
src/mem/cache/tags/Tags.py 80e79ae636ca
src/mem/cache/tags/fa_lru.cc 80e79ae636ca
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca
src/mem/cache/tags/base.hh 80e79ae636ca
configs/common/Caches.py 80e79ae636ca
src/mem/cache/Cache.py 80e79ae636ca
src/mem/cache/base.hh 80e79ae636ca
src/mem/cache/base.cc 80e79ae636ca

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI
Pierre-Yves Péneau
2016-06-17 07:57:49 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8419
-----------------------------------------------------------


I don't like the variable names; I think they are confusing, especially in the Python part, which is the user-facing part. "lookup_latency" does not clearly refer to the tag lookup action, and "ram_latency" is also not very clear. Maybe something like "tag_latency" and "line_latency" would be better? I think those names identify the two parts of a cache well.
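For illustration, a first-level cache configured with the kind of split parameters suggested in this thread might look as follows. This is a hypothetical sketch: the parameter names ("tag_latency", "data_latency", "sequential_access") were still under discussion at this point, and the fragment assumes a gem5 build providing m5.objects.

```python
# Hypothetical gem5 config fragment; the latency parameter names are
# the ones proposed in this review, not necessarily the final API.
from m5.objects import Cache

class L1DCache(Cache):
    size = '32kB'
    assoc = 4
    tag_latency = 2        # cycles to search the tag array
    data_latency = 2       # cycles to access a data line
    response_latency = 2
    mshrs = 4
    tgts_per_mshr = 20
    sequential_access = False  # tags and data probed in parallel
```

With sequential_access = False, a hit would cost max(2, 2) = 2 cycles; setting it to True would make a hit cost 2 + 2 = 4 cycles.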

- Pierre-Yves Péneau
Sophiane SENNI
2016-06-17 10:01:13 UTC
Permalink
Post by Sophiane SENNI
Post by Pierre-Yves Péneau
I don't like the variable names; I think they are confusing, especially in the Python part, which is the user-facing part. "lookup_latency" does not clearly refer to the tag lookup action, and "ram_latency" is also not very clear. Maybe something like "tag_latency" and "line_latency" would be better? I think those names identify the two parts of a cache well.
Hi Pierre-Yves,

I agree with you that the variable names in the Python part should not be confusing for users. I reused the names from a previous discussion with Andreas H.
We need feedback from other users to see which names are best. In a cache there are tag arrays and data arrays, so maybe "tag_latency" and "data_line_latency" could be a solution.
Any feedback from other gem5 users would be useful.

Sophiane


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8419
-----------------------------------------------------------
Radhika Jagtap
2016-06-20 12:55:45 UTC
Permalink
Post by Sophiane SENNI
Post by Pierre-Yves Péneau
I don't like the variable names; I think they are confusing, especially in the Python part, which is the user-facing part. "lookup_latency" does not clearly refer to the tag lookup action, and "ram_latency" is also not very clear. Maybe something like "tag_latency" and "line_latency" would be better? I think those names identify the two parts of a cache well.
Hi Pierre-Yves,
I agree with you that the variable names in the Python part should not be confusing for users. I reused the names from a previous discussion with Andreas H.
We need feedback from other users to see which names are best. In a cache there are tag arrays and data arrays, so maybe "tag_latency" and "data_line_latency" could be a solution.
Any feedback from other gem5 users would be useful.
Sophiane
Thanks for bringing this up. I vote for 'tag_latency' (or 'tag_lookup_latency') and 'data_latency'.

If I understand correctly, the patch has an impact on timing/stats only if sequential access is set to True, and in that case it only affects the hit latency. The timing on the miss path and the allocation of MSHR resources (MSHR entry, MSHR target, write buffer entry, request for mem-side bus access) still use the forwardLatency value. forwardLatency used to be 'hit_latency' (at one point in the not-so-distant past, everything was 'hit_latency' anyway!). But since this change makes a distinction between tag and data access, it is logical to make the forward latency equal to tag_latency. If you also had this analysis in mind, could you please add a comment for forwardLatency somewhere?


- Radhika


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8419
-----------------------------------------------------------
Sophiane SENNI
2016-06-20 15:23:44 UTC
Permalink
Post by Sophiane SENNI
Post by Pierre-Yves Péneau
I don't like the variable names; I think they are confusing, especially in the Python part, which is the user-facing part. "lookup_latency" does not clearly refer to the tag lookup action, and "ram_latency" is also not very clear. Maybe something like "tag_latency" and "line_latency" would be better? I think those names identify the two parts of a cache well.
Hi Pierre-Yves,
I agree with you that the variable names in the Python part should not be confusing for users. I reused the names from a previous discussion with Andreas H.
We need feedback from other users to see which names are best. In a cache there are tag arrays and data arrays, so maybe "tag_latency" and "data_line_latency" could be a solution.
Any feedback from other gem5 users would be useful.
Sophiane
Thanks for bringing this up. I vote for 'tag_latency' (or 'tag_lookup_latency') and 'data_latency'.
If I understand correctly the patch has an impact on timing/stats only if sequential access is set to True and in that case only affects the hit latency. The timing on the miss path and allocation of mshr (mshr entry, mshr target, write buffer entry, ask for mem-side bus access) still uses the forwardLatency value. The forwardLatency used to be 'hit_latency' (at one point not so far in the past everything was 'hit_latency' anyway!). But this change makes a distinction between tag and data access and it is logical to make forward latency equal to tag_latency. If you also had this analysis in mind, please could you add a comment for forwardLatency somewhere?
The distinction between tag and data access also raises the question of the fillLatency value. Does the sequential access mode have to be taken into account on a fill? In this patch, the fillLatency is assumed to always be equal to data_latency.
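To make the analysis above concrete, here is a hedged sketch of how the latencies could map onto the cache's internal values; the struct and method names are illustrative, not gem5's actual fields:

```cpp
// Sketch of the latency split discussed above: on the miss path only
// the tag array has been probed before the request is forwarded, so
// the forward latency follows the tag latency; a fill writes the data
// array, so the fill latency follows the data latency.
struct CacheLatencySketch {
    unsigned tagLatency;
    unsigned dataLatency;

    unsigned forwardLatency() const { return tagLatency; }
    unsigned fillLatency() const { return dataLatency; }
};
```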


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8419
-----------------------------------------------------------
Radhika Jagtap
2016-06-20 13:00:15 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8425
-----------------------------------------------------------



src/mem/cache/tags/base.cc (line 59)
<http://reviews.gem5.org/r/3502/#comment7299>

More than 80 characters in the line.



src/mem/cache/tags/base_set_assoc.hh (line 220)
<http://reviews.gem5.org/r/3502/#comment7298>

Bad whitespace around here.



src/mem/cache/tags/fa_lru.cc (lines 214 - 223)
<http://reviews.gem5.org/r/3502/#comment7297>

Indentation is off.


- Radhika Jagtap
Sophiane SENNI
2016-06-20 15:07:46 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated June 20, 2016, 3:07 p.m.)


Review request for Default.


Changes
-------

New variable names in Caches.py : tag_latency and data_latency


Summary (updated)
-----------------

cache: Split the hit latency into tag lookup latency and data access latency


Repository: gem5


Description (updated)
-------

Changeset 11536:1a3a96d435ed
---------------------------
cache: Split the hit latency into tag lookup latency and data access latency

If the cache access mode is parallel ("sequential_access" parameter set to "False"), tags and data are accessed in parallel. Therefore, the hit latency is the maximum latency between tag lookup latency and data access latency. On the other hand, if the cache access mode is sequential ("sequential_access" parameter set to "True"), tags and data are accessed sequentially. Therefore, the hit latency is the sum of tag lookup latency plus data access latency.


Diffs (updated)
-----

configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI
Nikos Nikoleris
2016-06-27 17:00:55 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
-----------------------------------------------------------


Thanks for contributing this. Could you please change the commit message so that the summary (first line) is no more than 65 characters and the body no more than 75 characters? The commit summary should start with the mem keyword.


src/mem/cache/tags/base_set_assoc.hh (line 217)
<http://reviews.gem5.org/r/3502/#comment7316>

Would it make sense to move most of this code into the constructor? The flag sequentialAccess and the condition lookupLatency >= dataLatency shouldn't change during the simulation.



src/mem/cache/tags/fa_lru.cc (line 212)
<http://reviews.gem5.org/r/3502/#comment7315>

Same here


- Nikos Nikoleris
Sophiane SENNI
2016-06-28 09:19:05 UTC
Permalink
Post by Sophiane SENNI
src/mem/cache/tags/base_set_assoc.hh, line 218
<http://reviews.gem5.org/r/3502/diff/8/?file=56390#file56390line218>
Would it make sense to move most of this code into the constructor? The flag sequentialAccess and the condition lookupLatency >= dataLatency shouldn't change during the simulation.
I think this code should be left here. Because the total access latency now depends on both tag_latency and data_latency, the value of the 'lat' parameter specifying the access latency is not correct when the accessBlock() function is called. As a result, the 'lat' value should be updated inside the function depending on whether it is a sequential or parallel access.
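For illustration, the in-function update described here could look like the following. This is a hypothetical reduction only; the real accessBlock() also returns the block and updates replacement state:

```cpp
#include <algorithm>

// Sketch of the update done inside accessBlock(): 'lat' comes in with
// a provisional value and is overwritten according to the configured
// access mode.
void updateAccessLatency(bool sequentialAccess, unsigned lookupLatency,
                         unsigned dataLatency, unsigned &lat)
{
    if (sequentialAccess) {
        // Tags then data: the latencies add up.
        lat = lookupLatency + dataLatency;
    } else {
        // Tags and data in parallel: the slower array dominates.
        lat = std::max(lookupLatency, dataLatency);
    }
}
```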
Post by Sophiane SENNI
src/mem/cache/tags/fa_lru.cc, line 212
<http://reviews.gem5.org/r/3502/diff/8/?file=56392#file56392line212>
Same here
Same here


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
-----------------------------------------------------------
Sophiane SENNI
2016-06-30 10:30:45 UTC
Permalink
Regarding your suggestion, I agree with you that it would be better to move the code into the constructor since, as you mentioned, the flag sequentialAccess and the access latency value should not change during the simulation. I will see how I can modify the patch accordingly.


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
-----------------------------------------------------------
Nikos Nikoleris
2016-06-30 12:25:40 UTC
Permalink
Thanks for doing this! I think you could create a new variable in the base class (BaseTags) and use that as the latency on every access; in any case, in the code the latency does not depend on the replacement policy. Mind, though, that if the access is a miss the latency should always be tagLatency, even when parallel access is enabled. We could also move the sequentialAccess variable to BaseCache, although I am not sure how applicable a parallel lookup of the tag and data arrays is for a fully associative cache.


- Nikos


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
-----------------------------------------------------------
Nikos Nikoleris
2016-07-17 19:42:39 UTC
Permalink
Sophiane, how are things progressing? It would be really great to get done with this and commit it.


- Nikos


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
-----------------------------------------------------------
Sophiane SENNI
2016-07-18 08:10:23 UTC
Permalink
Hi Nikos,

I was and I am still quite busy with some other work I have to finish. But I will try to publish the new version of the patch as soon as possible. Sorry for the delay.


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
-----------------------------------------------------------
Sophiane SENNI
2016-07-20 10:45:11 UTC
Permalink
Nikos,

For the new patch, I use an "accessLatency" variable in the base class (BaseTags) as the latency on every access. I tried to initialize this variable in the constructor according to the access mode as follows:

BaseTags::BaseTags(const Params *p)
    : ClockedObject(p), blkSize(p->block_size), size(p->size),
      lookupLatency(p->tag_latency), dataLatency(p->data_latency),
      accessLatency(p->sequential_access ?
                    // If sequential access, sum tag lookup and data
                    // access latencies
                    (p->tag_latency + p->data_latency) :
                    // If parallel access, take the max latency
                    // between tag lookup and data access
                    (p->tag_latency >= p->data_latency ?
                     p->tag_latency : p->data_latency)),
      cache(nullptr), warmupBound(0),
      warmedUp(false), numBlocks(0)
{
}

However, when checking by simulation, the value of the "sequential_access" parameter is always taken as "False", even if this parameter is set to "True" in configs/common/Caches.py.

Do you have a solution to correctly initialize the "accessLatency" variable in the initializer list?

Thanks.


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
-----------------------------------------------------------
Nikos Nikoleris
2016-07-20 12:54:28 UTC
Permalink
Post by Sophiane SENNI
src/mem/cache/tags/base_set_assoc.hh, line 218
<http://reviews.gem5.org/r/3502/diff/8/?file=56390#file56390line218>
Would it make sense to move most of this code in the constructor? The flag sequentialAccess and the condition lookupLatency >= dataLantency shouldn't change during the simulation.
I think this code should be left here. Because the total access latency now depends on both tag_latency and data_latency, the value of the 'lat' parameter specifying the access latency is not correct when the accessBlock() function is called. As a result, the 'lat' value should be updated inside the function depending on if it is a sequential or parallel access.
Regarding your suggestion, I am agree with you that it would be better to move the code in the constructor since as you mentioned the flag sequentialAccess and the access latency value should not change during the simulation. I will see how I can modify the patch accordingly.
Thanks for doing this! I think you could create a new variable in the base class (BaseTags) and use that as the latency on every access. In any case, in the code the latency is not dependent on the replacement policy. Mind though that if the access is a miss the latency should always be tagLatency, even when we've enabled the parallel access. We could also move the sequentialAccess variable to BaseCache although I am not sure how applicable a parallel lookup to the tag and the data array is for a fully associative cache.
Sophiane, how are things progressing? It would be really great to get done with this and commit it.
Hi Nikos,
I was, and still am, quite busy with some other work I have to finish, but I will try to publish the new version of the patch as soon as possible. Sorry for the delay.
Nikos,
BaseTags::BaseTags(const Params *p)
    : ClockedObject(p), blkSize(p->block_size), size(p->size),
      lookupLatency(p->tag_latency), dataLatency(p->data_latency),
      // If sequential access, sum the tag lookup and data access
      // latencies; if parallel access, take the max of the two.
      accessLatency(p->sequential_access ?
                    p->tag_latency + p->data_latency :
                    (p->tag_latency >= p->data_latency ?
                     p->tag_latency : p->data_latency)),
      cache(nullptr), warmupBound(0),
      warmedUp(false), numBlocks(0)
{
}
However, when checking by simulation, the value of the "sequential_access" parameter is always taken as "False", even if this parameter is set to "True" in configs/common/Caches.py.
Do you have a solution to correctly initialize the "accessLatency" variable in the initialization list?
Thanks.
Hi Sophiane, can you post the patch so that I can have a look? I guess you ran into an ordering problem with how we initialize the objects. BTW, instead of a >= b ? a : b you can always use std::max(a, b).


- Nikos


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
-----------------------------------------------------------
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated June 28, 2016, 1:06 p.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
Diffs
-----
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Nikos Nikoleris
2016-06-28 09:14:33 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8443
-----------------------------------------------------------


Just some minor style issues.


src/mem/cache/tags/base_set_assoc.hh (line 217)
<http://reviews.gem5.org/r/3502/#comment7323>

spurious new line



src/mem/cache/tags/base_set_assoc.hh (line 225)
<http://reviews.gem5.org/r/3502/#comment7321>

Spurious new line



src/mem/cache/tags/base_set_assoc.hh (line 226)
<http://reviews.gem5.org/r/3502/#comment7319>

Please wrap the comment so that it does not violate the 79-character-per-line limit



src/mem/cache/tags/fa_lru.cc (line 59)
<http://reviews.gem5.org/r/3502/#comment7320>

Please wrap the comment so that it does not violate the 79-character-per-line limit



src/mem/cache/tags/fa_lru.cc (line 213)
<http://reviews.gem5.org/r/3502/#comment7324>

spurious new line



src/mem/cache/tags/fa_lru.cc (line 217)
<http://reviews.gem5.org/r/3502/#comment7325>

spurious new line



src/mem/cache/tags/fa_lru.cc (line 218)
<http://reviews.gem5.org/r/3502/#comment7322>

Please wrap the comment so that it does not violate the 79-character-per-line limit


- Nikos Nikoleris
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated June 20, 2016, 3:07 p.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
cache: Split the hit latency into tag lookup latency and data access latency
If the cache access mode is parallel ("sequential_access" parameter set to "False"), tags and data are accessed in parallel. Therefore, the hit latency is the maximum latency between tag lookup latency and data access latency. On the other hand, if the cache access mode is sequential ("sequential_access" parameter set to "True"), tags and data are accessed sequentially. Therefore, the hit latency is the sum of tag lookup latency plus data access latency.
Diffs
-----
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Sophiane SENNI
2016-06-28 09:57:15 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated June 28, 2016, 9:57 a.m.)


Review request for Default.


Summary (updated)
-----------------

mem: Split the hit_latency into tag_latency and data_latency


Repository: gem5


Description (updated)
-------

Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency

Sum the two latency values for sequential access. Otherwise, take the max.


Diffs (updated)
-----

src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI
Nikos Nikoleris
2016-06-28 11:52:46 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8444
-----------------------------------------------------------


Sorry, that was probably unclear from my comment, but the commit message body should be wrapped so that no line is more than 75 characters, rather than the whole body being limited to 76 characters.

- Nikos Nikoleris
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated June 28, 2016, 9:57 a.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
Sum the two latency values for sequential access. Otherwise, take the max.
Diffs
-----
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Sophiane SENNI
2016-06-28 13:06:56 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated June 28, 2016, 1:06 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
-------

Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-----

configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI
Sophiane SENNI
2016-07-20 13:32:07 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated July 20, 2016, 1:32 p.m.)


Review request for Default.


Repository: gem5


Description
-------

Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-----

configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI
Nikos Nikoleris
2016-07-23 14:05:13 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8519
-----------------------------------------------------------


Thanks for all the effort you've put into this. Please make this last set of changes and it should be good to go.


src/mem/cache/tags/Tags.py (line 68)
<http://reviews.gem5.org/r/3502/#comment7393>

This stale argument causes the problems you're facing. Please remove it; I don't think it is needed.



src/mem/cache/tags/base.cc (line 61)
<http://reviews.gem5.org/r/3502/#comment7394>

Please remove the comment here. If you would like, move the comment to the header to explain what the access latency is.



src/mem/cache/tags/base.cc (line 63)
<http://reviews.gem5.org/r/3502/#comment7396>

No need for parentheses here.



src/mem/cache/tags/base.cc (line 66)
<http://reviews.gem5.org/r/3502/#comment7395>

You don't need the parentheses here either.


- Nikos Nikoleris
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated July 20, 2016, 1:32 p.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
Diffs
-----
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Sophiane SENNI
2016-07-25 13:20:35 UTC
Permalink
Post by Sophiane SENNI
src/mem/cache/tags/Tags.py, line 68
<http://reviews.gem5.org/r/3502/diff/11/?file=57299#file57299line68>
This stale argument causes the problems you're facing. Please remove it; I don't think it is needed.
You are right. This change resolved the problem. Thanks Nikos.


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8519
-----------------------------------------------------------
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated July 25, 2016, 1:16 p.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
Diffs
-----
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Sophiane SENNI
2016-07-25 13:16:48 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated July 25, 2016, 1:16 p.m.)


Review request for Default.


Repository: gem5


Description
-------

Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-----

src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI
Nikos Nikoleris
2016-07-25 13:18:53 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
-----------------------------------------------------------

Ship it!


Ship It!

- Nikos Nikoleris
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated July 25, 2016, 1:16 p.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
Diffs
-----
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Sophiane SENNI
2016-07-25 15:04:58 UTC
Permalink
Post by Sophiane SENNI
Post by Nikos Nikoleris
Ship It!
How can I commit the patch? I am not sure I have commit access.


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
-----------------------------------------------------------
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated July 25, 2016, 1:16 p.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
Diffs
-----
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Nikos Nikoleris
2016-07-25 15:11:43 UTC
Permalink
Post by Sophiane SENNI
Post by Nikos Nikoleris
Ship It!
How can I commit the patch? I am not sure I have commit access.
You can't commit it yourself; one of the maintainers will have to commit it for you.


- Nikos


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
-----------------------------------------------------------
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated July 25, 2016, 1:16 p.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
Diffs
-----
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Jason Lowe-Power
2016-07-25 15:13:27 UTC
Permalink
Post by Sophiane SENNI
Post by Nikos Nikoleris
Ship It!
How can I commit the patch? I am not sure I have commit access.
You can't commit it yourself; one of the maintainers will have to commit it for you.
Hi Sophiane,

Have you run all of the regression tests? Are there changes to the stats (I would expect so)? Have you checked to make sure the stat changes make intuitive sense? Thanks!

Also, it would be nice to see Andreas H., or Steve, or someone who has been modifying the cache code lately to take a quick look at this patch before it's committed.


- Jason


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
-----------------------------------------------------------
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated July 25, 2016, 1:16 p.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
Diffs
-----
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Sophiane SENNI
2016-07-26 08:01:05 UTC
Permalink
Post by Sophiane SENNI
Post by Nikos Nikoleris
Ship It!
How can I commit the patch? I am not sure I have commit access.
You can't commit it yourself; one of the maintainers will have to commit it for you.
Hi Sophiane,
Have you run all of the regression tests? Are there changes to the stats (I would expect so)? Have you checked to make sure the stat changes make intuitive sense? Thanks!
Also, it would be nice to see Andreas H., or Steve, or someone who has been modifying the cache code lately to take a quick look at this patch before it's committed.
Hi Jason,

I did not run any regression tests. I only checked (with --debug-flags=Cache) whether the cache access latencies were correct for:
- hits and misses
- parallel and sequential accesses
Regarding the stats, I only checked whether the "sim_seconds" changes were intuitive when modifying:
- tag_latency and data_latency
- cache access mode (i.e. sequential_access = True or False)


- Sophiane


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
-----------------------------------------------------------
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated July 25, 2016, 1:16 p.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
Diffs
-----
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Jason Lowe-Power
2016-07-26 22:28:08 UTC
Permalink
Post by Sophiane SENNI
Post by Nikos Nikoleris
Ship It!
How can I commit the patch? I am not sure I have commit access.
You can't commit it yourself; one of the maintainers will have to commit it for you.
Hi Sophiane,
Have you run all of the regression tests? Are there changes to the stats (I would expect so)? Have you checked to make sure the stat changes make intuitive sense? Thanks!
Also, it would be nice to see Andreas H., or Steve, or someone who has been modifying the cache code lately to take a quick look at this patch before it's committed.
Hi Jason,
- hits and misses
- parallel and sequential accesses
- tag_latency and data_latency
- cache access mode (i.e. sequential_access = True or False)
Please run all of the regression tests that you can (e.g., the ones that don't require proprietary binaries). I know there are some other config files that will need to be changed (e.g., the learning_gem5 scripts), and there may be others.

Also, I expect this will require another patch to update the stats, too. But you don't have to post that one on reviewboard :).


- Jason


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
-----------------------------------------------------------
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated July 25, 2016, 1:16 p.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
Diffs
-----
src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --Debug-flags=Cache
Thanks,
Sophiane SENNI
Sophiane SENNI
2016-07-27 08:34:23 UTC
Permalink
I ran the following command: "scons build/ARM/tests/opt/quick/se"
The output seems to be the same as when compiling gem5 with the command "scons build/ARM/gem5.opt". Is this normal?
I expected to get outputs like the following one: "***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/minor-timing: passed."


- Sophiane


Sophiane SENNI
2016-07-27 11:12:30 UTC
Permalink
I successfully launched the regression tests. You are right: some config files need to be changed. For instance, in the "O3_ARM_v7a.py" file, "hit_latency" has to be replaced by the two new parameters "tag_latency" and "data_latency". I will make all the necessary changes according to the regression test results.
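For anyone updating their own configs, the change amounts to replacing the single hit_latency assignment with the two new parameters. A hypothetical before/after sketch (the class name follows configs/common/O3_ARM_v7a.py, but the values here are illustrative, not the file's actual contents):

```python
from m5.objects import Cache

# Before the patch, a single latency covered both tag and data access:
#
#     class O3_ARM_v7a_ICache(Cache):
#         hit_latency = 1

# After the patch, tag and data latencies are specified separately; with
# sequential_access = False the effective hit latency is their maximum,
# with sequential_access = True it is their sum.
class O3_ARM_v7a_ICache(Cache):
    tag_latency = 1
    data_latency = 1
    sequential_access = False
```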


- Sophiane


Sophiane SENNI
2016-07-29 14:45:25 UTC
Permalink
Jason,

Below are the results of the regression tests I ran:

***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/minor-timing: passed.
***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/o3-timing: passed.
***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/o3-timing-checker: passed.
***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic: passed.
***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic-dummychecker: passed.
***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-timing: passed.
***** build/ARM/tests/opt/quick/se/03.learning-gem5/arm/linux/learning-gem5-p1-simple: passed.
***** build/ARM/tests/opt/quick/se/03.learning-gem5/arm/linux/learning-gem5-p1-two-level: FAILED!

The last test failed with the following error (do you know how to resolve it?):

File "/home/senni/Partage/shared_folder_gem5/gem5/gem5_from_mercurial/gem5/tests/testing/../../configs/learning_gem5/part1/two_level.py", line 93, in <module>
system.cpu.icache = L1ICache(opts)
NameError: name 'L1ICache' is not defined


For the fs mode regression tests, it fails with the following error:

File "/home/senni/Partage/shared_folder_gem5/gem5/gem5_from_mercurial/gem5/configs/common/SysPaths.py", line 69, in system
raise IOError, "Can't find a path to system files."
IOError: Can't find a path to system files.

I get this error even though the $M5_PATH variable is correctly set.


- Sophiane


Jason Lowe-Power
2016-07-29 15:08:07 UTC
Permalink
I don't know why you're seeing that error for the FS tests. I would suggest adding some print statements to the python config scripts. Additionally, it may be helpful to run gem5 without the regression scripts to track down these issues.

For learning_gem5... It should be working. I just ran the test on the head and it passed. I imagine you need to modify learning_gem5/part1/caches.py to add the tag/data latencies.


- Jason


Sophiane SENNI
2016-07-29 15:28:55 UTC
Permalink
Ok for the FS tests.
For learning_gem5, that is odd. I replaced the hit_latency parameters with the tag/data latencies, but the error is still there.


- Sophiane


Pierre-Yves Péneau
2016-07-25 14:14:45 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8527
-----------------------------------------------------------

Ship it!


Ship It!

- Pierre-Yves Péneau
Andreas Hansson
2016-07-27 16:52:20 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8552
-----------------------------------------------------------



src/mem/cache/tags/Tags.py (line 57)
<http://reviews.gem5.org/r/3502/#comment7437>

Why would the tags care about the data latency?



src/mem/cache/tags/base.hh (line 75)
<http://reviews.gem5.org/r/3502/#comment7438>

Seems odd that the tags need to track this. Is this still the best division? Perhaps explain why it's here.



src/mem/cache/tags/base_set_assoc.hh (line 227)
<http://reviews.gem5.org/r/3502/#comment7439>

Could you add a comment here?

It seems to me this code is not right, as it checks if the data is technically written now, but we only need the data at time T.

Should we not rather add the dataLatency to the blk->whenReady and then do the plus or max operation?



src/mem/cache/tags/fa_lru.cc (line 188)
<http://reviews.gem5.org/r/3502/#comment7440>

here we don't care about blk->whenReady?


Some minor concerns that hopefully make sense.

- Andreas Hansson
Sophiane SENNI
2016-07-28 09:55:28 UTC
Permalink
Post by Sophiane SENNI
src/mem/cache/tags/Tags.py, line 57
<http://reviews.gem5.org/r/3502/diff/12/?file=57339#file57339line57>
Why would the tags care about the data latency?
This is required to initialize the "accessLatency" variable in tags/base.cc.
If I am right, the "accessLatency" variable declared in tags/base.hh is used to account for the cache latency on every access (i.e. covering both tag and data accesses). Previously, "accessLatency" was always assigned the "hit_latency" value. With this patch, the "accessLatency" value is initialized according to the cache access mode (parallel or sequential).
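To make that initialization concrete, here is a minimal sketch (in Python rather than the actual C++ in tags/base.cc, and the function name is mine) of how the effective access latency is derived from the two new parameters:

```python
def access_latency(tag_latency, data_latency, sequential_access):
    """Effective cache access latency, in cycles, per the patch description."""
    if sequential_access:
        # Tags are looked up first, then data is read: the latencies add up.
        return tag_latency + data_latency
    # Tags and data are accessed in parallel: the slower path dominates.
    return max(tag_latency, data_latency)
```

For example, with tag_latency = 2 and data_latency = 4, a parallel cache reports 4 cycles while a sequential one reports 6.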
Post by Sophiane SENNI
src/mem/cache/tags/base.hh, line 75
<http://reviews.gem5.org/r/3502/diff/12/?file=57340#file57340line75>
Seems odd that the tags need to track this. Is this still the best division? Perhaps explain why it's here.
This can be removed. We actually do not need it since data latency is already included in accessLatency.
Post by Sophiane SENNI
src/mem/cache/tags/base_set_assoc.hh, line 228
<http://reviews.gem5.org/r/3502/diff/12/?file=57342#file57342line228>
Could you add a comment here?
It seems to me this code is not right, as it checks if the data is technically written now, but we only need the data at time T.
Should we not rather add the dataLatency to the blk->whenReady and then do the plus or max opteration?
I actually don't really understand what this code represents; it was already present before applying the patch. It seems to cover a case where the cache latency is greater than accessLatency, when the lat variable is updated as follows:
lat = cache->ticksToCycles(blk->whenReady - curTick())
Can this situation actually occur?
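For context, the guard in question behaves roughly like the sketch below (a Python approximation of the C++; ticks_per_cycle stands in for cache->ticksToCycles, and whether whenReady can actually lie in the future, e.g. during a pending fill, is exactly the open question):

```python
def effective_latency(access_latency, when_ready, cur_tick, ticks_per_cycle):
    """Latency actually charged for a hit, modeling the whenReady guard."""
    if when_ready > cur_tick:
        # The block's data is still in flight; the access cannot complete
        # before whenReady, so the latency stretches accordingly.
        ready_cycles = (when_ready - cur_tick) // ticks_per_cycle
        if ready_cycles > access_latency:
            return ready_cycles
    return access_latency
```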
Post by Sophiane SENNI
src/mem/cache/tags/fa_lru.cc, line 188
<http://reviews.gem5.org/r/3502/diff/12/?file=57343#file57343line188>
here we don't care about blk->whenReady?
If we care about blk->whenReady in base_set_assoc.hh, I assume we also have to care about it here.


- Sophiane


Andreas Hansson
2016-07-28 12:04:13 UTC
Permalink
To be clear: please go ahead with the patch as is. I merely wanted to highlight that there is room for further patches and improvements.


- Andreas


Andreas Hansson
2016-08-25 09:58:33 UTC
Permalink
Post by Sophiane SENNI
src/mem/cache/tags/base_set_assoc.hh, line 228
<http://reviews.gem5.org/r/3502/diff/12/?file=57342#file57342line228>
Could you add a comment here?
It seems to me this code is not right, as it checks if the data is technically written now, but we only need the data at time T.
Should we not rather add the dataLatency to blk->whenReady and then do the plus or max operation?
lat = cache->ticksToCycles(blk->whenReady - curTick())
Can this situation actually occur?
blk->whenReady represents the fact that the block is technically not available yet. Due to how we do timing modelling, we annotate the block when it arrives, but have to remember when it is _actually_ available. Thus, anything we do here should add on top of blk->whenReady. Same for fa_lru.


- Andreas


Sophiane SENNI
2016-08-26 10:17:19 UTC
Permalink
Post by Sophiane SENNI
src/mem/cache/tags/base_set_assoc.hh, line 228
<http://reviews.gem5.org/r/3502/diff/12/?file=57342#file57342line228>
Could you add a comment here?
It seems to me this code is not right, as it checks if the data is technically written now, but we only need the data at time T.
Should we not rather add the dataLatency to blk->whenReady and then do the plus or max operation?
lat = cache->ticksToCycles(blk->whenReady - curTick())
Can this situation actually occur?
blk->whenReady represents the fact that the block is technically not available yet. Due to how we do timing modelling, we annotate the block when it arrives, but have to remember when it is _actually_ available. Thus, anything we do here should add on top of blk->whenReady. Same for fa_lru.
Ok. So if I understood correctly, we actually need to apply the accessLatency on top of blk->whenReady. Hence, the corrected code would be as follows:

if (blk->whenReady > curTick() &&
    cache->ticksToCycles(blk->whenReady - curTick()) > accessLatency) {
    lat = cache->ticksToCycles(blk->whenReady - curTick()) + accessLatency;
}

Does this change make more sense?
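For what it's worth, the proposed rule can be exercised in isolation. The sketch below is illustrative only: `ticksToCycles` is a stand-in for the cache's helper with an assumed 500-tick clock period, and `accessLat` is a hypothetical free function, not the actual gem5 code.

```cpp
#include <cstdint>

using Tick = uint64_t;
using Cycles = uint64_t;

// Stand-in for cache->ticksToCycles(); the 500-tick clock period is an
// assumption made only for this sketch.
static Cycles ticksToCycles(Tick t) { return t / 500; }

// Hypothetical helper mirroring the proposed change: if the block is
// still in flight (whenReady in the future), the access latency is
// charged on top of the remaining wait.
Cycles accessLat(Tick whenReady, Tick curTick, Cycles accessLatency)
{
    Cycles lat = accessLatency;
    if (whenReady > curTick &&
        ticksToCycles(whenReady - curTick) > accessLatency) {
        lat = ticksToCycles(whenReady - curTick) + accessLatency;
    }
    return lat;
}
```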


- Sophiane


Andreas Hansson
2016-10-24 20:41:17 UTC
Permalink
Post by Sophiane SENNI
src/mem/cache/tags/base_set_assoc.hh, line 228
<http://reviews.gem5.org/r/3502/diff/12/?file=57342#file57342line228>
Could you add a comment here?
It seems to me this code is not right, as it checks if the data is technically written now, but we only need the data at time T.
Should we not rather add the dataLatency to blk->whenReady and then do the plus or max operation?
lat = cache->ticksToCycles(blk->whenReady - curTick())
Can this situation actually occur?
blk->whenReady represents the fact that the block is technically not available yet. Due to how we do timing modelling, we annotate the block when it arrives, but have to remember when it is _actually_ available. Thus, anything we do here should add on top of blk->whenReady. Same for fa_lru.
if (blk->whenReady > curTick() &&
    cache->ticksToCycles(blk->whenReady - curTick()) > accessLatency) {
    lat = cache->ticksToCycles(blk->whenReady - curTick()) + accessLatency;
}
Does this change make more sense ?
Yes. Also, could you add a comment to explain what is happening here.
Post by Sophiane SENNI
src/mem/cache/tags/fa_lru.cc, line 188
<http://reviews.gem5.org/r/3502/diff/12/?file=57343#file57343line188>
here we don't care about blk->whenReady?
If we care about blk->whenReady in base_set_assoc.hh, I assume we also have to care about it here.
Agreed.


- Andreas


Sophiane SENNI
2016-07-28 10:31:39 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated July 28, 2016, 10:31 a.m.)


Review request for Default.


Repository: gem5


Description
-------

Changeset 11536:1a3a96d435ed
---------------------------
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
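The rule in the description reduces to a one-liner. The sketch below uses illustrative names (hitLatency, tagLat, dataLat), not the identifiers from the patch itself:

```cpp
#include <algorithm>
#include <cstdint>

using Cycles = uint64_t;

// sequential_access == true:  tags then data, so the latencies add up.
// sequential_access == false: tags and data probed in parallel, so the
// slower of the two determines the hit latency.
Cycles hitLatency(Cycles tagLat, Cycles dataLat, bool sequentialAccess)
{
    return sequentialAccess ? tagLat + dataLat
                            : std::max(tagLat, dataLat);
}
```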


Diffs (updated)
-----

configs/common/Caches.py 4aac82f109517217e6bfb3812689280e7a8fa842
configs/common/O3_ARM_v7a.py 4aac82f109517217e6bfb3812689280e7a8fa842
src/mem/cache/Cache.py 4aac82f109517217e6bfb3812689280e7a8fa842
src/mem/cache/base.hh 4aac82f109517217e6bfb3812689280e7a8fa842
src/mem/cache/base.cc 4aac82f109517217e6bfb3812689280e7a8fa842
src/mem/cache/tags/Tags.py 4aac82f109517217e6bfb3812689280e7a8fa842
src/mem/cache/tags/base.hh 4aac82f109517217e6bfb3812689280e7a8fa842
src/mem/cache/tags/base.cc 4aac82f109517217e6bfb3812689280e7a8fa842
src/mem/cache/tags/base_set_assoc.hh 4aac82f109517217e6bfb3812689280e7a8fa842
src/mem/cache/tags/fa_lru.cc 4aac82f109517217e6bfb3812689280e7a8fa842

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI
Pierre-Yves Péneau
2016-10-21 13:29:42 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8960
-----------------------------------------------------------


Hi,

Could someone commit this patch? I don't have write access to the repository, and neither does Sophiane.
Thank you.

- Pierre-Yves Péneau
Jason Lowe-Power
2016-10-21 14:23:12 UTC
Permalink
Post by Sophiane SENNI
Post by Pierre-Yves Péneau
Hi,
Could someone commit this patch? I don't have write access to the repository, and neither does Sophiane.
Thank you.
Sorry we've been so slow on this patch. A couple of questions before I commit.

1. Are all of Andreas H.'s comments resolved? I'd like to see a "Ship It" from him.
2. You need to make sure the regressions are passing. I understand that our regression testing is poor, but I know that the learning_gem5 regression is failing because of this patch. The file configs/learning_gem5/part1/caches.py needs to be updated. There are likely other files that need to be updated as well (configs/example/arm/devices.py comes to mind; there may be others).


- Jason


Pierre-Yves Péneau
2016-10-24 09:52:33 UTC
Permalink
Post by Jason Lowe-Power
Post by Pierre-Yves Péneau
Hi,
Could someone commit this patch? I don't have write access to the repository, and neither does Sophiane.
Thank you.
Sorry we've been so slow on this patch. A couple of questions before I commit.
1. Are all of Andreas H.'s comments resolved? I'd like to see a "Ship It" from him.
2. You need to make sure the regressions are passing. I understand that our regression testing is poor, but I know that the learning_gem5 regression is failing because of this patch. The file configs/learning_gem5/part1/caches.py needs to be updated. There are likely other files that need to be updated as well (configs/example/arm/devices.py comes to mind; there may be others).
1. Sophiane answered Andreas H.'s issues; Andreas did not respond further (quote: "Please go ahead with the patch as is"). I assume it's OK even without a "Ship It" from him.
2. Regression tests have been done. Failures are due to missing CPU2000 benchmarks. The review will be updated soon.


- Pierre-Yves


Sophiane SENNI
2016-10-24 15:02:00 UTC
Permalink
Post by Jason Lowe-Power
Post by Pierre-Yves Péneau
Hi,
Could someone commit this patch? I don't have write access to the repository, and neither does Sophiane.
Thank you.
Sorry we've been so slow on this patch. A couple of questions before I commit.
1. Are all of Andreas H.'s comments resolved? I'd like to see a "Ship It" from him.
2. You need to make sure the regressions are passing. I understand that our regression testing is poor, but I know that the learning_gem5 regression is failing because of this patch. The file configs/learning_gem5/part1/caches.py needs to be updated. There are likely other files that need to be updated as well (configs/example/arm/devices.py comes to mind; there may be others).
1. Sophiane answered Andreas H.'s issues; Andreas did not respond further (quote: "Please go ahead with the patch as is"). I assume it's OK even without a "Ship It" from him.
2. Regression tests have been done. Failures are due to missing CPU2000 benchmarks. The review will be updated soon.
The regression tests passed, except the ones that require proprietary binaries.


- Sophiane


Sophiane SENNI
2016-10-24 14:56:48 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated Oct. 24, 2016, 2:56 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
-------

Changeset 11688:9dba209f1590
---------------------------
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-----

src/mem/cache/tags/base.cc 4aac82f10951
src/mem/cache/tags/base_set_assoc.hh 4aac82f10951
src/mem/cache/tags/fa_lru.cc 4aac82f10951
configs/common/Caches.py 4aac82f10951
configs/common/O3_ARM_v7a.py 4aac82f10951
configs/example/arm/devices.py 4aac82f10951
configs/learning_gem5/part1/caches.py 4aac82f10951
src/mem/cache/Cache.py 4aac82f10951
src/mem/cache/base.hh 4aac82f10951
src/mem/cache/base.cc 4aac82f10951
src/mem/cache/tags/Tags.py 4aac82f10951
src/mem/cache/tags/base.hh 4aac82f10951

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI
Sophiane SENNI
2016-10-25 09:18:10 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated Oct. 25, 2016, 9:18 a.m.)


Review request for Default.


Repository: gem5


Description (updated)
-------

Changeset 11688:74be5cba513a
---------------------------
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-----

configs/common/Caches.py 4aac82f10951
configs/common/O3_ARM_v7a.py 4aac82f10951
configs/example/arm/devices.py 4aac82f10951
configs/learning_gem5/part1/caches.py 4aac82f10951
src/mem/cache/Cache.py 4aac82f10951
src/mem/cache/base.hh 4aac82f10951
src/mem/cache/base.cc 4aac82f10951
src/mem/cache/tags/Tags.py 4aac82f10951
src/mem/cache/tags/base.hh 4aac82f10951
src/mem/cache/tags/base.cc 4aac82f10951
src/mem/cache/tags/base_set_assoc.hh 4aac82f10951
src/mem/cache/tags/fa_lru.hh 4aac82f10951
src/mem/cache/tags/fa_lru.cc 4aac82f10951

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI
Andreas Hansson
2016-10-26 09:04:00 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8986
-----------------------------------------------------------



src/mem/cache/tags/base_set_assoc.hh (line 233)
<http://reviews.gem5.org/r/3502/#comment7716>

If you don't mind changing it, could we keep the operator on the first line?

The same goes for the condition in the if statement.

I am not sure if this is actually captured in the style guide, but it is definitely convention.

The same goes for the copy of this code in FA-LRU.

Thanks!


- Andreas Hansson
Sophiane SENNI
2016-10-27 11:25:03 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------

(Updated Oct. 27, 2016, 11:25 a.m.)


Review request for Default.


Repository: gem5


Description (updated)
-------

Changeset 11688:1a792798e845
---------------------------
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-----

configs/common/Caches.py 4aac82f10951
configs/common/O3_ARM_v7a.py 4aac82f10951
configs/example/arm/devices.py 4aac82f10951
configs/learning_gem5/part1/caches.py 4aac82f10951
src/mem/cache/Cache.py 4aac82f10951
src/mem/cache/base.hh 4aac82f10951
src/mem/cache/base.cc 4aac82f10951
src/mem/cache/tags/Tags.py 4aac82f10951
src/mem/cache/tags/base.hh 4aac82f10951
src/mem/cache/tags/base.cc 4aac82f10951
src/mem/cache/tags/base_set_assoc.hh 4aac82f10951
src/mem/cache/tags/fa_lru.hh 4aac82f10951
src/mem/cache/tags/fa_lru.cc 4aac82f10951

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
-------

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI
Andreas Hansson
2016-10-27 12:11:27 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8995
-----------------------------------------------------------

Ship it!


Thanks for getting this in shape.

- Andreas Hansson
Post by Sophiane SENNI
-----------------------------------------------------------
http://reviews.gem5.org/r/3502/
-----------------------------------------------------------
(Updated Oct. 27, 2016, 11:25 a.m.)
Review request for Default.
Repository: gem5
Description
-------
Changeset 11688:1a792798e845
---------------------------
mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
Diffs
-----
configs/common/Caches.py 4aac82f10951
configs/common/O3_ARM_v7a.py 4aac82f10951
configs/example/arm/devices.py 4aac82f10951
configs/learning_gem5/part1/caches.py 4aac82f10951
src/mem/cache/Cache.py 4aac82f10951
src/mem/cache/base.hh 4aac82f10951
src/mem/cache/base.cc 4aac82f10951
src/mem/cache/tags/Tags.py 4aac82f10951
src/mem/cache/tags/base.hh 4aac82f10951
src/mem/cache/tags/base.cc 4aac82f10951
src/mem/cache/tags/base_set_assoc.hh 4aac82f10951
src/mem/cache/tags/fa_lru.hh 4aac82f10951
src/mem/cache/tags/fa_lru.cc 4aac82f10951
Diff: http://reviews.gem5.org/r/3502/diff/
Testing
-------
Tested using --debug-flags=Cache
Thanks,
Sophiane SENNI
Sophiane SENNI
2016-11-17 09:20:30 UTC
Permalink
Post by Sophiane SENNI
Post by Andreas Hansson
thanks for getting this in shape
Hi all,

Could someone commit this patch, please?
Thanks


- Sophiane

