LOL, davepermen indirectly citing the superiority of his SSDs again (not saying you are wrong, just funny).
Cache sizes ARE computation-related on a CPU, though.
It's why GPUs and CPUs can NEVER replace each other: the workloads of the two are fundamentally different. Core 2 increased performance not by adding computational resources like ALUs, but by improving memory-side features: caches, memory disambiguation, and prefetchers.
Sandy Bridge will focus mainly on cache and memory functions and will be the next big improvement. Media enhancements like AVX are really the icing on the cake, not the main feature.
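A rough way to see why those memory-side features matter in practice (a minimal C sketch; the array size, random indexing, and timing approach are illustrative assumptions, not anything from this thread): both loops below execute the same add instructions, yet the sequential pass that caches and prefetchers can stream runs far faster than the random pass that misses on nearly every access.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)  /* 16M ints = 64 MB, far larger than any CPU cache */

int main(void) {
    int *data = malloc(N * sizeof *data);
    int *idx  = malloc(N * sizeof *idx);
    for (int i = 0; i < N; i++) { data[i] = i; idx[i] = rand() % N; }

    long long sum = 0;
    clock_t t0 = clock();
    for (int i = 0; i < N; i++) sum += data[i];        /* sequential: prefetcher streams it */
    clock_t t1 = clock();
    for (int i = 0; i < N; i++) sum += data[idx[i]];   /* random: nearly every load misses cache */
    clock_t t2 = clock();

    printf("sequential: %.3fs  random: %.3fs  (sum=%lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    free(data); free(idx);
    return 0;
}
```

Same arithmetic both times; the entire gap between the two timings is the memory system, which is the point being argued above.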
-
davepermen Notebook Nobel Laureate
and no, caches are not computational workload. they are memory workload.
if i want to sum two registers 10 billion times, all i can do is call add reg0, reg1 10 billion times. no cache is involved in that at all, just a computational part of the cpu: the adder unit.
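in C, that pure-ALU loop looks something like this (a rough sketch; the iteration count and the final printf are just there to keep an optimizing compiler from deleting the loop, they're not part of the argument):

```c
#include <stdio.h>

int main(void) {
    unsigned long long a = 0, b = 1;
    /* ten billion adds: once a and b live in registers, the loop body
       never touches memory at all -- it is pure adder-unit work.
       note: an optimizing compiler may fold this loop away entirely;
       build with -O0 (or inspect the asm) to see the raw adds. */
    for (long long i = 0; i < 10000000000LL; i++)
        a += b;
    printf("%llu\n", a);   /* use the result so the loop isn't dead code */
    return 0;
}
```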
and yes, i'm nitpicking. but i want accuracy from someone called intel user -
thinkpad knows best Notebook Deity
-
davepermen Notebook Nobel Laureate
arm isn't intel. atom is. they want a bit of the growing market. -
-
thinkpad knows best Notebook Deity
I know ARM isn't Intel, for blank sakes, but the Atom still puts out too much heat to be inside a smartphone; at 11nm, maybe, it might fit. The GPU is just a high-performance processor that exclusively handles graphics-oriented operations. I wonder when we'll start having 128-bit CPUs.
-
In a smartphone, the CPU, memory controller, and I/O controller aren't the only big contributors to power consumption. The display, the type of interface, the operating system, the BIOS, and the other components on the motherboard also use a hefty amount of power. If you can reduce those significantly, you'll get that much closer to smartphones.
Moorestown won't just integrate the graphics and memory controller like Pineview and be done with it; the rest of the components I mentioned above will also be significantly improved.
First 2 Moorestown (smart)phones:
http://www.youtube.com/watch?v=5m79buEJQQY
http://www.youtube.com/watch?v=WfkzpdB97fg
The LG device is rumored to get 5 hours of browsing time on 3G with its 1850mAh battery. That battery is 50% larger than the one in the iPhone 3GS, but considering the bigger screen and much better performance, it's not that far off. 45nm and Moorestown are enough to make Atom relevant for smartphones; 32nm with Medfield will reach power parity. -
davepermen Notebook Nobel Laureate
only the moment you have to wait for memory data (and no other hyperthread has work ready to run) does memory start to be relevant.
that is, by default, quite often, yes. but it doesn't change the fact that numerical power on its own is 100% defined without any caches in mind, and can be measured as such.
but in most real-world apps, one has to consider the data part as well, not just the computing part. and there, memory bandwidth and latency get important, of course.
that's why gpus by now are sort of suuuperhyperthreaded (they call it differently). they mostly have tons of jobs pending on their data, and always have another one in the actual core to compute while the others wait for data. that way, they can hide the latencies in their main job (doing the very same thing billions of times, again and again) -
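A small C sketch of that latency-hiding idea (the array size and the four-way interleave are illustrative assumptions; real GPUs scale this trick to thousands of threads in hardware): chasing a single pointer chain pays the full memory latency on every load, while several independent chains let the CPU keep multiple cache misses in flight at once.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* 16M entries = 64 MB, well beyond any cache */

int main(void) {
    int *next = malloc(N * sizeof *next);
    for (int i = 0; i < N; i++) next[i] = i;
    /* Sattolo's algorithm: shuffle into one random cycle through all
       N slots, so every load depends on the load before it */
    for (int i = N - 1; i > 0; i--) {
        int j = rand() % i;
        int t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t t0 = clock();
    int p = 0;                                  /* one chain: every load */
    for (int i = 0; i < N; i++) p = next[p];    /* eats the full latency */
    clock_t t1 = clock();

    int a = 0, b = 1, c = 2, d = 3;             /* four independent chains: */
    for (int i = 0; i < N / 4; i++) {           /* four misses in flight at once */
        a = next[a]; b = next[b]; c = next[c]; d = next[d];
    }
    clock_t t2 = clock();

    printf("1 chain: %.3fs   4 chains, same total loads: %.3fs   [%d %d %d %d %d]\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, p, a, b, c, d);
    free(next);
    return 0;
}
```

Both versions issue the same number of loads; the interleaved one finishes much sooner because the latencies overlap, which is exactly the hiding described above.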
Meaker@Sager Company Representative
You HAVE to include more cores; it has several benefits:
1. Larger contact area for heatsinks
2. Having 1156 pins requires a certain area around the die to actually make the connections; that's the minimum die area you can have. If you only have one core, you have wasted space anyway.
Oh, and if memory were such a bottleneck, why does increasing the clock speed / decreasing the latencies have such a small effect on real-world apps? -
davepermen Notebook Nobel Laureate
determinable by the way performance scales about linearly from 1 to 2 to 3 cores, while the 4th core normally never gained the same. there, increasing memory performance helped (e.g. by reducing the amount of memory that needs to be accessed, etc.)
but nowadays, not so much anymore. still obviously dependent on the workload.
for big scenes one renders (as an example), programmers have to take much care to prepare the data in a way that all cores get fed with data. it's not "just working well"; programmers have to manage memory manually quite a bit. so there is a bottleneck, which so far programmers can work around by hand (something like the sketch below).
but that's work. work that could be spent better. -
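A minimal pthreads sketch of that manual data management (the thread count and contiguous chunking are assumptions for illustration, not anyone's production code): each thread walks its own contiguous slice of a big array, so every core streams through memory independently and stays prefetcher-friendly instead of stalling on shared data.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N ((size_t)1 << 24)   /* 16M floats = 64 MB */
#define THREADS 4

static float *data;

struct job { int id; double sum; };

static void *partial_sum(void *arg) {
    struct job *j = arg;
    /* each thread gets its own contiguous slice, so each core streams
       through its own region of memory with no contention */
    size_t chunk = N / THREADS;
    size_t lo = (size_t)j->id * chunk;
    size_t hi = (j->id == THREADS - 1) ? N : lo + chunk;
    double s = 0.0;
    for (size_t i = lo; i < hi; i++) s += data[i];
    j->sum = s;   /* written once at the end to avoid false sharing */
    return NULL;
}

int main(void) {
    data = malloc(N * sizeof *data);
    for (size_t i = 0; i < N; i++) data[i] = 1.0f;

    pthread_t tid[THREADS];
    struct job jobs[THREADS];
    for (int t = 0; t < THREADS; t++) {
        jobs[t].id = t;
        pthread_create(&tid[t], NULL, partial_sum, &jobs[t]);
    }
    double total = 0.0;
    for (int t = 0; t < THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += jobs[t].sum;
    }
    printf("total = %.0f\n", total);
    free(data);
    return 0;
}
```

Build with `-lpthread`. The partitioning is the "work" being talked about: nothing here happens automatically, the programmer decides how the data is split so the cores stay fed.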
BTW, the Athlon 64 gained 20% from its lower-latency integrated memory controller alone on PC apps, and way more on server apps.
Pentium M gained 30-40% over Tualatin with no better FPU or more ALUs. -
+1 -
thinkpad knows best Notebook Deity
Yeah, they improve everything over a period of, let's say, 10 years: minuscule performance improvements with each release (with a few exceptions), but then you look back and there's a 40-50% performance gain in processors. It's also marketing influence. Most of us would have been fine with advanced Core 2s for the next 5 years, and then Intel could have released the i series, improving performance by double-digit percentages; but they released it now because otherwise they wouldn't make as much money right now, instead of releasing the technology when it would have been a true need among power users. It's sort of like GTA 4: Rockstar probably should have released the PC version in August '09, or even December '09, and pushed the console version up a little too.
CPU isn't improving fast enough
Discussion in 'Hardware Components and Aftermarket Upgrades' started by jsteng, Jan 26, 2010.