Really? I'm doing an i5 dual vs i7 comparison with the GT 650m right now with my W110ER and the difference in fps between i5 and i7 is like 50%, mid 30's with i5 vs 50's with i7 on BF3 @ high settings @ 720p. I'll do a video running at 720p and at 1080p with both dual core and quad core to show the live difference. It's there and it's real.
What do you mean, "weren't rendering video"? It's FPS, frames per second rendered. The GPU requires the CPU to "set up" each frame; it "feeds" the GPU, so to speak. A slower CPU will greatly limit GPU performance. There are four game benchmarks. What games would you consider, then, to make it "fair"?
I'm going to be doing a comparison between i5 dual core and i7 quad core with 680m soon, but from my 650m testing I can tell you in most newer games the dual core and/or slower CPU will bottleneck. I can easily just limit the clock of my i7 and show the difference. Some games it will affect, most newer big titles it will, and others it won't... so much.
I say this from an unbiased perspective. I have zero motive either way. Just speaking from experience. I've benched the hell out of the last AMD Llano generation, and the Trinity doesn't hold a candle to AMD's own Llano mobile chips:
DV6z & AMD Llano Optimization Guide
DV6z Cooling Mod
AMD E-450 vs Intel i3-2367
AMD Llano A8-3510MX vs Intel Sandy Bridge i5-2520M
AMD Radeon 6750m (7690m)
AMD A8 Llano 6620G IGP
AMD E-350 / Radeon 6310 IGP
-
-
The one "Hi Res" 1680x1050 with DX11 specifically says "No Render Score (CPU test)"
There are probably a handful of games you can find that are CPU limited, but to what extent with mobile GPUs is the question, and I have seen no real comparisons, other than results within 10% of each other in a direct comparison of the game the naysayer chose. -
-
There are better comparison reviews for desktops that show the actual differences side by side, and they are significant when using crazy desktop GPUs that never bottleneck. -
It's a 5870 desktop GPU in that comparison. The 7970m is much faster than even that desktop 5870. -
I would like it if people didn't bring out examples from Anandtech; seriously, that site is starting to get worse than even Fudzilla. Tom's Hardware is probably the best overall (no bias), with HardOCP being the king of GPU benchmarks.
-
A fresh HP ENVY 6 Sleekbook review, if anyone is interested.
-
The A10-4600M is not a powerhouse of a CPU, but give it a fair comparison. If you were using a 720p screen with an HD7970, I guess that would be absolutely absurd, but considering the GX60 has a native 1080p display, it's meant to be used at 1080p, so you must compare it in this space. -
For a DDR3 128-bit GPU, the CPU won't be the limiting factor. But with a 256-bit GDDR5 GPU, you will be CPU limited well before you are GPU limited with the A10.
Case in point: Just Cause 2 is primarily a GPU limited game. I just ran the three Just Cause 2 benchmarks with my i7-3610QM at 2.0 GHz and 3.1 GHz at 1080p and high detail using my GTX 680m.
Results were:
Dark Tower: 84.45 FPS @ 2.0GHz vs 86.36 @ 3.1GHz (~2.3% gain)
Desert Sunrise: 100.10 FPS @ 2.0GHz vs 110.19 @ 3.1GHz (~10.1% gain)
Concrete Jungle: 45.56 FPS @ 2.0GHz vs 54.90 @ 3.1GHz (~20.5% gain)
That's an ~11% average FPS loss just from changing clock speed alone, not to mention architecture, FPU limits, etc. And the A10-4600m is probably somewhat close to an i5-3210m in general CPU performance. I would have to say even a primarily GPU limited game would lose 15-20% FPS due to the CPU, and if there are any CPU demands beyond feeding the GPU, like in games such as BF3, Dirt 2/3, Metro 2033, GTA IV, etc., it would suffer even more.
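For reference, the per-benchmark gains and the ~11% average can be reproduced with a quick script using the same FPS numbers posted above (just arithmetic, nothing measured beyond those runs):

```python
# Just Cause 2 avg FPS at 2.0 GHz vs 3.1 GHz (i7-3610QM + GTX 680m, 1080p high)
results = {
    "Dark Tower":      (84.45, 86.36),
    "Desert Sunrise":  (100.10, 110.19),
    "Concrete Jungle": (45.56, 54.90),
}

gains = []
for name, (low, high) in results.items():
    pct = (high - low) / low * 100  # % FPS gained from the higher clock
    gains.append(pct)
    print(f"{name}: +{pct:.1f}%")

print(f"average: +{sum(gains) / len(gains):.1f}%")  # ~11% average
```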
I want AMD to succeed more than anyone. I would hate to see them fall behind. But I do think the GPU choice is a little overpowered for what the CPU can support, and it also adds too much extra cost to the machine. In any case, neither one of us has the machine, nor are there any good benchmark figures out there to compare with. So we will just have to wait and see.
Personally, I just want an A10-4600m in a 13.3" with 1600x900 screen with no discrete GPU, backlit keyboard, and 60WHr battery, less than 1" thick and under 4lbs. Seems achievable yet nobody even comes close. -
That's good information on the Intel platform, but none of it is conclusive on how the AMD processor is affected in those games. It's better to stop speculating as if it's fact and just wait for an actual head to head comparison.
-
I'm not speculating, you're making wild speculations. I'm making educated assumptions based on all my experience with computers. I've been in the industry for 30+ years. I think I can make a valid assessment.
-
-
Based on multiple sources, we can say that the AMD Trinity architecture has somewhere around ~75% of the per-clock performance of the Intel IVB architecture. The math at this point begins to get a little less accurate, but a general conclusion can still be made. The i7-3610QM (2GHz) in HT's testing runs at ~87% of the clock speed that the A10-4600M runs, assuming it is not in the B0 state, which it rarely achieves. This means that on average, when the i7-3610QM is at 2.0GHz, per core, the 4600M will perform at a roughly 12% deficit to the i7. And that is not even factoring in that the Intel has 4 complete cores rather than shared FPUs. This would put the 4600M more than 20% behind the i7 @ 2.0GHz, and in reality it is most likely much farther behind than that.
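That arithmetic can be sketched out in a few lines. The 2.3 GHz base clock for the A10-4600M and the 75% per-clock figure are the assumptions stated above (not measurements), and this simple model lands near the quoted per-core deficit:

```python
# Rough per-core performance model using the assumptions stated above:
#   - Trinity has ~75% of Ivy Bridge's per-clock (IPC) performance (assumption)
#   - i7-3610QM held at 2.0 GHz, A10-4600M at its 2.3 GHz base clock
#     (so the i7 runs at ~87% of the A10's clock: 2.0 / 2.3)

ipc_ratio = 0.75                   # Trinity IPC relative to Ivy Bridge
a10_clock, i7_clock = 2.3, 2.0     # GHz

a10_score = ipc_ratio * a10_clock  # relative per-core throughput
i7_score = 1.0 * i7_clock

deficit = 1 - a10_score / i7_score
print(f"A10 per-core deficit vs i7 @ 2.0 GHz: {deficit:.0%}")
# ~14% with these round numbers, before accounting for shared FPUs
```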
It's not a wild assumption, it's math and common sense. -
Karamazovmm Overthinking? Always!
-
-
-
Edit: In response to what you said to Karamazovmm, go read the thread he linked; it's got the data that you claim he doesn't have. It is you who refuses to believe others' data. -
I've read the entire GX60 thread; it has a bench showing Shogun 2 running neck and neck with the i7/680m that he refuses to believe. That thread shows no CPU limiting, much to the contrary. You can make generalizations about how a processor works, but that doesn't mean it behaves the same way under a specific load.
-
Fat Dragon Just this guy, you know?
See Karamazovmm's post up above - he found the GX60 thread that I couldn't find. I could be wrong about the nature of the game benchmarks in that thread, but they at least appear to indicate that the GX60 performs at disadvantages of 10-45% versus an Intel platform, depending on the game.
I did the research a couple months ago when I was building my desktop - in the end it simply didn't make sense to go Trinity in a box with a dedicated GPU, as much as I would have preferred to support AMD. Of course, in that case there's also the TDP problem - mobile Trinity is competitive with Ivy Bridge's TDP, but decent desktop chips from the FX, Phenom, and Trinity lines all run at a 125W TDP, compared to 77W for an i5 that crushes them in performance while still offering four full cores. Desktop and laptop aren't entirely comparable, but there's enough parallel between mobile and desktop Trinity gaming to be relevant.
The fact with Trinity and Llano before it is that they are simply inferior to Intel's architecture on all but a few highly multithreaded tasks, and people whose usage pattern involves CPU-intensive tasks (and that includes high-level gaming with a top-tier GPU) will suffer from using a Trinity processor. On the other hand, average users or budget gamers will regularly benefit from an APU solution in their less-intensive computing, whether it's better graphics performance or simply getting the same overall responsiveness in their everyday computing as an Intel processor would offer, but for less money. -
If you read the entire thread, you'd see a link to a review with many benchmarks in it. Karamazovmm then took those results and compared them to those he found in other reviews, and these are some of the results.
Max Payne, ultra avg fps
GX60 - 39
AW (i7) - 61.2
660m - 19
670m - 23
Crysis 2, ultra avg fps
GX60 - 49
AW (i7)- 54.5
660m - 22
670m - 28.7
Mafia 2, ultra avg fps
GX60 - 52
AW (i7) - 88.8
660m - 42
670m - 55
Borderlands 2, ultra avg fps
GX60 - 50
Clevo (i7) - 62.9
660m - 41.3
670m - 46
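Putting those gaps in percentage terms (a quick sketch comparing the GX60 against the i7 machine in each game, using the same averages quoted above):

```python
# avg FPS at ultra: (GX60 with A10, i7 machine), numbers from the reviews above
fps = {
    "Max Payne":     (39.0, 61.2),
    "Crysis 2":      (49.0, 54.5),
    "Mafia 2":       (52.0, 88.8),
    "Borderlands 2": (50.0, 62.9),
}

for game, (a10, i7) in fps.items():
    print(f"{game}: GX60 trails the i7 by {1 - a10 / i7:.0%}")
# deficits of roughly 10-41% depending on the game
```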
Some are close, some aren't. Regardless, they all show that the A10 caused a bottleneck in every game. It's just simple fact. -
AFAIC, I would hate having a premium GPU like the 7970M and knowing that it's not even running at full capacity. Especially since the A10-4600M is already the top Trinity CPU, meaning you can't upgrade for better performance... unless the Trinity 2.0 refresh turns out to be backward-compatible. But that's a risky bet.
-
What they don't tell you is that they most likely use CL11 RAM, which actually hurts Trinity performance; it needs at least CL9. On a Russian site I saw it listed as CL11 in the GX60, anyway. And with driver updates since those reviews, things have improved even more. So if benches get better with better graphics drivers, then it's not only CPU limited. There were actually Enduro issues until recent driver updates.
It's a whole lot easier to just wait for a mature head to head comparison to get the truth. And since it's still better than what a 670M provides with Intel, I don't see why anyone would be negative about the combo. -
I'm not being negative about the combo, I agree it's pretty cool that MSI actually committed to such a combo. However, the fact is that the A10 will bottleneck the 7970M, to varying degrees.
Also, you are mistaking IGP performance with CPU performance. Trinity CPU performance is almost completely unaffected by a difference of CL11 to CL9 RAM timings. The IGP would show improvement, but even then, it wouldn't be that large of a difference. (<1%) Also, while the driver updates are nice, they only help the dGPU performance, which is already being limited by the CPU in those tests, so you would see very minimal gains. -
Karamazovmm Overthinking? Always!
I avoided putting numbers from the Clevo machines due to the underutilization issue that all of those results would have. I did a basic comparison with what I could manage from the reviews that listed the settings used for the benchies. Some of them didn't, and most of the reviews had settings that varied from what I could get in the form of data; basically those guys tried to put the machine in the best light possible.
Also, this is why that benchie of S2TW is wrong.
Here is another thing: Anandtech is not gospel and shouldn't be treated as such. Anand, as everyone can notice, likes Mac, and that alone makes him instantly lose credibility with some (which is childish). Jarred likes Nvidia much more than AMD, and another example is how you can tell Kristian is hiding something when he writes about SSDs; the list goes on.
But anyway, there is no denying that the AMD APUs will bottleneck higher end GPUs. The 7970m gives around the performance of a desktop 7850, and the 5870 is a long way behind that GPU in terms of performance; by that alone you can tell how much worse the AMD APUs are for computational tasks.
We aren't arguing that the APUs are so bad they're unusable; they are good enough for a lot of people. For example, I'm taking a little vacation on an island, just using an AMD C-50 netbook. It's visibly slower than my i7 3840qm, visibly slower than my i5 2415m, and also visibly slower than the P7350 that I have. And I can really and easily tell the difference in speed; all systems are using SSDs. Too bad the other AMD system I have here is just too old, it's packing a Turion CPU.
One great thing would be for tommyboy to buy the GX60 and give us some benchies; I would be extremely happy to have numbers. -
That's just it, the driver updates are helping FPS. I'm not stupid, I've been here for years. That completely refutes the theories about how much the A10 is limiting.
I'm not mistaking anything. And how does RAM performance not affect CPU performance? Maybe you're thinking of Intel CPUs? It does, and it appeared to help a 3DMark11 Physics score go up more than 10% while using the HD7970.
One great thing to do is stop speculating on how terrible it is until you have the hard facts. Just accept that it's not as bad as you're saying when running at 1080p, and wait for the real benches to prove it right or wrong. Just wait and quit blabbing about how much it limits, when you don't know how much; it might be 2%, it might be 10%. Wait and see. -
Well, the GX60 is a pretty nice gaming rig right now, but the thing is, it has a big chance of becoming obsolete sooner than its Intel counterparts as games become more and more CPU intensive. That makes it a not so good deal for me. Just my opinion.
-
Here is some good testing at 1080p medium and 1080p extreme:
Source
The 7970M could not do better with an i7 either, because at extreme settings you are GPU limited, not CPU limited... -
Fat Dragon Just this guy, you know?
-
Karamazovmm Overthinking? Always!
I really don't think the GX60 suffers from underutilization. AMD last year didn't have any problems with their bacon + Llano notebooks. However, I can't refute that because I still don't have enough numbers, but just looking at the difference from the i5 vs the i7, the performance is there in the right spot. As for the difference in drivers, you're not even taking into account that the i7 + 7970m combination was reviewed several months ago.
We have given numbers from several games; he comes up with nothing. Atom ant posted the only benchie that could be valid, in a 6 page owners lounge where nobody pulls out any numbers.
But anyway, Sleeping Dogs heavily favors AMD, like Borderlands 2 heavily favors Nvidia. When an SLI setup of 680m has trouble passing a single 7970m, there is something really wrong. And it's a well known fact that the 680m is indeed a little bit better than the 7970m, but for me it's not 300 bucks better. -
If someone needs an HP Envy 6-1110us with the A8-4555M, then Amazon has it in their lightning deal box today from 3:40 PM PST. The current price is $580 and I assume it will go down to $500 or so... No tax, free shipping...
-
It would just be easier to stop trying to explain it with all the "common sense" voodoo math and wait for the real benches. That's all I am saying; quit speculating either way. -
Karamazovmm Overthinking? Always!
The i7+7970m and the A10+7970m get the same games at the same settings. If you had actually read anything we posted, you would have seen that already. There is another review that I saw; there is a problem there though, I can't use its 3 data points because they are not comparable to the others (different settings).
The only point of my analysis you can refute is the use of the Clevo machine, which adds the underutilization issue.
And stop with this nonsense about drivers; the i7+7970m was tested using drivers several months older, and the new ones are getting much better performance. -
-
First gaming test of ASUS U38N:
ASUS VivoBook U38N gaming performance: Skyrim, CoD 3 MW, Dirt 3, Crysis 2, Fifa 13, NFS Most Wanted - YouTube
It seems there IS an A10 version, and it's the version under test in the video.
Please note that the installed memory is 6 GB (2+4 ??? maybe), so there is a high chance we are not looking at dual channel performance... (which is a shame of course...)
I was expecting it to compete with the 620M installed in the UX32VD but I was - unfortunately - too optimistic. Nevertheless, quite good performance... Now let's see what ASUS asks for this guy... (or it'll be the UX32VD for me...) -
Karamazovmm Overthinking? Always!
You don't know how drivers work. -
-
Karamazovmm Overthinking? Always!
That still doesn't change the fact that it's CPU limited. That also doesn't change the fact that at the price it's selling for, it's a good buy. -
-
Fat Dragon Just this guy, you know?
-
-
There is obviously some kind of limiting factor not related to the APU; those numbers are nowhere near those of an A8-4500m (and the A10-4655 should be slightly faster). My best guess is some kind of driver issue or a power saving feature turned on.
-
See here, zero effect on performance with a dedicated GPU using a single RAM chip or fast or slow CAS: http://forum.notebookreview.com/gam...amd-radeon-hd-6750m-benchmarking-results.html
-
-
Whether it's Trinity, Llano, Sandy Bridge, or Ivy Bridge, it doesn't matter; they would all have to wait the same. -
Your numbers don't represent the difference between CL9 and CL11 on Trinity platforms. Again, you're using some form of extrapolation and interpolation while ignoring the clock cycle latency of the CPU vs the RAM. There is always a trade-off between latency and bandwidth, and Trinity should be latency sensitive past CL9. It's something you'll have to see to believe. It seems you benched it yourself with a 5% benefit on Llano, which is significant. Now, is Trinity even more sensitive?
-
"Trinity should be latency sensitive past CL9."
Data please?
It's been well proven that RAM latency has next to no effect on CPU performance, especially now that we are using such fast DDR3 memory. The 5% performance loss under Llano with a dGPU was when in single channel, which literally cuts bandwidth in half, not latency. CPUs would be far more affected by memory speed and bandwidth than latency, especially when the latency difference between CL11 and CL9 is quite small. Now, you might be able to argue that a set of CL5 RAM vs. CL11 RAM would have a positive effect on IGP performance, but even at that low a latency, the CPU would be largely unaffected.
AnandTech - Sandy Bridge Memory Scaling: Choosing the Best DDR3
You can see here that Sandy Bridge, a much faster architecture, is barely affected by memory speed or timings. Your argument has no basis in fact. -
And yet you have no real proof to the contrary, just conjecture by comparison. The fact is, regardless of frequency, latency can matter. There is some math you can do to see what is best for the CPU.
I have seen benches where even DDR3-1333 gets slightly better results than DDR3-1600 because of its lower absolute latency. Maybe not in all cases, but I would bet that several would show a noticeable improvement.
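For what it's worth, the latency math being argued about here is simple: absolute first-word latency in nanoseconds is the CL number divided by the I/O clock, which is why DDR3-1333 CL9 lands at roughly the same real latency as DDR3-1600 CL11. A sketch of that arithmetic (standard timing math, not a benchmark):

```python
def cas_latency_ns(transfer_rate_mt_s: float, cas_cycles: int) -> float:
    """First-word latency: CAS cycles divided by the I/O clock (half the transfer rate)."""
    io_clock_mhz = transfer_rate_mt_s / 2
    return cas_cycles / io_clock_mhz * 1000  # cycles / MHz -> ns

print(cas_latency_ns(1333, 9))   # DDR3-1333 CL9  -> ~13.5 ns
print(cas_latency_ns(1600, 11))  # DDR3-1600 CL11 -> 13.75 ns
print(cas_latency_ns(1600, 9))   # DDR3-1600 CL9  -> 11.25 ns
```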
Go to the GX60 thread; Meeker saw a great (~10%) improvement in the 3DMark11 Physics score while using the HD7970, by using CL8 modded RAM vs CL11 stock. -
No recent architecture has shown meaningful increases in performance due to decreased RAM latency, and that is a fact, with research to prove it. Trinity is no different. It has the same basic structure as Bulldozer, and it is well documented that increased memory speed did not help Bulldozer much, so by common logic Trinity should follow a similar trend. You say I have no proof to the contrary when I have shown multiple relevant benchmarks that back up my point, and simple logic is all that is required to reach a conclusion.
-
..*cough* ..nice sentiment and all.
But the APU works on main RAM, and its performance depends partly on the transfer speed of the memory bus. So increasing the RAM speed/bus speed affects graphics performance. A lot. Crossfire or not, on Llano you could easily double the graphics performance compared to the "reference" stock design by increasing the memory and bus speed. Reducing memory latency probably affects it less than the speed does, but because of the way it's set up, it would make sense that it might have an impact. Shouldn't be difficult to test either.
The Ultimate AMD Trinity Notebook List
Discussion in 'Hardware Components and Aftermarket Upgrades' started by davidricardo86, Jul 10, 2012.