Title edited (day after thread creation); reason: the thread topic deviated as the thread went on lol.
Hi NBR,
Some of you might have read about how I returned an Acer laptop with a GT420M GPU earlier this month, and bought another one with an HD5650 GPU, expecting a little better performance and less heat.
This is weird. I received the laptop today (specs are in my sig), took some hours to install everything etc., and booted up 3DMark06 just now.
LOW FRAMERATES! I checked ATI GPU Clock Tool and made sure the laptop is using the 5650 at its default clock, but I'm consistently getting less than half the framerate in all GPU tests compared to the stock GT420M laptop I had last month... Even after boosting the core/mem clocks of the 5650 up to 650/900, the framerates are still only ~55% to 60% of what I got with the GT420M laptop.
Did I install something wrong? My current Catalyst version is 10.11, AFAIK... this is not what I was expecting after all this hassle.
Thanks!
-
Meaker@Sager Company Representative
I told you this: 3DMark06 is somewhat CPU-limited, so your slower CPU could be impacting performance.
Also, Nvidia's cards are stronger in SM2/DX9, whereas ATI's do better in SM3 and DX11.
It is a little low, however; care to post a compare? I can at least compare it to my 4670 results. -
Hey Meaker,
I tried updating the driver in Device Manager, and apparently it worked. Now the 5650 at stock speeds performs on par with the stock GT420M. Somehow the Catalyst installation did not install a new driver for the 5650? That was weird.
I'll try to conduct some more tests; right now I don't have any statistics to compare, only some framerate impressions.
edit: now the driver version for the 5650 is from April 2010. Before, it was a November driver, but the card was recognized as 5600/5700 series rather than 5650. (I'm guessing this is a driver problem.) -
Yeah, we could use some figures before we start to discuss anything.
EDIT: Well I see problem solved itself -
-
Sure it is- I'm running 10.11 right now.
You don't want to run 3DMark 11 though- you just don't wanna know
Have you tried this (64bit Cats 10.11)? -
What I installed before was 10-11_mobility_vista_win7_32-64_ccc.exe, maybe it's different and thus incompatible. -
-
There's no need to uninstall old drivers unless something is seriously messed up.
EDIT: Bronsky is right- if you have switchable graphics that's the only way- otherwise it's gonna claim it's unsupported. -
So I should keep the preexisting Catalyst driver and more or less overwrite it with the new one? Or is there some other merit to forcing the HD5650 in BIOS?
Thanks!
edit:
@downloads,
I'm now installing the driver you provided, and I went into custom installation, and indeed the ATI display driver is not the newest (among the ATI Stream SDK v2 runtime and WMV9 playback components), so this is definitely going the right way. -
You need to force the HD5650 in the BIOS so that the installer sees no switchable graphics. If you just choose the HD5650 while the other card is still present, the installer will not work.
At least that's the story with ATI + integrated Intel; I assume it's the same for ATI + integrated ATI. -
I see, I didn't read your message before installing, so now after reboot it shows 5600/5700 in Device Manager again.
Lemme try the forcing then! Brb.
Edit:
Back; it didn't do anything (the procedure of switching to discrete in the Phoenix BIOS and then installing, etc.). I guess on an ATI+AMD platform it doesn't matter. Device Manager still shows "5600/5700 series" for the 5650. -
It should show 5600/5700- the driver version number is what's important.
Cats 10.11 are 8.791.0.0.
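If you want to double-check outside Device Manager, here's a minimal sketch that reads the installed display driver version via WMI. It assumes the third-party Python "wmi" package (pip install wmi); any WMI client would do the same query.
```python
# Minimal sketch: query the display adapter's driver version via WMI.
# Assumes the third-party "wmi" package; Catalyst 10.11 should report
# driver version 8.791.0.0 per the note above.
import wmi

for gpu in wmi.WMI().Win32_VideoController():
    print(gpu.Name, "->", gpu.DriverVersion)
# e.g. "ATI Mobility Radeon HD 5600/5700 Series -> 8.791.0.0"
```
-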
So the April driver detects it as exactly what it is, yet the Oct/Nov driver doesn't? Yeesh. -
It's done on purpose: the Radeon HD5650 and Radeon HD5730 are the very same chip/card, just the latter has a 650MHz core. So there's no point pretending these are different cards, hence the "5600/5700 series" name.
-
Riggght right, I guess that's the logic behind it ha ha.
@Meaker,
Here's an initial round of 3DMark06 results (including temperature info etc)
I OC'd the GPU to 700/1000 core/mem clocks.
3DMark Score 8682.0 3DMarks
SM2.0 Score 3265.0
HDR/SM3.0 Score 3967.0
CPU Score 2837.0
HWMonitor results:
Max CPU temp: 65 C
(wooow, the i5 460M I had in the previous Acer spiked over 85 C in 3dmark06 tests)
Max GPU temp: 69 C
(about same temp as OC'd GT420M after the test) -
Weird, so looking at the 3DMark06 scores in the 4820TG club thread, it seems that my CPU is scoring higher than a lot of the CPUs, but the SM2.0/3.0 scores are consistently and drastically lower. What might this indicate?
edit: also, at stock clocks, I have the same SM2.0/3.0 scores as the 3820TG / 4820TGs, if that means anything. -
No way a 5650 is slower than a GT420M when the latter is slower than a GT330M in 3dmark06.
- My old i5-450m + GT330M stock = 7200 no AA/AF
- Current i3-380m + GT420M stock = 6800 no AA/AF, 6400 with 4xAA / 8x optimized AF (btw the performance hit is so low it's amazing)
- GT420M @ 435M = 7600
-
My AMD N930 + 5650 stock = 7427... so far.
It seems the highest stable overclock I can get on it (no freezing in intensive games) is ~725 core / 1000 mem, and that yields a score of about 8800.
Although I'm finding it suspicious that the card performs poorly in Assassin's Creed on max. Back when I had the GT420M laptop, large city views got the framerate down to ~20 to 24 fps. Now, even OC'd to the limit, fps drops to 10~12 (but 40s and 50s fps when I'm not directly looking at far distant views). Optimization problem? Conflict? Am I missing some DirectX component? CPU bottleneck? Gaargh.
edit: also, my old i5 460M + GT420M (OC'd to 675/900) got over 8600 in 3DMark06, and was stable too. So it was not far off from this at all, which surprises me. Why are those lucky s in the 3820TG or 4820TG getting over 9000 scores EASILY (at much lower overclocks than non-Timeline Acers with the 5650, regardless of AMD or Intel CPU)? -
Cheers -
For example, downloads's benchmark here:
http://forum.notebookreview.com/acer/468091-acer-aspire-5740g-owners-lounge.html#post6024467
His 5740G has the 5650 overclocked to 670/900 and only scored 8365 (that was Cat 10.3; probably a bit better now, but it shouldn't be significantly boosted by 10.11). It has an i5 430M.
Mine performed similarly with the GPU overclocked to 670/900 (~8400), and if I OC it to 725/1000 like in my sig, it gets to 8800.
NOW, look here:
Hendrickson's timelinex 3820TG, also with i5 430M + 5650 (same as download's 5740G):
http://forum.notebookreview.com/ace...chmarks-tweaks-mods-upgrades.html#post6307438
He got 9115 when OC'd to 680/1065!!!
And this guy's 4820TG:
http://forum.notebookreview.com/ace...melinex-4820tg-owners-lounge.html#post6458735
With an i3 350M, which is even more weaksauce (note how its CPU score in 3DMark is 500 below mine; I get 2.8k), but @ 725/1090 (which I did successfully benchmark, getting ~8850) it got 9004!
The major difference I see between the TimelineX scores and other laptops' scores is that those guys have unusually high SM3.0 scores with the 5650 overclocked to the same level, about 300 more. Same with their SM2.0 scores.
Sorry for the long post, it just seems very intriguing. -
3DMark heavily favors VRAM frequency; you get about 300 points for every 100MHz OC. That's why people who can get up to 1100MHz or so can get above 9K easily. This performance gain is not reflected in games, though.
You have a very different CPU, so regardless of your CPU score you can't be sure how it affects performance.
Your score is more or less OK.
Apart from that, 3DMark06 is based on DX9.0b (PS2.0) and some DX9.0c (PS3.0), so at this point it doesn't really matter.
Download 3DMark 11 and use it to benchmark; regardless of how you score, it's going to be closer to reality than a video-memory-dependent DirectX 9 benchmark from the past.
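To put rough numbers on that rule of thumb, here's a quick back-of-the-envelope sketch; the linear model is just the rule of thumb itself, not how 3DMark06 actually computes its score.
```python
# Back-of-the-envelope check of the "~300 points per 100 MHz memory OC"
# rule of thumb, assuming a purely linear relation (an assumption).
POINTS_PER_100MHZ = 300

def estimated_score(base_score, base_mem_mhz, new_mem_mhz):
    """Linear estimate of a 3DMark06 score after a video memory OC."""
    return base_score + (new_mem_mhz - base_mem_mhz) / 100 * POINTS_PER_100MHZ

# jerg got ~8800 at 1000 MHz memory; at the ~1090 MHz the TimelineX cards
# reach, the rule of thumb predicts roughly the ~9K scores they post:
print(estimated_score(8800, 1000, 1090))  # -> 9070.0
```
-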
Yep I read about 3dmark11 yesterday, will probably dl later. What's the general ballpark of scores that peeps here are getting with Acer i3/5 + 5650?
Edit:
but yeah, I agree my CPU is quite different. I did some intense gaming last night with the GPU at a high stable overclock, and HWMonitor reports a CPU Tmax of 67 C; that's just absurdly low lol. -
I just did the 3DMark11 test...it was really letterboxed lol, wow laggy as hell. Score is here, also in my sig.
Downloads did you get a similar score? -
Yeah- I got P1210 on 670/900. Not bad.
-
I notice a big framerate decrease in the physics test compared to Intel CPU 3DMark11 results though, so the AMD CPU here is definitely lacking.
I'm really hoping some form of CPU overclocking can be achieved; it's only running at 67 C max... it feels so underused compared to the GPU, which gets to ~75 C overclocked in demanding games. -
That's provided the temp reading is real, because I saw an AMD Phenom II X3 N830 2.1 GHz reach 95°C after an hour of Prime95 in an Acer Aspire 5551G.
Use RealTemp to verify that you get a proper reading. -
Eh it doesn't support the N930 processor.
edit:
also, there doesn't seem to be ANY option present that could potentially allow overclocking the CPU here.... setFSB doesn't even open (it says chipset error), and all other CPU overclocking tools are too old to work.
Back to topic though, is there anything else that could report realistic CPU temperature? -
Oh boy I'm really confused now...been testing the laptop in several games (Assassin's Creed, Crysis, Crysis Warhead, Need for Speed Hot Pursuit, Two Worlds II) with fraps.
And the framerates at the same settings all seem to be lower than on my previous i5 460M + GT420M laptop. In some games this is really bad, e.g. Need for Speed, where I'm only getting a little over half the framerate I used to get with the old laptop.
I returned the old laptop because I wanted something with a bit more horsepower behind it. Now I'm wondering if I made a very bad choice. -
Please help me figure out how I might overclock the CPU; I was able to determine that it was completely bottlenecking the framerates in games:
thread -
My AMD N920 has never gone over 73C in Prime95, no matter what mode I have it testing on.
-
Karamazovmm Overthinking? Always!
@jerg, you know what? You should return the laptop or sell it. It's clear that you are not satisfied with your purchase.
The AMD CPU is weak as hell; its performance is comparable to the Core 2 Duo line. With just a quick look at NotebookCheck you can see that the combination of a fairly good CPU (the i7 720QM) with a lower-performance card (4650M) can achieve better results than your combination of CPU and GPU.
Synthetic benchmarks serve no purpose and are indicative of nothing. Some are clearly affected by the CPU, others are affected by the HDD, and most are balanced to be poster boys for one GPU company or the other.
The gaming results are what matters, and those are giving you such a headache; just return it. -
My suggestion is to check the drivers, as well as the speed the PCI-E bus is running at, using CPU-Z.
There is no way the AMD CPU's slightly lower performance can cause that difference; there is a bug somewhere. -
What drivers should I check? I've reinstalled the GPU drivers to death and gone through various Catalyst versions several times already. -
Is the CPU throttling? Acers can be dumb in that manner - might be worth looking into the Definitive Guide.
-
-
Try RMClock - it might be able to at least read your system's speed and throttle.
Also, your system at stock clocks is actually a bit faster than someone else's benchmark of a similar/same machine. -
As for the benchmark difference, it's hard to know for sure without comparing the individual SM3/SM2/CPU marks. Also, mine has 6GB rather than 4GB RAM, which could have made a difference.
I really doubt the CPU is throttling, though, as AMD OverDrive can display instantaneous clocks/temps/loads for each of the 4 cores. I checked it against several windowed intensive games, and all the cores run at 1995 MHz the whole time the games run, and they all *seem* to be used, AFAIK (at least in Assassin's Creed, Crysis, Crysis Warhead, Two Worlds II, and Need for Speed, which are all the games I've installed for this specific purpose).
So this rules out CPU throttling?
Also not PCI-E throttling.
The CPU doesn't bottleneck (I still don't know)? Since none of the cores gets pushed to 100%.
It doesn't seem to be a GPU driver problem either...
wut.
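For reference, the kind of per-core check described above looks roughly like this in code; a minimal sketch using the third-party psutil library (an assumption on my part; AMD OverDrive is the tool actually used here).
```python
# Sketch: sample per-core load and clock once a second while a game runs.
# If loads stay well below 100% and clocks stay at the rated 1995 MHz,
# simple throttling is unlikely. Assumes the third-party psutil package.
import psutil

for _ in range(10):  # sample for roughly 10 seconds
    loads = psutil.cpu_percent(interval=1, percpu=True)
    freqs = psutil.cpu_freq(percpu=True)  # may return a single entry on some systems
    for core, load in enumerate(loads):
        mhz = freqs[core].current if core < len(freqs) else freqs[0].current
        print(f"core {core}: {load:5.1f}% load @ {mhz:.0f} MHz")
    print("-" * 32)
```
-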
Clock speed != throttling.
For Intel at least, the throttle speed is different from the core clock; your core clock could still read 2 GHz, but the effective speed could be throttled to 1 GHz or lower.
When my system originally did it, the core speed was 1.83 GHz, but the effective speed dropped to 833 MHz or lower when the throttling kicked in.
EDIT: Example here - red line is core clock, purple line is throttle speed.
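To illustrate the duty-cycle math with the figures above (the ~45.5% duty cycle is an assumed value, chosen only to reproduce the 1.83 GHz to 833 MHz example):
```python
# Sketch of duty-cycle throttling (clock modulation): the reported core
# clock stays at the nominal value, but the CPU only executes a fraction
# of its cycles, so the effective speed is much lower.
def effective_speed_mhz(core_clock_mhz, duty_cycle):
    """Effective speed when the CPU runs only duty_cycle of its cycles."""
    return core_clock_mhz * duty_cycle

# Nominal 1.83 GHz core clock at an assumed ~45.5% duty cycle:
print(effective_speed_mhz(1833, 0.455))  # ~834 MHz effective, while the
                                         # core clock still reads 1.83 GHz
```
-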
-
Meaker@Sager Company Representative
Your scores do not suggest that is happening.
Did you post that compare link?
Here is my 4670 best result:
http://3dmark.com/3dm06/15051267
That was at 780/1040 or thereabouts. -
Ok I got RMClock to work (it was a 64-bit issue, solved it). How might I go about testing for throttling?
edit: never mind, it's outdated and doesn't recognize my CPU beyond its current load % (which is identical to Task Manager).
Hey Meaker,
The link does not seem to work? -
Meaker@Sager Company Representative
Try the link again.
-
Could you do it with 1280x768 instead? My laptop doesn't support 1280x1024.
Anyway here's my 1280x768 result.
edit: and your 3DMark06 version is different (yours is 1.10, mine is 1.20)...
edit: I can see your CPU is quite a bit better lol, and this is an old benchmark; newer benchmarks (like 3DMark11) will probably put even more distance between the CPUs.
Anyway, getting off-topic there.
I've been thinking, maybe there is a reason why the CPU never gets hot in games? (Throttling?) Perhaps it's not because the AMD has a better thermal design? I need some functional software to show it, though! -
Meaker@Sager Company Representative
Mine hangs around 65C in Prime95 blend.
My 1024x768 result can be found here:
Result
I think my CPU is at 2ghz in that one.
You can look at my sig to see what I have on the way -
-
Hi, personally I don't know why anyone would use old 'benchmarking' software such as 3DMark06; in the real world of gaming it is more about frame rates, not overall numbers...
Chris -
Need for Speed - Hot Pursuit
Same track/race,
benchmarked via FRAPS 60 second min/max/avg fps log.
CPU @ 2.00 GHz
avg FPS: 30.8
CPU @ 1.80 GHz
avg FPS: 26.0
CPU @ 1.60 GHz
avg FPS: 20.5
In all tests the 5650 GPU is overclocked to 700/1000 MHz.
In none of the tests does any core hit 100% usage; it's consistently around 80% on core 1 and 50% on cores 2, 3, and 4.
Anyone care to decipher this?
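Putting the log into numbers, a small sketch over the averages above (nothing assumed beyond the posted data):
```python
# If the game were GPU-bound, average FPS would barely move with CPU clock;
# here it falls at least proportionally with the clock, the signature of a
# CPU bottleneck. Numbers are the FRAPS averages from this post.
results = {2.00: 30.8, 1.80: 26.0, 1.60: 20.5}  # CPU GHz -> avg FPS

for ghz, fps in sorted(results.items(), reverse=True):
    print(f"{ghz:.2f} GHz: {fps:.1f} fps ({fps / ghz:.1f} fps per GHz)")
# 2.00 GHz: 30.8 fps (15.4 fps per GHz)
# 1.80 GHz: 26.0 fps (14.4 fps per GHz)
# 1.60 GHz: 20.5 fps (12.8 fps per GHz)
```
-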
I am a big AMD fan, but the truth is they badly need Turbo Core in their mobile chips. Core clock speed is more important than just about anything. I don't know how many benchmarks I've seen where a 3GHz Core 2 Duo stomps a 2.4GHz Core 2 Quad in gaming FPS. A Core i5 is going to beat the AMD clock for clock on applications that are only dual-threaded, which is the majority, and then factor in Turbo Boost / Hyper-Threading and the lead just gets that much worse.
Very few games benefit from a quad-core CPU; games love core clock speed and cache. AMD chips have no L3 cache, not a lot of L2 cache, and, except for the N640, poor clock speeds. AMD does have an X920 (?) quad-core 2.3GHz mobile chip with an unlocked multiplier; it would be the answer we need, but I can't find it for sale in America. -
Yep, I returned the laptop. Gonna buy a 5820TG when I get refunded.