GRID ran buttery smooth on my old m6864fx with a T5750, 4 GB RAM and a Radeon Mobility 2600/512. At 1280x800 everything was maxed except FSAA (off) and crowds set to low.
Now, with a T5550 (slightly slower CPU), the same RAM and a 9500M GS, the game runs badly unless I lower everything to medium at 1280x800. I also notice Mass Effect isn't as smooth.
Anyone else having issues with the 9500M GS in GRID?
Both machines score within 50 points of each other in 3DMark06, around 3700.
-
3700? That's just a bit low.
Is your GPU DDR2? -
Yep, DDR2. The 2600 was DDR3.
-
Which is why the 2600 was better; DDR2 to DDR3 makes a nice difference. The 9500M GS normally scores in the high 3000s to low 4000s in 3DMark06, depending on your clocks.
-
Remember that 3DMark06 isn't a very accurate representation of real in-game performance. Like HaloGod2007 said, the DDR3 in the HD 2600 gives a good boost over the DDR2 in the 9500M GS.
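As a rough back-of-the-envelope check on the DDR2 vs DDR3 point, here is a minimal Python sketch of theoretical memory bandwidth. The 400 MHz and 700 MHz memory clocks are assumed example values (actual clocks vary by laptop model), and both cards are taken as 128-bit.

# Theoretical memory bandwidth = effective memory clock * bus width in bytes.
# The clock values below are assumed/typical examples, not guaranteed specs.

def bandwidth_gb_s(mem_clock_mhz, bus_width_bits, data_rate=2):
    # data_rate=2 because both DDR2 and GDDR3 transfer twice per clock
    effective_mt_s = mem_clock_mhz * data_rate
    return effective_mt_s * (bus_width_bits / 8) / 1000  # MB/s -> GB/s

cards = {
    "9500M GS (DDR2, ~400 MHz, 128-bit)": bandwidth_gb_s(400, 128),
    "Mobility HD 2600 (GDDR3, ~700 MHz, 128-bit)": bandwidth_gb_s(700, 128),
}

for name, bw in cards.items():
    print(f"{name}: ~{bw:.1f} GB/s")

With those assumed clocks the GDDR3 card ends up with roughly 75% more bandwidth (about 22.4 vs 12.8 GB/s), which lines up with the "nice difference" mentioned above.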
-
Even though the 2600 is better, I can't see it making the difference between max and medium; check your driver settings, as the 2600 DDR3 is no more than 15% more powerful compared to the 9500M GS.
-
Sorry to hear this. I am also an 8600M GT owner (same chip as the 9500M GS), except mine is GDDR3. I decided to trade in my laptop with an HD 2600 for an 8600M GT laptop (ASUS G1S) and the results are disappointing. I score higher in 3DMark, but in games I can't run them maxed like I used to. I played GRID with everything on high and got about 30-40 FPS; now I'm getting 20-30, overclocked the same way I overclocked the HD 2600. I figured Crysis would run better as well, but it's the same story: I get 5 FPS less, which isn't terrible, but it hurts when you read for days about how much better the 8600M GT is (in 3DMark) and then buy it and realize it's mostly lies or bad comparisons. I wonder why the HD 2600 performs so well in newer games; it has fewer ROPs and less fill rate, so I'm out of ideas. I really don't care about the technical talk anymore, because I listened and it just isn't true no matter how you slice it. I want my 40 FPS in GRID more than I want 1000 more points in a benchmark program NVIDIA has obviously tailored its drivers to excel in.
PS: Sorry to hear the news. I think someone who owns both needs to post an HD 2600 vs 8600M GT thread and simply use game comparisons, not benchmarks. -
I think there are two variants of the Mobility Radeon HD 2600. The other one is the XT version, which is more powerful.
-
Yes, that is correct, but a little off topic. He is talking about the HD 2600 GDDR3.
I think the point, now that I've owned both, is:
8600M GT DDR2 > HD 2600 GDDR3 (in 3DMark)
HD 2600 GDDR3 > 8600M GT GDDR3 (in games)
So if you care to see the truth when it comes to games:
HD 2600 GDDR3 > 8600M GT GDDR3 > 8600M GT DDR2 > HD 2600 DDR2
I never owned a DDR2 card so I can't really tell anyone which of those is better, but I have owned the HD 2600 GDDR3 and I now own an 8600M GT GDDR3. Like I say, the 3DMark scores are definitely higher, but the in-game performance does not come close to matching the score difference; I get fewer FPS in most games (Crysis, BioShock, GRID, CoD4) when using high shaders. I'm extremely pissed off that I traded laptops and ended up with slower game performance. I swapped a few drivers, but they only make a 1-2 FPS difference on average... yet they make a 1000-point difference in 3DMark05/06? I read Wiley C's reviews and noticed he claims to get 30 FPS in Assassin's Creed. That isn't right: his screenshot is looking half into the sky and half into the city, yet he claimed it was the most graphically intensive scene he could capture, and he got just 30 FPS, while in-game it averages 18-28 FPS at max settings. So all that's happened is people are reporting their highest achieved framerates and calling it their average.
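On that "highest framerate reported as the average" point, here is a small Python sketch with made-up frame times showing how far a peak reading can sit above the true average.

# Toy example: the same run reported as "peak FPS" vs a true average.
# The frame times below are invented for illustration only.

frame_times_ms = [33, 36, 40, 55, 30, 48, 62, 35, 38, 50]  # one slow-ish scene

instant_fps = [1000 / t for t in frame_times_ms]
peak_fps = max(instant_fps)                                  # what screenshots tend to show
avg_fps = len(frame_times_ms) * 1000 / sum(frame_times_ms)   # what you actually feel

print(f"peak: {peak_fps:.0f} fps, average: {avg_fps:.0f} fps")
# With these numbers the peak is ~33 fps while the average is ~23 fps, so a
# single "look at the sky" capture can read 30+ while gameplay averages low 20s.
-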
Wiley, yes, I played most new games on it. Remember, I own the same card as you, but you get wicked 3DMark06 scores; we get about the same in 05, however.
I would have to go from memory. I posted some results in a notepad file on a jump drive somewhere; I'll take a look for it and then post them here. I will only mention the ones I know for certain.
Off hand, if you want to know the difference on the Source engine, I remember:
CSS stress test
1024x768
no AA
trilinear
color correction enabled
reflect everything on
high shadows (well, high everything)
just turn off AA and use trilinear instead of anisotropic
I got 181.xx FPS average.
And Crysis, off hand: DX10, medium spec, 1280x800, roughly 25 FPS.
BRB, I'll find some old info I wrote down and then we can see the difference; it would be beneficial to let others know whether their favorite game excels on the HD 2600 or the 8600M GT.
PS: do a benchmark of the CSS stress test with those settings and see what you get; yours should be even higher than mine (130 FPS). -
I don't think there's a 2600 XT in laptops other than the HP HDX, is there?
-
Well, I posted a big comment but it got wiped, so anyway, here's the gist of what I wanted to say.
I looked here:
http://en.wikipedia.org/wiki/Comparison_of_ATI_graphics_processing_units#Mobility_Radeon_Series
and here:
http://en.wikipedia.org/wiki/GeForce_8_Series#GeForce_8600M_Series
Nothing there tells me why the ATI HD 2600 plays newer games smoother than the 8600M GT.
The only thing I can conclude on paper is that they have nearly equal bandwidth and roughly equal power, with the exception of the unified shader count. Now, I know there's been a lot of talk about the efficiency of the NVIDIA part and its extra ROPs, but I couldn't find a professional article backing that claim via Google (myriad searches).
I think the in-game performance is better on the HD 2600 for one main reason: since both cards can pump out roughly the same amount of data, when a game pushes each card to its limits it boils down to which can calculate faster, and the 88 extra unified shader pipes in the HD 2600 would essentially allow more simultaneous calculations at the same rate. That seems more logical than making up a theory about how the pipelines in NVIDIA's cards are "more efficient", because saying that implies the NVIDIA pipelines calculate one-for-one but at a faster rate. Basically, it's a chip, not a dynamic thinker: the card has physical bandwidth limits and physical speed limits set by the core design.
OK, I give up, I don't know why. Maybe someone with an electrical engineering background or silicon design experience could explain the tech specs in layman's terms. All I know is that unified shaders were pioneered by ATI, yet NVIDIA fans claim the unified shaders are more efficient on the NVIDIA cards. I can't find that info anywhere; all I found is that they handle work differently: the NVIDIA card handles one op at a time like the ATI card, but can vary a greater range of operations. It still makes no sense to me why people say it's more efficient. Maybe I'm the kind of guy who reads encyclopedias and needs facts, not dreams.
Anyway, if you are good with tech-spec info, please comment on why the NVIDIA 8600M GT GDDR3 sounds better yet the HD 2600 GDDR3 performs (clearly) better, and why the gap isn't as small as people claim (in fact I've read the gap was edged to the 8600). Bah, I'm ticked off. Thanks.
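To put rough numbers on the "similar bandwidth, very different shader count" observation, here's a back-of-the-envelope Python sketch. The 32 vs 120 shader counts come from the spec pages linked above; the clock values and the 2-FLOPs-per-shader-per-clock (MADD) figure are assumptions for illustration, and peak figures say nothing about how well either chip keeps its shaders busy in a real game.

# Back-of-the-envelope peak programmable-shader throughput.
# Shader counts (32 vs 120) are from the spec pages linked above; the clock
# values and the 2 FLOPs/shader/clock (MADD) figure are assumptions.

def peak_gflops(shaders, shader_clock_mhz, flops_per_clock=2):
    return shaders * shader_clock_mhz * flops_per_clock / 1000

cards = {
    "8600M GT / 9500M GS (32 SPs @ ~950 MHz shader clock)": peak_gflops(32, 950),
    "Mobility HD 2600 (120 SPs @ ~500 MHz core clock)": peak_gflops(120, 500),
}

for name, gflops in cards.items():
    print(f"{name}: ~{gflops:.0f} GFLOPS peak")

On these assumed numbers the ATI part has the higher raw peak (~120 vs ~61 GFLOPS), while how much of that peak each design actually extracts (VLIW packing on the ATI side, the higher shader clock and extra MUL on the NVIDIA side) is exactly what the spec sheets don't show.
-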
I don't have the CSS benchmark tool.
-
Well, I'm not trying to flood the forum, but I finally found a very interesting article on ATI vs NVIDIA.
Wiley, what do you mean you don't have the program? The stress test is built into Counter-Strike: Source. Or did you mean you don't own the game?
OK, here goes. This is the article, and it answers most of my earlier question. This should really be a separate thread, but these arguments never get serious, just diluted with lies and opinions.
http://www.cdrinfo.com/sections/news/Details.aspx?NewsId=14458
Very interesting if you read further down; here's the part I scanned:
ATI and the Unified-Shader architecture
Graphics are generally made up by vertices and pixels. Both should be shaded by the graphics chip, with two different mechanisms. These are the vertex pipelines and the pixel pipelines. The operation of these pipelines is controlled by both hardware and software. Regarding the software layer, the possibility to use a common or unified layer to control both vertex and pixel pipelines is feasible. A unified shader architecture is currently a debate in the graphics industry, whether or not a Unified Shader pipeline is better.
NVIDIA chose to independently control the Vertex and Pixel shaders, with its G70 GPU, a reformed and improved version of the NV40. By expanding the number of shaders as well as the efficiency of the pipelines, NVIDIA has managed to increase the overall shader performance by 50%.
On the other hand, ATI has already revealed that it will adopt the Unified-Shader architecture in the development of the Xbox 360 GPU. According to the Unified-Shader concept, no distinct dedicated vertex and pixel shader engines are used. Instead these are replaced by an array of Unified-Shader engines able to execute both kinds of instructions.
ATI believes that such an approach offers the best performance and also better allocation of the GPU resources, compared to NVIDIA's dual shaders. NVIDIA's answer to these claims is that the Unified-Shader technology could also be a future option for the company, as long as it is assured that its operation will be smooth, easily controlled and predictable.
So, in short: per that article, NVIDIA's gains came from expanding its separate vertex and pixel pipelines, good for about 50 percent better shader performance over its own previous chip; ATI was first to go unified, like I said earlier (showing I'm not dumb); and a unified shader pipeline means the vertex and pixel calculations run on one shared array of engines rather than on separate dedicated ones. So my point in my previous post was roughly correct; I feel smart today. Usually I find out I'm wrong, hence the constant reading. But yeah, the 120 pipes is basically 120 vs 32 shader engines, with NVIDIA claiming to be 50% more efficient (relative to itself), and ATI was actually the forerunner of the concept, hence the newer ATI cards containing more pipelines. Well, that's that.
PS: after reading that, my presumption about different pipeline counts but similar bandwidth seems correct. Anyone beg to differ?
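As a toy illustration of the load-balancing idea in that article (and only that: the quoted piece describes the older G70 generation, while both the 8600M GT and the HD 2600 are in fact unified-shader chips), here's a small Python sketch comparing a unified pool against fixed dedicated vertex/pixel engines. The unit counts and workload sizes are invented.

# Toy model of the article's point: a unified pool load-balances, while
# dedicated engines can leave one side idle. All numbers are invented.
import math

def unified_ticks(vertex_work, pixel_work, pool_size):
    # every unit can run either kind of work
    return math.ceil((vertex_work + pixel_work) / pool_size)

def dedicated_ticks(vertex_work, pixel_work, vertex_units, pixel_units):
    # each side can only run its own kind of work
    return max(math.ceil(vertex_work / vertex_units),
               math.ceil(pixel_work / pixel_units))

# Same total hardware (16 units) split two ways, on a pixel-heavy frame:
vertex_work, pixel_work = 20, 300
print("unified  :", unified_ticks(vertex_work, pixel_work, 16), "ticks")
print("dedicated:", dedicated_ticks(vertex_work, pixel_work, 4, 12), "ticks")
# The unified pool finishes in 20 ticks; the fixed 4+12 split takes 25 because
# the pixel engines bottleneck while the vertex engines sit mostly idle.
-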
mullenbooger Former New York Giant
Eazy, what 3DMark06 score do you get for your 8600M GT?
-
I get 4600-5200 (unstable).
Also, doesn't that question defeat the purpose of coming into this thread? I think the point in here is:
The 8600M GT GDDR3, 9500M GS and 8600M GT DDR2 look great in 3DMark05/06, but in reality they run games slower than the HD 2600 GDDR3.
The purpose now, I guess, is for people to share their information and figure out why the HD 2600 performs better in games but worse in 3DMark. -
masterchef341 The guy from The Notebook
The generally accepted results as of today are:
8600M GT GDDR3 > HD 2600 Pro GDDR3 > 8600M GT DDR2 > HD 2600 Pro DDR2
but the GDDR3 cards should be really close either way, as should the DDR2 cards.
Now, if you have the HD 2600 XT GDDR3, I don't know; that might have higher clocks or something.
As far as 3DMark not being a good representation of real-world performance: get used to it. -
mullenbooger Former New York Giant
I only ask because for some reason I thought I remembered that you got really low 3DMark scores for your 8600. I get ~4000 at 1280x1050, no overclocking.
-
I have no problems with the 9500M GS; all my games run on high or max with 30+ FPS.
-
To answer a few questions:
I don't get really low 3DMark scores, I get the common results. I can't get uncommon results like 6000 3DMark06 points.
masterchef341:
Why do you always say the same thing every time? Do you just copy and paste your opinion into every topic to solidify your views?
I own an 8600M GT GDDR3, and like the NBR guy who made this thread, I get lower framerates than on my last laptop (HD 2600 GDDR3); he owned the same card as well.
The point should be that many people read these forums searching for info, and I personally made the mistake of switching over because the performance touted by fanboys is exaggerated.
And now for an opinion that really isn't an opinion: the HD 2600 does perform better than the 8600M GT in Crysis, CoD4, BioShock, GRID, Mass Effect, and potentially many others, though I don't own the entire library of PC games. So telling someone who has owned both that 8600M GT GDDR3 > HD 2600 GDDR3 means next to nothing when his eyes and FRAPS record much, much smoother framerates; a 5-10 FPS difference is not small, and it's not in favor of the 8600M GT. I'm ticked off from reading the posts of others and jumping to the conclusion that the extra 200 bucks for the ASUS G1S would be worth it based on the framerates and 3DMark scores floating around here.
I suggest reading the thread before replying to it. -
Well, to all you losers saying that the 9500M GS sucks: let's see the 8600M GT GDDR3 or that stupid ATI POS get 5962 3DMarks (06) completely stable.
I run Crysis @ 1440x900 w/ shaders and shadows High, and everything else at Very High - Perfectly playable - Even runs at realistic speed, so I would say around 25-35 FPS (visual guess).
The only problem is heat, and my extremely high OCes get limited. I'll post my values when I get back on my notebook...
Anyone figure out a way to get fan control on this thing yet? This is still my top priority, as I'm sure my OC will go much higher, since dumbass ASUS set their fan boost threshold at the same temp as the diode downclock threshold (97C). I tried flashing the modified BIOS I'm using (Kalyway/CPU alias fixed) with a new nested VGA BIOS module I edited with Nibitor, but it hasn't worked so far. NO FAN CONTROL!!! WHY?!! WHY CAN'T WE JUST RUN OUR MACHINES THE WAY WE WANT?!! -
I spoke too soon.
I did a clean reinstall and I'm running the same drivers as before, but I'm getting 3DMark06 scores over a thousand points less than before, at the same settings. I can't figure out, for the life of me, what is causing the huge reduction. Does anyone have any suggestions on what I should check, that could be affecting my scores?
I noticed I was scoring low at first with the newer driver, so I tried an old one I had recorded results for; here's the difference.
BEFORE REINSTALL: 3DMark06 (1024x768), driver 177.92, score 5949, overclocked 655/1524/488 (maximum overclock), nvlddmkm error (recovers without restart)
AFTER REINSTALL: 3DMark06 (1024x768), driver 177.92, score 4968, overclocked 655/1524/488 (maximum overclock), nvlddmkm error (recovers without restart)
The only software change this time around is that I'm running McAfee instead of the Norton trial, but I terminated all or most of its processes before running the benchmark and got similar results. Anyone else have any suggestions? I also tried reflashing the BIOS and reinstalling the chipset drivers and GPU driver (with Driver Sweeper in Safe Mode).
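A quick sanity check on the size of that drop, using the two scores above, as a minimal Python sketch:

# Quantify the score regression between the two runs recorded above.
before, after = 5949, 4968
drop = before - after
print(f"drop: {drop} points ({100 * drop / before:.1f}% lower)")  # 981 points, ~16.5%

Roughly a 16.5% drop, which is well outside normal run-to-run variation, so something systematic changed with the reinstall.
-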
Dox's 181.20 drivers with XP:
at 1280x800 I get 43xx at stock clocks. -
GRID also tends to run better on ATi hardware, from many benchmarks that I've seen.
-
But yeah, some games run better on ATi hardware, like the NFS series, while some play better on NVIDIA hardware, like Lost Planet.