Waiting on the answer to this, thanks!
-
Same GPU, no changes besides the clocks and deactivated units, just as on a 9800 GTX.
There were only some optimizations to the original G80 GPU, which was used only on the 8800 GTX, 8800 Ultra and 8800 GTS (the 320 MB/640 MB versions, NOT the later 512 MB version). The desktop 8800 GT already has these optimizations since it uses a G92 GPU. Same thing for the 8800M GTX. The G94 GPU, which is used for the 9600 GT, is simply a halved G92.
Between the GPUs of an 8800 GT and a 9800 GTX there is no difference in the architecture, only a deactivated unit block, and thus no difference in the architecture of the shader units.
The whole thing is only PR. -
500 Core/1250 Shader/800 Memory (8800M GTS) vs 650 Core/1625 Shader/900 Memory (9600 GT)
The 8800M GTX has the same clocks as the 8800M GTS but 1/3 more units. Given the 9600 GT's higher clocks, the performance should be very comparable. -
Sure, there is a 30% difference... in the shader clocks... Sometimes it is more efficient to have a higher clock than more execution units.
But the 8800M cards were already "generation 9", or better said, they were actually the first cards based on the G92 GPU.
I believe the 9800M GT will be based on the G94b (the desktop 9600 GT shrunk to 55nm) and the clocks will decide if it is faster than an 8800M GTX or not.
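To put rough numbers on that clocks-vs-units tradeoff, here is a quick sketch of raw shader throughput (units x clock) using the figures quoted in this thread. It's a paper comparison only, since it ignores memory bandwidth, fill rate and everything else that matters in real games:

```python
# Raw shader throughput scales with units x shader clock.
# Figures are the ones quoted in this thread.
cards = {
    "8800M GTS": (64, 1250),   # (shader units, shader clock in MHz)
    "9600 GT":   (64, 1625),
    "8800M GTX": (96, 1250),
}

base_units, base_clock = cards["8800M GTS"]
for name, (units, clock) in cards.items():
    rel = (units * clock) / (base_units * base_clock)
    print(f"{name}: {rel:.2f}x the 8800M GTS")
# 9600 GT comes out at 1.30x, 8800M GTX at 1.50x: the mobile GTX's
# extra units outweigh the 9600 GT's 30% clock advantage, on paper.
```
-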
At Multicom.no the 8800M GTX costs 1670 NOK (300+dollars) more than the 9800M GT. Thank you all for helping me save money!
-
I didn't want to post here, guys, but I figured I should clear this up, since there are a lot of laptop junkies posting about hardware as if they know something, and I am upset that people are believing this junk. It's not true.
And I don't know why people want to believe the 8800M GTX is equaled by the 9800M GT.
Let's get something straight: I've been building PCs for 8 years and I'm 20 years old now. I started quite young and I'd like to say I'm quite the hardware aficionado. Hell, I've just built a GTX 280 SLI system with my qx9650 and Striker II Extreme, watercooled with about 600 dollars in parts, so I'd like to think I know what I'm talking about.
What people are saying is incorrect: the 9800M GT is nowhere near the 8800M GTX. I understand paladin44 is doing his job saying the differences are statistically insignificant; this is not true, Paladin, but mistakes happen, and I forgive you since you're only advertising what your suppliers tell you.
Let me break it down this way.
The "64 cores" referred to on the newly-loved 9800M GT means 64 shader units. These are in no way DIFFERENT from the 96 G92 shaders in the 8800M GTX; there are simply fewer of them.
The 8800M GTX's statistics are the following:
96 shader units (or cores)
500 MHz core clock
1250 MHz shader clock
1600 MHz (effective) memory clock
51.2 GB/s memory bandwidth
24.1 GTexels/s texture fill rate
256-bit memory bus
The 9800M GT statistics (per that blooper by XoticPC releasing the specs of the card in that GPU-Z shot, which we never saw again in the next batches of screenshots):
64 shader units (or cores)
500 MHz core clock
1250 MHz shader clock
800 MHz memory clock
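As a sanity check, the fill rate and bandwidth quoted for the GTX fall straight out of its listed clocks. A minimal sketch; the 48-TMU count is my inference from G92's usual 8-texture-units-per-16-shader-cluster layout, not something from the GPU-Z shot, and I'm assuming the GT's 800 MHz is the real GDDR3 clock (1600 MHz effective):

```python
# Deriving the quoted 8800M GTX figures from its listed specs.
# Assumption (mine): 96 SPs = 6 clusters x 16, with 8 TMUs per
# cluster, giving 48 TMUs.
tmus        = 48     # texture units (inferred, see above)
core_mhz    = 500    # core clock
mem_eff_mhz = 1600   # effective GDDR3 clock (800 MHz real)
bus_bits    = 256    # memory bus width

fill_rate = tmus * core_mhz / 1000             # 24.0 GTexels/s (quoted: 24.1)
bandwidth = bus_bits / 8 * mem_eff_mhz / 1000  # 51.2 GB/s (quoted: 51.2)
print(f"{fill_rate:.1f} GTexels/s, {bandwidth:.1f} GB/s")
```

By the same math, a 64-SP part at the same 500 MHz core would have 32 TMUs and roughly 16 GTexels/s.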
Now why on earth would these notebook sellers tell you that the "difference" is insignificant? Simple: if you use 3DMark to prove it, we know that a 9800M GT sits on a Montevina platform, which uses DDR3. Trust me when I say DDR3 gives about 4-5000 more 3DMarks depending on the CPU and the memory FSB.
When I had a DDR2 desktop rig and upgraded to a DDR3 rig, the only things that changed were my q6600 to a qx9650, and my DDR2-6400 at low timings (4-4-4-12, which matter to 3DMark) to DDR3-12800 at 7-7-7-20. I went from 14,500 to about 20,000. Think I'm kidding? I'm not.
I know a lot of people want a DDR3 laptop to say "hehe yea i got ddr3 a whole TWO GIGABYTES", and they're quite proud. I'd like to say good for you. My desktop runs 2x2 GB Dominators at 1800 with 8-8-8-24 timings, so I must say I have it pretty good.
The only reason I am here now debating a Montevina purchase is the return of my m15x to Alienware and the availability of the quad core. I know what DDR3 offers, and contrary to common belief, memory bandwidth does help in a significant number of games. If you're too lazy to google, then I can't help you. Just because a few games don't benefit from the extra bandwidth doesn't mean that no games will; a good example is looking at the benchmarks available at all the high-end hardware sites.
Why am I posting this? Because I don't want you to get misled. A 3DMark score of 10,000 with DDR3 on a 9800M GT, with an equivalently-clocked GPU, will not be equivalent to a Montevina with an 8800M GTX; I'd like to hedge my bets on a difference of about 20-30 percent.
The way I'd like to compare the two video cards is by looking at benchmarks of the 9600 GT and the 8800 GT. Yes, I know the 8800 GT has 112 shaders as opposed to the 8800M GTX's 96, but it should give you a healthy idea of how different these two video cards are.
Montevina is a great platform and I myself will be getting it; but don't let yourself believe you need DDR3, and with that compromise in mind, don't think that a 9800M GT on Montevina will offer better performance in GAMING than an 8800M GTX on Santa Rosa. You have to remember, even in games that actually benefit from DDR3, the Montevina 9800M GT will maybe, at BEST, tie the Santa Rosa 8800M GTX. Put them in a game that doesn't benefit from memory speed, say Oblivion, and you will see the 8800M GTX outmuscle the 9800M GT.
You guys should stop taking whatever one retailer or someone else tells you as gospel; sometimes we make mistakes or we're misinformed. There is a big difference between the GPUs, as there should be. Have a little look for yourself and try to see this, because I just want you guys to know what you're getting into.
All the best. -
Interesting that in this wall of text you don't have a single link to prove anything you say >.> I'm not saying you are wrong, but you shouldn't say someone else is misleading people and not offer any proof of it.
-
If you go to the Sager section of the forums, you will notice that Justin@XoticPC posted benchmarks of the 9800M GT scoring 9600+ in 3DMark06 @ 1280x1024 resolution.
-
Right, but 3DMark is affected by DDR3 performance, as I've stated, Dreidel.
Tell them to bench the 9800M GT in a DDR2 system to level the playing field and see the discrepancy. -
The difference you saw was from the processor upgrade, not the memory upgrade.
It's been stated in many places, even on this forum, that 3DMark is highly CPU-bound.
I have a q6600 + 4 GB of DDR2 + an 8800 GTS in my home rig.
My roommate has a q6600 + 4 GB of DDR3 + the same exact 8800 GTS.
His scores beat mine by around 50 points. -
DDR3 does jack for 3DMark; no idea what you are talking about with a 4k increase... My PC has a 2.7 GHz dual core, 4 GB DDR3 and an 8800 GT; vs my 5793 there is only a 1.5k point difference, which is down to the 0.2 difference in clocks and the GPU.
-
Here are the benchmarks for gaming in general, DDR3 vs DDR2, on the newer chipsets. They are desktops, but do not misunderstand the purpose of what I'm trying to say:
http://hardocp.com/article.html?art=MTUxMyw1LCxoZW50aHVzaWFzdA==
Psycroptik, you are correct: I was mistaken about 3DMark. I meant all SYNTHETIC benchmarks, such as Everest, SuperPi and whatnot. And when you say I got that much from a processor upgrade, what's the difference here? I went from an overclocked FSB at 1200 to a stock FSB at 1333, and from 8 MB of cache to 12.
The difference between Santa Rosa and Montevina chips is that the front side bus is greatly increased, from 800 to 1066, but this time also running 1:1 with the memory (2:1 if you count DDR3 being twice as fast, but who cares), which increases the score as well. You are right in saying that the memory speed doesn't affect it, but going from a 65nm q6600 to a 45nm qx9650 at stock doesn't produce that big a difference unless you're also changing other parameters of the system (chipset, memory) and thus affecting the CPU scores in synthetic benchmarks, because, as you know, running 1:1 significantly improves performance. Here, let me show you:
http://www.anandtech.com/mb/showdoc.aspx?i=3283&p=7
That shows roughly a 600-point increase from DDR3 alone; so if the score was 9000-something, subtract 600.
I'm almost certain that if they bench the DDR2 systems, the 8800M GTX will win by a bigger margin than you think. The specs prove it. -
Also, there's the fact that the 8800M GTX is considerably weaker than my 8800 GT.
-
There's no way simply switching to DDR3 adds 5000 3DMarks, especially if you consider the higher latency. Also, I'm assuming you're talking about system RAM, since GPU RAM is GDDR RAM.
All of that is frankly academic. What matters to me is FPS in games. If 3DMarks are even a half-decent way of predicting that, then who cares how the 9800M GT manages to be equal to the 8800M GTX, as long as it is. -
You know all that about computers and you still bought an m15x?
I call bluff -
Noel might have a point here. To put it simply: maybe whatever gains DDR3 is supposed to bring are being soaked up by the weakness of the 9800M GT, giving us a benchmark number close to the 8800M GTX's. If the cards were equal, then adding DDR3 should give the 9800M GT higher 3DMarks.
Think about it too: shouldn't we be getting higher benchmarks because of the faster processors? -
One of the DDR2 boards here does badly, the other is comparable to the DDR3. Nothing to go crazy over here.
Here they all do about the same, kind of going against what you said a little more now.
We're still doing the same; that EVGA nForce board with DDR2 doesn't do so well, though.
Still very comparable, even in Crysis.
I think you might have been reading this wrong and/or comparing the green bars with the blue... The green bars are all different CPUs. -
Err,
The 8800M GTX is not far off from your 8800 GT, champ:
http://hothardware.com/Articles/NVIDIA_GeForce_8800M_Preview/?page=2
As I see it, being 16 SPs short, plus downclocked on the shader, memory and core (it would run too hot in a notebook otherwise), and thus about 9.6 GTexels/s short in texture fill rate (the 8800 GT's 56 TMUs at 600 MHz give 33.6 GTexels/s vs the mobile part's 24.0), is okay with me.
What I was saying is that the difference between the 9800M GT and the 8800M GTX is not statistically insignificant. All signs point to the 8800M GTX being better; Vantage isn't fair to use just yet, not until both are benched on the same board and processor.
As far as I can see, the GTX is much faster, as it should be, and it's also more expensive. The 8800M GTX is not obsolete until the 9800M GTX is officially released; that's how NVIDIA works. -
I wouldn't be surprised if it is weaker, because the 9800M GT is so much cheaper than the 8800M GTX.
-
Also, this link doesn't show a 600-point difference. It might be the wrong one, or you're reading it wrong.
If you even read the first paragraph, it says...
"Futuremark's 3DMark06 benchmark scores depend solely on two subsystem evaluations: how powerful is your 3D graphics card(s) and just how good is your CPU at crunching numbers? In fact, memory bandwidth and access latencies have no noticeable effect on the final score. Of course, there are games that also show little benefit from improvements in the memory subsystem. Regardless, 3DMark06 is and always will be a synthetic benchmark and we put more stock in actual gaming performance results." -
-
Being completely legitimate here: if this is the case and the 9800M GT is more like the 8800M GTS (both share the 64 number), then what the heck is the 9800M GTS? Furthermore, if the estimates that the HD 3870 sits between the 8800M GTX and GTS are correct, it would turn out to be the best-performing card for this particular config with DDR3, and for 195 less. I'm a little skeptical about it all, though. Too bad we can't bench the system for a while (especially with the 3870 in there).
-
If you read my wall of text, Psyc, I said that in some games DDR3 has a clear advantage, and in some games it does NOT. I did say that in my official post. Some games you will see the difference in and some you won't. Vantage is not a fair comparison either, since it likes high memory bandwidth.
-
I read it, gave me a headache though.
-
NVIDIA's naming conventions give me a headache...
-
You're telling me.
I'm trying to find some fully buffered DIMM benchmarks of an equivalently clocked Xeon versus a DDR3 Core 2 counterpart to illustrate my point about 3DMark06, because it's not just a socket change, it's a chipset/socket/memory change.
I could understand if it were just two q6600s on two different motherboards and memory, since their socket and capability are the same. However, when a new chipset with a new socket and memory is introduced as a whole, without an alternative (you cannot get a DDR2 Montevina), there is more of a difference clock-for-clock than we think. We just need Chaz or someone to step up and show everyone this, because he has the means to benchmark a 9800M GT on the same testbed as an 8800M GTX. -
@ Noel -
You seem pretty knowledgeable about this stuff. If the gigaflops of the 9800M GT and the 8800M GTX are the same as reported by NVIDIA, shouldn't the cards perform the same? The 8800M GTX and 9800M GT are both rated at 360 gigaflops, and the 8800M GTS and the 9800M GTS are both rated at 240 gigaflops. I think it was noted earlier that the 9000 series is supposed to use shaders that are 40% more efficient than the previous generation (8000 series). Couldn't this account for the performance increase at the same number of stream processors?
Perhaps the 9800M GTS is using last-generation stream processors or something -- I don't know how, unless it's just an underclocked 9800M GT, it could be rated at only 2/3 the processing power of the 9800M GT.
I am not saying that DDR3 isn't playing a role in the 3DMark06 score, just offering an alternative possibility as to why the GT is getting similar marks to the 8800M GTX with half the SPs.
Can Justin or Paladin explain this gigaflops difference? It's vexing me and probably a lot of other people out there.
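For reference, NVIDIA's gigaflops ratings for G8x/G9x parts usually work out as SPs x shader clock x 3 FLOPs per clock (the dual-issue MADD + MUL convention; that factor is the commonly cited one, my assumption, not something from NVIDIA's mobile spec sheets):

```python
# Theoretical GFLOPS for G8x/G9x: SPs * shader clock (GHz) * 3 FLOPs/clock.
def rated_gflops(sps, shader_mhz):
    return sps * shader_mhz / 1000 * 3

print(rated_gflops(96, 1250))  # 360.0 -> the 8800M GTX / 9800M GT rating
print(rated_gflops(64, 1250))  # 240.0 -> the 8800M GTS / 9800M GTS rating
```

So 360 GFLOPS at a 1250 MHz shader clock implies 96 SPs, while a 64-SP part at the same clock comes out at 240. There is no shader-efficiency factor anywhere in that formula, so either the 64-SP GPU-Z reading or the 360 GFLOPS rating for the 9800M GT has to be off. -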
We already have benchmarks up from Justin @ XoticPC for the 9800M GT, and I can tell you that the scores are almost identical to the 8800M GTX's on the old DDR2 platform.
-
Not to mention the drivers for the 9800M GT are premature; newer drivers could contribute to a better benchmark.
-
I think the best way to solve this would be to have someone at Sager fire up Crysis on an NP8660 and crunch out some framerates for us =P
-
^^ agree... lol
-
^^ I agree TOO...................... -
Gigaflops is a measure of the amount of floating-point work a piece of hardware can handle (billions of floating-point operations per second).
Although both are rated at 360 gigaflops, that doesn't necessarily mean they have the same texture fill rate (which is more important); they likely do have similar memory bandwidth, though.
About your question on more efficient shaders: this is actually incorrect. The shaders being more "efficient", per se, is a bit of a lie; if we look at NVIDIA's new nomenclature, they've simply started calling the shaders "cores". You've also got it reversed: the lower-shader-count GPU, when the two compete against each other, will fall far short in shader-intensive games, e.g. Crysis. To see the analogy, look at an 8800 GT and compare it to a 9600 GT.
In general, what I expect to see when these two are pitted against each other is that as the amount of filtering increases (resolution, AA, etc.), the 8800M GTX will start to pull away. In some games the 9800M GT will tie, if the game happens to suit its architecture.
The GT200 does have a "more efficient" shader design, but don't expect to see that chip in a notebook anytime soon; it's the GTX 260 and 280 that you see on desktop shelves today. What NVIDIA did with the GT200 shaders is put 3 streaming multiprocessors in each Texture Processing Cluster, and there are now 10 TPCs as opposed to 8.
This gives a higher shader count of 240 (10 TPCs x 3 SMs x 8 SPs each), which isn't really more efficient, just more parallel.
So per TPC we have:
24 shader processing units (3 SMs x 8)
8 texture fetch units
Across the chip: 10 x 24 = 240 shaders and 10 x 8 = 80 texture fetch units, plus a 512-bit memory bus.
This all equals a huge and overpowered single GPU solution.
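To make that arithmetic concrete, here is the same breakdown in code; the shader clock is the reference GTX 280 figure (1296 MHz), which is my addition, not from the post above:

```python
# GT200 unit totals from the cluster layout described above.
tpcs        = 10   # texture processing clusters (up from 8 on G92)
sms_per_tpc = 3    # streaming multiprocessors per TPC (up from 2)
sps_per_sm  = 8    # shader processors per SM

shaders = tpcs * sms_per_tpc * sps_per_sm   # 240
tmus    = tpcs * 8                          # 80 texture fetch units
gflops  = shaders * 1296 / 1000 * 3         # ~933 GFLOPS, same 3-FLOP rating
print(shaders, tmus, round(gflops))
```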
Also, about your "last generation" vs "first generation" shaders: there is really no such thing, my friend. From the G80 architecture (the 8800 GTX and the 8800 GTS 640 and 320), the G92 was a die shrink with onboard HD decoding added. -
Please google unified driver architecture!
THERE IS NO SUCH THING AS PREMATURE DRIVERS, THEY'RE UNIFIED DRIVERS THAT SUPPORT PRESENT AND PAST NVIDIA GPUS HENCE THE PRAISE -
Noel,
Thanks for taking time to go through all of that.
I wish XoticPC and PowerNotebooks offered the 8800M GTX as an option at the very least... :-\
(Though I know it's not under their control) -
Texture fill rate is something I consider vital; it's basically how many texels the card can render to the screen per second. E.g., if you're playing a game like Crysis, a higher texture fill rate will usually translate into an ability to render the game at better framerates. Texture fill rate is becoming less important, though, since the introduction of shader cores, as they offload the stress; there was a good definition, courtesy of gpureview, that is not available anymore.
Eurocom offers the 8800M GTX on the 860TU; I may get it, but I'm not sure. I may go back to my original plan and just wait it out until I find a better-looking laptop. The Clevo is awfully heavy (nearly 8 pounds with battery) and the same size as an m15x, but a lot less good-looking. I may or may not get it, depending on my ability to force a quad in there. -
So, the 9800M GT is not close to the 8800M GTX because:
1) the new system uses DDR3
or
2) one card is actually slower than the other? -
One system is using a completely new interface with a new socket. This is more than just a change in RAM, or a comparison of, say, a q6600 on DDR3 vs DDR2. This Socket P platform is designed for one memory interface, DDR3. That allows Intel to change how the processor and memory communicate, since the speed is now established at a baseline of at least 1066 MHz (DDR3's lowest JEDEC-approved speed is 1066). ALSO, the 9800M GT is statistically slower than the 8800M GTX in every facet I've seen so far, so I'd like to know how people are deducing they're the same.
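For scale, here's the peak-bandwidth arithmetic for the two platforms' memory (standard transfer-rate x bus-width math; taking DDR2-667 as the typical Santa Rosa config is my assumption):

```python
# Peak per-channel bandwidth: transfers/s * 64-bit bus / 8 bits per byte.
def channel_gb_s(mt_per_s):
    return mt_per_s * 8 / 1000  # 8 bytes moved per transfer

print(f"DDR2-667  (typical Santa Rosa): {channel_gb_s(667):.1f} GB/s per channel")
print(f"DDR3-1066 (Montevina baseline): {channel_gb_s(1066):.1f} GB/s per channel")
# Dual channel doubles both: roughly 10.7 vs 17.1 GB/s peak.
```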
An X9000 benchmark on a 9800M GT isn't really fair either, since it's much faster (2.8 GHz I think, right, or 3.0? Nonetheless, compared to our beloved T9300, that's at least 300 MHz more per core). -
The 9800M GT will definitely not perform as well as the 8800M GTX in real-world applications (i.e. gaming). It will perform somewhere between the 8800M GTS and 8800M GTX, but probably closer to the GTS.
The 9800M GTX, on the other hand, will be quite a lot better than the 8800M GTX: 112 stream processors and 420 GFLOPS. -
Absolutely V_C, I'm with ya, man.
I'm just fed up with people taking some words as gospel; I'm trying to help everyone get educated on what a video card's primary attributes are. -
It all depends on how "watered down" the 9800M is when we get it.
-
Which card is more future-proof, though? I do game, but I don't care that much about running every single game at the highest resolution with 132432141324x antialiasing and other stuff; I just want a gfx card that will last me a few years and still be able to run most games I throw at it well.
-
I don't think they are selling the 8-series anymore here in the US, so I think that leaves you with one 9-series card or the other.
-
I realise that; I just want to know if I'm getting more or less for my money, since the prices are comparable.
-
I went with the 9800M GTX in my order (NP5976) at Xotic, but I would like to see some game benchmarks on the 9800M GT.
I wanted to be more future-proof. I hope I spent my money wisely. ???? Compute cores: 112, gigaflops: 420... -
I don't have time to wait until the early August ETA, so it's the 9800M GT for me.
-
Can someone answer my question in post #88?
-
One card is slower than the other.
-