as easy as pie.
-
I hope it is easy for mobile drivers to be modded for the 4870s, as the drivers are the only thing holding me back from grabbing the W90 at the moment. I had a 4870 X2 in my desktop for a few days and then became frustrated with the drivers and with the layout of the ATI vs. NVidia control panel, so I dumped it and grabbed the GTX 260s instead. When it comes to NVidia mobile, their cards are so overpriced it's just stupid, so this time I think I'm going to give ATI a whirl and hope I have better luck than I did in the past.
-
Also, the Mobility 3870s do in fact have the full number of stream processors of their desktop counterparts. The only problem is that, compared to how NVidia counts SPs, it's actually fewer, which is part of why, on the desktop side of things, an 8800 GT outperforms a 3870 most of the time (320/5 = 64 five-wide SPs for the 3870 vs. 112 scalar SPs for the 8800 GT).
-
How does the calculation of stream processors work anyways? I've always wondered about that... I'm too lazy to read the entire wiki on it right now so could someone water it down?
-
You mean to compare the numbers between Nvidia and ATi? Divide ATi's number by five, and you have Nvidia's equivalent.
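As a quick sketch of that rule of thumb: ATI's design groups its stream processors into 5-wide units, so dividing the advertised count by five gives a rough NVidia-comparable number. The helper name below is made up for illustration; the 320 and 112 figures come from the earlier post.

```python
def ati_to_nvidia_equivalent(ati_sp_count: int) -> int:
    """Rule of thumb from the posts above: ATI counts each 5-wide unit
    as 5 SPs, so divide by 5 for an NVidia-comparable unit count."""
    return ati_sp_count // 5

# HD 3870: 320 advertised SPs -> 64 comparable units (vs. 112 on the 8800 GT)
print(ati_to_nvidia_equivalent(320))
# HD 4870: 800 advertised SPs -> 160 comparable units
print(ati_to_nvidia_equivalent(800))
```

This is only a first-order comparison, of course; clock speeds and per-unit capability still differ between the two architectures.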
-
I'll agree on the shader stuff, but I can't agree with that 8x to 16x theory. It's been tried and tested back when I had my Gateway, and there is quite a big difference in performance when cutting the lane speed in half. I ran a lot of tests on that. I can't do it with the D901C because it won't let me drop my link speed. It stays locked at 16x, even on battery, with one card installed.
-
-
Correct.
I should have said some, because it wasn't with everything. It wasn't until the link was dropped to 1x that it took a serious hit across the board.
Are you able to change your link speed, ichime? -
I know this is off topic, but it's slightly relevant because it illustrates the kind of business practices NVidia has fallen to:
Via Fudzilla:
Tomorrow, on the opening day of CeBIT 2009, Nvidia will formalize the name change of its Geforce 9800GTX+ performance segment card to become Geforce GTS 250 in order to integrate this GPU with its new product naming scheme.
What has just been revealed, however, is the company's strategic marketing plan to revitalize the flow of consumerism in its performance segment. The fact is, a major portion of Nvidia's success in such a situation lies with the objectivity of its product reviewers. Positive reviews will go a long way in influencing the general consensus of public opinion, and therefore the company has decided to "improve," or should we say manipulate its assets.
According to Hardware.fr, the following information details Nvidia's guidelines to board partners regarding the GTS 250 launch:
March 3rd Reviews go live of GTS 250 1GB fast boards (738/1100)
March 10th GTS 250 1GB and 512MB fast boards (738/1100) available for sale.
March 17th GTS 250 1GB slow boards (738/1000) available for sale.
NOTE: The older slow 1GB (1000MHz memory) boards should not go on sale till after March 17th. The only exceptions are if partners can overclock the memory on these to hit 1100MHz.
Simply put, Nvidia is sending out higher clocked GTS 250 cards to reviewers within the first week of launch, and is then sending out slower GTS 250 cards for the majority of sales from retailers like Newegg, NCIX, Micro Center, and other distributors.
Nvidia is suggesting that its board partners hide the existence of these higher clocked review cards by branding them as "overclocked" models to avoid market confusion. The note suggests that should partners wish to sell the slower cards with 1.0ns DDR3 modules (rated up to 1000MHz) prior to March 17th, the chips must be overclocked to at least 1100MHz, or the speed of the 0.8ns modules.
Moreover, it seems as if the company is concerned about its slower 9800GTX+ stock being rebranded to meet the specifications of the GTS 250. It needs to ensure it can fit cards with both fast and slow memory chips under one product name.
http://www.fudzilla.com/index.php?option=com_content&task=view&id=12329&Itemid=34 -
Makes me wonder if the 9800M GTX is really some form of a GTS card... interesting read.
Here's some of that explanation I was looking for as to why the ATI cards are better at AA:
http://news.softpedia.com/news/Nvidia-and-Ubisoft-Row-Over-Assassin-039-s-Creed-Patch-85451.shtml -
It seems plausible the "new" GTX 260M and 280M will just be rebranded 55nm G92s (the desktop 9800 GTX+), essentially a higher-clocked 55nm version of the Quadro 3700. Not to knock the card, it would be decent, but they'll get slaughtered by ATI again.
-
It only appears to be when AA is enabled. Without it, it paints a completely different picture...
And they are still getting slaughtered as long as AA is part of the equation, at least in quite a few games I have tested... -
Assassin's Creed is the only DX 10.1 game as far as I know, and even that support was removed. They're on the same playing field at the DirectX level; ATI's cards are just better this time around at AA/high resolutions.
-
-
True, but you don't just go faster without something happening. AA is meant to show quality, and when dealing in super high quality you can't cut corners for speed (not saying they are, that's just how I'm describing it). For example, I tested my GTX 280 and found that 8x AA ran far better than 4x AA. Can you explain this to me? I'm not trying to be funny, and I really am looking for a real explanation. I ran that test four times, and 8x did far better than 4x on a GTX 280??
-
Will the GTX 280M be going up against the Mobility HD 4870 GDDR5?
-
In name only.
-
500/850 on the 4850, 550/850 on the GDDR3 4870. I wish the Asus wasn't so buggy so we could get some nice single card benchmarks out of its GPU.
-
Does anyone have a GPU-Z screenshot of the specs up? Talking about these 4870s.
-
I have both.
The GDDR3 4870 is just a 4850 with +50 on the core and a different BIOS. Fugazi. -
That's what I thought; I knew I wasn't crazy. Those specs are nothing like what they advertised, and if that's the case, that's some serious BS going on there.
Someone is telling stories: either those are some seriously underclocked 4870s posing as 4850s, or vice versa. Still, I don't see those specs matching what Jlbrightbill just posted. -
My desktop card, for reference:
-
So is there really any point in paying extra money for the Mobility 4870 GDDR3 when you already have the Mobility 4850 GDDR3? Given the 50 MHz difference, can't you just use ATI Overdrive and push the 4850 up to 550 MHz?
Additionally, in ATI Overdrive, what are the max clocks you can push with the Mobility 4870 GDDR3?
Also, do the pixel fillrate, texture fillrate, and bandwidth increase as the clock speed increases? -
That's just it: it doesn't matter if it's GDDR10, those numbers they gave would have me fired up if I'd bought a 4870 right now.
Example: I had 9800M GTs for one week and ditched them for 9800M GTXs, because those GTs were nowhere near a 9800M GTX. People thought they were for some reason, but once I started the testing, that pretty much closed all doubt. Now if I had a 4870 and I paid for a 1 GB / 800 SP / 89 GB/s / 880 gigaflops card, I damn well better have that. But I would have hit them up before buying it, to find out just what card they put in it.
Flip side...
This may be why the unit doesn't cost as much: they put the low-grade 4870 in and rushed it to market. So the plot thickens yet again... -
2. 4850 = 550/900, 4870 = 600/900
3. Yes to all -
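To put that "yes to all" in numbers: fillrates and bandwidth scale linearly with the clocks. A minimal sketch, assuming the usual formulas (fillrate = units × clock, bandwidth = effective memory clock × bus width / 8) and the desktop RV770 unit counts (16 ROPs, 40 TMUs, 256-bit bus) purely for illustration:

```python
ROPS, TMUS, BUS_BITS = 16, 40, 256  # RV770 figures, assumed for illustration

def pixel_fillrate_gpix(core_mhz):
    return ROPS * core_mhz / 1000.0             # Gpixels/s

def texture_fillrate_gtex(core_mhz):
    return TMUS * core_mhz / 1000.0             # Gtexels/s

def bandwidth_gbs(mem_mhz_effective):
    return mem_mhz_effective * BUS_BITS / 8 / 1000.0  # GB/s

# Mobility 4850 (550 core) vs. Mobility 4870 GDDR3 (600 core):
print(pixel_fillrate_gpix(550), texture_fillrate_gtex(550))  # 8.8 22.0
print(pixel_fillrate_gpix(600), texture_fillrate_gtex(600))  # 9.6 24.0
# 900 MHz GDDR3 = 1800 MHz effective data rate:
print(bandwidth_gbs(1800))  # 57.6
```

So a 50 MHz core bump buys about 9% more fillrate, while memory bandwidth only moves if the memory clock does.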
If you have a Mobility 4850, it's not worth it to get a 4870 GDDR3. Regarding pixel shaders, etc, here's what it shows when I'm running my standard overclock:
The Mobility edition still has the 880 gigaflops, 800 SPs, etc., but the GDDR3 handicaps the card a bit. Let's compare two nearly identically performing cards, the 4870 1GB and the GTX 260 Core 216.
Core:
- 4870 - 750 MHz
- GTX 260 - 575 MHz
Memory:
- 4870 - 3600 MHz Effective
- GTX 260 - 2000 Effective
Regarding shaders, NVidia runs a smaller number of more capable shader/stream processing units at higher clocks. ATI runs many, many more shader units, but at a lower clock linked to the core. The GTX 260 has its shader clock at 1242 MHz, while the 4870's is linked to that 750 MHz core.
How does this apply to the Mobility 4870? Dropping the core to 600 MHz (I'm going off of the guy at XtremeSystems who had one, I don't trust the GPU-Z yet from these buggy W90's) and moving to GDDR3 removed 2 of the 4870's biggest strengths. Cutting the core down cripples shading and general rendering power. Cutting to GDDR3 removes the memory bandwidth advantage.
All that said, at least ATI is bringing out the GDDR5 and the chip will overclock nicely I'm sure, NVidia can't even bring a GT200 chip to market. -
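To put numbers on the bandwidth point above: a rough calculation, taking the effective memory clocks from the comparison and assuming the cards' usual bus widths (256-bit for the 4870, 448-bit for the GTX 260 Core 216):

```python
def bandwidth_gbs(effective_mhz, bus_bits):
    # effective data rate (MHz) x bus width (bits) / 8 bits per byte -> GB/s
    return effective_mhz * bus_bits / 8 / 1000.0

print(bandwidth_gbs(3600, 256))  # desktop 4870, GDDR5: 115.2 GB/s
print(bandwidth_gbs(2000, 448))  # GTX 260, GDDR3: 112.0 GB/s
print(bandwidth_gbs(1800, 256))  # Mobility 4870 GDDR3 (900 MHz): 57.6 GB/s
```

On these assumed figures, swapping GDDR5 for GDDR3 on the same 256-bit bus roughly halves the memory bandwidth, which is exactly the advantage being given up.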
I see, thanks! looks like I'm probably going to wait for the Mobility 4870 GDDR5 (any clue when it's coming out?) or by some miracle the GTX280m outperforms it.
-
Alienware is supposed to have them by April.
-
The whole point of GDDR5 was to make up for the lowly 256 bit memory bus and keep the chip small and cheap, but powerful. The 280 GTX comparatively has a 512 bit bus making it a lot more costly to make, but slower GDDR3 memory because they didn't jump on it like ATI. It's the brute force route that failed for Nvidia, ATI had a much better price/performance deal with the 4800s.
Anyway, without the GDDR5 that makes the 4870 a beast, the mobile 4870 isn't going to come close to its desktop equivalent. It'll be an upgrade from the 9800M GTX, but it won't outclass it without the mojo. A 55nm refresh of the 9800M with GDDR5 would be enough to fend off the 4870 till the real deal arrives: the 40nm GTX 260. -
Only one problem with that as well: the GTX 295 is the current king and it doesn't use GDDR5, so I wouldn't be so quick throwing around new tech vs. old tech till something changes...
Just an observation, not what I think should happen... -
So Nvidia has a seemingly weaker GPU (on paper), with rebranding troubles, that will probably cost more, going against ATI's GDDR5 4870. Looks like now it all depends on Nvidia's drivers. Somehow I have a feeling the GTX 280M will still be able to play Crysis better with driver magic.
-
I didn't say the 4870 was more powerful (although the 4870 x2 is arguably just as capable), but that ATI created a much better price/performance ratio with the 4870 via GDDR5. I mean, it was more powerful than the 260 GTX, and was what, $150 cheaper the day it came out?
-
$199 and $299 at launch.
In case you haven't heard yet either, in response to NVidia's newest rebranding of the GTS 250 / 9800 GTX+ / 8800 GTS 512, ATI dropped the price on 4870's to $149, and the 4850 to $129. So looks like price / performance just got reshuffled again. -
This is true, anothergeek, but with any problem, one side comes up with a solution. The 4870 X2 was ATI's answer to that call, and now the GTX 295 is the reply, and this time it meets the price/performance mark.
These boards probably cost each side 50 dollars to make, so I'm sure they have plenty of room to bicker over prices. Hey, good looking out, ATI! Keep it up and we'll all have cards for next to nothing... -
But you know Nvidia is hurting because of it
-
Who cares? As long as I get a good product at a lower cost, I'm a satisfied customer...
And so far, I'm still satisfied with my old 9800M GTX card. -
Well, your system is a monster. I wouldn't be concerned till i7, err i8 and triple 380M GTX is king...
-
rotflmao!
that was a good one. -
I have high hopes for both GT300 and RV870. NVidia got spanked and won't be sitting on things like they did the past year and year to date so far, and ATI needs to not lose the edge. Should mean good stuff for all of us.
-
I'm in agreement. I just want a good, competitive product...
Oh yeah... and to be near the top 10 people in the world when it comes to overclocking titles... *LOL* -
For desktops, I suggest you start getting yourself some liquid helium. -
I was top ten GTX 280 single card,
and I am top 10 with the 9800M GTX.
Matter of fact... make that for the
8600M GS
8800M GTS
9800M GTS
9800M GT
as well. So why the helium? -
Hmmm, you're not lying. I just did an ORB search for 9800m GTX and you've got the top 3 scores... pretty impressive. But at the same time, why would you deny liquid helium? It's pretty much necessary to achieve a top score for a desktop.
Anyways, if the desktop 4870 price really drops to $149 then I know exactly what I'm buying myself over the summer! Can't wait for Crossfire... I'm so late to the game... -
Thank you, sir.
Because, grasshopper... to be in competition right now I would need three GTX 280s, and since that's not the hot thing, there's no sense in wasting money and time on that project. The new arena of overclocking is in these laptops; I just need the right starting block for it. So far, all these records were obtained through air cooling and nothing fancy... as of yet.
Edit: also, it's not the temps that are slowing me down, it's the system-wide overclocking, like on the OCZ Whitebook / AW M17 / MSI GT725 and now the W90Vp.
When is the CeBit exhibition by the way?
I don't plan on retiring my current laptop anytime soon so most of the news on this thread is just for interest's sake. Hopefully when the time comes for a proper performance laptop neither company will be doing what Nvidia seems to be doing now in the laptop market. -
Those 4850/4870s are looking mighty suspect as well.
-
Too bad it's MXM 3.0, that sucks. Looks like I'll be looking at the 4870 for my upgrade. -
Geforce GTX280M and 260M to launch at CeBit
Discussion in 'Gaming (Software and Graphics Cards)' started by ichime, Feb 24, 2009.