I was wondering, since the ATI 5000 series cards will be out soon ...
whether Nvidia will follow suit with the Nvidia 300 series cards ...
-
I was wondering the same thing recently. Anyone know?
-
The ATi cards haven't come out for notebooks yet, last I checked, so we're still waiting to see.
-
ummmm the 5 series won't be out for a couple of months
-
Google. He is your friend.
The Search Button. He is your ally. -
-
Been rumoured for Q2 for the 300 series
-
I am asking about the mobility laptop cards ...
I would feel very bad if I bought a new laptop and then found out a few weeks later that they had refreshed and upgraded the graphics card to a better one ...
that's why I am asking ... -
you will always feel bad .... pick the one with the highest chance of upgrading and suck it up
-
Whether Nvidia releases G200 chips as 300M before then is anyone's guess.
In other words, no one knows anything about anything, so buy whatever you want now. -
The desktop GTX 300 was scheduled for release at the end of this month, but has now been pushed back to Q1 2010-ish. So going by past trends, you can expect a mobile variant in Q4 2010 or Q1 2011.
-
Well they have no choice ATM
also the market for high-end laptops is pretty small -
-
If ATI keeps churning out DDR3 cards for laptops and nVidia arrives with their GT200 core translations, then even their beloved 5870 will not save them. Not to mention that nVidia has a huge advantage over ATI when it comes to certain gamers... PhysX is a beautiful thing to behold, and not every game is going to use the Havok engine.

A friend of mine complained about playing RE5 when he reached the water levels. He has a 4890 and an AMD quad core in his desktop. My GTX 280M breezed through it with better framerates all around, and at a FAR higher screen res (mine was 1920x1200, his was 1280x1024). The same thing happened in NFS Shift, and in Unreal Tournament 3 he had to disable all PhysX in the options menu. ATI's saving grace is their memory and the raw power of their cards. nVidia's saving grace is their PhysX.
If an ATI card could outperform an nVidia card by a significant amount (like how the 5870s murder anything that doesn't say GTX 295 on it), then I'll be looking toward an ATI card, PhysX or not. If they can't do that, then nVidia for me. I love my PhysX mmm hmm -
dun dun dun
HAVOK OH NOEESS. Doesn't Alienware have GDDR5 cards coming out for the all-powerful M17x? The twin 4870s? -
And how could there be any use for the Havok engine in a real-time strategy game? Thinking about Warcraft 3 and the original StarCraft, I don't see any physics in them -
-
- Since PhysX is not compatible with ATI GPUs, it seems it's forced to run on the CPU (since you say performance increased once it was disabled). If this is the case, then he would be insanely CPU limited.
- He seems to be having other issues, since the desktop 4890 is faster than a mobile 280. -
ATI cards were having serious performance issues with RE5 until the 9.9 Catalyst. It's still not perfect, but it's a big improvement.
-
@notyou
He has a quad-core AMD Black Edition clocked at 3.2 GHz, so he does not have a CPU problem. He also has 8GB of DDR2 RAM @ 1066MHz; no problems there either. Physics in games is problematic for ATI cards. His computer kicks my laptop's butt in other games that use the Havok engine (like Crysis Warhead) or that just don't use physics at all (I don't remember any such games offhand... X-Men Origins may be one, since little is moveable/destructible in that game).
Also, the 4870s completely destroy the 9800GTX+ cards. The 280M is the 9800GTX+ with slightly lower clocks. In fact, I cannot overclock my card to the base specs of the 9800GTX+ without it crashing. The mobile 4870s are *not* as powerful as the 280M. 280Ms in SLI also outperform mobile 4870s in Crossfire.

If the 285M is based on the GT200 chip architecture (meaning 240 stream processors and not 128), with all other things held the same (maybe slightly higher clock speeds), it will blow the other mobile GPUs out of the water. The ATI mobile cards already have 320 stream processors; there's nothing to increase there. Their desktop cards have the edge over nVidia's counterparts because of their faster memory, but their best in a series has never been able to beat nVidia's best. I don't know why.

Price is a serious factor in buying a card, but let's face it, dual 4870s in Alienware's M17x are $100 more expensive than dual 280Ms. And with the 5000 series on the way, those cards should have dropped in price. If ATI brings out GDDR5 memory on their mobile cards, then nVidia is in trouble. But as it stands, if a lower-clocked DDR3 5000 series card comes out and nVidia releases one of its GT200 architecture cards, ATI will be the ones with a problem. A DDR3 4870 cannot compete within 10-15% of a GTX 285 in desktops; you need the GDDR5 version.
While ATI doesn't manufacture the boards, they provide the manufacturers with the technology. If they provide them with GDDR5 mobile tech, it will be produced at some point.
And a 128-bit bus doesn't even recognize memory beyond a certain size. While your point is valid, with 256-bit being the current maximum for mobile cards, going back to 128-bit would make no sense, as I believe they would not be able to produce cards with more than 256MB of usable memory on them. That's a decent bit of power inhibited right there. Not all games may use it, but enthusiast-type settings and AA love your video memory; they eat it up like free pizza. The way I meant it, all other things held equal, DDR3 would have to replace GDDR5 RAM on ATI's mobile cards. -
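The bus-width vs memory-type trade-off the posts above are arguing about comes down to one formula: peak bandwidth is bus width (in bytes) times effective data rate. A minimal sketch, assuming illustrative clock figures that are not the specs of any particular card mentioned in the thread:

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8) * effective data
# rate in GT/s. GDDR3 is double-pumped (2x the memory clock); GDDR5 moves
# 4 bits per pin per command clock. The rates below are illustrative
# round numbers, not specs of any card discussed in this thread.

def bandwidth_gbs(bus_bits, effective_rate_gts):
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_bits / 8 * effective_rate_gts

# 256-bit GDDR3 at 2.0 GT/s effective vs 128-bit GDDR5 at 4.0 GT/s effective:
print(bandwidth_gbs(256, 2.0))  # 64.0 GB/s
print(bandwidth_gbs(128, 4.0))  # 64.0 GB/s -- same peak on half the bus width
```

This is why GDDR5 on mobile cards matters so much in the discussion: doubling the per-pin data rate can buy back the bandwidth lost to a narrower, cheaper, lower-power bus.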
I can assure you that the 40nm technology used for the 285M or 290M (if they ever come out) will not allow 240 shaders; the TDP is just impossible to bear in a laptop. The best you will see is a core with maybe around 170-180 shaders and clocks somewhere around 550/1375/950. I base this on the fact that the GTS 260M uses 96 shaders and has a 38W TDP (although its bus is just 128-bit). If all else were kept equal, that would mean about 190 shaders for 75W. But things don't really work this way; for example, the GTS 250M uses the same core with a 10% decrease in clocks, and it has a 28W TDP.
My bet is on a 40nm 290M GTX with 160 shader cores being the next best thing from Nvidia, with clocks of 550/1375/900; expecting anything more is unrealistic. Still pretty good if you ask me.
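The back-of-the-envelope estimate in the post above can be sketched directly. A minimal sketch of the poster's own assumption (TDP scaling linearly with shader count at fixed clocks and process; real TDP also depends on voltage, clocks, and the memory subsystem, as the GTS 250M counter-example shows):

```python
# Linear-scaling estimate of how many shaders fit in a TDP budget,
# using the poster's reference point: GTS 260M, 96 shaders at 38W.
# This deliberately ignores voltage/clock/memory effects -- it is the
# same simplification the post itself flags as unrealistic.

def max_shaders(ref_shaders, ref_tdp_w, budget_tdp_w):
    """Estimate shader count for a TDP budget by linear scaling."""
    return int(budget_tdp_w / ref_tdp_w * ref_shaders)

estimate = max_shaders(96, 38.0, 75.0)
print(estimate)  # 189 -- matching the "190 shaders for 75W" figure above
```

The GTS 250M data point (same core, ~10% lower clocks, 28W vs 38W) shows why the linear model overestimates: a small clock drop cut TDP by roughly a quarter, so the real shader budget at 75W would sit below this estimate.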
-
Also, lower clocks do affect TDP, of course; that's why cards on MXM 2.1 boards need to be overvolted to allow overclocking past a certain point. But this is still all speculation; I'm sure it could be done, though. The old C2D and C2Q had TDPs of 25 and 35W; if the new mobile i7s can match that, then we shall have a winner for the video cards ^^ -
Soviet Sunrise Notebook Prophet
Since when was the GTX 280M 65nm with an 85W TDP?
-
Your post tells me I'm wrong; would you clear it up for me? -
The GTX 280M is a 55nm card and has a 75W TDP.
-
huh... so GPU-Z and 1/3 of those websites are right, and CPU-Z and 2/3 of those websites are wrong... Oh well. That makes a bit of my post invalid then O.O
I think...
Oh well... *sigh* -
Soviet Sunrise Notebook Prophet
You have been added to the KGB watch list.
-
I was wondering if Nvidia is able to keep this up forever ...
I mean, every time ATI releases a new card ... all Nvidia does is release a new card with an increased number of engine cores ... but can this last forever ??? first they release a base card ... then they simply increase the number of engine cores ... just to match the competition that is ATI ...
and also endlessly play the re-branding game over and over ...
and now that DirectX 11 is just around the corner ... how is Nvidia going to match up to the competition ...
-
Soviet Sunrise Notebook Prophet
Only time will tell, but I hear what you're saying. I hope Nvidia plays their trump card soon.
-
So as I was saying, 550/1375/950 with 160 SP :-D The best you will ever see on 40nm. Although... you never know.
If my laptop were able to take 40nm cards that would be great, but I guess I will have to skip them. Wait for Fermi / whatever ATI is throwing out, and the next die-shrink. What is the next die-shrink anyway? -
-
Found it: 32nm and then 28nm
http://www.techpowerup.com/index.php?103791
Fermi on 28nm sounds really good ... yummy. And TSMC says Q3 2010 for getting the equipment ready.
Also:
http://www.fudzilla.com/content/view/14864/34/
28nm in late 2011, early 2012.
-
nVidia would be foolish to release their next mobile iteration as a DX10 variant. ATi will have DX11 mobile cards on the market, which will force nVidia to release a mobile version of Fermi. The problem is, Fermi's extra transistors are geared toward making it more of a CPU than a GPU, so I'm not sure it will even beat the ATi 5000 series in performance. The added heat from all those transistors sure won't be worth it.
-
And a GPU is basically a CPU that can only process graphics; a smarter graphics CPU will be immensely good for performance. I think they are just making it more adaptable for workstation-class users, like the people who make games and movies like The Incredibles.
Either way, nVidia wouldn't let its gamers down; they still do a lot of advertising in games where you see "nVidia... the way it's meant to be played". I don't think they would still be pushing that if their new GPUs were not slated to demolish the competition. -
I think Fermi will be great on the desktop, possibly outperforming AMD/ATI by a long shot in some situations. However, I can't believe how nVidia is investing all of its energy into one market that hasn't even proven to be a money maker yet. By focusing on the GPGPU market, they have nearly abandoned the mobile (laptop) sector. Let's not forget how much laptop sales are growing vs desktops. This will now be the second generation where nVidia has engineered a GPU that is ill suited for the laptop market. On the bright side, I do think Fermi is more scalable than the GT200. Hopefully we will see Fermi reach the mobile market sooner than the GT200 did. Unfortunately, I think it may come at the cost of performance.
I see Fermi beating ATI in performance on the desktop, while ATI beats Fermi in the mobile sector. ATI will probably offer primarily downclocked versions of their desktop cards as their mobile cards, whereas nVidia will have to chop their GPUs in half. At 40nm, Fermi is still a very large chip. -
FYI, the 4870s do have GDDR5 in the M17x
-
SoundOf1HandClapping Was once a Forge
Who else is excited for Intel Larrabee? I know I am! Anyone else?
I kid, I kid. -
Fermi seems like an OK move for nVidia, but I read they got pushed out of the chipset market (I remember something about nVidia ticking off Intel somehow), so they're leaving ATI open to catch the ball here. Honestly though... all of this rebranding stuff nVidia has been doing makes sense to me now (it still makes me mad sometimes), because they have been working on Fermi this whole time. Fermi is nVidia's wild card, and I think they're putting a lot of chips into this round. With OpenCL, I could see nVidia really coming out on top in a few years with this... we'll see.
since the ATI 5000 series cards will be out soon ... when will the Nvidia 300 series cards come out too ???
Discussion in 'Sager and Clevo' started by Sparky894, Oct 16, 2009.