How long do you think it will take for notebooks to get a GPU that supports DirectX 10? Do I need to wait a year or two, or just a few months?
-
No one knows for sure--the nVidia 8000-series cards will supposedly be DX10. I would say that you can look for DX10 desktop cards to come out relatively soon, and as with any new technology, it will take a little while to filter down to the notebook market (getting power consumption down, form factors, etc.). I would say that once Vista comes out, we will start to see a rush of DX10 cards soon afterward.
-
Okay, but the first cards will only be available for desktop-replacement laptops, because they won't have to scale them down as much as for, say, a 14" model. Is that correct?
-
My guess would be that they will start putting mid-range DX10 cards that don't consume as much power and aren't as big into notebooks by the time Santa Rosa comes out.
-
The Intel GMA3000s should also be DX10 capable. The 8800 series ARE out, and they ARE DX10. But it will take a long time for those cards to hit the notebook market, when the card alone can draw more power than an entire notebook system.
The higher-end cards like the 8800 will probably be for desktop replacements, but that's normal. The best card you can get in a 14" machine now is an X1600/Go7600. -
Notebook Solutions Company Representative NBR Reviewer
You can be sure that there will be DX10 video cards in notebooks when the Santa Rosa platform is out (Q1 2007). I think you can definitely count on the 8600 Go and 8400 Go, as well as the 8800 Go. I do not know about ATI's DX10 cards for notebooks; they have not even released their desktop cards yet.
Charlie -
Okay, so you're saying I could probably get a notebook with DX10 in, say, March-May or so?
-
mobius1aic Notebook Deity NBR Reviewer
LOL, Intel GMA3000. I hate their media accelerators; I can't believe they are going to make a DX10 one. You would think the negativity towards any Intel GMA would be enough to get them to quit making them. I understand the GMA3000 is aimed a lot towards business stuff, but still, Intel GMAs suck balls.
-
~10% above the X1900 series.
Something like 130W under load should be a decent estimate.
The negativity is only from people who expect it to be a gaming GPU. That's a tiny minority.
Most people either have the 950 because it does what they need, or they have a "proper" GPU in the first place.
In any case, the GMA950 has been a huge success for Intel. Don't assume that just because it can't run the latest games in 1600x1200, it's universally hated.
Personally I like the idea of the GMA3000. I'm looking for a thin-and-light notebook with low power consumption... and as big a feature set as possible. So for me, a GMA3000 would work very well (at least if the driver ends up good enough). -
-
mobius1aic Notebook Deity NBR Reviewer
-
That's just not enough to compete with Intel's IGP's.
The problem is that until very recently, there's only one serious manufacturer of motherboards for Intel notebooks. That's Intel.
Sure, ATI and NVidia have some perfectly good motherboards, but if you use those in your notebook, it can't get the Centrino label, and then it'll sell like crap.
So instead, manufacturers stick with Intel CPUs and Intel motherboards with an onboard Intel GPU. And if they want graphics performance, they stick an ATI or NVidia discrete GPU on top of it, raising costs and power consumption.
So Matrox or Via coming back wouldn't make a difference as long as Intel rules the motherboard market so completely. -
-
There is still a lot to be worked out, and there haven't been any announcements about mobile DX 10 GPUs yet. -
ltcommander_data Notebook Deity
Before looking for mobile DX10 GPUs, I would look for mainstream and low-end desktop DX10 GPUs first. In this case, I haven't heard much on the G82 and G83 at all. I would assume they are targeted for early Q1 to coincide with Vista, so mobile parts would have to come after that. ATI doesn't look to have mainstream desktop R600 derivatives ready until late Q1/early Q2, so mobile parts from them will take a while.
-
*cry*
I was hoping for a DX10 notebook a year from now -
The 177W figure given above is only an approximation, and depends on which test you believe. I've seen figures as low as 120-130W as well. It's really hard to measure accurately, and also depends on the rest of the system. (A faster CPU might be able to stress the GPU more, making it use more power as well.)
-
The 177-watt figure comes from Jen-Hsun Huang.
-
ltcommander_data Notebook Deity
I wonder what's the biggest contributor to power? I think the stream processors are fairly efficient, and they wouldn't be operating all the time anyway. Probably the memory controller is one of the biggest power consumers. If they cut back to a "smaller" 256-bit memory controller, they should be able to shave off quite a bit of power consumption and transistor count, which kind of go hand in hand.
Still, when GPU makers make massive power savings on mobile parts, I think it's because they go through the major power-consuming sections and lay out the transistors by hand, or at least double-check the computer program's calculations. In this case, though, a significant amount of the G80 is already done by hand, which is why the increase in power consumption has been relatively small over, say, the R580. That does mean, though, that they really don't have much more room to tweak for power. Mobile parts will probably have to use the 80nm process, although that'll only give a small improvement, since the half-node is a cost node rather than a power node. I believe TSMC is working on a power-tweaked 80nm process, but that may not be suitable for fast GPUs. -
So I expect them to take a good chunk of the power consumption, but you're probably right the memory controller can get expensive too. And let's not underestimate the actual memory either.
And keep in mind that this generation they've had to add a lot of scheduling logic just for keeping track of "threads" and assigning work units to different stream processors. On CPUs at least, that kind of work is one of the most complex parts, and is responsible for a large part of the power consumption. (And that's with maybe 9 execution units, if we count everything; the 8800 has 128 to keep track of.)
From what I know, a large part of the power savings on mobile parts is typically achieved just by lowering clock speeds and disabling a few shader units. Power consumption doesn't scale linearly with clock speed (see the P4 for a good example: just raising the clock speed by 5% on one of the faster models could give you something like a 20-30% increase in power consumption), and newly released GPUs are always very aggressively clocked to begin with. So just by lowering clock speeds by, say, 10%, they might be able to make huge savings in power consumption.
Of course, over time they also polish the actual transistor layout, tweaking things and adding a few power-saving techniques, but a large part of the saving is just from reducing performance a bit.
And then there's plain old undervolting as well. Cherrypicking the best parts and lowering voltage on them to reduce power consumption is an old trick as well. -
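The clock-and-voltage point made above can be sketched with the classic first-order CMOS dynamic power model, P = C * V^2 * f. The numbers below are purely illustrative assumptions, not measurements of any actual GPU; the point is just that dropping the clock a little often lets you drop the voltage too, so power falls much faster than performance:

```python
def dynamic_power(capacitance, voltage, frequency):
    """First-order CMOS dynamic power model: P = C * V^2 * f."""
    return capacitance * voltage**2 * frequency

# Hypothetical desktop baseline: 1.2 V at 600 MHz (made-up numbers).
base = dynamic_power(1.0, 1.2, 600e6)

# Hypothetical mobile variant: clock lowered 10%, which (we assume)
# allows undervolting by about 8%.
mobile = dynamic_power(1.0, 1.2 * 0.92, 600e6 * 0.90)

savings = 1 - mobile / base
print(f"Power saved: {savings:.0%}")  # roughly 24% for a ~10% clock cut
```

Because voltage enters the model squared, even a modest undervolt compounds with the clock reduction, which is consistent with the "huge savings from a 10% clock drop" claim in the post above.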
-
ltcommander_data Notebook Deity
Actually, I don't think the stream processors are more complex. In order to increase the clock speed, nVidia did the opposite: the stream processors now use scalar ALUs instead of vector ones, and the pipeline depth has been increased. Both contribute to allowing higher clock speeds. However, the high clock speeds and large number of units are needed to make up for the fact that each individual stream processor is weaker than a full shader.
http://www.dailytech.com/article.aspx?newsid=4891
In any case, the latest news from nVidia is that they are currently working on 9 other DX10 cards, divided into 3 product segments with 3 models in each. They are scheduled to launch in February. Either the 3 product segments are going to be GeForce, Go, and Quadro, or they're going to be all desktop but low-end, mainstream, and high-end. I think it may well be that they are all desktop parts on the 80nm process: the high end being a GX2 part, a GTX, and a GTS; the mid-range being a GT, a GS, and something else; and the low end being a GT, a GS, and a TC. Still, I'm sure mobile parts aren't far behind. -
The pipeline depth, however, doesn't decrease complexity. (And typically, deeper pipelines mean higher power consumption.)
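The scalar-vs-vector ALU tradeoff discussed above can be illustrated with a toy cycle count. This is a simplified sketch under my own assumptions, not nVidia's actual scheduler: a vec4 ALU spends a full cycle per operation even when the operation only uses 1-3 components, while independently scheduled scalar ALUs can pack components from different operations densely:

```python
import math

def vec4_cycles(op_widths):
    """One vec4 ALU: every op costs a cycle, even a 1-component op."""
    return len(op_widths)

def scalar_cycles(op_widths, num_alus):
    """num_alus scalar ALUs: components pack densely across the units."""
    total_components = sum(op_widths)
    return math.ceil(total_components / num_alus)

# Hypothetical shader workload: a mix of vec3, scalar, vec2, and vec4 ops.
ops = [3, 1, 4, 3, 1, 2, 3, 4]
print(vec4_cycles(ops))       # 8 cycles on one vec4 ALU (many idle lanes)
print(scalar_cycles(ops, 4))  # 6 cycles on 4 scalar ALUs (no idle lanes)
```

The simpler per-unit datapath is also what makes it feasible to clock the scalar units higher than the rest of the chip, which matches the DailyTech article linked above.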
directx 10 for notebooks?
Discussion in 'Gaming (Software and Graphics Cards)' started by fizzyflaskan, Nov 9, 2006.