After all those beautiful videos from games like Crysis that try to show the improvements that DX10 brings over DX9, it seems someone is also taking the time to unlock the DX9 generation. Has anyone seen the latest trailers for Id's upcoming Rage?
http://www.shacknews.com/onearticle.x/48294
For download:
http://www.fileshack.com/file.x/10928/HD+Rage+FileShack+QuakeCon+2007+Footage
Maybe my standards are low, but if you watch the HD download version, in the shot in the repair shop at the beginning that guy looks almost as real as an actual actor.
http://www.theinquirer.net/?article=41481
What's more, despite looking so good, Rage doesn't need DX10. Now I know DX10 features are also available in OpenGL, but what Carmack specifically said was that not only is DX10 not used, but DX10 features aren't used either. It seems the DX9 generation isn't as limited as it is made out to be.
What's more, Id isn't the only major developer being slow to implement DX10.
http://www.computerandvideogames.com/article.php?id=169204
Epic also feels that DX10 will be slow to grow, being only something that people will "dabble in" for the next while, since the market is constrained by the population with DX9-level PCs and DX9-level consoles. I guess it is kind of ironic that Microsoft is pushing DX10 for the PC, but if people start making DX10 out to be too big an improvement, it'll discourage people from buying Microsoft's own Xbox 360. I guess their saying that DX9 hardware should last at least the next two years is reassuring to me.
-
ltcommander_data Notebook Deity
-
I don't think anyone ever doubted that DX9 still has a lot of potential. To me the huge advantage of DX10 was forcing graphics engineers to switch over to more efficient microarchitectures for their GPUs, with unified shaders and such. Even though DX9 software still has a lot of potential, it has to be admitted that for new GPUs, DX10 is the way to go.
-
I couldn't care less about the added visual effects that are included in DX10. What I was really excited about was the increase in coding efficiency and thus an increase in GPU efficiency. I'd like to see my 8600M GT run a game 25-50% faster in DX10 than it does with the same game/same settings in DX9.
Hardware being created today is absolutely spectacular. I'm still impressed by the general architecture of Nvidia's GeForce 8 series and AMD's HD 2000 series. Intel's rapid development of multi-core technology is astounding, and perpendicular recording has even improved performance on the old mechanical HD front! But where on earth is the software that will efficiently utilize all of this new tech? We're in a cycle of hardware vs. software maturity where our hardware has outpaced what our software can actually utilize.
Professional apps have been taking advantage of extra CPU cores for a while now. Games are getting in on that same action. Now it's the GPU's turn.
I had hoped that DX10 would be that turn, but I guess I can just sit and be happy with the latest driver releases. They are steadily increasing performance in Supreme Commander, at least.
(By the way, being able to play Supreme Commander on a MBP with settings maxed (no AA) at native resolution (last UEF level, 20+ FPS in the final 2 minutes) is absolutely jaw-dropping. I can understand why companies feel there is more to be had on the DX9 front. Amazing stuff, really) -
Unified shaders are only more efficient for DX10-style features. If they were so efficient at DX9-style shaders, we'd have had them already. Don't you think NVidia and ATI are bright enough to pick the best-performing architectures they can come up with? They certainly don't need to be "forced".
It's not like they try to cripple their cards as much as possible, until the big bad Microsoft tells them "now you *have* to implement your cards this way."
DX10 doesn't even require unified shaders. The reason ATI and NVidia switched to them for this generation is... that they're better suited for DX10 support. The reason they didn't use them last year is... they're not as well suited for DX9 support. -
Clearly this is not the case, because DX10 GPUs are more efficient in DX9 games than DX9 GPUs are. Just look at the benchmarks. Saying that they're not as well suited for DX9 support is misleading, because while it's true that unified shaders perform better in DX10 than in DX9 (assuming both codepaths are fully optimized), it's also true that unified shaders will perform better in DX9 than traditional shaders.
-
masterchef341 The guy from The Notebook
@ jalf
actually it's a cost / performance thing.
as parts get more horsepower, you have to increase design complexity to keep increasing performance. pixel shaders didn't exist until it was practical to introduce them from a hardware perspective. then vertex shaders, geometry, and now unified shaders that can do any of the three on any given clock cycle.
the fact that dx10 is not connected to unified shaders should be the first sign that unified shaders are not here because of dx10. you almost said it yourself.
"DX10 doesn't even require unified shaders. The reason ATI and NVidia switched to them for this generation is... that they're better suited for DX10 support."
that doesn't make sense. dx10 doesn't care whether your geometry shader could be used for something else or not. it just makes sense to do it when it makes sense from a cost/design point of view. as complexity in games increases, the need for more modular hardware increases. the same is true for new dx10 games and new dx9 games.
when pixel shaders first came around, no one was thinking "unified shaders make sense". that would be meaningless. there was only one type of shader.
when vertex shaders came out, they were an expensive attribute. it obviously made more sense to put a large number of pixel shaders (cheaper) and a few vertex shaders (expensive) to optimize performance / cost.
then geometry shaders came out. even more expensive! so now you are juggling 3 types of shaders, and developers are making software. the practical realization developers now have is that in most scenes, one type of shader is the limiting factor, and others are going unused, waiting.
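to put rough numbers on that (the unit counts and scene mixes below are just made up for illustration, python only to show the arithmetic):

# toy model: fixed card with 16 pixel units + 8 vertex units, two scenes with different workloads
pixel_units, vertex_units = 16, 8
scenes = {"pixel-heavy": (300, 30), "vertex-heavy": (60, 120)}  # (pixel work, vertex work) in arbitrary "ops"
for name, (pw, vw) in scenes.items():
    frame_time = max(pw / pixel_units, vw / vertex_units)  # the busier pool limits the frame
    print(name,
          "pixel units busy:", round((pw / pixel_units) / frame_time, 2),
          "vertex units busy:", round((vw / vertex_units) / frame_time, 2))
# pixel-heavy scene: pixel units 100% busy, vertex units only 20% busy
# vertex-heavy scene: vertex units 100% busy, pixel units only 25% busy

either way, one pool sits mostly idle while the other holds up the frame.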
only then can the hardware take the next step, which is to unify the shader processors. -
http://en.wikipedia.org/wiki/Correlation_is_not_causation
Global Warming is caused by the declining number of pirates!
In theory, if both codepaths are fully optimized, non-unified shaders will always be more efficient. That's a simple fact.
Unified shaders mean that every shader processing unit is capable of processing every type of shader. That's not efficient. Specialization is efficient.
Assuming you can predict how many of each type of shader unit is going to be needed, you can implement just that. If you need, say, 16 pixel shader units for every 8 vertex shader units, then you can make a card with 16 pixel shaders and 8 vertex shaders, and get the exact same performance as one that had 24 shader units, each capable of doing any type of shader. But the unified version would be a lot bigger and more expensive. And 16 of the shader processors would never ever need the vertex shader logic they carry around, while the last 8 don't actually need the pixel shader logic they implement.
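A quick sanity check on that in Python (the 16/8 split is from the example above; the 2:1 workload mix is my own assumption, picked to match that split exactly):

pixel_work, vertex_work = 200.0, 100.0           # assumed 2:1 pixel:vertex mix
fixed = max(pixel_work / 16, vertex_work / 8)    # 16+8 split: the slower pool limits the frame
unified = (pixel_work + vertex_work) / 24        # 24 unified units share all the work
print(fixed, unified)                            # 12.5 and 12.5 -> identical throughput

As long as the workload mix really matches the fixed split, the dedicated card keeps up with the unified one while carrying less logic per unit.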
Where unified shaders shine is when you can't predict the distribution of shader types. That's true for DX10 because shaders can be glued together to feed into each other and loop around, and because of geometry shaders, a big unknown that's going to be handy but, in the short term, is going to be almost unused.
It wasn't true for DX9 because back then, you could make a pretty good guess at the distribution of shader types required. So in those cases, unified shaders are bigger and more complex, while achieving the same performance. -
-
masterchef341 The guy from The Notebook
i know all about correlation and causation. believe me. i LOVE the flying spaghetti monster, and I have used the global warming pirate example many times before.
anyway, I think what you meant to say (or what you heard or read) was that an individual shader (specialized) is more powerful than an individual unified shader. that is true. no argument. i would even go so far as to say that 16 pixel / 8 vertex specialized shaders, if all being used 100%, would outperform 24 unified shaders, (if you had a budget and you designed them both to be otherwise equal) because the specialized shaders are better at what they do.
or maybe what you meant is that since all the dx10 cards have unified shaders, dx10 games can completely ignore the old shader ratio issues, whereas dx9 games still have to give a nod to older cards with specialized shaders. that's somewhat true. but even dx9 games can start loosening up the ratio concerns. and even older dx9 games can't keep perfect ratios of pixel / vertex / geometry shading. sometimes there are just geometry intensive scenes. it happens.
24 unified shaders, collectively, will beat 16 pixel / 8 vertex shaders, as a collective whole, despite each individual specialized shader being more powerful than each unified shader. modularity becomes more important than specialization as technology advances. 24 unified shaders that are 90% as good and 100% utilized beat 16 pixel / 8 vertex shaders that set the 100% good standard and are only 66% utilized.
you can't argue with that. but you don't know where i got those numbers from. the point is that it's up to the numbers to decide which is better. across many games (and even scenes within games) you will have a wide variety of shader utilization. but the unified shaders will perform consistently high.
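multiplying those made-up numbers out (python, just to show the arithmetic):

unified     = 24 * 0.90 * 1.00        # 90% as good per unit, 100% utilized -> 21.6 effective units
specialized = (16 + 8) * 1.00 * 0.66  # full-strength units, only 66% utilized -> ~15.8 effective units
print(unified, specialized)

so with those (admittedly invented) utilization figures the unified pool comes out well ahead.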
nvidia and ati weren't forced to switch to unified; you have to assume they made a cost / performance analysis and made the right decision. it has nothing to do with dx10. it has everything to do with hardware. -
Agreed!
that was the point of Unified shaders... to maximize efficiency -
ltcommander_data Notebook Deity
Reading the posts, I get the feeling that unified shaders are one of the main reasons people feel that DX10 and DX10 GPUs are more efficient. Not that I don't think that's valid, since I don't know for sure, but it occurs to me that other architectural changes also seem to be very significant. We'll ignore the fact that the new DX10 GPUs have more transistors, more shaders, bigger internal caches, more RAM, more bandwidth, etc., which already gives them a leg up over the previous generation.
Besides unified shaders, the way the shaders themselves are built has changed. Before, the shaders were vector units and fairly straightforward SIMD. Now nVidia uses scalar Stream Processors, whereby each SP works on an individual component (say a pixel) of an independent thread and groups of 16 SPs execute the same instruction. ATI now tries to find 5 independent instructions to operate on 5 components of an individual thread. Both are supposed to increase efficiency, with nVidia trying to maximize SP utilization (i.e. churn out pixels at full rate all the time) while ATI is trying to do the most work on a thread (i.e. a group of pixels) at a time.
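A toy way to picture the difference (my own sketch in Python, not how the real schedulers work): ATI's compiler has to find up to 5 independent instructions within one thread to fill each VLIW issue slot, so a dependency chain leaves slots empty, while nVidia's scalar SPs just get fed one op at a time and hide stalls by switching between the many threads in flight.

# toy instruction stream for one thread: (name, names it depends on)
shader = [
    ("a", []), ("b", []), ("c", []), ("d", []),
    ("e", ["a", "b"]),   # e has to wait for a and b
    ("f", ["e", "c"]),   # the chain keeps growing
    ("g", ["f", "d"]),
]

def vliw5_cycles(instrs):
    """Greedy 5-wide packing: each cycle, issue up to 5 instructions whose inputs are ready."""
    done, pending, cycles = set(), list(instrs), 0
    while pending:
        bundle = [i for i in pending if all(d in done for d in i[1])][:5]
        done.update(name for name, _ in bundle)
        pending = [i for i in pending if i[0] not in done]
        cycles += 1
    return cycles

print(vliw5_cycles(shader))  # 4 cycles: a,b,c,d | e | f | g -- most of the 5 slots go unused

The VLIW version burns 4 x 5 = 20 issue slots on 7 ops; a scalar unit spends exactly 7, and the gaps created by the dependencies get filled with work from other threads instead of going empty.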
If done properly, these architectural changes should really increase efficiency and should be applicable to DX9 games too. Whether it's more significant than unified shaders, I'm not sure. And in ATI's case, their performance is basically completely dependent on a good compiler being able to extract 5 independent instructions for every thread, which so far it looks doubtful they are accomplishing. -
Iceman0124 More news from nowhere
DX10 won't really do anything DX9 can't, but as the hardware and software mature, it will do things much more efficiently. Right now things are similar to when DX8 hit the street: there was nothing for it for quite some time, and when titles finally started trickling out, first-gen hardware really couldn't keep up with it.
DX10 Unnecessary - Id's John Carmack
Discussion in 'Gaming (Software and Graphics Cards)' started by ltcommander_data, Aug 7, 2007.