The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static, read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    AMD Blames DirectX for Poor PC Quality

    Discussion in 'Gaming (Software and Graphics Cards)' started by mushishi, Mar 18, 2011.

  1. FXi

    FXi Notebook Deity

    Reputations:
    345
    Messages:
    1,054
    Likes Received:
    130
    Trophy Points:
    81
    Well then he has no excuse. It is a ridiculous direction he wants to take things. If all his experience tells him "let's toss away 20 years of progress because I can't get the code to run fast enough", well good for him. I'm sure AMD can help us all by writing a driver of their own that bypasses OpenGL and DirectX yes? And that this driver will then be picked up by the top 10 or 20 developers around the world, yes?

    Don't hold your breath for that one.
     
  2. Richteralan

    Richteralan Notebook Evangelist

    Reputations:
    20
    Messages:
    416
    Likes Received:
    11
    Trophy Points:
    31
    Well, I am sure there's some agenda behind that, since AMD wants to push their CPU+GPU packages.

    And what most people don't know is that GPUs have chip-level defects. The driver can help correct those defects at the software level, which makes them transparent to most devs.
    Console chips have the same defects, but since the chip never changes, devs know what "not to do" with them.

    With such a gigantic variety of configurations in the PC world, I can't imagine anyone wanting to get rid of a standard API.
     
  3. Lithus

    Lithus NBR Janitor

    Reputations:
    5,504
    Messages:
    9,788
    Likes Received:
    0
    Trophy Points:
    205
    I don't think you understand what you just said.

    C/C++ are both compiled languages. It makes no sense to say that the code should "work on the GPU, natively". C/C++ both need to be compiled into assembly before they will run on anything. A compiler translates the language into the native assembly for the hardware (in the form of a binary or executable), which is then run.

    The point of an API is to allow you to code in C/C++ and have your code work on a GPU without specifying the exact architecture that you're working with. Now, I can see someone arguing that DirectX is a poor API - I don't know enough about it myself to make an informed decision - but to say that we should get rid of APIs for graphics programming, I just don't see how that's possible.

    In short, most games are already coded in C/C++. The only way this is possible is with a unified API.

    (And by the way, what the heck is "rendering code"?)
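
    A minimal sketch of what that abstraction looks like in practice (assuming the freeglut headers and an OpenGL driver are installed; purely illustrative): the same handful of calls compile once and run on any vendor's GPU, because the driver, not the game, supplies the hardware-specific implementation of each command.

```cpp
// Minimal GLUT/OpenGL sketch: one triangle, drawn through the API.
// The program never names a GPU architecture; the installed driver
// translates these calls into whatever the hardware actually needs.
#include <GL/glut.h>

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);                        // API call: "draw a triangle"
    glColor3f(1.f, 0.f, 0.f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.f, 1.f, 0.f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.f, 0.f, 1.f); glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();                            // vendor-specific work happens inside the driver
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("same code, any GPU");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```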
     
  4. gdansk

    gdansk Notebook Deity

    Reputations:
    325
    Messages:
    728
    Likes Received:
    42
    Trophy Points:
    41
    Okay, let me explain a few things. A renderer is something that draws; when it's implemented in software it's usually called a rasterizer. As it stands, the main code for an application is usually written in C or C++ (compiled for the native architecture of the CPU). The program uses library "hooks" to call either OpenGL or Direct3D. These commands are not implemented by Microsoft or Khronos, but rather by your GPU vendor (AMD, Intel or Nvidia) on their GPU, where most of the rendering steps occur. Currently the only way to affect how this is done (besides your API calls themselves) is to use shaders, which can replace entire steps of the rendering pipeline.

    Now, it is entirely possible to use parallel C code to do each step of rendering on the GPU, as has been done with CUDA. It has also long been common to have software renderers written in C targeting x86, especially for high-quality rendering systems. This would let developers write their own entire rendering backend, with their own trade-offs in quality vs. quantity, on the GPU, where currently they have comparatively little choice. If AMD and Nvidia both offered C/C++ compiled for their internal compute architectures, it would allow close-to-the-metal coding without any of the syntactic qualms of assembly languages.

    I'm fully aware of how CPU programs must be compiled for their target architecture. What I'm suggesting is that a similar solution would make sense for programming the GPU as well, although the compute cores will need to become more complex before that is feasible. If we can get the benefits of low-level coding (less need for round trips between the GPU and CPU, etc.) with little of the cost (a write-once, build-anywhere system), it sounds like the way to go. The continually increasing complexity of compute cores and the increasingly parallel nature of CPUs will eventually demand that the GPU and CPU merge their functionality. I see no reason to discontinue support for the current solution, but I think it'd be great to have fully programmable, massively parallel coprocessors (a true GPGPU).
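
    As a rough illustration of the "roll your own renderer in plain C/C++" idea above (not from the article; just a sketch): the toy rasterizer below shades one triangle entirely on the CPU and writes a PPM image, with every pipeline step, from the edge tests to the trivial "shader", under the developer's control rather than the API's.

```cpp
// Tiny CPU-only rasterizer: no OpenGL/Direct3D, every step is ordinary C++.
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

// Signed area of the parallelogram spanned by (b-a) and (c-a);
// its sign tells which side of edge ab the point c lies on.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main() {
    const int W = 256, H = 256;
    std::vector<unsigned char> fb(W * H * 3, 0);            // RGB framebuffer

    Vec2 v0{30, 30}, v1{220, 60}, v2{120, 230};             // triangle in pixel space
    float area = edge(v0, v1, v2);

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};
            float w0 = edge(v1, v2, p), w1 = edge(v2, v0, p), w2 = edge(v0, v1, p);
            if (w0 >= 0 && w1 >= 0 && w2 >= 0) {            // pixel inside the triangle?
                // Barycentric weights double as a trivial "shader": blend RGB.
                unsigned char* px = &fb[(y * W + x) * 3];
                px[0] = (unsigned char)(255 * w0 / area);
                px[1] = (unsigned char)(255 * w1 / area);
                px[2] = (unsigned char)(255 * w2 / area);
            }
        }
    }

    FILE* f = std::fopen("triangle.ppm", "wb");             // dump the result to disk
    if (!f) return 1;
    std::fprintf(f, "P6\n%d %d\n255\n", W, H);
    std::fwrite(fb.data(), 1, fb.size(), f);
    std::fclose(f);
    return 0;
}
```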
     
  5. H.A.L. 9000

    H.A.L. 9000 Occam's Chainsaw

    Reputations:
    6,415
    Messages:
    5,296
    Likes Received:
    552
    Trophy Points:
    281
    Larrabee. The idea was ahead of its time. I still think Intel will bring it to market sometime: 32 in-order, superscalar x86 cores (similar to Atom's architecture) on one card. If gaming can be handled by the compute functionality, or the rendering process can be moved wholesale from what it is now to DirectCompute or OpenCL, then maybe the whole "ditch DX" idea might work.

    I still don't see it happening in the immediate future.
     
  6. lozanogo

    lozanogo Notebook Deity

    Reputations:
    196
    Messages:
    1,841
    Likes Received:
    0
    Trophy Points:
    55
    Interesting article. It would be nice if, as suggested in the article, a robust yet flexible enough API could be developed. However, I could not understand this statement:
    "On consoles, you can draw maybe 10,000 or 20,000 chunks of geometry in a frame, and you can do that at 30-60fps. On a PC, you can't typically draw more than 2-3,000 without getting into trouble with performance, and that's quite surprising - the PC can actually show you only a tenth of the performance if you need a separate batch for each draw call. "

    Do you guys know how that is possible? Do consoles have some 'magical tricks' behind the scenes, or is information missing (like 10-20k before shaders and/or lighting, etc.)?
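
    One rough, back-of-the-envelope way to read that quote (the per-call costs below are assumed, purely illustrative numbers, not measurements): if every draw call carries a fixed CPU/driver cost, the frame budget alone caps how many batches can be submitted, and a thinner console API raises that cap by roughly an order of magnitude.

```cpp
// Hypothetical illustration of per-draw-call overhead vs. frame budget.
// The overhead figures are assumptions chosen for the arithmetic,
// not benchmarks of any real API or console.
#include <cstdio>

int main() {
    const double frame_budget_ms = 1000.0 / 30.0;    // 30 fps target: ~33.3 ms per frame

    const double pc_call_overhead_ms      = 0.010;   // assumed: app -> runtime -> driver round trip
    const double console_call_overhead_ms = 0.001;   // assumed: thin API over fixed hardware

    std::printf("PC:      ~%.0f draw calls fit in one frame\n",
                frame_budget_ms / pc_call_overhead_ms);      // ~3,300
    std::printf("Console: ~%.0f draw calls fit in one frame\n",
                frame_budget_ms / console_call_overhead_ms); // ~33,000
    return 0;
}
```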
     
  7. Richteralan

    Richteralan Notebook Evangelist

    Reputations:
    20
    Messages:
    416
    Likes Received:
    11
    Trophy Points:
    31
    Let me quote something by Roderic from Beyond3D:

     
  8. lozanogo

    lozanogo Notebook Deity

    Reputations:
    196
    Messages:
    1,841
    Likes Received:
    0
    Trophy Points:
    55
    Does that mean the comparison was apples vs. oranges? Sorry for asking again, but I am not well versed in these topics.

    Thanks.
     
  9. jacob808

    jacob808 Notebook Deity

    Reputations:
    52
    Messages:
    1,002
    Likes Received:
    0
    Trophy Points:
    55
    Notice the word geometry; they're talking about polygons here. Taking that into account, next-gen consoles were designed with specialized architecture refined to pump out polygons efficiently, whereas the PC is a more general-purpose machine that is used for more things than just 3D gaming.

    We need to back up and rethink how we programmed 3D graphics before we started down, and got stuck on, the polygon road. We need to go back to the common denominator that existed before the polygon-pushing technology of the GPU, which is the CPU, and we have now been given a different method of doing 3D graphics that doesn't require the complications of different GPU architectures and system setups. This new way is a promising 3D engine called the Unlimited Detail point-cloud algorithm.

    We just need the videogame industry not to be lazy and resist learning a new way of programming 3D videogames. You see, that catchphrase Mr. Huddy used, the ability to "program direct to metal", was also used back in 1994 when both the Sega Saturn and the Sony PlayStation 1 were released.

    To those that aren't old enough to have followed this battle of the next generation videogame console wars of 1994, I'll give you a brief account of what took place, and how we're in the situation we are today...

    In 1994, the GPU was in its infancy, and along with the PC, this brand-new technology would also be used in the very first so-called "next generation" of videogame consoles, more specifically in the Sony PlayStation 1, as Sega instead opted to use 2 powerful RISC CPUs aided by 2 VDPs.

    This decision on Sega's part to use an architecture based on supercomputing and massively parallel processing was to be the downfall of Sega's 32-bit videogame console.

    The Sega Saturn's architecture proved to be "ahead of its time", since game developers had to relearn how to program using 4 processors running in parallel in order to achieve substantial results in their games. This seemed too complicated for the lazy development community, and so the game industry instead chose to give full support to the much simpler development platform of the Sony PlayStation 1 architecture, which set the standard for a CPU and GPU that we still follow today.

    The marketing for the PlayStation won the hearts and minds of the average gamer, who didn't really understand the supercomputing and parallel processing offered in the Sega Saturn. Having hit critical mass and taken over the majority of the videogame console market, the PlayStation drowned out the potential of the Saturn, so all who were left to develop games for it were Sega's in-house development teams.

    Luckily, Sega had some of the best development teams in the world, such as AM2 and Sonic Team, who vowed to show the true potential of the Sega system by programming "directly to metal", and they came out with stunning videogames whose graphics could not be replicated on the PlayStation. If history had gone differently and other game developers hadn't abandoned the Saturn, but had instead taken a shot and put effort into learning how to program "directly to metal" on the Sega system, we would probably have many different and innovative methods of programming 3D videogames today, since Intel and AMD have now followed Sega's lead by implementing parallel processing in their new lines of dual- and quad-core CPUs.

    The only way now, besides what Mr. Huddy is proposing, is to use a system that doesn't prefer any one GPU over another but instead utilizes what is most common and standard in every PC, the CPU, and my understanding is that the Unlimited Detail point-cloud algorithm is this savior system for all of us PC gamers.
     
  10. classic77

    classic77 Notebook Evangelist

    Reputations:
    159
    Messages:
    584
    Likes Received:
    0
    Trophy Points:
    30
    1) There is a massive difference between PS3 and PC output: resolution (720p vs. 1080p with AA and goodies). And if you compare 720p to 720p, PC framerate (assuming you have a $1500 desktop) will be 4-5x the console's.

    2) The 360 and PS3 both use APIs.

    3) Imagine the stability issues if we got rid of PC APIs... the average game programmer doesn't even work with the type of logic/code that would be needed to program at a low level. It would hugely increase game costs by forcing teams to expand.

    4) The cost argument? Really? Again? I spent $1300 on my machine (see sig). The GAMING components on my machine cost about as much as a PS3. Those people with a PS3 also have a PC (probably without a great graphics card or a fast CPU). If they had NOT bought a PS3, but added the cost of one to the PC they had... they would have a gaming machine.

    Summary:

    There is no argument here. Saying that a ton of console fanboys can't see the difference between PC and console means nothing. I can. Anybody with objectivity and a trained eye can. Console lovers play consoles because they are probably not too interested in display resolution, framerate, AA, AF, tessellation, driver tweaking, overclocking, etc. Any survey of the ability of console gamers to tell the difference between console and PC would be inherently biased, not only because they like their console, but mostly because their demographic is substantially more n00b than ours.
     
  11. masterchef341

    masterchef341 The guy from The Notebook

    Reputations:
    3,047
    Messages:
    8,636
    Likes Received:
    4
    Trophy Points:
    206
    seriously? still?

    consoles and PCs are using the same types of hardware. the xbox 360 is using a triple core powerpc processor (a normal processor) and an ATI x1900 class GPU (a normal gpu)

    none of this has anything to do with CPU rendering, or Unlimited Detail (that's a company that is still seeking funding to work on a CPU rendering algorithm and engine)

    this is an issue about hardware optimization, and whether the DX API is getting in the way of performance.
     
  12. jacob808

    jacob808 Notebook Deity

    Reputations:
    52
    Messages:
    1,002
    Likes Received:
    0
    Trophy Points:
    55
    My intention was to bring us back to the root of this problem that was discussed in the article.

    Where we took a wrong turn in PC 3D graphics history and got stuck on this enclosed API/DX/GPU technology road.

    But now we come to a fork in this road that points in a new, innovative direction. Do we choose to ignore this turn-off (Unlimited Detail) and keep going down the same problematic road (API/DX/GPU upgrades)?
     
  13. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,878
    Trophy Points:
    931
    Sure, you can improve performance without DirectX, but it would segregate the market. Not to mention that if the GPU architecture changed, you would have to write emulators for backwards compatibility, and that would probably reduce performance more than DirectX does.

    Look at 3dfx. The Monster 3D card (built on their Voodoo chipset) was one of the first true consumer 3D accelerators and used the Glide API, which was pretty much proprietary to 3dfx, and even that was still an API and not direct-to-metal coding. Direct to metal means assembly language, the most ridiculously difficult way to program. I did some assembly programming in the early years of the Commodore 64, Tandy Color Computer, and Commodore Amiga. That was with very simplistic hardware, and it was a PITA to program even then. I can't imagine what it'd be like now.

    Unlimited Detail rendering is problematic too, in that it has zero development momentum or backing from any major corporation. I don't see Unlimited Detail's tech coming to fruition unless they can prove it's more efficient than DirectX and reduces the workload of the programmers. I don't see that happening.
     
  14. masterchef341

    masterchef341 The guy from The Notebook

    Reputations:
    3,047
    Messages:
    8,636
    Likes Received:
    4
    Trophy Points:
    206
    Again, the GPU is not the problem. This thread is about the first two components, the API/DX - you're literally trying to sneak the GPU onto the back of the list (API/DX/GPU) so that you can push the conversation towards polygon issues and talk about the *amazing* unreleased, unfinished, hyped-up tech of a private company (Euclideon).

    It's frustrating because you don't understand computer graphics and you completely fell for Euclideon's "unlimited detail" pitch, which is straight-up advertising, all because you are vulnerable, all because you are bothered that your GPU can't run Battlefield: Bad Company 2 at 32x AA. It's nuts. It doesn't make any sense to keep pushing someone else's private, unreleased, unfunded, maybe-one-day-could-be technology so hard.
     
  15. mushishi

    mushishi Notebook Consultant

    Reputations:
    137
    Messages:
    238
    Likes Received:
    0
    Trophy Points:
    0
    Yeah let's stop while this thread is still open. Having it closed because you can't stop pontificating about polygons and unlimited detail will be disappointing.

    I'd really like to see confirmation of what Huddy is saying about game developers. If they really do want to get rid of the API and program direct to metal, then I wonder what their proposal is.
     
  16. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,878
    Trophy Points:
    931
    Right there. It's kind of irresponsible to make such statements without offering an alternative to what you say sucks. I guess we're all speculating (that seems to be the norm for 99% of forums), but I hope he comes back with what he REALLY meant. It's kinda like being married: your spouse says one thing but means something completely different, and you're left to figure out what they really meant. :D
     
  17. Lithus

    Lithus NBR Janitor

    Reputations:
    5,504
    Messages:
    9,788
    Likes Received:
    0
    Trophy Points:
    205
    jacob808: Do yourself a favor and stop interjecting in technical threads. You have no clue what you're talking about. You're like the guy going into an automobile forum and insisting that everyone should switch to spaceships. A lot of us here have actually dabbled in DirectX/OpenGL. You playing games does not equal a developer making games.
     
  18. 2.0

    2.0 Former NBR Macro-Mod®

    Reputations:
    13,368
    Messages:
    7,742
    Likes Received:
    1,027
    Trophy Points:
    331
    PUBLIC SERVICE ANNOUNCEMENT:

    The ignore feature is there just for these sorts of emergencies.

    Use it. It's free. It's fun. And most importantly, it preserves sanity.
     
  19. masterchef341

    masterchef341 The guy from The Notebook

    Reputations:
    3,047
    Messages:
    8,636
    Likes Received:
    4
    Trophy Points:
    206
    must... find... ignore button... where is it?
     
  20. Harleyquin07

    Harleyquin07 エミヤ

    Reputations:
    603
    Messages:
    3,376
    Likes Received:
    78
    Trophy Points:
    116
    It's under control panel, one of the options on the left sidebar has the buddy/ignore list. Simple matter of adding the username into the applicable area.

    Back on topic: I don't really get the point behind the AMD manager's argument. There are only so many ways you can program games for PCs, and DirectX and OpenGL are the most familiar APIs out there. If PC gaming quality is really so poor, then it's quite possible that developers are being constrained by something other than the tools they have to work with.
     
  21. Richteralan

    Richteralan Notebook Evangelist

    Reputations:
    20
    Messages:
    416
    Likes Received:
    11
    Trophy Points:
    31
    It's not that developers are being constrained.

    Look at recent PC games. How many of them were developed from the ground up on PC using D3D10+?

    Probably fewer than 3, maybe even 0.

    Most recent PC games are console ports which use old D3D9. And we all know a post-release D3D10+ patch doesn't necessarily mean better performance and/or image quality.

    So it's the developers who chose to develop on D3D9, and then they complain about too much D3D overhead on the PC platform?

    And even with console ports we see a lot of bugs and performance issues on PC, whose hardware is supposed to be at least 5x better than the current-gen consoles.

    If those devs can't even get D3D9 PC ports right, how would you expect them to get code-to-metal right?

    I seriously don't get this "blame DirectX" argument.
     
  22. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,878
    Trophy Points:
    931
    Wait, where'd everyone go? Damn stealth button... must've ignored myself.
     
  23. Harleyquin07

    Harleyquin07 エミヤ

    Reputations:
    603
    Messages:
    3,376
    Likes Received:
    78
    Trophy Points:
    116
    You're reiterating the point I'm trying to make, all I'm saying is that the tools are there for developers to utilise. Perhaps the reason why we're not seeing higher-quality or more technically demanding games on the market might be due to the heavy influence of gaming consoles on the current industry environment or other factors like the need to keep games accessible to as wide a target audience as possible. The developers are likely making conscious decisions on whether to build games from the ground-up exclusively using the latest tools available to them or to keep their options open and use older APIs which are simpler and cheaper to utilise and cater to the gaming publisher's requirements.

    Speaking from the outside, I'm not privy to all of the factors taken into consideration when developing and publishing games. What I do know is that blaming the current state of the PC gaming industry on the developing tools is like putting the cart before the horse.
     
  24. Richteralan

    Richteralan Notebook Evangelist

    Reputations:
    20
    Messages:
    416
    Likes Received:
    11
    Trophy Points:
    31
    Yes, I was elaborating on your post.

    Apologies if I am too repetitive.
     
  25. Lithus

    Lithus NBR Janitor

    Reputations:
    5,504
    Messages:
    9,788
    Likes Received:
    0
    Trophy Points:
    205
    I think we're forgetting that most games are developed on an engine (Unreal, Gamebryo, CryEngine) and not straight DirectX. That's another level of indirection to consider.
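
    A tiny sketch of that extra layer (the class names here are invented for illustration): game code talks to the engine's renderer interface, and only the backend behind it ever touches Direct3D or OpenGL, so the engine's design choices sit between the game and the API.

```cpp
// Hypothetical engine-style indirection: game code never calls the graphics
// API directly; it calls an engine interface, and a backend implements it.
#include <cstdio>
#include <memory>

struct IRenderBackend {                       // what the engine exposes to game code
    virtual void drawMesh(int meshId) = 0;
    virtual ~IRenderBackend() = default;
};

struct GLBackend : IRenderBackend {           // one possible implementation
    void drawMesh(int meshId) override {
        // A real engine would bind buffers and call glDrawElements here.
        std::printf("GL backend drawing mesh %d\n", meshId);
    }
};

struct D3DBackend : IRenderBackend {          // another implementation
    void drawMesh(int meshId) override {
        // A real engine would call ID3D11DeviceContext::DrawIndexed here.
        std::printf("D3D backend drawing mesh %d\n", meshId);
    }
};

int main() {
    std::unique_ptr<IRenderBackend> renderer = std::make_unique<GLBackend>();
    renderer->drawMesh(42);                   // game code only ever sees the interface
    return 0;
}
```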
     
  26. Harleyquin07

    Harleyquin07 エミヤ

    Reputations:
    603
    Messages:
    3,376
    Likes Received:
    78
    Trophy Points:
    116
    Good point, although I'm not sure whether the passiveness involved is simply a matter of developers deciding to use a cost-effective solution that's already available rather than spend time and resources building an engine from scratch.

    Edit: Looking at the reply below mine, perhaps the AMD manager should be blaming the overreliance on established game engines in the PC gaming scene at present rather than the base APIs they were originally built from.
     
  27. lozanogo

    lozanogo Notebook Deity

    Reputations:
    196
    Messages:
    1,841
    Likes Received:
    0
    Trophy Points:
    55
    Exactly. In the article where Huddy mentions something like 'no wonder they all look similar' I immediately thought: UE3 or UE3.5 and the many games that use it.
     
  28. mushishi

    mushishi Notebook Consultant

    Reputations:
    137
    Messages:
    238
    Likes Received:
    0
    Trophy Points:
    0
    The only PC-exclusive developers I know of use engines too.

    Tripwire Interactive is the only PC-exclusive FPS developer I know of; their games run on the UE2.5 engine, and their next game, ROH2, will be on UE3.

    CD Projekt created The Witcher on the Aurora Engine, with the renderer made from the ground up. CD Projekt has already released a great video from the developers of their next engine for The Witcher 2. Does it count if CD Projekt made an engine exclusively for the PC? Maybe AMD will be proven wrong if CD Projekt's DX11 engine, made exclusively for the PC, destroys the performance of the console-port engines?
     