Well, I just got home from work. This was originally being discussed in the 5870 vs 460M thread that got closed before I could give a response to Pistacho, who asked,
"But, what are GFlops and TFlops? I know they are Gigaflops and Teraflops, but what do they measure? What do you mean with "raw processing power?"
Well, FLOPS refers to floating-point operations per second. It measures how many polygons or triangles a GPU can calculate, draw and move around on the fly in a second. This is the "raw processing power" we speak of when talking about a GPU's strength.
For those who don't know, 3D games build their scenes out of polygons, and the more polygons developers use for the models, characters and environments, the more realistic everything looks.
Game developers usually put models together out of triangles, and you need major computing power to calculate where all the triangles' vertices will be onscreen at any given time. A game such as the original Crysis can have up to 4 million polygons on screen at once, and this is where the floating-point operations come in.
The more the GPU can calculate, the faster the triangles can be painted on screen and the less lag you'll encounter, meaning higher framerates in game.
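Just so the unit itself is clear, here's a tiny illustration of what actually gets counted (nothing GPU-specific; the 1 TFLOPS figure is made up purely for the example):

# One 3-component dot product = 3 multiplies + 2 additions = 5 floating-point operations.
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

flops_per_dot = 5
gpu_rating = 1e12                  # hypothetical 1 TFLOPS GPU
print(gpu_rating / flops_per_dot)  # ~200 billion such dot products per second, in theory

A GFLOPS or TFLOPS rating is just that raw count of floating-point multiplies and adds per second.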
I want to go on with my explanation, but for now this will suffice, and if I'm wrong anywhere on this then feel free to correct me.
-
Architecture is more important, IMHO. Just look at how the HD5870 has 1600 shaders and the HD6970 has 1536, yet the HD6970 is still a fair bit faster... (though many things changed on the HD6970, not just the shader architecture).
Also, AMD's wide VLIW5 architecture often returns massive TFLOPS numbers, yet it has proven difficult to fully utilize in games, and even in GPGPU applications. VLIW4 was supposed to alleviate this, but it is still early in its maturity. -
Yes it is so very wrong.
What you are talking about is rasterization.
FLOP is just a simple measure of computing power on floating-point numbers, i.e. decimals. These calculations are done on the programmable shader units, not the ROPs.
And as JeremyShaw said, how a computer calculates is the important part, which is why a CPU, despite having fewer FLOPS than a GPU, is still far superior for some operations while the GPU is superior for others.
And as others have said (which you continuously ignore), it's not polygons that take most of the processing power on GPUs, it's shaders. It's not the number of polygons that makes Crysis difficult to run; it's the number of shading operations that kills GPUs. If it were just polygons, no GPU would have any issue with Crysis. Throwing polygons up on a screen is simple and easy for any modern GPU.
A Mobility HD 5870 is capable of 700 MILLION polygons per second, which far exceeds Crysis' 4 million. Again, it's not the polygons, polygons are easy for GPUs; it's the shader processing and post-processing: the shadowing, ambient occlusion, HDR, depth of field, etc.
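Just to put those two numbers side by side (a back-of-envelope only, using the figures quoted above):

# If raw triangle throughput were the only cost:
peak_polys_per_sec = 700e6   # quoted peak for the Mobility HD 5870
polys_per_frame = 4e6        # quoted worst case for the original Crysis
print(peak_polys_per_sec / polys_per_frame)   # ~175 frames per second

If geometry alone were the bottleneck, Crysis would run at well over 100 fps on that card; the frame time is obviously going somewhere else. -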
FLOPS - floating point operations per second (the 'S' is not optional).
Basically, it's one indicator of raw performance. Graphics involves a lot of floating-point math - everything is generally a 3D vector (x, y, z), all floats or doubles. The vectors get manipulated to draw whatever you see onscreen, so you're talking millions of vectors being added and multiplied every fraction of a second. Theoretically, the more FLOPS a GPU is capable of, the faster it can perform those operations, and therefore it can render more frames quicker.
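A minimal sketch of what "manipulating a vector" costs (the scene size and framerate below are made-up numbers, and 28 is just the naive operation count for one 4x4-matrix-times-vector transform):

# Transforming one vertex by a 4x4 matrix: each of the 4 output components
# needs 4 multiplies and 3 additions.
flops_per_vertex = 4 * (4 + 3)        # = 28

vertices_per_frame = 1_000_000        # hypothetical scene
fps = 60
flops_needed = flops_per_vertex * vertices_per_frame * fps
print(flops_needed / 1e9)             # ~1.68 GFLOPS for this single transform alone

And that's one transform of one million vertices; add lighting, skinning and per-pixel shading and the totals climb fast, which is why GPU ratings run to hundreds or thousands of GFLOPS. -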
Because PC gaming and hardware were in their infancy, I wondered what the problem was and how I could alleviate it. As the months went by, the pioneers of PC gaming hardware and software were coming out with solutions; one such solution was having the Falcon 3.0 simulation utilize a math coprocessor (this, in my opinion, was the first prototype of the idea that became the GPU). The purpose of the math coprocessor was to help with the floating-point calculations and enable the high-fidelity flight model. And even though I still wasn't able to achieve ideal framerates with a math coprocessor, it did help produce higher framerates.
Since this was back in the early '90s I can't recall it so well, but I also read articles saying the math coprocessor helped with computing the polygons, and back then shading and lighting weren't even on the map.
A couple of years passed, and I was following stories in PC magazines about a new technology being developed that would help accelerate graphics. At this time 3D gaming was also in its infancy, and many companies were trying to increase framerates in software by developing new programming techniques that didn't rely on system-taxing polygons. One such company was Novalogic. My understanding was that Novalogic came up with a way to do 3D using sprites instead of polygons because it didn't require as much calculating power.
Anyway, going back to that new technology: the GPU finally came to market, and the next generation of video game consoles was also going to utilize this new tech, and it was all about how many polygons these GPUs could push. So as video game software was being pioneered, polygons were everything, but again as the years went by, developers noticed framerates were a problem, and it took a while for hardware to catch up and become capable of pushing millions of polygons.
Knowing this, some years later I read that Microsoft was putting out a set of tools developers could utilize to ease game development, called DirectX. And I thought, "Ah, they realize we need a new way of programming because polygons are just too taxing."
Which leads us to today. Although we now have hardware that can push millions of polygons per second, the game development community still pushes the envelope with huge polygon counts that would cripple modern hardware. I'm not just talking flat polygons, but textured, shaded, 3-dimensional ones.
The idea, though, is to move away from high polygon counts, even though we have capable hardware, and to use better lighting, shading, and new software technology like tessellation and all that good stuff in DX11, and so you're right. Now hopefully the focus will be on these special effects to make a game look more realistic, rather than on polygons. Nvidia intuits this trend, so they gear their GPUs not so much toward pushing polygons now but make them more specialized for those special effects of the DX software tools, and then you can see where CUDA and PhysX come in.
It will be about software that tweaks the hardware in order to get optimum performance. And Nvidia is taking us into this future with their market share and development community support.
THIS! Thank you Lithus! -
Note the last caveat: theoretically, more FLOPS equals faster frame rendering. Considering games aren't all about floating-point operations, as already mentioned, comparing FLOPS just to prove the superiority of one brand of GPU over another is not a completely objective comparison.
Not everyone here is under 18 and coming over from the console scene to PC gaming. -
That's why 4 million textured, shaded (and so on and so forth) polygons moving in sync on screen at once in Crysis will destroy a 5870. And yes, you are correct that the shadowing and such will also tax the GPU, but first and foremost, the single most important thing in a virtual 3D game world is the polygons that make it up and how many there are, in order for a given GPU to calculate it all at a playable framerate of 30 fps.
At this time, though, we gauge a GPU's strength by how many FLOPS it has. When polygons become insignificant as far as video game development goes, then we might use something else.
The trend does show that with DirectX, they are trying to use different programming techniques in software that require fewer computations than computing polygons. I'm sure breakthroughs will come through CUDA and PhysX. Nvidia already has lighting algorithms such as HBAO that hardly tax their GPU architectures, and tessellation is used to substitute for lower polygon counts.
Also, Nvidia is delving into stereoscopic 3D gaming by fooling the eyes with two images, which says a lot, because so far the traditional way of doing 3D is by creating environments with polygons. If they can get rid of calculation-intensive 3D polygons and instead use sprites, like in 2D games, which are less demanding on hardware, and marry the two into the 3D technology they are pioneering, then it changes the whole ball game. -
I keep getting the impression I'm reading something that's been lifted wholesale from an external source without proper understanding; it might just be me, though. The shift from hardware Transform & Lighting in the early 2000s to the use of shaders to emulate those same functions illustrates, for me, the change in focus from raw polygon output (the de facto standard of the late 1990s) to more sophisticated lighting and rendering techniques with the latest APIs.
One analogy that helps me understand the current situation is the change in CPU benchmarks from pi calculations to standardised benchmarks that make use of multiple cores. That's not to say pi calculations are completely irrelevant or an obsolete benchmark, only that they no longer provide an objective comparison when viewed in isolation.
The fixation on FLOPs just to prove the superiority(?) of Nvidia over ATI cards isn't sufficiently comprehensive or objective as a standalone benchmark. -
In fact, I'll submit and say that, if the info I've read is true, the 5870 is more powerful than the 460m because of the difference in FLOPS. -
The point I'm trying to emphasise (and I think others might agree) is that FLOPS is BUT ONE of MANY indicators of a card's performance. The general conclusion among most forum users here is that the 460m and 5870m are evenly matched and trade blows across different benchmarks. The key difference between the two competing cards mostly comes down to price and the games being played on the systems.
-
I'm not trying to get too convoluted, but just to make the point that there are many other factors besides FLOPS. -
One more way of putting it...
HD5870 = 2700 GFLOPS (2.7 TFLOPS)
GTX580 = 1580 GFLOPS (1.58 TFLOPS)
I'm *sure* the HD5870 is faster... wait... 80% of that is only simple MADD? Curses, only 20% of it can handle transcendental math?
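For anyone wondering where those ratings come from, they're just the usual peak-rate arithmetic (sketch below, using the commonly quoted 850 MHz core clock; the 80/20 split is because only one of the five lanes in each VLIW5 unit handles transcendentals):

# Peak single-precision rate = shader ALUs x 2 ops/clock (multiply-add) x clock
hd5870_alus = 1600            # 320 VLIW5 units x 5 lanes
hd5870_clock = 850e6          # 850 MHz
print(hd5870_alus * 2 * hd5870_clock / 1e9)   # ~2720 GFLOPS peak

# Only 1 lane in 5 (the "T" unit) handles transcendentals (sin, cos, rcp, ...)
print(320 / 1600)             # 0.2, i.e. the "20%" above

The rating assumes every lane issues a multiply-add every single clock, which real shader code rarely manages. -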
masterchef341:
@lithus - the S is not required, but the meaning would change. You can say FLOP to denote a single floating point operation. FLOPS in that context would mean multiple floating point operations.
Of course, this means something different than FLOPS as floating point operations per second, so maybe we are on the same page. Both versions are commonly used.
@jacob - no. first - all polygons are flat, all polygons are 2-dimensional. also, these two words (flat, 2D) mean the same thing. next, for ages in CS time (several years) we've known that adding more polygons is a poor way of increasing model quality, because there are more efficient ways to add quality both in production AND in rendering.
File:ParallaxMapping.jpg - Wikipedia, the free encyclopedia
File:Normal map example.png - Wikipedia, the free encyclopedia
those are two simple techniques by today's standards that have been around for a while. there are also more complex techniques that increase model detail without involving more polygons.
at this point, it's all about lighting and shading. the end. that's where all the computational complexity goes.
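to make "detail without polygons" concrete, here's a toy per-pixel normal-mapping sketch (my own illustration, not real shader code): the geometry stays a single flat polygon, but lighting is computed against a normal fetched from a texture, so the surface looks bumpy.

# Per-pixel Lambertian lighting using a normal read from a "normal map".
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def shade(normal_from_map, light_dir, albedo=0.8):
    n = normalize(normal_from_map)
    l = normalize(light_dir)
    n_dot_l = max(0.0, sum(a * b for a, b in zip(n, l)))  # Lambert term
    return albedo * n_dot_l

# Same flat polygon, two different normals sampled from the map:
print(shade((0, 0, 1), (0.3, 0.3, 1)))        # flat-looking pixel
print(shade((0.4, 0.1, 0.9), (0.3, 0.3, 1)))  # "bumpy"-looking pixel

no extra vertices involved - all the extra work happens in the pixel shader, which is exactly the kind of math a FLOPS rating counts.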
---
i even found it verbatim on wikipedia - GPUs were originally all about rendering polygons and mapping textures onto the polygons. Now it is all about shading:
Modern GPUs use most of their transistors to do calculations related to 3D computer graphics. They were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons, later adding units to accelerate geometric calculations such as the rotation and translation of vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders which can manipulate vertices and textures with many of the same operations supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces. Because most of these computations involve matrix and vector operations, engineers and scientists have increasingly studied the use of GPUs for non-graphical calculations. -
It's just like comparing RISC to CISC processors. A RISC architecture can usually do more operations per second for certain workloads, but it lacks complex instructions, and complex operations like Intel's MMX instructions would probably be very processing-intensive to emulate on a RISC architecture. Moreover, if it's those complex operations you're going to be using most of the time, an Intel chip will probably be a lot more efficient than a Sun processor at those tasks.
As for games, most have reduced polygon counts quite a lot and moved toward shading, as said. It used to be normal mapping, but tessellation (with displacement mapping) is probably going to be the in thing now. -
masterchef341:
it's also important to note that the tessellation process itself is non-trivial and fairly computationally intensive. also, tessellation by itself doesn't do anything visually - it just splits polygons into more polygons in a reliable and useful way, and splitting a triangle into two or more equivalent triangles doesn't change the rendered image. however, when used in tandem with a displacement map, you can generate more complex geometry without having to model the additional 3D detail.
So the rendered poly count increases while the production poly count stays low. This takes advantage of the fact that our hardware is already really good at pumping out a ton of polygons, but it's hard to get all those polygons right in the studio. In other words, the hardware engineers decided it was worth building a (complex-in-hardware) tessellation engine to generate a ton of polygons (easy for the hardware) to add more detail.
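a rough sketch of that idea (a toy example of mine, nothing like the real DX11 hull/domain shader pipeline): subdivide each stored triangle, then push the new vertices outward by a height value standing in for a displacement-map lookup.

# Toy "tessellation + displacement": split one triangle into four smaller ones,
# then displace every vertex along a fixed normal by a fake height function.
def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def tessellate(tri):
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def displace(v, normal=(0, 0, 1)):
    height = 0.1 * (v[0] + v[1])      # stand-in for a displacement-map sample
    return tuple(p + height * n for p, n in zip(v, normal))

triangle = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
dense = [tuple(displace(v) for v in tri) for tri in tessellate(triangle)]
print(len(dense), "rendered triangles from 1 stored triangle")

run the subdivision a few more times and one authored triangle becomes hundreds of rendered ones, which is the whole point. -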
Well, there is a new system in development to make graphics completely "infinitely detailed". Check this out: YouTube - Unlimited Detail Technology
-
FLOPS is not a definitive standard for judging performance or quality of graphics cards. It's just one factor among many.
Compare it to cars. Sure, horsepower matters, but just because your car has 250 hp doesn't mean it's guaranteed to perform better or go faster than another car with 235 hp. -
Let's take some hard numbers:
The desktop Radeon 6970 is rated at 2.7 TFLOPS single-precision and 675 GFLOPS double-precision. A desktop GeForce GTX 580 is only 1.581 TFLOPS single-precision. Yet when you look at gaming performance, the 580 beats the 6970 in almost everything, while having little more than half the FLOPS.
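Those ratings are just peak-rate arithmetic, by the way (sketch below using the commonly published shader counts and clocks; sustained throughput is always lower):

# Peak rate = shader units x 2 FLOPs/clock (multiply-add) x shader clock
gtx580 = 512 * 2 * 1544e6    # 512 CUDA cores at a 1544 MHz shader clock
hd6970 = 1536 * 2 * 880e6    # 1536 stream processors at 880 MHz
print(gtx580 / 1e12, hd6970 / 1e12)   # ~1.58 vs ~2.70 TFLOPS single-precision
print(hd6970 / 4 / 1e12)              # Cayman runs double-precision at 1/4 rate: ~0.68 TFLOPS

Same formula for both cards, very different results in games, which is exactly the point.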
Repeat after me: FLOPS are not everything. They are important, but there are a large number of factors that go into GPU performance in gaming.
If you want a car analogy, horsepower isn't everything. You have to look at torque, horsepower, fuel efficiency, vehicle weight, etc. to have a meaningful comparison. A diesel Ford F-350 isn't going to keep up with a stock Corvette in a race, even though the truck turns 400 HP and the 'vette is about 430. -
Thank you Pitabred.
I'm not sure what jacob808 is going after, but there's a reason tons of benchmarks are run with every video card to assess its performance in a given game or program. Otherwise they could just publish the FLOPS number and be done with it. Same thing with CPUs. Sure, in the P4 era you could more or less know the performance because such-and-such CPU was 2.0 GHz, but these days a 2 GHz mobile i3 will blow the doors off a 2 GHz Pentium 4 dual core.
Oh, and sign me up for the F-350 any day! -
If you've read an earlier locked thread regarding notebookcheck data between the 460m and the 5870m, the obsession over FLOP becomes less of a mystery.
The car analogy is a great one; horsepower (FLOP?) isn't the best comparison between different models. -
masterchef341:
in the name of sanity...
-
Seeing how no one believes me, I think a lot of the members on this forum should watch this video, since it'll educate them on how video games are traditionally made and why, if the development community keeps going in this direction, we will always need to upgrade our GPUs in order not to have lousy framerates with each new game release.
Now, I'm sure both Nvidia and ATI are suppressing this new technology because it's all done in software, at least that's my understanding, and it has done away with polygons, which are the basic building blocks of 3D video games.
It reminds me of what Novalogic, the makers of Comanche, were trying to do back in the '90s. I'm not sure what happened to them, but I heard their technology has been used in the medical field.
Damn, I plan on following how this technology progresses, because this is what we need to have games looking 10 times better and running 100 times faster than the not-even-released-yet Battlefield 3.
Thanks for the link, Rot! It has redeemed me on this forum and all that I've been saying. Now hopefully others won't try to suppress this thread and the link you posted, because it's powerful technology that could change the whole face of video gaming, both software- and hardware-wise. -
This is the real deal! What I imagined for years has finally come true!
Here are parts 1 & 2 of the video.
http://www.youtube.com/v/Q-ATtrImCx4
http://www.youtube.com/v/l3Sw3dnu8q8
It's a huge breakthrough in video game technology.
Even at the end of the second video, the commentator says they're trying to team up with ATI or Nvidia, if either of the two will agree. Most likely, though, this would stop those two GPU hardware giants from making a profit, since no one would have to upgrade to newer, more powerful GPU technology anymore. So if both companies refuse, they would use their tech in cell phones, less powerful mobile devices and even the Nintendo Wii, with the commentator remarking that with this new tech the Wii would surpass the PlayStation 3 in rendering power just because of this new programming technique! The thing I imagined all my life has manifested!
ATOMS! HAHAHAH! That's what they call the points they use in place of polygons! I always thought that if video games wanted to progress, we'd have to mimic real life, and instead of building a world out of polygons, one day we would build a virtual world out of individual cells! Cells = atoms! *lightning flashes* Bwahahahahahaah!
http://www.youtube.com/v/JWujsO2V2IA -
The "unlimited detail" thing is a bunch of bull. You really have to stop getting so excited about a topic you don't really understand.
Basically, the proposal is to build a gigantic search tree over the "atoms" and then only process the ones within the camera's view. This is all well and good until you realize just how gigantic that search tree would be. It's not going to fit into VRAM and it's not going to fit into main memory. Have fun querying your HDD every frame. Notice how all the demonstrations are basically the same objects repeated over and over. The data for a full, unique scene wouldn't even fit on an HDD.
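A quick, heavily rounded sanity check (all the numbers here are my own assumptions, just to show the scale):

# Hypothetical outdoor scene: 200 m x 200 m of unique surface detail,
# sampled at one point per square millimeter, ~16 bytes per point
# (position + color + normal, packed generously).
area_mm2 = (200 * 1000) ** 2        # 4e10 square millimeters
bytes_per_point = 16
print(area_mm2 * bytes_per_point / 1e9, "GB")   # ~640 GB of raw points

And that's a single flat surface with no overhangs, no interiors, no repeated-object trickery and no search-tree overhead on top.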
Furthermore, notice the distinct absence of movement except for the camera. When you calculate movement in a polygon system, you move a few vertices and the entire texture moves with them, since the texture is mapped to the polygon. To move a voxel system, you need to move every single "atom". That's millions of operations to get a character to lick their lips.
Finally - lighting. Polygons have normals that can be calculated. Voxels require a separate calculation per "atom". There's a name for that type of lighting - it's called ray tracing. Instead of frames per second, you're going to get hours per frame. -
You see, if this tech is real, not only will we NOT NEED to spend thousands of dollars on expensive hardware upgrades just to play the latest and greatest graphics with each new generation of video games, it would work even on a lowly cell phone or netbook. Video game developers will also have to relearn how to "develop" games using the new technique. A lot of people specialized in doing things the "old polygon way" will be upset and will resist having to relearn their jobs.
But who knows? Maybe they have already struck a deal with Nvidia, since we can see a trend of Nvidia's hardware focusing more on shading and lighting rather than FLOPS or raw polygon processing power, and the Unlimited Detail technology has done away with the need for polygons, which is just what I've been saying would alleviate the framerate problem all along. -
This tech is BS any way you turn it; the simple calculation of the number of points needed makes it impossible.
-
With polygons you're still calculating where every point/vertex of thousands, even millions of polygons has to be onscreen, even if it's not visible. But with Unlimited Detail, only what's drawn in each pixel, say at 1280x720 resolution, needs to be calculated on the fly. This is very much the same principle as 2D sprites, and why 2D sprites don't require huge amounts of processing power.
Again I could be wrong, but this is how I understand it. -
-
With more modern games such as ArmA it's the same principle, but modern games just up the polygon count to smooth out the angles, which in turn takes more processing power and requires more expensive hardware to run.
Unlimited Detail technology will alleviate this problem for us by doing away with the intensive calculations of processing more and more polygons, thanks to its new algorithm.
It's a win for ALL gamers everywhere. If this technology takes off within the next year, we won't have to buy another, more expensive computer with a more powerful GPU or CPU just to play the next Battlefield or Crysis or Call of Duty that comes out.
Hopefully when they release the commercial SDK, game developers everywhere will adopt the technology and make games with it. Then we'll no longer have Nvidia or ATI fanboys, or PlayStation or Xbox fanboys, because the same game will be playable on your current system, or even a cell phone, with no degradation of graphics. Meaning Alienware M17x graphics will be the same on your cell phone. -
I'm closing this thread for two reasons:
1. The original subject has been sufficiently addressed.
2. The thread has gone off topic with material that would best be served in another thread of its own.