The question is, "Can the PS4 compete with a 680M?"
A: Nope. The PC will always be ahead of consoles in terms of hardware, especially with a top-of-the-line mobile GPU like the 680M.
It's really misleading when people say the 680M will "last only another 2-3 years", because on the PC platform a GPU becomes "outdated" as the graphical demands of games increase over the years, to the point where you can't play new games at max settings anymore. On the PS4, game developers will limit the graphics settings so the hardware (which will never change until the release of the PS5) can run their games.
If you're talking about graphical capabilities, the 680M is, and always will be, superior to the PS4, if the rumors/speculation about the actual PS4 hardware are true. Graphics you see on PC will always be better than on console (unless Sony/Microsoft develop exclusive supercomputer chips).
-
-
HopelesslyFaithful Notebook Virtuoso
-
Look at the jaggies in the PS4 gameplay. I'm pretty sure the GTX 680M will smack a PS4 like it came home and didn't have dinner, if we leave out the AA and MSAA and enjoy the jaggies that are the PS4. -
-
My issue with this whole thing is economics and physics. While I know nVidia charges a fortune for their technology, it usually stands to reason that the more expensive device runs faster than the cheaper one once you remove styling. Heck, even take into account AMD's own 7970M. People are saying a $300-$400 entire SYSTEM running at less than 100W (an assumption based on various articles) will outperform a CPU and GPU costing at least twice as much and using 150W+? Not only that, but we know the computing power of the PS4 is half that of the 7970M. So even if you write directly to the metal, I don't see how you will overcome those limitations. I think it will manage 1080p/60 just fine to be honest, but it can't hold a candle to an Ivy Bridge quad core + 680M or 7970M.
-
-
-
Not to mention those clever tricks will be running on x86 hardware with a standard PC GPU. Those code optimizations can easily be made for the PC too.
-
I think a 680M should hold up pretty well. Normally a console has the advantage of optimization on mediocre hardware. But this time the consoles are getting PC hardware, which means ports to PC should carry over similar optimization. You must remember that a single 680M costs as much as the new console is likely to cost; it's likely that the PS4 won't match this level of performance unless AMD drop their pants on the contract and give a cheap comparable package, which is entirely possible.
-
For all the snobby, elitist, PC-gaming console haters out there, look on the bright side: you won't have to complain about poorly optimized console ports anymore, since the next-generation consoles are basically fixed-platform PCs running x86-64 CPUs and Radeon GPUs. Your quad-core and greater CPUs are going to realize their potential as developers finally take multi-threading seriously. Hopefully, games in the future will be made with the PC as the lead platform and incrementally pared down for the consoles, not the other way around.
-
I don't know about that. As stated repeatedly, the PS4 will be "coded to the metal" (shouldn't it be silicon instead of metal?), which means assembly code, which means it might as well be any other platform on the planet. It will still need to be compiled for DirectX. And since assembly code is the lowest level of programming, it will likely still be ported from console to PC.
-
You'd be amazed how much of a difference optimization makes. For example, the older PS3 and Xbox 360 made really good use of very limited VRAM to make titles like FF13 look absolutely stunning, whereas the same effect would need two to four times as much VRAM on the PC.
That being said, the PS4 is much more PC-like now with an x86 processor, so I'd imagine ports would be much easier. The GPU is essentially a midrange Radeon with a massive 8GB GDDR5 pool. -
I think the cliche line "you get what you pay for" is perfect for this topic. If you want performance (and awesome cheaper games, mods, customization, etc.), you have to drop the big bucks. -
Not to mention BF3 on PC supports 64-player games whereas consoles are limited to 24 players. However, I'm unsure if that's because of hardware limitations or server costs being limited on EA's end.
-
Kade Storm The Devil's Advocate
-
TheBlackIdentity Notebook Evangelist
-
Nope, Sony's dev libraries use their own API, not DirectX.
-
The PS4 is using AMD HD 7000 series graphics and an AMD processor, just like the Xbox.
They will not be using 6 custom processors like on the PS3, so games will not be delayed by the custom programming needs of past PlayStation systems; no optimizing will be needed.
The consoles are essentially PCs without the Windows operating system to slow them down, but with a much slower HDD. The Xbox is supposed to have DDR3 memory compared to the PS4's GDDR5. -
Software is used to drive the hardware. You can have Mac OS on the same hardware as a Windows machine, but Mac OS can't run DirectX 11 because that's Microsoft's. So the hardware doesn't matter, period. It will still have to be ported to the DirectX API regardless. Maybe it will be a bit easier because the hardware is similar, but it doesn't reduce the task by a whole lot. The Xbox *COULD* use DirectX because it's a Microsoft API, and that would port easily to PC, like the original Xbox.
-
The Eurogamer Orbis vs. Durango spec analysis had a good section on differences in the development tools of the two consoles:
-
Buy both; the trick is knowing when to make the investment and planning accordingly. I have always owned a PS ever since they came out, but I never waited in line to buy mine. I typically waited until the price came down, or until the newer slim model came out. I had no problem doing this because I had either a decent desktop or laptop that allowed me to play my favorite games. I did buy a fat 40GB PS3, but as soon as the slim models came out I sold it and upgraded.
So you can buy a nice gaming laptop now, and by the time it starts to become obsolete, the PS4 will probably have come down in price and possibly be available in a more efficient/slim model. I like having the best of both worlds with a console and a gaming computer, and certain games play better on each respective system (controller vs. mouse and keyboard) IMO. Besides, I agree with some of the other posters who say you really can't compare the two. A laptop is just capable of so much more than a modern console. One is made specifically for gaming and runs on a limited OS, and the other is made for so many different tasks and can be configured to run several different OSes. In my case I need a laptop, and gaming is secondary, so that choice was easy. -
HHHHHHHHH the 680m is better bro.
-
Rumors peg it as a GCN GPU with roughly 1100 cores, most likely similar to the current 7000-series GPUs.
-
Kade Storm The Devil's Advocate
A portion of the quote from the article that Kevin referenced; the bold portion is my emphasis.
This last portion is very misleading, erroneous and--potentially speaking--factually unfounded, despite acknowledging that the GPU is only one part of the overall hardware. Why? It clearly fails to demonstrate a full appreciation of how those results are achieved on the PS3, and simply sums it up as magic dust forged between the API and the RSX, despite the acknowledgement of the Cell.
It is *not* magic being extracted from the RSX, but rather a great deal of development done in conjunction with the Cell processor that allows for a huge amount of post-processing, which gives many of these games their 'next gen' sheen. And even then, while they looked amazing, they didn't exactly trounce what was being accomplished at higher resolutions on the PC side. It's one of those things that lacks grounds for well-informed speculation because we're dealing with exclusives.
Let's have a look at how much of a role the Cell really plays in the equation, from an actual PS3 exclusive title. If even Guerrilla Games can be forthcoming about this, then we can just ignore that bolded part of the quote entirely, because to say that they're getting all of this out of the RSX is just overreaching, even in the context of the overall article. -
-
Kade Storm The Devil's Advocate
You might be interested in the following, because Guerrilla Games is only one vocal example out of many.
Here's Resistance:
http://www.insomniacgames.com/tech/articles/0108/files/RFOM_Debriefing_public.pdf
Warhawk:
Inside the Developers Studio: Dylan Jobe – PlayStation Blog
MotorStorm:
http://www.beyond3d.com/content/interviews/38/1
Uncharted:
Q&A: Naughty Dog's PS3 Project - IGN
http://www.naughtydog.com/docs/Naughty-Dog-GDC08-UNCHARTED-Tech.pdf
Resistance 2 (Power Point Presentation):
http://www.insomniacgames.com/tech/articles/0208/files/insomniac_spu_programming_gdc08.ppt
Ratchet and Clank (heralded for the vastness of what was taking place on-screen, at the time):
http://www.insomniacgames.com/tech/articles/0108/files/spu_shaders.pdf
Final Fantasy XIII (a multiplatform title that was actually more impressive on the PS3, for once):
Final Fantasy 13 to Use "Nearly 100%" of PS3
DiRT:
Interview with Simon Goodwin of Codemasters on the PS3 game DiRT and Ambisonics. | Etienne Deleflie on Art and Technology
inFamous:
Infamous Q&A: Good Versus Evil - GameSpot.com
And this part about inFamous is rather telling:
It's rather boring to sift through this stuff, but in my opinion it beats the 'coding to the metal' meme, because at this point, getting all of the above through just the RSX would be coding from thin air. Sure, it's not as phantasmic and mystical an explanation, but that was never the right way to approach this subject. Coding to the metal is a relevant point, but the yield isn't this bloated two-fold/three-fold stuff that one often hears. It is way overused and has now been reduced to a meme used by specific kinds of folk to misrepresent the potential of console hardware in order to feed some very wild speculation (i.e. two-fold performance, etc.). I still stand by my opinion that even the current generation consoles aren't that much better than their--roughly speaking--equivalent PC hardware counterparts. -
-
And the design for the Cell processor is from the '70s. What made the interconnected bus over multiple parallel processing elements interesting was that it was finally possible to execute and complete each of these elements within the space of, say, 1/30th of a second. That's the difference between having lots of independent runs that could then be spliced together at some undetermined point in the future, for parts of a static wave model or something like that, and being able to use parallelized code that is supposed to complete within each new frame generated for the graphics card.
But people, generally speaking, are about as technically advanced as a binder, including people at games magazines, who of course don't let that stop them from opining about the hardware. So none of it matters, even when it's actually used to create extremely impressive results in terms of animation and scene complexity, per-object effects, object interference effects, etc.
For example: very early on, one of the major demos some engineer at IBM had made was a way to create a shadow based on how the actual light hits the object, rather than the way it's done now, which is to flatten the model and then cast a black impression of it down onto the scenery. "Advanced" effects eventually allowed slightly transparent shadows - this took 10 years. But what happens, of course, is that the shadow isn't accurate: it doesn't respond to lighting conditions, you don't get several light sources and therefore shadows, you don't get shading on the back of objects that corresponds to the angle the light comes in at, you don't get shadows moving in the same movement as the torch that is the light source, etc., etc.
So here someone shows how it's possible to use, and complete, with fairly light code, high-level, complex math that executes every clock cycle to generate the "raycast" result in real time.
And no one sees the value in it, because no one knows about it, and your average games-journalist is as technical as a binder, but will flap their gums about the tech anyway.
Either that, or people genuinely want to go back to 8-bit or something. Which would obviously be so much more awesome than limited real-time ray-casting. -
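A minimal sketch of the ray-cast shadow idea described above (a toy scene with a single sphere occluder, purely for illustration, not anything from the actual IBM demo): for each visible surface point, cast a ray toward the light and check whether anything blocks it before it gets there.
```python
import math

def in_shadow(point, light_pos, sphere_center, sphere_radius):
    # Direction from the surface point toward the light, and the distance to it
    d = [l - p for p, l in zip(point, light_pos)]
    dist_to_light = math.sqrt(sum(x * x for x in d))
    d = [x / dist_to_light for x in d]

    # Ray/sphere intersection: solve t^2 + b*t + c = 0 along the unit-length ray
    oc = [p - s for p, s in zip(point, sphere_center)]
    b = 2.0 * sum(dx * ox for dx, ox in zip(d, oc))
    c = sum(x * x for x in oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return False                      # the shadow ray misses the occluder
    t = (-b - math.sqrt(disc)) / 2.0
    return 0.0 < t < dist_to_light        # hit lies between the point and the light

# Floor point directly below a hovering sphere, light overhead -> shadowed
print(in_shadow((0, 0, 0), (0, 10, 0), (0, 5, 0), 1.0))   # True
# Floor point off to the side -> the shadow ray misses the sphere -> lit
print(in_shadow((5, 0, 0), (0, 10, 0), (0, 5, 0), 1.0))   # False
```
Doing this per pixel, per light, per frame is exactly the kind of work that either needs the shader cores or something like the Cell's parallel elements to finish inside the frame budget, which is why the flattened shadow-map shortcut won out instead.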
Probably been said already, but a console need only produce 30 FPS constantly whereas a PC needs to pump out 60 or more on average. If you level the playing field and ask yourself if you can get 30 frames a sec with a 680M down the road, you'll find yourself saying "yes" a LOT more than you thought you would.
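As a rough illustration of the frame-budget difference behind that 30 vs 60 FPS point (simple arithmetic, assuming a fixed target frame rate):
```python
# Milliseconds available to render one frame at a given target frame rate.
def frame_budget_ms(target_fps: float) -> float:
    return 1000.0 / target_fps

for fps in (30, 60):
    print(f"{fps} FPS target -> {frame_budget_ms(fps):.1f} ms per frame")

# 30 FPS target -> 33.3 ms per frame
# 60 FPS target -> 16.7 ms per frame
# A console targeting a locked 30 FPS has roughly twice the per-frame time
# budget of a PC targeting 60 FPS on the same workload.
```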
-
Why would a console only need to produce 30 fps? If a game such as Battlefield 3 could hit 60 fps on console, that would make it much more enjoyable. I started playing BF3 on my Xbox but then switched to a rig capable of 60+ fps, and when I turned my Xbox back on it just felt different.
Sure, a controller doesn't move as fast as a mouse, so 30 fps on PC is different than 30 on console, but I think many people will be expecting 60 fps from next-gen consoles. -
The bottom line is that I was finishing Far Cry 3 on my notebook last night, and my wife was playing Skyrim on the Xbox. I basically sit in front of the TV with a lapdesk and headphones on the couch, so I was watching her play for a bit. Aside from the atrocious load times, that almost-eight-year-old 360 was putting out a great gaming experience. I don't really think the PS4 has anything to worry about in staying relevant for years, and to speak to the topic, a 680M will serve just fine for a couple of years.
Hardcore PC gamers are never satisfied with performance, but the crowd that lives here, the one that will kvetch over every lost frame when every exposed advanced rendering option is enabled, is the edge case. I've started to learn to just "shut up and play" for the most part, and it's amazing how much fun I'm having playing games instead of figuring out how to get my wife to see my point on why I should get a 680M when the notebook I have is still damn fast. Not that I'm not still interested in hardware advances, that's definitely not true, but there are budgetary, economic, business, architectural and manufacturing reasons why the absolute latest and greatest PC hardware isn't in the PS4.
-
-
Kade Storm The Devil's Advocate
Yeah. I thought the whole discussion was about whether the 7970M/680M will be able to hold up against the console life cycle, which is the popular benchmark against which everything else will largely be competing. And in that case, FPS for FPS, resolution for resolution, the aforementioned cards will do much better than just two years. To say that the PC standard is 60 FPS and the console standard is 30 FPS is true and valid for enthusiasts, but when making these comparisons I will gauge by the console standard, and as far as multiplatform games are concerned, my single nVidia G92b has still matched and beaten every multi-platform console counterpart to date. I don't see any reason why that will change now that the architecture is even more similar to that of the PC.
-
Games that are out today actually give the 680M some trouble being maxed out, and some of those games are console ports from old tech. So thinking that 2014-2015 games will run just fine (aka comparable to console settings... which we don't even know what they are), coupled with the fact that most gamers in the enthusiast PC market are people willing to drop 2K on a gaming rig, has too many leaps in logic for my feeble brain to contend with. -
Kade Storm The Devil's Advocate
Yes, we have to tone down settings. I had to tone down settings when I had the 8600M GT SLI and the 8800M GTX. What's the point? They were still well ahead of what the consoles were putting out, both in terms of image quality and settings.
In fact, I will just give you an example of a recent title, Crysis 2, something that's been overplayed in this very thread as an example of 'console magic'.
Bear with me, please. Also, keep in mind that I don't think it is black and white, but that doesn't mean I'm going to back off the argument and give way to completely opposing claims, either. So the following is just an anecdote and in no way the bottom line, but neither is this belief that, just because we have to tone down our games, we are somehow already at the same level and being compromised. No. Absolutely not. As referenced far earlier in this thread and in other threads, Battlefield 3 runs at near-LOW settings on the consoles, and I could get more out of the same single card at a higher setting and resolution.
Fact is, if you want a fair comparison, you have to settle on those very fair standards. If you want to argue price and whatnot, that is a separate discussion. The topic here is whether or not a 680M/7970M can match it. . . So having said that. . .
I took my old system: a 280M GTX with a QX9300. I downclocked the CPU to 1.9 GHz, not that it even matters in this case. I downclocked the card to 400 MHz core, 1000 MHz shader & 600 MHz memory. This was one weakened system, and that card was made to run weaker than the 8800M GTX, but within the same realm, which makes for a fair comparison. With the PS3, we had the Cell handle most of the post-processing overhead and fidelity for the graphics chip; that was far beyond the mere workings of the RSX (7800 GTX) alone. With the 360, we had the conventional 3-core, dual-threaded CPU doing quite a bit of work as well, along with a unified GPU shader architecture. So when you stack the points up together, a grossly underclocked 280M GTX or a single 8800M GTX is a relatively fair match for the holistic output of the consoles. This is the advantage, in my argument, that consoles have yielded thus far, and it is nowhere near two-fold.
Then I proceeded to run Crysis 2 with FXAA enabled, because I can't stand jagged visuals, at a slightly higher resolution than the console counterparts (a full 1280x800 as opposed to 1152x720 on the Xbox 360 and 1024x720 on the PS3 with meagre upscaling). The game was run at the High setting, since that's the lowest one can go in the PC version, and even that is more than what one would get on the consoles in terms of shadow quality and SSAO ( IQGamer: Tech Analysis: Crysis 2 (360 vs PS3 vs PC)).
I ran the game through various scenes and intensive fire fights with the following results.
Average FPS: 42-45.
Maximum FPS: 75.
Minimum FPS: 35.
When the game is run at Very High, all of the above rates are only affected by a margin of about 5 FPS, so really, one would be grasping at straws to argue the results.
On the consoles, with lower resolution, lower settings, other compromises, and that supposed 'optimisation', this game was already struggling to maintain 30 FPS and was consistently dipping into the lower 20s and worse. Clearly the 8800M GTX wins out. So there you have it. . . an 8600M GT SLI could've done better. Too bad that no one's got videos of such feats, because PC gaming standards have always been a bit high, whether we like that idea or not. There is also no purpose in bringing up Crysis 1, for the obvious reason that its console port runs on a different and scaled-down engine.
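For a sense of how much extra per-frame work that slightly higher resolution represents, here's a quick back-of-envelope using the resolutions quoted above (pixel count is only a rough proxy for rendering cost, so treat this as illustrative):
```python
# Raw pixel counts for the resolutions mentioned in the Crysis 2 comparison.
pc_run  = 1280 * 800   # 1,024,000 pixels
xbox360 = 1152 * 720   #   829,440 pixels
ps3     = 1024 * 720   #   737,280 pixels

print(f"PC vs Xbox 360: {pc_run / xbox360:.2f}x the pixels per frame")  # ~1.23x
print(f"PC vs PS3:      {pc_run / ps3:.2f}x the pixels per frame")      # ~1.39x
```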
Of course I don't think it is black and white. One would be very wrong to assume that, especially given my post from the previous page. Although I will also contend that this 'grey-leaning' mentality isn't an excuse to argue the exact opposite and feed wild speculation about how this hardware will fail to hold up against the consoles. It's not black and white. . . either way. So those of us who are more optimistic about these cards are just as much in the right when we make these arguments as those who are sceptical. Honestly though, we should be the ones called the sceptics, because the claims about console potential, based on the above and other similar observations, seem a bit stretched. In fact, that'd be a true leap in logic, but you're already seeing where this is going, aren't you? It's two sides making claims, and believe you me, both sides have enough evidence--I'd think we have more of it, in my opinion--that can be interpreted in specific ways to support their opinion.
I am sorry to say this, but you're just plain wrong when you say that my post is making a bold assumption. I've primarily spent the last many pages referencing actual facts about what these consoles can and cannot do, to support very grounded assumptions, as opposed to the 'code to the metal = magic' thinking. To say that I am talking about people putting in money for this or that is not my assumption; that is your projection. If you want to start a discussion about price and viability, let's see that thread and then gauge my opinions on that front. I've primarily been a console gamer for what I consider a very practical reason. However, being from that camp, I also clearly see where similar PC components--despite being pricey and hard to optimise in terms of driver support etc.--excel. -
-
Well.. another thing that makes hardware last longer for games now than it did just.. 7-8 years ago is that you can play a game in reasonably low detail and still get a passable experience. You lose full-screen filtering, you lose degrees of texture precision, maybe a number of objects, some of the particular lighting effects, etc. But it's not the difference between a wireframe and a fully textured world.
Technically, on the development side as well, the solutions made 5 years ago are being optimized, but not made more complex. So you see the same type of effect, or the same technique, now as you did 5 years ago. The only thing that changed was more intelligent ways to display models, to avoid visual breakage. And none of that really depended on massively higher-performing hardware.
Basically, when you get to the baseline here, which.. we who own laptops know is pretty low in the first place.. then increasing graphics card performance isn't actually going to give you better animation, or much better lighting, or a much more complex game world. Instead, a better graphics card above a certain level gives you the option to run all the post-processing filters twice, or add a second pass of texture filtering.
But with hardware the way it is currently made, adding graphics card cores or upping graphics card performance is never going to give developers the option to add.. say.. a ring on a leather strap around someone's neck that dangles convincingly with gravity and bounces against the model, with pull from the sides, etc. We're never going to see that on current dedicated-CPU-plus-dedicated-GPU architectures.
The best attempt so far is AMD's GCN cards, where the APU part can help run certain functions in the graphics context that the graphics card otherwise couldn't run. But that too is fairly weak in terms of the number of objects, or how much per-object traversal it could conceivably do.
The point is that if you don't really care about getting SSAO + 8x texture filtering + 4x full-scene AA at 1920x1080, at absolutely full detail -- then a desktop card from last year is going to last you 8 years and well beyond that.
And the sad thing is that the next consoles, which will both have that APU+GPU setup, may very well end up being among the most successful consoles ever made. Because they will allow you to play the latest games at a reasonable framerate, without having to tweak settings, without installing, etc., etc., for a long time. And they will have the latest eye candy, including things like TressFX.
It just won't be at the highest resolution, or at 60fps. Which I think matters. But no one else does, so Sony fired the guys who made Wipeout HD, for example. Because "console level" means 720p@30fps. Otherwise the big publishers start to get annoyed about split markets and unhealthy competition.
That's how this really works. There is tech now to put the baseline much higher. It could be done right now. But that won't happen, because of the way the industry works. And until that changes significantly -- a 680M is going to last you for years. That's just how it is. -
It all boils down to optimization. If the game isn't optimized, it doesn't matter what your GPU is - you're gonna have a bad time.
-
-
Greetings from earth!
The 680m will not hold up.
History/logic dictates you will be disappointed when the 680M in your $2000 machine can't hold a candle to a PS4/Xbox 720. -
-
Kade Storm The Devil's Advocate
And there's some stuff about history on the previous pages. You could read into this stuff, but it's pretty obvious that you're actually not interested in learning about any of it, so I think I'm already falling for bait.
As for logic: well, you can actually start by getting acquainted with this, as it really helps explain why you've failed to make a single credible point in this thread.
-
Nope, I don't think the 680M will be able to keep up with the PS4.
The PS4 has an 8-core CPU, probably low-clocked, but it has 8 cores nonetheless. It has an AMD HD 7860 (performance between the 7850/7870), so it's a little weaker than the GTX 680M. But the PS4 has 8GB of GDDR5 for the entire system, meaning if the system uses 3-4GB, the GPU has around 4GB of GDDR5 to itself. 176GB/s of bandwidth according to Sony. The GTX 680M has a bandwidth of 115GB/s. That's a pretty big difference.
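Purely as back-of-envelope arithmetic on the bandwidth figures quoted above (real-world performance depends on far more than raw bandwidth, as later posts point out):
```python
# Bandwidth figures as quoted above (GB/s); everything else is ignored.
ps4_bandwidth      = 176.0   # shared GDDR5 pool, per Sony
gtx_680m_bandwidth = 115.0

print(f"Ratio: {ps4_bandwidth / gtx_680m_bandwidth:.2f}x")   # ~1.53x
for fps in (30, 60):
    print(f"At {fps} FPS: {ps4_bandwidth / fps:.1f} GB vs "
          f"{gtx_680m_bandwidth / fps:.1f} GB of memory traffic per frame")
```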
Just look at the PS3 games and the graphics they were able to squeeze out of that console despite not-so-stellar hardware. It had 25GB/s of bandwidth for the GPU, 256MB for the system and 256MB for the GPU. Plus, the GPU and the Cell CPU are a lot worse than the PS4 hardware.
So yeah, the PS4 will beat the current notebooks out there IMO.
I'll finish with a quote from AMD
-
And look at how these rookies bash me for stating my opinion... it's like they are aggressively against me because they have no life. -
killkenny1 Too weird to live, too rare to die.
Well, this has become one depressing thread...
-
The 8-core AMD APU will be slower than current Intel Core i7 CPUs; besides, APUs are only competitive in the low-power department, not for enthusiasts or power users.
The HD 7850 and current AMD GPUs in general have huge bandwidth compared to nVidia's. The HD 7970M has much more bandwidth at stock than the 680M, and we don't see that translating into much more performance.
It is true that the PS4 has massive bandwidth system-wide, but that does not mean we have slowpokes either. We can still overcome the speed limitation with proper memory management. A lot of things won't need that much memory bandwidth.
The PS3 managed to get good-looking games, albeit most compare to the low settings of PC games. They do not look better than their PC counterparts and... run slow, with massively downgraded scope: fewer people in multiplayer, smaller maps, lower-resolution textures/maps, limited lighting, limited or non-existent AA/AF, reduced FOV, sub-720p resolution, massive loading times, etc. etc. etc.
Consoles are a very, very crappy, horrible user experience compared to a PC. They never managed to output something that the 8800 GTX could not handle better at the time. And the 8800 GTX came out when the PS3 launched. -
Karamazovmm Overthinking? Always!
-
The big advantage of consoles is optimization. If what Cloudfire says is accurate (which, from what I've read, I think it is), it sounds like it's going to have a GPU at around the 7970M's level of power, which is roughly punch-for-punch with a 680M at stock clocks. Such power will go much further in a console.
We'll see. From closely watching demos like Watch Dogs and Star Wars, I haven't seen anything that a 680M couldn't do right now, though things may change.
The sad thing is, I've never known a GPU to stay relevant for the same length of time as a console...