^^ Good info, thanks! However, as I expected, it has the same number of CUDA cores. One other thing: it has lower clocks than the 550M. What does this mean? Are they really just going for heat and performance/watt?
-
-
I think it could mean that Nvidia was actually telling us the truth earlier: that their Kepler GPUs will give us better performance/watt compared with Fermi.
This ancient picture:
-
Well, I feel much better about future console releases and about being able to max out those badly written ports.
-
GT 630M = GT 650M?
Looks like a GPU-Z error. I haven't got another explanation. -
I'm going to have to wait until it's confirmed that those chips are or aren't Kepler parts.
For all I know, we're looking at GT 525Ms or something. -
Isn't Kepler just a shrink of current Fermi tech anyway?
-
Here is something REALLY interesting about Kepler. I'm just quoting small parts of a very long, informative article, but these are the most important bits.
Physics hardware makes Kepler/GK104 fast | SemiAccurate -
Here are the specs for the entire Kepler series. Take it with a boatload of salt. Could be pure BS.
Entire Nvidia Kepler Series Specifications, Price & Release Date - Lenzfire -
Nice! Though I think that article is too detailed to be BS (or not?). Should we expect a GTX 680M to be a downclocked GTX 660? And yeah, it kinda sorta bashes AMD hard.
-
-
BTW, thanks for the news about the 600 series specs. However, they look way too optimistic; I'm a little pessimistic about this.
-
-
My computer broke down and this is probably the worst timing imaginable... I really want to wait for the new Ivy Bridge/GPU updates.
Do you think when the new hardware comes out, prices will be higher than for the current models, or will the new hardware just replace the current models without an increase in price? -
680M = GTX 660 - 512sp, 256-bit
675M = GTX 650 Ti - 448sp, 224-bit
670M = GTX 650 - 256sp, 192-bit
660M = GTX 640 - 192sp, 128-bit
Man, the 670M and 660M would look so disappointing.
670M definitely wouldn't be toasting the 580M, unless Kepler has super shaders.
I don't know, this seems pretty fishy.
Ideally, I'd like to see the GTX 680M come from the 660 Ti (but make it 256-bit), so each GPU could be moved up one notch, meaning the 670M would come from the GTX 650 instead.
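For anyone weighing those bus widths, here's a quick back-of-envelope bandwidth sketch (Python; the 4 GHz effective GDDR5 clock is just a placeholder I picked for comparison, not a leaked spec):

```python
# bandwidth (GB/s) = bus width in bytes x effective memory data rate (GHz)
def bandwidth_gbs(bus_width_bits, effective_clock_ghz):
    return bus_width_bits / 8 * effective_clock_ghz

# Hypothetical 4 GHz effective GDDR5, purely to compare the rumored widths
for bits in (128, 192, 256):
    print(f"{bits}-bit: {bandwidth_gbs(bits, 4.0):.0f} GB/s")
```

So each step up in width is worth as much as a hefty memory overclock, which is why the 128-bit parts look worrying on paper.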
-
512 SP would be amazing for a TDP limit of 100W.
I doubt it'll happen. And all the charts suggest that the GTX 680/675 have a 256-bit memory interface, unless Nvidia changes that at the last second. -
1.75 GB already makes it BS.
-
@Kevin: You really think that the GTX 660M will be like the GTX 640? Isn't the GTX 650 more plausible? 192-bit like the 560M, and 256 cores instead of 192? That would be amazing. It's rumored that the cores will be more effective than Fermi's too, so keep that in mind.
-
Yes, it being the GTX 650 makes much more sense, from technical and PR perspectives.
-
This is so fake...
Even if the rough specs (memory width, amount of VRAM, shaders) were right, it's impossible that the final CLOCKS of the core and memory would already be known this far before a release. Even Nvidia doesn't know the final clocks yet, nor the price.
Also, 224-bit is VERY unlikely; cards have always been built from 64-bit memory modules.
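To spell out that last point with a quick check (Python; this just encodes the 64-bit-module argument from the post above; note that GDDR5 chips actually have 32-bit interfaces, so 32-bit granularity is possible in principle):

```python
# Which of the rumored bus widths divide evenly into 64-bit memory channels?
rumored_widths = [256, 224, 192, 128]
for bits in rumored_widths:
    fits = bits % 64 == 0
    print(f"{bits}-bit: {'fits' if fits else 'does NOT fit'} 64-bit channels "
          f"({bits / 64:g} channels)")
```

224-bit comes out to 3.5 channels, which is the poster's objection in a nutshell.
-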
Nvidia's PR team is working real hard.
-
Fake and BS: two words.
Nevertheless, it's a really possible scenario that the GTX 660M will be equal to the GTX 560M. New process, lower heat output, but overall performance the same. -
That doesn't matter; of course it will have a lower TDP for the same performance, because of the 28nm process.
What you're doing is gambling. I could also make a table of specs. Would you believe me? Because that's the same thing that's happening now. -
Also take into account that the green team's 3xxM series was phased out very quickly, as it was more of a gap filler, and there was no real new architecture difference between the GTX 260M and the GTS 360M found in the Asus series of laptops. Perhaps Nvidia will pass on any desktop/mobile 6xx series; it's all speculation.
-
Nvidia has confirmed multiple times that all of the high-end 600M chips are Kepler.
-
Report: AMD Radeon HD 7750/7770 Specifications by VR-Zone.com
Probably gonna be the 7700M series. Will be quite a jump from Whistler. (30-40%?)
Seems like good performance for just 55W and 80W (desktop) chips.
This would also suggest that 7800 (desktop) cards will be used for the 7800M (and 7900M?) -
640 and 512 are shader counts I'd expect of the 7600 series, not the 7700.
In fact, even the previous article, which VR-Zone refers to in that one, says the 7770 has 896 SPs or 14 CUs, while many other sources have also reported the 7770 and 7750 as having 896 and 832 SPs, respectively.
Seems like some wires were crossed, and they're actually talking about the 7670 and 7650.
I expect the desktop 7700 to become the 7800M, 7800 to become the 7900M, and it will not shock me if the aforementioned 7600 replaces the rebranded 6770M (which is currently being passed off as the 7600M) with true next-gen parts. -
Just for some perspective: if you need a new computer, now is a great time to buy. SB processors are already stupid fast, and the current top-tier GPUs can eat up anything thrown at them. Technology moves so fast that it is very easy to take an "I will wait for the most opportune time to buy" stance, but that time rarely comes along.
I would say we are in that opportune time right now because new consoles are still a few years off, probably due to the bad economy. If new consoles arrive in late 2013, then even the most tricked-out laptop you can buy at the END of this year won't run the majority of console ports. Just look at some of the PC requirements needed to run console ports. Sure, we may play at higher resolution, but overall it's just poor optimization.
I have found that with every new console release, only the absolute best desktops available around the time of the console launch have any longevity. The reality is that the money is being made on the consoles, with porting to the PC as an afterthought. So whether you have 5870s/6990s/7990s... it won't make a difference in a few years. Right now, if you have a very powerful notebook, you should be set until the next consoles arrive.
That is just my take on it. My next purchase will be about a year or so after the consoles hit (unless there are some amazing PC exclusives that come out). I got a great return on investment with my m15x with the 8800GTX and will likely get another couple of years out of my current rig. -
^^
What you said made sense in 2005/6, when the Xbox 360 and PS3 were aiming for high-end GPUs inside. This time, (current desktop) midrange parts will be used.
6670 for the Xbox 720/Next.
I know ports are extremely poor, but 6670 performance is already outclassed by current high-end notebooks. So in 2013/2014, 6670 performance will be outclassed by midrange notebooks.
Remember, there will be a wide range of hardware for next-gen consoles:
Nintendo Wii U (low-end)
Xbox 720 (midrange)
PS4 (unknown, but probably between midrange and high-end) -
My take is that once the new tech comes out, people will try to get their hands on the new stuff, and thus a price drop for current tech.
-
Oh, maybe I am way out of the loop, and you are correct; I was going on the past regarding consoles. Are the specs leaked for the next Xbox? If it is using a midrange desktop GPU from 2012, I will be rather disappointed. I liked how they usually were right on the cutting edge of current technology when released. What are the details of the new Xbox?
Edit: I did research the webs, and I will be very shocked if they use a GPU like that. October/November of 2013 and they are using a GPU that is midrange at best from 2011? I think it's FUD, but time will tell. Sure, there are a lot of console-specific optimizations and you don't need a bleeding-edge card, but that just seems silly. Plus it's not even on the new die shrink, which would reduce heat and improve efficiency and power... it doesn't make sense. I guess maybe I could keep my 580M SLI for like 10 years then. -
-
The Wii U is actually going to be something like 3 times more powerful than the 360 and PS3.
If the 6670 rumors are true, the NeXtbox is in the range of 6 times the current HD consoles.
Sony will probably survey this and go for 8 or 9 times the current gen, if they stick to their status quo. -
Yeah, I know 3 or 6 times the current speed sounds impressive.
But we already had that power in 2010 with a 5870 or GTX 285.
Also keep in mind that x times faster doesn't translate into x times BETTER.
It can just calculate more, but if they're going to use tessellation, the 6670 is a weak choice.
And the Nintendo Wii U uses a 4000 series card, so that's even DX10.
If they were to release a new console every 4 years, it would be OK. But not for 8-10 years like now...
I know some people say tessellation is not that impressive, but it IS. It's just not well implemented (yet).
But I think it makes a rather big (visual) difference if you watch demos like 3DMark11 or the Heaven benchmark.
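A quick peak-FLOPS sanity check on those multipliers (Python; these are theoretical peak numbers from the commonly quoted specs, which is exactly why "x times faster" doesn't mean "x times better"):

```python
# Peak single-precision throughput: lanes x clock x 2 ops/clock (multiply-add)
def gflops(lanes, clock_mhz):
    return lanes * clock_mhz * 2 / 1000

xenos  = gflops(48 * 5, 500)  # Xbox 360 Xenos: 48 five-wide ALUs @ 500 MHz
hd6670 = gflops(480, 800)     # desktop HD 6670: 480 VLIW5 lanes @ 800 MHz

print(f"Xenos: {xenos:.0f} GFLOPS, HD 6670: {hd6670:.0f} GFLOPS, "
      f"ratio: {hd6670 / xenos:.1f}x")  # ~3.2x on paper, short of "6x"
```

On raw shader math the 6670 is closer to 3x Xenos; the bigger multipliers people quote presumably fold in bandwidth, features, and efficiency gains.
-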
If this becomes the newest trend, RIP console gaming.
-
The Xbox 360's Xenos processor was essentially a Radeon X1800 (released Oct 2005, a month or so before the X360). So yes, they did use the top-tier graphics processor of the day in the 360.
However, the next Xbox really only has to shoot for 1080p playback at 30+ fps. So much has advanced in the last 7 years that even an upper-midrange part like the 6770 would perform VERY well. Really, they just want the DirectX extension update and enough muscle to power that 1080p resolution.
I think this is a good thing, as MS is actually still concerned with their "Games for Windows" program, and having the ability to easily port titles is a good thing. One of MS's biggest advantages over OS X is in the game market, so cutting that down would be unwise.
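For scale, the raw pixel-rate jump from the 720p-era target to that 1080p/30 target is fairly modest (Python, straight arithmetic):

```python
# Pixels per second to shade at each target (before overdraw and AA)
def mpix_per_sec(width, height, fps):
    return width * height * fps / 1e6

print(f"720p @ 30 fps:  {mpix_per_sec(1280, 720, 30):.1f} Mpix/s")
print(f"1080p @ 30 fps: {mpix_per_sec(1920, 1080, 30):.1f} Mpix/s")  # 2.25x the 720p load
```

A 2.25x fill-rate bump is nothing for a 7-years-newer GPU, which backs up the point that a midrange part could hit the target comfortably.
-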
-
-
Sorry guys, double-post. Anyway, moooooving on....
-
To be honest, guys, anyone who thinks that the next-generation Xbox is going to utilize 2011 hardware for its GPU core is seriously misinformed. This isn't a winning solution for Microsoft, and it isn't going to generate their gaming division any economic surplus in the long run.
We need to think about this from a business perspective first and foremost, as that is what Microsoft, a company, is running. What keeps any company or business generating revenue and investors is profit, pure and simple. Investors don't give a sh*t about the power of the hardware or how many frames a game is going to run at; again, they care about a constant revenue stream, profitability, and assurance that their investment decision was sound and protected.
Why is this important to this conversation, or even relevant? Simple: Microsoft, or any other major console manufacturer for that matter, doesn't make its core revenue from hardware sales. On the contrary, as history dictates, they take a loss over time on their hardware placement strategy in order to supply a market for their real moneymaker: royalties (i.e. software, peripherals, etcetera).
With that in mind, think about releasing a system with outdated hardware that is already over a generation old. Yes, it is more "powerful" than the previous console hardware out there, but how long will it provide developers a stable platform to produce the content they want to deliver to their audience, compared to the competition? How long will this hardware platform continue to deliver the results that studios demand before a newer generation is called for to step things up to the next level?
Research and development isn't cheap, and you can bet that the amount of money that Microsoft, Nintendo, and Sony threw into the R&D of the current consoles was extremely high, bordering on excessive and something never seen in this industry to date, even when considering inflation. With this in mind, it isn't sound business from an investment point of view to release something that isn't going to offer a life cycle on par with or greater than the previous product of the same line.
The next Xbox will not launch with this dated hardware, nor will the next PlayStation. The Wii U can get away with offering slightly-better-than-current-gen hardware (and I say this lightly, as nothing is finalized) because the current install base is invested in how they interact with their device rather than the polygons it can push. Throw a couple million more of those polygons at someone with a more intuitive interaction interface and you can bet they will be back for more classic Nintendo titles, time and time again. This becomes especially true now that the HDTV install base in the average home is much higher than it was six years ago, letting them take advantage of the mainstream market that Nintendo initially targeted and continues to capitalize on.
This next generation will be something special visually, and the hardware used won't be anything we have to date in February of 2012. Whether the gameplay and design will match is another story in itself, however. This is not to say that the next-gen consoles will necessarily out-muscle the current top-end GPU in raw performance, but at the same performance level they will have a smaller die size, more production efficiency, better heat management, and lower energy consumption. Again, all things that drive profit margins and keep board directors in their jobs when investors come a-knockin'. -
Microsoft has actually improved upon the Xenos processor since its inception, with die shrinks from 90nm to 65nm to 45nm over the years. So, even though the Xbox platform is fairly static, the actual hardware has improved somewhat for efficiency and lower cost. (The smaller the die, the more dies per wafer, and the lower the cost.)
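To put rough numbers on that wafer math (Python; the die areas are illustrative guesses for a shrinking GPU, not official figures, and this ignores edge loss and defect yield):

```python
import math

# Naive dies-per-wafer: usable wafer area / die area (no edge loss, no yield)
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

# Hypothetical die areas for the same design across process nodes
for node, area_mm2 in [("90nm", 180), ("65nm", 95), ("45nm", 50)]:
    print(f"{node}: ~{dies_per_wafer(300, area_mm2)} dies per 300mm wafer")
```

Even with this toy model you can see why each shrink roughly halves the silicon cost per console.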
If I had to guess, I would say that MS will outfit the next Xbox with a 28nm part. That suggests a minimum of a 77xx-or-better equivalent GPU. To further lower costs, I could see MS even going with an all-in-one APU. Perhaps they will have a custom variant of a Fusion-style processor.
No matter if they go integrated or discrete, the next Xbox will be a powerhouse, even if it is "only" a 7750. They have really worked the current Xenos to a level that most people would never have thought possible 7 years ago. With a huge boost in power from a more modern GPU, they should have no problem getting another 8 years of life out of it.
It really comes down to consumer expectations meeting return-on-investment potential. Could MS put a 7970 in the next Xbox? Sure, but it would probably cost $700 at launch. I am putting my bet on a relative of the 77xxM part as the heart of the next box. They could still meet the $299 price point they seek and get a solid machine out the door that will wow people until 2020. -
-
Wii U is real though. Looks really awful. -
Free bump. Any new info?
-
Do you think Sony will release concurrently with Microsoft, since MS beat Sony to the market by a year with the last release? Or perhaps Sony would prefer a year's lag? I don't think it would hurt either of them either way.
-
Here's a 3DMark11 score for the 7770. It's very likely this will be the base for one of the higher-end mobility cards.
Radeon HD 7770 scores P3535 in 3DMark 11
http://chinese.vr-zone.com/10056/amd-radeon-hd-7770-02142012/
Edit: my overclocked 5870M (OC'd to almost match the 5770 from which it was derived) is only about 28% slower in the graphics score.
That doesn't sound too good.
Hopefully something higher will be the basis for the flagship mobility card.
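Spelling out that 28% figure (Python; this assumes the P3535 overall result tracks the graphics sub-score, which is only roughly true):

```python
# Implied graphics score for the OC'd 5870M from "about 28% slower"
score_7770 = 3535
implied_5870m = score_7770 * (1 - 0.28)
print(f"Implied OC'd 5870M score: ~{implied_5870m:.0f}")  # ~2545
```
-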
It's amazing that a 128-bit card with 640 GCN shaders is faster than the stock 6970M in the 3DMark11 GPU score.
I wish people would stop saying the next mobile cards won't be a significant leap. -
It's not that amazing, since the 6970M is a downclocked Barts XT chip, which is used in the 6870, and after all we are talking about the 7770 desktop part, so it's not fair to compare it to the mobile cards. Also, the 7770 has a TDP of 85 watts.
In my opinion, the 128-bit vs. 192-bit and so on doesn't matter at all; it's no handicap for these new cards, because they use a new architecture.
My biggest hope for Kepler and AMD's 28nm GPUs is that smaller laptops can now have more powerful GPUs, maybe even some 13.3-inch laptops. But please, with Nvidia Optimus.
-
First off, the 6970M is a downclocked 6850. There's no reason it can't be compared to the desktop chips within this context. So how is it not impressive? They've improved the architecture to the point where a 640-shader, 128-bit card outpaces the previous gen's 960-shader, 256-bit part, and it does so at a much lower TDP.
That is impressively efficient, at least. AMD could probably get that down to 60W for a mobile derivative, and it'll be hugely overclockable.
But I'm more interested in the 7800 series and what it means for the 7900M.
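One rough way to bound that efficiency gain (Python; the 6850's ~127W is the commonly quoted desktop TDP, and the 85W 7770 figure is the one cited earlier in this thread):

```python
# Lower bound on the perf/W gain: even at merely equal performance,
# perf/W improves by the TDP ratio; any actual score lead compounds it.
tdp_6850_w = 127  # commonly quoted desktop figure
tdp_7770_w = 85   # figure cited earlier in this thread
print(f"Perf/W gain at equal performance: {tdp_6850_w / tdp_7770_w:.2f}x")
```

Roughly 1.5x perf/W before even counting the score lead, which is why a 60W mobile derivative doesn't sound far-fetched.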