It looks like something is coming: Mysterious AMD Radeon MXM GPUs, Litho XT and Strato PRO Surface Amidst Shipping Data. And Litho XT will probably utilize stacked memory, yeah guys.
-
Hm...
Out of curiosity darnok44... what makes you think the Litho XT will utilize stacked memory?
I mean, it would be great if it did, but I'm just wondering the reasoning behind it.
Also... 2 GB seems relatively low for GPU VRAM... even if HBM might be in play there.
I noticed that the Strato PRO does have GDDR5 written on it, while Litho XT mysteriously omits this little detail (which could be an indication of HBM - but we cannot be hasty).
However... this article states that Litho XT comes with GDDR5:
AMD's new Strato Pro and Litho XT GPUs spotted - AMD - News : ocaholic
Little hard evidence as of yet.
Any indications of when these new GPUs might be released? -
Yeah, you're right. I let myself get carried away by wishful thinking and misled by one comment under the article. I admit my mistake.
The description of the Litho XT clearly states that it's an MXM-A board with 2 GB of memory, nothing about HBM.
Nothing is known yet about the release date of those cards, and to be honest we don't know anything about their specs either. -
lol "Captain Jack" card pre-alpha leak -- desktop 390X or 380X??
-
197 watt power consumption... I don't think they can make a good mobile card out of it :/
-
But if those leaks are an indication of what AMD achieved, then it's promising. Especially if we look at the power consumption figures. It's only slightly above the GTX 980 and has basically the same efficiency.
-
Has anyone downloaded the pics by any chance? They are no longer available. Thanks.
-
They are still on wccftech page: AMD's Radeon R9 390X ES Performance Numbers Allegedly Leaked - Faster Than The GeForce GTX 980, Consumes 197W in Gaming.
-
Now I can sleep well
-
That's actually higher efficiency than Maxwell, and with unsupported drivers at that. 4K will only widen the performance gap.
These are probably R9 380X scores with 3072 SPs, not the 4096 SP juggernaut that is supposed to get liquid cooling (unless the ES variant is severely underclocked). -
Let us pray this is the case! I can't wait for AMD's new MXM GPU. It might make me keep this R4. However, it better not be $1000...
-
Dont mean to hijack but it seems I can't make my own thread about the Athlon 64 X2 QL-64. Can it run games of this day and age?
-
Absolutely not. Unless you're going to play something like Papers, Please.
-
I finally decided.
I will buy a 980M SLI full HD Clevo.
Are there any plans from Clevo for a 980M SLI 4K model? -
Honestly, AMD would be in deep sh!t if they can't even beat Maxwell efficiency on a 20nm process node. AMD should just try harder and wipe out Maxwell completely; only that way will NVIDIA stop milking their existing architectures.
-
Well, I'm not really sure if it will be 20nm (I hope it is) since there isn't enough info to point either way, but if it's 28nm it will most likely use a GloFo 28nm process, since there is earlier evidence that it's more efficient than TSMC's.
-
Why are you people waiting for AMD when you could have bought a GTX 980M or 970M for two months now?
Why wait for a mobile GPU nobody has heard about, that may not be here until, say, March next year, and that may end up merely matching the GTX 980M for all we know? -
Looking at your sig, have you asked yourself the same question?
-
I'm not waiting for AMD, am I?
Alienware is at fault here for not selling Maxwell, and the rumors about the AW18 being discontinued don't exactly help. -
LOL no (as if you would ever buy anything from AMD), but you're waiting for Alienware. What difference does it make?
-
Maxwell is already here. We can buy it. It's not a fabled product whose performance we don't know (which obviously applies to the AW18 as well). AMD's next mobile cards are a huge unknown, both in performance and in release date.
Nothing wrong with waiting if one believes it will be much better than Maxwell. I'm just curious why people are waiting instead of buying now. Is it price? -
The same reason you're waiting for the mythical new Alienware SLI notebook, because you believe it will be much better than the current Clevos on the market.
LOL this is so perfect. -
Do you seriously not understand that I can't just buy something that's not on the market yet?
The Alienware 18 IS better than Clevo for a lot of various reasons. The Alienware 18 already exists. I owned one. I know how it performs, and what I like about it over Clevo.
AMD's next graphics cards don't exist. Nobody knows anything about them. -
An AW18 with 980M/970M SLI already exists? Does anybody know anything about it? Does it come with soldered CPUs like the rest of the new Alienwares? That would make it perform worse than a Clevo in many people's eyes...
-
Are you seriously not done yet?
The M18X has been upgraded with 980M SLI by users. The AW18 will obviously perform the same.
"Does it come with soldered CPUs like the rest of the new Alienwares"
There is only 1 new Alienware and that is the AW13...
And the AW18 is rumored to be discontinued. Try to keep up.
If one did launch with soldered CPUs anyway, there are other ways to go with the AW18.
... -
I requested cleanup in this thread. Let's just end this waste of time right here.
-
People waiting for AMD: we don't know about performance, that's for sure, but a February desktop launch means June for mobile. You might be in for a long wait, just saying.
-
Lol, I honestly don't care if people wait or not, but if they want to wait, let them.
Also, if they want HBM, then the wait is well worth it, especially considering the low memory bandwidth of mobile GM204 (if you ever wondered why the 980M is so far behind the desktop 970 in some cases, there is your answer). And with leaked benchmarks showing a Pirate Islands card more efficient than Maxwell even with unsupported drivers, I don't blame anyone for being interested.
For 1080p and thin laptops like the GS60, the 970M is a great card, but there are people who want better performance and higher resolutions, and for higher resolutions the 980M isn't good enough. They know they will get something better if they wait, because the 980M isn't very good; it is a milking attempt after all, just like the 680M.
-
-
980m is not very good?!
-
2.0™ wins the thread©®™
And the reason the 980M isn't as good as the desktop 970 is that the memory is severely downclocked, from 7GHz effective to 5GHz effective. Bus width remains the same at 256-bit. -
Last edited by 2.0™; Today at 12:01 AM. Reason: jab retort removal service. Fee: $999.98 + Tax + VAT + LOL
and
Last edited by 2.0™; Today at 12:01 AM. Reason: HAHAHAHAHAHAHAHAHAHAHA.... erm... HAHAHAHAHAHAHAHA.
I am legit dying of laughter. -
He even did the .01 discount there! Total bargain, black friday stuff!!
-
You guys started charging money for cleanup now? Has Charles been cutting down on the payments lately?
If the following happens with the R9 M390X, it will be worth the wait:
- 20nm process for reduced power consumption and heat.
- HBM for greater memory bandwidth. I'm still very curious what this will bring to the table regarding performance, though.
- XDMA engine for much better scaling than bridge-based Crossfire and SLI.
- A much better price than the 980M. Maxwell is much cheaper than Kepler was, and you can get a 980M for $720. Not sure how much lower AMD can go.
- Most importantly, better performance than the GTX 980M. With 20nm, XDMA, HBM, and a launch, say, 4-6 months after the 980M came out, they had better bring something better to the table.
The GTX 970 isn't better solely because of the bandwidth. It has 128 more cores, 1664 vs 1536, and they are clocked over 100MHz above the 980M. Bandwidth-wise, scaled per clock/core, it's only 24GB/s above the GTX 980M. The memory bandwidth on the GTX 980M is more or less spot on if the GTX 970 is perfectly calibrated for bandwidth.
I'm not saying HBM wouldn't help, but I'm also curious and a little sceptical about what it can actually do for performance. -
Why, so you can come around and say that it is efficient only because it's on a 20nm process, and resume glorifying the damn Maxwell? Just get one and call it a day already.
At 1080p - nothing, really.
To sum up, to even consider getting one it has to be WAY better AND cheaper?! Okay. I'll repeat myself - get the damn Maxwell and call it a day. It's already proven working in Alienwares and you'll be the first one to try it in an AW18, what's not to like? As for me, I won't mind if it costs the same, or even more, given that it's a more advanced product. -
HBM should also use less power while delivering the same bandwidth as traditional memory. So it could be used to further reduce power consumption, rather than just increasing memory bandwidth to the point of negligible performance returns.
-
I don't get why bandwidth at higher resolutions is an issue for Maxwell, for example. Is GPU utilization low because of bandwidth at 3K-4K? I don't think so; I think there is enough bandwidth even with the current models (for 3K-4K). What HBM may bring is exactly the efficiency (I guess, though again I don't understand much about hardware; maybe there is some other advantage to it).
Also, people waiting for AMD might just be waiting because they like AMD more, just like we would for Nvidia, Cloud (well, I did not during Kepler). I think AMD has a real good ace in the hole atm (I just pointed out that it might take a long time). Fingers crossed for the red team!
-
From my experience playing around with the 980M, increasing the core clock had a greater impact on framerate than increasing the memory clock did - in Crysis 3, a bandwidth-hungry game, at 3K resolution.
Obviously a more powerful GPU core will require more memory bandwidth to feed it.
My point is that the 980M didn't seem to be bandwidth-starved. The amount of bandwidth seems about right for the number of shader cores it has.
But we've pretty much reached the limit of GDDR5 bandwidth for laptops, so something new is needed to keep feeding ever-growing mobile GPU cores and prevent future bottlenecks.
Sent from my Nexus 5 -
Nah. It's downclocked because nVidia wants it to be, and for no other reason. Also, memory bandwidth isn't very helpful in most games. The huge memory buffer might help the 980M more than hurt it, because it doesn't need to flush its buffer for new data as often. If it had, say, 2GB of vRAM, that memory buffer would likely have been a MUCH larger bottleneck (from what I'm thinking).
Also, a stronger core doesn't necessarily need more memory bandwidth to feed it. How much memory a game uses is up to the game; a cartoony game could have an insane vRAM footprint if the devs used super-high-resolution models and whatnot... looks really have little to do with it. It's something people are often confused about XD. -
Memory clock and memory bandwidth are totally different things, no? (1250MHz memory and 256-bit bandwidth)
-
You have it wrong.
Memory clock (5000MHz effective) * memory bus width (256-bit) / 8 (to convert bits to bytes) = memory bandwidth (in GB/s)
So 5000 * 256 / 8 = 160,000 MB/s = 160GB/s
Remember, a lowercase b is "bits" and an uppercase B is "bytes". So 1000GB/s is gigabytes per second, and 1000Gb/s is gigabits per second (divide by 8 for bytes; only 125 gigabytes). Internet speed is also measured in bits, which is why with, say, a 100Mbps connection you only get about 12.5MB/s download speed. Also, when using bits, people usually write bps instead of b/s. So 1GB/s and 1Gbps is how they're normally written. -
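A quick sketch of that arithmetic in Python, using the clocks and bus widths quoted above (just the forum post's formula, nothing vendor-specific):

```python
def bandwidth_gbps(effective_clock_mhz, bus_width_bits):
    """Memory bandwidth in GB/s:
    effective clock (MT/s) * bus width (bits) / 8 bits-per-byte / 1000 MB-per-GB."""
    return effective_clock_mhz * bus_width_bits / 8 / 1000

# GTX 980M: 5000 MHz effective on a 256-bit bus
print(bandwidth_gbps(5000, 256))  # 160.0 GB/s

# Desktop GTX 970: 7000 MHz effective on the same 256-bit bus
print(bandwidth_gbps(7000, 256))  # 224.0 GB/s

# Bits vs bytes: a 100 Mbps link moves at most 100 / 8 = 12.5 MB/s
print(100 / 8)  # 12.5
```

Same division by 8 in both cases - it's all just bits-to-bytes conversion. -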
Sorry, got it confused with bus width. Thanks mate!
-
Check the edit too
-
They are different, but no, 256-bit isn't memory bandwidth; it's the memory bus width.
Memory bandwidth formula: memory clock (GHz) * memory multiplier (4x for GDDR5) * memory bus width (bits) * (1 byte / 8 bits) = bandwidth in GB/s.
And saying that bandwidth is unimportant because the core is being fully utilized is false. What memory bandwidth does is feed the core with data from the memory; higher memory bandwidth means the core gets the data it needs sooner, so there is less delay during which the core does basically nothing. Now, the amount of data being moved from the core to memory and back at 1080p is relatively small. It gets much larger with higher resolutions, higher-quality textures, and AA. So memory bandwidth becomes much more important at higher resolutions, and especially in multi-GPU configurations (SLI/Crossfire).
The way it works in SLI/Crossfire is that even though you have 2 or more cards, the bandwidth does not multiply, because both cards need to hold the same data in memory, and because of Alternate Frame Rendering each card has to work on all the data for the entire screen on alternating frames (I think this is where Civ: BE does things differently under Mantle, because each card gets to work on a portion of the same frame, so the latency is lower and frametimes are more consistent even though the framerate is a bit lower). The case is similar with framebuffer/VRAM size, where you would rather have two 8GB cards than count two 4GB cards as 8GB, because again it doesn't double. -
Remember that Maxwell uses compression algorithms to boost effective bandwidth about 33% over previous generations.
The GTX 980, with 224GB/s of reported memory bandwidth, beats the GTX 780Ti (336GB/s) and R9 290X (320GB/s). Improved bandwidth efficiency is one of the reasons. -
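Rough arithmetic for that claim - the 33% figure is the one quoted above, a workload-dependent average rather than a fixed constant:

```python
def effective_bandwidth(raw_gbps, compression_gain=0.33):
    # Delta color compression lets the GPU move ~33% more useful data
    # through the same physical bus (varies by workload).
    return raw_gbps * (1 + compression_gain)

# GTX 980: 224 GB/s raw comes out to roughly 298 GB/s effective,
# closing most of the gap to the 780Ti's 336 GB/s raw figure
print(round(effective_bandwidth(224), 1))
```

So on paper the 980 still trails the 780Ti in raw bandwidth; the compression only narrows the gap. -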
Look above your post
-
It can't always compress stuff; it seems to mostly help with AA. And 290X Crossfire is faster than 980 SLI at 4K (780Ti SLI falls even further behind, probably because of its 3GB memory), likely because of higher memory bandwidth on top of better Crossfire scaling.
I saw it later, after I refreshed the page. -
The 780Ti has higher memory bandwidth than any AMD card, and that applies in SLI and Crossfire too. You should check a single R9 290X vs a single 980 at 4K; apparently 970s have better SLI scaling than 980s for some odd reason right now. Single cards will tell the best story.
-
No, I said the 780Ti only has 3GB of memory and probably very bad scaling in SLI too. And the 970 has nothing to do with SLI scaling when compared to the 980: 980 SLI is still faster, but 970 SLI scaling appears better because memory bandwidth is less of a bottleneck on it, since the 970 has much lower shader performance yet the same memory bandwidth.
Radeon R9-M295X
Discussion in 'Gaming (Software and Graphics Cards)' started by Tsubasa, Mar 15, 2014.