The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums would be preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    Radeon R9-M295X

    Discussion in 'Gaming (Software and Graphics Cards)' started by Tsubasa, Mar 15, 2014.

  1. darnok44

    darnok44 Notebook Consultant

    Reputations:
    177
    Messages:
    285
    Likes Received:
    87
    Trophy Points:
    41
    King of Interns and triturbo like this.
  2. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    Hm...
    Out of curiosity darnok44... what makes you think the Litho XT will utilize stacked memory?
    I mean, it would be great if it did, but I'm just wondering the reasoning behind it.
    Also... 2 GB seems relatively low for GPU VRAM... even if HBM might be in play there.

    I noticed that the Strato PRO does have GDDR5 written on it, while Litho XT mysteriously omits this little detail (which could be an indication of HBM - but we cannot be hasty).

    However... this article states that Litho XT comes with GDDR5:
    AMD's new Strato Pro and Litho XT GPUs spotted - AMD - News : ocaholic

    There's little hard evidence as of yet.
    Any indication of when these new GPUs might be released?
     
  3. darnok44

    darnok44 Notebook Consultant

    Reputations:
    177
    Messages:
    285
    Likes Received:
    87
    Trophy Points:
    41
    Yeah, you're right. I let myself get carried away by wishful thinking and misled by one comment under the article. I admit my mistake :(. The description of the Litho XT clearly states that it's an MXM-A board with 2GB of memory, nothing about HBM.
    Nothing is known yet about the release date of those cards and, to be honest, we don't know anything about their specs either.
     
  4. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    triturbo likes this.
  5. SegaDE

    SegaDE Notebook Guru

    Reputations:
    53
    Messages:
    74
    Likes Received:
    6
    Trophy Points:
    16
    197 watt power consumption... I don't think they can make a good mobile card out of it :/
     
  6. darnok44

    darnok44 Notebook Consultant

    Reputations:
    177
    Messages:
    285
    Likes Received:
    87
    Trophy Points:
    41
    But if those leaks are any indication of what AMD has achieved, then it's promising. Especially if we look at the power consumption figures. It's only slightly above the GTX 980 and has basically the same efficiency.
     
  7. triturbo

    triturbo Long live 16:10 and MXM-B

    Reputations:
    1,577
    Messages:
    3,845
    Likes Received:
    1,239
    Trophy Points:
    231
    Has anyone downloaded the pics by any chance? They are no longer available. Thanks.
     
  8. darnok44

    darnok44 Notebook Consultant

    Reputations:
    177
    Messages:
    285
    Likes Received:
    87
    Trophy Points:
    41
  9. triturbo

    triturbo Long live 16:10 and MXM-B

    Reputations:
    1,577
    Messages:
    3,845
    Likes Received:
    1,239
    Trophy Points:
    231
    Now I can sleep well :D
     
  10. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    That's actually higher efficiency than Maxwell, and with unsupported drivers at that. 4K will only increase the performance gap.
    These are probably R9 380X scores with 3072 SP, not the 4096 SP Juggernaut that is supposed to get liquid cooling (unless it is severely underclocked in the ES variant).
     
  11. TomJGX

    TomJGX I HATE BGA!

    Reputations:
    1,456
    Messages:
    8,707
    Likes Received:
    3,315
    Trophy Points:
    431
    Let us pray this is the case! I can't wait for AMD's new MXM GPU... It might make me keep this R4. However, it better not be $1000...
     
  12. AquaLady

    AquaLady Newbie

    Reputations:
    0
    Messages:
    1
    Likes Received:
    0
    Trophy Points:
    5
    Don't mean to hijack, but it seems I can't make my own thread about the Athlon 64 X2 QL-64. Can it run games of this day and age?
     
  13. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Absolutely not. Unless you're going to play something like Papers, Please.
     
  14. Tsubasa

    Tsubasa Notebook Enthusiast

    Reputations:
    0
    Messages:
    16
    Likes Received:
    0
    Trophy Points:
    5
    I finally decided.

    I will buy a 980M SLI full HD Clevo.

    Are there any plans from Clevo for a 980M SLI 4K model?
     
  15. heibk201

    heibk201 Notebook Deity

    Reputations:
    505
    Messages:
    1,307
    Likes Received:
    341
    Trophy Points:
    101
    Honestly, AMD would be in deep sh!t if they can't even beat Maxwell's efficiency with a 20nm process node. AMD should just try harder and wipe out Maxwell completely; only that way will NVIDIA stop milking their existing architectures.
     
  16. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    Well, I'm not really sure if it will be 20nm (I hope it is) since there isn't enough info to point either way, but if it's 28nm it will most likely use a GloFo 28nm process, since there is earlier evidence that it's more efficient than TSMC's.
     
  17. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Why are you people waiting for AMD when you could have bought a GTX 980M or 970M for 2 months now?
    Why are you waiting for a mobile GPU nobody has heard about, that may not be here until, say, March next year, and which may end up matching the GTX 980M for all we know?
     
  18. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Looking at your sig, have you asked yourself the same question?
     
  19. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    I'm not waiting for AMD, am I? :p
    Alienware is at fault here for not selling Maxwell, and the rumors about the AW18 being discontinued don't exactly help.
     
  20. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    LOL no (as if you would ever buy anything from AMD), but you're waiting for Alienware. What difference does it make?

     
    triturbo likes this.
  21. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Maxwell is already here. We can buy it. It's not a fabled product whose performance we don't know (which obviously is valid for the AW18 as well). AMD's next mobile cards are a huge unknown, both in performance and in release date.

    Nothing wrong with waiting if one believes it will be much better than Maxwell. I'm just curious why people are waiting instead of buying now. Is it price?
     
  22. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    The same reason you're waiting for the mythical new Alienware SLI notebook, because you believe it will be much better than the current Clevos on the market.

    LOL this is so perfect.
     
  23. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Do you seriously not understand that I can't just buy something that's not on the market yet?

    The Alienware 18 IS better than Clevo for a lot of reasons. The Alienware 18 already exists. I owned one. I know how it performs, and what I like about it over Clevo.
    AMD's next graphics cards don't exist yet. Nobody knows anything about them.
     
    Last edited by a moderator: Nov 29, 2014
  24. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    AW18 with 980M/970M SLI already exists? Anybody know anything about it? Does it come with soldered CPUs like the rest of the new Alienwares? That would make it perform worse than a Clevo in many people's eyes...
     
    Last edited by a moderator: Nov 29, 2014
  25. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Are you seriously not done yet?

    The M18x has been upgraded with 980M SLI by users. The AW18 will obviously perform the same.

    "Does it come with soldered CPUs like the rest of the new Alienwares"
    There is only 1 new Alienware and that is the AW13...
    And the AW18 is rumored to be discontinued. Try to keep up.
    Even if one did launch with soldered CPUs, there are other ways to go with the AW18.

    ....
     
    Last edited by a moderator: Nov 29, 2014
  26. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    [IMG]
     
    Last edited by a moderator: Nov 29, 2014
    maxheap likes this.
  27. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    I requested a cleanup in this thread. Let's just end this waste of time right here.
     
  28. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    People waiting for AMD: we don't know about performance, that's for sure, but a February desktop launch means June for mobile. You might be in for a long wait, just saying.
     
  29. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    Lol, I honestly don't care whether people wait or not; if they want to wait, let them. Also, if they want HBM then the wait is well worth it, especially considering the low memory bandwidth of mobile GM204 (if you ever wondered why the 980M is so far behind the desktop 970 in some cases, there is your answer). And with leaked benchmarks showing a Pirate Islands card more efficient than Maxwell even with unsupported drivers, I don't blame anyone for being interested.

    For 1080p and thin laptops like the GS60, the 970M is a great card, but there are people who probably want better performance and higher resolutions, and for higher resolutions the 980M isn't good enough. They know they will get something better if they wait, because the 980M isn't very good; it is a milking attempt after all, just like the 680M.
     
  30. 2.0

    2.0 Former NBR Macro-Mod®

    Reputations:
    13,368
    Messages:
    7,741
    Likes Received:
    1,022
    Trophy Points:
    331
    A'ight. You guyz iz funny doh.
     
    triturbo likes this.
  31. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    980m is not very good?! :eek:
     
  32. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    2.0™ wins the thread©®™

    And the reason the 980M isn't as good as the desktop 970 is that the memory is severely downclocked, from 7GHz effective to 5GHz effective. Bus width remains the same at 256-bit.
     
    heibk201 likes this.
  33. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Last edited by 2.0™; Today at 12:01 AM. Reason: jab retort removal service. Fee: $999.98 + Tax + VAT + LOL
    and
    Last edited by 2.0™; Today at 12:01 AM. Reason: HAHAHAHAHAHAHAHAHAHAHA.... erm... HAHAHAHAHAHAHAHA.

    I am legit dying of laughter.
     
  34. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    He even did the .01 discount there! Total bargain, black friday stuff!!
     
  35. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    You guys started charging money for cleaning up now? Has Charles been cutting down on the payments lately? :p

    If the following happens with the R9 M390X, it will be worth the wait:
    - 20nm process for reduced power consumption and heat.
    - HBM for greater memory bandwidth. I'm still very curious what this will bring to the table regarding performance, though.
    - XDMA engine for much greater scaling than Crossfire and SLI.
    - Much better price than the 980M. Maxwell is much cheaper than Kepler was, and you can get a 980M for $720. Not sure how much lower AMD can go.
    - Most importantly, better performance than the GTX 980M. With 20nm, XDMA, HBM and a launch, say, 4-6 months after the 980M came out, they'd better bring something better to the table.

    The GTX 970 isn't better solely because of the bandwidth. It has 128 more cores, 1664 vs 1536, and they are clocked over 100MHz above the 980M. Bandwidth-wise, based on the clock/core, it's only 24GB/s above the GTX 980M. The memory bandwidth on the GTX 980M is more or less spot on if the GTX 970 is perfectly calibrated for bandwidth.
    I'm not saying HBM wouldn't help, but I'm also curious and a little sceptical about what it can actually do for performance.
     
  36. triturbo

    triturbo Long live 16:10 and MXM-B

    Reputations:
    1,577
    Messages:
    3,845
    Likes Received:
    1,239
    Trophy Points:
    231
    Why, so you can come around and say that it's efficient only because it's on a 20nm process, and go back to glorifying the damn Maxwell? Just get one and call it a day already.

    At 1080p - nothing, really.

    To sum up, to even consider getting one it has to be WAY better AND cheaper?! Okay. I'll repeat myself: get the damn Maxwell and call it a day. It's already proven to work in Alienwares, and you'll be the first one to try it in an AW18, so what's not to like? As for me, I won't mind if it costs the same, or even more, given it's a more advanced product.
     
  37. R3d

    R3d Notebook Virtuoso

    Reputations:
    1,515
    Messages:
    2,382
    Likes Received:
    60
    Trophy Points:
    66
    HBM should also use less power running at the same bandwidth as traditional memory. So it could be used to further reduce power consumption rather than just to increase the memory bandwidth to the point of negligible performance returns.
     
    Cloudfire and maxheap like this.
  38. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    I don't get why bandwidth at higher resolutions is an issue for Maxwell, for example. Is GPU utilization low because of bandwidth at 3K-4K? I don't think so; I think there is enough bandwidth even with current models (for 3K-4K). What may matter is exactly the efficiency (I guess; then again I don't understand much about hardware, maybe there is some advantage to it).

    Also, people waiting for AMD might just be waiting because they like AMD more, just like we would for Nvidia, Cloud (well, I did not during Kepler :D). I think AMD has a real good ace in the hole atm (I just pointed out that it might take a while). Fingers crossed for the red team! :)
     
  39. Cakefish

    Cakefish ¯\_(?)_/¯

    Reputations:
    1,643
    Messages:
    3,205
    Likes Received:
    1,469
    Trophy Points:
    231
    From my memory of playing around with the 980M, increasing the core clock had a greater impact on the framerate than increasing the memory clock did, in Crysis 3, a bandwidth-hungry game, at 3K resolution.

    Obviously a more powerful GPU core will require more memory bandwidth to feed it.

    My point is that the 980M didn't seem to be bandwidth starved. The amount of bandwidth seems about right for the number of shader cores it has.

    But we've pretty much reached the limit of GDDR5 bandwidth for laptops, so something new is needed to keep feeding the ever growing mobile GPU cores and prevent future bottlenecks.

    Sent from my Nexus 5
     
    Cloudfire and Robbo99999 like this.
  40. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Nah. It's downclocked because nVidia wants it to be, and for no other reason. Also, memory bandwidth isn't all that helpful in most games. The huge memory buffer might help the 980M more than hurt it, because it doesn't need to empty its buffer for more data as quickly. If it had, say... 2GB of vRAM, that memory buffer would likely have been a MUCH larger bottleneck (from what I'm thinking).

    Also, a stronger core doesn't need more memory bandwidth to feed it. How much memory a game uses is up to the game; a cartoony game could use an insane vRAM footprint if the devs used super-high-resolution models and whatever... looks really have little to do with it. It's something people are often confused about XD.
     
  41. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    Memory clock and memory bandwidth are totally different things, no? (1250MHz memory and 256-bit bandwidth)
     
  42. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    You have it wrong.

    Memory clock (5000MHz effective) * memory bus width (256-bit) / 8 (to get bytes) = memory bandwidth

    So 5000 * 256 / 8 = 160,000 MB/s = 160GB/s

    Remember, a lowercase b is "bits" and a capital B is "bytes". So 1000GB/s is gigabytes per second, and 1000Gb/s is gigabits per second (divide by 8 for bytes; only 125 gigabytes). Internet speed is also measured in bits, which is why with, say, a 100Mbps connection you only get about 12.5MB/s download speed. Also, when using bits, people usually write bps instead of b/s, so 1GB/s and 1Gbps is how they're normally written.
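    A minimal worked example of that formula (just an illustrative Python sketch using the 980M and desktop 970 figures mentioned in the thread; the function name is made up):

    Code:
    def memory_bandwidth_gbs(effective_clock_mhz, bus_width_bits):
        # MHz * bits = megabits/s; divide by 8 for megabytes/s, then by 1000 for GB/s
        return effective_clock_mhz * bus_width_bits / 8 / 1000

    print(memory_bandwidth_gbs(5000, 256))  # GTX 980M: 160.0 GB/s
    print(memory_bandwidth_gbs(7000, 256))  # desktop GTX 970: 224.0 GB/s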
     
    Cloudfire and maxheap like this.
  43. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    Sorry, got it confused with bus width. Thanks mate!

     
  44. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Check the edit too
     
  45. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    They are different, but no, 256-bit isn't memory bandwidth, it's the memory bus width.

    Memory bandwidth formula: Memory Bandwidth (GB/s) = Memory Clock (MHz) / 1000 * Memory Multiplier (4x for GDDR5) * Memory Bus (bits) * (1 byte / 8 bits)

    And saying that bandwidth is unimportant just because the core is being fully utilized is false. What memory bandwidth does is feed the core with data from the memory; higher memory bandwidth means the core gets the data it needs sooner, so there is less delay during which the core does basically nothing. Now, the amount of data being moved from the core to memory and back at 1080P is relatively small. It gets much larger with higher resolutions, higher-quality textures, and AA. So memory bandwidth becomes much more important at higher resolutions, and especially in multi-GPU configurations (SLI/Crossfire).

    The way it works in SLI/Crossfire is that even though you have 2 or more cards, the bandwidth does not multiply, because both cards need to have the same data in memory, and because of Alternate Frame Rendering each card has to work on all the data necessary for the entire screen on alternating frames (I think this is where Civ: BE does things differently in Mantle, because each card gets to work on a portion of the same frame, so the latency is lower and frametimes are more consistent even though the framerate is a bit lower). The case is similar with framebuffer/VRAM size, where you'd want two 8GB cards versus 4GB on just one card, because again it doesn't double.
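    The same formula starting from the base memory clock instead of the effective clock gives the same number (again just an illustrative Python sketch using the 980M figures from earlier posts):

    Code:
    def bandwidth_from_base_clock(base_clock_mhz, multiplier, bus_width_bits):
        # GDDR5 is quad data rate, so effective clock = base clock * 4
        return base_clock_mhz * multiplier * bus_width_bits / 8 / 1000

    print(bandwidth_from_base_clock(1250, 4, 256))  # GTX 980M: 160.0 GB/s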
     
  46. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Remember that Maxwell uses compression algorithms to boost effective bandwidth 33% higher than previous generations.
    The GTX 980, with 224GB/s of reported memory bandwidth, beats the GTX 780 Ti (336GB/s) and the R9 290X (320GB/s). Improved bandwidth efficiency is one of the reasons.
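    As a rough back-of-the-envelope illustration of that claim (assuming the quoted 33% applied uniformly, which it won't in practice since the gain depends on the workload):

    Code:
    gtx_980_raw = 224               # GB/s, reported GTX 980 bandwidth
    effective = gtx_980_raw * 1.33  # hypothetical uniform compression gain
    print(round(effective))         # ~298 GB/s, approaching 780 Ti (336) and 290X (320)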
     
    Cakefish likes this.
  47. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    Look above your post

     
  48. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    It can't always compress stuff; it seems to mostly help with AA. And 290X Crossfire is faster than 980 SLI at 4K (780 Ti SLI falls even further behind, probably because of only 3GB of memory), probably because of higher memory bandwidth on top of better Crossfire scaling.

    I saw it later after I refreshed the page.
     
  49. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    780Ti has better memory bandwidth than any AMD card, and that applies also to SLI and Crossfire. You should check single R9 290X vs single 980 at 4k; apparently 970s have better SLI scaling than 980s for some odd reason right now. Single cards'll tell the best story.
     
  50. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    No, I said the 780 Ti only has 3GB of memory and probably very bad scaling in SLI too. And the 970 has nothing to do with SLI scaling when compared to the 980; 980 SLI is still faster, but the reason 970 SLI scaling appears better is that memory bandwidth is less of a bottleneck on it, since the 970 has much lower shader performance yet the same memory bandwidth.
     