The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    AMD's upcoming Carrizo APU details leaked

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Cloudfire, Feb 21, 2015.

  1. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    So AMD's new architecture is coming out soon. It's based on Excavator, unlike Kaveri, which is based on Steamroller.

    Charrizo will be a mobile platform (notebooks) and will feature APUs ranging from 15W up to 35W. The new architecture will be presented February 23rd.

    Videocardz posted the pictures yesterday on the forum, and wccftech published them.

    Short story:
    - Much higher density and more transistors: Kaveri has 2.4 billion transistors, Charrizo has 3.1 billion, and Charrizo has a 23% smaller die than Kaveri (rough density math below).
    - 5% IPC gain over Kaveri (5% better performance if both clocked the same)
    - 40% less power draw
    - Up to 2x the battery life compared to Kaveri
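    Rough density math, taking the leaked figures above at face value (my own back-of-envelope, not from AMD):

        # Implied transistor-density gain from the figures quoted above (illustrative sketch)
        kaveri_transistors = 2.4e9
        carrizo_transistors = 3.1e9
        die_shrink = 0.23  # "23% less die size than Kaveri"
        density_gain = (carrizo_transistors / kaveri_transistors) / (1 - die_shrink)
        print(f"Implied density gain: ~{density_gain:.2f}x")  # roughly 1.7x the transistors per mm^2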

    Thoughts:
    - A 40% reduction in power draw could leave room for a higher-clocked Charrizo APU with good performance. (A10-5750M: 3.5GHz. Charrizo: 4.0GHz? See the crude estimate below.)
    - I'm pretty sure the IGP part of the APU will get a substantial gain in GPU performance. Let's hope the CPU part does as well, because that would open up a range of new opportunities for AMD.
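    A very crude sketch of that clock-headroom guess, assuming dynamic power scales roughly with the cube of frequency (voltage scaling with clock); the model and the numbers plugged in are my own assumptions, not AMD's:

        # Crude clock-headroom estimate from the "40% less power draw" claim (P ~ f^3 assumption)
        base_clock_ghz = 3.5                 # A10-5750M clock quoted above
        power_reduction = 0.40               # claimed power saving at the same clock
        power_headroom = 1.0 / (1.0 - power_reduction)   # ~1.67x power budget freed up
        est_clock_ghz = base_clock_ghz * power_headroom ** (1 / 3)
        print(f"~{est_clock_ghz:.1f} GHz")   # ~4.1 GHz, in line with the 4.0GHz guess above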

    [Attached images: the leaked Carrizo presentation slides]
     
    Last edited: Feb 21, 2015
  2. bernieyee

    bernieyee Notebook Evangelist

    Reputations:
    57
    Messages:
    433
    Likes Received:
    50
    Trophy Points:
    41
    According to my research a few weeks back, it seems that AMD has dumped Carrizo for desktops despite what they said before.
     
  3. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah, doesn't matter what AMD (or anyone really) promises, it's what they deliver that counts.

    Wait and see. But I'm guessing more Intel platforms in my future and for almost 100% of my clients too.
     
  4. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    I've read about these changes in Carrizo.
    Also, one of the 'big' things about this architecture is that it supposedly (at least according to what someone stated a few months ago) incorporates use of HSA on a hardware level through a method called 'dynamic switching' if I am not mistaken.
    This would automatically enable software to be processed through the GPU as HSA would do... even software not specifically programmed for HSA to begin with. Though I would imagine that in such an instance only specific things are shuffled via HSA, so it wouldn't match what we would see with actual HSA-optimized software, but certain gains in performance and efficiency might still be seen.

    If this is the case, the CPU portion could end up about 20% better all round (I think I remember reading these figures before: clock for clock, about 20% should be gained, more or less).
    On the GPU side, some indications point to a 2x performance increase. This should make 1080p gaming (with at least the High preset) on this APU more than viable.

    Still, until we get actual tests of this architecture we don't know what the gains/changes are.
    I am excited to see it in action though... and hopefully it will come to pass.
     
  5. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    If they manage to squeeze out +20% more performance while still staying at a max of 35W, that's a major win in my eyes.
    AMD needs to offer an APU that has enough processing power to not bottleneck the mobile GPUs (I'm looking at you, Kaveri), and if this +20% is enough to overcome that, perhaps we will begin to see some more notebooks with AMD APUs + 980M/M295X. If they are good enough for that, and temperatures are much cooler than the fireball Haswell, I might consider going with an AMD APU.

    Twice the IGP performance will also net them more sales in the cheap notebook segment, but AMD already has great success there I think.
     
  6. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    There is no AMD APU that won't throttle a 980M or similar.

    20% additional CPU performance at 35W will only match what Intel has, or will soon be rolling out, at 15W. Yeah, the iGPU is higher performance (vs. Intel). But it too will probably be held back by the CPU it's attached to.
     
  7. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    We don't know whether Carrizo will throttle top-end dedicated GPUs.
    It might, but it also might not - without actually testing the chip in question, we don't know what kind of numbers it will produce. For now, we only have preliminary indications that came from AMD itself (and their figures have tended to be close to the mark in the past).

    As for 20% gain on CPU matching what Intel will have at 15W... well, we don't know what kind of gains Intel will have just by switching over to a smaller manuf. process.
    I wouldn't expect such gains for Broadwell... Skylake on the other hand is a 'maybe'.
    That, and the potential problem of increased temperatures on Intel's part, is another issue.

    This source indicates 20% gains... though that's just power reduction.
    http://wccftech.com/amd-carrizo-apu...amroller-die-consists-31-billion-transistors/

    Actual gains in IPC on the CPU side will probably be about 5% after all is said and done... still, with the addition of full-blown HSA, potential dynamic switching, and a 2x upgrade of the GPU... I dunno... I guess we need to wait and see.
     
    Last edited: Feb 21, 2015
  8. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Uh...isn't it spelled 'Carrizo'?
     
  9. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah, but I also spell 'dog-slow' as 'A-M-D' too. :)
     
    Kent T likes this.
  10. Starlight5

    Starlight5 Yes, I'm a cat. What else is there to say, really?

    Reputations:
    826
    Messages:
    3,230
    Likes Received:
    1,643
    Trophy Points:
    231
    I believe the best use for AMD APUs, praised for their strong iGPU, is ultrabooks without a dGPU. An AMD-powered convertible with an active/dual digitizer, upgradeable to 2x16GB DDR3 RAM, one day, maybe? :newpalm:
     
  11. Tinderbox (UK)

    Tinderbox (UK) BAKED BEAN KING

    Reputations:
    4,740
    Messages:
    8,513
    Likes Received:
    3,823
    Trophy Points:
    431
    Support for H.265, the new video compression codec.

    John.
     
  12. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    I guess we will find out tomorrow about the performance. The NDA ends February 23rd and I'm guessing reviews are ready :)
     
  13. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    Good luck having OEMs actually MAKING ultrabooks without a dGPU... or just plain laptops with a top-end Carrizo APU and no dGPU (that would be excellent for my nephew, for instance... or my sister), but paired with high-performance RAM (for IGP performance).
     
    Kent T likes this.
  14. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I agree, we'll find out soon.

    Starlight5, maybe 2x16GB DDR4...
     
  15. Starlight5

    Starlight5 Yes, I'm a cat. What else is there to say, really?

    Reputations:
    826
    Messages:
    3,230
    Likes Received:
    1,643
    Trophy Points:
    231
    Deks, most ultrabooks have either no dGPU or a dGPU so weak they'd be better off with ExpressCard/Thunderbolt for an eGPU instead.

    tilleroftheearth, by the time DDR4 becomes mainstream Carrizo might be obsolete.
     
  16. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Starlight5, okay granted. But... Carrizo is already obsolete for me.

    To be considered current (in time) it has to have surpassed Intel on the CPU front...


    And... going by the info we have today, 16GB DDR3 SO-DIMMs will not be mass-produced at all (or be compatible with most existing systems).
     
  17. Atom Ant

    Atom Ant Hello, here I go again

    Reputations:
    1,340
    Messages:
    1,497
    Likes Received:
    272
    Trophy Points:
    101
    A possible 35W Carrizo 3DMark11 benchmark:

    P2645
     
    Cloudfire likes this.
  18. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Last edited: Feb 24, 2015
    davidricardo86 and Atom Ant like this.
  19. Starlight5

    Starlight5 Yes, I'm a cat. What else is there to say, really?

    Reputations:
    826
    Messages:
    3,230
    Likes Received:
    1,643
    Trophy Points:
    231
    Meaning, Intel rapes it performance-wise once again? =/
     
  20. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Yeah, a 10-15% increase in CPU performance from a new architecture is very disappointing. Intel's 35W i7 CPUs from the soon-to-be-dead Haswell score over 7000 in the physics test in 3DMark11... that's over 2x as much as the FX-8800P on the same TDP. Broadwell and Skylake will have a massive increase in IGP performance, which means Intel might beat AMD in IGP and have a massive lead in CPU performance.

    Not very surprising considering Intel is at 14nm while AMD is still at bloody 28nm. I bet AMD might begin to lose money big time in the APU space too.
     
    Last edited: Feb 24, 2015
  21. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Hey, that's better than anything Intel has done since Sandy Bridge. Of course, Intel was so far ahead to begin with that they can coast and still blow AMD out of the water.
     
  22. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Have to laugh that Intel is coasting. :)

    And, it's easy to get that 15% increase when the ancient SB platforms still outperform it.
     
  23. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Have to laugh at your predictable comeback with Intel apologetics
     
    Kent T and Cloudfire like this.
  24. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
  25. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    26% performance increase with a 17% increase in clock speed? That's only 9% faster clock-for-clock. Dat 4 years of Intel progress. Sad.
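    Quick arithmetic on those two figures, for anyone checking the clock-for-clock math:

        # Clock-for-clock gain implied by a 26% perf increase at a 17% higher clock
        perf_gain = 1.26
        clock_gain = 1.17
        ipc_gain = perf_gain / clock_gain - 1
        print(f"{ipc_gain:.1%}")  # ~7.7%, roughly the single-digit figure quoted above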

    Haswell runs hotter than Sandy and consumes more power when overclocked. Also overclocks worse.

    And to top it off, you used PassMark again. Way to go.
     
    Starlight5 likes this.
  26. baii

    baii Sone

    Reputations:
    1,420
    Messages:
    3,925
    Likes Received:
    201
    Trophy Points:
    131
    Only thing that matters is price IMO; if they can deliver a system with better cost/performance than the G3258, then people are sold. Maybe push some touch-optimized Windows games for AMD Windows tablets.
     
  27. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631

    Doesn't matter, I use PassMark for AMD chips too...

    Also doesn't matter how you try to spin it - I get more performance today than I got in late 2012, so yeah... it still counts.


    What is sad is AMD trying to make a comparable platform and still failing after almost a decade of trying...

    (And I mean it too; I can only imagine what Intel would have brought to the table if AMD was on the same playing field).
     
    Starlight5 likes this.
  28. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    By citing PassMark you actually helped my point because it doesn't stress the CPU enough to let Haswell pull away from Sandy. I know for a fact that Haswell is more than 9% faster clock-for-clock. ;)

    The last time AMD was competitive was Phenom II vs. Nehalem, not a decade ago.
     
    TomJGX likes this.
  29. TomJGX

    TomJGX I HATE BGA!

    Reputations:
    1,456
    Messages:
    8,707
    Likes Received:
    3,315
    Trophy Points:
    431
    Indeed... After that they went down the APU road and Intel has just crushed them... Honestly, they need to put some focus into designing the CPU bit instead of the damn GPU bit...
     
  30. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    Meanwhile, I still own a Llano-based laptop that (vastly) outperforms any number of commonly sold Celeron and i3 laptops, for less battery draw. Never owned a more practical setup for small office/programming tasks, along with music/video playback. Until the i7u I have now (because of how it can be set to a reasonable processor speed and stay there), there's really been no contest for mobile office. Of course, the drawback with the i7u/nvidia combo I have is that I can forget about smooth 5W 1920x1080 video playback. Which the "true crap" AMD platform did well. But no one bought it, IT folks shun it, review sites pan it. And enthusiasts seem to imagine that all laptops, including all Intel laptops, are sold to people who constantly run heavy simulation software all day - something that most laptops based on Intel platforms wouldn't actually allow even if the chips technically were good for it. Instead, it seems people imagine it's worth a bit of pain with low battery life, overheating, wear on the components, curious instability hangs when switching between battery and power, etc., in order to have a 2s shorter execution time on the Excel macro they run a single time a day.

    Imo, kind of curious that people want to torture themselves with bad products in that way.
     
  31. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    octiceps, Nehalem was previewed in 2007 with a shipping model in late 2008. Okay, not a decade, but close enough considering it must have been under development at least 2-3 years...

    Phenom II was released in late 2008 too (with a future-incompatible socket), but it was based on the original Phenom that was also released in 2007, and considering the 5-year CPU upgrade cycle back then, it can hardly be considered anything less than decade-old tech.

    Either way, that tech is ancient today, even if we still have examples around that are still working. Especially when performance/watt is considered.

    nipsen, what model and Llano platform are we talking about? I have no doubt that in late 2011 or 2012 AMD may have offered such a platform. The 'problem' is that I have been using a superior Intel platform since 2010 in the U30Jc based on the i3-350m.

    I have to agree with TomJGX that AMD's focus on iGPU performance has effectively neutered their relevance to modern computing (ignoring gaming... and even then, the CPU is throttling their excellent iGPUs vs. what Intel can offer today), and I really can't blame manufacturers for seeing that and shunning them for it.

    The Intel race-to-idle idea has proven that more CPU horsepower is better not only for a vastly more responsive system, but also for a system that needs to sip power (when it has to).

    AMD's take that 'XX' amount of cpu HP is enough for today (or whenever they release a new platform) may be fine, initially.

    But that doesn't stop the OS, browsers, software and users themselves from advancing beyond where they were yesterday. And that is what makes AMD such a bad buy for most.

    Even if I recommend a system to a mere accountant, with a single/commercial piece of software that he/she depends on, an AMD platform is limiting them still - and they've seen this for themselves and ask for an Intel platform sooner, rather than later.

    The idea that 'good enough' is... well, nobody has that kind of time to waste anymore.

    And knowledgeable consumers buy the best available for their $$, and unfortunately AMD hasn't been on that list for me for a decade, or for any of my clients for almost as long.
     
    Kent T and TomJGX like this.
  32. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    *sigh* Look. I regularly sit in meetings with people who believe, from the bottom of their hearts, that a slick laminated slide with a round font cannot possibly claim anything remotely resembling a lie. And I understand that. After all, that's how they figure things out - if they can't judge the trustworthiness of something based on how bold the font is on the laminated slide, then their world would literally fall apart.

    So that is something I can at least grasp. It's a kind of test they use: if you impress me now, regardless of what you might actually say that resembles any degree of technical reality, then it's going to be good enough. They expect you to stretch the truth, and they use themselves as a test for whether it still sounds true enough.

    But most people, and I'm willing to bet, all of the people around here, are not invested in believing things that are technically sketchy. In the same way, I'm not invested in convincing you one way or the other.

    I'm just saying that for my specific use for a laptop... which is a word-processor (for example I used to really like the powermacs with just a screen and a Quark. Neat stuff, just cost too much... still, something to consider, that it took well over 10 years before something equally useful turned up on that part of the market again..), and the option to add music in my ears, run some programming IDE, and video and some games. When that is my requirement for use, and I wish that to be possible on the least amount of battery draw possible -- then I have more than one option to go to, for fulfilling that requirement. I was just saying that.

    As opposed to only having an i3 in a box that boils the plastic into the table at the end of the compilation run. You know... it doesn't harm anyone to admit that until you could run the ULV processors down to 800MHz last year, silent and passively cooled laptops based on Intel - even on the ultra-low performance tier - just didn't exist. That you'd get these cut-down Celerons "designed" for ultrathins - that still burn the goop on the chipset.

    In the same way, when people saddle a business customer with a "power station" that supposedly can leverage Intel's magical demons to drive 3d-accelerated contexts in "computer-aided design and crafting" programs, unlike anything else in the known multiverse, etc. Rather than give them a bloody iPad and a remote shell to the million dollar computer cave graveyard in the basement. Or at least convince them to deal with the inconvenience of the context shifts with "switchable graphics" in something that makes at least a little bit of sense hardware-wise. Anything. Heck, give them two laptops and a subscription to iCloud!

    ...when that doesn't happen, I'm just not convinced the discussion ever was about getting someone the best tool for the job. I mean, I can understand that not being the most important thing in all contexts. But when you rather go with an i3 from 2011 over a brazos or llano for writing on.... because the "performance" is better? - then you're just not making a choice based on criteria that are visible to the rest of us.

    I mean, you could be talking about installation quirks, you could be talking about experience with some specific software. There could be, and there are, a million programs written in old VB that start to crack if the processor speed isn't running stable at full burn all day, for example. These crack on ULV as well, because of the reliance on the on-demand governor, which causes irregular loops and so on. This kind of thing is well known for those of us who sometimes deal with rescue projects for certain well-paid Microsoft University folks.

    And this kind of thing could be one reason for choosing an older Intel board for a customer and sticking with it. There are other reasons as well, just as.. good. I know for a fact, for example, that a major company around here chose a specific HP solution with horrendously outdated hardware, for the sole reason of sabotaging employees that might want to play games on their computers. It makes no sense on any level (also doesn't stop people from playing games), but it was the argument that someone bought at some point in the process - and that's how everyone got their i3 "workstations". Still overheating, but now also painfully slow when they're loading their expensively bought clipart collection from server, and they take a coffee cup to boot up, etc.

    But you're not going to tell me that no one in the whole world would /possibly/ want a computer that works as a typewriter, that generally can do most things, that can run hd video for a 3h train-ride without melting, etc. And then seriously claim that an i3 with internal graphics can easily do the same for lower power draw anyway. Because you know it's bs.

    Like I said, I would not have bought the i7u I have now if it wasn't possible to run it for normal tasks at around 8-15w. From my perspective, that ULV launch was Intel's saving grace, right? Without this, my battery would drain in 4h as opposed to 8h.

    Meanwhile, even that kit isn't able to run a 1920x1080 video at any half-decent quality without turning up the draw to at least double. Because the IGP's decoder is inefficient to the point of sheer embarrassment if you compare it to the APU cores. It's just not in the same league. It's basically like comparing a Radeon card to an Intel IGP - not a real comparison. It draws a lot of power, and it doesn't provide the performance. In fact, it's more efficient to load something graphics-intensive on the nvidia card and idle the processor as much as possible. So, like I said, even the kit I have now with the very latest "HD Graphics" can't actually match the feature set of one of the early Brazos chipsets for certain uses.

    So while other segments of the market might have different requirements. I don't know.. maybe people want to run a web-page in IE 8 that redraws the accuracy of a masonry tile setup fifty times every millisecond or something... and so they have other concerns than battery life and minimal processor grunt, and don't care. But that's not the outlook in the low-tier mobile laptop market.

    Because there, what we're getting in that segment as consumers, is **** on top of ****. So don't "insist" that what I /actually/ want is an overheating rig with abysmal, laggy graphical performance, that drains the battery all of a sudden because I opened Facebook (so I have to move away from the sun behind the house to find my charging cable), and that costs five times as much. Because I really don't want that.

    More generally, though, let me just say one thing... what the hell, guys? They're posting something that scores 2500 3DMark11 marks at 35W, and somehow... it's a failure in all possible ways that count? What in the nine hells are you guys smoking? You've got something here that will run a 3D desktop with no lag at 5W, that can add some advanced WebGL or a 3D context on top of that for literally no noticeable additional battery draw. And it's... yes, wait for it... it's... failure! Oh, lord! Utter failure!

    Makes no sense.

    But hey. Each to their own, I always say. By which I mean, I never actually mean that, but simply always make people agree to say that that's what I always say in the end.
     
  33. bibacula

    bibacula Notebook Enthusiast

    Reputations:
    148
    Messages:
    34
    Likes Received:
    3
    Trophy Points:
    16
    Carrizo appears to be optimized for low power instead of high performance (high clock rates). That's probably why there is no desktop chip.

    If AMD is successful, a 15W Carrizo should only slightly underperform a 35W Trinity.

    While Intel ULVs have done this already, processor competition for thin and light notebooks would be very welcome.
     
    Last edited: Feb 25, 2015
  34. Atom Ant

    Atom Ant Hello, here I go again

    Reputations:
    1,340
    Messages:
    1,497
    Likes Received:
    272
    Trophy Points:
    101
    That ~2600 3DMark11 score is not bad at all, equal to a stock Radeon 8850M, and that is pretty capable for most games at 720p. The question is just whether they solved the throttling issue. In 3DMark 11 the GPU can run at top speed, but in games both the CPU and GPU parts used to throttle on AMD APUs...
     
  35. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631

    Sorry, but we're obviously not smoking the same stuff...

    You said;
    No, you've misread what I wrote; I just said the i3-350M that I bought in 2010 has just as good performance and just as good battery life as the Llano from two years later...

    That means a great deal to me, but you obviously don't care with your laid back usage.

    NP, we're good. ;)
     
  36. davidricardo86

    davidricardo86 Notebook Deity

    Reputations:
    2,376
    Messages:
    1,774
    Likes Received:
    109
    Trophy Points:
    81
    From what I've gathered, the power management in Carrizo will be closer to that of Mullins/Beema, which utilizes a more effective Turbo. It is able to exceed the TDP limit for longer periods of time, similar to what I've noticed Intel CPUs do.
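    A minimal sketch of that idea (purely illustrative, not AMD's or Intel's actual algorithm): a boost controller can allow the instantaneous draw to exceed the nominal TDP as long as a rolling average stays within budget, which is roughly how the Mullins/Beema-style turbo is described as behaving.

        # Toy model: turbo may exceed TDP briefly while a rolling average stays in budget
        TDP_W = 15.0
        WINDOW = 60                        # hypothetical averaging window, in samples
        samples = []

        def allowed_to_boost(current_draw_w):
            samples.append(current_draw_w)
            if len(samples) > WINDOW:
                samples.pop(0)
            return sum(samples) / len(samples) <= TDP_W

        for _ in range(30):
            allowed_to_boost(4.0)          # idle samples build up power headroom
        print(allowed_to_boost(25.0))      # True: a short 25 W burst keeps the average under 15 W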
     
  37. Starlight5

    Starlight5 Yes, I'm a cat. What else is there to say, really?

    Reputations:
    826
    Messages:
    3,230
    Likes Received:
    1,643
    Trophy Points:
    231
    Now everything depends on whether vendors release not-too-bad Carrizo-based notebooks in a reasonable time, or f everything up like they did with Kaveri.
     
  38. davidricardo86

    davidricardo86 Notebook Deity

    Reputations:
    2,376
    Messages:
    1,774
    Likes Received:
    109
    Trophy Points:
    81
    Nipsen, this reminded me of what you said:

     
    Link4 likes this.
  39. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    Well, I think Carrizo would be a great mobile processor, especially considering that even with all the extra transistors from the integrated FCH, it still beats a 35W Kaveri plus a ?W FCH by a decent margin within a 35W TDP (possibly using less power on average than the Kaveri APU alone under full load). Also, it would be great if we got more nice notebooks like this https://www.asus.com/Notebooks_Ultrabooks/N551ZU/specifications/ closer to launch and within $800-900. Sadly, the N551ZU launched late and is only available in a few countries so far, and the cheapest price I have seen was about 970 Euros (I don't know how much that would be in USD without all those extra taxes, and that's the 256GB SSD version; ASUS prices laptops with SSDs very high).

    Edit: And here is one running Crysis 3!
     
    Last edited: Feb 26, 2015
    davidricardo86 likes this.
  40. Atom Ant

    Atom Ant Hello, here I go again

    Reputations:
    1,340
    Messages:
    1,497
    Likes Received:
    272
    Trophy Points:
    101
    That sounds good, I hope it will work that way. I wonder if it supports 2400MHz DDR3?
     
  41. davidricardo86

    davidricardo86 Notebook Deity

    Reputations:
    2,376
    Messages:
    1,774
    Likes Received:
    109
    Trophy Points:
    81
    I believe the top-of-the-line Carrizo FX-8800P will use the same memory as the top-of-the-line Kaveri FX-7600P: DDR3L 1.35V 2133 MHz. Maybe someone will figure out how to "unlock" 2400 MHz like on the desktop Kaveri models? Then again, it will be a while till we see desktop-based Excavator parts, so it may not happen at all or may be near impossible.


    Regardless, AMD's Carrizo should prove to be quite a potent, energy-efficient mobile SoC APU when it comes to battery life if it is indeed using power management closer to that of Mullins/Beema (which I've read it is). Here's something really cool and telling of what we may see from Carrizo, using current platforms.


    13.3" HP Pavilion x360 convertible (AMD vs Intel)

    Battery Runtime

    28nm 15W Beema A8-6410 43 Whr
    Idle (without WLAN, min brightness): 9 hr 48 min
    WiFi Surfing: 5 hr 48 min
    Load (maximum brightness): 2 hr 24 min

    22nm 15W Haswell i3-4030U 43.5 Whr
    Idle (without WLAN, min brightness): 7 hr 10 min
    WiFi Surfing: 5 hr 45 min
    Load (maximum brightness): 1 hr 40 min

    Power Consumption

    28nm 15W Beema A8-6410 43 Whr
    Idle: 4-6.8 W
    Load: 19.1-23.8 W

    22nm 15W Haswell i3-4030U 43.5 Whr
    Idle: 4-6.1 W
    Load: 26.1-29.7 W
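    A quick sanity check on those Beema figures, assuming the idle draw averages out near the low end of the measured 4-6.8 W range (my assumption, not from the review):

        # Battery capacity divided by assumed average idle draw vs. the measured idle runtime
        battery_wh = 43.0
        avg_idle_draw_w = 4.4   # assumed average within the measured 4-6.8 W idle range
        print(f"{battery_wh / avg_idle_draw_w:.1f} h")  # ~9.8 h, close to the 9 hr 48 min result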

    Important
    Hopefully we don't see as much throttling with Carrizo and Carrizo-L as shown here (and with other previous AMD APUs); however, for energy-efficient mobile APUs it's almost unavoidable, and some OEMs seem to do more fine-tuning than others in order to hit those battery runtime and power consumption figures, so performance may suffer. My previous HP EliteBook 725 G2 (Kaveri 19W A10 PRO-7350B) throttled similarly, but even with the 46 Whr battery I only really saw about 4-5 hours of WiFi surfing with minimum-to-medium brightness. The additional FCH chipset and 1080p IPS touch display probably contributed to this short battery runtime too. Carrizo's integrated FCH chipset and revamped power management should bring about some decent improvement in overall battery runtime and power consumption. Will it be enough to keep up with the competition? I think so.
     
    Last edited: Feb 28, 2015
  42. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    davidricardo86, how does it help to compare what is coming in the next few weeks/months (AMD) with what has been out for over a year and almost 9 months (official Haswell launch was June 4, 2013)?

    Any comparisons with Broadwell?

    That is what will be telling, battery-wise and generational-wise too.
     
  43. davidricardo86

    davidricardo86 Notebook Deity

    Reputations:
    2,376
    Messages:
    1,774
    Likes Received:
    109
    Trophy Points:
    81
    EDIT PART 1:

    Tilleroftheearth,

    I'll admit, this is mostly pure speculation on my part regarding Carrizo's (and Carrizo-L's) near-future battery runtimes and power consumption, using Beema (Puma uArch) as a base. However, my example here was simply to show, or try to extrapolate, that AMD already has/had the 'stuff' it needs (and will be/is using it) to see similar results from their top-of-the-line Excavator APUs and remain competitive in the mobile market when it comes to battery runtimes and power consumption, even when using an 'inferior' node process. And that is exactly what I think we're going to see with Carrizo going up against Broadwell-U (Core-M intro September 2014). AMD will remain on a further refined/optimized but 'outdated' 28nm node while Intel will continue on to the newer and 'much more advanced' 14nm node. Amazing? I think so, especially using 'off-the-shelf' power management from Mullins/Beema in Excavator, and it may be enough to remain competitive long enough till a Carrizo 'refresh' or even Zen makes an appearance.

    Intel's Broadwell parts will likely continue the 'trend' we've been seeing over the past recent years of being more 'efficient' than what AMD has to offer however this Puma vs Haswell example seems to show/say otherwise and is almost an 'apples-to-apples' comparison with what's available in the market right now (with almost all other hardware variables being the same and the only difference being the SoC).

    According to CPU-World, AMD's Puma A8-6410's introduction date was June 2014 while Intel's Haswell i3-4030U's introduction date was April 2014 but you are correct in stating that Haswell was introduced back in June of 2013. Unfortunately AMD's and Intel's product launches do not align with each other as much as we'd like a fair comparison so we cannot do anything about this. It is what it is, AMD is behind.

    I suppose I could compare the AMD A6-5200 (25W, Jaguar uArch, Intro May 2013) instead? But then I wouldn't have an equivalent HP Pavilion x360 convertible to compare with as this model did not exist back then with the A6-5200. I'm just saying.

    Conveniently the HP website is currently down for maintenance right now as I wanted to see if a Broadwell-M based HP Pavilion x360 convertible is available for purchase (or even reviewed by notebookcheck) to compare to Puma but I cannot access their site at this exact moment. I suppose we may have to wait for an equivalent Broadwell-U and Carrizo (or even Broadwell-M vs Carrizo-L) platform to be available with similar specs like the x360 example to make a more fair comparison.


    EDIT PART 2:

    I found some battery and consumption improvement estimates/figures from AMD and Intel for comparison's sake and added some information regarding reduced power consumption resulting in longer battery run times. Of course this is all from marketing slides, so as always it's best to wait for actual products, like my example above, and real-life testing.

    http://www.theregister.co.uk/2015/02/24/amd_carrizo/


    http://www.anandtech.com/show/8814/intel-releases-broadwell-u-new-skus-up-to-48-eus-and-iris-6100/4

    http://www.anandtech.com/show/8995/amd-at-isscc-2015-carrizo-and-excavator-details


    Do not forget, Broadwell-U does not integrate the on-package 32nm PCH southbridge into the CPU/iGPU die; it uses two separate dies, unlike Carrizo's on-die 28nm FCH! This will help Carrizo fend off the 'more advanced' 14nm FinFET Intel Broadwell-U uArch, or at least allow competitive power consumption and battery run times. AMD fit more into one die; Intel didn't, and can therefore allocate more room across two dies and to the CPU, in my opinion.

    http://www.anandtech.com/show/8814/intel-releases-broadwell-u-new-skus-up-to-48-eus-and-iris-6100


    [Attached image: AMD Carrizo's 'inferior' die and package]
     
    Last edited: Mar 1, 2015
    Cloudfire and tilleroftheearth like this.
  44. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    davidricardo86,

    My questions were mostly theoretical (at this point), but you went above and beyond and presented the available information concisely and accurately.

    Great post, and a good summary of where both platforms are and where they might end up in the near future!

    I still have my reservations about how the final outcome will play out, but I can agree that your version/conclusion of future events may very well come to pass.

    I have seen in my own workflows that GPU/iGPU prowess is important to my productivity; however, what will increase it more is support for more/larger/higher-resolution monitors, not gaming prowess. And I'm sure that will eventually come to pass from both vendors, given enough time.

    But the biggest increases to productivity I can see in my lifetime are tied to the basic building blocks of computing: CPU + RAM = work done, or productivity. With AMD seemingly ignoring the CPU part of the equation over and over again, they become less and less a platform worth considering. With Intel concentrating more or less equally on performance and efficiency in their mobile products (a 1% increase in performance... with not less than a 2% increase in battery life), their design philosophy is in better tune with what the world needs (but perhaps not gamers, at this point).

    I cannot for a second believe that computing has or will ever become 'good enough' (even with 6-core/12-thread CPUs and 128GB of RAM or more...). Yeah; not even for 'casual' users. Not even for grandmas looking up recipes on Google. Or even for anyone that wants a dynamic platform that can sip power like a miser, yet when called upon, provide performance too.

    When productivity stops advancing what is affected is our time. Even our 'off' time/play time... and even if it's 'only' affecting it milliseconds per time slice. That is a trade off I cannot make anymore (long gone are the days of thinking I would live forever and thinking Superman was a good, but not undefeatable foe - even if he is normally the good guy... :) ).


    Nominally, I agree battery run time is the most important aspect of a mobile system.

    I take that one step further and say that the amount of work I can produce on a single charge away from the wall outlet is the better system. Regardless of the absolute run time.

    For example, when I was able to provide on (remote) locations a preview of the day's shoot for my client, not only did I save time in uploading GB's worth of files to a server (and my client the same in downloading the same files), but I was able to select the files the client was actually interested in more accurately because their comments as the shoot progressed were still fresh in my mind.

    This may have been a mere couple of hours run time at the most, but that was all that was needed to increase my productivity tenfold.

    No system with substantially less performance would have given me that edge. Not even if the battery run times were increased by an order of magnitude. First; because the client would not wait an hour or two longer (up to double or more) for that stage to complete. Second, I would not last working straight that long either (these are already 18 hour days).

    To relate this to gaming... when I see my client's kids take 5 to 10 minutes to get a gaming system up and running to play for '10' minutes before bedtime, I clearly see the need for more HP from any system and any workflow known to man.

    And of course, those are not even battery powered setups.

    In 1996/97 I began being interested in mobile systems because they had finally reached a critical point; enough performance, (barely) enough battery time, and their weight was almost down to single digits (finally).

    But I have never regretted buying the most powerful examples I could afford. Nor have I ever thought I had bought the last/best/most powerful system I will ever need either.

    This is what AMD needs to get: CPU performance is still king. Any other thinking will make them a footnote in the history of computing. Sooner rather than later.
     
    Kent T likes this.
  45. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    The thing here is that AMD was able to achieve far greater leaps in efficiency and integration on a (granted, 'high performance') 28nm node compared to Intel and its 14nm.

    Problem is, most people won't see or care about HSA 1.0 for instance, or the premise that the northbridge was moved to the same die, etc, etc, etc.

    We have yet to see actual numbers from Carrizo and how it performs not just in benchmarks, but also in real-life situations.

    Having said that... there's also the matter of the upcoming Zen architecture, which is supposed to be on 16nm and will come next year.
    And if Carrizo is any indication... all of the things seen here will probably be seen in Zen, and quite possibly then some.
    Jumping to 16nm with all AMD did with Carrizo, plus a new architecture to boot, I don't know just what kind of result it will be... but I can say this: if AMD decided to integrate HBM onto Zen, for instance... that thing would quite likely trump anything Intel has in the graphics department, given the level of integration seen here, and would also tremendously aid HSA; in combination with (hopefully) a lot more powerful, overhauled CPU cores, AMD just 'might' be back in the 'game' next year.

    I don't think anyone expected Carrizo to trump Intel in CPU performance this year, but one has to admit the strides they made on a 28nm node.

    But... even if AMD comes out with a Zen next year that turns out to be far superior to anything on Intel's end... would the OEMs still care, given that Intel can just as easily continue their bribery and tactics?
     
  46. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Deks,

    I really want AMD to dropkick Intel into another dimension (and wake them up/get them angry/hungry again). But that is not what logic and their historic response would indicate.

    How many years has Intel been past caring about 28nm for their cpus? 5 years? Almost 6 years?

    Given that kind of perspective, it doesn't matter how perfect AMD offers such ancient process technologies today.

    But Zen may be able to change the downhill course AMD has been riding for so long. We'll see.

    As for Intel's 'bribery and tactics'; it is called business with the big boys.

    I wish I was even a small part of that club. ;)
     
  47. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    Well, my point is that AMD is essentially stuck on 28nm because their main focus is on GPUs, and GlobalFoundries (or is it TSMC now?) is still using that process; plus, AMD doesn't have the finances Intel does.

    Also, while Intel may not really care about 28nm... AMD will essentially move to 16nm next year and with everything we are seeing in Carrizo, Zen might actually trump Intel despite it using 14nm (although some people have argued that Intel's node process isn't a 'real 14nm' - whatever that means).

    Regarding Intel's bribery and tactics... I wouldn't call that 'business with the big boys'... it is disgusting (a prime example of what's wrong with the system as a whole and how idiotic it is that humanity is still using it - but that's beside the point).

    It's intentional manipulation to force a 'competitor' out of the so-called 'game' just because they have more cash to throw about.
    Intel was even sued over its practices in the past... although if you ask me, that $1 billion fine wasn't nearly enough.
    Plus, people seem to blame AMD for the lack of products appearing in the market (and on time) and also blame it all on the inferior CPU part... which is plain and simple ignorance.
    AMD APUs are more than enough for the 'average consumer' (regardless of the CPU part being inferior)... problem is, AMD cannot force OEMs to use their products.

    With little to no OEMs using AMD products, Intel has free rein to flood the market with garbage products and lull consumers into buying them, and AMD cannot even reach the average consumer anymore.
     
  48. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Lemme guess, you're one of those thieves that work on Wall Street?
     
  49. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    'Stuck on 28nm' is the key phrase.

    Agreed; business with the big boys is disgusting for us mere mortals. And intentional manipulations to force out other competitors is called... yeah; doing business.

    Every single client that has seen how my Intel systems work has sooner or later asked to be at that level too. There is no such thing as an average consumer - all manufacturers lie - but my clients that once believed the AMD marketing, do not any more.

    Intel has no free rein over me to force me to buy garbage. I buy with my eyes open.

    If and when I see an advancement over what I have now; then I part with my money.

    And I go out of my way to compare AMD systems when I'm in major purchasing mode - but they have consistently failed to deliver for longer than is good for them and the industry in general.
     
  50. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    No, the part of the club I want to be in is the 'business with the big boys' club.

    Instead, I'm here trying to help where I can.
     
    TomJGX likes this.