http://www.saturn.de/webapp/wcs/sto...nel=sedede&searchParams=&path=&query=FX-8800P
700 EUR (roughly £501).
APU: FX-8800P (2.1 GHz base, 3.4 GHz Turbo)
GPU: AMD Radeon R8 M355DX
RAM: 8 GB DDR3, 2 x 4 GB (no word on the speed or timings though)
Screen resolution: 1920 x 1080 (TFT LCD - Full Matte)
HDD: 500 GB
Specs seem decent if you ask me (unlike the HP atrocity in the USA) ... though I have no idea if the 8800P is configured at 15W or 35W (I think it might be 35W, considering that some people tested this APU in a limited fashion during its presentation).
The likely downside is that both RAM slots are occupied, so upgrading to more memory would mean pulling the current sticks and replacing them with, say, 2 x 8 GB (low timings, high speed), which adds to the cost - but I think it would be worth it for an appreciable bump in APU performance and double the memory capacity.
With a RAM and SSD upgrade, this could perform quite nicely (though the price tag goes up in that case to roughly £650 or £700 - which I think is not too much when you take into account you can get these upgrades later).
-
tilleroftheearth Wisdom listens quietly...
For almost $1K CDN (plus taxes), these are nowhere near 'decent' specs for me.
The RAM is half of minimum. The HDD is a... HDD... The processor is 5% better than a 2012 AMD based model... sigh...
The screen seems like it could be decent at that size (15.6") - but not worth it with an AMD non-solution inside.
We are effectively in Q3 2015 - anything less than a QC HT 8-thread-capable system with maxed-out RAM is a waste of money if the objective is keeping the system for longer than ~18 months (max). And at $1K CDN, these aren't throwaway prices, imo. -
It has a decent 3DMark score for something running at up to 35W. Not exactly a gaming laptop, but if it turns out you can run around 15W while decoding HD film, or running limited 3D applications, it's not a bad pick. A Photoshop winner as well... If you compare it to, say, a 17-30W Nvidia/Intel package, you're getting a lot of performance for very little battery draw. So if it turns out that someone manages to put this in a laptop with a good lithium-polymer battery, and goes for a power mode that maintains a passive cooling scheme... this might not be a bad setup. Might.
-
Actually, the 35W FX-8800P is about 15% better than the 35W FX-7600P from 2014, so it is a significant improvement - better than the generational Intel improvement of 5-10%, at least. In 15W mode, though, it is easily over 50% better than the FX-7500.
-
tilleroftheearth Wisdom listens quietly...
http://wccftech.com/amd-carrizo-apu...amroller-die-consists-31-billion-transistors/
Until I see a CPU PassMark score of over 8K (even if it is at 35W), these are primarily netbook chips in my view - GPU performance? Don't give a damn. -
That wccftech article is from 4 months ago.
Also, CPU PassMark is no real indication of how a system will perform in real life (its numbers are all over the place, for one thing)... neither are synthetic benchmarks in general.
It is accurate that AMD stated the 35W FX-8800P is up to 15% faster than the previous iteration.
Carrizo managed a single-threaded IPC gain over Kaveri of about 5% at 200MHz lower clocks (3.6 vs 3.4 GHz), while the gains were 15% for multithreaded tasks in Cinebench (which I hardly think is representative).
Then again, at 15W, Carrizo did improve IPC by 50% - mainly because the architecture is optimized for lower power.
But as I said, this is not exactly representative of real life situations.
The iGPU was revamped with the Tonga-based architecture, including colour-compression algorithms that reduce bandwidth requirements, resulting in increased performance.
Plus there's fully fledged HSA 1.0 included - so certain things, I think, will inherently run much faster, as they should be routed through HSA automatically (but we have yet to see how Carrizo behaves in the real world). A rough sketch of what that looks like in code follows below.
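To make that concrete, here is a minimal sketch (my own, not AMD sample code) of what HSA buys you in practice, through OpenCL 2.0 shared virtual memory, which is the API-level face of Carrizo's HSA hardware. It assumes a fine-grained-SVM device (Carrizo is supposed to be one) and an OpenCL 2.0 SDK; the build line is an assumption too:

```c
/* Sketch only: CPU and iGPU share one allocation, no staging copies.
 * Assumes fine-grained SVM support (otherwise host access must be
 * wrapped in clEnqueueSVMMap/Unmap). Build guess: gcc hsa.c -lOpenCL */
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void scale(__global float *v) {"
    "  v[get_global_id(0)] *= 2.0f;"   /* GPU touches the bytes the CPU wrote */
    "}";

int main(void) {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    /* one allocation, visible to both sides - no clEnqueueWriteBuffer */
    size_t n = 1024;
    float *v = clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);
    for (size_t i = 0; i < n; i++) v[i] = (float)i;   /* CPU writes */

    clSetKernelArgSVMPointer(k, 0, v);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clFinish(q);
    printf("v[3] = %f\n", v[3]);                      /* CPU reads the result */
    clSVMFree(ctx, v);
    return 0;
}
```

The point being: on a discrete GPU that array would cross PCIe twice, while with HSA the pointer hand-off is the whole transfer.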
The reason I said the specs were decent was because already Kaveri was able to manage various games at 1080p at medium settings.
Carrizo should be able to do high at 1080p by comparison (at least) for most games.
Plus, the Acer laptop in question comes with a 1080p screen - where exactly did we get such offers before with AMD APUs?
Also, the lack of RAM options and an SSD is on the OEMs (besides, it's not like Intel systems in the same price range don't come with low-performing mechanical drives and little RAM - of course they do) - but a user upgrade is not too expensive and can end up cheaper than what the OEMs offer.
Certain Intel systems are inherently weaker in graphics alone even though they are in the same price range (if not more expensive).
As for not keeping this kind of system for more than 18 months... uhm... there's the little thing known as DX12, which will probably alter the software scenery much in APUs' favour and take advantage of the hardware a lot better as far as games go (and professional software too), because APUs (Kaveri and Carrizo) have been specifically designed to take advantage of it. So I think the system in question would be very worthwhile to have long term (students, working parents, etc.). Plus, Photoshop already takes advantage of APUs and sees a large performance increase as a result, without needing a dedicated GPU.
A beefy CPU is not the be-all and end-all any more.
Yes, it is a contributing factor, but most people who have laptops and do everyday tasks probably don't bother themselves with that to begin with - and why would they, since the differences would be unnoticeable to them?
All they want is a system that runs the OS fine: YouTube, internet browsing, word processing and other similar tasks.
I think APUs fill this segment quite nicely, while offering much better graphics performance, which allows users to play many games at decent settings at native resolution (this includes online games). -
tilleroftheearth Wisdom listens quietly...
As you may know, benchmark scores are not what I live or die by. But with CPUs, a PM score of 8K+ is more than twice as useful to me as a score of half that or less... and again, only QC 8-thread-capable CPUs need apply.
I won't repeat your whole post; suffice it to say that you're seeing the glass half full while I'm seeing the AMD glass almost empty. Each point you make is exactly the opposite of what I see in real-world use of the newest platforms - the GPU only matters for very limited workflows (games...) and none of the workflows I need to do (even in PS CS6).
The latest platforms (Broadwell) show significant jumps in everyday tasks - making anything below them ancient and effectively obsolete. Yeah, the older platforms do the mundane stuff okay today... but I can see the writing on the wall, and it's saying don't bother with underpowered systems because, yeah... CPUs do matter (they always will). -
Like any other benchmark... I take them with a grain of salt.
They are at best a guideline... and as previously mentioned, not an indication of real life performance.
We haven't even seen Carrizo tested in 'real world' scenarios, so until it's put through its paces and we know what it can actually do, I wouldn't make any assumptions.
And besides, you have conveniently left out DX12, a very important thing that will accelerate tasks not only in games but in professional software as well, which is only now starting to make the transition into this area.
When Kaveri was tested with HSA in LibreOffice, for instance, it beat even top-end desktop Intel systems by a large margin (a sketch of the kind of kernel that path runs follows below).
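For a feel of why a spreadsheet benefits, here is a hypothetical OpenCL C kernel in the spirit of what Calc's OpenCL backend generates for something like =SUM(A1:A1048576) - not LibreOffice's actual generated code, just the general shape of a work-group reduction (host setup would look like the SVM sketch earlier in the thread):

```c
/* OpenCL C kernel, illustrative only. Each work-group sums a slice of
 * the column in fast local memory; the host adds the per-group partials. */
#pragma OPENCL EXTENSION cl_khr_fp64 : enable

__kernel void column_sum(__global const double *cells,
                         __global double *partial,   /* one slot per group */
                         __local double *scratch,
                         uint n)
{
    uint gid = get_global_id(0), lid = get_local_id(0);
    scratch[lid] = (gid < n) ? cells[gid] : 0.0;
    barrier(CLK_LOCAL_MEM_FENCE);
    /* tree reduction inside the work-group */
    for (uint s = get_local_size(0) / 2; s > 0; s >>= 1) {
        if (lid < s) scratch[lid] += scratch[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (lid == 0) partial[get_group_id(0)] = scratch[0];
}
```

Thousands of cells get summed in parallel on the iGPU's wide SIMD units - exactly the kind of work a big iGPU does far better than a couple of CPU modules.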
As for Broadwell showing 'significant jumps' in everyday tasks... I'd definitely like to see what kind of tasks you're talking about, because the average person won't be able to tell the difference for the most part.
Performance of everyday tasks is largely dependent on the speed of your storage unit (HDD and/or SSD).
A slow processor can certainly make things unpleasant, but APUs such as the FX-8800P are far faster than Atoms, for instance - and this isn't an E-450 here either.
Also, if your workflow requires strong single-threaded CPU performance, then AMD APUs (at least the ones available until now, incl. Carrizo) probably aren't for you (unless the software you use gets optimized for DX12).
And if you also recall, I mentioned that APUs were mainly designed for everyday tasks and some occasional gaming.
I think it would be more than viable for the majority of people. -
Intel's IPC advantage over Excavator is less than 40% now; Excavator has about 75% of Broadwell's IPC, which is quite good. Multicore performance is on par with 28W dual-core, 4-thread mobile i7s, and last time I checked they put the i7-5500U in gaming laptops, which is only on par with the FX-7600P in multithreaded work.
-
Good to know.
This actually bodes well for Zen when it comes.
AMD stated that it will feature about a 40% increase in IPC alone... of course, this doesn't take into account other changes that will affect performance (namely the one-core/two-thread SMT design, which will operate more like Intel's).
AMD is also supposedly aware of consumers desiring an APU with HBM.
HBM2 should be ready before Zen however, so I'm hoping to see that in operation. Heck, even HBM1 would do the trick for an APU.
At any rate, keep an eye out for more Carrizo laptops out there. Would be good to see more in circulation.
Also, is Acer doing anything to promote these laptops with Carrizo? -
tilleroftheearth Wisdom listens quietly...
See:
http://www.anandtech.com/show/9185/intel-xeon-d-review-performance-per-watt-server-soc-champion
Please bear with me - I know this is a Xeon server review, but it does showcase what Broadwell is capable of.
More importantly, it also shows what AMD is only wishing it could achieve. Carrizo is not the answer to Intel's Broadwell at this time, let alone Skylake soon.
HSA? DX12? All of that will be included everywhere if it proves a successful attack angle for processors going forward. It is not an AMD exclusive except for 'right now'. And even then, if/when Intel decides it is needed to stay ahead (or to keep up with the competition), we'll get it. But it is of no use when almost every other relevant performance factor is in Intel's favor. (Libre Office? Why?)
To AMD... keep up the good fight. Stay strong. Stay focused and deliver tangible real-world results - not empty promises that cater to gaming-oriented and other non-professional workflows. Games, like every other piece of software in existence, will always grow to dominate and choke any platform, no matter how great it is at introduction.
Sure, the same applies to Intel. But what Intel is concentrating on is what IS most important: performance, power efficiency and price. Paying more isn't a sin or a sign of incompetence. At least not when the higher-priced part is more efficient, offers more performance and also offers a longer usable lifecycle (allowing me to be more productive for longer, for the same relatively low initial cost).
That is what AMD should concentrate on, imo, to give Intel a sense of urgency once more (like in 2005...).
Is Carrizo a good step? Yeah, no arguing there. And I really hope the implied improvements show in real world tasks too.
But again; at the $1K price point... it is another swing and a miss for AMD.
For me to recommend such a system even for a 13-year-old 'gamer', it would have to be at half price or less. Why? Because another will be needed in less than two years once again (and in my experience, nobody buys AMD twice). -
I haven't read all of what you wrote, but I'll ask - have you ever wondered why the GPU takes up more and more space on newer Intel CPUs? It's quite some time since "it is just there to get a picture on the display and be light on the battery". I think AMD has always pushed the envelope, just lacked the resources to carry it on. More and more applications would benefit from GPUs, just like more and more applications benefit from more cores. It wasn't that long ago that I was coming across opinions like - why all the cores, no application can take advantage anyway. You know which hardware is the most expensive? The unused kind. You have it there at your disposal, yet you can't take advantage of it.
-
Starlight5 Yes, I'm a cat. What else is there to say, really?
I personally see no point in coupling an anemic (CPU-wise) APU, whose only strength is better-than-competitors' GPU performance, with a mediocre dedicated GPU and a 15.6" chassis. I simply don't get why such a monstrosity even exists, yet laptops with top AMD APUs come only in this flavor, and any less-than-top AMD APU is so weak it's not even worth mentioning. Could someone please explain to me what is so fundamentally wrong with the idea of putting that friggin' FX-8800P inside an 11.6" or 12.5" ultraportable and calling it a day?
-
tilleroftheearth Wisdom listens quietly...
Intel plays it smart. It introduces things when it needs to (based on economics and actual need/usability by its customers). Sure, we all ***** that we could have used what we can get today last year - but a small fraction of a percentage of Intel's users doesn't hold much water. When the environment is right (especially for them), Intel delivers.
Yeah, the GPU is taking up too much space for my tastes - but Intel does not go willingly down a road without a large (expected) reward at the end. I'll bide my time and reap the rewards when the rewards are there to reap.
Right now, theoretically (Libre Office... sigh), AMD is winning. But while this molehill was claimed first by AMD, Intel will be the one to clean up big time... in due time.
A great product is one that performs in harmony with the (whole) environment into which it is introduced. AMD has not found that sync yet. Intel has, consistently - even when they were behind AMD a decade ago (the designs they were working on from even before then got them out of that slump, and they've never looked back). -
Now, an x86-compatible CPU already runs longer command words than just the basic atomic operations. An Intel processor could be called a CISC engine, for example (complex instruction set computing, where the system can collapse common assembly commands into more complex ones, cache the result, and allow this to be computed quicker the next time - most of the optimisation since the Pentium has been in that area). As opposed to a RISC engine, which would run longer command words with potentially completely different commands, and return that complex result every clock cycle - two main strategies for increasing computation power, with completely different limitations and requirements. The other improvement in CPUs has been that they have also started to run limited SIMD operations with various instruction-set standards, as illustrated below.
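(A tiny C example of those SIMD extensions - SSE here, but the idea is the same for AVX and friends; the build flag is my assumption:)

```c
/* One SSE instruction operates on four packed floats at once,
 * instead of one scalar add per instruction.
 * Build guess: gcc -msse simd.c */
#include <xmmintrin.h>
#include <stdio.h>

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
    __m128 va = _mm_loadu_ps(a);                 /* load 4 floats */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));      /* 4 adds, 1 instruction */
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```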
In other words, your average PC CPU, as well as your typical GPU, is not a programmable engine with a specialized instruction set allowing parallelism across working RAM with any sort of concurrency. They run a specific instruction-set standard, where more and more specific algorithm reductions are added in hardware.
So, I hear you ask, what is actually the difference between a GPU's arithmetic unit and a CPU's? Basically, GPU workloads are very often easily parallelizable, in that you can render each pixel independently of the next, etc. So the processor doesn't have to run as quickly, nor be as complex, and the RAM doesn't need to be very fast. Running the same thing on the CPU is just a terrible waste of time - cost-efficiency goes down the drain, multipurpose cores are wasted, and you would need a huge number of them to make it work - and when you get down to it, designing for parallel operations with concurrent access to the bus is not trivial, and also extremely expensive.
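(A quick sketch of what "easily parallelizable" means - every pixel's result depends only on its own input, so the loop maps onto thousands of GPU lanes, or, as faked here, onto CPU threads with OpenMP; the build flag is an assumption:)

```c
/* No iteration reads another's output, so they can all run at once.
 * Build guess: gcc -fopenmp pixels.c */
#include <stdio.h>

static void brighten(unsigned char *px, long n, int delta) {
    #pragma omp parallel for           /* every i is independent */
    for (long i = 0; i < n; i++) {
        int v = px[i] + delta;
        px[i] = (unsigned char)(v > 255 ? 255 : (v < 0 ? 0 : v));
    }
}

int main(void) {
    unsigned char img[8] = {0, 50, 100, 150, 200, 250, 255, 10};
    brighten(img, 8, 40);
    for (int i = 0; i < 8; i++) printf("%u ", img[i]);
    printf("\n");
    return 0;
}
```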
But. What if you could get a RISC engine to be relatively cheap, while simply structuring the compiler and the hardware to run general code? It might not always be extremely fast, but code goes through the compiler anyway, so why not go for that? Basically, that's ARM. They did that.
So now you have a general-purpose engine with programmable instruction sets, one that can essentially execute "GPU code" as well as "CPU code" on the same chip. It's just a matter of adding any number of arithmetic units and improving the bus speed, and you're going to have a CPU that can execute limited SIMD-type operations across enough memory to function as a GPU, while also being able to run general-purpose code (to a certain extent). This is basically the Tegra chip - except they added an IO bus, memory bus and so on on the same die.
Of course, before that Intel claimed there would be a copyright infringement on their intellectual property if such a general-purpose engine were made, as it would then have to be called a CPU by Intel's definition. And that's that. So Tegra devices, by some curious and unexplainable coincidence, are now limited to tablets and phones only - rather than, say, replacing Intel's entire motherboard-and-chip sandwich in the EeePCs with one single chip.
So what's AMD's APU? It's an offshoot where the GPU and CPU computing units are on the same die - but where they are separate types of computation cores with a separate pipeline. That is, rather than simply being a mass of general-purpose cores in different clusters, etc., which was the initial design - which, as explained, Intel put a stop to while essentially being about to sink AMD for good. But they stuck with that offshoot design, and have worked their way into showing fairly good results with OpenCL thanks to the improved pipeline - basically a fast bus - between the GPU and CPU parts. Another concern is the cost of the chip if the number of general-purpose cores were increased. And I don't know if it's entirely certain that such a system would actually be easily compatible with low-level and high-level standards all at once.
Anyway - the downside to this APU design is that these GPU cores aren't very space-efficient, and creating a separate pipe increases the size again. So even though it's a pretty nifty engineering feat, it's not nearly as energy-efficient as it might be. And while you get quite impressive performance, it's not realistic to expect the same speed or energy consumption from that overpopulated die as from a larger module. So it has certain limitations that AMD will never get past in the long run.
What they do offer, however, is the possibility of a general-purpose x86 CPU along with a decent GPU - and a more than decent one when it comes to decoding video, running OpenCL, etc. - with a very low power draw at full burn compared to the competition (read: Intel).
In an ideal world, we wouldn't be having this discussion now. ARM would have taken over the smaller laptop market long ago, and we would all be sporting laptops with a week of running time for music, typing, movies and internet browsing, while AMD would be well on their way to designing a CPU with programmable arithmetic units along a common bus with concurrent parallel access to working RAM.
Also, I suppose IBM would have already done that 10 years ago, and we would have had their PowerPC designs still running in 64-bit land. But that didn't happen either, of course, because those products, as well, were just too useful, I guess. Besides, why pay through the nose for hardware that would force Microsoft to scrap their entire toolchain, while giving a monstrous boost to any company offering a solution that wasn't bound into proprietary code-bases literally written by undergraduates pressed for time?
So that's basically why we have an APU design turning up. It's an attempt to optimize the GPU/CPU design down to the size where it no longer makes any sense to have it - while keeping the design there on the concept level to please patent lawyers at Intel, allowing Microsoft to keep designing bad software, and annoying the hell out of engineers everywhere, including, I'm sure, at Intel. So there you go: history in computing since forever, according to me.
Of course - when you can have a computer overheat from running YouTube, but do so while scoring well in Cinebench... that's obviously going to sell better than, say, a computer that doesn't overheat while running YouTube and is about 1/20th the size, but doesn't score incredibly high in Cinebench and the kind of artificial sequential tasks that no computer used by humans, or running programs created for anything other than computing "1" in binary over and over again, will ever actually execute. I mean, everyone can see that... right?
But hey, I don't work in marketing over at Intel, or write my blog from whitepapers sent over from their PR office. So of course I'm not entirely sure about that last one. -
Going down to 12.5" or 11.6", you'd need better cooling, which would likely cost more at 35W TDP, but the 12.5" size at least ought to be doable. It'd still be cheaper than Intel Iris. If you're willing to add a little thickness, 11.6" should be doable too - Alienware had an 11.6" model with a dedicated GPU a couple of years ago that seems to have sold well, and I'm sure you could make it slightly thinner than that with only an APU. Or put it in the 15W configuration at 11.6" and have it ultrabook-thin.
Not that there's anything wrong with a 15.6" or 17" system with this APU and a dGPU, but I agree it seems like it's missing the sweet spot. -
With respect, these are early Carrizo models, so it's possible we might see more form factors over the next few months.
On the other hand - and do correct me if I'm wrong - Carrizo laptops seem to have become available for purchase faster than most recent mobile APU releases, have they not? -
I just do not get why AMD is calling Carrizo the 6th generation???
-Llano,
-Trinity/Richland,
-Kaveri,
-Carrizo
So it is just the 4th generation, or at best the 5th if we consider Richland a new generation. -
-
tilleroftheearth Wisdom listens quietly...
In a nutshell: they don't play it smart for you or me, but for themselves and their lifeblood (the shareholders).
And when all is said and done, they still deliver the best we can buy as consumers. -
tilleroftheearth Wisdom listens quietly...
Here is more proof that AMD is not firing on all cylinders:
See:
http://www.hardocp.com/article/2015/06/24/amd_radeon_r9_fury_x_video_card_review/11#.VYxu-XnbJ9A
While this is a GPU review, it does showcase how AMD cannot help but be directly accountable here... No third-party manufacturers. No un-optimized BIOS settings. No 'but look at the low price' to partly defend their execution.
This is just AMD foolishly playing marketing games which may show great and even inspirational ideas on paper, but as the quote above says, no real substance for most users.
Come on AMD, wake up!
I am positive you know how to do things right. Just do it already!
To AMD:
Use brilliant ideas only if/when they actually help you get a leg up on the competition (today, not 3 or 4 iterations from now). Concentrate your resources and talent on where the most bang for the buck is, for both you and your customers. These are not mutually exclusive goals for you today. They are one and the same.
Use (much) more of your resources to educate and collaborate with notebook makers on how to fully use/configure your current and future products (APUs et al.) for maximum benefit to the consumer in an optimized package (yes, that means a focused/specialized chassis - not one-size-fits-all).
Use common sense about how much performance increase you introduce in each iteration. A 50% to 100% increase looks great for a very narrow aspect of performance, but it doesn't translate well into real-world workflows when everything else has been increasing 5% to 30% consistently - with compounding results - across your competitors' multiple upgrades over the same half-decade in which you shipped a single (major) iteration.
Remember that performance, efficiency and price are all variables you can use to your advantage:
Offering a product with marginal additional R&D? Drop the price dramatically on this soon-to-be-phased-out older tech.
Offer a line with better efficiency (above all else). With performance (and even price) staying equal to last gen's options, this SKU would still fly if the efficiency were markedly better or the heat generated lower than what manufacturers and/or consumers had from you before.
Offer your performance options, and you don't need to reach for gold on your first or even your second or third try. But offer real overall performance (single- and multi-core) that users can dig their teeth into, and price these SKUs according to the performance increase offered (not just over your own parts, but over all players in the market).
Doing even one of the above right, along with working alongside chassis designers and planners, will light a fire in your organization. And in your industry too.
You have the tech side down, no doubt. Learn to execute like the fine-tuned machine you should be. -
Oh really now? Please do tell me of a single technology that was released that way, and that was about it. How much was Broadwell delayed, by the way? In the end, for a mere 5% increase that most likely only data centers and servers would benefit from anyway. Also, most OEMs would likely skip it altogether.
-
tilleroftheearth Wisdom listens quietly...
It's just as easy to see the truth of what I'm saying. Intel's ~5% increases over the last few years add up to whole CPU jumps from mere years back. That is called working on your strengths and minimizing your weaknesses.
AMD is simply doing it wrong (for too long now). -
..I suppose it would have been an idea to mention that HBAO+ has a slight performance advantage on Nvidia cards over SSAO, especially in scenes with many distinct objects (the prime candidate being The Witcher 3), and that screen-space filters suffer exponentially in performance as you increase the resolution, and so on. While perhaps including a game with TressFX to figure out how that affects power draw, and how that might make the increased bandwidth useful. As well as perhaps pointing out that quite a few of the recent performance increases at Nvidia have come from driver optimisations that a very skeptical person might say were long overdue - perhaps even pushed upstream to coincide with certain relaunches of older, overhauled cards.
But AMD is not at the top of the game here, of course. Although not necessarily because of bad hardware engineering.. as usual. -
Yeah, how come yearly 10 to 20% - maybe more with future updates - is doing it wrong? They do have a lot to catch up on, but in the meantime Intel does nothing. Actually, OK, it does something - it tries to improve iGPU performance. Then again, AMD does both better than Intel (I'm not comparing Intel vs AMD performance here, I'm comparing previous vs current generation within each brand). When was the last time we saw a 20% increase from Intel? Sandy Bridge, some 4 years ago. Over the last 4 years the total increase is around that much - 20%. The "whole CPU jump from mere years back" is more likely a decade (exaggeration).
-
Starlight5 Yes, I'm a cat. What else is there to say, really?
triturbo, AMD may be improving faster, but their mobile CPU performance is nowhere near Intel quads. Oh, and AMD executives, who are without doubt monitoring this forum, promised to follow tilleroftheearth's invaluable advice and adjust their business strategy accordingly. Yes, every night I dress in tights, cape and hockey armor, and go out fighting stupidity in the streets. They call me Sarcasmo.
-
Snarkasmo
But I'm not sure AMD's business strategy is all that unfortunate, compared to their PR strategy. In the sense that you can call avoiding the worst of the usual underhanded embedded blogging and advertisement-driven tech development unfortunate, at least. -
Regarding Fury X results... are people keeping in mind that the GPU was tested on what could very well be unoptimized drivers?
This has been the case before.
Initial products scored well, but it wasn't until subsequent driver improvements that performance started to reflect what the GPUs could actually do.
And let's face it, AMD is releasing drivers less frequently than Nvidia because I don't think they can do it any faster, due to lack of finances (so it might be a while before new drivers are released).
Plus, the Fury is actually performing rather well at 4K, considering it was able to close the gap in various Nvidia-optimized games (something the 390X cannot do), and it's an initial test bed for HBM, which radically reduced the physical size of the GPU.
Plus, AMD seems to be using GCN 1.2, which is a relatively old architecture. The performance per watt was also improved quite a bit compared to the 290X/390X, and it has enhanced compute capabilities (which have yet to be tested).
So, is it possible that the people who are negative might be focusing on too few aspects that may be less relevant in the short term, while overlooking other things? -
hehe, yes, at least a small possibility of that
Still - it doesn't change the fact that at every release of a new Nvidia card, AMD's cutting-edge competitor is going to look bad (lack of proprietary physics, post-processing filters, driver optimisations), and miss out on any momentum from big new game releases. Even though the actual difference in performance is extremely small.
On the other hand - it really is mystifying that AMD doesn't just own the middle of the market and partner with producers for silent water-cooling, lower-power-draw variants, etc., while improving HDMI support, including some DisplayPorts, and finally overhauling their driver-package presentation. That's really strange when they already spend so many resources on getting these top-level cards out. They really shouldn't be doing that, or at the very least shouldn't focus on that segment. -
tilleroftheearth Wisdom listens quietly...
http://forum.notebookreview.com/thr...ops-spotted-in-eu.777669/page-2#post-10031387
Don't know if you have read all the posts in this thread?
The above seems pertinent to your statement.
Intel does nothing? Lol... it may look like that to most. But what they are always doing is building on their strengths - and can release a fully realized product from that position as they need to, depending on market (and I'm sure, manufacturing) demands.
The AMD E-350-based platform has a PM 'score' of less than 800 points. 5% higher performance (taking just the CPU into consideration) is already half of a 2011 AMD system (yeah, low end, no doubt).
In the link above you'll see increases much greater than that 5%... 65% is what is stated in the article - when taking the whole platform into account, and at the same power/TDP rating. And this is not from AMD's low-performance jumping-off point - it is from Intel's own Xeon E3s from 2012 (and in spite of all the garbage talk about how Intel has 'sat' on their tech for the last two+ years).
To dismiss what 'mere' year-to-year 5% increases bring to the table is to miss the impact cumulative increases make.
The 'time value of money' idea is based on the same (mathematical) principles - five years of 5% gains compound to 1.05^5 ≈ 1.28, about 28% rather than 25%, and the gap keeps widening. A single immense increase can bring things to par (Intel, circa 2006, vs. AMD), but a slow and steady cumulative increase is much more powerful and insidious over time. Especially for the competitors.
AMD doesn't need to figure out what to do. It needs to copy (Intel).
With regard to the drivers being un-optimized... uh...
If I sold you a service and somehow convinced you to buy it, but then mentioned, 'oh, by the way', that I'll only be able to deliver 100% of what I promised after you've paid me for a few months/years so that I can learn more about it... I'd be shot point blank. Or at the very least, I'd starve until what I promised was what I actually delivered to each client.
My post covered most of AMD's shortcomings, but lack of resources isn't one of them. Lack of focus is.
And my suggestions were not meant to show off my intelligence - rather, to show how wrong the path AMD is following is... something even a middle-school student can see from a mile away and score an A+ on (a client's grandson...).
A product on offer needs to be complete, not offered with excuses.
This is not the first time AMD has been in this boat - but the reason the water is rising above them is all their own doing (100%). -
But in spite of having that very useful product from the outset, AMD ended up doing exactly what Intel did, and tried to compete on the same peak-performance benchmarks - pushing out products with higher clock frequencies, and with standard setups targeting higher clocks and RAM timings, while advertising that the somewhat-nearby top performance had marginally better performance/watt. Instead of specializing in what the platform actually could do: extremely low watt draw while maintaining acceptable response in normal use.
The first time I tested an underclock on an APU, I didn't really believe what happened. I could put 4 cores at 300MHz, have the temperature top out around 40 degrees, and still soundly beat a 1.7GHz Intel dual core (which would then run at peak) in a video encode. OpenCL performance... still amazing. The RAM could be underclocked without the bus noticing anything wrong, without any instability, and - more importantly - without destroying the GPU performance. But there's no official tool for doing that and letting a balancing scheme respond to sustained workload, etc. (a rough sketch of how to do it by hand follows below). So a standard setup would idle fairly high, marginally below an Intel system, and also not have peak performance to show off in the benchmarks. It's just a waste of actually good hardware. Meanwhile, vendors keep insisting - for various reasons - that these low-powered laptop typewriters with well-above-average 3D performance aren't possible to sell. And we get very, very few laptops with thin and silent cooling, a larger battery, etc. The ones that did exist, some of the Brazos tops, were sold at a lower price point than the Intel offerings in the same chassis - in spite of all of them being designed for the same use, except the Intel ones had higher watt drain and horrifying 3D performance. The Intel graphics drivers are still practically useless for 3D games and 3D contexts that use any kind of shader package, even ones based on known frameworks. And that's... shamefully bad. Intel posts tiny incremental performance increases, for example by emphasizing the desktop variants at clocks that will never be used, while somehow not managing to ship power-saving modes for the Sandy and Ivy Bridge iGPs - hopeless - when the GPU load in a desktop context is somewhere around 3%. Or by pointing at the Windows performance ranking... also useless... But they still sell, because they advertise higher clock frequencies on the CPU (which... you need to run Office..?).
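As referenced above - there's no official tool, but on Linux the kernel's stock cpufreq sysfs interface can cap the clocks by hand. A rough sketch, using the standard sysfs paths (needs root; the four-core loop matches the experiment above, and whether 300MHz is actually reachable depends on the P-states the driver exposes):

```c
/* Caps cpu0..cpu3 at 300 MHz via the Linux cpufreq sysfs files.
 * Values are in kHz; run as root. Sketch only - error handling is minimal. */
#include <stdio.h>

int main(void) {
    char path[128];
    for (int cpu = 0; cpu < 4; cpu++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu);
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); continue; }
        fprintf(f, "300000\n");        /* 300000 kHz = 300 MHz */
        fclose(f);
    }
    return 0;
}
```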
So what AMD has always had the opportunity to do was polish their drivers and control software, design a competent package for a typewriter-plus laptop, and ace a segment of the market that was unpopulated until Haswell. What they've really done instead is fight a losing battle on peak performance in the benchmarks Intel focuses on - while vendors put the AMD offerings on the market in a way that pushes prices down or, more to the point, always makes sure the profit margins are higher on the Intel sales. Extremely predictable what the outcome of that is going to be.
Point is - AMD has screwed up by trying to "keep up" with Intel. We know beforehand that synthetic performance on AMD cores is never going to be better "MHz for MHz" - so why try? Why go along with the common narrative about what matters for laptop performance? Even as Intel keeps pumping out i3Us without Turbo Boost, without the extended instruction sets, with tiny, tiny caches, etc. That's what doesn't make sense.
What's worse - as Haswell rolled out, and we could actually have passively cooled Intel packages (which had not happened before, in spite of the actual physical capability) - laptop makers still persist in not creating power profiles that default to the lowest possible power draw. And they do that because the laptops then get slammed in reviews for not showing the nominal increases over the previous processors, etc.
So laptops are still not getting more power-efficient in that lower-watt segment. And Intel then actually manages to ace that segment by launching something AMD had the capability of monopolizing in 2011.
Not exactly brilliant planning or execution by anyone from that point of view, if you wanted a low-watt-drain typewriter-plus laptop, in other words. -
tilleroftheearth Wisdom listens quietly...
I am enjoying this conversation.
Learning a lot too.
I can see the points you're making, and I'm seeing that the points I've made are conveniently or inadvertently ignored - even if we seem to agree on some/most things.
Comparing Brazos or anything else AMD to Haswell is a stretch for anyone, though...
That AMD E-350 platform with 8GB RAM, an SSD and Win8 x64 Pro was obsolete when I bought it for the price of a good set of RAM - years ago. The low price didn't excuse its poor performance either (even at that time) - today, I mainly use it as a Dropbox and OneDrive backup system. And guess what? The OneDrive folder has been hosed for a few weeks now... (and yes, I'm blaming the AMD architecture on this one... this is one of those glitches that happen on AMD systems that I simply do not see on Intel platforms).
To use it for normal tasks (browsing, replying to these forums, etc.) is effectively a nightmare today. Really. (Okay, at least for me vs. the other systems I have available...). Even acquaintances who have seen the system in use (connected to a 50" TV) comment on 'what's wrong with the computer?'... that is how noticeable the performance delta is, even with 'normal/light' tasks.
Today's AMD examples are no better, ime. Not even comparing them against an Intel platform - just using them for 5 minutes or longer in a 'vacuum' (except from memory/experience), they fall on their face, imo.
Video Encoding? OpenCL performance? Yawn. Not what I use a mobile system for.
Browse the O/S. Utilities, folders, internet, forums. I need snap (no dumb 'smart' phone for me... waiting for a 16GB RAM QC i5 with 1TB capacity Windows Phone...). Did I say snap? I mean SNAP!
AMD introduced these products ahead of anyone else. Then sat on them. Without improving performance, efficiency or price.
How do I spell dumb? A-M-D.
Any and all AMD platforms I've used underwhelm with regard to performance, run too hot and noisy, and do not cost comparatively less than Intel systems. The opposite, actually: they are comparatively more expensive over the lifecycle of a system when all aspects are considered. Especially my time.
Do I waste any time on any phone to get any real work done? No, unless I'm using it to actually speak to someone. Same with AMD. Not worth turning on. Vs. other available options.
Bottom line:
- I am not saying AMD should chase Intel - they should mimic them.
- Further excuses for AMD's decisions (past, present and future) don't hold water any longer - if they planned to get by on excuses, they could have planned to address the reasons for those excuses instead.
- Huge and monumental performance upgrades once or twice a decade are 1990s thinking - 2x per year is the new 'new'.
Unless AMD can pull an Intel (circa 2006) and knock one out of this galaxy with a single blow, they'd better be content to consistently and persistently aim for the moon instead.
And they should also realize that repeating those relatively small, tiny steps is what turns a boy into a man. And a man into a force to be reckoned with.
But only if they mimic what Intel is doing - and that is building on their strengths while at the same time strengthening (not ignoring) their weaknesses.
-
Nevertheless - the hardware is capable. And the four-core, dual-channel mainboards were even better. Again, that's why I sat and chuckled when plucking at the power states.
But for those tasks, processor performance isn't the bottleneck. A typical process diagram shows something like 10% of waits being for processor activity during browsing, and so on. Bus speed, RAM writes, other IO... much more significant. The reason phones are slow is often that every write to RAM is slow and needs to wait for cache hits on a slow "SSD"... Engineering - always a fun hobby, even if you're paid to do a good job, apparently.
I mean, if you know you can get "decent" performance out of a phone by dropping the response times and skipping sizeable storage, while saving a few pennies per phone - if you just design the OS to keep the running software out of fast RAM, limit the sizes of the "runnables", etc. - then that's what's going to happen. Same thing if you can take horrendously bloated MS packages of DLLs and run them on an ARM platform without conversion and save development time for apps - then that's going to happen, even if it requires a much higher processor load than is strictly needed.
Even so - it's not technically impossible to get more than decent desktop performance for the kind of tasks you mention out of a 300-400MHz processor. Never was, even with Windows.
Other than that - OpenCL performance may very well be... all right, it will be... the next area where any cross-platform computation performance improvement can be made. So I wouldn't be incredibly surprised to see a lot of different program types, not just graphics, start to make use of it. Simply because it's not tenable in the long run to design software that drops computation tasks onto an external server... the "cloud"... or something like that. That doesn't work unless we get significantly higher transfer speeds, and so on. Right now - what about a "power workstation" running video and photo editing that has a power-draw budget of 20W? In 0.8kg? Rather than running on a 100W PSU and being married to the wall socket, etc. That's worth something, surely.
-
tilleroftheearth Wisdom listens quietly...
Okay, I'll ignore your good points too then.
The hardware is capable? No - not by a long shot. Of course 4 cores and dual channel are better... but 8 threads is better still, and with regard to dual-channel RAM, AMD still runs into a brick wall by offering single-channel products.
We will disagree here: Intel took their sweet time to do fanless Haswell properly - but they did it, eventually. AMD showed the rest of the world what was possible (if I blindly believe your viewpoint), and what do they have to show for it half a decade later? Nothing.
I wasn't joking about my specs for a Windows phone I would consider using. The reason phones are slow and useless to me is that they are supremely underpowered. I don't want to see a single 4K screen at 8" size. I don't want to see a mobile website (ever) and I don't want to use a keyboard with one finger, let alone one hand. Even with maximum RAM and the latest SoCs, they suck. Not to mention they run yet another proprietary O/S with all the limitations and restrictions (for no benefit at all). The handheld computing experience was something I looked forward to in 1990... the implementation so far has left me less than impressed every single time I try to make it work.
How bad is this performance/productivity delta? I can drive to my offices, my home or any of my clients' offices to use a real computer and/or network/internet connection and finish what I need before I can type the first search term on any flagship dumbphone today. So sad, so bad but I've had two and a half decades to get over this 'wish' and realize that I'm happy I didn't 'make it work' too. Life is too short to spend on a too small, too unstable and too expensive device that would be just as bad if it had been around two decades ago.
Performance of the cpu? Of the storage subsystem? Of the RAM? Don't care.
Productivity is the name of the tech train I'm on (and almost always have been). Using tech (devices) for the sake of simply using them seems kind of useless to me (and hence; why I don't game).
And this folds beautifully into your statement that 300-400MHz CPU speeds are enough. Lol...
Just what are you smoking? lol...
I just opened up Tomshardware.com, and the i5-3427U CPU I am using hit over 51% CPU utilization across 4 threads, with 16GB RAM and an SSD too... That is at an indicated clock speed of 2.58GHz, so let's say you're off by a factor of 300 to 400%, at least.
Your points about efficiency per watt are Intel whispering in your ear... AMD won't be there even at the end of this decade.
I'm not sure where you're going with PR coherency? Apple is a prime example of what to avoid, not what to mimic, from an educated consumer's standpoint. Business-wise? Yeah, I'll give them that. They are envied, but for how long remains to be seen (not one of my clients is currently running an Apple setup - at least not for business purposes, and this is a far cry from when it was the other way around in the 'creativity' segment of computing).
We all want what you want. I want efficiency, cost savings and more - but not at the expense of never increasing performance (even at just 5% per iteration) relative to what current workloads (light or heavy) demand.
There is such a thing as too small (~12" screen/keyboard size or smaller), too slow (below a current i7 QC with maxed-out RAM) and too limited (OS X, Chrome OS, Android), and there is no such thing as 'good enough'. Because 'good enough' is a moving target... yeah, even the millisecond after I wrote this post.
What you wish for is properly expected as an Intel product at this point in time (or in the near future). If you want it from AMD? You'll be waiting a very long time, and I wouldn't be too surprised if they never deliver. -
One specific example: a MacBook Air is sold in different variants, with more or less "power", and in different price ranges. But of course all of them run a power scheme that generally keeps the processor at 800-1700MHz, regardless of the option to go higher. That's how they run without fans most of the time. So the products are positioned very wisely for sales - without having any real difference in actual performance.
But it works for sales. And the product will reach customer expectation. Win-win. So to speak..
, even in the same iteration..
While the current idea of solving it by just raising clock speeds doesn't work, and arguably never did - it's been a long while since the maximum processor speed was reached. So it's just not possible to design software that "expects" unlimited increases in "processing power" to shave off the hiccups, to get them small enough not to be seen, or short enough to be "acceptably short".
In my opinion, obviously, that was never the case either, and never will be. But opinions differ. -
Starlight5 Yes, I'm a cat. What else is there to say, really?
tilleroftheearth, though I agree with your hardware requirements for a smartphone in general, I have to say the situation ain't as bad for productivity as it looks. My smartphone has a great physical keyboard and, though it was quite hard to find, software for my tasks that fully utilizes its strengths and provides a similar-to-desktop experience. Moreover, I often use it with a docking station, which is a 14" laptop shell with a great non-chiclet keyboard, SD slot and a couple of USB ports. It is very convenient and extremely safe, because all my valuable data remains on the phone, and it has decent battery life while being dirt cheap at less than $100. While Android as an OS is certainly worse than Windows, it's bearable after you tinker with it, and the other pros outweigh the cons of this device package. Oh, and it's snappier than a notebook with a traditional HDD.
I believe in the future most devices will be like it: a compact main unit with enough oomph for almost any task, and different optional docking stations with peripherals for more specific usage scenarios. But I wanted the future today so badly that I purchased what was available at the moment - an Android-running ARM device with mediocre specs, that is. Still does the job pretty well. -
davidricardo86 Notebook Deity
HP has started selling Carrizo and Carrizo-L laptops on their US website.
-
I want to see AMD improve on the good aspects of their APUs (good GPU performance built in). I want power efficiency, but I want hardware that is not bogged down when asked to do real workloads in real life. AMD needs processors that improve on speed and power, they need to run less hot, and they need to address the limitations they currently have - while taking away none of their good aspects. Good battery efficiency is necessary too. I need an AMD that does not throttle doing routine everyday tasks, one that performs like a good Haswell Core i5 ULV with the AMD GPU prowess, and for AMD to stay competitive with Intel offerings. These ideals will get AMD into superior laptop offerings and keep Intel honest through real competition. I want to see AMD with the spirit and the fire they had when the Athlon was released. I want them awakened, and I want AMD to strike fear into the heart of Intel. Little useful improvements on the processing side, keeping the GPU excellence, keeping power draw low, and gains made for real-world usage will get AMD there. This we can all agree on. To put it simply.