With a silicon-germanium channel and EUV lithography, IBM crosses the 10nm barrier.
Source
-
tilleroftheearth Wisdom listens quietly...
This seems to be the beginning of a perfect storm on the CPU/iGPU front...
Discrete GPUs will become obsolete sooner rather than later, and at a power savings that will shake the computing world.
That is when 'massively parallel programming' practices will become important for the average joe.
And it could make the anticipated Skylake improvements seem relatively insignificant indeed.
Faster transistor switching/higher CPU speeds, up to (effectively) twice the surface area or a quarter of the power requirements, plus new capabilities thrown in over the next 2-4 years...
IBM - thank you for trying to shame Intel into being #2 again.
Intel, time to show the world who's in charge (again).
Me: sips a favorite drink, sinks into a comfy sofa and doesn't care who will take the 'title' - all consumers win.
-
tilleroftheearth Wisdom listens quietly...
Agree with all of the above.
But I still won't buy one.
My point was that when the processing capabilities of current GPUs are on the CPU die, programming for them will become a reality for almost all programs going forward.
A 100W TDP for 'anything' will not be sustainable in the not-too-distant future - even for hardcore gamers.
A 7nm or smaller 'APU' from Intel will put the gaming crowd in their happy place - even if more powerful solutions are available for the 8K to 32K gaming monitors/resolutions that will be available for the filthy rich by then.
If Broadwell effectively wiped out anything below a discrete midrange card (depending on your definition of 'midrange'...), two or more generations from now will clean up the upper-range GPU market easily. Why? Because Intel doesn't need to put (too much) new IP into CPUs... so the default beneficiary of the smaller node's improvements and advancements will be the iGPU.
For me, a discrete GPU has been nothing but a heat source with very little benefit for many years now (I rely on Intel iGPUs). Even getting one six times more powerful than what we have today for less than a third of the power requirements (your 100W estimate) is still power, heat and money out the window in my view.
And that is only if we see those manufacturing nodes for GPUs at the same time our CPUs get there... -
For someone who is always touting newer, faster, stronger, I'm surprised at your viewpoint on this matter. If you don't have a use for a dedicated GPU then fine, but there is a large percentage of users who do. -
So no, a 7nm APU from Intel will not put the gaming crowd in their happy place unless programmers try their best to optimize as much as possible, instead of only making things "good enough" for the current gen of cards. Heck, look at The Witcher 3 and its overuse of tessellation, neutering performance on every card prior to nVidia's Maxwell generation. They had to go back and reduce the amount being used for other cards. That's because they optimized for "now" and not for "prior to now", which is what you can expect to happen by the time 7nm takes off. Even if it's ready to be used now, it'll take at least 2 years to get production prep and mass production going, let alone designing the architecture for it, etc. I doubt that by the time a 7nm APU with current-gen GPU performance comes out, it'd be worth anything in the games that are current at that time.
It's a never-ending cycle of unoptimization, and I honestly get it. I don't expect anyone here to be coding for a single-core 2GHz Pentium 4, 256MB of RAM, GeForce 7800 "gaming" PC. It's unrealistic for anyone to be using that as a machine for any reason whatsoever and expect performance out of it. But there's no reason one should have to upgrade from generation to generation. A GTX 580 is, for all intents and purposes, quite capable of smacking the living daylights out of the current-gen consoles in terms of GPU power, and a 1st-gen i5 (like an i5-750) should also be able to smack those consoles senseless in CPU power. But try to run games that get 1080/30 or 1080/60 on an i5-750 + GTX 580 system today and you're going to have a bad time getting the same visual quality and framerates. That's because nobody codes for them anymore; they expect that gamers will be using something stronger in their PCs. I'd say two generations behind GPU-wise should be the target for medium-high settings, but usually only one generation behind is counted, and that's for low-medium. And on the other hand, half the new games are so single-thread heavy on CPUs it's a joke. "Minimum reqs: quad-core!" - and the game uses core #1 at 99% and cores #2, #3 & #4 at <20%. This is the life of current games. -
tilleroftheearth Wisdom listens quietly...
To both,
You both have valid points, but I need to clarify what I meant in the post you're commenting on.
First and foremost: when I said the 'current' processing capabilities of discrete GPUs... I meant current at that (future) time. And yes, it will happen. Maybe not in 2 or 4 years, but sooner rather than later? Yes. The efficiencies, economic feasibility and the competition will make it so.
Compare AMD vs. NVidia GPUs and you'll quickly see the desirable cards are not the most highly spec'd (i.e. 'overbuilt').
An ~11% higher transistor count (R9 Fury X vs. GTX 980 Ti), yet higher power and noise levels; performance for the $$$ weighs heavily in the Ti's corner, especially when all aspects are considered.
Yeah: almost a billion fewer transistors and superior (overall) performance. That is where we are today.
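For reference, a back-of-the-envelope check of that gap (a hypothetical Python sketch; the ~8.9B/~8.0B transistor counts are the commonly cited figures for Fiji and GM200, not numbers taken from this thread):

```python
# Rough check of the transistor-count gap between the two flagships.
fury_x = 8.9e9        # R9 Fury X (Fiji), commonly cited figure
gtx_980_ti = 8.0e9    # GTX 980 Ti (GM200), commonly cited figure

diff = fury_x - gtx_980_ti              # ~0.9 billion more transistors on the Fury X
pct = diff / gtx_980_ti * 100           # ~11% higher count
print(f"{diff / 1e9:.1f}B more transistors, about {pct:.0f}% higher")
```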
It reminds me (in a limited way, of course) of Intel's Quick Sync tech, where an iGPU way back on the SNB platform would blow away any and all GPUs at that specific task. This is where Intel is heading with its iGPUs: not just comparable performance, but massively outperforming what is currently thought possible.
What I'm suggesting is that the currently accepted pace of GPU progress will be turned on its head (by Intel). They don't care what/how it is being done now - they see the true end goal and are solving for that (and not just at today's levels of performance...). AMD/NVidia may provide a better (even a much better) discrete GPU in that same future, but my guess is that gaming will not be a large part of the project scope I am envisioning.
Will it take another few iterations before we get there? Yeah. But on-package Level 4 cache has proven to greatly enhance not only the iGPU side of the performance front, but the CPU side as well. And that is with 'only' a 128MB cache today (Broadwell), yet power savings of an order of magnitude or more depending on what the GPU is being asked to do.
Smarter ways of solving the problems GPUs currently excel at - not simply throwing more hardware at the problem - are what will give us real jumps in performance, power savings and pricing.
Intel has been on that road for a while now. AMD and especially NVidia have no reason to go there.
But 'there' is where the future points - if we free our minds and don't accept the status quo we're living in now (and are led to believe, via marketing, is the only way forward).
(Note: some clients are stunned that I don't actually own many GPUs across my various systems. I simply find them irrelevant to getting my work done.
Especially from SNB onward, having a discrete GPU has been a negative for my computing experience (hotter, louder and more power for very little increase in productivity).
When iGPUs are at the levels that gamers want/need (and my point is that they will be), they too will feel the same about discrete, expensive, hot, noisy and superfluous GPUs that give few real-world improvements and are just another component to account for.)
And gaming? I do not think it pushes the boundaries of computing performance. It is still very limited by the programmers' ability to give just enough of what customers want (to make a sale) while still keeping enough in reserve (to somewhat ensure their continued future employment).
-
Not even just for gamers. Bitcoin miners, rendering applications, games, raw compute workloads - all benefit from GPU advances. It is silly to discount the usefulness of a dedicated GPU, and even more hilarious to suggest that an iGPU will be the equivalent of a dGPU of the same generation.
-
tilleroftheearth Wisdom listens quietly...
I'd put a bet on that - today's Broadwell-based Iris Pro 6200 iGPUs are above a 1080p-capable R7 250X/GTX 560, even with those cards overclocked... Sure, a $100 card from a year and a half ago... but we still haven't seen Skylake's Iris Pro graphics yet, either...
In two, three or four more iterations... I wouldn't be surprised to see Intel iGPUs competitive with dGPUs in performance and totally annihilating them on the power, noise and cost fronts.
I'm sure NVidia and AMD see that coming too - and 'hilarious' is not the word they're using. That, I'm positive of.
Why do I believe this so strongly? Because Intel focuses on real-world results - not simply trying to impress us with 'big' tech (or empty/late promises). They let the results speak for themselves, and they do so consistently year after year.
Here is a good example of this:
See:
http://www.tomshardware.com/reviews/intel-core-i7-5775c-i5-5675c-broadwell,4169-2.html
To think an inherently compute-oriented component like a GPU will remain discrete and separate from the CPU is to see the problem exactly the way AMD and NVidia would prefer.
Just as the 80287 math coprocessor was eventually integrated into the CPU, Intel's CPUs will swallow the discrete GPU too - as far-fetched as that sounds to you right now.
Power and cost savings, in addition to further optimization and 'balancing' of GPU resources (not to mention the software/API side of things, like Metal, etc.), will make this happen sooner than most expect.
I'm sure you know this story, right? What would you choose: a million dollars at the end of the month, or a penny doubled every day for that month?
Best to bet on the (too) small jumps in the beginning; with persistence, the last few days (iterations) pay off really, really big (especially if that month has 31 days...).
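A minimal sketch of that arithmetic (plain Python, purely illustrative; the sampled days are arbitrary):

```python
# Penny-doubling vs. a flat $1,000,000: on day n the penny is worth 0.01 * 2**(n - 1).
# The early "iterations" look laughably small; the last few days dwarf the flat payout.
flat_payout = 1_000_000.0

for day in (1, 10, 20, 28, 31):
    penny_value = 0.01 * 2 ** (day - 1)
    print(f"day {day:2d}: ${penny_value:,.2f}")

print(f"flat payout: ${flat_payout:,.2f}")   # day 31 of the penny: ~$10.7 million
```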
Intel understands this and is executing it perfectly.
And they're laughing hilariously all the way to the bank.
-
So it can compete with a four-year-old, low-end GPU... impressive.
-
I don't agree with you very often, tilleroftheearth,
but I am with you on this one.
-
Intel turned the crank on that monkey box five years ago when they said the HD 4000 would end the need for low- to mid-range GPUs. It has improved greatly, but it's still only just "good enough". Iris Pro is also only found in specialized, very expensive CPUs, especially mobile ones, where for the same price you can get a non-Iris Pro Intel CPU plus a dedicated GPU twice as fast.
If you don't play the latest 3D games or need 3D CAD apps then sure, an integrated GPU will be just fine. Otherwise there is no way an IGP will replace a dedicated GPU. I'll take the dedicated bandwidth, die space, cooling, and VRAM any day, thanks.
And with 4K on the horizon, a pseudo-128-bit integrated GPU will choke. Next-gen dedicated GPUs will offer reasonably playable framerates at medium to high settings at 4K. Iris Pro will chug like a moped engine in a freight train. -
tilleroftheearth Wisdom listens quietly...
Okay, we see where your bets are placed. Here's your cool $1M each.
See:
http://www.anandtech.com/show/9436/quick-note-intel-knights-landing-xeon-phi-omnipath-100-isc-2015
Now, try to stay happy while I collect my $10M in due time (with the $/performance increasing each time another iteration is introduced).
-
tilleroftheearth Wisdom listens quietly...
What I hear from your side is... fear? Lol...
No problem; we might all turn out to be right. But ignoring the facts I present is not how that reality will transpire, either. -
Fear? What would I be afraid of? Sure, in an imaginary world of unicorns and griffins we might get some perpetual-motion-machine tech. I don't see any facts presented that prove anything in any way, shape or form. I'm looking at plain and simple reality. What you're saying is that you can produce more thrust with one jet engine than you can with two identical ones. With each iteration, sure, the IGPs improve and possibly outperform the low-end or even lower-mid-end tech of the day, but they're quickly trumped by the next GPU iteration and never approach mid- to upper-end tech. And they can't.
The only way it would become a reality is if dedicated GPUs just stop advancing/trying, which is likely given Nvidia's increasing monopoly. Or if games start being developed only for the lowest common denominator of the integrated GPU and we lose fidelity and FPS, which is also very likely considering how game consoles are these days. As long as games continue to push the envelope, though, there is no way an IGP will replace a dedicated GPU.
I would love nothing more than a single low-wattage CPU able to perform as well as a dedicated CPU and GPU do today. It would mean much lighter and thinner designs and more room in the laptop for other fun stuff instead of package space consumed by a couple of pounds of copper and several hundred cubic inches of volume. But it's not a reality. -
tilleroftheearth Wisdom listens quietly...
"Fear" was a question, not a statement...
I am not saying that one jet engine can produce more thrust than two. I am not saying GPUs need to stop advancing/trying for iGPUs to catch up and/or surpass them. Nor am I saying that games need to stand still in how hard they push GPUs (though I certainly don't believe games push the envelope either...) in order to give iGPUs a chance.
The facts are:
1) Cost. As process nodes get extremely tiny and the GPU is able to actually fit on/share the CPU die - along with 16GB of on-package RAM (today) - and do that with less power than previous platforms' CPUs (taking the power consumption of the discrete GPU entirely out of the equation...), NOBODY will continue making discrete GPUs (at least not for long, not for the 'many', and certainly not for 'cheaper').
2) We have seen this with Intel's 'pathetic' iGPUs from around 15 years ago - 'pathetic' from a gamer's point of view, good enough for someone who just needed a display driven to show the computer's output. Yes, 15+ years is a long time, and we're still only roughly at the performance level of a four-year-old NVidia discrete GPU (thanks for pointing that out - circa 2011 for the GTX 560; I misread/misquoted the article on Tom's Hardware...). However, now that those painful baby steps are done, we are mere iterations away from real progress.
The reason this will happen is that coprocessors cannot function on their own. They need a CPU (and RAM...) to do any work, and that is why the CPU will eventually swallow the GPU too.
As stated before, thinking that the way current GPUs handle video workloads is the only way forward is very shortsighted. Intel is showing the world what an optimized 48 EUs can do (vs. the 20-40 EUs previously available).
Looking at that 'jump' in performance is very revealing of what is to come.
What the quotes above should indicate is that Intel is not taking baby steps anymore. They are where they need to be to bring huge 2x+ increases to iGPU performance going forward. It is not inconceivable that the days of the dedicated GPU are nearing their end (and for me, they never really even started - except for Matrox multi-monitor-capable GPUs, a long, long time ago).
The above facts are irrefutable.
Your stated vision of reality is correct with one small issue: you expect things to stay the same (in a relative fashion).
They don't. -
The big problem you seem to be forgetting is that dGPU tech gets better as well. Sure, Intel might be on 9nm by the time a GPU manufacturer makes a 16nm GPU... but the point is that the iGPU does not have enough available resources. You could multiply the current HD 4600 by 10x and still not arrive at a single 980 Ti, let alone whatever is available in 2018 or 2020 when Intel finally arrives at the point where their iGPUs will be "ready", as you say they are. The point I made earlier still stands: they can improve and improve and improve as much as they want, but it only works out if they improve and dGPU tech does not. Each new architecture from nVidia since Fermi has been a massive jump. The Titan Black is the pinnacle of Kepler, and it smokes two GTX 580s (the pinnacle of Fermi) easily. The GTX Titan X is the pinnacle of Maxwell. Even though there hasn't been a die shrink and the Titan X is limited in its power draw, it tears apart Titan Blacks for breakfast. It's not a full 2x jump, but the Titan X is much stronger. Pascal's GP100 (or GP110 or GP200; whatever they end up calling it) will be on a die shrink as well as a new architecture, and Volta is likely to do the same. If that happens, Maxwell --> Pascal and Pascal --> Volta are going to see performance jumps similar to Fermi --> Kepler. And then, if 2020 is the year Intel "fixes up", even 20x the current Broadwell Iris Pro iGPU wouldn't be enough to scratch Volta's GV100 flagship, let alone whatever comes after Volta (which should be out by 2020).
Your logic is simply flawed. What will HAPPEN is that iGPUs will get better and compete with low-end dGPUs from nVidia and AMD (or any other company that jumps into the race). But again, even if they hit the realm of entry-level midrange GPUs, they're not going to replace high-end cards. And, as I said before, given how games seem to be coded - with the previous gen's midrange card being the standard for "low-med" graphics (or "high" at 30fps if you're lucky) - Intel's iGPUs being as strong as Volta's GV106-esque cards in 2020 will simply mean they'll just barely be able to run the more demanding games. -
This thread has got to qualify for the most misleading title on this forum -- there's not a single post after the first one that is about the new 7nm chip (which currently has very little to do with CPUs or GPUs, integrated or discrete).
On the de facto topic of the thread: there is no way that an iGPU can ever compete with a dGPU that gets an extra 50W of TDP. However, it does not have to do that to win out over discrete graphics: it just has to be "good enough", and then there are a few ways discrete cards can become extinct. For example:
- The straightforward extrapolation of the current trend towards thin-and-lights. Basically, not enough people will be willing to pay for the extra card and deal with the associated extra heat, extra size and extra weight (because of the cooling) to keep production of discrete laptop GPUs viable.
- There's an upper limit to photo-realism, and even though we are far from reaching it, the approach is asymptotic and we are not very far from the point of diminishing returns. That is, an improvement in GPU power yields much less of a visual improvement than it did a decade ago, and soon the difference will not be enough to bother with.
- Related point: building a photo-realistic graphics engine is expensive. As a result, practically all AAA games must currently run on consoles, which generally use hardware that was mediocre 3-5 years ago. The iGPU need not keep up with the dGPU; it merely needs to be sufficiently above the consoles to negate the extra optimization that goes into the latter (and this is getting easier as consoles become more PC-like).
-
Also, remember this processor isn't likely to be found in laptops anyway. It will be in minicomputers and in business servers, which won't ever see any gaming. This processor will most likely be handling heavy business workflows, as that is what IBM is focused on.
IBM unveils world’s first 7nm chip
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Mr Najsman, Jul 10, 2015.