Nice. Sucks that I'll be stuck with a Lenovo X220 with only USB 2.0 for the next few years though.
-
Karamazovmm Overthinking? Always!
Leaked Intel slides reveal Sandy Bridge-E, Ivy Bridge details - TechSpot
Here is something about the chipsets and sockets: it appears that my fears were not justified. All hail the wrong ones!
I guess I'm going to wait until next year to buy a notebook. With the possible widespread use of Thunderbolt, the upcoming 7000 series from AMD, and the possibility of USB 3.0, that is too good to pass up. -
I was going to wait until the end of summer for a new machine, but I guess I will wait a bit longer and see what Ivy Bridge and AMD's Fusion line bring out. I do like the prospect of USB 3.0 and Thunderbolt, and I think I can wait a bit longer for it; plus, hopefully by then more pen-based tablet options (non-Atom, of course) will be on the market.
Are the CPU back? -
Obviously now ATi is going to come out with an amazing Cayman-optimized chip that will completely prove my hypothesis wrong.
Back on track: I hope that Ivy mobile will be socket-compatible with Sandy Bridge chipsets. I don't plan on waiting on Ivy, because Haswell is supposed to bring a big jump in IGP performance, and currently that's more important than pure CPU for me. -
Whoops, I meant to say CPU wars, as AMD seems to be showing some nice competition for Intel, especially against the Atom line.
-
Jayayess1190 Waiting on Intel Cannonlake
Intel Promises Revolutionary 22nm Technology
-
Karamazovmm Overthinking? Always!
at least my money is playing with the stock market -
I've said it since SB, I'm way more excited for IB.
-
Forget Haswell, here comes the Pelican Lake shrink on 7nm in 2016
"Technologically, Pelican Lake is streets ahead of Sandy Bridge, with levels of parallelism that present designs can only dream of. Intel’s engineers will also need to have solved some serious leakage issues – and consider running large areas of the chip asynchronously, because it’s almost too complicated to try and maintain a single clock for the whole processor.
We’re presently moving into the area of stacked memory and, pretty soon, we could be looking at stacked CPUs – designs which are inherently 3D in more ways than one.
Right now, in 2011, we're all marvelling at how much power we can achieve from a Core i7 2600K processor, costing just over £200, and clocking to 4.8GHz on air. With moves like Intel's 7nm process change, the present quad-core 2600K processors will be made to look like today's Atoms. We're moving that fast every 5 years."
Intel to launch Pelican Lake shrink on 7nm in 2016 | Tecnobits
Cheers
3Fees -
7nm? Quantum tunneling would wreck that s*** up lmao
-
I wonder what this stuff is really about. Between now and a 7nm arch according to roadmaps we still have Ivy Bridge (22nm tick), Haswell (22nm tock), Broadwell (16nm tick), Skylake (16nm tock) & Skymont (11nm tick), then whatever-they-name-it 11nm tock, 7nm tick, and then that 7nm tock.
That's 3 architectures forward and at least 8 years from now, assuming Intel can keep up with the 2-year tick-tock cycle, and frankly I doubt they will (Sandy Bridge was already a year late). So that's 2019, not 2016, unless 16nm - 11nm - 7nm is incorrect (I think I read somewhere that 14nm rather than 16nm might come after 22nm). As a reminder, 3 architectures before Sandy Bridge we were in the Netburst era, which now seems a distant past. So why even bother communicating about it? -
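The roadmap arithmetic above can be sketched out as a toy projection. The step list and the 2012 start year are assumptions taken from this thread's discussion (one tick or tock per year), not official Intel dates:

```python
# Toy projection of the tick-tock cadence discussed above: one step
# (tick or tock) per year. Step names and start year are assumptions
# from the thread, not an official roadmap.
roadmap = [
    "Ivy Bridge (22nm tick)",
    "Haswell (22nm tock)",
    "Broadwell (16nm tick)",
    "Skylake (16nm tock)",
    "Skymont (11nm tick)",
    "unnamed (11nm tock)",
    "unnamed (7nm tick)",
    "unnamed (7nm tock)",
]
START = 2012  # assumed Ivy Bridge launch year

# Map each step to its projected year.
schedule = {step: START + i for i, step in enumerate(roadmap)}
for step, year in schedule.items():
    print(year, step)
```

Under these assumptions the 7nm generation lands around 2018-2019, matching the post's "2019, not 2016" conclusion.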
This recently published article contains many slides showing new details about Ivy Bridge.
Google Translate
A big "MEH" from me -
Still no 35W quad-cores, it seems. So what, the only benefits to be expected from 22nm will be a lousy +400-500MHz boost, lower idle consumption, and an enhanced iGP? Not really overwhelming imo.
-
o_o And what exactly is so impressive about SB if speed + low power + better IGPs are underwhelming to you?
-
Jayayess1190 Waiting on Intel Cannonlake
So this means that Haswell is going to be BIG! Mobile quads for all!
-
Sandy Bridge gave us cores, IGP and memory controller on the same die. It also featured a new instruction set and lower power consumption than the previous generation. It introduced Turbo boost 2.0, where it can turbo above TDP.
Ivy Bridge will be 22nm but will feature the same as SB, except they have improved the speed a bit and the IGP will have 16 EUs instead of 12 EUs. And as seen from the spec sheets, the TDP of the new processors will be the same as Sandy Bridge. Ivy will also have the same turbo technology as Sandy Bridge. Sure, Ivy is an improvement, but it features the same stuff as SB and is not a big jump like Nehalem to Sandy Bridge.
imo -
Well, of course. It's a tick, not a new architecture. All IB has ever been made out to be is an improvement of the architecture. But it will improve performance, improve power usage, and improve the IGP.
-
I would much rather have lower TDP than lower power consumption, though. But it will be interesting to see how many fewer watts they will draw.
-
Higher frequencies and lower idle power are the usual goodies you can expect from a die-shrink. It seems we won't get many additional refinements besides a roughly 50% more powerful iGP (16 EUs, dedicated memory, maybe higher clocks). But unless Intel finally offers real support, they will still suck for all I care.
Finally, I was excited about Ivy Bridge because I expected mainstream quad-cores in small form-factor & cheap notebooks: 45W mobile quads have been around since 45nm, and 22nm is two nodes further, damn it. Sure, the chips have been shrunk and they stuck quite a few things on them (memory & PCIe controllers, iGP, etc), and I'm sure that technically they could, but they won't, because they believe quad-cores won't interest that many people, so they'd rather stick with smaller, cheaper dual-cores. Anyway, if it's confirmed, like Cloudfire said, Ivy Bridge is definitely not the kind of improvement that Clarksfield/Arrandale -> Sandy Bridge was. -
I think what many people are forgetting is that improved power management is supposed to come with Haswell, along with a much better IGP. Ivy is only supposed to be a shrink, always was, and the same goes for Rockwell.
Also, the Pelican Lake 7nm fab is starting in 2016; the processors would most likely not be out until at least a year after that. But the projected performance improvements, if they do come true, are enormous! -
This doesn't seem to have been posted...
Intel will mass produce 3D transistors for all future CPUs, starting with 22nm Ivy Bridge (video) -- Engadget
Holy...
-
OK, so it's not just a die shrink -- they change the transistor type. But what exactly does this mean? How do 3D "Tri-Gate" transistors differ from ordinary ones? What makes them better? What makes them hard to do (I think Intel promised something like this a long time ago, but it's only here now)? I need an AnandTech article...
-
Looks very promising. Already now we have SB laptops with IGPs that get many hours of battery life. Will be crazy.
But I wonder what 50% reduced transistor power consumption means for the entire CPU, though. -
tilleroftheearth Wisdom listens quietly...
What it will eventually mean is that Off, Standby and Hibernate are things of the past. On and Reboot will be the most used options. (Because idle just doesn't use any appreciable power to worry about).
-
3D chips. Now that's going to be a worthwhile upgrade.
-
Jayayess1190 Waiting on Intel Cannonlake
Comment from Anandtech article:
-
And this is starting with IB?
-
Well, for one thing, I would rather we actually have some progress/innovation and get those 30GHz graphene CPUs with a 3D setup, and not stick to outdated silicon tech.
Seriously, on one hand it's a creative way to add more computational power, but sticking to silicon and its limitations is just wrong (not to mention it further delays actual progress).
At this rate it's gonna take well over several decades before we see graphene CPUs, let alone quantum computers.
Delay and more delay takes priority just so they can keep profiteering from outdated (but revised) technology. -
tilleroftheearth Wisdom listens quietly...
Deks,
Seriously?
What do you think they're going to design/build/model those (future) chips on? Air?
I agree that we are using old tech (even if we have the latest 'latest') - but graphene and quantum computing are not going to come from just wishful thinking. Today's tools/computers will build the next generations - just as they always have. -
A bit OT, but could we see this process technology in the next gen of Atom CPUs that are coming out? And could this also mean quad-core Atom and ULV CPUs come out earlier than we expected?
-
So Anandtech estimates around 18% better performance with Ivy than Sandy Bridge, and 50%+ less power draw from the transistors. I am more interested in knowing how much the entire chip will draw though, because that is what matters. I suspect the chip's power draw won't be far from Sandy Bridge's, because the duals and quads have the same TDP as SB and they increased the IGP from 12 EUs to 16 EUs. But an 18% performance boost at the same or less power draw is good anyway.
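To see why the chip-level draw may not drop by the full 50%, here's a toy back-of-envelope using only the figures from this thread (50% lower per-transistor power, IGP grown from 12 to 16 EUs). It's a pure scaling exercise, not a real power model; it ignores clocks, leakage, uncore power, and everything else:

```python
# Toy estimate: if each transistor draws ~50% less, but the IGP has
# 16/12 (about 1.33x) as many EUs, the IGP's power scales roughly as
# the product of the two factors. Illustration only.
transistor_power_scale = 0.5       # ~50% less per transistor (claimed)
eu_growth = 16 / 12                # 12 EUs -> 16 EUs
igp_power_scale = transistor_power_scale * eu_growth
print(f"IGP power vs. Sandy Bridge: ~{igp_power_scale:.0%}")
```

So even under these rosy assumptions the bigger IGP claws back a chunk of the savings, which fits the observation that TDPs stay where they were.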
-
The point is that the architectures for C2D, SB, IB, etc. were already thought up and designed well ahead of schedule, but only made their market debuts recently.
If we were supposed to see the 'cutting edge', we would have been well onto graphene by now, perhaps even beyond that.
But the industry doesn't work like that.
They give us piece by piece, revisions if you like and charge heaps of money so they can make as much profit as possible.
I mean seriously, if they were in this for 'progress', then we wouldn't be using the tech we have now but we'd be light years ahead.
They will use this 3D stacking to simply churn out more of the same.
Sure, computers will become more powerful, but they would still be inferior compared to what they could do with graphene for example (or switching to new technologies altogether).
And that's just the tidbit of info of what we do know. How about possible techs that hadn't even been disclosed to the general public?
Then again, Intel and AMD are mainstream companies...
IBM on the other hand... -
Forget Ivy Bridge, Haswell on the way!
-
Haswell to introduce 5-dimensional processing in 2012.
-
Deks, I'm sure we'll see that eventually, but I think it's likely that graphene is just more expensive than silicon, or there isn't enough research done on it. I would bet money that Intel and AMD are both researching it.
-
Technology will always improve simply because developing better products than one's competitors is the most profitable business strategy. No business creates new things for the sake of "progress". -
IBM has them well beaten to the punch in terms of research (if the industry were interested, we could have been running quad-core 30GHz CPUs by now, or at least 2 to 4x more cores per chip), and graphene has been claimed to potentially be cheaper than silicon, not to mention self-cooling.
Besides, it would force software makers to get off their rear ends among other things.
I'm just irritated at how money and profit is forcing us into stagnation and very slow actual progress.
The use of silicon at this stage is just ridiculous. -
My beef with the system is that these people are giving us what is essentially outdated technology that has been slightly revised.
Technology is not 'evolving' at a high rate... it only seems like it is because they keep releasing revisions every couple of years that abide by Moore's law (most of which were designed a decade ago).
Technological evolution would be switching from silicon to graphene, or from graphene to quantum computers.
Everything in between is mere 'revision'.
At the moment, the industry is driving revision through the roof and will keep doing so if it means more profit for them before they 'have' to switch to new technology like graphene (or possibly something better).
Let's not even go into how laptops could easily be on equal footing with desktops in terms of performance, at lower power draw, with current tech. It's certainly doable, but a profitable business isn't here to push tech out to market fast (nope... merely to milk people for cash for as long as possible).
I'll probably still get an Ivy Bridge, or the CPU with architectural changes that comes right after it (after all, I need a more powerful computer for my 3ds Max), but that doesn't negate the premise that by now we could have had computational power that goes way beyond what we are presently using.
I wonder how long the industry will keep pushing these revisions until they finally change to actual new technology. -
When people have to choose between philanthropy and profit, guess what they'll choose? Why skip several generations of CPUs bringing out your newest and best designs and lose literally billions of dollars when you can secure yourself a steady income from just revising old stuff?
Sure, I would like to power my 60MPG internal combustion bike with a revolutionary battery that would give me 50000 "MPG" but as long as petrol keeps pumping from countries easy to invade, I'll keep dreaming. Same with CPU's.
On the other hand, we could still have had Pentium 4-level CPUs in 2011 if Intel or AMD had been even more profit-oriented than they already are.
Just keep thinking that in 10 years we'll have Avatar-like, real-time rendered games on mainstream laptops. That's really enough for me.
Forget Huron River, 22nm Ivy Bridge on the Way
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Jayayess1190, Oct 1, 2010.