I'm looking through the coverage of Fall IDF 2006 here, and I can't help but be a little bit excited.
Santa Rosa in '07, 45-nm process CPUs in late '07 and early '08.... Intel kept mobile Core 2 Duo on the same chipset as Core Duo initially to make the transition easier on the ODMs, so I can't wait to see C2D running on a chipset that was designed for it from the ground up. Intel is hoping for a 40-60 minute battery life improvement, but I'm sure that will be offset by more integrated peripherals in a lot of cases.
Until I bought my Z33, I hadn't paid a lot of attention to notebooks. Always kind of wanted a Thinkpad - still wouldn't mind an old 600E or something to play around with, actually, but they hold their value so well I've never found one for the right price - but mainly I had a lot of terrible experiences with my dad's 3 Toshiba Satellite models that he had for work throughout my childhood. My mom bought a PIII-M Satellite too, and I always thought it was a piece of junk, so I went the way of the desktop.
Lately, though, I've become obsessed with performance/watt and performance/size. I don't want a huge hulking tower with 3 120mm fans and a heat sink the size of Kansas anymore. I want the tiniest, sleekest notebook I can have that will still do what I want to do.
What I am really hoping for is that in two years, when the 45nm process chips are available, I will be able to replace my X2 box and my Z33 with a single machine: a 14" Asus Ensemble with dedicated graphics (hopefully close to whatever desktop dedicated graphics are current at that point, if the graphics companies can shrink their manufacturing processes and cut down on heat a bit) and a docking station like the one on the V1J.
-
You actually touched on the real problem: the GPU. The die on the graphics processor needs to shrink like the CPU's so we can have cutting-edge graphics in a laptop that weighs less than 3 kilos...
-
CPUs have had their process shrink much faster than GPUs, reducing leakage and heat. I am hoping that AMD buying ATI and Intel working on better graphics solutions may give us 65nm graphics at the same time we get 45nm CPUs. -
ltcommander_data Notebook Deity
GPU processes are not significantly behind CPU processes. The only company with really aggressive process technology is Intel. AMD won't ship their first 65nm chips until December, and even then only the low- to mid-range chips; the FXs don't look to transition until mid-2007. This means that AMD's 65nm process is still maturing. In contrast, TSMC's 65nm process used by ATI and nVidia is supposed to start shipping products by Q2 2007. In that sense, they are right in line with AMD. Again, Intel is the exception since they always lead in die shrinks.
Now CPU and GPU processes work inherently differently. For one thing, CPU processes work in standard steps, 90nm-65nm-45nm, averaging about 2 years between each step. For GPUs, because of the need to increase transistor counts, they use half steps, going 90nm-80nm-65nm-55nm-45nm. In principle the 80nm process is actually just a "shrink" of the 90nm process, meaning that 80nm doesn't add any major new features, so ATI and nVidia do not need to redesign their GPUs when moving between the 90nm process and its 80nm half-step. The half-steps are supposed to be used for cost reductions. The 65nm process, however, offers different characteristics than the 90nm/80nm process and so requires a core specifically designed for it. 55nm is the half-node for the 65nm process.
As of right now, Intel is at 65nm, TSMC is at 80nm, and AMD is still at 90nm, so putting things in perspective, GPUs are not behind CPUs.
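Just to put some numbers behind the half-step idea, here's an idealized die-shrink calculation in Python (pure (new/old)^2 area scaling, so treat it as a best case; real layouts never shrink that cleanly):

# GPU roadmap with half-steps, per the paragraph above
nodes_nm = [90, 80, 65, 55, 45]

for old, new in zip(nodes_nm, nodes_nm[1:]):
    linear = new / old    # linear shrink factor
    area = linear ** 2    # ideal die-area scaling
    print(f"{old}nm -> {new}nm: linear x{linear:.2f}, ideal die area x{area:.2f}")

The 90nm-to-80nm half-step only buys you roughly a 20% smaller die even in the ideal case, which is why it's billed as a cost reduction rather than a new generation, while a full step like 90nm-to-65nm roughly halves the area.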
My current concern with GPUs is the DX10 models. nVidia's G80 has a large transistor count, 128 unified shaders, and a large 384-bit memory interface, yet it looks to still be produced on the 90nm process. What's more, the demos show it coming stock with a water cooler, and it's supposed to consume 200W of power. My only reaction is you've got to be kidding me. Getting that thing into notebooks is going to be near impossible. Even if they use a mid-range part that is only 1/4 of the full G80, we're still looking at 50W of power, which is more than 2 times higher than current mid-range parts. (Current nVidia mid-range parts are 1/2 or 1/3 of full G72s.)
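To put rough numbers on that, here's a quick Python back-of-envelope. The 200W figure is the rumor above; the ~20W for today's mid-range parts is just my own ballpark assumption, and I'm assuming power scales linearly with the fraction of the chip you keep, which is really an upper bound:

FULL_G80_WATTS = 200          # rumored full G80 board power
CURRENT_MIDRANGE_WATTS = 20   # ballpark assumption for today's mid-range parts

def scaled_power(full_watts, fraction_of_units):
    # naive estimate: power scales with the fraction of functional units kept
    return full_watts * fraction_of_units

midrange_dx10 = scaled_power(FULL_G80_WATTS, 1 / 4)  # hypothetical 1/4-of-G80 part
print(f"1/4 of a {FULL_G80_WATTS}W part: ~{midrange_dx10:.0f}W")
print(f"vs. current mid-range: ~{CURRENT_MIDRANGE_WATTS}W, "
      f"about {midrange_dx10 / CURRENT_MIDRANGE_WATTS:.1f}x higher")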
What's more, ATI is doing no better. They are marketing their card as having even more transistors than nVidia's, and the R600's power consumption is supposed to be around 250W on the 80nm process. Completely ridiculous. And joining AMD will not help things at all. AMD hasn't said they are planning on producing ATI cards in their fabs anytime soon, and they couldn't even if they wanted to since they have no extra capacity anyway.
Unless something shapes up soon, any gains in battery life from more efficient processors are going to be eaten up by these mega-GPUs. -
-
I agree with what you have to say LtCDR Data, but let me see if I can get you to think about it from a different perspective. The current GPUs are clock-speed limited by the etching process, and the amount of power that they consume is also dictated by the size of the transistors in the die. So the trend has been to increase the number of pixel pipelines: if you can't raise the frequency of the part, then put more processors on the same chip to make it faster...
Once they shrink the die down to the same level as current CPUs then, maybe with a redesign of the architecture, the speed of the GPU can be ramped up to ~2GHz and the voltage reduced, and you wouldn't need a high transistor count to get high performance. Imagine what a Go 7400-class GPU, currently running at 400MHz, could do if you could clock it to 2GHz?... -
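One way to sanity-check the clock-speed idea is the usual dynamic-power relation, P ~ C * V^2 * f. A quick Python sketch (the voltages here are made-up illustrative numbers, not real Go 7400 specs):

def relative_power(f_old, v_old, f_new, v_new):
    # ratio of new dynamic power to old, holding switched capacitance C fixed
    return (f_new / f_old) * (v_new / v_old) ** 2

# e.g. 400MHz at 1.2V today vs. a hoped-for 2GHz at 1.0V after a shrink
ratio = relative_power(400e6, 1.2, 2e9, 1.0)
print(f"Dynamic power changes by about {ratio:.1f}x")   # ~3.5x

A shrink also reduces C, so the real picture would be better than this, but it shows that a straight 5x clock bump isn't free even with a voltage drop.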
Someone needs to convince the graphics companies to start seriously building for power consumption and efficiency.
I suspect this will have to be a two-pronged effort, one prong coming from desktop users refusing to accept 200W video cards with water coolers, and a second prong coming from the growing market for notebooks w/dedicated graphics. -
ltcommander_data Notebook Deity
I don't think it works that way. Unlike CPU code, GPU code is inherently parallel, which is why the push is for a wider core. The way I interpret the reasoning is that a frame can be easily split into many parts that are processed in parallel and then placed back together for output. In such a case a wider GPU is beneficial because more pieces can finish together as a whole, rather than a few pieces finishing quickly but having to wait for the other pieces to finish processing before display. In the latter case, the amount of space dedicated to buffers and memory is large, since you have to hold all the pieces that have finished quickly in storage until the other parts are done so the whole thing can be recombined. Also, if you look at AA and AF, these are parallel tasks where you are taking multiple readings from the same block. I would think that again a wider core would be more beneficial in this type of task than a narrower, faster core.
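Here's a toy sketch of the split-into-tiles-and-recombine idea in Python (threads standing in for shader units, purely to illustrate the shape of the work, nothing to do with how a real GPU or driver does it):

from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 640, 480, 64   # arbitrary toy frame and tile sizes

def shade_tile(origin):
    # stand-in for the real per-pixel shading work on one tile
    x0, y0 = origin
    return [(x, y) for x in range(x0, min(x0 + TILE, WIDTH))
                   for y in range(y0, min(y0 + TILE, HEIGHT))]

tiles = [(x, y) for x in range(0, WIDTH, TILE) for y in range(0, HEIGHT, TILE)]

# a "wider" core is like having more workers chewing on tiles at once,
# so tiles finish closer together and less finished work sits waiting in buffers
with ThreadPoolExecutor(max_workers=8) as pool:
    shaded = list(pool.map(shade_tile, tiles))

print(f"{len(tiles)} tiles shaded, {sum(len(t) for t in shaded)} pixels reassembled")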
This means that higher clock speeds do not necessarily mean that the die size is smaller, just that the die is largely devoted to buffers and caches rather than shaders. Also, I don't think even a relatively small Go 7400 on a 65nm process could reach 2GHz. Even 800MHz, or double the current speed, would be difficult. -
ltcommander_data Notebook Deity
Check out this Belkin expansion dock for ExpressCard-capable laptops: it has a built-in GPU... http://www.belkin.com/pressroom/releases/uploads/10_10_06NotebookExpansionDock.html
-
Yeah but the ExpressCard/54 standard is only a PCI-e 1x bus, so bandwidth is rather limited.
-
True, but it can handle 2.0Gbps, which is plenty fast for anything non-3D. Imagine if they were to add a hard drive to it... -
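For what it's worth, that 2.0Gbps comes straight from the link math: a single PCIe 1.x lane signals at 2.5 GT/s with 8b/10b encoding, so only 8 of every 10 bits are payload. A quick Python check:

line_rate_gbps = 2.5            # PCIe 1.x signaling rate per lane
encoding_efficiency = 8 / 10    # 8b/10b encoding overhead

usable_gbps = line_rate_gbps * encoding_efficiency
usable_mbytes = usable_gbps * 1000 / 8   # per direction, before protocol overhead

print(f"~{usable_gbps:.1f} Gbps usable, ~{usable_mbytes:.0f} MB/s per direction")

So about 250 MB/s each way, which is indeed plenty for USB, audio, networking, or a single hard drive, but not much headroom for 3D.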
I'm interested in how that built-in GPU plays with Vista. It says it is 'Vista Ready', but I suspect that's only for the 2D non-accelerated interfaces. Is it just capturing what the GPU on the notebook is producing somehow?? I have no idea... It would be pretty dumb for people with X1600s to lose that when plugging into the dock, so I wonder if it's just a frame-capture type system with a custom display driver...
What's Ahead?
Discussion in 'Asus' started by Jumper, Oct 7, 2006.