The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Was TDP measured differently in the old days?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Peon, Dec 22, 2015.

  1. Peon

    Peon Notebook Virtuoso

    Reputations:
    406
    Messages:
    2,007
    Likes Received:
    128
    Trophy Points:
    81
    It seems that anything pre-Pentium 4 has (by today's standards) insanely low TDPs - for instance, the Pentium III 1000 has a 29W TDP. By modern standards that would be considered a mobile TDP, but the 1 GHz Pentium III was clearly Intel's flagship desktop CPU back in the day.

    Was TDP somehow measured differently back in those days? Or did the Pentium 4 (and especially Prescott) just shift the goalposts, meaning that today's CPUs simply run extremely hot when viewed through the lens of the past?
     
  2. John Ratsey

    John Ratsey Moderately inquisitive Super Moderator

    Reputations:
    7,197
    Messages:
    28,840
    Likes Received:
    2,165
    Trophy Points:
    581
    The basis for TDP calculation should not have changed. Number of transistors, voltage and frequency are factors (and note that for a given CPU, higher speed needs higher voltage to make the transistors behave correctly). However, the value suggested by Intel for manufacturers to use when designing cooling systems may have changed. Current CPUs are allowed to briefly run above their nominal TDP (see what HWiNFO reports), so the power supply needs to be able to deliver more than the TDP, while the metal of the cooling system absorbs the power spikes.
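
    To put rough numbers on those factors: dynamic CPU power scales roughly as P ~ C * V^2 * f (switched capacitance, core voltage squared, clock frequency), which is why raising the clock, and the voltage needed to keep it stable, grows power faster than linearly. A minimal Python sketch of that relationship; the capacitance and the voltage/frequency pairs below are invented illustrative numbers, not real CPU specs:

    # Rough dynamic-power model: P ~ C * V^2 * f
    # (switched capacitance C, core voltage V, clock frequency f).
    # All numbers are illustrative, not measured CPU specs.

    def dynamic_power(c_switched, voltage, freq_hz):
        """Approximate dynamic power in watts."""
        return c_switched * voltage**2 * freq_hz

    base = dynamic_power(c_switched=20e-9, voltage=1.4, freq_hz=1.0e9)  # ~39 W
    # A higher clock usually also needs a higher voltage, so power
    # grows faster than linearly with frequency:
    fast = dynamic_power(c_switched=20e-9, voltage=1.6, freq_hz=1.5e9)  # ~77 W
    print(f"baseline {base:.1f} W -> overclocked {fast:.1f} W "
          f"({fast / base:.2f}x power for a 1.5x clock increase)")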

    I remember that my AST Ascentia 950N was the last Intel notebook I had that did not have a fan. Google suggests the CPU options were in the range of 75 to 120MHz, which indicates a power rating of 8 to 12W. Then came the rapid increase in CPU speed and transistor count, with a corresponding increase in power requirements, so by the turn of the century notebook CPUs had become power guzzlers.
    This prompted the development of the Transmeta Crusoe CPU, which brought power consumption back down, although at the expense of performance. Intel then reacted by implementing better power management and, after the Pentium 4 demonstrated that there was a practical limit to increasing speed, by moving to multiple, slower cores.

    The Pentium 3 may have a similar power rating to current CPUs, but the transistor count (and speed) was much lower. The growth in transistor count is illustrated here. Part of the growth is due to adding cache memory, then multiple cores and, more recently, a GPU into the CPU, so the number of other chips in a computer which also need significant power has been reduced.

    John
     
  3. Charles P. Jefferies

    Charles P. Jefferies Lead Moderator Super Moderator

    Reputations:
    22,339
    Messages:
    36,639
    Likes Received:
    5,075
    Trophy Points:
    931
    John said it; today's CPUs have a lot more on-chip than their predecessors, so their TDP is rated accordingly. In the Core 2 Duo days and before, there was a front-side bus and the memory controller was located on the motherboard's northbridge. Today the memory controller is integrated into the CPU on Intel chips. The situation is similar with graphics, which are now integrated into the CPU and share the same heatsink. Look at one of Intel's technology slideshows on their CPUs; they usually post a diagram of the CPU and how it's laid out.

    Charles
     
    TomJGX and John Ratsey like this.
  4. Kent T

    Kent T Notebook Virtuoso

    Reputations:
    270
    Messages:
    2,959
    Likes Received:
    753
    Trophy Points:
    131
    In the old days, machines were slower and ran cooler, and less was done on the motherboard than in current designs. All of the above affected TDP. Newer designs consume more power and also run hotter due to much higher clock speeds and performance. Today's CPUs have integrated graphics and the memory controller function on board.
     
    Last edited: Dec 26, 2015
  5. pete962

    pete962 Notebook Evangelist

    Reputations:
    126
    Messages:
    500
    Likes Received:
    223
    Trophy Points:
    56
    Actually, TDP is a somewhat arbitrary number picked by Intel's marketing and engineering people, and the maximum CPU power dissipation under full load can be as much as 1.5 times the TDP (for example, most Core 2 Duos, running from 1.8GHz to 3GHz, have a TDP of 65W; go figure that one). To make things more interesting, a TDP calculated by Intel would come out different from a TDP calculated by AMD for the same CPU.
    In other words, TDP is just guidance for the minimum power dissipation requirements, so that the computer runs more or less without too much thermal throttling under a typical load of 75%, if I remember correctly.
    And for a little trivia: the original Pentium had a TDP of around 10W, and Pentium 3 models were rated as low as 18W and as high as 42W.
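
    To make the "guidance for cooling" part concrete: a cooler designer works backward from the TDP, since the total junction-to-ambient thermal resistance must satisfy theta <= (Tjunction_max - Tambient) / TDP. A minimal Python sketch of that sizing; all the temperatures and wattages are invented example figures, not vendor specs:

    # Sizing a cooler from TDP: total thermal resistance (C per watt,
    # junction -> ambient) must keep the die under its limit at TDP watts.
    # All numbers are invented examples, not vendor specs.

    def max_thermal_resistance(t_junction_max, t_ambient, tdp_watts):
        """Worst-case allowed thermal resistance in C/W."""
        return (t_junction_max - t_ambient) / tdp_watts

    theta = max_thermal_resistance(t_junction_max=100.0, t_ambient=35.0, tdp_watts=65.0)
    print(f"cooler must be at or below {theta:.2f} C/W")  # 1.00 C/W

    # If the chip can really dissipate ~1.5x its TDP under a sustained
    # worst-case load, as suggested above, the same cooler runs hot:
    t_die = 35.0 + theta * (65.0 * 1.5)
    print(f"die temperature at 1.5x TDP: {t_die:.0f} C")  # ~132 C -> throttle territory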
     
    Starlight5 and TomJGX like this.
  6. jmg2

    jmg2 Notebook Enthusiast

    Reputations:
    0
    Messages:
    30
    Likes Received:
    6
    Trophy Points:
    16
    The Pentium III Tualatin was actually a very efficient chip. It was superseded by the Pentium 4 "NetBurst" architecture, which emphasized megahertz over efficiency, so there was a huge jump in TDP, however it was measured, from the PIII architecture to the P4 NetBurst architecture.

    This is complicated by the fact that Intel abandoned NetBurst in favor of the Pentium M architecture, which was based on the (far more efficient) PIII architecture and was the predecessor of the Core line. So yes, TDP is arbitrary, but it shouldn't surprise anyone that the PIII was lower, NetBurst much higher, and then there was a subsequent drop starting with the Pentium M and a steady rise with the increase in the number of cores.
     
  7. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    But.. yeah, TDP used to mean the maximum power draw at the locked maximum processor speed. Or, it would be an indication of how much heat you would need to shift off the core to be able to operate at peak (or at all, until.. the P4?). Such and such a TDP was then relevant since it would allow cooling on air with such and such a cooler, etc.

    What complicates things is that you could obviously lock the processor to a clock frequency and still see increased heat during load compared to idle. From overclocking experience, this wasn't very significant until some time after the later P3s, I think: if the bus and southbridge didn't blow up and the processor was cooled, the overclock would work. Later, a stable overclock became a very different thing from a working one (and typically it was the RAM and bus that croaked - the processors had a very high ceiling). This comes from more integrated components being active during load than at idle, on a smaller area, which is of course intelligent design. But it means there's a difference between the theoretical maximum power draw and the power draw you can practically reach during normal code execution: running as many shift operations as technically possible, for example, so that all the metal is active at all times, is a scenario you will literally never have when you run something useful (or even something that.. well.. went through a compiler).
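
    One way to picture that gap: package power is roughly the sum of each functional unit's maximum power weighted by how busy the workload keeps it, and no compiled program keeps every unit at 100% simultaneously. A toy Python model of that idea; the unit names and wattages are invented purely for illustration:

    # Toy activity-factor model: package power as a sum of per-unit
    # maximums weighted by workload activity (0..1 per unit).
    # Unit names and wattages are invented for illustration.

    UNIT_MAX_W = {"alu": 12.0, "fpu": 10.0, "cache": 8.0, "bus_io": 5.0}

    def package_power(activity):
        """activity: fraction of each unit kept busy by the workload."""
        return sum(UNIT_MAX_W[u] * activity.get(u, 0.0) for u in UNIT_MAX_W)

    power_virus = package_power({"alu": 1.0, "fpu": 1.0, "cache": 1.0, "bus_io": 1.0})
    typical_app = package_power({"alu": 0.6, "fpu": 0.2, "cache": 0.5, "bus_io": 0.3})
    print(f"theoretical ceiling {power_virus:.0f} W vs typical code {typical_app:.1f} W")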

    Further compounding this: if you succeeded at a fairly high overclock, the power draw might actually be lower in average/typical usage scenarios, and even in games - back to the whole "RAM giving out before the processor" thing - in spite of the increased clocks (and likely higher peak power draw), because you would just have bursts of activity at less frequent intervals than before. And when I look at the design documents now, that makes a lot of sense, since that's really how the processors that allowed these ridiculous overclocks were designed: to perform short bursts of activity instead of maintaining activity on all components at all times. So the overclock would give you better response and sometimes better performance, at the cost of skirting the edge of what the hardware would endure, as opposed to simply being faster because of higher clocks.

    And this of course goes back to how certain benchmarks score better on a system with intelligent instruction set handling than on one with potentially more activity on all cores, and vice versa. A suspicious person could of course wonder if that had something to do with the bigger focus on higher clocks over more efficient processor output since the P4.

    In any case, it's probably natural that Intel would start to report TDP values for "nominal" loads instead of a possible peak draw, even though that means the processor regularly breaks the TDP limit. That was the entire hoopla over the mobile processors a while back: an unknown workload was used for measuring nominal TDP, and processors were then differentiated based on the maximum clocks that were set. In reality, all of the processors broke the TDP towards more or less the same limit; they just took slightly longer, on nominal loads, to produce the same heat. So you end up with laptop owners running into a hardware lock on the clocks during normal use, since the actual limit for breaking the TDP is the temperature trigger, and the cooling solution is scaled towards a reported "nominal" TDP - but it really needed exactly the same maximum heat dissipation to function at full load. It's pretty obvious that this would happen, but apparently people were really mad about it for some reason.
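
    The "nominal TDP with short excursions above it" behaviour can be modelled as a running-average power budget: instantaneous draw may exceed the sustained limit as long as a moving average stays under it, and clocks get pulled back once it doesn't. A minimal Python sketch of that idea; the limit, time constant and power trace are invented, and real firmware is certainly more involved:

    # Running-average power limit: bursts above the sustained limit are
    # tolerated until the exponentially-weighted average catches up.
    # Limit, time constant and the power trace are invented examples.

    SUSTAINED_LIMIT_W = 15.0  # "nominal TDP"
    TAU_S = 8.0               # averaging time constant
    DT_S = 1.0                # sample interval

    def simulate(power_trace):
        avg, alpha = 0.0, DT_S / TAU_S
        for t, p in enumerate(power_trace):
            avg += alpha * (p - avg)            # exponential moving average
            throttle = avg > SUSTAINED_LIMIT_W  # clocks get pulled back here
            print(f"t={t:2d}s draw={p:5.1f}W avg={avg:5.1f}W throttle={throttle}")

    # 10 s burst at twice the sustained limit, then a drop to idle:
    simulate([30.0] * 10 + [5.0] * 5)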

    Just as AMD would design their processors to throttle when the actual power draw approaches a level that would push thermal dissipation over a certain limit. Because, obviously, all processors can be clocked until the sand melts. And that's really the discussion: how useful is it to have potential benchmark scores that favor usage scenarios which very likely will never happen? Just as it makes very little sense to report the TDP measured during those tests if that usage scenario doesn't occur during normal use.

    Though of course combining the two as a PR and business move, and suggesting that a processor has higher performance but a lower TDP rating, is, and was, a very dishonest thing to do. But that's how things are, I understand.

    But it means, if you accept my extremely inductive reasoning here, that for Intel processors the "TDP" value is (and was) an expected average heat dissipation over time, while the maximum power draw can be extremely high at peak and is fairly easy to predict from architecture to architecture based on the number of cores and the clock speed. That increased core package load is generally allowed until the temperature reaches a certain point (set by vendors at utterly random values, unfortunately..). For AMD, on the other hand, the TDP is (and, as far as I understand, also was) the actual maximum power draw allowed for the components on the core at the set maximum clocks: you can massively overclock those processors, but (until you unlock the throttling mechanism) they will only be allowed to run at those speeds for as long as the actual workload doesn't require the components to draw over a certain amount of power. It's just that with differentiated clock states and idle modes, along with a greater number of components integrated on the actual chip die, the difference between the two ways of measuring things is more visible.

    In other words, overclocking now really doesn't exist on AMD - what you're targeting is an intelligent way to spend the TDP (..and while I don't have first-hand info on this, what I've heard is that some of the AMD packages are good overclockers in that sense - you can break the TDP limit and raise the clocks very high, even if that on its own won't increase performance linearly, and so on, as discussed). On Intel, meanwhile, you can in theory burst every processor to the same clocks if your cooling allows it, and actually get the same performance (after all, the hardware is the same). The relevant differences then really come from bus speed, RAM, cache size (of course), core architecture, the cooling solution, and programmed locks on burst durations and clock limits. In that sense, all the different processors for different markets and uses are actually differentiated in software/firmware rather than in hardware. But we knew that already, surely, after the i5 vs i3 thing on desktops over the past couple of years. :p
     
  8. Starlight5

    Starlight5 Yes, I'm a cat. What else is there to say, really?

    Reputations:
    826
    Messages:
    3,230
    Likes Received:
    1,643
    Trophy Points:
    231
    Pardon my insolence, but I call BS on this. Given a proper platform with adequate RAM, processors could exhibit anywhere from a measly 200MHz increase before they lost stability to over a 50% increase without any problem.
     
    Last edited: Jan 4, 2016
  9. jmg2

    jmg2 Notebook Enthusiast

    Reputations:
    0
    Messages:
    30
    Likes Received:
    6
    Trophy Points:
    16
    Agreed. This is just crazy wrong. I've been overclocking since the 486 days (when overclocking meant using a piece of electrical tape), and the processor has always been the limiting factor.
     
    Starlight5 likes this.
  10. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    ..wasn't that what I said? :p If you could cool the processor well enough, you could get really, really high clocks and break the expected watt drain by ridiculous margins (even if the percentage increase in clocks might not have been that huge). And there's nothing in the architecture as such that limits it - at the very least not between variants in the same "generation", but likely it's the case between similar generations as well.

    Or rather, I wanted to point out that the TDP max can be broken by insane margins, at least for short bursts, on Intel platforms.

    And.. I was just saying that if you measured where the errors turned up that caused an unstable overclock to break, it would be timing problems in RAM areas. I mapped that in the logs - IO sometimes, then RAM, and then protection faults - and not the processor actually burning, far from it. So from what I know about how computer chips are put together, I'm thinking this doesn't come from errors originating in the level-2 cache, as if the CPU is running its legs too fast or something ridiculous like that. It comes from timing errors, or the bus and RAM not keeping up.

    Like on the P3s, where some setups and mainboards could push a P3-600 to over 1000MHz (multiplier locked, but the bus speed could be raised, etc.), while other mainboards croaked long before that with the same processor and cooler. That's not the processor croaking; it's the bus and RAM. Or rather: all of the CPUs were designed to withstand the same peak power draws, but they were clocked at different speeds in different variants (at different prices).
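
    For reference, the P3-600-to-1GHz jump follows straight from how the core clock was derived back then: core = FSB x multiplier, and with the multiplier locked, raising the FSB was the only option - dragging the RAM and everything else on that bus along. A quick Python sketch using the commonly cited 100MHz x 6.0 configuration of the P3-600E (treat the exact figures as illustrative):

    # Core clock in the FSB era: core = front-side bus * multiplier.
    # With a locked multiplier, overclocking means raising the FSB,
    # which also overclocks the RAM and everything else on that bus.
    # Based on the commonly cited 100 MHz x 6.0 P3-600E configuration.

    MULTIPLIER = 6.0  # locked on the CPU

    for fsb_mhz in (100, 133, 150, 168):
        core_mhz = fsb_mhz * MULTIPLIER
        print(f"FSB {fsb_mhz} MHz -> core {core_mhz:.0f} MHz "
              f"(RAM/bus at {fsb_mhz / 100:.0%} of stock)")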

    I'm not claiming to be a huge expert on this or anything. But it's at least a good guess, and it makes sense when you look at some of the best overclockers - how they're put together and where the components are - and then at why increasing the voltage to get higher bus clocks and tighter RAM timings would sometimes make a bad overclock suddenly succeed. And maybe this is not as easy to do now that some of the bus components are integrated on the chip die - since, perhaps, these components are more sensitive to heat increases than the CPU components, yet suddenly become very hot along with the CPU, and maybe rely on other variables than the ones exposed to you in the BIOS as well.
     
  11. jmg2

    jmg2 Notebook Enthusiast

    Reputations:
    0
    Messages:
    30
    Likes Received:
    6
    Trophy Points:
    16
    Not really.

    Because of the way the manufacturing process works, there are chips you can drop in liquid nitrogen and still get a miserable overclock from, and others you can put an aftermarket AIO cooler on and get a really decent overclock from.

    These are obviously extreme examples I'm raising to make a point. The reality is that some processors in the same family will respond to overclocking far better than others. It's the luck of the draw in some cases (although far less so today, with unlocked processors, than in the old days).

    Anyway, the limiting factor is often the processor, not the memory as you said in your previous post.
     
    TomJGX and Starlight5 like this.