The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    The 9262 vs XPS1730 3Dmark06 score fiasco

    Discussion in 'Sager and Clevo' started by WackMan, Apr 2, 2008.

  1. Magnus72

    Magnus72 Notebook Virtuoso

    Reputations:
    1,136
    Messages:
    2,903
    Likes Received:
    0
    Trophy Points:
    55
    Guys, run 3D Mark 05 instead. It isn't as heavily CPU bound as 3D Mark 06 is; see what scores you get there. I agree with the previous poster that SLI really shines at resolutions of 1600x1200 and above. I mean, just look at my scores: I have 10600 in 3D Mark 06 at 1280x1024 and 10100 in 3D Mark 06 at 1920x1200. That tells you something. At the lower resolutions the CPU really bottlenecks these GPUs.
     
  2. wobble

    wobble Notebook Evangelist

    Reputations:
    68
    Messages:
    340
    Likes Received:
    0
    Trophy Points:
    30
    Excellent example, and I see that yours was one of the scores I picked up in the 1730 Owner's Lounge when I was attempting to understand machine-to-machine variation.
     
  3. Magnus72

    Magnus72 Notebook Virtuoso

    Reputations:
    1,136
    Messages:
    2,903
    Likes Received:
    0
    Trophy Points:
    55
    Glad I could help, wobble. Though in real-life performance (i.e., gaming) I have a hard time believing the Sagers/Clevos would do any worse than the M1730s. This synthetic benchmark doesn't mean much; it is better to compare gaming benchmarks to gaming benchmarks instead and see where we stand.
     
  4. ARGH

    ARGH Notebook Deity

    Reputations:
    391
    Messages:
    1,883
    Likes Received:
    24
    Trophy Points:
    56
    guys, why don't you simply run the be-all, end-all test of tests to get a final answer? ......

    m1730 VS. d900c

    crysis timedemo internal benchmark;

    wuxga resolution, all settings high in dx9, stock un-modified game, a pass with single gpu and a pass with sli enabled using the same drivers. you may overclock your systems as high as you can as long as the timedemo can finish. you can also run them stock but the side by side comparisons must be on equal grounds.

    i mean, seriously, why not use a real world game instead of the stupid 3dmark? it has already been shown that certain cards in the past catered favorably to this popular benchmark, yet did poorly in real games....like the 8700m gtx, as an example.
     
  5. wobble

    wobble Notebook Evangelist

    Reputations:
    68
    Messages:
    340
    Likes Received:
    0
    Trophy Points:
    30
    Good suggestion.

    Justin has done this with the 9262 and I think I saw such a result from the 1730. I could be wrong, but I seem to remember that the two machines fare about the same. Perhaps it's just a matter of gathering the results. I'll look around.

    Edit: Since it is beginning to appear that the best driver for SLI is not always the best driver for non-SLI (please don't ask me why!), it might be best to skip the non-SLI comparison.
     
  6. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    I agree the 3dmark thing is trite now.

    real-world benchmark time
     
  7. Vedya

    Vedya There Is No Substitute...

    Reputations:
    2,846
    Messages:
    3,568
    Likes Received:
    0
    Trophy Points:
    105
    ^LOL

    Dex's magnanimous consensus after 11 pages :p :D
     
  8. wobble

    wobble Notebook Evangelist

    Reputations:
    68
    Messages:
    340
    Likes Received:
    0
    Trophy Points:
    30
    Yes, you hit on the 64-dollar question: The dual core machine gets a nice boost in GS score (average of SM2 and SM3) when run with SLI, so why doesn't the quad core machine get a similar boost?

    This shortfall has a dramatic impact on total 3DMark06 score. For example, if you run the calculation using the dual core GS score (which the quad should have gotten I would think) and the quad core CPU score, you get a total score of 14426, nearly 10% higher than the score you actually got.

    So... is 3DMark06 screwing up the quad GS scores somehow, or is the quad somehow screwing up? Perhaps the quickest way of finding out is to compare the dual and the quad in actual game FPS, and the Crysis benchmark might be a good start.

    Can you do that, Justin?

    wobble

    PS: The GS score is determined exclusively by the fps results of the four full-speed graphics runs in 3DMark06. It may be of interest to someone smarter than I to ponder why the quad did especially poorly in the fourth run, "Deep Freeze".
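
    For anyone who wants to replay that arithmetic, here is a minimal sketch in Python of the total-score calculation, using what I believe is the formula from Futuremark's 3DMark06 whitepaper (verify against the whitepaper before leaning on it); the subscores below are hypothetical placeholders, not anyone's actual results:

    def total_3dmark06(sm2, sm3, cpu):
        # GS = average of the SM2.0 and HDR/SM3.0 graphics subscores
        gs = 0.5 * (sm2 + sm3)
        # Overall score, per the (believed) whitepaper formula
        return 2.5 * 1.0 / ((1.7 / gs + 0.3 / cpu) / 2.0)

    # Hypothetical example: a dual-core-class GS paired with a quad-class
    # CPU subscore, to see how much the CPU test swings the total.
    print(round(total_3dmark06(sm2=5200, sm3=5400, cpu=4200)))  # ~12750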
     
  9. eleron911

    eleron911 HighSpeedFreak

    Reputations:
    3,886
    Messages:
    11,104
    Likes Received:
    7
    Trophy Points:
    456
    Drivers! Blame the drivers!
    Also, I'm betting Vantage will make a better assessment of the system. But that also means 1/2 the score again.
     
  10. Justin@XoticPC

    Justin@XoticPC Company Representative

    Reputations:
    4,191
    Messages:
    3,307
    Likes Received:
    0
    Trophy Points:
    105
    Working on the Crysis runs; will try to update today or Monday. :) Here are a couple of new 3D Mark 06 scores to view that came out very well. :) (Middle score out of 3 runs.) These are both with:
    -Fresh Install of Ultimate 64 w/ SP1
    -Q9550 Quad Core CPU
    -4GB of Ram
    -SLI 8800's

    This is with a new driver version 174.82 that has started pre-loading on new orders and will be updated on Sager's website later tonight or next week. I will report back after Crysis is complete. :)

    1st Run @ 1920x1200 and 2nd Run @ 1280x1024
     

    Attached Files:

  11. wobble

    wobble Notebook Evangelist

    Reputations:
    68
    Messages:
    340
    Likes Received:
    0
    Trophy Points:
    30
    Nice score at 1920x1200!

    Just for fun... I'll bet a penny that the dual beats the quad by 6-7% in SLI. :)

    Thanks in advance for doing the tests.
     
  12. duane123

    duane123 Notebook Consultant

    Reputations:
    72
    Messages:
    233
    Likes Received:
    0
    Trophy Points:
    30
    Man, those driver updates are coming every week! I'll have to give the new ones a try. I haven't updated since 174.31 because I've been having pretty good luck with them, but now I've skipped quite a few revisions so I guess it's time to give it a shot.
     
  13. Justin@XoticPC

    Justin@XoticPC Company Representative

    Reputations:
    4,191
    Messages:
    3,307
    Likes Received:
    0
    Trophy Points:
    105
    Here is Crysis, everything high 1920x1200 :D
     

    Attached Files:

  14. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205
    Justin, how long has the version of Vista you're running these benchmarks on been installed, and how long have you been using it? From what I've gleaned, Vista takes a while to get broken in and optimize its prefetch and I/O optimization routines, and until it gets broken in, its performance will suffer.
     
  15. Justin@XoticPC

    Justin@XoticPC Company Representative

    Reputations:
    4,191
    Messages:
    3,307
    Likes Received:
    0
    Trophy Points:
    105
    These tests were all completed almost immediately after the OS was installed and updated. I think there would be little variation if tested after it was broken in or used for a little while. :)
     
  16. eleron911

    eleron911 HighSpeedFreak

    Reputations:
    3,886
    Messages:
    11,104
    Likes Received:
    7
    Trophy Points:
    456
    Any chance for some XP results, Justin? :D
     
  17. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205
    Well, both of the new mechanisms used - I/O prioritization (prioritizing I/O to disks, etc.) and the enhanced prefetch, now known as SuperFetch - can affect system performance until they've developed a sense of the user's usage habits. For a short explanation of SuperFetch, plus a free utility that looks interesting (I haven't tried it yet) and will (the author claims) permit one to monitor SuperFetch activity, see this webpage.
     
  18. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    I agree XP because Vista sux.
     
  19. wobble

    wobble Notebook Evangelist

    Reputations:
    68
    Messages:
    340
    Likes Received:
    0
    Trophy Points:
    30
    Sorry if I missed it, but is that the quad?

    If it is, it's a step in the right direction, because the minimum FPS numbers are higher than what Swiftnc got, even though he ran at a lower resolution.

    I'd love to see the dual and the quad compared (remember, you can make a penny :) ), for a few of us do have the dual.
     
  20. Magnus72

    Magnus72 Notebook Virtuoso

    Reputations:
    1,136
    Messages:
    2,903
    Likes Received:
    0
    Trophy Points:
    55
    Nice results, Justin. If we compare against my benchmark at the same resolution (no tweaks, though with my GPUs overclocked and DX10 high), it looks like this:

    !TimeDemo Run 3 Finished.
    Play Time: 55.32s, Average FPS: 36.15
    Min FPS: 9.15 at frame 144, Max FPS: 47.40 at frame 86
    Average Tri/Sec: -23192520, Tri/Frame: -641478
    Recorded/Played Tris ratio: -1.43

    Your min is higher than mine, though my max is higher, most likely due to the overclock on my GPUs. It would be nice to see some overclocking of the GPUs on this Clevo beast and see what the score would be then; it should probably beat my machine at least :)
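
    As more of these timedemo dumps pile up in the thread, a small script makes the numbers easier to compare. A minimal sketch in Python, assuming the exact output format pasted above (adjust the patterns if your build's wording differs):

    import re

    # Extract Average/Min/Max FPS from a pasted Crysis timedemo dump.
    LOG = """\
    !TimeDemo Run 3 Finished.
    Play Time: 55.32s, Average FPS: 36.15
    Min FPS: 9.15 at frame 144, Max FPS: 47.40 at frame 86
    """

    avg = re.search(r"Average FPS:\s*([\d.]+)", LOG)
    mm = re.search(r"Min FPS:\s*([\d.]+).*Max FPS:\s*([\d.]+)", LOG)
    print("avg:", float(avg.group(1)),
          "min:", float(mm.group(1)),
          "max:", float(mm.group(2)))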
     
  21. Kathman

    Kathman Notebook Enthusiast

    Reputations:
    8
    Messages:
    29
    Likes Received:
    0
    Trophy Points:
    15
    Thanks guys,

    I had uninstalled the drivers correctly & used an extra prog to delete the drivers, & having formatted I would expect no trace of older drivers. I had closed many services running in the background & had no other progs running.

    Well, I will see how Vista optimises itself. DAMN MICROSOFT
     
  22. wobble

    wobble Notebook Evangelist

    Reputations:
    68
    Messages:
    340
    Likes Received:
    0
    Trophy Points:
    30
    There are some overclocking results in this thread:

    http://forum.notebookreview.com/showthread.php?t=236240

    It ain't pretty! :confused:
     
  23. ARGH

    ARGH Notebook Deity

    Reputations:
    391
    Messages:
    1,883
    Likes Received:
    24
    Trophy Points:
    56
    ok so only one person posted results? where is everyone else, scared or something? this crysis benchmark will tell who is the king of the hill. m1730 vs. D900c! all other games and benchmarks do not matter. this is the main reason why crysis is even popular, to benchmark systems. let's go!

    hey dexgo, you stated 35 fps average at wxga res on all high? i fail to see how that is true when justin's sli rig is more powerful and is scoring much lower than you......

    you guys should post a desktop screenshot with the timedemo results like justin has and upload them here, since everyone is allowed to do that. no more simply typing the results. and, make sure your game is STOCK with no mods!
     
  24. eleron911

    eleron911 HighSpeedFreak

    Reputations:
    3,886
    Messages:
    11,104
    Likes Received:
    7
    Trophy Points:
    456
    Justin has not OCed; Dexgo, Magnus and others HAVE OCed.
    That's why you see the differences.
     
  25. ARGH

    ARGH Notebook Deity

    Reputations:
    391
    Messages:
    1,883
    Likes Received:
    24
    Trophy Points:
    56
    OC, sure, but you are still going up against justin's sli rig with a better cpu, so one would easily suspect that justin should be getting much better benchmarks in crysis than dexgo.
     
  26. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    he has SLI, I don't, no contest!

    I am tired of this. Now it's turning into e-peening.

    I started this to see why the M1730 had higher scores.

    that's all.

    not VS VS VS.
     
  27. ARGH

    ARGH Notebook Deity

    Reputations:
    391
    Messages:
    1,883
    Likes Received:
    24
    Trophy Points:
    56
    you claimed higher frame rates than what he posted...there is obviously a large discrepancy here. his 28 fps to your 35 fps is about a 25% performance difference.

    maybe justin can run the benchmark at wsxga+ res (to match yours) and see if the gap can be closed.
     
  28. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    TimeDemo Play Started , (Total Frames: 2000, Recorded Time: 111.86s)
    !TimeDemo Run 0 Finished.
    Play Time: 94.94s, Average FPS: 21.07
    Min FPS: 12.84 at frame 150, Max FPS: 25.15 at frame 1773
    Average Tri/Sec: -19077660, Tri/Frame: -905582
    Recorded/Played Tris ratio: -1.01
    Press any key to continue . . .

    this is what I get at 1920x1200, all high, in the benchmark tool
     
  29. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    TimeDemo Play Started , (Total Frames: 2000, Recorded Time: 111.86s)
    !TimeDemo Run 0 Finished.
    Play Time: 68.76s, Average FPS: 29.09
    Min FPS: 11.38 at frame 144, Max FPS: 36.42 at frame 1006
    Average Tri/Sec: -26027616, Tri/Frame: -894873
    Recorded/Played Tris ratio: -1.02
    Press any key to continue . . .

    here 1680x1050
    If I let it run 2+ runs, it goes up to a max of 32-33 fps avg.

    I was using fraps before for averages.

    this is all high no tweaks

    still higher than justin with 2 cards??
     
  30. ARGH

    ARGH Notebook Deity

    Reputations:
    391
    Messages:
    1,883
    Likes Received:
    24
    Trophy Points:
    56
    curious to see justin's wsxga+ run.

    what do you get at 1280x1024? (or whatever the 16x10 widescreen aspect is that resembles the 720p of an xbox)
     
  31. Magnus72

    Magnus72 Notebook Virtuoso

    Reputations:
    1,136
    Messages:
    2,903
    Likes Received:
    0
    Trophy Points:
    55
    Also, dexgo ran at DX9 on XP I think, while Justin ran at DX10. Dexgo's score is right for the 1920x1200 res. There's lots you can do to increase performance, first off overclocking the GPUs, and especially the shader clock. Then of course everyone's OS is a little different, which can also affect the scores.
     
  32. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    Right, but he wasn't running DX10 very high settings?
     
  33. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    !TimeDemo Run 1 Finished.
    Play Time: 53.77s, Average FPS: 37.20
    Min FPS: 18.81 at frame 138, Max FPS: 44.60 at frame 76
    Average Tri/Sec: 39300764, Tri/Frame: 1056520
    Recorded/Played Tris ratio: 0.87
    Press any key to continue . . .
     
  34. eleron911

    eleron911 HighSpeedFreak

    Reputations:
    3,886
    Messages:
    11,104
    Likes Received:
    7
    Trophy Points:
    456
    That's not much of a difference between WSXGA+ and WXGA. Weird.
     
  35. Magnus72

    Magnus72 Notebook Virtuoso

    Reputations:
    1,136
    Messages:
    2,903
    Likes Received:
    0
    Trophy Points:
    55
    Well, it's easy to understand: the 8800M GTX is bottlenecked by the CPU at that low res; it doesn't matter if it is a quad. Same with the desktop 8800GTXs. I know my own desktop GTX is bottlenecked by the E6600 I have, though not anymore since I have overclocked it. But at higher res it is the GPU that has to do more work.
     
  36. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    if we could overclock the processors in the d901c it would be better.

    an e6850 might be better for me for gaming, or an e8400
     
  37. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205
    Eleron Warning (sorta like a Miranda Warning:D ): TrafficCone2.jpg TrafficCone2.jpg TrafficCone2.jpg

    EDIT: Ok, Ok, I know I've posted another rambling monstrosity. :D Instead of trying to rewrite the entire thing (even Hercules couldn't have accomplished that task, I'm afraid), I'll just try to give the Cook's Tour version here:

    Begin Cook's Tour version:

    Basically, because the current Quad-core Intel CPUs are really two Dual-core dies stapled together, the processors on Die1 do not share cache with the processors on Die2, and thus communication between the cache on Die1 and the cache on Die2 must take place over the shared-memory bus that all four processors share. As a consequence, cache coherency functions, including even cache copy invalidation, take bus time in addition to the bus time that is normally taken to reread an invalidated cache copy from main memory.

    As a result, on a per-processor basis, the current Quad-core CPUs devote a lot more shared-memory bus time to cache coherency tasks than the Dual-core processors do. Since each bus access is exclusive, and since the bus is slower than intra-cache access (in the case of shared cache), this all adds up to a much greater performance hit from cache coherence than you would expect based solely on the Dual-core CPUs.

    In addition, each time the cache coherency function accesses the bus, even if only to invalidate multiple cache copies, access to the shared-memory bus is denied to any other function that might need it, including any access that a GPU or a streaming audio application might want to make.

    The upshot is that the performance of the two-die architecture of the current Quad-core processors is significantly below where it should be, as a result of the cache coherency mechanisms used by the CPU.

    EDIT: End of Cook's Tour version.

    I think I might have an idea why some folks have noticed that the Quad-core CPUs are underperforming the Dual-core CPUs being tested in these postings - performance degradation due to the overhead required to maintain cache coherency across the cores and, in particular, the overhead associated with maintaining cache coherency between the two pairs of processors out of which the current Quad-core is built.

    Without getting into too many of the details (most of which I haven't mastered yet in any event), in any multithreaded or multiprocessor system, there are frequently multiple copies of data in cache that have to be kept consistent so that, e.g., two threads that are supposed to be accessing the same data item actually do so - if processA and processB both need to access Data1, and thus there are two copies of Data1 in cache, one each for processA and processB, each time one process accesses Data1, the copy kept in cache for the other process must be updated (or invalidated, so that the second process suffers a cache-miss and must read a fresh copy from memory).

    For example, suppose that Data1 starts with the value 10. If both processes have started, then there is a copy of Data1=10 in cache for ProcessA and another copy of Data1=10 in cache for ProcessB. If ProcessA then changes the value of Data1 to 15, the ProcessorA cache copy of Data1 is now Data1=15. If cache coherency is not maintained, however, the cache copy of Data1 in ProcessorB cache is still Data1=10, which means that if ProcessorB then performs a computation that depends on Data1, that computation will be in error without cache coherence.
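
    To make the invalidation mechanics concrete, here is a toy Python model of the write-invalidate behaviour just described. It is purely illustrative (real MESI-style protocols are far more involved) and every name in it is made up for the example:

    memory = {"Data1": 10}
    cache = {"A": dict(memory), "B": dict(memory)}  # both start with Data1=10

    def write(proc, key, value):
        cache[proc][key] = value
        memory[key] = value                  # write-through, for simplicity
        for other in cache:                  # invalidate every other copy
            if other != proc:
                cache[other].pop(key, None)

    def read(proc, key):
        if key not in cache[proc]:           # cache miss -> costs bus time
            print(proc, "missed on", key, "- rereading from memory")
            cache[proc][key] = memory[key]
        return cache[proc][key]

    write("A", "Data1", 15)                  # A writes; B's copy is invalidated
    print("B reads:", read("B", "Data1"))    # B misses, then sees 15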

    However, maintaining cache coherency takes up cycles and, if applicable, time on the memory bus the processors involved share - this is particularly onerous in the case of current Intel CPU designs because use of the bus by one processor excludes the other processors. Thus, if cache coherency requires use of the shared-memory bus, the system will suffer a hard performance hit as other necessary access to the bus is blocked for the duration of the coherency transaction.

    Typically, the shared-memory bus is occupied for cache coherency purposes only after a cache copy has been invalidated and the processor accessing that cache copy has a cache miss and has to reread the invalidated data copy from memory. This causes a performance hit because (a) the bus is slow relative to the CPU, (b) it may have to pass a big data chunk, and (c) the bus is not available for anything else.

    In the case of the Dual-core processors, cache coherency only monopolizes the bus when an invalidated copy is subsequently accessed by its owner processor, and thus has to be read back from memory. The invalidation process itself, however, takes place within cache because the two cores in a Dual-core Intel CPU share all of the L2 cache.

    By contrast, in the case of the current Quad-core processors, cache coherency monopolizes the bus, not only when an invalidated copy must be reread from memory, but also when the invalidation process itself takes place. The reason for this is that the current Quad-core processors are not true quad-core processors, but two dual-core processor dies stapled together so that they appear to be a quad-core processor. However, L2 cache continues to be shared only by the two cores on each die, and thus communication between the cache on die 1 and the cache on die 2 must necessarily pass across the shared-memory bus.

    To see this, I've attached a screen copy of Figure 2-9 on page 2-44 of the Intel 64 and IA-32 Architectures Optimization Manual, Order No. 248966-026, November 2007.

    What this means is that each time a cache coherency invalidation task takes place, the Dual-core CPU takes care of it within cache, and doesn't tie the shared-memory bus up until an invalidated copy has to be subsequently refreshed. On the other hand, each invalidation task on a current Quad-core necessarily ties up the shared memory bus both when the invalidation is done as well as when the invalidated copy is later refreshed.

    Now, obviously, a certain amount of bus-use for cache coherency is necessary and contributes to the overall performance of the quad-core; however, there are other multi-core logistical functions that can lead to so-called false-sharing, in which a data item in cache appears to be a shared item, the coherency of which must be maintained, even though in reality the item is not shared. One of the chief trouble-makers in this instance is so-called thread migration - the process whereby the scheduler moves a thread from one processor to another, frequently for load-balancing purposes. In that case, not only is the thread itself moved, but its cache is duplicated in the cache available to the other processor to which the thread has been moved. Until the original in the cache of the original processor is removed, the system will see two copies of a data item that need to be kept consistent with each other, and cache coherency overhead will be incurred to maintain that consistency even though it isn't needed because the data item really is not a shared data item.

    This process is bad enough when it occurs on a Dual-core processor, or between the two processors on the same die of a Quad-core processor; however, it becomes a horrendous drag on performance when a thread is moved from a processor on one die to another processor on the other die. In that case, the effort expended to maintain the consistency of the false-shared data item necessarily results in monopolization of the shared-memory bus during the time that process takes.

    Thus, at bottom, one of the sources of the problems everyone's been having with benching the quad-core NP9262 against the M1730 with a dual-core (or even, as Justin did, benching an NP9262 against itself with a quad-core and with a dual-core) is most likely the fact that the benchmark program is not policing itself carefully enough in terms of avoiding unnecessary migration across dies (as opposed to processor cores), and probably also the fact that Windows (as far as I know) does not recognize the difference between cores and dies and, in pursuit of its overall load-balancing service, is willy-nilly moving threads across dies in pursuit of performance gains that are, in fact, illusory, because the performance hit taken by the concomitant cross-die cache coherency is greater than the performance gain obtained by moving the thread in the first place.

    This problem may also explain some of the problems with the 8800M in real-world situations such as running graphic-intensive games or even other heavy-duty I/O tasks such as audio streaming. Problems would arise to the extent that the GPU (or the audio player in the case of audio streaming) had to access main system memory - each cross-die cache coherency task will block the memory bus, not only for the CPU cores, but also for any other function that needs access to memory.

    I don't know to what extent this problem can be mitigated on the current quad-core processors. At the level of overall load-balancing, only changes to Windows itself would permit the scheduler to distinguish between cores and dies, and to take into account the performance loss arising from cross-die thread migration when balancing the then-applicable load. However, even at the application/driver level, and even if Windows' load balancing cannot be controlled, the losses due to cross-die migration and cache coherency could be reduced by, e.g., assigning a forced affinity to each thread created, keeping each thread from crossing from one die to the other (to the extent possible, obviously) - see the rough sketch below. That might also entail limiting the extent to which driver workload is offloaded to system worker threads instead of being completed on a thread owned by the driver process(es) itself.
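
    As a rough, process-wide approximation of that forced-affinity idea, here is a sketch using the third-party psutil package (my own assumption for illustration, not anything the drivers in this thread actually do); it also assumes, platform-dependently, that logical CPUs 0 and 1 are the two cores of one die, which you should check against your machine's topology:

    import psutil

    # Pin this process to two cores so the scheduler cannot migrate its
    # threads across dies (assuming CPUs 0 and 1 share a die).
    p = psutil.Process()
    print("before:", p.cpu_affinity())   # e.g. [0, 1, 2, 3] on a quad
    p.cpu_affinity([0, 1])               # restrict to one die's two cores
    print("after:", p.cpu_affinity())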

    This also suggests that the NVidia unified driver system is inadequate to the task in this case because the unified driver that suffices for the dual-core systems is, at least right now, insufficient for purposes of running on a current quad-core system.

    Unfortunately, I doubt this problem (if it in fact exists) will really get addressed by Intel, Microsoft, or NVidia, given that Intel will, sooner rather than later, be releasing native quad-core processors to replace the current ersatz 2x2 "quad" cores.

    Fig. 2-9 Core 2 Duo/Core 2 Quad Architecture Diagram:
    Intel-C2D-Q_Arch.JPG
     
  38. wobble

    wobble Notebook Evangelist

    Reputations:
    68
    Messages:
    340
    Likes Received:
    0
    Trophy Points:
    30
    Wow!

    I really like your ideas, Shyster. I don't have any idea if this one is true, but it doesn't matter; it's really nifty. :)
     
  39. ARGH

    ARGH Notebook Deity

    Reputations:
    391
    Messages:
    1,883
    Likes Received:
    24
    Trophy Points:
    56
    shyster me eyes are bleeding, aaahhhh!!!
     
  40. DFTrance

    DFTrance Notebook Deity

    Reputations:
    317
    Messages:
    754
    Likes Received:
    0
    Trophy Points:
    30
    I know that I write long posts, but you are a marathon writer, Shyster. I will print it out and read it on the train tomorrow.

    +1 for a well-formed technical explanation of one's point of view. But I need to spread points around before giving them to you again :(

    Stay cool,

    Trance.
     
  41. dtwn

    dtwn C'thulhu fhtagn

    Reputations:
    2,431
    Messages:
    7,996
    Likes Received:
    4
    Trophy Points:
    206

    Deleted your main post, as quoting it would make this unbearably long.
    This is the area my question is about. Am I correct to say that you're theorizing here and not actually certain? I think you make some valid points, but I'm just checking.

    Great job, by the way.
     
  42. eleron911

    eleron911 HighSpeedFreak

    Reputations:
    3,886
    Messages:
    11,104
    Likes Received:
    7
    Trophy Points:
    456
    @Shyster1, I dunno wth you just wrote there, but I had to scroll 5 min to get to the end of it.
    I'll give you that much, you are a man of patience.
    Must call you Dance of Patience, if only I knew Native American...
    I am at a loss though: why is this a fiasco in the first place? It's not like the difference is that huge.

    Thanks for the cones,my brakes work fine now :D

    Found it: Your name is EwGsf (chaw la chee sss gee) of "Sorry, That word was not found in the dictionary." :twitcy:
     
  43. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    Well the fiasco is OVER... KAPUT!! FINITO!

    now that I've solved the d901c overclocking problem.

    we are complete now.
     
  44. pasoleatis

    pasoleatis Notebook Deity

    Reputations:
    59
    Messages:
    948
    Likes Received:
    0
    Trophy Points:
    30
    So the processor is holding back the 3DMark06 score of the video card? I thought that in games it did not matter.
     
  45. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    In 3dmark it clearly made a diff in the sm2.0 scores.

    as you've seen.
     
  46. Aryantes

    Aryantes Notebook Evangelist NBR Reviewer

    Reputations:
    445
    Messages:
    336
    Likes Received:
    3
    Trophy Points:
    31
    This was also shown with Justin's run using the E8400 processor; interesting, isn't it?

    Can't wait to see SLI performance with OC CPU and OC GPU.

    I would do it, but i don't have the guts :(
     
  47. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    c'mon, do it bud.

    FUN :D

    My temps are well within range.

    67 at max with TAT, and even playing Crysis it only goes to 56 degrees.
     
  48. pasoleatis

    pasoleatis Notebook Deity

    Reputations:
    59
    Messages:
    948
    Likes Received:
    0
    Trophy Points:
    30
    How much is changing from 2.4 GHz to 3 GHz going to affect gaming performance? Are you going to be able to get more FPS at the same settings?
     
  49. dexgo

    dexgo Freedom Fighter

    Reputations:
    320
    Messages:
    1,371
    Likes Received:
    2
    Trophy Points:
    56
    YES!

    I was bottlenecking in some game tests,

    especially at lower resolutions.

    I also changed the FSB to 1333 MHz,

    so essentially I have a wicked processor now, eh? = a QX9650 at stock.
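
    A quick back-of-envelope check of that claim (assuming the 9x multiplier common to 2.4 GHz quads of this era, e.g. a Q6600, on a quad-pumped bus):

    # Core clock = multiplier x base clock; "FSB 1066/1333" is the
    # quad-pumped figure, so the base clock is a quarter of it.
    multiplier = 9  # assumed Q6600-style multiplier
    for fsb_effective in (1066, 1333):
        base_mhz = fsb_effective / 4
        print(f"FSB {fsb_effective}: {multiplier * base_mhz / 1000:.2f} GHz")

    That works out to 2.40 GHz and 3.00 GHz respectively, and 3.0 GHz is indeed the QX9650's stock clock, which is presumably what the "= a QX9650 at stock" remark means.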
     
  50. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205
    Probably another question out of sheer ignorance (or lack of ability to keep up with all your posts :D ), but have you tried overclocking just the FSB to see what sort of performance gains that alone nets?
     