The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled together by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    This is just for the experts

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by conejeitor, Nov 21, 2006.

  1. conejeitor

    conejeitor Notebook Evangelist

    Reputations:
    28
    Messages:
    652
    Likes Received:
    6
    Trophy Points:
    31
    So, basically I run a statistical software package that has to run several algorithms to get a result. It usually takes 3-5 minutes to finish.
    Now, when I run it, my CPU is working at 55% and my RAM at 45%.
    So where is the bottleneck? I mean, if nothing is working at maximum, why does it take so long to complete?
    Thanks for the help.
     
  2. Pitabred

    Pitabred Linux geek con rat flail!

    Reputations:
    3,300
    Messages:
    7,115
    Likes Received:
    3
    Trophy Points:
    206
    How do you know your RAM is running at 45%? Capacity, or bandwidth? How much data are you processing? If you're reading it off the hard disk, that would almost certainly be your bottleneck. It depends on your software as well, how it caches data, how well it's programmed. Does it load everything up all at once, does it do periodic file access, what? All kinds of things can cause the bottleneck.

    Also, what other processes are running... any graphical programs? A screen saver kicking in (blank screen is fine)? Funny drivers, such as a printer monitor, or sound drivers?
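
    A rough way to check where the time is going is to sample the machine while the job runs. This is just a minimal sketch, assuming Python with the third-party psutil library (not something mentioned in the thread): it compares how much of the sample interval the CPU spent busy against how much data was read from disk.

        import time
        import psutil

        cpu_before = psutil.cpu_times()          # cumulative CPU times since boot
        io_before = psutil.disk_io_counters()    # cumulative disk counters since boot
        time.sleep(10)                           # sample for 10 seconds while the job runs
        cpu_after = psutil.cpu_times()
        io_after = psutil.disk_io_counters()

        busy = (cpu_after.user - cpu_before.user) + (cpu_after.system - cpu_before.system)
        idle = cpu_after.idle - cpu_before.idle
        print("CPU busy share: %.0f%%" % (100 * busy / (busy + idle)))
        print("Read from disk: %.1f MB" % ((io_after.read_bytes - io_before.read_bytes) / 1e6))

    If the disk figure is large while the CPU share stays low, the hard disk is the likelier bottleneck.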
     
  3. bgeiger

    bgeiger Notebook Enthusiast

    Reputations:
    0
    Messages:
    37
    Likes Received:
    0
    Trophy Points:
    15
    Try this:
    - Start your software.
    - Press Ctrl+Alt+Del to bring up Task Manager.
    - Go to Processes.
    - Highlight the process that corresponds to your software.
    - Right-click on that process.
    - Go to "Set Priority" and set it to High.
    - Close out of Task Manager and run the program again. Might be faster now... (a scripted version of the same idea is sketched just below).
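
    For what it's worth, the same steps can be scripted. A minimal sketch, assuming Python with the third-party psutil library and that the statistics program shows up as "mystats.exe" (a placeholder name, not from the thread):

        import psutil

        # Find the process by name and raise its priority.
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == "mystats.exe":     # placeholder executable name
                proc.nice(psutil.HIGH_PRIORITY_CLASS)  # Windows-only priority constant
                print("Raised priority of PID", proc.pid)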
     
  4. conejeitor

    conejeitor Notebook Evangelist

    Reputations:
    28
    Messages:
    652
    Likes Received:
    6
    Trophy Points:
    31
    Thanks, I'll try that bgeiger.
    Pitabread, my used RAM is at 45%. The HDD is barely working (the data set isn't large, but the analysis is tricky). Also, not many other programs are running. Still, wouldn't that be reflected in my CPU load? And whether the program loads everything at once or in parts, that would still show up in the CPU load, right? (which is at 55%)
    Thanks a lot for your answers.
     
  5. Pitabred

    Pitabred Linux geek con rat flail!

    Reputations:
    3,300
    Messages:
    7,115
    Likes Received:
    3
    Trophy Points:
    206
    Well, it could be that the CPU is just thrashing on the cache and getting a lot of stalls in the pipeline due to the way the software calculates the data. That will lower your CPU's usage percentage, but it doesn't mean the CPU isn't working. It just realizes that it branched wrong or something, and has to go back, flush out some data, and try the calculation along a different path, which incurs a performance penalty.

    In all likelihood, it's badly programmed (or just horribly complex and calculation prediction error-prone) statistical software that is preventing you from getting the "full" performance of your CPU.

    And there's only one 'a' in my nick ;)
     
  6. chrisyano

    chrisyano Hall Monitor NBR Reviewer

    Reputations:
    956
    Messages:
    5,504
    Likes Received:
    0
    Trophy Points:
    205
    Also sounds like it's a single-threaded application you're running--so that could account for 1 core being used to the max (i.e. 50%) rather than both.
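
    This is easy to demonstrate. A minimal sketch, assuming Python with the third-party psutil library (not part of the thread): start one purely CPU-bound thread and watch the per-core load. On a dual-core machine one core should sit near 100% while the other stays low, even though the overall figure reads about 50%.

        import threading
        import psutil

        def spin():
            # Single-threaded, purely CPU-bound busy loop.
            while True:
                pass

        threading.Thread(target=spin, daemon=True).start()
        for _ in range(5):
            # Per-core usage, e.g. [99.0, 6.0] on a dual-core machine.
            print(psutil.cpu_percent(interval=1, percpu=True))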
     
  7. JadedRaverLA

    JadedRaverLA Notebook Deity

    Reputations:
    273
    Messages:
    724
    Likes Received:
    0
    Trophy Points:
    30
    Exactly. Assuming you have a dual-core processor, you are more than likely maxing out one core with that program and barely using the other for the other background tasks that are running. If you open up task manager and watch the CPU usage history graph, one core should be maxed out while the calculation is running, the other barely used.

    If you don't have a dual-core processor then there isn't any really good explanation aside from poor programming in the statistics software.
     
  8. Pitabred

    Pitabred Linux geek con rat flail!

    Reputations:
    3,300
    Messages:
    7,115
    Likes Received:
    3
    Trophy Points:
    206
    Good call. I didn't even think to ask if he was running a dual-core processor, or what he was using to gauge the usage. If you hit ctrl+alt+del and look under the performance tab of the task manager, you should see two CPU graphs. If one of them is at 100% and the other is very low usage when you're running your statistical application, then chrisyano hit the nail on the head. Teaches me to overthink things ;)
     
  9. Gator

    Gator Go Gators!

    Reputations:
    890
    Messages:
    1,889
    Likes Received:
    0
    Trophy Points:
    55
    Sounds like you have a single core CPU from your description. Your software is causing a lot of cache misses, making your CPU wait while instructions are fetched from main memory.
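
    The cost of cache misses is visible even from a high-level language. A minimal, illustrative sketch, assuming Python with numpy (neither is mentioned in the thread); in C or C++ the gap is usually even larger:

        import time
        import numpy as np

        a = np.zeros((4000, 4000))        # ~128 MB, stored row-major (C order)

        t = time.perf_counter()
        for i in range(4000):
            a[i, :].sum()                 # walks contiguous memory: cache-friendly
        print("rows: %.2fs" % (time.perf_counter() - t))

        t = time.perf_counter()
        for j in range(4000):
            a[:, j].sum()                 # strides across rows: many more cache misses
        print("cols: %.2fs" % (time.perf_counter() - t))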
     
  10. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    There are many more potential bottlenecks than CPU and RAM. More particularly, what you measured was only how much RAM was consumed; a more important bottleneck might be the bandwidth. The CPU can only pull so much data from RAM every second, so that can easily become a bottleneck in many cases.

    About the CPU stalls suggested above (cache misses, branch mispredicts), a lot of those would still register as CPU activity. That is, the OS would probably not be able to swap in another thread while waiting for the pipeline to flush. It might be able to do so while waiting for data from RAM (on cache misses), but I'm sceptical about that as well.

    Still, there are countless things that could limit performance in your case. Even a small number of hard drive accesses can cause much lower CPU usage (a single HD read might take up to 10 milliseconds to complete, during which the program is unable to do anything, so the CPU will likely idle).
    Now if we assume just 10 random HD accesses per second, we've already reached 100ms wasted (or 10% CPU time). Granted, the actual time is often quite a bit lower than 10ms, but my point is that even very few HD accesses can have a huge impact.
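
    One quick way to see how much of a run is spent waiting rather than computing is to compare wall-clock time with CPU time. A minimal sketch in Python, where do_analysis() is only a stand-in for the real workload (nothing from the thread):

        import time

        def do_analysis():
            # Placeholder workload: some CPU work plus a pause standing in for disk/OS waits.
            sum(i * i for i in range(10**7))
            time.sleep(1.0)

        wall0, cpu0 = time.perf_counter(), time.process_time()
        do_analysis()
        wall = time.perf_counter() - wall0
        cpu = time.process_time() - cpu0
        print("wall %.1fs, cpu %.1fs, waiting ~%.1fs" % (wall, cpu, max(wall - cpu, 0.0)))

    A large gap between the two numbers means the process spent much of its time blocked on something other than the CPU.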

    OS limitations might also be at play. Some calls to the OS may block the process, causing it to suspend until... well, until the OS decides to resume the process. Depending on what exactly your program needs, that may or may not have an effect.

    Badly programmed multithreaded software might also cause artificial pauses that end up lowering the CPU usage.

    But yeah, what the people above said still stands. If you have a dualcore system (or P4 with hyperthreading?), then one core at 100% will show as 50% CPU usage. And Pitabred has quite a few good suggestions as well :)

    What software is it? Anything we might know? Would make it easier to guess what could cause it :)
     
  11. conejeitor

    conejeitor Notebook Evangelist

    Reputations:
    28
    Messages:
    652
    Likes Received:
    6
    Trophy Points:
    31
    You guys are geniuses. I checked the Task Manager, and actually one of the cores is at 90% most of the time while the other is at about 10%. I guess the missing 10% on the first core is down to Pitabred's reasoning about the programming issues.
    The program (in case you want to know) calculates phylogenetic trees (!?), so basically it compares multiple sequences and groups them by similarity.
    Thanks a lot guys.
     
  12. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    *Pretends to know what a phylogenetic tree is* :D
     
  13. conejeitor

    conejeitor Notebook Evangelist

    Reputations:
    28
    Messages:
    652
    Likes Received:
    6
    Trophy Points:
    31
    Something pretty interesting, just to check how good dual-core processors are:
    I ran Super Pi, and for 1M I got 33 seconds, and as expected, only one of the cores was doing most of the work. Then I ran that "phylogenetic program" at the same time as Super Pi. Both cores were working flat out, and I got the 1M digits in only 34 seconds (just 1 second slower).
    Pretty impressive how the work gets distributed.
    Thanks again.
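
    The same experiment can be reproduced without Super Pi. A minimal sketch, assuming Python (not something used in the thread): run one purely CPU-bound job, then two at once; on a dual-core CPU the two-job run should take roughly the same wall time as the single job.

        import time
        from multiprocessing import Process

        def burn():
            # Purely CPU-bound work.
            sum(i * i for i in range(20 * 1000 * 1000))

        def run(n):
            t = time.perf_counter()
            procs = [Process(target=burn) for _ in range(n)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            return time.perf_counter() - t

        if __name__ == "__main__":
            print("1 job: %.1fs   2 jobs: %.1fs" % (run(1), run(2)))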
     
  14. conejeitor

    conejeitor Notebook Evangelist

    Reputations:
    28
    Messages:
    652
    Likes Received:
    6
    Trophy Points:
    31
    Now, this is a related question, so I'll put it here to save a thread:
    Super Pi takes 10 times longer (5 minutes!) to finish when the PC is on battery. I know that the processor is clocked down so it won't heat up as much. However, the actual processor activity doesn't change when I check the Task Manager; the load is still the same, around 50%.
    So what is really being lowered when the laptop is on battery power? It clearly affects performance a lot, but why doesn't it affect the processor load?
    Thanks again.
     
  15. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    Well, your CPU's clock speed is lowered. 50% of 1GHz is a lot less than 50% of 2GHz. In both cases, Task manager will show 50% CPU usage, but at a lower clock speed, less work will get done.

    What Task Manager tells you is just how large a percentage of the time the CPU was busy. (50% on a dual-core system means that one core was busy 100% of the time.) But it doesn't say anything about how fast the CPU worked, just that it was busy. And obviously, a slow worker is just as good at being busy as a fast one. (In fact, the slow worker will be busy for even longer, since, well, he works slower.)

    (Plus it might be able to enter a sleep state where it can still get work done, but just does so much less efficiently. The Pentium 4 did something like this when it overheated. It didn't lower its clock speed or idle, it just switched off the CPU at brief intervals. So the CPU was always running at the "original" clock speed, and would even show 100% activity in Task Manager, because it never ran idle. It just got much less work done.)
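
    You can watch both numbers at once. A minimal sketch, assuming Python with the third-party psutil library (not from the thread): unplug the laptop mid-run and the reported clock speed should drop while the busy percentage stays roughly where it was.

        import psutil

        for _ in range(10):
            load = psutil.cpu_percent(interval=1)
            freq = psutil.cpu_freq()          # may be None on systems that don't report it
            if freq:
                print("load %5.1f%%   clock %6.0f MHz" % (load, freq.current))
            else:
                print("load %5.1f%%   (clock speed not reported)" % load)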
     
  16. conejeitor

    conejeitor Notebook Evangelist

    Reputations:
    28
    Messages:
    652
    Likes Received:
    6
    Trophy Points:
    31
    I see, thanks a lot Jalf.
     
  17. dumanator

    dumanator Notebook Enthusiast

    Reputations:
    7
    Messages:
    31
    Likes Received:
    0
    Trophy Points:
    15
    I read through all of this info even though I'm not an expert. As a side note, if you guys don't mind, how did you all become so computer savvy? Is it through career, classes, or simply personal interest?
     
  18. hmmmmm

    hmmmmm Notebook Deity

    Reputations:
    633
    Messages:
    1,203
    Likes Received:
    0
    Trophy Points:
    55
    That's weird.

    Being on battery shouldn't make it that much slower...

    I'm not sure about the Turions, but the Core Duos have SpeedStep, which clocks right back up when CPU-intensive tasks are running, so they run at full speed even on battery.

    There shouldn't be any reason to allow the CPU to run at 100% on AC and only 50% on battery due to heat alone...


    Anyway, if you want the same performance on battery as on AC, I suggest you go to Control Panel -> Performance and Maintenance -> Power Options, and under "Power schemes" set it to "Always On". You could also disable SpeedStep in the BIOS (though I don't recommend that, as SpeedStep saves a lot of power).
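
    The power-scheme step can also be done from the command line. A minimal sketch, assuming a Windows XP era machine where powercfg addresses schemes by name ("Always On", as above); on Vista and later the tool uses GUIDs instead, so this exact call may not apply:

        import subprocess

        # Activate the "Always On" power scheme (scheme name as used on Windows XP).
        subprocess.check_call(["powercfg", "/setactive", "Always On"])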
     
  19. conejeitor

    conejeitor Notebook Evangelist

    Reputations:
    28
    Messages:
    652
    Likes Received:
    6
    Trophy Points:
    31
    I think you are right. I was using NHC to set the CPU behavior on battery power, and the option I picked might have been too harsh.
     
  20. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    Classes (computer science at one of the more theory-heavy universities) and interest.

    And yeah, a 10x slowdown sounds huge, but I don't have a modern notebook to compare or test with myself...
     
  21. conejeitor

    conejeitor Notebook Evangelist

    Reputations:
    28
    Messages:
    652
    Likes Received:
    6
    Trophy Points:
    31
    I tested other options and hmmmmm is right: there are other settings that drop the performance to only half and still don't heat the processor too much. The one I was trying before was "maximum battery".
    I still wonder whether that 10x drop (even though it doesn't affect the temperature much) will save even more power.
     
  22. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    Of course it will. The question is how much more power it'll save. It might not be enough to be noticeable (and it might not be enough to justify a 90% performance loss in particular)

    But yeah, it will consume less power.