The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    A question on Mobo, Graphic Cards and processors?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by True_Sultan, May 29, 2010.

  1. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Hi guys

    I've always wondered what the function of each part is. Like the difference in function between an i7 and a quad core, or the difference between an Nvidia card and an ATI card, and the differences between mobos? Can someone explain the functions to me? Go as deep as you want. Use physics to explain if you want... I wanna know =) :rolleyes:
     
  2. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    First you have to get your terminology straightened out. I suggest using Wikipedia to get a foundation and other sites to compare different parts such as Intel's ARK.

    Intel Core i7s are part of Intel's newest CPU lines, and they can be dual cores or quad cores. If you meant Core 2 Quads when you said "quad cores", then the difference between them and the i7s is one of architecture. What exactly are you looking to learn about these parts?
     
  3. kosti

    kosti Notebook Virtuoso

    Reputations:
    596
    Messages:
    2,162
    Likes Received:
    466
    Trophy Points:
    101
  4. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Thanks for replying, guys. I checked out the sites; it was a review of my grade 11 computer engineering class, so I get the terminology and such. So can you now explain to me the functions, science and engineering behind each and every computer part? Like, I get the whole thing about the CPU containing an ALU and CU and stuff... but what do they do when it comes to, say, scientific analysis or gaming? Does it keep on calculating? How does that data flow in the electric circuit it creates? You know what I mean? And what is the difference between the newer parts and the older (Core 2 Duo/Quad vs. i3/i5/i7)? What's the function and architecture of a GPU and stuff? You know :rolleyes:
     
  5. Judicator

    Judicator Judged and found wanting.

    Reputations:
    1,098
    Messages:
    2,594
    Likes Received:
    19
    Trophy Points:
    56
    Your focus on hardware gives you only half the picture. When you ask what the CPU does when it comes to scientific analysis or gaming, then you really need to know what the software is asking. The hardware only performs as the software asks it to, and depending on how the software is coded, different things can happen.

    As a very general analogy, let's say that we both have the same car, both start from the same place, and are both going to the same destination. The difference is that you decide to take the highway/freeway, while I limit myself to local roads. Note that we both have the same "hardware" (car, starting place, destination, road network), but we took different routes ("software") to get there.

    As for the exact difference between, for example, C2D/Q and the i-series, you'll have to get a job at Intel and probably sign an NDA (non-disclosure agreement). The actual "mechanical" differences in their architectures are Intel's bread and butter; if they were published, "anyone" could make an Intel chip, and Intel would lose money. I believe they have published some white papers on the subject, which you could probably look up and which will give you some of the general data you are looking for, but they probably won't give away the actual design of the architecture.

    Same with the GPUs. If you're just looking for general information on how GPUs work and their function, Wikipedia actually isn't a horrible source (usually) to get a basic idea of what the purpose and general architecture of a GPU is. If you want all the nitty gritty, again, you'll have to apply to NVidia, ATI, or Intel (or any other GPU designer), and hope that what you're looking for isn't covered under "trade secret".
     
  6. Trottel

    Trottel Notebook Virtuoso

    Reputations:
    828
    Messages:
    2,303
    Likes Received:
    0
    Trophy Points:
    0
    Sounds like you need to enroll in a 4-year degree program in computer engineering, and then go on to work in the design departments of the companies you are asking questions about. There is no simple explanation for anything you are asking, and there are zillions of textbooks and white papers written about this stuff.

    Also, a lot of the things you are asking I have seen answered "in-depth" (as far as internet information goes) on various websites, though it is few and far between. I would not be comfortable answering any of your questions without going back over material I've come across. You just need to do the work yourself finding out this information instead of asking for novels and textbooks to be written for you on an internet forum.

    If you want people to answer your questions, they have to have a narrow scope and be answerable by people who didn't actually design the thing, not "tell me the path an electron flows through a core i7 processor" or some other junk like that.
     
  7. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    I'm really sorry :(

    Is there a site you guys know of that will give me a starting point? Anyway, let's get down to the basics. For laptops, what is the advantage of using, say, an i7 over a Core 2 Quad or i5 when running programs such as MATLAB, 3ds Max, AutoCAD, or data analysis (like scientific data, video/audio analysis, modeling/simulating or research on the unknown :eek: )? I would also ask the same question about gaming; for example, Crysis runs alright on my Core 2 Quad desktop, so how would it differ on a Core 2 Quad laptop or an i7/i5? Also, which brand of graphics card and GPU is better, for example ATI Mobility HD 5870 CrossFire vs. GeForce 9800M GTX SLI? Which card would be better for the above tasks, including just basic office stuff, web browsing and such? Also, which mobo would be better to use in each case? Sorry for the questions, just want to know =)

    Also which laptop would have all these?
     
  8. sean473

    sean473 Notebook Prophet

    Reputations:
    613
    Messages:
    6,705
    Likes Received:
    0
    Trophy Points:
    0
    An i7 quad definitely has an advantage in these programs... as for Crysis and all, it wouldn't run much differently on a laptop with a Core i7 quad because it definitely has more than enough power... as for good GPU laptops, there's the Clevo X8100 with GTX 285M SLI, which was top dog until the Alienware M17x came out with 5870 CrossFire (= 5770 desktop CrossFire)... the cheapest and best-specced gaming laptop out there is no doubt the Asus G73 with an i7 quad, a single 5870, 8GB RAM expandable to 16GB, a full HD screen and 1TB of hard drive space for about 1600...
     
  9. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Okay, so these are the tasks I wanted to do:

    My question is: which hardware will give me the best performance, heat, battery life and such? I know an i7 drains battery life like there is no tomorrow.
     
  10. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    Where did you hear this? If you're just doing light work on your battery, the i7 is a very efficient CPU and will consume less power than similarly specced C2D/C2Q and past processors. Even if you're doing CPU intensive work on battery, it will get things done faster and hence consume less power for a certain task. For your school work, the 820QM would probably be your best bet in terms of power per cost.
     
  11. Lithus

    Lithus NBR Janitor

    Reputations:
    5,504
    Messages:
    9,788
    Likes Received:
    0
    Trophy Points:
    205
    A CPU processes instructions, one at a time (per core). It generally consists of a handful of registers that are able to hold a value, and access to the system memory, which again contains millions or billions of storage places for values. A CPU can then manipulate those values per instruction. A few common instructions are "add", "subtract", and "move", which do pretty much what they sound like they do.

    Memory is just a large collection of gates that are able to remember their value. Memory is constantly accessed by the CPU.

    A video card is a processor specially designed to accomplish certain tasks quickly. It can generally handle many more instructions per second than the CPU; this specialization allows it to run complex mathematical algorithms very quickly.

    A motherboard consists of all the connections required between the various hardware parts. As the CPU, memory, HDD, graphics card, and everything else need to communicate with each other, the motherboard is what allows this to happen.
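
    If it helps, here's a toy sketch in Python of what "processing instructions one at a time" looks like: a pretend machine with a couple of registers, a bit of memory and a short program. It's nothing like how a real CPU is built internally, just the general shape of "fetch an instruction, change a value":

Code:
# Toy model only: a couple of registers, a tiny "RAM", and a short program
# of simple instructions executed one at a time, like a single CPU core.
registers = {"r0": 0, "r1": 0}
memory = [0] * 16  # stand-in for system memory

program = [
    ("move", "r0", 5),     # put the value 5 into register r0
    ("move", "r1", 3),     # put the value 3 into register r1
    ("add",  "r0", "r1"),  # r0 = r0 + r1
    ("store", "r0", 7),    # write r0 out to memory address 7
]

for op, a, b in program:
    if op == "move":
        registers[a] = b
    elif op == "add":
        registers[a] += registers[b]
    elif op == "sub":
        registers[a] -= registers[b]
    elif op == "store":
        memory[b] = registers[a]

print(registers, memory[7])   # {'r0': 8, 'r1': 3} 8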

    The Clarksfield i7s do indeed consume a lot of power. The Arrandale ones should be better since they're 32nm.
    http://www.tomshardware.com/reviews/mobile-core-i7,2443-12.html
    http://blog.laptopmag.com/hands-on-with-intels-blazing-core-i7-itll-rip-your-eyelids-off
     
  12. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    But the clock speed is nowhere near 2.0GHz :(

    Anyway, what's with this hexacore I keep hearing about?
     
  13. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    Clock speed isn't really a factor in most engineering and heavy CPU software, as they are optimized for multiple cores. The quad core i7s have 4 cores plus 4 additional threads (hyperthreading). There won't be hexacores in laptops for at least a year or more (unless you're purchasing a notebook that utilizes desktop CPUs).
     
  14. lackofcheese

    lackofcheese Notebook Virtuoso

    Reputations:
    464
    Messages:
    2,897
    Likes Received:
    0
    Trophy Points:
    55
    That's not true; clock speed is definitely a factor, and quite a big one at that, it's just that the number of cores is often a bigger factor.

    When you compare, say, the i7-720QM and the i7-620M, the i7-720QM theoretically has around 13% more raw computational power, because in terms of raw power 1.73GHz on 4 cores is equal to 3.46GHz on 2 cores, while the i7-620M can only do 3.067GHz on two cores. However, as far as raw power is concerned, it's harder to make good use of extra cores than extra clock speed, and so many multithreaded benchmarks will show very similar performance for these two CPUs.

    Obviously, such a "raw power" calculation is only a rough estimate, but as long as you're looking at CPUs with pretty much the same overall architecture (in this case two Nehalems) it's not a bad way of looking at things.
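
    If you want to see where the ~13% comes from, the back-of-the-envelope arithmetic is just cores times clock speed. This deliberately ignores Turbo Boost, hyperthreading and memory, so treat it as a rough sketch only:

Code:
# Rough "raw power" estimate: cores x clock. Ignores Turbo Boost,
# hyperthreading and memory effects entirely.
def raw_power(cores, ghz):
    return cores * ghz

i7_720qm = raw_power(4, 1.73)    # 6.92 "GHz-cores"
i7_620m  = raw_power(2, 3.067)   # ~6.13 "GHz-cores"

print(f"{(i7_720qm / i7_620m - 1) * 100:.0f}% more raw power")   # ~13%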
     
  15. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Isn't an Asus G73 or a Sager a desktop replacement?

    Yeah, that makes sense; in a raw computational power sense, more clock speed and more cores equate to more power. So pretty much, lackofcheese, you're saying that the extra power doesn't matter as we don't have the software to utilize it? I think for the consumer that's true; I'm pretty sure for government and private sectors it's probably different :rolleyes:
     
  16. lackofcheese

    lackofcheese Notebook Virtuoso

    Reputations:
    464
    Messages:
    2,897
    Likes Received:
    0
    Trophy Points:
    55
    No, I'm not saying we don't have the software to utilize it; there are plenty of tasks that make every effort to use the full power of the CPU in order to be finished as soon as possible.

    However, my point is that in order to scale well from one core to many, you have to be able to divide up the work equally into many parts. The problem is that there are always limitations, like uneven workloads and overhead, that prevent programs from scaling perfectly to multiple cores. As such, the performance boost from one core to two cores is bigger than the boost from two to four, which is bigger than the boost from four to eight, etc.

    These limitations are much less for certain tasks that are very parallel in nature, especially tasks that involve video content. It's precisely for that reason that GPUs (which are extremely parallel and, in terms of raw computational power, are vastly more powerful than CPUs) are very good at video-related tasks. Unfortunately, GPUs are not general-purpose devices in the way CPUs are, and only recently have general-purpose GPU computing frameworks like OpenCL and DirectCompute come forward to replace Nvidia and ATI's proprietary frameworks, which should mean a lot more GPU-based computing in years to come.
     
  17. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    So, for example, say I was doing 3D modeling and simulations. The program assigns the shading and such (a heavier task) to one core but assigns basic attach, detach, move, fit, shrink, etc. (lighter tasks) to the other. So the program would slow down, as the first, heavier task needs to be executed in order for the lighter task to happen, so there is not much of a difference because of the uneven load? So pretty much the time it takes to do the task is longer than if one core did it by itself? :rolleyes:
     
  18. lackofcheese

    lackofcheese Notebook Virtuoso

    Reputations:
    464
    Messages:
    2,897
    Likes Received:
    0
    Trophy Points:
    55
    Well, if you have tasks that need to happen in order, then it won't help to assign them to different cores because, indeed, one core will still have to wait for the other.

    However, the shading workload could be split between the cores in order to get the work done faster - if you want to see an illustration of this, try running Cinebench (preferably the latest one, version 11), and watch how each core handles a different part of the screen at any one time.

    The thing is that not every task in the sequence of tasks will necessarily be susceptible to parallelization, which limits your ability to speed up the overall sequence of tasks. This is Amdahl's law. I recommend you read that Wikipedia article; it should help you understand.


    As an example, if you have a task where 96% of the work can be parallelized and 4% can not, then using two cores for the 96% part would, roughly speaking, mean that the total time taken is now 4% + (96% / 2) = 52% of the original, and using 4 cores gives you 4% + (96% / 4) = 28%.

    This means that for this task two cores are ~92% faster than one, while 4 cores are only ~86% faster than two.
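
    Here's the same arithmetic as a small Python sketch, if you want to play with other core counts or parallel fractions:

Code:
# Amdahl's law with the 96% parallel / 4% serial split from the example.
def time_left(cores, parallel=0.96):
    """Fraction of the single-core runtime remaining with this many cores."""
    return (1.0 - parallel) + parallel / cores

t1, t2, t4 = time_left(1), time_left(2), time_left(4)
print(t2, t4)                                      # ~0.52 and ~0.28 of the original time
print(f"2 cores vs 1: {t1 / t2 - 1:.0%} faster")   # ~92%
print(f"4 cores vs 2: {t2 / t4 - 1:.0%} faster")   # ~86%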
     
  19. woofer00

    woofer00 Wanderer

    Reputations:
    726
    Messages:
    1,086
    Likes Received:
    0
    Trophy Points:
    55
    As far as 3D modeling goes, those tasks are typically dumped on the GPU to process. Simulations vary, depending on the type of environment simulated, but they tend to be more CPU-heavy.
    GPU coding is a different ballgame than CPU coding. This becomes really clear when comparing applications that have been optimized for both multi-core and GPU processing. I'm thinking of Folding@home in particular. Even with multi-core and SMP support, GPU processing beats the pants off CPU processing. Of course, Folding@home is a 3D simulation by nature, so it translates best to GPU coding. YMMV based on the way the software was written, and even then different functions will not be processed the same way.

    As far as the software you listed goes, MATLAB is mostly serial in nature due to the way most of the packages are scripted. You might calculate some FFTs and other transforms that have been specially coded for multicore, but for the most part, no.
    AutoCAD in wireframe is mostly single-core. It'll jump to multi for rendering, but rarely otherwise.
    Based on my limited understanding of video analysis/manipulation, parallel processing makes sense for processing multiple large segments, but I think the bottleneck for large files tends to be in the drive controller, not the processor.

    As far as battery life goes, the bulk of the usage you mentioned isn't usually done on battery power except as a last resort. Most of it is done either in a lab or cubicle setting where power is easy to find.
     
  20. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Ah, I read it. Amdahl's law says that only around 95 or so percent of the execution time can be parallelized, so say 20 hrs is needed for a program: 19 of the 20 hrs can be shortened by parallel processing while the other 1 hr cannot, so either way one must wait 1 hr, correct? So in a sense, theoretically, a dual core might be a better choice in terms of the boost, but more cores are better too, correct? I only say this as supercomputers have many cores and (correct me if I'm wrong) some drones use old PC processors and such, but they are parallel so they can yield a significant boost in data flow and execution time? Correct? :p :rolleyes:

    Ahh, thanks. So wait, you're pretty much saying that most of the work can be done on a single core? So buying a hexacore or quad core is a waste? :p Also, what programs can be considered CPU-heavy? And this is a funny question: I have noticed that GPUs can crunch bigger data faster; they have, I believe, many threads. So why are CPUs these days slower than GPUs? :cool:
     
  21. woofer00

    woofer00 Wanderer

    Reputations:
    726
    Messages:
    1,086
    Likes Received:
    0
    Trophy Points:
    55

    Mm.. not that it can only be done on a single core, but that it really depends on the function you're calling. Here's a physics simulator example that might help:

    Imagine you've got a pool table. Simulating the cue stick hitting the cue ball until the cue ball hits its first target is a serial calculation - it's a very linear progression. Now say the cue ball hits the 9-ball rack. You've suddenly got 9 balls in motion that, to some extent, can be simulated independently. Yes, they'll need to hit each other and a new calculation gets spawned off of that, but for the most part, until two balls collide you can simulate each ball's motion independently, using parallel architecture. Whether this actually happens or not really depends on the coding of the simulator.

    The biggest boost that GPUs get tends to be because of the basic architecture of the GPUs. GPUs are built to take advantage of stream processing. A basic analogy would be that GPUs can process whole blocks of data at a time instead of shaving off a little at a time like a CPU. GPUs have specialized instruction sets that let them do this. When they say a GPU has 48 pixel shaders, that really means the GPU can handle 48 simultaneous calculations, as opposed to a mere 8 for quad cores. It's the reason GPUs can take a pile of vertices and texture maps and output pixels on a screen at 60Hz without breaking a sweat while a CPU will struggle at the same task. However, those calculations only go so fast, and they aren't nearly as flexible as CPU calculations for basic operating system tasks. In a sense, GPUs win on efficiency, but CPUs will win at basic speed. You might want to look into GPGPU for more info on the move from CPU to GPU processing for handling floating-point heavy modeling.
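
    To tie that back to the pool-table example above, here's a rough Python sketch: between collisions, each ball's position update touches only that ball, so the updates can be handed to separate cores (or, on a GPU, to separate stream processors); collision handling is the part that forces everything to coordinate. Python's process pool is just a stand-in for "parallel hardware" here:

Code:
# Sketch only: moving each ball is independent work, so a process pool can
# update all nine at once. Collision detection (not shown) needs pairs of
# balls and is where the parallelism gets harder.
from concurrent.futures import ProcessPoolExecutor

def step(ball, dt=0.01):
    x, y, vx, vy = ball
    return (x + vx * dt, y + vy * dt, vx, vy)   # no other ball involved

balls = [(i * 0.1, 0.0, 1.0, 0.5) for i in range(9)]   # the 9-ball rack

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        balls = list(pool.map(step, balls))     # one independent task per ball
    print(balls[0])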

    As far as CPU power, the "Turbo Boost" function is really just built-in overclocking, where a single core will be set to a higher multiplier of the base frequency to achieve more calculations per second, at the cost of excess heat and reduced performance from the other cores.
     
  22. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Oh okay, I will check that out. Also, I see, so hyper-threading just overclocks 1 core while the other has to be at base frequency? I also thought maybe having multiple processors would allow massive data crunching and such (data analysis and the like), like breaking a single movement into mathematical values and electrical information and then just interpreting them. Much like an actual nervous system =) :cool:
     
  23. Duct Tape Dude

    Duct Tape Dude Duct Tape Dude

    Reputations:
    568
    Messages:
    1,822
    Likes Received:
    9
    Trophy Points:
    56
    Technically, it's because it can be parallelized very easily; 3D doesn't really have much to do with it. CUDA is the software architecture that's used to program these types of things, and instead of computing things via rendering, it's computing things like a bunch of tiny CPUs working together. But I just want to clarify: CUDA is separate from typical graphics like you see in games. And yes, Folding@home has a visualization for you, but it's not actually rendering that when it's not visible, and moreover that's not the way Folding@home actually processes its work units.

    At its heart, a CPU is slower than a GPU because it is general purpose and serial. It can do almost anything, and do just a few things at a time really well.
    A GPU is a specialized, parallel CPU. It is more specific-purpose and parallel. It can't do as many things as a CPU can, but it is really good at doing a lot of things at a time. Each of its cores is less powerful than a CPU's, but together you can get better performance watt for watt. The catch is that a GPU must be specially coded for in its own language. There are attempts to unify this like OpenCL (not OpenGL, that's a graphics language), but as of yet, if you want to harness your GPU you must learn Brook+ (ATI) or CUDA (Nvidia).

    To give you a comparison, a typical process on a CPU uses perhaps a dozen or so threads to get a job done. In CUDA, a process can spawn tens of thousands. It's like a bulldozer vs. a bunch of people with shovels.
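
    To picture the difference, here's a little Python sketch of the same job (adding two arrays) organized both ways. The "kernel" is written in plain Python just to show the shape of CUDA-style thinking, one tiny logical thread per element; it's not actual GPU code:

Code:
# Same job, two shapes: a CPU-style loop that chews through the data in one
# go, versus a GPU/CUDA-style "kernel" that conceptually runs once per element.
a = list(range(10_000))
b = list(range(10_000))

# CPU style: one thread (or a handful) walks the whole array.
out_cpu = [x + y for x, y in zip(a, b)]

def kernel(i):
    # What each of the thousands of GPU threads would do for its own index i.
    return a[i] + b[i]

# On a GPU, all of these would be scheduled to run (almost) at once.
out_gpu = [kernel(i) for i in range(len(a))]

assert out_cpu == out_gpu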

    Er, as woofer said, hyperthreading is basically the ability to handle two "threads" at a time on each core. The computer sees an extra processor for every thread (so for a dual core with hyperthreading, the OS sees it as a typical quad core CPU, and for a hexacore with hyperthreading, the computer sees it as a 12-core CPU!) What you're thinking of is TurboBoost, which allows the CPU to take a single thread that's running, assign it to a certain core, and then overclock that core. This works well for applications that are "single threaded," which most older programs are.



    As an aside, an i7 quad definitely has a shorter battery life than any Core 2 Duo, and even Core 2 Quads at the same power settings. Yes, it's more efficient when you compare performance per watt, but still consumes more power. When doing tasks like watching a DVD every benchmark I've seen puts the old Core 2 Quad several minutes ahead of the Core i7.
     
  24. lackofcheese

    lackofcheese Notebook Virtuoso

    Reputations:
    464
    Messages:
    2,897
    Likes Received:
    0
    Trophy Points:
    55
    More precisely, Turbo Boost is a technology that allows the CPU to make better use of additional overhead in thermal and electrical properties. One of the essential technologies in Intel's newer CPUs is that cores can be switched off entirely, which gets rid of a lot of heat and power consumption. This gives the other cores enough overhead to increase their clock speeds.

    However, Turbo Boost can work even when all of the CPU's cores are active, as long as the CPU is adequately cooled. If you want to find out what the boost specs are, Wikipedia's articles on the i5 and i7 CPUs list the multiplier increase for each CPU depending on the number of cores active (where +1 = +133MHz and so on).

    However, especially when using all of the CPU's cores under heavy load, you're not guaranteed to get the full Turbo Boost - it will even be different for different individual CPUs of the same model due to variation in power characteristics. In order to keep power/heat in check, the CPU can actually change its multiplier many times per second to obtain a balance. For example, when running 4-thread Prime95 on my Core i5-430M, it spends about 30% of its time at the maximum +2 boost, but around 70% of its time at +1.
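
    If you want to put numbers on it, each Turbo bin is one multiplier step of the 133MHz base clock. Taking my i5-430M as the example, and assuming its stock 17x multiplier (check Intel ARK or Wikipedia for your own chip's bins), the 70/30 split above works out roughly like this:

Code:
# Turbo Boost bins on these mobile CPUs: each "+1" adds one 133 MHz step.
# The 17x stock multiplier for the i5-430M is an assumption based on its
# 2.26 GHz rated clock; bins and behaviour vary per model and per chip.
BCLK = 0.133  # GHz

def clock(base_mult, bins):
    return (base_mult + bins) * BCLK

base  = clock(17, 0)   # ~2.26 GHz
plus1 = clock(17, 1)   # ~2.39 GHz
plus2 = clock(17, 2)   # ~2.53 GHz (the rated maximum Turbo)

# Roughly 70% of the time at +1 and 30% at +2 under a 4-thread load:
effective = 0.7 * plus1 + 0.3 * plus2
print(f"{effective:.2f} GHz average under load")   # ~2.43 GHz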
     
  25. woofer00

    woofer00 Wanderer

    Reputations:
    726
    Messages:
    1,086
    Likes Received:
    0
    Trophy Points:
    55
    Bleh, I was oversimplifying things, then dove into detail where I didn't need to. The underlying calculations for 3D simulations tend to be very light and easily parallelized, and folding can translate well to that. I'll leave the floor to you though; my knowledge of the GPU folding variants is limited to a basic understanding of the original GPGPU concept. Sadly I had to stop folding once I stopped using desktops.
     
  26. Duct Tape Dude

    Duct Tape Dude Duct Tape Dude

    Reputations:
    568
    Messages:
    1,822
    Likes Received:
    9
    Trophy Points:
    56
    Interesting... makes me a little sadder about my P8400 lol

    Hey, it's all good, we're all cool here. Ah yeah, it's a shame Intel scrapped the Larrabee project. I've heard rumors recently of a 50-core x86 processor of some sort down the road (perhaps a GPGPU?)... Intel fashions supercomputing phoenix from ashes of Larrabee


    And to the OP: Are you trying to decide on a type of CPU/GPU/computer of some sort? Or is this just general curiosity?
     
  27. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Thanks guys. To be honest, it was just basic curiosity, but I thought of using this knowledge to buy processors and to understand why some government people still use old processors.

    Also, from what I read, a CPU can do many types of processing, calculation, analysis and modification of data (which is, I believe, electrical current), but since the CPU is linear, that's why it executes in steps. A GPU is more specialized, but its threads are clocked at lower speeds than a CPU's, and it is specialized so there is less flexibility in calculations, correct? =) The CPU is a little slower, but the GPU is faster when handling work that is specialized, so the commands must be given in the CUDA language or something?

    Also, I did not get anything about the whole rendering and not-actually-rendering part :(

    Wait, so hyperthreading is a technology that allows each core to house 2 threads (which the OS handles as cores themselves)? So these threads do calculations like a single core, so pretty much 1 core has 2 processors in it? Also, how can these cores be turned off when not needed? So Turbo Boost gives each core a boost in its clock? Also, as you guys said, Turbo Boost can work even when the other cores/threads are in use, but as Amdahl's law states, I guess Turbo Boost can at maximum give a 95% boost, since even without Turbo Boost the multiple cores only give a partial boost (like, for example, a process that takes 20 hrs: 19 hrs may be sped up by using multiple cores, but the remaining 1 hr of the process cannot be :cool: )
     
  28. Duct Tape Dude

    Duct Tape Dude Duct Tape Dude

    Reputations:
    568
    Messages:
    1,822
    Likes Received:
    9
    Trophy Points:
    56
    Aha, here we go :)
    Oh nice. I know some military applications use older-architecture, radiation-hardened CPUs for durability or something of the sort. They run slower but are more resilient to the environment.

    Correct.
    A further note on data: data, when not on a storage medium like RAM or a hard drive, i.e. when it's being processed or in transit from one component to the next, is translated into electrical pulses. But when it's sitting on a CD or hard drive or something, there's no electricity required to keep it there. The one exception to this is RAM, because it is "volatile" memory. This means that RAM must be powered on in order to retain its contents. Within a few seconds of powering off, all data in the RAM is lost.

    This is for all intents and purposes correct.

    Don't worry too much about it lol. I was differentiating between using a GPU for displaying 3D objects (I called it rendering) versus general computation (CUDA, or what Folding@Home uses).

    Yes.

    You have the concept correct. For the sake of keeping terminology correct, it'd be 1 processor with say, 2 cores, and each core has the ability to handle 2 threads. It's kind of like a quad core, except instead of replicating a whole core, which takes up a lot of resources, they replicated parts of a core so it can handle a second thing at once. In rare cases HyperThreading ends up making a computation slightly slower because of the way it's set up. It's basically a cheap, efficient way of letting a core do 2 things at once.

    Processors detect how many threads are running at a given time and do a lot of things preparing for what kind of data is coming next. If it sees that there's no data to crunch and it's just sitting there, it can power off or downclock parts of itself. If you've ever heard of Intel SpeedStep or AMD's PowerNow, that's an example of this. GPUs do this, too. This is why a computer is hotter when you play games or do computations--everything turns on. This evaluation of what to power on/off/etc happens many times each second.

    Yes. It boosts certain cores if the CPU is cool enough.
    Hmm okay we're almost there. Amdahl's law is about adding more cores to do work faster (dual core->quad core). TurboBoost is about making cores work faster to do work faster (2.66GHz->2.93GHz). The two are not really related here. Turbo boost gives you a linear boost on the cores it's applied to.

    Taking a loose analogy of people with shovels digging a hole:
    Amdahl's law says that as we add more people with shovels, the hole digging rate won't quite scale linearly because we lose some efficiency to managing a larger number of people and giving everyone instructions on what to do.

    Turbo Boost says you have a few people digging with shovels at a certain speed, and then you give some of them cocaine and they dig faster. There's no efficiency lost like in Amdahl's law.

    So from this you can conclude: Turbo Boost is like cocaine for part of your processor. :cool:
     
  29. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Oh, so it's one processor that has a core which has a thread that can act like a core. So the processor receives instructions and executes them by utilizing its cores. So a thread allows the core to be two cores in one, kind of thing. Correct? =)

    Also, nice analogy. LMAO, Turbo Boost is opium for the CPU :rolleyes:

    So is there anything else I'm missing here? And oh, I get it, so rendering is more like the 3D and 2D creation, whereas you were contrasting it with computation using the threads on a GPU =)
     
  30. woofer00

    woofer00 Wanderer

    Reputations:
    726
    Messages:
    1,086
    Likes Received:
    0
    Trophy Points:
    55
    It's a REALLY good thing I hit preview before I send long posts; most of this would have just driven you a little batty reading it rephrased again.

    Usually this happens in military/air/space applications. For example, I think Hubble still carries 486 processors (first made in the late 80's) as a backup for when everything else dies (this happened in 2008). I remember in the 90's NASA was scrambling to find 386 and 486 processors for its shuttles long after the Pentium rolled around.


    You might think of Speedstep as hyperthreading in reverse.

    I just like this imagery.
     
  31. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    LOL, I think it was a good idea. Also, oh, I get it, they do it for environmental purposes and backups. But say their satellite is processing immense data and something happens, like an EMP or something... how can the old 386 processors then handle that type of processing? :eek:

    Also, yes, I love the analogy too =) :cool:
     
  32. Trottel

    Trottel Notebook Virtuoso

    Reputations:
    828
    Messages:
    2,303
    Likes Received:
    0
    Trophy Points:
    0
    The shuttle uses (used?) much older processors. The 486 didn't come out until long after development of the shuttle's computer system was finalized. NASA turned to eBay to stockpile spare processors once Intel finally stopped production of those processors after ~20 years. I think they were 8086s, but I could be wrong.

    Also, I've *heard* that ICs made on larger processes are more resistant to errors induced by cosmic radiation.
     
  33. lozanogo

    lozanogo Notebook Deity

    Reputations:
    196
    Messages:
    1,841
    Likes Received:
    0
    Trophy Points:
    55
    Of course, a larger process means the oxide layer in the semiconductor is thicker, so it is less prone to the current leaks that may happen when a cosmic event hits the semiconductor.
     
  34. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Yeah, that's my point: if, say, the older processors were used as backups, how would they stand up in a cosmic event if the newer, thicker semiconductor couldn't handle it? :(
     
  35. Judicator

    Judicator Judged and found wanting.

    Reputations:
    1,098
    Messages:
    2,594
    Likes Received:
    19
    Trophy Points:
    56
    They were actually saying that the older processors used thicker semiconductors. The newer processors use thinner semiconductors, so they can be shrunk down appropriately to the smaller die sizes.

    And in terms of the military/government using older processors, never underestimate the power of legacy, back-stock, and compatibility woes. While I'm sure the military/government would love swapping over to all new gee-whiz super high tech, don't forget that being such a large organization, they still have immense stores of older technology that still needs to be used and kept compatible for as long as possible with their newer technology, as the cost of replacing _everything_ would simply become prohibitive. Also, new technology is often plagued with bugs and other teething problems as well, so oftentimes they prefer to stay a generation or two back, with technology that is well understood and reliable.

    Edit: Oh, and as a quick aside on a previous topic, a desktop replacement notebook like the G73 or a large Sager does not necessarily mean that it has a desktop processor. Desktop processors are larger, require more power, and run hotter than just about any mobile processor. Intel mobile Core i processors, for example, run from about 18 watts TDP to 55 watts TDP. Intel desktop Core i processors start at about 80 watts TDP and go up to 130 watts TDP. In point of fact, the only notebooks that typically use desktop processors are a few of the large Sager models.
     
  36. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Oh, I never knew... so desktop replacements don't always have desktop CPUs... amazing. Also, I always thought that the military is always 20 years ahead of us, so I'm pretty sure they're using the old processors and overclocking them or just modifying them, I guess. Any other thoughts? This is interesting :cool: :rolleyes:
     
  37. Duct Tape Dude

    Duct Tape Dude Duct Tape Dude

    Reputations:
    568
    Messages:
    1,822
    Likes Received:
    9
    Trophy Points:
    56
    What Judicator said is correct. Actually, I believe the Air Force was using PS3s running a form of Linux as a very cheap supercomputer; however, Sony just released an update that blocks new installations of Linux on any PS3. So when any PS3 in their supercomputer fails, they can't replace it. It pays to have mature hardware.
     
  38. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Yeah, true, true. I'm kind of sad; I wanted to experiment with the Cell on my PS3, but I have to update to play games... I haven't played it in a long time, but when I start again this summer, I'll have to upgrade :(

    So what else am I missing in my computer knowledge?
     
  39. Duct Tape Dude

    Duct Tape Dude Duct Tape Dude

    Reputations:
    568
    Messages:
    1,822
    Likes Received:
    9
    Trophy Points:
    56
    Best of luck with your PS3 then :)

    Haha I'm not sure... We covered the main points and concepts of CPUs and GPUs, and even one or two more involved concepts of them... field a specific question and see what we've got. I've never run into someone with so much curiosity on all this. It's refreshing :)
     
  40. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    hahahahahaha Thanks :rolleyes:

    Are you a computer or electrical engineer by any chance? Anyway, how many FLOPS can a commercial CPU do? Like an i7, i5, i3, Core 2 Duo/Quad, Celeron, Atom, etc.

    I know FLOPS is floating-point operations per second. It is used to measure processing power. A floating-point operation is a calculation using floating-point numbers, which are basically real numbers (like 1.25, 1/2, etc.). Correct? Also, an algorithm is used to analyze/modify/create data flow and for data analysis. Correct? :cool:
     
  41. woofer00

    woofer00 Wanderer

    Reputations:
    726
    Messages:
    1,086
    Likes Received:
    0
    Trophy Points:
    55
    This might help:

    MaxxPI² - Flops scaleability

    MIPS are more useful for day to day apps, but the charts are nearly identical. FLOPS tend to be more useful for statistical/analytic work.
     
  42. Duct Tape Dude

    Duct Tape Dude Duct Tape Dude

    Reputations:
    568
    Messages:
    1,822
    Likes Received:
    9
    Trophy Points:
    56
    That's a pretty sweet chart.

    I'm not an expert on this, but MIPS are measured according to two tests that I know of: Whetstone (floating point) and Dhrystone (integer) (one is named after someone, I think, and the other was named that way as sort of a pun). My CPU is capable of about 2300 Whetstone MIPS at stock speeds.

    I know GPUs are exceptionally powerful and there's a set formula for calculating theoretical peak GFLOPS. I don't know it off the top of my head, but it's something like the number of shaders times the shader speed. My 64-shader GPU is theoretically capable of a peak 224 GFLOPS, which I'm sure it almost never reaches.
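
    For what it's worth, here's the rough shape of that calculation. The 1.75GHz shader clock and 2 FLOPs per shader per clock (one multiply-add) are my guesses that happen to reproduce the 224 figure; Nvidia's official peak numbers sometimes count a third (MUL) operation per clock, so take it as a sketch only:

Code:
# Theoretical peak GFLOPS, roughly: shaders x shader clock (GHz) x FLOPs per
# shader per clock. The 1.75 GHz clock and 2 FLOPs/clock below are assumed
# values chosen to match the 224 GFLOPS quoted above, not official specs.
def peak_gflops(shaders, shader_clock_ghz, flops_per_clock=2):
    return shaders * shader_clock_ghz * flops_per_clock

print(peak_gflops(64, 1.75))   # 224.0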
     
  43. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Awesome =) But I've got a question: is it only for mobile processors? Because my Intel Core 2 Quad Q8200 is not in there (it's a desktop), but hmm, the last i7 processor looks beast :) :cool: :eek:

    Wow :O, I'm pretty sure that would help in research, 3D simulations and such with CUDA or OpenCL... ahaha, personal supercomputers for the win :cool:
     
  44. woofer00

    woofer00 Wanderer

    Reputations:
    726
    Messages:
    1,086
    Likes Received:
    0
    Trophy Points:
    55
    I think it's mostly desktops, and just people who reported notable results.

    smackdown
    scroll down to "Speeds & Feeds"
     
  45. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    4.64 teraflops... holy mother of GOD :eek: I wonder how much work that GPU can do =)

    Too bad it's a desktop GPU... Are there any mobile GPUs close to the HD 5970?
     
  46. Duct Tape Dude

    Duct Tape Dude Duct Tape Dude

    Reputations:
    568
    Messages:
    1,822
    Likes Received:
    9
    Trophy Points:
    56
  47. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    Notebook GPUs will never match or get close to the highest-end desktop GPU of the current generation. The best notebook GPUs don't even give 40% of the HD 5970. The Mobility HD 5870 is 128-bit GDDR5 and will give good synthetic benchmarks, but the GTX 480M is 256-bit GDDR5 and will definitely give better FPS in games (based on how their desktop equivalents tend to be).
     
  48. woofer00

    woofer00 Wanderer

    Reputations:
    726
    Messages:
    1,086
    Likes Received:
    0
    Trophy Points:
    55
    Worth mentioning is that the 5970 consumes between 50 and 300 watts on its own between idle and load - I could run 3 of my Studio 1558s and 2 AAOs on the max power draw of that one card. Also, don't forget that it can be crossfired.

    As for Nvidia, I think the tri-SLI GTX 480 is neck and neck with 5970 CrossFire, even if it's not quite as good at supercomputing.
     
  49. Duct Tape Dude

    Duct Tape Dude Duct Tape Dude

    Reputations:
    568
    Messages:
    1,822
    Likes Received:
    9
    Trophy Points:
    56
    Definitely a good point about power consumption. Mobile GPUs are bred for efficiency over power usually.
    From what I've seen, it seems like ATI performs better in games and such, but Nvidia focuses more on alternative applications like CUDA and GPU-accelerated physics processing. So while an ATI chip might be able to crank out 4 TFLOPS of power, it's not as common to use it, since Brook+ isn't as widespread as CUDA. I remember reading papers from presentations at a graphics conference last year, and Nvidia was showing all its new technologies that help accelerate physics and water and all that, while ATI was showcasing how it managed to get 8x AA with a very small performance hit and get better framerates in games. It's a matter of approach.
     
  50. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Wow guys, a lot of replies :) Well, on the AMD site the mobile HD 5870 gives out 1.12 teraFLOPS.

    The link: ATI Mobility Radeon HD 5870 GPU Specifications

    So you are, I guess, kinda wrong, idk. Also, I know ATI graphics cards are good for games and all, but on one of my threads an actual OpenCL and CUDA student who has a lot of experience says that OpenCL is far better and such. He didn't specify if OpenCL is far better on an Nvidia or ATI GPU, so idk about you saying ATI is not good for physics simulation and such. :rolleyes:
     