The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Question about CPU's

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by VaultBoy!, Oct 7, 2012.

  1. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    Can someone explain to me how hyperthreading works and what the clock multiplier means? I know it means a multiple of a certain clock, like 12 x 21.3 MHz, which gets you to the max clock shown when you buy your processor. But why do processors work that way, and why do CPU manufacturers (i.e. Intel and AMD) lock their multipliers?
     
  2. mattcheau

    mattcheau Notebook Deity

    Reputations:
    1,041
    Messages:
    1,246
    Likes Received:
    74
    Trophy Points:
    66
  3. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    So the CPU multiplier is not actually multiplying the clocks, it just represents a ratio? I don't understand hyperthreading at all; I guess I'm too young (15) to understand all this... I do have more knowledge of PCs than most people my age I know, so I guess that's good :D
     
  4. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    Just to make sure that there is no confusion: since the first-generation Core i, Intel ditched the Northbridge + Southbridge design. They incorporated the memory controller into the CPU (and the PCI-E lanes for the graphics too, at least in Sandy Bridge and Ivy Bridge) and integrated the remaining Northbridge and Southbridge functions into a single chip called the Platform Controller Hub (PCH). You can find block diagrams for Ivy Bridge here: AnandTech - The Intel Ivy Bridge (Core i7 3770K) Review. QM77 chipset diagram here: http://www.intel.com/content/www/us/en/chipsets/performance-chipsets/mobile-chipset-qm77.html

    As for locking the multipliers, I would say they did it for monetary reasons (why buy a higher-end CPU if you can OC a lower-end model, and they can also sell CPUs with unlocked multipliers at a premium) as well as to prevent people from frying their chips by messing with them too much.

    EDIT: Much more detailed explanation of Hyperthreading: http://www.makeuseof.com/tag/hyperthreading-technology-explained/
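
    The multiplier relationship being discussed is just base clock times multiplier. A minimal Python sketch; the 100 MHz base clock is the usual Sandy/Ivy Bridge value, assumed here for illustration:

```python
# CPU core clock = base clock (BCLK) x multiplier.
# 100 MHz is the typical Sandy/Ivy Bridge base clock (an assumption for this sketch).
BCLK_MHZ = 100

def core_clock_mhz(multiplier, bclk_mhz=BCLK_MHZ):
    """Return the effective core frequency in MHz."""
    return multiplier * bclk_mhz

# An i7-3610QM at its 2.3 GHz base frequency runs a 23x multiplier:
print(core_clock_mhz(23))   # 2300 MHz = 2.3 GHz
# A "locked" CPU caps the maximum multiplier; an unlocked (K/XM) part lets
# you raise it, e.g. 40 x 100 MHz = 4.0 GHz:
print(core_clock_mhz(40))   # 4000 MHz
```

    So the multiplier really is a ratio: raising it raises the core clock without touching the base clock the rest of the platform runs on.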
     
  5. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    What the fudge did you just say, I honestly understood a mediocre amount of the technical talk :p
    EDIT: I didn't fully read it the first time, I understand most now :D
     
  6. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    Is that why the i5/i7-XXXXk editions and the XM editions cost so much more than the regular locked editions?
     
  7. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    The desktop K editions aren't all that much more expensive (~$20-30 more). For the laptops, it's because they can get away with charging way more, and probably because not many chips produced have the qualities required to be XM chips (you'd have to read up on binning and how CPUs are made to understand why some chips can reach higher clocks and some cannot).

    Also, I added a link to a more detailed article about hyperthreading in my previous post.

    I stumbled upon a nice article that explained CPU multipliers among other things; I'll try to find it again.
     
  8. Karamazovmm

    Karamazovmm Overthinking? Always!

    Reputations:
    2,365
    Messages:
    9,422
    Likes Received:
    200
    Trophy Points:
    231
    that's because of another process. When you manufacture something, the output falls into several categories:
    1) trash
    2) viable
    3) good
    4) exceptional

    In the case of cpu manufacturing there is something called binning, and that means that higher quality chips are destined for higher price ranges.

    For example, an i7-3610QM is the same chip as the i7-3720QM, only the former binned lower and won't support the higher clocks or other features efficiently, so to avoid just throwing the chips away, Intel sells those for less money with fewer features enabled.

    What you have to keep in mind is that the CPU is just a great calculator, so when evaluating things like computational power you are focusing on floating-point operations.

    I'm going to look for a basic presentation I did based on SB CPUs and try to post it here. It's old, more than a year, and I did it for uni. I don't remember what is in the slides, but I can complement them if there are any more questions.
     
  9. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    I do in fact know what binning is (they do it with the GT640M/GT650M/GTX660M Kepler chips), and anything you could post that would explain CPUs better would be greatly appreciated!
     
  10. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    This is getting a bit dated but still covers the basics; do note that some things have changed since 2005, of course: http://www.hardwaresecrets.com/article/How-a-CPU-Works/209. One noteworthy change I mentioned earlier is the fact that the memory controller is now integrated on Intel CPUs. Hardwaresecrets has a few interesting articles, some rather technical, some easy to understand, and others in between. The one on the chipset (you'll get to know why a single chip is called a chipset) is also interesting, as is the one on RAM timings; just be sure to check the date each article was posted.
     
  11. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    I'll make sure to read that tonight, before I go to sleep, thanks for all the help!
    EDIT: Just read it and it really explains it in a much simpler way than Wikipedia, thanks for posting that article :D
     
  12. mattcheau

    mattcheau Notebook Deity

    Reputations:
    1,041
    Messages:
    1,246
    Likes Received:
    74
    Trophy Points:
    66
    nice post. my FSB link is a little ambiguous and the diagram is definitely outmoded. since i'm on a wikipedia roll though, platform controller hub and direct media interface (sorry, OP ;)). question for you too, tijo: where is the PCH physically located? it's probably a silly question, but is it a separate chip, or does it exist throughout the board (as multiple "chips")?
     
  13. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    It is a separate chip; its location depends on the board design. It is usually covered by some sort of heatsink. It doesn't necessarily require active cooling, but in some notebooks it shares the CPU's heatsink, while in others it has some other method of cooling.
     
  14. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    Hahaha, way to give me more Wikipedia info I can read so I'll get more confused xD
     
  15. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    Here is a pic of the PCH on my G73JH, it is passively cooled by the palm rest assembly (there is a thermal pad making the contact between the metal part of the palm rest and the PCH).



    EDIT: as you can see, the PCH is a single chip, it's called a chipset because back in the early days the functions now managed by the PCH were managed by multiple chips (or a set of chips). The functions were then integrated into two chips (Northbridge and Southbridge) and eventually merged into a single chip when the memory controller was moved to the CPU. For the Core 2 and some other Intel CPUs, the Northbridge was called the memory controller hub (MCH) and the Southbridge the I/O controller hub (ICH).
     
  16. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    ^nice :) That board still has the same components, or the same path as before, though. Just that they're on the same chip.

    Not sure what's controversial about hyper-threading, by the way.

    Essentially, it works like this. You ask a program to.. add 1+1. So the program translates that to machine-code.. something like..
    mov eax, 1   ; set eax to 1
    mov ebx, 1   ; .. ebx to 1
    add eax, ebx ; adds ebx to eax, result stored in eax

    And.. that probably wouldn't run as-is, and it wouldn't look like that when translated by an OS. But in principle you would do three main operations like this from the top of the hardware layer. The two moves are essentially not reducible. But the add instruction might in turn be split up into several steps by the microcode on the processor. And when that happens, it's possible that:
    1. The scheduler is waiting for something to complete.
    2. There are free resources on the processor.

    And in that case, it might be possible to place two program routines on the same processor. And execute them at the same time, since they didn't actually use the same registers or resources anyway. And it would ideally end up being able to run two programs as if both of them thought they had exclusive access to the processor.

    In practice that doesn't actually happen all that often, though. And in reality the typical use for it is if a thread in a program wants to read or write a file, and then stalls while IO happens. Then the processor will be free to do something else. And another thread can be scheduled on that processor while that thread is essentially idle, waiting for IO to complete. From a certain point of view, this is extremely hairy and prone to all kinds of problems. And has of course low probability of striking home outside synthetic benchmarks.. But there is an effect in practical execution as well, of course. The question is how much extra microcode is prepared, and if this requires faster/hotter hardware, that sort of thing.
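
    The I/O-stall case described above is visible even from plain software: while one thread blocks on I/O, the core is free to run another. A small Python sketch using sleep() as a stand-in for a blocking I/O wait:

```python
import threading
import time

def io_bound_task(name, results):
    # time.sleep() stands in for a blocking read/write; while this thread
    # waits, the scheduler is free to run something else on the core.
    time.sleep(0.2)
    results.append(name)

results = []
start = time.perf_counter()
threads = [threading.Thread(target=io_bound_task, args=(f"task{i}", results))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Four 0.2 s waits overlap instead of running back-to-back:
print(f"{len(results)} tasks in {elapsed:.2f}s")  # roughly 0.2s, not 0.8s
```

    The same idea applies one level down with hyperthreading, except the "wait" is a cache or memory stall measured in nanoseconds and the switch is done in hardware.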

    Arguably, a more efficient way to do things would be to strictly schedule an OS instead, and require programs to declare beforehand what resources they would need and what sort of response they would expect from the different routines. Then you would be able to schedule things a lot more tightly, and run parallel operations as a rule instead of as an exception. Potentially you'd be able to execute a lot of tasks concurrently while not losing response, even at much lower clock speeds. But it would mean a bit of discipline in the programming, and care with memory and IO operations. Along with actually teaching people to program from the bottom up.

    So in that context, the Hyperthreading is basically transparent parallelism, or parallelism that happens without the OS or a program controlling it. As opposed to explicit parallelism, where you would program the routines concurrently yourself or via an interpreter, and schedule tasks programmatically. Which we really don't have any hardware that truly would take advantage of anyway at the moment. So as things are, transparent parallelism is where it's at. But as processors become cheaper, we can have more cores, and modules and bus-speeds become higher, ram will be able to do concurrent reads and writes to different processing elements... While also clock-speeds hit the ceiling, preventing us from executing single threads quicker than before, meaning that computers just won't go any faster.. then it's with explicit parallelism where the advancements are made.

    This will be a new paradigm, though.. sort of.. And there's basically a requirement for microprocessor makers right now to offer linear thread performance for an extremely simplistic scheduler. You even see that with mobile phones: there is more and more "demand" for something that has extremely high single-threaded performance (so you can burn your battery in a few minutes), but no demand for a massively higher number of concurrently running operations at, for example, a set power draw, via either longer instruction sets or more cores. This has to do with.. OSes and design choices, mainly.
     
  17. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    So hyperthreading could be described simply as being capable of doing work 2x as fast as the same processor with hyperthreading disabled? Also, another question... with binning, why do chips end up with different qualities when they're all manufactured by a machine (right?)? Isn't all silicon the same? So why do some end up being of higher quality than others?
     
  18. Karamazovmm

    Karamazovmm Overthinking? Always!

    Reputations:
    2,365
    Messages:
    9,422
    Likes Received:
    200
    Trophy Points:
    231
    you have two hands, right? close both; while the air that is ''trapped'' in both hands is similar, it doesn't mean they are equal.

    that's the thing. The silicon substrate might not be uniform, or there is an error when they are printing, and the list goes on; small variations lead to different products
     
  19. R3d

    R3d Notebook Virtuoso

    Reputations:
    1,515
    Messages:
    2,382
    Likes Received:
    60
    Trophy Points:
    66
    This. When you mass produce something (anything), you're not going to get 100% consistency.

    Though Intel/AMD probably get pretty good yields. IIRC, most of the low-end Intel chips (desktop Celerons/low-end i5s) are artificially cut-down i3/i7s instead of actually having lower-performing/defective parts, since if they relied only on "bad" chips, they wouldn't have enough of them to sell.
     
  20. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    So the Celeron and Pentium processors being sold in laptops today are defective/not high enough quality to be i3/i5/i7 chips?
     
  21. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    Some of them at least, yes.
     
  22. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    Luckily for me, I have a superior quad-core xD
     
  23. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Well, until someone shows up with a -XM chip :p

    (And, if we're talking about special features, my -2720QM trumps your -3610QM)
     
  24. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    I meant to my friends with sucky Pentium/Celeron xD An -XM chip kills my CPU :)
     
  25. Althernai

    Althernai Notebook Virtuoso

    Reputations:
    919
    Messages:
    2,233
    Likes Received:
    98
    Trophy Points:
    66
    For some very, very specific workloads, yes -- but these are generally artificial. In practice, the gains are quite a bit lower than that. At my work, it produces an improvement of around 30%. This doesn't sound like much, but keep in mind that it works out to over a million dollars (even more when you factor in the cooling and the electricity).

    The scale modern chips are made at is tens of nanometers (22nm is the current limit for mass production of CPUs). This is insanely small and requires an extremely clean environment and extremely precise machinery. Since the chipmakers live at the edge of their skill, the machines are not quite perfect and neither is the procedure -- they're just good enough to get a reasonable yield of viable chips. As the process matures, the yields improve and indeed, the average quality goes up... but very soon after that they shrink the scale again.
     
  26. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    If the conditions are ideal, at least. (They never are :p)
    The manufacturing process isn't 100% accurate. But for the most part now, the chips will perform over the "expected" specification at a very high rate. Same as with the Pentium chips a long while ago - they had a huge amount of chips that were basically identical, but locked to different speeds (i.e., "the market" demanded chips in various speeds for different levels of wallets). Was much cheaper for intel to produce one chip than nine different ones. So unlocking them essentially meant you could overclock 100- 250%, etc., with no issues.

    A bit like the 640LE, 640M, 650M, 660M cards Nvidia has. They're actually made from the same chip. And I'm sure someone is going to insist that they've been selected according to performance and yield and whatnot. But all of those chips have to perform over a certain level to work at all. Same for the RAM they use: it has to be the same speed (which again is cheaper than having fifteen different designs manufactured). Which means that all of those cards actually perform closer to each other than the specifications in the vBIOS allow (and suggest). And that's why we're seeing a pretty much 100% success rate for people who overclock the 640LE cards to 650M and 660M frequencies.

    Happens once in a while. AMD capitalized on it for a while with the K6 and K7 processors, for example. By shipping them unlocked, to allow people to overclock until the chips would fry. Asus had mainboards with bioses that allowed you to mess around with it, and advertised with that. And you had some insane percentage overclocks on just air-cooling that haven't been copied since then. Athlon XP chips raised from 1.2Ghz to 2.8Ghz, that sort of thing.

    Now the chips are more similar, though. Simply because, like I said, the performance of the chips has to be inside a level that is fairly high in the first place. And the process is more complex and much better than back then. So you don't have those huge gaps any more. The architectures tend to hit the target they're designed for (at different yields: they're either usable or they're not), and when a new layout/design turns up it's usually bus changes or optimisations, architecture changes, that matter more than processor speeds.
     
  27. maverick1989

    maverick1989 Notebook Deity

    Reputations:
    332
    Messages:
    1,562
    Likes Received:
    22
    Trophy Points:
    56
    There is one point that needs to be mentioned here. Hyperthreading is a two-way road. Just because a system has HT does not mean everything you run on it will make full use of the technology. That is one of the reasons the Core 2 Quad did not become that big. Software needs to be written to make use of multithreading, and at the time of the Quad, little software was written to run on four cores. When you want to run on multiple cores, the complexity of designing the software also increases severalfold. There are still many programs that use only two threads. Many of the more common ones, like newer games, Office suites and Windows itself, make full use of multithreading.
     
  28. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Pentium doesn't necessarily suck. My gaming rig uses a G630 that can keep up with all the games I throw at it (even the Command and Conquer and Supreme Commander series, both RTS games). Dual-core, modest clock (2.7GHz), no HT, no anything really. GPUs and HDDs are the main bottlenecks these days.

    I won't discount a CPU based solely on name/number alone.
     
  29. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    Tried ArmA 2 with it? :D
     
  30. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Admittedly, I don't play a lot of newer games such as DayZ (mods ftw) or whatever's out today. Newest games I have now are Supreme Commander 2, Portal 2, and Battlefield: Bad Company 2. And Minecraft :eek:.

    But I find driving more fun than gaming now-a-days. When I was your age, I was glued to the PS2, and I still game on my Xbox 360 a bit (mainly MW3 these days, with a bit of Ace Combat 6).
     
  31. Karamazovmm

    Karamazovmm Overthinking? Always!

    Reputations:
    2,365
    Messages:
    9,422
    Likes Received:
    200
    Trophy Points:
    231
    try playing Shogun 2 TW

    Shogun 2 CPU Test Sandy Bridge vs Ivy Bridge - Total War Center Forums

    that's a thread on how much the CPU matters at minimum clocks. Basically it's advisable to get an XM CPU, overclock it to 4.5-5GHz, and overclock that 680M however high it can go, and it will handle ultra at 1080p. I'm not confident that battles with 20-30k troops would be buttery smooth though; you need more horsepower for that.
     
  32. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    Or take a look at GW2, another insane CPU hog. For now, it doesn't really matter for most games though.
     
  33. Karamazovmm

    Karamazovmm Overthinking? Always!

    Reputations:
    2,365
    Messages:
    9,422
    Likes Received:
    200
    Trophy Points:
    231
    And btw I have found the presentation, but it's corrupted. I would have gone to the trouble of translating it from Portuguese to English, but with 70% of the content lost, I threw it out. I had a major breakdown of my NAS late Friday and spent the whole weekend trying to solve it; thankfully I got most of the things back. That's a first, seeing a RAID 5 fail so miserably.
     
  34. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    RAID 5 is what? 4 hard drives, 2 in RAID 0 and the 2 others mirrors of those two drives?
    BTW, 100th post :D
     
  35. Karamazovmm

    Karamazovmm Overthinking? Always!

    Reputations:
    2,365
    Messages:
    9,422
    Likes Received:
    200
    Trophy Points:
    231
    as I said wiki is good for basic info

    RAID - Wikipedia, the free encyclopedia

    RAID 5 (block-level striping with distributed parity) distributes parity along with the data and requires all drives but one to be present to operate; the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. However, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced and the associated data rebuilt. Additionally, there is the potentially disastrous RAID 5 write hole. RAID 5 requires at least three disks.

    RAID 5 write hole - Wikipedia, the free encyclopedia

    this is what happened, it reminds me to get another type of raid
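
    The distributed parity the quote mentions is plain XOR: the parity block is the XOR of the data blocks in a stripe, so any one missing block can be recomputed from the others. A minimal sketch (block contents are made up for illustration):

```python
from functools import reduce

def parity(blocks):
    """XOR same-sized blocks together byte-by-byte to form a parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data drives plus one parity block per stripe (real RAID 5 rotates
# which drive holds the parity, but the math is the same).
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d0, d1, d2])

# Drive 1 dies: rebuild its block from the survivors plus parity.
rebuilt = parity([d0, d2, p])
assert rebuilt == d1
print("rebuilt drive 1:", rebuilt)
```

    This is also why the array survives exactly one failure: with two blocks missing, the XOR no longer pins down either of them.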
     
  36. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    Saw the RAID wikipedia before, I just forgot which was which :p
     
  37. Karamazovmm

    Karamazovmm Overthinking? Always!

    Reputations:
    2,365
    Messages:
    9,422
    Likes Received:
    200
    Trophy Points:
    231
    nah, no problem. so which one would you recommend? the number of drives doesn't matter, it's fairly easy to get more if needed
     
  38. Qing Dao

    Qing Dao Notebook Deity

    Reputations:
    1,600
    Messages:
    1,771
    Likes Received:
    304
    Trophy Points:
    101
    There doesn't seem to be a really good understanding here of how hyperthreading works. It does not increase the processing power of the CPU; the processing power remains the same whether hyperthreading is enabled or disabled. All hyperthreading does is help come closer to the theoretical maximum processing power of the CPU.

    There are parts of the CPU core that do the computations, but these are never used 100% efficiently. When your computer says 100% load, it does not mean that everything is being utilized to its fullest. Normally, only one thread may execute on the core's execution units each clock cycle, though multiple instructions from that one thread can execute at the same time, to try to use as many of the execution units as possible. Many times there are long lapses when the core is not doing anything while it is waiting for data from cache or memory, so many clock cycles can be wasted. Also, it is unlikely that any single thread will be able to use all of the execution units available at any one time.

    Where hyperthreading comes in is that it allows two threads to execute instructions on the core at the same time. This does not mean a theoretical doubling of processing power! Processing power remains the same. It just reduces inefficiency in the core and helps utilize otherwise unused execution units. In practice, performance changes by anywhere from -5% up to around 30% depending on the scenario.
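
    That range can be felt with a toy model: treat each cycle as either issuing work or stalled, and let a second thread fill cycles the first one wastes. The 30% stall rate below is an assumed number purely for illustration, not a measured figure:

```python
import random

random.seed(42)
STALL_RATE = 0.3     # fraction of cycles a single thread can't issue (assumed)
CYCLES = 100_000

def utilization(threads):
    """Fraction of cycles the core does useful work with N resident threads."""
    busy = 0
    for _ in range(CYCLES):
        # The core gets work done this cycle if ANY resident thread can issue.
        if any(random.random() > STALL_RATE for _ in range(threads)):
            busy += 1
    return busy / CYCLES

one = utilization(1)
two = utilization(2)
print(f"1 thread:  {one:.0%} of cycles busy")
print(f"2 threads: {two:.0%} of cycles busy")
print(f"gain: {two / one - 1:.0%}")
```

    With these assumed numbers the second thread recovers roughly a third of the wasted cycles, which is in the ballpark of the real-world upper bound quoted above; a second thread that competes for the same execution units or cache can just as easily make things worse, hence the negative end of the range.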
     
  39. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    From my basic knowledge of how RAID works, I would think RAID 10/1+0 offers the best performance/file security that you can get, what do you think?
     
  40. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    I agree, but you need a minimum of 4 drives for that, meaning the devices that can support it are limited.
     
  41. Qing Dao

    Qing Dao Notebook Deity

    Reputations:
    1,600
    Messages:
    1,771
    Likes Received:
    304
    Trophy Points:
    101
    I really, really used to love overclocking. I was very passionate about it and was buying, selling, and swapping computer components ALL the time. But eventually there was no more sport in it, and it was just an exercise in spending the most money on hardware.

    When the K6 was around, Intel processors came unlocked too. When AMD released the K7 (Athlon), both AMD and Intel processors came locked, although Intel started locking theirs first. When the Athlon XP processors came out they were all locked too, but could still be unlocked by a hard mod, although it was more difficult. Then AMD released the Athlon XP Thoroughbred (256k L2) and Barton (512k L2) 130nm cores, and these were all initially unlocked. But after a short time, AMD locked them again, this time in a way that was impossible to unlock. They have been like that ever since, except for AMD's FX and Black Edition processors.

    Yes, but then again ever since moving away from jumpers on the motherboard controlling specifications, pretty much every motherboard aside from those branded by Intel or that comes in a pre-built, name-brand PC has had at least some overclocking features (voltage, FSB, multiplier, etc).

    There was no 1.2GHz Athlon XP, and a 1.2GHz Athlon XP-based Duron would never ever reach even 2GHz on air cooling. And the only way even the latest and greatest Athlon XP chips would ever reach 2.8GHz is with sub-zero temperatures under phase-change cooling.
     
  42. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    Most laptops are unable to support RAID 10, correct? Would it only be usable on a Desktop and even then, limited by the MoBo? Would a RAID array of 4 mirrors be most useful in servers where data redundancy is most needed?
     
  43. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    Desktop, NAS, server boards with enough drives, etc. I don't know whether laptops support it; if I were to use a caddy, I'd be able to put 4 drives in my M6700, but I don't know whether it supports RAID 10. It does support RAID 5 though (surprised it does).
     
  44. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    Is RAID 10 most useful in servers?
     
  45. Qing Dao

    Qing Dao Notebook Deity

    Reputations:
    1,600
    Messages:
    1,771
    Likes Received:
    304
    Trophy Points:
    101
    Not at all. RAID 0, RAID 1, RAID 5, RAID 6, and RAID 10 are all used in servers, depending on the circumstances. RAID 6 is safer than RAID 10 and doesn't lose 50% of its storage to redundancy.
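
    The 50% figure follows directly from the usual capacity formulas. A quick sketch of usable space per RAID level for n identical drives (the drive counts below are just illustrative):

```python
def usable_tb(level, n, size_tb):
    """Usable capacity for common RAID levels with n drives of size_tb each."""
    if level == "0":
        return n * size_tb          # striping, no redundancy
    if level == "1":
        return size_tb              # full mirror of one drive
    if level == "5":
        return (n - 1) * size_tb    # one drive's worth of parity
    if level == "6":
        return (n - 2) * size_tb    # two drives' worth of parity
    if level == "10":
        return n // 2 * size_tb     # mirrored pairs: half the raw space
    raise ValueError(level)

# Six 4 TB drives:
for lvl in ("5", "6", "10"):
    print(f"RAID {lvl}: {usable_tb(lvl, 6, 4)} TB usable")
```

    With six 4 TB drives, RAID 6 survives any two drive failures yet keeps 16 TB of the 24 TB raw, where RAID 10 keeps only 12 TB.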
     
  46. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    What is RAID 6?
     
  47. Karamazovmm

    Karamazovmm Overthinking? Always!

    Reputations:
    2,365
    Messages:
    9,422
    Likes Received:
    200
    Trophy Points:
    231
    RAID 10 is one of the most secure RAID levels that exists

    and I use a NAS at home; I had to give mine away to my father and built another one, and that last one, spanking new, failed catastrophically.

    I'm going to return all the HDDs and get new ones. There were 16TB of storage in the new one. How many drives do I have? The drives are of 4TB capacity, Hitachi Travelstar type
     
  48. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    In other words, it's very good for servers where a loss of data would be catastrophic xD
     
  49. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Sure. But the most secure backup is one on completely separate media, located far off-site, and rarely touched until needed.

    So the best backup today is tape.

    Or, if you **really, really, really** need to make sure your data survives for a long, long time, you need to look at the Egyptians: stone, and lots of it!
     
  50. VaultBoy!

    VaultBoy! Notebook Consultant

    Reputations:
    24
    Messages:
    220
    Likes Received:
    0
    Trophy Points:
    30
    Hahaha, how bout just storing it in Egypt, that's very far off xD My dad works in human resources at a law firm so he deals with all the backups; the firm uses Ultrium 3 tapes, which store 400-800 GB of data
     